More AP Stat help
November 28th 2009, 06:01 PM #1

1. A consumer organization estimates that 29% of new cars have a cosmetic defect, such as a scratch or a dent, when they are delivered to car dealers. The same organization believes that 7% have functional defects---something that does not work properly---and that 2% of new cars have both kinds of problems.
a) If you buy a new car, what's the probability that it has some kind of defect?
b) What's the probability it has a cosmetic defect but no functional defect?
c) If you notice a dent on a new car, what's the probability it has a functional defect?
d) Are the two kinds of defects disjoint events? Explain. No. A car can have both kinds of defects, so P(cosmetic and functional) = 0.02, which is not zero; the events are not disjoint.
e) Do you think the two kinds of defects are independent events? Explain. Approximately, yes. Independence requires P(cosmetic and functional) = P(cosmetic) × P(functional). Here 0.29 × 0.07 = 0.0203, which is very close to the stated 0.02, so the two kinds of defects are very nearly independent. (Note that "having one defect does not mean you will have the other" describes non-disjointness, not independence; the product check above is the right test.)
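A quick numerical check of the unanswered parts (my own sketch, not from the original post), using inclusion-exclusion and the definition of conditional probability:

```python
# C = cosmetic defect, F = functional defect (figures from the problem)
p_c, p_f, p_both = 0.29, 0.07, 0.02

# a) P(some kind of defect) by inclusion-exclusion
p_any = p_c + p_f - p_both          # 0.34

# b) cosmetic but no functional defect
p_c_only = p_c - p_both             # 0.27

# c) P(functional | cosmetic), since a dent is a cosmetic defect
p_f_given_c = p_both / p_c          # about 0.069

# e) independence check: compare P(C and F) with P(C) * P(F)
product = p_c * p_f                 # 0.0203, very close to 0.02
```

Since the product 0.0203 is nearly equal to the stated joint probability 0.02, the independence claim in part (e) holds to a good approximation.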
Just out: Relativistic Quantum Chemistry - The Fundamental Theory of Molecular Science
The latest news from academia, regulators, research labs and other things of interest
Posted: February 12, 2009
(Nanowerk News) John Wiley and Sons Ltd's new book "Relativistic Quantum Chemistry: The Fundamental Theory of Molecular Science" will be published on March 3, 2009. Written by two researchers in the field, the book is a reference that explains the principles and fundamentals in a self-contained, complete and consistent way. Much attention is paid to didactic value, with the chapters interconnected and building on one another.
From the contents:
Relativistic Theory of a Free Electron: Dirac's Equation
Dirac Theory of a Single Electron in a Central Potential
Many-Electron Theory I: Quantum Electrodynamics
Many-Electron Theory II: Dirac-Hartree-Fock Theory
Elimination of the Small Component
Unitary Transformation Schemes
Relativistic Density Functional Theory
Physical Observables and Molecular Properties
Interpretive Approach to Relativistic Quantum Chemistry
From beginning to end, the authors deduce all the concepts and rules, so that readers are able to understand the fundamentals and principles behind the theory. Essential reading for theoretical chemists and physicists.
What units is wave speed measured in?

Wave speed is measured in units of distance per time; the standard (SI) unit is meters per second (m/s). The speed of a wave measures how far one particular crest (or trough) moves in a given amount of time.

Related quantities and their units:
Frequency is the number of occurrences of a repeating event per unit time - for a wave, the number of cycles per second - and is measured in hertz (Hz), named after the scientist Heinrich Hertz. (A traditional unit used with rotating mechanical devices is revolutions per minute.)
Wavelength is the distance between repetitions of a shape feature such as successive crests, and is measured in meters; nanometers (10^-9 m) are common for light, while radio waves are often quoted in meters and microwaves in centimeters.
Wave number, the number of cycles per unit distance, is measured in reciprocal meters (m^-1).
Angular frequency ω (also referred to as angular speed, radial frequency, circular frequency, orbital frequency, or radian frequency) is measured in radians per second; recall that 2π radians are equivalent to 360°.
The amplitude of a sound wave is a pressure, measured in newtons per square meter (pascals); loudness is commonly reported in decibels (dB). In atomic physics the energy of an electromagnetic wave is often given in electron volts (eV).

These quantities are linked by the equation λf = v: wavelength times frequency gives speed, and the units agree (meters × 1/seconds = meters per second). For waves of constant speed, such as sound or light in a fixed medium, frequency is therefore inversely proportional to wavelength. Be certain to measure the speed and the wavelength in the same units. In dispersive media the phase velocity - the speed at which one phase of the wave travels - depends on frequency; and for a source moving through the medium, the wavelengths are compressed ahead of it (the Doppler effect).

Some reference values:
The speed of sound is the distance travelled per unit time by a sound wave propagating through an elastic medium; in dry air at room temperature it is about 340 m/s, and it changes with temperature.
The speed of light in vacuum is about 3 × 10^8 m/s (roughly 186,000 miles per second); in SI units this speed is now defined to have an exact fixed value.
The speed of a wave pulse traveling along a string or wire is determined by its tension T (a force, measured in newtons, N) and its mass per unit length μ: v = sqrt(T/μ).

Worked example: if a sound wave (speed = 340 m/s) returns to a camera 0.150 seconds after leaving the camera, the pulse covers 340 × 0.150 = 51 m out and back, so the object is 25.5 m away.

Two related notes: microphones and hydrophones convert sound in air or water into electrical signals whose amplitude and frequency can then be measured; and a computer's "clock speed", also quoted in hertz, uses the same unit to count processing cycles per second rather than wave cycles.
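The relationships above can be checked with a few lines of code (a sketch; the function names are my own):

```python
def wave_speed(frequency_hz, wavelength_m):
    """v = f * lambda: hertz times meters gives meters per second."""
    return frequency_hz * wavelength_m

def echo_distance(speed_m_s, round_trip_s):
    """One-way distance to a reflector: the pulse travels there and back."""
    return speed_m_s * round_trip_s / 2.0

# a 50 Hz wave with 6.8 m wavelength travels at 340 m/s
v = wave_speed(50.0, 6.8)

# sound at 340 m/s returning after 0.150 s -> object is 25.5 m away
d = echo_distance(340.0, 0.150)
```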
Browse by Title (titles beginning with "I")

Date of Issue | Title | Authors
2002 | Hyperbolic axial dispersion model for heat exchangers | Sahoo, R K; Roetzel, W
2010 | Hysteresis Controller and Delta Modulator - Two Viable Schemes for Current Controlled Voltage Source Inverter | Reddy, B V; Chitti Babu, B
2009 | Hysteresis Controller and Delta Modulator - Two Viable Schemes for Current Controlled Voltage Source Inverter | Reddy, B V; Chitti Babu, B
2010 | Image Compression Using Discrete Tchebichef Transform Algorithm | Senapati, R K; Pati, U C; Mahapatra, K K
2008 | Image Encryption by Novel Cryptosystem Using Matrix Transformation | Acharya, B; Patra, S K; Panda, G
2008 | Image Encryption Using Self-Invertible Key Matrix of Hill Cipher Algorithm | Panigrahy, S K; Acharya, B; Jena, D
2005 | Image segmentation Using Parallel Tabu Search Algorithm and MRF Model | Nanda, P K; Patra, D
2006 | Image Segmentation Using Thresholding and Genetic Algorithm | Kanungo, P; Nanda, P K; Samal, U C
2009 | Impact of Environmental and Experimental Parameters on FRP Composites | Ray, B C
Mar-2012 | Impact of In-band Crosstalk & Crosstalk Aware Datapath Selection in WDM/DWDM Networks | Das, S K; Swain, T R; Patra, S K
Aug-2009 | Impact of Private Foreign Capital Inflows on Economic Growth in India: An Empirical Analysis | Sethi, N; Sucharita, S
2014 | Implementation of Fuzzy-PID Controller to Liquid Level System using LabVIEW | Prusty, S B; Pati, U C; Mahapatra, K K
Mar-2013 | Implementation of ON/OFF and PID controller using TCP Protocol Based on Virtual Instrumentation | Bisoyi, A; Pati, U C
2008 | Implementation of Taguchi Design for Erosion of Fiber-Reinforced Polyester Composite Systems with SiC Filler | Patnaik, A; Satapathy, Alok; Mahapatra, S; Dash, R R
2007 | Implementation of Taguchi Method for Tribo-Performance of Hybrid Composites | Patnaik, A; Satapathy, Alok; Mahapatra, S S
Mar-2013 | Implementing clean coal technology through gasification and liquefaction – the indian perspective | Sahu, H B
Dec-2010 | Implications of bath composition and ultrasound on the structure and properties of copper thin films | Mallik, A; Ray, B C
2007 | Improved Adaptive Impulsive Noise Suppression | Sa, Pankaj K; Majhi, B; Panda, G
Dec-2010 | An Improved Artificial Immune System for Solving Loading Problems in Flexible Manufacturing Systems | Dhal, P R; Mahapatra, S S; Datta, S; Mishra, A
2009 | An Improved Differential Evolution Trained Neural Network Scheme for Nonlinear System Identification | Subudhi, B; Jena, D
29-Aug-2009 | An Improved Dynamic Response of Voltage Source Inverter using Novel Hysteresis Dead Band Current Controller | Chitti Babu, B; Reddy, B V
Hangman 1
Re: Hangman 1 _ _ _ M _ _ _ _
Re: Hangman 1 Is there a big T in there?
In mathematics, you don't understand things. You just get used to them. I have the result, but I do not yet know how to get it. All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Re: Hangman 1 _ _ _ M _ T _ _
Re: Hangman 1 Is there a very large A in there?
Re: Hangman 1 There has to be an R in there.
Re: Hangman 1 _ _ _ M _ TR_
Re: Hangman 1 The word is Geometry!
Re: Hangman 1 _ _ _ _ _ _ _ _ _ _ _ _
Re: Hangman 1 Is there an E in there?
The limit operator is just an excuse for doing something you know you can't. "It's the subject that nobody knows anything about that we can all talk about!" ― Richard Feynman "Taking a new step, uttering a new word, is what people fear most." ― Fyodor Dostoyevsky, Crime and Punishment
Re: Hangman 1 Is there a T in there?
Re: Hangman 1 T _ _ _ _ _ E _ _ _ _ _
Re: Hangman 1 An N?
Re: Hangman 1 How about an M? That is surely in there.
Re: Hangman 1 T _ _ m _ _ E _ _ _ _ N
Re: Hangman 1 You capitalized the T, is this a name?
Re: Hangman 1 Thomas Edison
Re: Hangman 1 what the...... _ _ _ _ _ _ _
Re: Hangman 1 Is there a T?
Re: Hangman 1 _ _ _ _ _ t _
Re: Hangman 1 Can I get an R?
Re: Hangman 1 No R
Difference Between Odds Ratio and Relative Risk
Odds Ratio Vs Relative Risk
When two groups are under study or observation, you can use two measures to describe the comparative likelihood of an event happening: the odds ratio and the relative risk. They are different statistical concepts, although closely related. Relative risk (RR) is the ratio of the probabilities of an event in two groups. Let's say A is the probability of the event in group 1 and B is the probability in group 2. One gets the RR by dividing A by B, i.e. A/B. This is exactly how experts come up with popular lines like 'Habitual alcoholic beverage drinkers are 2-4 times more at risk of developing liver problems than non-alcoholic beverage drinkers!' This means the risk of developing liver disease for habitual drinkers (A) is being stated relative to the same risk for non-drinkers (B). In this regard, if you belong to group B and your risk of dying is only 10%, then the risk for those in group A must be 20-40%. The other measure, the odds ratio (OR), is a term that already speaks of what it describes. Instead of using plain probabilities (as in RR), OR uses a ratio of odds. Take note, OR uses 'odds' not in its colloquial sense (i.e., chance) but in its statistical sense: the probability of an event happening divided by the probability of it not happening. A good example is the tossing of a coin. If a coin happens to land tails up 60% of the time (and so heads up 40% of the time), the odds of tails in your case are 60/40 = 1.5 (tails is 1.5 times as likely as heads). But ordinarily there's a 50 percent chance of landing on either heads or tails, so the odds are 50/50 = 1. The question being answered is how likely the event is to happen compared to it not happening.
For a fair coin, the straightforward answer is that the two outcomes are equally likely. In written formula, with A being the probability for group 1 and B the probability for group 2, the formula for the OR is [A/(1-A)]/[B/(1-B)]. So if the probability of having liver disease is 20% among habitual alcoholic beverage drinkers and 2% among non-drinkers, the OR will be [20%/(1-20%)] / [2%/(1-2%)] = 12.25, and the RR of having liver disease when drinking alcoholic beverages will be 20%/2% = 10. The RR and OR often give close results, but in some situations they have very different numerical values, especially when the baseline risk is very high to begin with. In that scenario the OR can be large while the RR stays modest.
1. The RR is much simpler to interpret and is most likely consistent with everyone's intuition. It is the risk of an event in one group relative to (in relation to) another. The formula is A/B.
2. The OR is a bit more complicated and uses the formula [A/(1-A)]/[B/(1-B)].
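The article's worked example can be reproduced in a few lines (a sketch; the function names are my own):

```python
def relative_risk(p_a, p_b):
    """RR = P(event in group A) / P(event in group B)."""
    return p_a / p_b

def odds_ratio(p_a, p_b):
    """OR = odds(A) / odds(B), where odds(p) = p / (1 - p)."""
    return (p_a / (1.0 - p_a)) / (p_b / (1.0 - p_b))

# liver-disease example: 20% risk among drinkers, 2% among non-drinkers
rr = relative_risk(0.20, 0.02)   # 10.0
odds = odds_ratio(0.20, 0.02)    # 12.25
```

Note how the OR (12.25) already exceeds the RR (10) even at these moderate risks; the gap widens as the baseline probability grows.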
Everyday Situations - Lesh Lesh, R. (1985). Processes, skills, and abilities needed to use mathematics in everyday situations. Education and Urban Society, 17(4), 439-446. Lesh criticizes traditional textbooks and teaching methods, saying that their one-step problems rarely exercise students' skills, and that they do not reflect real-life mathematical situations. Lesh believes that there should be as much--or more--emphasis in the mathematics classroom on understanding mathematical concepts and possible mathematical relationships as on accurate computation. Students should be taught to recognize situations in which their mathematical skills can be utilized. Mathematizable situations in the classroom, such as wallpapering a wall or balancing a class budget, cause students to engage in multiple mathematics processes and to learn how mathematical concepts are related to one another in a useful and meaningful way. Such experiences also require students to talk and think about mathematics with one another and with the teacher. Direct Quotes: "A goal of the project was to identify important processes, skills, and understandings that are needed by students to use mathematical ideas in everyday situations. A substantial part of the project consisted of students working together in small groups on problems that might reasonably occur in the normal lives of the students and their families: balancing a checkbook, planning a vacation within a budget, wallpapering a room, estimating distances using a map, and so on" (p. 439). "Getting a collection of isolated concepts in a youngster's head (e.g., measurement, addition, multiplication, decimals, proportional reasoning, fractions, negative numbers) does not guarantee that these ideas will be organized and related to one another in some useful way; it does not guarantee that situations will be recognized in which the ideas are useful or that they will be retrievable when they are needed" (pp. 439-440). 
"The 'back to basics' movement that is currently influencing many of the nation's schools is often uninformed and misdirected. Results of National Assessment Tests, for example, show that 'Johnny can add; computation with whole numbers is far from a lost art' (Carpenter et al., 1975: 457). In fact, although there are plenty of students who do poorly on computation problems, results from most large-scale testing programs show that there are fewer of them today than at any time in the past. Today's youngsters run into difficulty in making inferences, solving problems, evaluating the reasonableness of results, using references to 'look up' what they need to know, and so on. It is the complex skills, not the basic skills that are deteriorating. What we need is to get back to complexity, where thinking is required in addition to simply knowing some isolated fact or procedure. In realistic situations in which mathematics is used, question asking, information gathering, and trial-answer evaluating are often more important than simple answer giving. Real problems usually require more than simple one-step solutions" (p. 441). [On a problem about individuals' overall performances in sports events, in which multiple pieces of information were given, including scores on running and jumping and qualitative comments from the coaches, the students] "seemed driven to do some messy calculation with the numbers, sensible or not. In fact, most of the students were quite skillful at the arithmetic of *numbers*, but many real problems involve the arithmetic of *quantities* -- a skill that mathematics textbook authors tend to assume is covered in science, and science textbook authors tend to assume is covered in mathematics" (p. 444). "In problems that require multi-step solution procedures, it is important to *plan* what you are going to do before doing it; *monitor* what you are doing *while* you are doing it; and *evaluate* the sensibility of your results. 
These skills are seldom practiced in simple one-step textbook problems... The students who participated in our project did improve in their ability to deal with problems [like the one described above], which we believe are similar to the kinds they will meet in everyday situations, job situations, or later mathematics courses. To some extent, our students improved because we worked with them individually on some specific skills: graphing, measuring, estimating, and so forth. They probably also got better on problems that require planning, organizing, and recording simply because our problems required them to practice and use these skills. Further, they were forced to reorganize their mathematical ideas because our problems usually involved more than a single concept. The more organized character of their knowledge should benefit them greatly in future mathematics courses" (p. 445). Carpenter, T. P., T. G. Coburn, R. E. Kays, and J. W. Wilson (1975). Results and implications of the NAEP mathematics assessment: secondary school. Mathematics Teacher, 68, 453-470. Summary by Maria Ong
Haskell Code by HsColour

{- | A design study about how to design signal processors
that adapt to a common sample rate.
I simplified "Synthesizer.Inference.DesignStudy.Arrow" to this module
which uses only Applicative functors.
-}
module Synthesizer.Inference.DesignStudy.Applicative where

import Data.List (intersect)
import Control.Applicative (Applicative(..), liftA3, )

data Rates = Rates [Int] | Any
   deriving Show

-- it is a Reader monad with context processing
data Processor a = P Rates (Rates -> a)

intersectRates :: Rates -> Rates -> Rates
intersectRates Any y = y
intersectRates x Any = x
intersectRates (Rates xs) (Rates ys) = Rates $ intersect xs ys

instance Functor Processor where
   fmap f (P r f0) = P r (f . f0)

instance Applicative Processor where
   pure x = P Any (const x)
   (P r0 f0) <*> (P r1 f1) =
      P (intersectRates r0 r1) (\r -> f0 r (f1 r))

runProcessor :: Processor a -> a
runProcessor (P r f) = f r

-- test processors
processor1, processor2, processor3 :: Processor Rates
processor1 = P (Rates [44100, 48000]) id
processor2 = P Any id
processor3 = P (Rates [48000]) id

process :: Processor (Rates, Rates, Rates)
process = liftA3 (,,) processor1 processor2 processor3

test :: (Rates, Rates, Rates)
test = runProcessor process
Physics Forums - View Single Post - light geodesic path

Quote by pervect (my text in red):

You'll need to clarify a bit. It sounds like you are asking for the null geodesics (the paths that light follow) given the metric of space-time (g_ij).

not just g_ij

However, you need to specify the velocity as well as its position to calculate the geodesic.

exactly, so I said: on step k, we're in x_i, with 'velocity' dx_i (=dt, i=4)

...if I'm understanding it correctly, because while geodesic deviation will cause light beams to "curve", that curvature is equivalent to an acceleration, and it won't affect the velocity of light (which after all is always equal to 'c' locally).

that is, in tangent euclidean coordinates, but the x_i I am speaking about are distant static observer ones (there must be a simple iterative solution)
Books have been written on the "definition" of probability. We shall merely note two properties: (a) statistical independence (events must be completely unrelated), and (b) the law of large numbers. This says that if p1 is the probability of getting an event in Class 1 and we observe that N1 out of N events are in Class 1, then we have

p1 = lim (N → ∞) N1 ÷ N

A common example of direct probability in physics is that in which one has exact knowledge of a final-state wave function (or probability density). One such case is that in which we know in advance the angular distribution f(x), where x = cos θ. In this example, the number of particles scattered into an interval Δx about x1 is N f(x1) Δx, where N, the total number of scattered particles, is a very large number. Note that the function f(x) is normalized to unity:

∫ f(x) dx = 1   (integrated over −1 ≤ x ≤ 1)

As physicists, we call such a function a distribution function. Mathematicians call it a probability density function. Note that an element of probability, dp, is

dp = f(x) dx
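Not part of the original text, but the law of large numbers stated above is easy to see numerically; a minimal sketch (all names are mine):

```python
import random

def class1_fraction(p1, N, seed=0):
    """Simulate N independent events; each lands in Class 1 with probability p1.
    The observed fraction N1/N approaches p1 as N grows (law of large numbers)."""
    rng = random.Random(seed)
    N1 = sum(1 for _ in range(N) if rng.random() < p1)
    return N1 / N

# The fraction converges toward the underlying probability as N grows:
small = class1_fraction(0.3, 100)
large = class1_fraction(0.3, 100_000)
```

With a fixed seed the run is reproducible; the deviation of the large-N fraction from p1 shrinks like 1/√N.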
Symmetry and duality in normal basis multiplication - J. SYMB. COMP, 2000

, 1993
Cited by 9 (0 self)

Interest in normal bases over finite fields stems both from mathematical theory and practical applications. There has been a lot of literature dealing with various properties of normal bases (for finite fields and for Galois extensions of arbitrary fields). The advantage of using normal bases to represent finite fields was noted by Hensel in 1888. With the introduction of optimal normal bases, large finite fields, that can be used in secure and efficient implementation of several cryptosystems, have recently been realized in hardware. The present thesis studies various theoretical and practical aspects of normal bases in finite fields. We first give some characterizations of normal bases. Then by using linear algebra, we prove that F_(q^n) has a basis over F_q such that any element represented in this basis generates a normal basis if and only if some groups of coordinates are not simultaneously zero. We show how to construct an irreducible polynomial of degree 2 n with linearly i...
Magnetic field generated by current in semicircular loop at a point on axis So, the parametric representation of a point on the semi-circle would be (0, bcos(t), bsin(t)) where b is the radius of the semi-circle. The vector R is just [d, 0, 0] where d is the distance on the axis of the point and then the e is the unit vector from R-r But what's dr? And where does the switch to cylindrical coord come in?
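Not from the thread, but here is how the pieces fit together: dr is just r′(t) dt, i.e. dr = (0, −b sin t, b cos t) dt for the parametrization above, and no switch to cylindrical coordinates is strictly required. A numeric sketch of the Biot–Savart integral (function name and midpoint-rule discretization are my own):

```python
import math

def semicircle_B(I, b, d, n=20000):
    """Biot-Savart field of a semicircular arc of radius b in the y-z plane,
    r(t) = (0, b cos t, b sin t) for t in [0, pi], carrying current I,
    evaluated at the axial point R = (d, 0, 0).

    dB = (mu0 I / 4 pi) * (dl x s) / |s|^3,  with dl = r'(t) dt,  s = R - r(t).
    """
    mu0 = 4.0e-7 * math.pi
    dt = math.pi / n
    Bx = By = Bz = 0.0
    for k in range(n):
        t = (k + 0.5) * dt                      # midpoint rule
        # "What's dr?" -- it is the tangent vector r'(t) times dt:
        dlx = 0.0
        dly = -b * math.sin(t) * dt
        dlz = b * math.cos(t) * dt
        sx, sy, sz = d, -b * math.cos(t), -b * math.sin(t)   # s = R - r(t)
        s3 = (sx * sx + sy * sy + sz * sz) ** 1.5
        f = mu0 * I / (4.0 * math.pi * s3)
        Bx += f * (dly * sz - dlz * sy)
        By += f * (dlz * sx - dlx * sz)
        Bz += f * (dlx * sy - dly * sx)
    return Bx, By, Bz

# For the axial component, dl x s contributes a constant b**2 and |s| is
# constant, so Bx = mu0 I b**2 / (4 (d**2 + b**2)**1.5) in closed form.
```

Note that |R − r| = √(d² + b²) is the same for every point on the arc, which is why the axial component comes out in closed form.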
The Universe Adventure

Consequences of the Big Bang

Using the wealth of empirical information from redshift surveys, particle accelerator experiments, and detailed studies of galactic evolution and CMB anisotropy, scientists can predict what types of conditions must have been present in the early Universe. Combining these parameters with the theoretical framework provided by General Relativity, cosmologists have generated a powerful model capable of describing both the history and fate of the Universe.

Satellites, such as WMAP, provide scientists with important information about the history and fate of the Universe.

The Age of the Universe

We can estimate the age of the Universe by uncovering the ages of some of the cosmic bodies in the Universe.
• Earth: Using radiometric dating, we have discovered that the approximate age of Earth is 4.5 billion years. So, the Universe must be older than that.
• Stars: We can observe many stars at different ages. We can deduce from this that the oldest stars formed 10 to 12 billion years ago. So, the Universe must be older than that.

Cosmologists estimate the Universe to be 13.7 billion years old. How did they arrive at that?

Accurately Determining the Age of the Universe

Analyzing data from supernovae is one technique scientists use to estimate the age of the Universe.

Cosmologists get a more accurate estimate for the age of the Universe by analyzing the Universe's expansion rate. By studying the history of the expansion rate using redshift data from distant galaxies and supernovae, we can project the expansion of space back to the beginning of time: the Big Bang. Running the expansion model backwards in this way tells us that the Universe is roughly 13.7 billion years old, by our most accurate estimates.
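As a back-of-envelope illustration of "running the expansion backwards" (not the detailed model the article refers to), one can compute the Hubble time 1/H0; a sketch with an assumed round value for H0:

```python
# Rough "Hubble time" estimate: if space had expanded at a constant rate H0,
# projecting the expansion backwards gives an age of order 1/H0.
H0 = 70.0                      # km/s per Mpc (assumed round value)
km_per_Mpc = 3.0857e19         # kilometers in one megaparsec
seconds_per_year = 3.156e7

age_seconds = km_per_Mpc / H0  # 1/H0 expressed in seconds
age_years = age_seconds / seconds_per_year
print(f"1/H0 is about {age_years / 1e9:.1f} billion years")
```

The real estimate folds in how the expansion rate changed over time, which is why the quoted 13.7-billion-year figure differs slightly from the naive 1/H0.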
Interest Rate Problem

November 2nd 2009, 11:58 AM #1
Oct 2009

If I had $3000 compounded annually for 5 years, what would my interest rate be?

So far I have: A(1) = 3000( 1 + r/1 )^((1)(5))

Then I got: ln 3000 + 5 ln( 1 + r )

But, now I am unsure on how to go on from this point. Any help would be greatly appreciated! Thanks so much!!
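The post never gives a final amount, so the rate cannot actually be pinned down; but assuming some target amount A (hypothetical), the log manipulation the poster started leads to a closed form: ln A = ln 3000 + 5 ln(1 + r), hence r = (A/3000)^(1/5) − 1. A sketch:

```python
def rate(P, A, n=5):
    """Solve A = P (1 + r)**n for r; equivalently
    ln A = ln P + n ln(1 + r)  =>  r = (A / P)**(1/n) - 1."""
    return (A / P) ** (1.0 / n) - 1.0

# e.g. with an assumed final amount of $4000 (not given in the post):
r = rate(3000.0, 4000.0)
```

Plugging the resulting r back into A = P(1 + r)^5 recovers the assumed final amount.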
Is this notation valid ? December 5th 2010, 09:32 PM Is this notation valid ? Hi all, let $\mathbb{W}_t$ be a subset of $\mathbb{W}$ for any $t$ (its elements are defined by $t$). If I want to say that if I performed union of every set $\mathbb{W}_t$ with $1 \leqslant t \leqslant k$, I would obtain $\mathbb{W}$, is it valid notation to write : $\displaystyle \mathbb{W} = \bigcup_{t=1}^{k} \mathbb{W}_t$ I don't want to use the heavy $\mathbb{W} = \mathbb{W}_1 \cup \mathbb{W}_2 \cup \dots \cup \mathbb{W}_k$ notation, is the shortcut above valid and understandable ? Thanks all :) December 6th 2010, 12:09 AM Hi all, let $\mathbb{W}_t$ be a subset of $\mathbb{W}$ for any $t$ (its elements are defined by $t$). If I want to say that if I performed union of every set $\mathbb{W}_t$ with $1 \leqslant t \leqslant k$, I would obtain $\mathbb{W}$, is it valid notation to write : $\displaystyle \mathbb{W} = \bigcup_{t=1}^{k} \mathbb{W}_t$ I don't want to use the heavy $\mathbb{W} = \mathbb{W}_1 \cup \mathbb{W}_2 \cup \dots \cup \mathbb{W}_k$ notation, is the shortcut above valid and understandable ? Thanks all :) Yes. Not only is it valid and understandable, that is the standard notation for what you are trying to do. December 6th 2010, 12:14 AM Yes, this notation is fine. As a minor point, the previous discussion should make it clear that t is an integer, not a real. December 6th 2010, 12:19 AM Yes, it is an integer, I often forget to add obvious stuff like this although I know I shouldn't :( Thanks for your replies !! (Nod) December 6th 2010, 04:11 AM Rather than say "defined by t", I would say instead "t is the index". Otherwise, from a practical, everyday working point of view, this is accepted notation, and is commonly found in mathematics, even among logicians and set theorists. However, from a more strict, albeit pedantic, point of view, the notation is flawed: 'W' is being used in two different and incompatible ways. 
First it's used to symbolize a certain function, then it is used to symbolize a particular subset of the union of the range of said function. Usually, for purposes of mathematical communication, there is no harm in such usage, but from a more strictly formal point of view, the usage is flawed in the way I mentioned. December 6th 2010, 05:13 AM Sorry I am confused by the above post what function is 'W' being used to symbolise? December 6th 2010, 09:32 AM The function whose domain is some set of indices, of which 1 through k are members, and whose value is Wt for each t in the domain. December 6th 2010, 11:34 AM Quote (Bacterius): Hi all, If I want to say that if I performed union of every set $\mathbb{W}_t$ with $1 \leqslant t \leqslant k$, I would obtain $\mathbb{W}$, is it valid notation to write : is this what you are talking about? (sorry I am still a bit confused) December 6th 2010, 06:35 PM Thanks Moeblee for the detailed reply, I appreciate it :) Hmmmm, $\mathbb{W}$ would be a non-empty set of natural integers (for instance), and $\mathbb{W}_t$ represents a subset of $\mathbb{W}$, whose elements are chosen as a function of $t \in \mathbb {N}$. A crude example would be : Originally Posted by Example $1 \leq t \leq 3$, and $\mathbb{W} = \{4, 11, 15, 16, 71\}$ $\mathbb{W}_t = \{4, 11, 15\}$ if $t = 1$, $\mathbb{W}_t = \{15, 71\}$ if $t = 2$ and $\mathbb{W}_t = \{4, 16\}$ if $t = 3$. And I just wanted to know if it was correct (syntactically speaking) to write : $\displaystyle \bigcup_{t=1}^{3} \mathbb{W}_t = \mathbb{W}_1 \cup \mathbb{W}_2 \cup \mathbb{W}_3 = \mathbb{W}$ For any $k \in \mathbb{N}$ - for the example $k = 3$. December 6th 2010, 06:53 PM Yes, that's all correct. And your use of 'W' in that way is well understood among mathematicians including set theorists. My only point though is that from an extremely technical point of view, the symbol 'W' should not be used both for the function itself and the union that you mentioned.
But, again, that is only from a very technical point of view that is not of concern in ordinary, everyday mathematical communication. December 6th 2010, 06:57 PM MoeBlee I was replying to "hmmmm" :p (that was awkward, no pun intended) Thanks for clarifying though, I'll make good use of it :) December 6th 2010, 11:33 PM yeah i wasnt confused by the notation i was asking moeblee for clarification of the point about W being used in two ways because I didnt understand sorry i think I may have just made the thread a bit confusing but thanks anyway
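A quick sketch checking the worked example from this thread (the sets are exactly those given above):

```python
from functools import reduce

W = {4, 11, 15, 16, 71}
W_t = {1: {4, 11, 15}, 2: {15, 71}, 3: {4, 16}}

# The big-union notation U_{t=1}^{3} W_t is just iterated pairwise union:
union = reduce(set.union, (W_t[t] for t in range(1, 4)), set())
assert union == W
```

The same pattern works for any k: fold `set.union` over the indexed family of subsets.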
Accidental release source terms

December 18, 2013, 2:53 pm

Accidental release source terms are the mathematical equations that quantify the flow rate at which accidental releases of air pollutants into the ambient environment can occur at industrial facilities such as petroleum refineries, natural gas processing plants, petrochemical and chemical plants, oil and gas transportation pipelines, and many other industrial facilities. Accidental releases in such facilities may occur through acts of nature (e.g., floods, hurricanes or earthquakes), operational errors, faulty design or inadequate maintenance. Governmental regulations in many countries require that the probability of such accidental releases be analyzed and their quantitative impact upon the environment and human health be determined so that mitigating steps can be planned and implemented.

There are a number of mathematical calculation methods for determining the flow rate at which gaseous and liquid pollutants might be released from various types of accidents. Such calculation methods are referred to as source terms, and this article on accidental release source terms explains some of the calculation methods used for determining the mass flow rate at which gaseous pollutants may be accidentally released. Given those mass flow rates, air pollution dispersion modeling studies can then be performed.

Accidental release of a pressurized gas

When gas stored under pressure in a closed vessel is discharged to the atmosphere through a hole or other opening, the gas velocity through that opening may be choked or it may be non-choked. Choked flow (also referred to as critical flow) is a limiting or maximum condition at which the gas velocity has attained the speed of sound in the gas.
Choked flow occurs when the ratio of the absolute upstream pressure to the absolute downstream pressure is equal to or greater than:

[ ( k + 1 ) ÷ 2 ]^[ k ÷ ( k − 1 ) ]     (1)

where k is the specific heat ratio of the discharged gas (sometimes called the isentropic expansion factor and sometimes denoted as γ, the Greek letter "gamma").

For many gases, k ranges from about 1.09 to about 1.41, and therefore the expression in equation (1) ranges from 1.7 to about 1.9, which means that choked velocity usually occurs when the absolute upstream vessel pressure is at least 1.7 to 1.9 times as high as the absolute downstream pressure. In the case of a leak to the ambient atmosphere, the downstream pressure is the atmospheric pressure.

When the gas velocity is choked, the equation for the mass flow rate in SI units is:

ṁ = C A √( k ρu Pu [ 2 ÷ ( k + 1 ) ]^[ ( k + 1 ) ÷ ( k − 1 ) ] )

where the terms are defined as stated below. If the upstream gas density, ρu, is not known directly, then it is useful to eliminate it by using the ideal gas law corrected for the real gas compressibility Z, which gives the equivalent form:

ṁ = C A Pu √( ( k M ÷ ( Z R Tu ) ) [ 2 ÷ ( k + 1 ) ]^[ ( k + 1 ) ÷ ( k − 1 ) ] )

For the above equations, it is important to note that although the gas velocity reaches a maximum and becomes choked, the mass flow rate is not choked. The mass flow rate can still be increased if the upstream pressure is increased or the temperature is decreased.
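A sketch transcribing the standard choked-flow relations into code (function names are my own; terms follow the nomenclature defined in this section):

```python
import math

R = 8314.472  # universal gas law constant, Pa·m3/(kmol·K)

def is_choked(P_u, P_d, k):
    """Flow through the opening is choked when
    P_u / P_d >= ((k + 1) / 2)**(k / (k - 1))."""
    return P_u / P_d >= ((k + 1.0) / 2.0) ** (k / (k - 1.0))

def choked_mass_flow(C, A, k, M, P_u, T_u, Z=1.0):
    """Initial choked (sonic) mass flow rate through an opening, kg/s:
    mdot = C A P_u sqrt( (k M / (Z R T_u)) * (2/(k+1))**((k+1)/(k-1)) )."""
    return C * A * P_u * math.sqrt(
        (k * M / (Z * R * T_u)) * (2.0 / (k + 1.0)) ** ((k + 1.0) / (k - 1.0))
    )

# Illustrative case (assumed inputs, not from the article): air-like gas
# (k = 1.4, M = 28.97 kg/kmol) at 10 bar and 293 K leaking to atmosphere
# through a 1 cm2 hole with C = 0.72.
mdot = choked_mass_flow(0.72, 1.0e-4, 1.4, 28.97, 1.0e6, 293.0)
```

This gives only the initial instantaneous rate; as the text notes, the rate decays as the vessel empties.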
Whenever the ratio of the absolute upstream pressure to the absolute downstream pressure is less than in expression (1) above, then the gas velocity is non-choked and the equation for the mass flow rate is:

ṁ = C A √( 2 ρu Pu ( k ÷ ( k − 1 ) ) [ ( Pd ÷ Pu )^( 2 ÷ k ) − ( Pd ÷ Pu )^( ( k + 1 ) ÷ k ) ] )

or this equivalent form:

ṁ = C A Pu √( ( 2 M ÷ ( Z R Tu ) ) ( k ÷ ( k − 1 ) ) [ ( Pd ÷ Pu )^( 2 ÷ k ) − ( Pd ÷ Pu )^( ( k + 1 ) ÷ k ) ] )

where:
ṁ = mass flow rate, kilograms (kg) per second
C = discharge coefficient, dimensionless (usually about 0.72)
A = discharge hole area, square meters
k = cp ÷ cv = specific heat ratio of the gas
cp = specific heat capacity of the gas at constant pressure
cv = specific heat capacity of the gas at constant volume
ρu = real gas upstream density, kg per cubic meter = (M Pu) ÷ (Z R Tu)
Pu = absolute upstream pressure, Pa
Pd = absolute downstream pressure, Pa
M = the gas molecular mass, kg/kmol (also known as the molecular weight)
R = the universal gas law constant = 8314.472 Pa·m3 ÷ (kmol·K)
Tu = absolute upstream gas temperature, K
Z = the gas compressibility factor at Pu and Tu, dimensionless

The above equations calculate the initial instantaneous mass flow rate for the pressure and temperature existing in the source vessel when a release first occurs. The initial instantaneous flow rate from a leak in a pressurized gas system or vessel is much higher than the average flow rate during the overall release period because the pressure and flow rate decrease with time as the system or vessel empties. Calculating the flow rate versus time since the initiation of the leak is much more complicated, but more accurate. A comparison between two methods for performing such calculations is available online.

The technical literature can be confusing because many authors do not explain whether they are using the universal gas law constant R, which applies to any ideal gas, or the specific gas constant Rs, which only applies to a specific individual gas. The relationship between the two is Rs = R ÷ M.

• The above equations are for a real gas.
• For an ideal gas, Z = 1 and ρ is the ideal gas density.
• kmol = 1000 mol

Evaporation of a non-boiling liquid pool

Three different methods of calculating the rate of evaporation from a non-boiling liquid pool are presented in this section. The results obtained by the three methods are somewhat different.

The U.S. EPA method

The following equations are for predicting the rate at which liquid evaporates from the surface of a pool of liquid which is at or near the ambient temperature. The equations were developed by the U.S. Environmental Protection Agency (U.S. EPA) using units which were a mixture of metric usage and United States usage. The non-metric units have been converted to metric units for this presentation.

E = ( 0.1268 ÷ T ) u^0.78 M^0.667 A P

where:
E = evaporation rate, kg/min
u = windspeed just above the pool liquid surface, m/s
M = pool liquid molecular mass, dimensionless
A = surface area of the pool liquid, m2
P = vapor pressure of the pool liquid at the pool temperature, kPa
T = pool liquid absolute temperature, K

The U.S. EPA also defined the pool depth as 0.01 meter (m), i.e., one centimeter, so that the surface area of the pool liquid could be calculated as:

A = ( pool volume, in m3 ) ÷ ( 0.01 )

The U.S. Air Force method

The following equations are for predicting the rate at which liquid evaporates from the surface of a pool of liquid which is at or near the ambient temperature. The equations were derived from field tests performed by the U.S. Air Force with pools of liquid hydrazine.
E = ( 4.161 × 10^−5 ) u^0.75 TF M ( PS ÷ PH )

where:
E = evaporation flux, (kg/min)/m2 of pool surface
u = windspeed just above the liquid surface, m/s
TA = absolute ambient temperature, K
TF = pool liquid temperature correction factor, dimensionless (see equations (a) and (b) below)
TP = pool liquid temperature, °C
M = pool liquid molecular weight, g/mol
PS = pool liquid vapor pressure at ambient temperature, mmHg
PH = hydrazine vapor pressure at ambient temperature, mmHg (see equation (c) below)

(a) If TP = 0 °C or less, then TF = 1.0
(b) If TP > 0 °C, then TF = 1.0 + 0.0043 TP^2
(c) PH = 760 exp[ 65.3319 − ( 7245.2 ÷ TA ) − ( 8.22 ln TA ) + ( 6.1557 × 10^−3 ) TA ]

Note: The function ln x is the natural logarithm (base e) of x, and the function exp x is e (approximately 2.7183) raised to the power of x.

Stiver and Mackay's method

The following equations are for predicting the rate at which liquid evaporates from the surface of a pool of liquid which is at or near the ambient temperature. The equations were developed by Warren Stiver and Dennis Mackay of the Chemical Engineering Department at the University of Toronto.

E = k P M ÷ ( R TA )

where:
E = evaporation flux, (kg/s)/m2 of pool surface
k = mass transfer coefficient, m/s (which is taken to be 0.002 u)
TA = absolute ambient temperature, K
M = pool liquid molecular weight, g/mol
P = pool liquid vapor pressure at ambient temperature, Pa
R = the universal gas law constant of 8314.472 Pa·m3 ÷ (kmol·K)
u = windspeed just above the liquid surface, m/s

Evaporation of boiling, cold liquid pool

The following equation is for predicting the rate at which liquid evaporates from the surface of a pool of cold liquid (i.e., at a liquid temperature of about 0 °C or less).
E = ( 0.0001 ) ( 7.7026 − 0.0288 B ) ( M ) e^( −0.0077 B − 0.1376 )

where:
E = evaporation flux, (kg/min)/m2 of pool surface
B = pool liquid boiling point at atmospheric pressure, °C
M = pool liquid molecular weight, g/mol
e = 2.7183, the base of the natural logarithm

Adiabatic flash of liquified gas release

Liquified gases such as ammonia or chlorine are often stored in cylinders or vessels at ambient temperatures and pressures well above atmospheric pressure. When such a liquified gas is released into the ambient atmosphere, the resultant reduction of pressure causes some of the liquified gas to vaporize immediately. This is commonly referred to as "adiabatic flashing" and the following equation, derived from a simple heat balance, is used to predict how much of the liquified gas is vaporized:

X = 100 ( HuL − HdL ) ÷ ( HdV − HdL )

where:
X = weight percent vaporized
HuL = upstream liquid enthalpy at upstream temperature and pressure, J/kg
HdV = flashed vapor enthalpy at downstream pressure and corresponding saturation temperature, J/kg
HdL = residual liquid enthalpy at downstream pressure and corresponding saturation temperature, J/kg

If the enthalpy data required for the above equation is unavailable, then the following equation may be used:

X = 100 × cp ( Tu − Td ) ÷ Hv

where:
X = weight percent vaporized
cp = liquid specific heat at upstream temperature and pressure, J/(kg · °C)
Tu = upstream liquid temperature, °C
Td = liquid saturation temperature corresponding to the downstream pressure, °C
Hv = liquid heat of vaporization at downstream pressure and corresponding saturation temperature, J/kg

Note: The words "upstream" and "downstream" refer to before and after the liquid passes through the release opening.

References

• Editors: D.W. Green and R.H. Perry (1984), Perry's Chemical Engineers' Handbook, 6th Edition, McGraw Hill, ISBN 0-07-049479-7.
• Handbook of Chemical Hazard Analysis Procedures (Appendix B), Federal Emergency Management Agency, U.S. Dept. of Transportation, and U.S.
Environmental Protection Agency, 1989. Available online at Handbook of Chemical Hazard Analysis, Appendix B. Scroll down to page 391 of 520 PDF pages. This handbook also provides the references below: □ J.H. Clewell (1983), A Simple Method For Estimating the Source Strength Of Spills Of Toxic Liquids, Energy Systems Laboratory, ESL-TR-83-03. □ G. Ille and C. Springer (1978), The Evaporation And Dispersion Of Hydrazine Propellants From Ground Spill, Environmental Engineering Development Office, CEEDO 712-78-30. □ J.P. Kahler, R.C. Curry and R.A. Kandler (1980), Calculating Toxic Corridors, Air Force Weather Service, AWS TR-80/003. • Risk Management Program Guidance For Offsite Consequence Analysis, U.S. EPA publication EPA-550-B-99-009, April 1999. Available online at Guidance for Offsite Consequence Analysis (Appendix D: Equation D-1 in Section D.2.3 and Equation D-7 in Section D.6) • Methods For The Calculation Of Physical Effects Due To Releases Of Hazardous Substances (Liquids and Gases), CPR 14E (Yellow Book), Chapter 2, pp. 2.67 - 2.68, The Netherlands Organization Of Applied Scientific Research (TNO), 2005. Available for free registration and download here • Calculating Accidental Release Rates From Pressurized Gas Systems From the www.air-dispersion.com website. • W. Stiver and D. Mackay (1983), A Spill Hazard Ranking System For Chemicals, Proceedings of the Technical Seminar on Chemical Spills, Toronto, Canada, pp. 261-266 . • W. Stiver and D. Mackay (1983), Evaporation Rates of Chemical Spills, Proceedings of the Technical Seminar on Chemical Spills, Toronto, Canada, pp. 1-8. Beychok, M. (2013). Accidental release source terms. Retrieved from http://www.eoearth.org/view/article/51cbf2057896bb431f6a78d3
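As a quick numeric cross-check of the non-boiling-pool evaporation section above, two of the three methods transcribed directly (function names and the sample inputs are mine; each function returns a result in its own method's units, so the two numbers are not directly comparable):

```python
def evap_epa(u, M, A, P_kPa, T):
    """U.S. EPA method: E = (0.1268 / T) u**0.78 M**0.667 A P, in kg/min."""
    return (0.1268 / T) * u**0.78 * M**0.667 * A * P_kPa

def evap_stiver_mackay(u, M, P_Pa, T_A):
    """Stiver and Mackay: E = k P M / (R T_A) with k = 0.002 u, in (kg/s)/m2."""
    R = 8314.472  # Pa·m3/(kmol·K)
    return (0.002 * u) * P_Pa * M / (R * T_A)

# Illustrative inputs (assumed, not from the article): u = 3 m/s, M = 92,
# a 10 m2 pool, vapor pressure 3.8 kPa (= 3800 Pa), ambient 298 K.
E_epa = evap_epa(3.0, 92.0, 10.0, 3.8, 298.0)
E_sm = evap_stiver_mackay(3.0, 92.0, 3800.0, 298.0)
```

To compare the methods fairly, the EPA whole-pool rate would first have to be divided by the pool area and converted from kg/min to kg/s.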
Re: Where can I find numerical codes for all LATEX Mathematical symbols
From: Will Robertson <wspr81@gmail.com>
Date: Fri, 18 Feb 2011 09:40:03 +1030
Message-Id: <9CD376DD-76D1-47EB-8339-DD34A988A5AC@gmail.com>
Cc: saf sied <saf_itpro@yahoo.com>, "www-math@w3.org" <www-math@w3.org>
To: David Carlisle <davidc@nag.co.uk>

(Sent quickly. Please excuse brevity.)

On 18/02/2011, at 8:58 AM, David Carlisle <davidc@nag.co.uk> wrote:
> There is some information on latex mappings in the mathml and entity spec sources in the file unicode.xml which may be downloaded from
> http://www.w3.org/2003/entities/2007xml/
> but probably the latex unicode-math package has the most up to date and maintained information:
> http://www.tex.ac.uk/tex-archive/macros/latex/contrib/unicode-math/unimath-symbols.pdf

Note that these are almost entirely defined through the work of the STIX project, so they should be comprehensive, but note:

1. The table is mostly a superset of what's available in LaTeX + amsmath + others, but if a font has symbols that didn't make it into unicode yet (e.g., the mnsymbol package) those symbols cannot be listed.

2. Some different font packages in LaTeX will give different names to the same symbol.

3. You can only use all of the symbols defined above if you use unicode-math, which only runs on LaTeX with the XeTeX or LuaTeX engine; hence it's a little experimental at this stage.

Received on Thursday, 17 February 2011 23:07:58 GMT
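For context, a minimal preamble using the unicode-math package mentioned in this thread looks roughly like this (the font choice is an illustrative assumption, not from the email):

```latex
% Compile with xelatex or lualatex, not plain pdflatex.
\documentclass{article}
\usepackage{unicode-math}   % maps Unicode math codepoints to symbols
\setmathfont{XITS Math}     % any OpenType math font (illustrative choice)
\begin{document}
$ \forall x \in \mathbb{R} : \sin^2 x + \cos^2 x = 1 $
\end{document}
```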
Math Forum - Ask Dr. Math Archives: High School Euclidean/Plane Geometry

See also the Dr. Math FAQ: geometric formulas.

Browse High School Euclidean/Plane Geometry. Stars indicate particularly interesting answers or good places to begin browsing.

Selected answers to common questions: Classifying quadrilaterals. Pythagorean theorem proofs.

• A student asks for help with geometry proofs, and Dr. Math suggests two books.
• Geometry Project - Problem: Balls bounce off of solid objects. Is there a pattern to the bounce? Can you predict the bounce?
• Can you prove Bretschneider's Theorem for the area of a quadrilateral? Also, can you show that any quadrilateral with supplementary opposing angles can be inscribed in a circle?
• We just started learning proofs, and I don't understand how to figure out the ordering. Can you explain?
• How can I find the angle and the point on a mirror to shine a light at in order to illuminate an object?
• Straight lines are parallel if they are equally distant and never intersect. Can the graphs of quadratic or cubic equations be considered parallel if they are equally distant and never intersect?
• A man buys a roll of carpet 9 ft. wide by 12 ft. long to fit a 10 ft. by 10 ft. room. When the roll of carpet is unrolled, a hole is discovered in the middle of the carpet...
• You have to carpet a 9x12 room, but when you go to the store they only have a 10x10 carpet and a 1x8 piece of carpet...
• Find the vertex of a catenary curve.
• Can a triangle have a unique centre of gravity?
• How do I know which theorem is the correct one to use next when I'm trying to prove something?
• One corner of a rectangle is on the center of a circle. The radius is larger than the small side of the rectangle but smaller than the large side. What is the area of their intersection?
• I need feed points for a circle of radius 1 km with center at a given latitude and longitude. Eight or more points should be enough for the program.
• Technically speaking, can the term 'perimeter' apply to a circle in a mathematical context?
• How can I design an algorithm to classify shapes based on a relatively small set of (x,y) coordinates that describe the boundary of a closed object?
• The hockey rink is a rectangle, 120 ft. by 60 ft. The scraper cleans a 4-ft.-wide strip... on which trip will it have cleaned half the area of the rink?
• A teacher wonders, "Since when do collapsible compasses copy lengths?" Suggesting that Euclid may have posited this basic ability early among his propositions -- for the purposes of simplifying more interesting constructions -- Doctor Peterson then goes on to discuss the pedagogical pros and cons of compass quality, reliance, and over-reliance.
• What is a collapsible compass, and when would you use one?
• A student struggles with a proof that starts with a triangle and a point in its interior. Doctor Floor offers a boost by drawing a diagram, then invoking the triangle inequality twice.
• Why are angles called complementary and supplementary?
• Can you show me a proof, with full justification, of the following theorem? Two circles of the same radius touch at A ...
• Exploring connected sets with examples in Euclidean space.
• Can you explain the connection between the circumference of a circle and the surface area of a sphere?
• Is it possible to construct a one degree angle using only a straightedge and compass?
• Write the converse, inverse, and contrapositive of each conditional and determine whether they are true or false; if false, give a counterexample.
How can I prove the converse of the Parallel Lines Theorem: If a transversal intersects two lines so that the alternate angles are equal, then the lines are parallel? QBasic measures angles clockwise from north, while mathematicians measure them counterclockwise from east. How can I convert QBasic's angle measures to mathematical ones? Also, do negative angles How can we find an equation for the number of unit squares that are cut by a line going from corner to corner on a rectangle? How many regions are formed by n straight lines if no three meet in a single point and no two are parallel? What is the maximum area of an 8"x13" sheet of paper that you can cover by using seven 3"x5" standard index cards? A cow is tied to a 100 ft. rope attached to a pole in the center of a circle of radius 50 ft. This circle has a ten foot opening, out of which the cow can walk to graze. What's the grazing area ? Why is the area of a unit square the product of the two sides? What is wrong with D' = sqrt(X^2 - X'2)? My math teacher says the sum of the exterior angles of a triangle is 900 degrees (360*3 - 180). I think that the sum is 360 degrees. Who is correct? What is the formal definition of 'opposite sides' of a polygon? Does a regular pentagon have opposite sides? Does a concave polygon have opposite sides? How can we define it consistent with our Can you give a precise definition of 'oval'? Can you give me definitions for: Pythagorean Triplets, Principle of Duality, Euclid's Elements, Cycloid, Fermat's Last Theorem? What is the 'official' definition of 'edge'? Specifically, is an edge restricted to the intersection of two non-coplanar faces or do two- dimensional shapes have edges? I'm also curious about a definition of 'face'. How many faces does a two-dimensional shape have? How do you find the degree measure for an angle from pi/60 rad? What are complements and supplements? Page: [<prev] 1 2 3 4 5 6 7 8 9 [next>]
{"url":"http://mathforum.org/library/drmath/sets/high_2d.html?s_keyid=39480505&f_keyid=39480508&start_at=41&num_to_see=40","timestamp":"2014-04-16T05:29:34Z","content_type":null,"content_length":"24666","record_id":"<urn:uuid:cf916519-f569-476e-8445-8397a4bce831>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00002-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Forum: Problem of the Week

On the brick "Women's Walkway" from the intersection of 33rd and Chestnut to the intersection of 34th and Walnut in Philadelphia, I became fascinated with the pattern of the bricks. Here is an area that I'm filling with the pattern: [figure of the brick pattern not shown]

I thought it was economical how some bricks were cut in half and some of the halves were also cut in half to neatly fit to the edges. If the top faces of the uncut bricks are 2 units by 1 unit,

1. how many bricks would it take to cover the full area of the pattern shown above?
2. what is the area?

[Note: If you are interested, there is a web page about the Women's Walkway project.]
{"url":"http://mathforum.org/pd/process/walkway.html","timestamp":"2014-04-19T08:23:16Z","content_type":null,"content_length":"2673","record_id":"<urn:uuid:9116445e-b545-465d-af25-24d004bab99c>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00597-ip-10-147-4-33.ec2.internal.warc.gz"}
Cryptology ePrint Archive: Report 2006/020

Scrambling Adversarial Errors Using Few Random Bits, Optimal Information Reconciliation, and Better Private Codes

Adam Smith

Abstract: When communicating over a noisy channel, it is typically much easier to deal with random, independent errors with a known distribution than with adversarial errors. This paper looks at how one can use schemes designed for random errors in an adversarial context, at the cost of relatively few additional random bits and without using unproven computational assumptions. The basic approach is to permute the positions of a bit string using a permutation drawn from a $t$-wise independent family, where $t=o(n)$. This leads to two new results:

1. We construct *computationally efficient* information reconciliation protocols correcting $pn$ adversarial binary Hamming errors with optimal communication and entropy loss $n(h(p)+o(1))$ bits, where $n$ is the length of the strings and $h()$ is the binary entropy function. Information reconciliation protocols are important tools for dealing with noisy secrets in cryptography; they are also used to synchronize remote copies of large files.

2. We improve the randomness complexity (key length) of efficiently decodable capacity-approaching "private codes" from $\Theta(n\log n)$ to $n+o(n)$. We also present a simplified proof of an existential result on private codes due to Langberg (FOCS '04).

Category / Keywords: cryptographic protocols / Information reconciliation, fuzzy cryptography, error-correcting codes, private codes, information theory, derandomization, combinatorial cryptography
Date: received 17 Jan 2006, last revised 23 Jan 2006
Contact author: adam smith at weizmann ac il
Available format(s): Postscript (PS) | Compressed Postscript (PS.GZ) | PDF | BibTeX Citation
Version: 20060123:135618 (All versions of this report)
Discussion forum: Show discussion | Start new discussion
[ Cryptology ePrint archive ]
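The core trick described in the abstract — permuting bit positions so that an adversary's errors land in random-looking places — can be sketched in a few lines. This toy version (all variable names invented) uses one fully random shared permutation rather than the $t$-wise independent family the paper actually constructs; it only illustrates why the receiver, after undoing the permutation, sees a contiguous burst of flips as scattered positions.

```python
import random

n = 64
rng = random.Random(2006)
msg = [rng.randrange(2) for _ in range(n)]

# Shared secret permutation of bit positions (the paper derandomizes this
# to a t-wise independent family; here it is fully random for simplicity).
perm = list(range(n))
rng.shuffle(perm)

sent = [msg[perm[i]] for i in range(n)]      # sender scrambles positions

recv = sent[:]                               # adversary flips a burst of 8 bits
for i in range(8):
    recv[i] ^= 1

inv = [0] * n                                # receiver inverts the permutation
for i, p in enumerate(perm):
    inv[p] = i
restored = [recv[inv[j]] for j in range(n)]

errors = {j for j in range(n) if restored[j] != msg[j]}
print(sorted(errors))  # the burst reappears at the scattered positions perm[0..7]
```

Since the adversary does not know the permutation, the 8 flipped positions end up at `perm[0]..perm[7]`, which look like 8 uniformly spread errors — the setting in which codes for random errors apply.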
{"url":"http://eprint.iacr.org/2006/020/20060123:135618","timestamp":"2014-04-16T19:13:25Z","content_type":null,"content_length":"3430","record_id":"<urn:uuid:d554ab1b-dd94-4d9d-80f7-deb360428a86>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00231-ip-10-147-4-33.ec2.internal.warc.gz"}
Csáki, Endre - Alfréd Rényi Mathematical Institute of the Hungarian Academy of Sciences
• Strong limit theorems for a simple random walk on the 2-dimensional comb
• Heavy points of a d-dimensional simple random walk
• Strong approximations of three-dimensional Wiener sausages - Endre Csáki and Yueyun Hu
• Almost sure limit theorems for the maximum of stationary Gaussian sequences
• Frequently visited sets for random walks - Endre Csáki, Antónia Földes, Pál Révész, Jay Rosen, Zhan Shi
• Publ. Math. Debrecen Manuscript (June 15, 2009)
• On Vervaat and Vervaat-error type processes for partial sums and renewals
• On the behavior of random walk around heavy points - Endre Csáki
• Random walk local time approximated by a Wiener sheet combined with an independent Brownian motion
• ON THE NUMBER OF CUTPOINTS OF THE TRANSIENT NEAREST NEIGHBOR
• On the increments of the principal value of Brownian local time
• On the local time of the asymmetric Bernoulli walk - Dedicated to Professor Sándor Csörgő on his sixtieth birthday
• Publ. Math. Debrecen Manuscript (June 15, 2009)
• Strong approximations of three-dimensional Wiener sausages - Endre Csáki and Yueyun Hu
• X0 = 0, X1, X2, ..., Ei := P(Xn+1 = i + 1 | Xn = i) = 1 - P(Xn+1 = i - 1 | Xn = i)
• Lengths and heights of random walk - Endre Csáki and Yueyun Hu
• A joint functional law for the Wiener process and principal value
• ISTVÁN VINCZE (1912-1999) AND HIS CONTRIBUTION TO
• Maximal Local Time of a d-dimensional Simple Random Walk on Subsets.
• Lengths and heights of random walk - Endre Csáki and Yueyun Hu
• Strong limit theorems for a simple random walk on the 2-dimensional comb
• Large void zones and occupation times for coalescing random walks
• Strong approximations of three-dimensional Wiener sausages - Endre Csáki and Yueyun Hu
• On Vervaat and Vervaat-error type processes for partial sums and renewals
• Pointwise and uniform asymptotics of the Vervaat error process
• Boundary Crossings and the Distribution Function of the
• On a class of additive functionals of two-dimensional Brownian motion and random walk
• On the ranked excursion heights of a Kiefer process
• On the local time of random walk on the 2-dimensional comb
• Joint asymptotic behavior of local and occupation times of random walk in higher dimension
• On the ranked excursion heights of a Kiefer process - Endre Csáki and Yueyun Hu
• Strong approximations of additive functionals of a planar Brownian motion
• Frequently visited sets for random walks - Endre Csáki
• Long excursions of a random walk - Endre Csáki, Pál Révész and Zhan Shi
• On the joint asymptotic behaviours of ranked heights of Brownian excursions
• On the increments of the principal value of Brownian local time
• Increment sizes of the principal value of Brownian local time
• A universal result in almost sure central limit theory - István Berkes, Endre Csáki
• Maximal Local Time of a d-dimensional Simple Random Walk on Subsets.
• Almost sure limit theorems for the maximum of stationary Gaussian sequences
• ISTVÁN VINCZE (1912-1999) AND HIS CONTRIBUTION TO
• On the increments of the principal value of Brownian local time
• Large void zones and occupation times for coalescing random walks
• Fields Institute Communications Volume 00, 0000
• A joint functional law for the Wiener process and principal value
• Almost Sure Limit Theorems for Sums and Maxima from the Domain of Geometric Partial Attraction of Semistable Laws
• Fractional Brownian motions as "higher-order" fractional derivatives of Brownian local times
• A strong invariance principle for two-dimensional random walk in random scenery
• Fields Institute Communications Volume 00, 0000
• Favourite sites, favourite values and jumping sizes for random walk and Brownian motion
• Strong approximations of additive functionals of a planar Brownian motion
• Asymptotic properties of ranked heights in Brownian excursions
• Frequently visited sets for random walks - Endre Csáki, Antónia Földes, Pál Révész, Jay Rosen, Zhan Shi
• Heavy points of a d-dimensional simple random walk
• Lengths and heights of random walk - Endre Csáki and Yueyun Hu
• Strong limit theorems for anisotropic random walks on Z2 - Endre Csáki
{"url":"http://www.osti.gov/eprints/topicpages/documents/starturl/01/483.html","timestamp":"2014-04-20T12:37:28Z","content_type":null,"content_length":"16605","record_id":"<urn:uuid:7b7dc9ea-ea82-4378-ba46-16cf81217914>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00565-ip-10-147-4-33.ec2.internal.warc.gz"}
Linear stability, transient growth and the role of viscosity stratification in compressible Couette flow

Malik, M and Dey, J and Alam, Meheboob (2008) Linear stability, transient growth and the role of viscosity stratification in compressible Couette flow. In: Int. Conf. Aerospace Science and Technology, NAL, Bangalore.

Full text not available from this repository.

Linear stability and the nonmodal transient energy growth in compressible plane Couette flow are investigated for two prototype mean flows: (a) the uniform shear flow with constant viscosity, and (b) the nonuniform shear flow with stratified viscosity. Both mean flows are linearly unstable for a range of supersonic Mach numbers (M). For a given M, the critical Reynolds number (Re) is significantly smaller for the uniform shear flow than its nonuniform shear counterpart; for a given Re, the dominant instability (over all streamwise wave numbers, α) of each mean flow belongs to different modes for a range of supersonic M. An analysis of perturbation energy reveals that the instability is primarily caused by an excess transfer of energy from mean flow to perturbations. It is shown that the energy transfer from mean flow occurs close to the moving top wall for "mode I" instability, whereas it occurs in the bulk of the flow domain for "mode II." For the nonmodal transient growth analysis, it is shown that the maximum temporal amplification of perturbation energy, G_max, and the corresponding time scale are significantly larger for the uniform shear case compared to those for its nonuniform counterpart. For α = 0, the linear stability operator can be partitioned into L ~ L̄ + Re^(-2) L_p, and the Re-dependent operator L_p is shown to have a negligibly small contribution to perturbation energy, which is responsible for the validity of the well-known quadratic-scaling law in uniform shear flow: G(t/Re) ~ Re^2. In contrast, the dominance of L_p is responsible for the invalidity of this scaling law in nonuniform shear flow. An inviscid reduced model, based on an Ellingsen-Palm-type solution, has been shown to capture all salient features of transient energy growth of the full viscous problem. For both modal and nonmodal instability, it is shown that the viscosity stratification of the underlying mean flow would lead to a delayed transition in compressible Couette flow.
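The "transient energy growth" measured in this abstract — perturbation energy that rises for a while even though every eigenvalue decays — is a generic consequence of nonnormality of the linearized operator. The sketch below is not the authors' stability operator: it is a standard 2×2 toy example (matrix entries chosen only for illustration) showing the amplification G(t) = ||e^{tA}||² climbing above 1 before the eventual modal decay.

```python
import math

# Toy nonnormal, stable operator (both eigenvalues negative):
#   A = [[-1, 10],
#        [ 0, -2]]
# A is upper triangular, so e^{tA} has the closed form used below.

def expm(t):
    a, d, b = -1.0, -2.0, 10.0
    return [[math.exp(a * t), b * (math.exp(a * t) - math.exp(d * t)) / (a - d)],
            [0.0, math.exp(d * t)]]

def spectral_norm(m):
    # Largest singular value of a 2x2 matrix from the Frobenius norm and determinant:
    # s1^2 + s2^2 = ||M||_F^2 and s1^2 * s2^2 = det(M)^2.
    f2 = sum(x * x for row in m for x in row)
    det = m[0][0] * m[1][1] - m[0][1] * m[1][0]
    return math.sqrt((f2 + math.sqrt(max(f2 * f2 - 4 * det * det, 0.0))) / 2)

ts = [i * 0.01 for i in range(801)]
G = [spectral_norm(expm(t)) ** 2 for t in ts]   # energy amplification G(t)
gmax = max(G)
print(round(gmax, 2), round(G[-1], 6))  # transient peak, then decay to ~0
```

G(0) = 1 by definition, G(t) transiently exceeds 1 because the eigenvectors of A are nearly parallel, and G(t) → 0 at large t as both modes decay — the same qualitative picture behind G_max in the abstract.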
{"url":"http://eprints.iisc.ernet.in/40973/","timestamp":"2014-04-17T13:26:25Z","content_type":null,"content_length":"22565","record_id":"<urn:uuid:59f98cfb-4849-4ab8-b3b5-39f039c2d0c9>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00184-ip-10-147-4-33.ec2.internal.warc.gz"}
Shannon and Maxine work in the same building and leave work

alexpavlos wrote: Shannon and Maxine work in the same building and leave work at the same time. Shannon lives due north of work and Maxine lives due south. The distance between Maxine's house and Shannon's house is 60 miles. If they both drive home at the rate of 2R miles per hour, Maxine arrives home 40 minutes after Shannon. If Maxine rides her bike home at the rate of R miles per hour and Shannon still drives at a rate of 2R miles per hour, Shannon arrives home 2 hours before Maxine. How far does Maxine live from work?

A. 20
B. 34
C. 38
D. 40
E. 46

Here is the ratios approach to the problem:

Shannon drives at the speed of 2R in both cases, so she takes the same time. In the first case Maxine reaches home 40 mins after Shannon. In the second case, Maxine reaches 2 hrs after Shannon. Why did Maxine take 1 hr 20 mins extra in the second case? Because she drove at half the speed.

Speed 1 : Speed 2 = 2 : 1
Time 1 : Time 2 = 1 : 2 (since distance stays the same)

The difference between Time 1 and Time 2 is 1 hr 20 mins = 80 mins. So Time 1 must be 1 hr 20 mins, i.e. the time taken by Maxine when she drives at speed 2R. Time taken by Shannon must be 1 hr 20 mins - 40 mins = 40 mins (because she reaches 40 mins early).

When their speeds were the same in the first case:
Time taken by Maxine : Time taken by Shannon = 80 mins : 40 mins = 2 : 1
Distance traveled by Maxine : Distance traveled by Shannon = 2 : 1

Total distance is 60 miles, so Maxine lives 40 miles away and Shannon lives 20 miles away from work.

You mentioned the time difference between Time 1 and Time 2 is 80 mins... How did you take the value for Time 1 as 80 mins? Please explain the basis.
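The ratio argument above can be checked mechanically. This sketch (variable names invented) encodes the two trip conditions with exact fractions and tests each answer choice: d is Maxine's distance, 60 − d is Shannon's, and R is the bike speed forced by the first condition; only d = 40 also satisfies the second condition.

```python
from fractions import Fraction as F

choices = [20, 34, 38, 40, 46]
valid = []
for d in choices:                      # d = Maxine's distance in miles
    dS = 60 - d                        # Shannon's distance
    if d <= dS:
        continue                       # Maxine arrives later, so d > dS
    # Trip 1 (both at 2R): d/(2R) - dS/(2R) = 2/3 hour  =>  R = 3(d - dS)/4
    R = F(3 * (d - dS), 4)
    # Trip 2 (Maxine at R, Shannon at 2R): d/R - dS/(2R) = 2 hours
    if F(d) / R - F(dS) / (2 * R) == 2:
        valid.append(d)
print(valid)  # [40]
```

With d = 40 the first condition gives R = 15 mph, and indeed 40/15 − 20/30 = 8/3 − 2/3 = 2 hours, matching the second condition; every other choice fails it.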
{"url":"http://gmatclub.com/forum/shannon-and-maxine-work-in-the-same-building-and-leave-work-132625.html?fl=similar","timestamp":"2014-04-16T04:24:55Z","content_type":null,"content_length":"185341","record_id":"<urn:uuid:756ae90d-e50f-4ddd-9b59-606647bd40bc>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00138-ip-10-147-4-33.ec2.internal.warc.gz"}
Proving hyperbolic identity

Calculus Beyond Question

Hi thomas49th!

Originally Posted by thomas49th
Errr sorry I don't follow :S. I've only just started hyperbolics today and haven't used any imaginary numbers with them yet? Thanks :)

In that case, all you need to know is that "hyperbolic" trig functions cosh sinh tanh sech coth and cosech work the same as ordinary trig functions (for example, sinh(2x) = 2 sinh x cosh x), but occasionally you get a + instead of a minus (or vice versa) … I think only when you have … But, to be on the safe side, use …'s method!
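The claim in the thread — that hyperbolic functions mirror the circular ones with an occasional sign flip — is easy to spot-check numerically. The sketch below verifies the identity quoted above, sinh(2x) = 2 sinh x cosh x (same form as the circular version), and the fundamental identity cosh²x − sinh²x = 1, where the circular analogue cos²x + sin²x = 1 has a + instead.

```python
import math

for x in [-2.0, -0.5, 0.0, 0.7, 3.0]:
    # Double-angle identity: identical in form to sin(2x) = 2 sin x cos x.
    assert math.isclose(math.sinh(2 * x), 2 * math.sinh(x) * math.cosh(x),
                        rel_tol=1e-12, abs_tol=1e-12)
    # Fundamental identity: note the minus sign, vs. cos^2 x + sin^2 x = 1.
    assert math.isclose(math.cosh(x) ** 2 - math.sinh(x) ** 2, 1.0,
                        rel_tol=1e-9)
print("identities hold at the sample points")
```

A numeric check like this is not a proof, of course, but it is a quick way to see which signs flip when moving between circular and hyperbolic identities.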
{"url":"http://www.allquests.com/question/4070040/Proving-hyperbolic-identity.html","timestamp":"2014-04-20T23:27:03Z","content_type":null,"content_length":"28150","record_id":"<urn:uuid:6a3b94ec-4589-4315-8028-6992da738467>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00214-ip-10-147-4-33.ec2.internal.warc.gz"}
Guttenberg, NJ Algebra 1 Tutor Find a Guttenberg, NJ Algebra 1 Tutor ...It is often the course where students become acquainted with symbolic manipulations of quantities. While it can be confusing at first (eg "how can a letter be a number?"), it can also broaden your intellectual scope. It's a step out of the morass of arithmetic into a more obviously structured way of thinking. 25 Subjects: including algebra 1, chemistry, calculus, physics ...I also recently taught a physics course at a local university. My student evaluations said that I was an excellent one-on-one teacher. Based on teaching in the classroom and one-on-one tutoring session, I think that anyone can learn math and science. 10 Subjects: including algebra 1, writing, physics, algebra 2 ...I'm sharing my test scores here. Hopefully they give a sense of my mastery of standardized testing strategy. Of course, the numbers don't tell the whole story, and I think what I really bring to the table is the patience and experience to bring students towards mastery themselves. 36 Subjects: including algebra 1, English, chemistry, calculus ...I have several experiences working with young children and tutoring math to students who are struggling with the subject both inside and outside of the classroom. I have tutored elementary and middle school students in the past and I am currently an American Reads tutor working in a middle schoo... 9 Subjects: including algebra 1, geometry, algebra 2, SAT math ...I am flexible with my schedule and location. We can meet at your place or a nearby coffee shop, depending on your needs and preferences. I am prompt to reply. 
18 Subjects: including algebra 1, chemistry, writing, physics Related Guttenberg, NJ Tutors Guttenberg, NJ Accounting Tutors Guttenberg, NJ ACT Tutors Guttenberg, NJ Algebra Tutors Guttenberg, NJ Algebra 2 Tutors Guttenberg, NJ Calculus Tutors Guttenberg, NJ Geometry Tutors Guttenberg, NJ Math Tutors Guttenberg, NJ Prealgebra Tutors Guttenberg, NJ Precalculus Tutors Guttenberg, NJ SAT Tutors Guttenberg, NJ SAT Math Tutors Guttenberg, NJ Science Tutors Guttenberg, NJ Statistics Tutors Guttenberg, NJ Trigonometry Tutors
{"url":"http://www.purplemath.com/guttenberg_nj_algebra_1_tutors.php","timestamp":"2014-04-16T10:49:14Z","content_type":null,"content_length":"24059","record_id":"<urn:uuid:6316d286-aea2-4312-ba1d-00df318e23d1>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00567-ip-10-147-4-33.ec2.internal.warc.gz"}
Maximal number of maximal subgroups

Let $G$ be a finite group. I want to find an upper bound on the number of maximal subgroups. My question is: is it possible to prove that the number of maximal subgroups of any finite group $G$ is at most $|G|^{100}$?

One can easily find that any subgroup is generated by at most $\log|G|$ elements, thus the number of subgroups (in particular maximal subgroups) is at most $|G|^{\log|G|}$. Is it possible to improve this upper bound? For an abelian group the number of maximal subgroups is at most $|G|$, and in fact I do not know any example where the number of maximal subgroups is more than $|G|$.

I am almost sure that I am not the first to ask this question. I would like to know whether the answer is known, or whether this is a hard question.

There's a load of good stuff on this here: aimath.org/news/wallsconjecture/wall.conjecture.pdf – Nick Gill Dec 7 '12 at 15:46
Liebecka, Laszlo Pyberb, Aner Shalev: On a conjecture of G.E. Wall. Journal of Algebra, 317, no.1, p. 184-197 (2007) they give an upper bound of $c|G|^{3/2} $ for some absolute constant $c$. – Jesse Peterson Aug 15 '12 at 17:33 1 @Jesse, that should be an answer. – Benjamin Steinberg Aug 15 '12 at 17:54 3 There's no need to make this a CW. I learned of Wall's conjecture from Feng Xu who has considered generalizations for subfactors in a couple of papers. He asked a related question about this in April: mathoverflow.net/questions/94553/… – Jesse Peterson Aug 15 '12 at 18:59 1 NB There's a typo in the reference - that should be Martin Liebeck. – HJRW Aug 15 '12 at 20:51 1 There is a book called subgroup lattices. Google it. – Benjamin Steinberg Aug 16 '12 at 13:46 show 5 more comments The document I linked to above is sufficiently striking as to warrant an answer of its own. I hope it complements the community wiki above. As mentioned above the relevant conjecture in this area is due to Wall: Conjecture The number of maximal subgroups of a finite group $G$ is less than the order of $G$. This has been the subject of much study with the landmark work (until recently) being the above-cited work of Liebeck, Pyber and Shalev. In addition to the result mentioned above they show that the conjecture is true if the group $G$ is simple, up to a finite number of exceptions. Now a quote from the linked document is relevant: This largely directed attention to composite groups, where Wall in his original paper had at least shown the conjecture to be true for finite solvable groups. The key remaining cases were known to be semidirect products of a vector space V with a nearly simple finite group G acting faithfully and irreducibly on it. up vote 8 down vote It turns out that in this case Wall's conjecture implies some bounds on the cohomology groups $H^1(G,V)$. And, as the document relates, examples have now been found which violate these bounds. 
In particular, Wall's conjecture does not hold. In light of this development, the bound $C|G|^{3/2}$ mentioned above, also due to Liebeck, Pyber and Shalev, assumes greater importance. Although, as the linked document mentions, it is likely that the value $3/2$ can be reduced a great deal. One final interesting quote: A conjecture of Aschbacker and Guralnick, not made at the conference... would now rise to be the main conjecture in maximal subgroup theory. (The conjecture states that it is the number of conjugacy classes of maximal subgroups that is bounded, less than the number of conjugacy classes of elements in the group.) Anyone interested should definitely read this document. Not only is it interesting mathematically, it's a very engaging account of how this recent breakthrough was achieved. Your link was broken. I think I fixed it (that is it links to something now, though it's possible you intended to link to something else on that site). – Noah Snyder Dec 7 '12 at 22:26 Fantastic answer! I hadn't heard about this breakthrough, though like Jesse I've been interested in Wall's conjecture from the subfactor side. – Noah Snyder Dec 7 '12 at 22:36 @Noah, thanks for fixing the link (you got the right one) and for the positive feedback - appreciated. nick – Nick Gill Dec 10 '12 at 16:39 add comment Not the answer you're looking for? Browse other questions tagged gr.group-theory finite-groups lattices reference-request rt.representation-theory or ask your own question.
{"url":"http://mathoverflow.net/questions/104759/maximal-number-of-maximal-subgroups","timestamp":"2014-04-18T18:40:42Z","content_type":null,"content_length":"69139","record_id":"<urn:uuid:468d76b3-6d1e-4122-9db4-86697862e7cc>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00172-ip-10-147-4-33.ec2.internal.warc.gz"}
www.whyslopes.com >> Work and Study Tips >> H. Jigsaw puzzles and problem solving
Next: [H more - Routine to non-routine problem solving.] Previous: [G. Written work formats for developing and showing skill.]

Problem Solving Tips and Methods

Solving problems in mathematics and other subjects sounds practical. There are two kinds of problems, routine and not. The solution of routine problems may be given in class for students to apply. When a problem is routine, routine solutions should be employed. So routine solution methods for routine problems should be met and memorized to avoid the extra work required for solving problems whose solution is not given. The rule of thumb is as follows:

• For Routine Problems, learn and use Routine Methods.
• For non-routine problems, be combinatorial, follow strategies, and try whatever may work, near or far.

There is a need to master or at least identify what problems are routine. Otherwise, you will spend time looking for and inventing solutions for problems whose solutions should be routine or automatic.

The Jigsaw Puzzle Approach

Problem solving may be like putting together a jigsaw puzzle. In solving a jigsaw puzzle, we may begin with the sides, as pieces with straight edges are fewer in number and must be aligned; after that, the more difficult to place inside pieces may be fitted together. Jigsaw puzzles may be made more challenging by hiding the picture they are supposed to form, or by assembling the pieces upside down. That being said, with the pieces picture side up, we may try to put them together with trial and error as needed, but with continuity and drawn shape limiting the trial and error. This trial-and-error combination of pieces that go together may be ad hoc, opportunistic and in general combinatorial. The trial and error requires persistence. With that, over time, more and more of the puzzle will be solved until, if all the pieces are present, the problem is fully solved. Unfortunately, jigsaw puzzle pieces may walk away over time, so there is no guarantee that all the effort made will lead to a complete picture to solve the puzzle. More generally, when we are tackling a non-routine problem or puzzle, the existence of solutions is not always certain.

Text Book Problems and Exercises

For most textbook problems and exercises, all the pieces or elements needed to solve the problem are likely to be present in the current or previous chapters. They may just need to be fitted together in ways similar to the worked problems or examples in the text or course notes. The similarity will be close for the easiest problems and further for the more complicated ones. Skill in following textbook patterns may become routine or almost so with practice, with careful reading of the text or notes, with care not to forget earlier skills and methods. In senior high school and college mathematics and mathematical subjects, problem solving may remain routine and feasible with time and effort to see and master all the problem solving methods present in notes or a textbook.

Thinking Outside the Box

What is routine for one is not for another. Experience counts. Where a student may have to think hard to solve a problem, an older student or an instructor may tackle a problem based on past experience. Problem solving may think out of the box or the confines of earlier problem solving practices, when none of the latter practices apply. Thinking out of the box means looking for new angles or different perspectives for tackling or addressing the problem. Or, it may involve tackling what appears to be a related, similar or easier problem in the hope that experience with the latter will make the original problem addressable. Not all is certain. And for problems from real life, solutions may be routine, solutions may be difficult to find, or the existence of solutions may not be known. Some trial and error may be required, with success not always certain for the original formulation of a problem.

Real World Problems

In real world problems and questions, unlike most problems and exercises in a book, there may be no given pattern to follow. Not all is certain. There may be missing pieces or extra pieces, and no guarantee that the solution can be done.

Preparing to Solve Problems

Master Logic: Again, problem solving is like putting together a jigsaw puzzle. In the case of textbook problems, all the pieces are present and just need to be fitted together following the clues, and possibly a picture showing the desired result.

Sidebar

Road Safety Messages for All: When walking on a road, when is it safer to be on the side allowing one to see oncoming traffic?

Mathematics Concept & Skill Development Lecture Series: Webvideo consolidation of site lessons and lesson ideas in preparation. Price to be determined.

Bright Students: Top universities want you. While many have high fees: many will lower them, many will provide funds, many have more scholarships than students. Postage is cheap. Apply and ask how much help is available. Caution: some programs are rewarding. Others lead nowhere. After acceptance, it may be easy or not to switch.

Complex Number Java Applet: visually do complex number arithmetic with polar and Cartesian coordinates and with the head-to-tail addition of arrows in the plane. Click and drag complex numbers A and B to change their locations. Play with this [unsigned] applet.

Pattern Based Reason: Are you a careful reader, writer and thinker? Five logic chapters lead to greater precision and comprehension in reading and writing at home, in school, at work and in mathematics. Online Volume 1A, Pattern Based Reason, describes origins, benefits and limits of rule- and pattern-based reason and decisions in society, science, technology, engineering and mathematics. Not all is certain. We may strive for objectivity, but not reach it. Online postscripts offer a story-telling view of learning: [A] [B] [C] [D] to suggest how we share theory and practice in many fields of knowledge.
• 1 versus 2-way implication rules - A different starting point - Writing or introducing the 1-way implication rule IF B THEN A as A IF B may emphasize the difference between it and the 2-way implication A IF and ONLY IF B.
• Deductive Chains of Reason - See which implications can and cannot be used together to arrive at more implications or conclusions.
• Mathematical Induction - a light romantic view that becomes serious.
• Responsibility Arguments - his, hers or no one's.
• Islands and Divisions of Knowledge - a model for many arts and disciplines including mathematics.

Course design: Different entry points may make learning and teaching easier. Are you ready for them?

Site Reviews

1996 - Magellan, the McKinley Internet Directory: "Mathphobics, this site may ease your fears of the subject, and perhaps even help you enjoy it. The tone of the little lessons and 'appetizers' on math and logic is unintimidating, sometimes funny and very clear. There are a number of different angles offered, and you do not need to follow any linear lesson plan. Just pick and peck. The site also offers some reflections on teaching, so that teachers can not only use the site as part of their lesson, but also learn from it."

2000 - Waterboro Public Library, home schooling section: "CRITICAL THINKING AND LOGIC ... Articles and sections on topics such as how (and why) to learn mathematics in school; pattern-based reason; finding a number; solving linear equations; painless theorem proving; algebra and beyond; and complex numbers, trigonometry, and vectors. Also section on helping your child learn ... Lots more!"

2001 - Math Forum News Letter 14: "... new sections on Complex Numbers and the Distributive Law for Complex Numbers offer a short way to reach and explain: trigonometry, the Pythagorean theorem, trig formulas for dot- and cross-products, the cosine law, a converse to the Pythagorean Theorem."

2002 - NSDL Scout Report for Mathematics, Engineering, Technology -- Volume 1, Number 8: "Math resources for both students and teachers are given on this site, spanning the general topics of arithmetic, logic, algebra, calculus, complex numbers, and Euclidean geometry. Lessons and how-tos with clear descriptions of many important concepts provide a good foundation for high school and college level mathematics. There are sample problems that can help students prepare for exams, or teachers can make their own assignments based on the problems. Everything presented on the site is not only educational, but interesting as well. There is certainly plenty of material; however, it is somewhat poorly organized. This does not take away from the quality of the information, though."

2005 - The NSDL Scout Report for Mathematics Engineering and Technology -- Volume 4, Number 4: "... section Solving Linear Equations ... offers lesson ideas for teaching linear equations in high school or college. The approach uses stick diagrams to solve linear equations because they 'provide a concrete or visual context for many of the rules or patterns for solving equations, a context that may develop equation solving skills and confidence.' The idea is to build up ..."

Other Site Lessons

• Decimal Place Value - funny ways to read multidigit decimals forwards and backwards in groups of 3 or 6.
• Decimals for Tutors - learn how to explain or justify operations. Long division of polynomials is easier for students who master long division with decimals.
• Primes Factors - Efficient fraction skills and later studies of polynomials depend on this.
• Fractions + Ratios - See how raising terms to obtain equivalent fractions leads to methods for addition, comparison, subtraction, multiplication and division of fractions.
• Arithmetic with units - Skills of value in daily life and in the further study of rates, proportionality constants and computations in science & technology.
• What is a Variable? - this entertaining oral & geometric view may be before and besides more formal definitions - is the view mathematically correct?
• Formula Evaluation - Seeing and showing how to, when need be, do and record steps or intermediate results of multistep methods allows the steps or results to be seen and checked as done or later, and will improve both marks and skill. The format here allows the domino effects of care and the domino effects of mistakes to be seen. It also emphasizes a proper use of the equal sign.
• Solve Linear Eqns - with and then without fractional operations on line segments - meet a visual introduction and learn how to present, do and record steps in a way that demonstrates skill; learn how to check answers; set the stage for solving word problems by learning how to solve systems of equations in essentially one unknown; set the stage for solving triangular and general systems of equations algebraically.
• Function notation for Computation Rules - another way of looking at formulas. Does a computation rule, and any rule equivalent to it, define a function?
• Axioms [some] as equivalent Computation Rule view - another way for understanding and explaining axioms.
• Using Formulas Backwards - Most rules, formulas and relations may be used forwards and backwards. Talking about it should lead everyone to expect ...
In the case of real world problems, there may be missing pieces student confidence in problem solving before presenting any backward use alone or plural, after mastery of or extra pieces, and no guarantee that the solution can be done. formal algebraic statement of the rule and patterns for solving forward use. Proportionality relations may be use equations. ... backward first to find a proportionality constant Problem solving besides thinking out of the box and being opportunistic an before being used forwards and backwards to solve combinatorial in looking for clues to use alone or with others requires - Euclidean Geometry - See how chains of reason appears in and a problem. precision in reading, writing and figuring. Imprecise logic and language besides geometric constructions. abilities will lead to difficulties. Precision reading and writing, and - Complex Numbers - Learn how rectangular and polar coordinates Early High School Geometry opportunistic trial and error skills for problem solving may be refined and may be used for adding, multiplying and reflecting points in the developed (we hope) by reading the following chapters in site Volumes 1A and 2. plane, in a manner known since the 1840s for representing and Maps + Plans Use demystifying "imaginary" numbers, and in a manner that provides a 1. Implication Rules (Volume 1, Part I, Pattern Based Reason) quicker, mathematically correct, path for defining "circular" - Measurement use maps, plans and diagrams drawn 2. chains of reason (Volume 1, Part I, Pattern Based Reason) trigonometric functions for all angles, not just acute ones, and to scale. 3. longer chains of reason (Volume 1, Part I, Pattern Based Reason) easily obtaining their properties. Students of vectors in the 4. islands and divisions of knowledge (Volume 1, Part I, Pattern Based Reason) plane may appreciate the complex number development of - 5. 
painless theorem proving (Volume 2, Three Skills for Algebra) trig-formulas for dot- and cross-products. Lines-Slopes [I] - Take I & take II respectively assume no - These appetizers and lessons show how rules and patterns may fit together to knowledge and some knowledge of the tangent function in arrive at conclusions or solve SOME problems. Other problems are just too hard. trigonometry. Coordinates We can't prevent that. Why study slopes - this fall 1983 calculus appetizer shone in - Use them not only for locating points but also Master Fractions many classes at the start of calculus. It could also be given for rotating and translating in the plane. after the intro of slopes to introduce function maxima and minima Many applied mathematics problems involving chopping and combining lengths, at the ends of closed intervals. - areas and volumes. So you need to know how to take a proper or improper fraction - Why Factor Polynomials - Online Chapter 2 to 7 offer a light of a length, area or volume. You need to understand that one length may be 2.5 introduction function maxima and minima while indicating why we What is Similarity times or 2½ times or (5/2) times another. Any if you do calculation, you need to calculate derivatives or slopes to linear and nonlinear curves y do it with care or at least do it with the knowledge that an error in one step =f(x) - another view of using maps, plans and diagrams makes all that follows wrong. The ability to figure well and precisely, so that - Arithmetic Exercises with hints of algebra. - Answers are drawn to scale in the plane and space. Many you answer is correct, shows or suggests the ability to follow methods, one step given. If there are many differences between your answers and human-made objects are similar by design. at a time and one step after another in any subject, and in problem solving as those online, hire a tutor, one has done very well in a full year well. of calculus to correct your work. 
You may be worse than you - think. Algebra Word Problems 7 Complex Numbers Appetizer. Return to Page Top If your interest is in solving algebra word problems at the high school level, I What is or where is the square root of -1. With would recommend learning how to solve linear equations in several unknowns in an rectangular and polar coordinates, see how to add, effortless fashion. High school students who can solve linear equations in one multiply and reflect points or arrows in the unknown are often given word problems where extra variables have to be plane. The visual or geometric approach here known eliminated to formulate a single equation in one unknown quantity to solve. The in various forms since the 1840s, demystifies the trick here is to draw or extract a single equation from the given information. square root of -1 and the associated concept of But in most such words problems, it is easier to extract or draw from the given "imaginary" numbers. Here complex number information several linear equations in several unknowns to solve. Each sentence multiplication illustrates rotation and dilation in the word problem gives an equation in one or more unknowns or quantities. Now operations in the plane. the algebraic way of writing and thinking can be used to eliminate variables and to solve for the one or more quantities of interest in an effortless fashion. The algebraic solution of linear equations involves the elimination of variables Geometric Notions with Ruler & Compass to obtain say one equation in one unknown. This elimination process may be Constructions better done and recorded with algebraic notation. Going directly to one equation in one unknown to solve a problem requires more work to be done with words. 1 Initial Concepts & Terms 2 Angle, Vertex & Side www.whyslopes.com >> Work and Study Tips >> H Jigsaw puzzles and problem solving Correspondence in Triangles 3 Triangle Isometry/ Next: [H more - Routine to non-routine problem solving.] 
Previous: [G. Written Congruence 4 Side Side Side Method 5 Side Angle work formats for developing and showing skill.] [1] [2] [3] [4] [5] [6] [7] [8] Side Method 6 Angle Bisection 7 Angle Side Angle [9] [10][11] [12] [13] [14] [15] [16] [17] [18] [19] [20] [21] [22] [23] [24] Method 8 Isoceles Triangles 9 Line Segment Bisection 10 From point to line, Drop Return to Page Top Perpendicular 11 How Side Side Side Fails 12 How Side Angle Side Fails 13 How Angle Side Angle Return to Page Top
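The "several equations in several unknowns" approach described in the Algebra Word Problems section above can be sketched in a few lines of Python. The ticket problem below is an invented example, not one from the site; the point is only that each sentence of a word problem becomes one linear equation.

```python
def solve_2x2(a11, a12, b1, a21, a22, b2):
    """Solve a11*x + a12*y = b1 and a21*x + a22*y = b2 by elimination."""
    det = a11 * a22 - a12 * a21
    if det == 0:
        raise ValueError("no unique solution")
    # Cramer's rule: the recorded result of eliminating one variable at a time.
    x = (b1 * a22 - a12 * b2) / det
    y = (a11 * b2 - b1 * a21) / det
    return x, y

# Invented word problem: adult tickets cost 5, child tickets cost 2, and
# 30 tickets were sold for 96 in total. Each sentence gives one equation:
#   a + c   = 30   (tickets sold)
#   5a + 2c = 96   (total receipts)
adults, children = solve_2x2(1, 1, 30, 5, 2, 96)
print(adults, children)  # 12.0 18.0
```

Recording the elimination algebraically, as the section recommends, is exactly what the formulas inside `solve_2x2` do once and for all.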
Integral Help (Check if I'm doing these right)

December 13th 2007, 11:44 AM #1
Use the form of the definition of the integral given in the Properties of Integrals to evaluate the integrals below:

1. $\int_1^4 (x^2+2x-5)\,dx$

I then get: $\int_1^4 x^2\,dx + \int_1^4 2x\,dx - \int_1^4 5\,dx$

I do the calculations and end up with $\frac{n(n+1)(2n+1)}{n^3}\left(\frac{27}{6}\right)$. Take the limit as $n \to \infty$ and get $\int_1^4 (x^2+2x-5)\,dx = -\frac{1}{2}$.

2. $\int_0^5 (1+2x^3)\,dx$

Basically I did the same thing for this problem and end up with $\int_0^5 (1+2x^3)\,dx = 1$.

Here's another problem that I don't quite understand.

3. Find the Riemann sum for $f(x) = \sin x$, $0 \le x \le \frac{3\pi}{2}$, with six terms, taking the sample points to be right endpoints. Give your answer correct to six decimal places.

There are several other parts to this particular question, but I think I can manage to answer those if I find out how to do this part.

December 13th 2007, 12:08 PM #2
No, I am sorry to say, you are not doing them right. What exactly are you trying to do? I see a possible relationship to a Riemann sum there. By the right endpoint method:

$\frac{27}{n^{3}}\sum_{k=1}^{n}k^{2}+\frac{36}{n^{2}}\sum_{k=1}^{n}k-\frac{6}{n}\sum_{k=1}^{n}1$

Now, use the identities you learned. I see you used one in your post (the sum of the squares). Then take the limit as $n\rightarrow\infty$. You're shooting for 21, not -1/2.

Try the same technique on this one, if that's what you need to do.

December 13th 2007, 12:11 PM #3
Hmm... I see what I have done wrong. Thank you for your help. I still need help with the last question, though, which I added while you were posting. When using $k$: $k$ = any number in that interval, right?

December 13th 2007, 12:57 PM #4
Break $\sin x$ up into 6 partitions over the designated limits: $\Delta x = \frac{3\pi/2}{6} = \frac{\pi}{4}$. That's the width of each rectangle. Use that in your sine to find the heights of the rectangles. The area of a rectangle is width times height. Add them up:

$\sin\left(\frac{\pi}{4}\right)+\sin\left(\frac{\pi}{2}\right)+\cdots+\sin\left(\frac{3\pi}{2}\right)$

Then multiply that by your width of $\frac{\pi}{4}$.

Here's a graph: [graph omitted]

Last edited by galactus; November 24th 2008 at 05:38 AM.

December 13th 2007, 03:30 PM #5
I'm having problems simplifying this: $f(x_{i}) = \frac{5}{n}\left[1 + 2\left(1 + \frac{5i}{n}\right)^3\right]$. Any help?

December 13th 2007, 04:05 PM #6
$f(x_{i}) = \frac{5}{n}\left[1 + 2\left(1 + \frac{5i}{n}\right)^3\right] = \frac{5}{n} + \frac{10}{n}\left(1 + \frac{5i}{n}\right)^3$

December 13th 2007, 04:36 PM #7
I get to that point, but the cube, or shall I say the "1 +", is bothering me. Is there any way to simplify it or something?

December 13th 2007, 04:49 PM #8
This is my solution to the second one.

$\Delta x = \frac{5}{n}$ and $x_i = \frac{5i}{n}$

$f(x_i)\,\Delta x = \left[1 + 2\left(\frac{5i}{n}\right)^3\right]\frac{5}{n}$

I simplify and get $\frac{15}{n} + \frac{625i^3}{n^4}$. Do the stuff with the sigma notation and get $\frac{15}{n} + \frac{625}{2}\left(\frac{n(n+1)}{n^2}\right)^2$. I take the limit of that and get $\frac{625}{2}$.

December 13th 2007, 04:51 PM #9
Looks like you need to add 5. Then you get:

$\frac{1250}{n^{4}}\sum_{k=1}^{n}k^{3}+\frac{5}{n}\sum_{k=1}^{n}1 = \frac{625}{n}+\frac{625}{2n^{2}}+\frac{635}{2}$

Now, see the limit?

Hmm, yeah. I've got to go over the others and make sure I didn't make the same mistake with those problems. Thanks.
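The right-endpoint Riemann sums discussed in the thread above are easy to check numerically. A small sketch:

```python
import math

def right_riemann(f, a, b, n):
    """Sum f at the n right endpoints of [a, b], times the width (b - a)/n."""
    dx = (b - a) / n
    return sum(f(a + i * dx) for i in range(1, n + 1)) * dx

# Problem 3 from the thread: f(x) = sin x on [0, 3*pi/2], six rectangles
# of width pi/4; correct to six decimal places the sum is 0.555360.
print(round(right_riemann(math.sin, 0, 3 * math.pi / 2, 6), 6))

# Problem 2: as n grows the sums approach 635/2 = 317.5, the limit found
# algebraically in the last reply.
print(right_riemann(lambda x: 1 + 2 * x**3, 0, 5, 100000))
```

For problem 2 the numeric sum only approaches the limit; the exact value 635/2 comes from the algebra in the thread.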
The Soft Error Experts

The business of system reliability: defining the characteristics of our best target market with a simple cost model.

Supply chain managers have developed tools to monitor the supply of components for large-scale systems, including parameters like lead time, single source of supply, potential replacement parts, price and INCOTERMs in their analyses. They usually work with component engineers to gather the data needed for their management tools.

Reliability has historically been factored into the design (life duration) and the maintenance program, mostly for deterministic ageing effects that create a slow drift in performance. Another type of reliability issue is unexpected, random failure, like single event effects (SEEs), which can occur at any time, anywhere in the system, potentially causing catastrophic failures. Because of the difficulty of modelling and predicting SEEs, they are not necessarily well analyzed; for large systems they are sometimes like the needle in the haystack. When components show this type of reliability issue in the field, the information is supposed to be fed back to the supply chain's monitoring system, which organizes repairs and recalls, and design change requests if needed. These operations often come at a large cost to the system vendor, both internally and as contract penalties.

A simplified cost estimation model of this risk and the associated costs can be represented as:

C = C1 + P1*C3 + C4    (1)

Where:
- C1 represents the cost associated with development, implementation, fabrication, sales, etc.
- P1 is the probability of a failure in the field.
- C3 represents the cost of repair or recall of the product.
- C4 represents the cost of maintenance.

Adding the capability of assessing and correcting SEEs before shipment drastically lowers the risk of failure in the field. It modifies the cost structure as follows:

CR = C1 + C2 + P2*C3 + C4    (2)

Where:
- C2 is the overall cost of improving the reliability (analysis, test, mitigation, ...).
- P2 is the probability of a failure given the result (and recommendations) of the reliability audit. Statistically speaking, this probability follows Bayesian statistics.

The difference in cost between the two approaches is then:

ΔC = CR - C = C2 + C3*(P2 - P1)    (3)

ΔC needs to be negative, i.e. ΔC < 0, for the reliability audit to make business sense and generate a positive return on investment. Note that if the reliability analysis is performed well, the repair cost C3 should be much lower than in the previous case. For the sake of simplicity, we'll assume that C3 is the same (worst case).

The conditions for ΔC < 0 are clear from (3), and they define our target markets and our offering:

- C3 is large. This corresponds to certain applications and industries. In our experience, the cases where this cost can be prohibitive are aerospace, medical devices and cloud infrastructure.
- P2 << P1. This condition is achieved mainly with accurate analysis tools, deep knowledge and expertise in the field, and effective mitigation strategies.
- C2 is as small as possible. C2 has two components, the cost of analysis and the cost of mitigation, and it depends on the stage at which the problem is audited; obviously, the earlier the cheaper.

Therefore, for the target markets in which the cost of failure is prohibitive, we have to bring enough expertise to significantly lower the probability of failure through accurate analysis and effective mitigation, and our intervention should be as early as possible in the design phase. These statements are very helpful when defining our product portfolio and the type of data and engagement that we'll be seeking.

If you find these thoughts interesting, or you'd like to react to this blog, let us know! Your comments are welcome as usual!

DAC 2012: Interview with EE Times

DAC was quite busy this year at the Moscone center in San Francisco. It was a good way to test market demand for soft error solutions, or at least the interest of different industries in this reliability problem. Whereas many are aware of the problem, more than ever see it as a concern and try to be proactive about it. A summary of the markets concerned is shown in this snapshot taken from the graphics on our booth:

We've also been interviewed by EE Times online TV. You can see it here.

The Australian Transport Safety Bureau (ATSB) released on December 19th, 2011 the final report of its investigation of two repeated nose-dive incidents during a Qantas Airbus A330 flight from Singapore to Perth in October 2008 (Qantas Flight 72), resulting in an accident. The plane landed at a nearby airport after the incident, which caused at least 110 injured passengers and crew members and some damage to the inside of the aircraft. Here's the full report:

http://www.atsb.gov.au/media/3532398/ao2008070.pdf

See section 3.6.6, page 143, for the discussion of single event effects, and Appendix H.

The report is inconclusive about the root cause of the incident. Its origin was in a piece of avionics equipment called the TLN101 ADIRU (Air Data Inertial Reference Unit). Incorrect data for all the flight parameters were sent by this unit to the other avionics equipment, eventually creating false angle-of-attack information that misled the central computer, which reacted with a quick nose-down maneuver. The report mentions that the wrong signal probably came from the CPU (Intel 80960MC) inside the ADIRU. Other chips interacting with the CPU, and therefore potentially sending wrong signals, are an ASIC from AMIS, wait-state RAM from Austin Semiconductor (64Kx4) and RAM from Mosaic Semiconductor (128Kx8).

When skimming through the report, and especially the section about SEE, I had a few thoughts:

- The estimated SEU failure rate of the equipment is 1.96e-5 per hour, or 19,600 FIT (note: none of the memories were protected by ECC or EDAC). At an altitude of 37,000 ft, the neutron acceleration factor compared to the ground reference (New York City) is 83x (report data), so the equivalent failure rate at ground level should be about 236 FIT. The order of magnitude seems about right, even though I'd like to have more data about the process node and the total size of embedded memory. This FIT rate is just an estimate (from theory, not from test) and seems to take only memory SBUs (single bit upsets) into account.
- The investigators couldn't reproduce the symptoms through test. They focused mainly on neutron testing at 14 MeV. I imagine this is because it was a source they could access easily. Maybe a wider neutron range, up to hundreds of MeV (like the white neutron spectrum at Los Alamos, TSL or TRIUMF), would have been more appropriate, especially to create MCUs (multi cell upsets).
- The report states that the MCU/SBU rate is about 1%, so they didn't investigate further. This ratio depends on the process node! At the latest technologies (40 nm, 28 nm) it can be up to 40% on SRAM. The components seemed to have been manufactured with older process nodes. But even so, did they check the effect of thermal neutrons (boron-10 was used in older technologies)? Or alpha-particle contamination of the package?

I believe that this report needs a few more details on the issue, and a little more investigation to try to be more conclusive. Any thoughts and comments?

Every second, the equivalent of 63 billion CDs of data transits through the world's internet (source: Cisco). That's 1.5 ZB per year (1 ZB = 2^70 bytes!). As of December 31st, 2011, 2.3 billion persons were using the internet, a 5.3x increase since 2000 (source: internetworldstats.com). As lifestyles in almost every country on every continent move toward a ubiquitous mobile lifestyle, demand for remote mass storage and cloud computing capacity is rapidly increasing.

These numbers are mind-boggling, and they lead us to estimate the impact of a failure of this infrastructure that causes service disruption: we are not talking about thousands or millions of affected users. We are talking about hundreds of millions!

Obviously, cloud service architectures involve heavy redundancy, mirror imaging of servers in different geographical locations, disaster recovery procedures... Still, isn't there some single point of failure? While it is a well-accepted fact that software can show bugs, viruses, worms... what about hardware? Where firewalls, watchdogs and other software procedures are commonly put in place, aren't we less keen to accept hardware failure? Even trickier: what about hardware-generated data corruption? The hardware shows no sign of failure, the software is not infected by viruses... still, something's not right.

Soft errors, even as a small contributor to the overall reliability of systems, can still be the source of undetected failures that propagate to whole systems. What are we doing to mitigate this problem, especially in cloud computing and data storage infrastructure?

Soft errors in semiconductor devices are a reality and a threat to safety and reliability in microelectronic devices. This tutorial gives you tools to assess your device's level of risk and provides a 4-step solution plan to prevent soft errors.
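Both calculations above are easy to sketch numerically: the ΔC condition from the cost-model post, and the FIT arithmetic from the Qantas post. The dollar figures and probabilities below are invented placeholders, not data from the blog.

```python
def delta_c(c2, c3, p1, p2):
    """Cost difference of auditing vs. not auditing: dC = C2 + C3*(P2 - P1).

    A negative value means the reliability audit pays for itself.
    """
    return c2 + c3 * (p2 - p1)

# Hypothetical program: a field recall would cost C3 = $5M; the audit costs
# C2 = $200k and is assumed to cut the failure probability from 10% to 1%.
dc = delta_c(c2=200_000, c3=5_000_000, p1=0.10, p2=0.01)
print(dc < 0)  # True: the audit generates a positive return on investment

# The FIT arithmetic from the Qantas post: 1.96e-5 failures/hour is 19,600 FIT
# (1 FIT = 1e-9 failures/hour); dividing by the 83x neutron acceleration
# factor at 37,000 ft gives roughly 236 FIT at ground level.
fit_at_altitude = 1.96e-5 / 1e-9
print(round(fit_at_altitude), round(fit_at_altitude / 83))  # 19600 236
```

The same three levers from the post are visible in the function signature: a larger C3, a smaller C2, or a bigger gap between P1 and P2 all push ΔC negative.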
DCL, extended precision, and floating point?

The Question is: According to DCL Help, the symbol data types available are basically just string and integer. Is there any way to use a longword or double type in order to calculate numbers that are greater than 2^32? Do I need to code my own program to do the calculations and output the numbers? And is there any way to represent floating point with DCL?

The Answer is: DCL supports longword integers and strings. Longwords are 32 bits. There is no DCL-provided mechanism to calculate with non-integer values, nor with values that exceed a longword. You will want to code your own program, as DCL is not particularly suited to this task. In theory, you could code DCL routines to provide multiple-precision arithmetic, but the results would be neither pretty nor speedy. DCL is appropriate for file and string manipulations, but if you want to do numbers, please use a compiler or a calculator -- freeware or shareware executable images that provide for calculations from within DCL procedures have been referenced in newsgroups in the past.
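One way to follow the Wizard's advice is to keep DCL for file and string work and shell out to a small program for the arithmetic. The Q&A names no particular language; Python is used below only as an illustration of an external calculator, since its integers are arbitrary precision and it handles floating point directly.

```python
# DCL symbols hold 32-bit longwords, so this is the ceiling DCL can reach:
LONGWORD_MAX = 2**32 - 1  # largest unsigned 32-bit value

# An external program has no such limit; Python ints grow as needed.
a = 2**32
b = 10**20
print(a + b > LONGWORD_MAX)  # True
print(a * b)

# Floating point is equally unavailable in DCL symbols; the external
# program computes it directly.
print(7 / 2)  # 3.5
```

A DCL procedure could run such a program with its inputs and capture the printed result back into a symbol, which is exactly the "calculator image" pattern the answer describes.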
Distinct equivalence classes and the null set

December 30th 2009, 02:33 AM

Let R be an equivalence relation on a set S. Let E and F be two distinct equivalence classes of R. Prove that E and F = null set. Do I show that it's transitive? I'm a bit stuck. Help much appreciated, thanks.

December 30th 2009, 02:43 AM

Do you mean $E \cap F = \emptyset$? Assume there exists an element in $E \cap F$, call it $x$. Then you can apply transitivity, as every element of $E$ is equivalent to $x$, as is every element of $F$. $e \sim x$ and $x \sim f \Rightarrow e \sim f \Rightarrow E = F$, a contradiction.

December 30th 2009, 02:45 AM

Yep, that's what I meant. Thanks.

January 5th 2010, 10:21 AM

Actually, I'm a bit confused. How is this a contradiction? It doesn't show it's in the null set, just that e is in f?

January 5th 2010, 10:50 AM

You assumed that there is some element in $E\cap F$. As Swlabr showed, this implies $E=F$; however, you took E, F to be distinct equivalence classes, i.e. $E \neq F$ - so this is a contradiction.

January 5th 2010, 12:47 PM

More of a forward-knowledge looking-back approach (since you need to do this problem to prove what I'm about to say), but an equivalence relation $R$ on $S$ induces a partition $\pi$ of $S$ whose blocks are the equivalence classes. If that is a definition in your book, the answer follows immediately.
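The result proved in the thread above, that distinct equivalence classes are disjoint, can be illustrated with a short Python sketch. The relation chosen below (congruence mod 3 on {0, ..., 8}) is an example of my choosing, not from the thread.

```python
def equivalence_classes(S, related):
    """Group the elements of S into the classes of an equivalence relation."""
    classes = []
    for x in S:
        for c in classes:
            # x joins a class if it is related to one representative; for an
            # equivalence relation, transitivity relates x to every member.
            if related(x, next(iter(c))):
                c.add(x)
                break
        else:
            classes.append({x})
    return classes

# Example relation: congruence mod 3 on {0, ..., 8}.
classes = equivalence_classes(range(9), lambda a, b: a % 3 == b % 3)
print([sorted(c) for c in classes])  # [[0, 3, 6], [1, 4, 7], [2, 5, 8]]

# The thread's result: distinct classes have empty intersection.
for i, E in enumerate(classes):
    for F in classes[i + 1:]:
        assert E & F == set()
```

Comparing against a single representative per class is sound for the same reason the proof works: if x were related to representatives of two classes, transitivity would force those classes to be equal.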
Segmentation by grouping junctions

Results 1 - 10 of 82

- IEEE Transactions on Pattern Analysis and Machine Intelligence, 2001. Cited by 1384 (52 self).
  In this paper we address the problem of minimizing a large class of energy functions that occur in early vision. The major restriction is that the energy function's smoothness term must only involve pairs of pixels. We propose two algorithms that use graph cuts to compute a local minimum even when very large moves are allowed. The first move we consider is an α-β swap: for a pair of labels α, β, this move exchanges the labels between an arbitrary set of pixels labeled α and another arbitrary set labeled β. Our first algorithm generates a labeling such that there is no swap move that decreases the energy. The second move we consider is an α-expansion: for a label α, this move assigns an arbitrary set of pixels the label α. Our second ...

- IEEE Transactions on Pattern Analysis and Machine Intelligence, 2001. Cited by 794 (48 self).
  After [10, 15, 12, 2, 4] minimum cut/maximum flow algorithms on graphs emerged as an increasingly useful tool for exact or approximate energy minimization in low-level vision. The combinatorial optimization literature provides many min-cut/max-flow algorithms with different polynomial time complexity. Their practical efficiency, however, has to date been studied mainly outside the scope of computer vision. The goal of this paper ...

- IEEE Transactions on Pattern Analysis and Machine Intelligence, 2004. Cited by 699 (21 self).
  In the last few years, several new algorithms based on graph cuts have been developed to solve energy minimization problems in computer vision. Each of these techniques constructs a graph such that the minimum cut on the graph also minimizes the energy. Yet, because these graph constructions are complex and highly specific to a particular energy function, graph cuts have seen limited application to date. In this paper, we give a characterization of the energy functions that can be minimized by graph cuts. Our results are restricted to functions of binary variables. However, our work generalizes many previous constructions and is easily applicable to vision problems that involve large numbers of labels, such as stereo, motion, image restoration, and scene reconstruction. We give a precise characterization of what energy functions can be minimized using graph cuts, among the energy functions that can be written as a sum of terms containing three or fewer binary variables. We also provide a general-purpose construction to minimize such an energy function. Finally, we give a necessary condition for any energy function of binary variables to be minimized by graph cuts. Researchers who are considering the use of graph cuts to optimize a particular energy function can use our results to determine if this is possible and then follow our construction to create the appropriate graph. A software implementation is freely available.

- 2001. Cited by 657 (14 self).
  In this paper we describe a new technique for general purpose interactive segmentation of N-dimensional images. The user marks certain pixels as "object" or "background" to provide hard constraints for segmentation. Additional soft constraints incorporate both boundary and region information. Graph cuts are used to find the globally optimal segmentation of the N-dimensional image. The obtained solution gives the best balance of boundary and region properties among all segmentations satisfying the constraints. The topology of our segmentation is unrestricted and both "object" and "background" segments may consist of several isolated parts. Some experimental results are presented in the context of photo/video editing and medical image segmentation. We also demonstrate an interesting Gestalt example. A fast implementation of our segmentation method is possible via a new max-flow algorithm in [2].

- IJCV, 2003. Cited by 524 (15 self).
  In this paper we present a statistical framework for modeling the appearance of objects. Our work is motivated by the pictorial structure models introduced by Fischler and Elschlager. The basic idea is to model an object by a collection of parts arranged in a deformable configuration. The appearance of each part is modeled separately, and the deformable configuration is represented by spring-like connections between pairs of parts. These models allow for qualitative descriptions of visual appearance, and are suitable for generic recognition problems. We use these models to address the problem of detecting an object in an image as well as the problem of learning an object model from training examples, and present efficient algorithms for both these problems. We demonstrate the techniques by learning models that represent faces and human bodies and using the resulting models to locate the corresponding objects in novel images.

- Cited by 266 (11 self).
  Several new algorithms for visual correspondence based on graph cuts [7, 14, 17] have recently been developed. While these methods give very strong results in practice, they do not handle occlusions properly. Specifically, they treat the two input images asymmetrically, and they do not ensure that a pixel corresponds to at most one pixel in the other image. In this paper, we present a new method which properly addresses occlusions, while preserving the advantages of graph cut algorithms. We give experimental results for stereo as well as motion, which demonstrate that our method performs well both at detecting occlusions and computing disparities.

- In Proc. Int. Conf. Computer Vision, 2003. Cited by 181 (7 self).
  We propose a principled account on multiclass spectral clustering. Given a discrete clustering formulation, we first solve a relaxed continuous optimization problem by eigendecomposition. We clarify the role of eigenvectors as a generator of all optimal solutions through orthonormal transforms. We then solve an optimal discretization problem, which seeks a discrete solution closest to the continuous optima. The discretization is efficiently computed in an iterative fashion using singular value decomposition and nonmaximum suppression. The resulting discrete solutions are nearly global-optimal. Our method is robust to random initialization and converges faster than other clustering methods. Experiments on real image segmentation are reported.

- In IEEE Conference on Computer Vision and Pattern Recognition, 1998. Cited by 166 (22 self).
  Markov Random Fields (MRF's) can be used for a wide variety of vision problems. In this paper we focus on MRF's with two-valued clique potentials, which form a generalized Potts model. We show that the maximum a posteriori estimate of such an MRF can be obtained by solving a multiway minimum cut problem on a graph. We develop efficient algorithms for computing good approximations to the minimum multiway cut.
The visual correspondence problem can be formulated as an MRF in our framework; this yields quite promising results on real data with ground truth. We also apply our techniques to MRF’s with linear clique potentials. 1 - IN IEEE SYMPOSIUM ON FOUNDATIONS OF COMPUTER SCIENCE , 1999 "... In a traditional classification problem, we wish to assign one of k labels (or classes) to each of n objects, in a way that is consistent with some observed data that we have about the problem. An active line of research in this area is concerned with classification when one has information about pa ..." Cited by 161 (2 self) Add to MetaCart In a traditional classification problem, we wish to assign one of k labels (or classes) to each of n objects, in a way that is consistent with some observed data that we have about the problem. An active line of research in this area is concerned with classification when one has information about pairwise relationships among the objects to be classified; this issue is one of the principal motivations for the framework of Markov random fields, and it arises in areas such as image processing, biometry, and document analysis. In its most basic form, this style of analysis seeks a classification that optimizes a combinatorial function consisting of assignment costs --- based on the individual choice of label we make for each object --- and separation costs --- based on the pair of choices we make for two "related" objects. We formulate a general classification problem of this type, the metric labeling problem; we show that it contains as special cases a number of standard classification f... - Proc. IEEE Computer Vision and Pattern Recognition Conf. , 2000 "... A pictorial structure is a collection of parts arranged in a deformable configuration. Each part is represented using a simple appearance model and the deformable configuration is represented by spring-like connections between pairs of parts. 
While pictorial structures were introduced a number of ye ..." Cited by 161 (9 self) Add to MetaCart A pictorial structure is a collection of parts arranged in a deformable configuration. Each part is represented using a simple appearance model and the deformable configuration is represented by spring-like connections between pairs of parts. While pictorial structures were introduced a number of years ago, they have not been broadly applied to matching and recognition problems. This has been due in part to the computational difficulty of matching pictorial structures to images. In this paper we present an efficient algorithm for finding the best global match of a pictorial structure to an image. The running time of the algorithm is optimal and it it takes only a few seconds to match a model with ve to ten parts. With this improved algorithm, pictorial structures provide a practical and powerful framework for qualitative descriptions of objects and scenes, and are suitable for many generic image recognition problems. We illustrate the approach using simple models of a person and a car.
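Many of the abstracts above hinge on a min-cut/max-flow computation. As a self-contained toy illustration only (not the graph construction of any particular paper listed here, and with made-up node numbers and capacities), a plain Edmonds-Karp max-flow on a small directed graph with integer capacities:

```python
from collections import deque

def max_flow(capacity, s, t):
    """Edmonds-Karp: repeatedly augment along shortest residual paths.

    capacity is an n x n matrix of edge capacities; returns the max-flow
    value, which by max-flow/min-cut equals the minimum s-t cut capacity.
    """
    n = len(capacity)
    residual = [row[:] for row in capacity]
    flow = 0
    while True:
        # BFS for a shortest s -> t path in the residual graph.
        parent = [-1] * n
        parent[s] = s
        q = deque([s])
        while q and parent[t] == -1:
            u = q.popleft()
            for v in range(n):
                if residual[u][v] > 0 and parent[v] == -1:
                    parent[v] = u
                    q.append(v)
        if parent[t] == -1:
            return flow  # no augmenting path left
        # Find the bottleneck capacity along the path, then push that much.
        bottleneck, v = float("inf"), t
        while v != s:
            u = parent[v]
            bottleneck = min(bottleneck, residual[u][v])
            v = u
        v = t
        while v != s:
            u = parent[v]
            residual[u][v] -= bottleneck
            residual[v][u] += bottleneck
            v = u
        flow += bottleneck

# Tiny example graph: source is node 0, sink is node 3.
cap = [
    [0, 3, 2, 0],
    [0, 0, 1, 2],
    [0, 0, 0, 2],
    [0, 0, 0, 0],
]
```

On this example the edges into the sink have total capacity 4, and the algorithm finds a flow of that value, so the min cut separates the sink from the rest.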
Deformation theory of representations of an algebraic group

For an algebraic group G and a representation V, I think it's a standard result (but I don't have a reference) that
• the obstruction to deforming V as a representation of G is an element of H^2(G,V⊗V^*)
• if the obstruction is zero, isomorphism classes of deformations are parameterized by H^1(G,V⊗V^*)
• automorphisms of a given deformation (as a deformation of V; i.e. restricting to the identity modulo your square-zero ideal) are parameterized by H^0(G,V⊗V^*)
where the H^i refer to standard group cohomology (derived functors of invariants). The analogous statement, where the algebraic group G is replaced by a Lie algebra g and group cohomology is replaced by Lie algebra cohomology, is true, but the only proof I know is a big calculation. I started running the calculation for the case of an algebraic group, and it looks like it works, but it's a mess. Surely there's a long exact sequence out there, or some homological algebra cleverness, that proves this result cleanly. Does anybody know how to do this, or have a reference for these results? This feels like an application of cotangent complex ninjitsu, but I guess that's true about all deformation problems.

While I'm at it, I'd also like to prove that the obstruction, isoclass, and automorphism spaces of deformations of G as a group are H^3(G,Ad), H^2(G,Ad), and H^1(G,Ad), respectively. Again, I can prove the Lie algebra analogues of these results by an unenlightening calculation.

Background: What's a deformation? Why do I care?

I may as well explain exactly what I mean by "a deformation" and why I care about them. Last things first, why do I care? The idea is to study the moduli space of representations, which essentially means understanding how representations of a group behave in families. That is, given a representation V of G, what possible representations could appear "nearby" in a family of representations parameterized by, say, a curve? The appropriate formalization of "nearby" is to consider families over a local ring. If you're thinking of a representation as a matrix for every element of the group, you should imagine that I want to replace every matrix entry (which is a number) by a power series whose constant term is the original entry, in such a way that the matrices still compose correctly. It's useful to look "even more locally" by considering families over complete local rings (think: now I just take formal power series, ignoring convergence issues). This is a limit of families over Artin rings (think: truncated power series, where I set x^n=0 for large enough n).

So here's what I mean precisely. Suppose A and A' are Artin rings, where A' is a square-zero extension of A (i.e. we're given a surjection f:A'→A such that I:=ker(f) is a square-zero ideal in A'). A representation of G over A is a free module V over A together with an action of G. A deformation of V to A' is a free module V' over A' with an action of G so that when I reduce V' modulo I (tensor with A over A'), I get V (with the action I had before). An automorphism of a deformation V' of V as a deformation is an automorphism V'→V' whose reduction modulo I is the identity map on V. The "obstruction to deforming" V is something somewhere which is zero if and only if a deformation exists. I should add that the obstruction, isoclass, and automorphism spaces will of course depend on the ideal I. They should really be cohomology groups with coefficients in V⊗V^*⊗I, but I think it's normal to omit the I in casual conversation.

5 Answers

Accepted answer: A representation of G on a vector space V is a descent datum for V, viewed as a vector bundle over a point, to BG. That is, linear representations of G are "the same" as vector bundles on BG. So the question is equivalent to the analogous question about deformations of vector bundles on BG. We could just as easily ask about deformations of vector bundles on any space X. Given a vector bundle V on X, consider the category of all first-order deformations of V. An object is a vector bundle over X', where X' is an infinitesimal thickening (in the example, one may take X = BG x E where E is a local Artin ring and X' = BG x E' where E' is a square-zero extension whose ideal is isomorphic as a module to the residue field). A morphism is a morphism of vector bundles on X' that induces the identity morphism on V over X. If X is allowed to vary, this category varies contravariantly with X. Vector bundles satisfy fppf descent, so this forms a fppf stack over X. This stack is very special: locally it has a section (fppf locally a deformation exists) and any two sections are locally isomorphic. It is therefore a gerbe. Moreover, the isomorphism group between any two deformations of V is canonically a torsor under the group End(V) (this is fun to check). Gerbes banded by an abelian group H are classified by H^2(X,H) (this is also fun to check); the class is zero if and only if the gerbe has a section. If the gerbe has a section, the isomorphism classes of sections form a torsor under H^1(X,H). The isomorphisms between any two sections form a torsor under H^0(X,H). (This implies that the automorphism group of any section is H^0(X,H).) In our case, H = End(V), so we obtain a class in H^2(X,End(V)) and if this class is zero, our gerbe has a section, i.e., a deformation exists. In this case, all deformations form a torsor under H^1(X,End(V)), and the automorphism group of a deformation is H^0(X,End(V)). All of the cohomology groups above are sheaf cohomology in the fppf topology. If you are using a different definition of group cohomology, there is still something to check.

$G$ was an algebraic group. Now what is meant by $BG$? – Wilberd van der Kallen Jan 19 '11 at 15:36
$BG$ is the stack of $G$-torsors (principal $G$-bundles). – Keerthi Madapusi Pera Jan 19 '11 at 16:13

The statements about the group and Lie algebra in the question are special cases of a more general fact. Namely, if $A$ is an associative algebra and $V$ an $A$-module, then obstructions to deformations of $V$ lie in the Hochschild cohomology group $HH^2(A,{\rm End}(V))$, freedom of deformation in $HH^1(A,{\rm End}(V))$, and infinitesimal automorphisms in $HH^0(A,{\rm End}(V))$. This is rather easy to check using the bar complex. Now, the statement for Lie algebras is the special case $A=U({\mathfrak g})$, recalling that for any $U({\mathfrak g})$-bimodule $M$,
$$ HH^\ast (U({\mathfrak g}),M)=H^\ast({\mathfrak g},M_{ad}). $$
Similarly, for affine algebraic groups, it is the special case $A=O(G)^\ast$, where $O(G)$ is the coalgebra of regular functions, recalling that for any (algebraic) $G$-bimodule $M$,
$$ HH^\ast(O(G)^\ast,M)=H^\ast(G,M_{ad}). $$

This isn't a complete answer, but I think an enlightening trick. Deformations of V over the dual numbers are always in bijection with Ext^1(V,V) in any abelian category. The trick is that if you have a deformation V', you have a long exact sequence:
Hom(V,V) -> Hom(V',V) -> Hom(V,V) -> Ext^1(V,V) -> Ext^1(V',V) -> Ext^1(V,V) -> Ext^2(V,V)
You can see that the extension splits if and only if the image of the identity under the boundary map is trivial (using Baer sum, you can extend this trick to show that two extensions are isomorphic if and only if the image of the identity is the same). I think the obstruction in Ext^2(V,V) you had in mind is the image of that class under the next boundary map, by a similar argument.

I will offer a sketch of an argument, and maybe someone who knows what a stack is can make it happen for real. There is probably a non-stacky deformation theory of commutative Hopf algebras, but I don't know what it looks like. Deforming G as a group should be the same as deforming BG as a plain old geometric object. Pulling back a point in BG along a cover by a point is very roughly taking a based loop space, and the deformed loop space comes with the deformed composition law. Similarly, deforming a representation of G should be the same as deforming a sheaf on BG. I'm going to assume G is smooth. Then the tangent complex of BG mapping to a point is just the sheaf Ad, concentrated in degree 1. If we boldly assume that deformation theory of/on stacks works just like deformations of/on schemes, but maybe with some degree shifts, we should get the answers you want. For deforming G in particular, there is a canonical class in H^2(BG, Ad[-1]) that classifies obstructions, and if that vanishes, H^1(BG, Ad[-1]) classifies deformations and H^0(BG, Ad[-1]) classifies automorphisms of a deformation. When deforming the sheaf V, one usually sees the sheaf End(V) written as coefficients. Olsson wrote a paper on deformations of representable morphisms of stacks, and while the morphism BG -> S isn't representable, one might benefit from asking the author for additional details if one were, say, working in the same building as he.

I've heard this approach that "deforming BG as a stack is the same as deforming G as a group," and I like it, but I don't see why any deformation of BG must be of the form BG' for G' a deformation of G. Also, I don't understand how to do deformations of representations, but I guess the thing I have to understand is deformation theory of coherent sheaves on a scheme. I'll ask Martin about it when I get the chance. – Anton Geraschenko Oct 16 '09 at 5:07

About what Anton said at the end about deformations of a group. Let $m_0$ be the standard multiplication. Then I want to consider a deformation of the form $m:(G \times \epsilon \mathfrak{g}) \times (G \times \epsilon \mathfrak{g}) \to G \times \epsilon \mathfrak{g}$ where $m(g_1, g_2) = m_{0}(g_1,g_2) + \epsilon m_1 (g_1,g_2)$. When you write out the associativity condition $m\circ (m \times 1) = m \circ (1 \times m)$ it seems that you find that $(g_1,g_2) \mapsto (m_{1}(g_{1},g_{2}))(g_{1}g_{2})^{-1}$ is a group cohomology cocycle for G acting on $\mathfrak{g}$ by the adjoint representation. Now one has to identify $H^{2}(G,Ad)$ with $H^{2}(BG,Ad)$ (taking care of the topology somehow).
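For reference, expanding the associativity condition in the answer above about deforming the multiplication $m = m_0 + \epsilon m_1$ gives, to first order in $\epsilon$, the familiar 2-cocycle identity. This is a sketch only; sign conventions and the choice of left vs. right translation used to identify $m_1$ with a $\mathfrak{g}$-valued function may differ from the answer's.

```latex
% Set c(g,h) := m_1(g,h)\,(gh)^{-1} \in \mathfrak{g}.
% Comparing m\circ(m\times 1) with m\circ(1\times m) modulo \epsilon^2 yields
\mathrm{Ad}_g\, c(h,k) \;-\; c(gh,k) \;+\; c(g,hk) \;-\; c(g,h) \;=\; 0,
% i.e. c is a group 2-cocycle for G acting on \mathfrak{g} by the adjoint
% representation, so its class lives in H^2(G,\mathrm{Ad}).
```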
Originally Posted by
F = ma will be how much impact (i.e. force) the mass will produce. It is the increased kinetic energy a heavier mass will carry as compared to a lighter one given the same velocity that would have to be dissipated by the braking system, so: K = 1/2 * (MV^2). So given the same velocity (V), the higher mass (M) will produce higher kinetic energy to be dissipated by the braking system. Man, all this talk brings me back to college physics, which I'm kinda rusty on now. Correct me if I'm wrong in any of this. And if you wanna get really technical, you could find out what the coefficient of kinetic friction is for your tires on dry asphalt (0.5 to 0.8 generally) and brake pads on rotors and factor that in. Oh, and don't forget wind resistance. Man, you could get really technical.
I'll try my best to explain things here. I'm going to assume that whoever reads this post has little to no knowledge of classical mechanics. For those of you who took a course or two in physics, no offense. Just hang tight! Aviography is pretty much correct in everything he says, but in this scenario we really don't have to factor in energy, whether it be kinetic or potential (in regards to classical mechanics). For those of you who don't know, energy is a measurement of work equal to force times distance (W=F*x), measured in Joules (J). We could measure the amount of work done by the decelerating vehicle, but it is really not necessary. What we're really concerned with here is finding out the distance the vehicle will travel with the added weight compared to the vehicle without the added weight. In my scenario we will find out the distance, and as an added bonus, we will find out the time it takes to decelerate at a constant rate from a given velocity to rest with and without the added weight. If you know the formulas for force, acceleration, and velocity, and you can manipulate them to find unknowns, you're pretty much in luck. Anyway, let's begin.
First off, let's use the SI system (which is mostly metric units). Let's assume you have a 1200 kg vehicle and plan on adding 100 kg in the trunk later to test your theory. As you know, velocity (V) is equal to distance divided by time: V=x/t. In the SI system, distance (x) is measured in meters and time (t) is measured in seconds (s). So let's assume you accelerate at a constant rate from 0 km/h to 100 km/h (62 mph) in 8 seconds. 100 kilometers per hour in meters per second is approximately 27.78 m/s (divide km/h by 3.6). Therefore, after 8 seconds, your final velocity is 27.78 m/s. Acceleration (a) is equal to velocity divided by time: a=V/t. Acceleration can also be calculated by change in velocity divided by time: (final velocity - initial velocity)/t. The SI unit for acceleration is meters per second per second, commonly phrased as meters per second squared (m/s^2). Our initial velocity was 0 m/s and our final velocity is 27.78 m/s, so if we plug our numbers into the acceleration formula, we get roughly 3.47 m/s^2: (27.78 m/s - 0 m/s)/8 s. Force (F) is equal to mass multiplied by acceleration: F=m*a. The SI unit for mass is the kilogram (kg), and the SI unit for force is the Newton (N), named after Sir Isaac Newton. If you plug in 1200 kg for your mass and 3.47 m/s^2 for your acceleration, it takes about 4167 N of force to accelerate your 1200 kg vehicle from 0 km/h to 100 km/h in 8 seconds.
Following me so far? Simple physics, and we're not factoring in friction or wind resistance. Now here's a question: how much force would it take to decelerate your 1200 kg car from 100 km/h to 0 km/h in exactly 8 seconds? The answer is 4167 Newtons. Deceleration is merely negative acceleration. Technically, the answer could also be -4167 N depending on which directions you consider positive and negative. Basically you have to apply a certain force to accelerate a mass, and you would have to apply an equal amount of force in the opposite direction to bring that mass back to rest (4167 N - 4167 N = 0 N). To put this in terms of vehicle acceleration and deceleration, consider stepping on the gas pedal positive acceleration and hitting the brakes deceleration, or negative acceleration. Now let's add 100 kg to your 1200 kg car for a total mass of 1300 kg. Before you added the extra weight, you decelerated at a rate of 3.47 m/s^2; that is, you hit the brakes with a certain amount of force that caused you to decelerate at 3.47 m/s^2. In this next scenario, you're going to decelerate at the same rate of 3.47 m/s^2. How much force is required to decelerate a 1300 kg mass at 3.47 m/s^2? Assume acceleration to be positive in this scenario. Applying the equation F=m*a, we get about 4514 Newtons of force. Interpreting this: before adding the extra weight, it only required 4167 N to stop your car, but after adding an extra 100 kg, it requires 4514 N of force to stop the car. Now think about what you would have to do to stop your car with more force. Figured it out yet? You'd have to press the brake pedal harder.
If you haven't already figured it out: if your deceleration rate remains the same, your vehicle travels the same stopping distance regardless of its mass. It just requires more brake force to reach that deceleration rate when the vehicle is heavier. If you're wondering "What distance? We didn't define any distance!" then let's backtrack. Under constant deceleration from 27.78 m/s to rest, your average speed over the stop is half the initial speed (distance is average velocity times time, not final velocity times time, because the speed is changing the whole way). So the stopping distance is (27.78 m/s / 2) * 8 s = 111.1 meters (approximately 364.5 feet). Now your goal was to decrease your stopping distance by adding extra weight. Hey, it was a good theory, but let me explain some more. You were probably assuming you could use the same amount of brake force (or less) to stop your moving vehicle with the added weight, right? Remember 4167 Newtons? That's the amount of force required to stop your 1200 kg moving vehicle from 100 km/h to 0 km/h in 8 seconds. Now let's say you apply 4167 Newtons of force to stop your vehicle with an extra 100 kg of added weight, a 1300 kg vehicle. What would your deceleration rate be? Rearranging F=m*a to solve for a, you get a=F/m, or force divided by mass: 4167 N / 1300 kg equals approximately 3.21 m/s^2. So, now that we have your deceleration rate, let's figure out how long it takes to stop your vehicle with 4167 N of force. Rearranging the formula for acceleration to find time, you get t=V/a. Still following? Let's plug and chug: 27.78 m/s / 3.21 m/s^2 = 8.67 seconds approximately. It's not a huge difference, but as you can see, it takes longer to stop a heavier mass with the same amount of force. How much farther would you travel while stopping over 8.67 seconds? Again using the average speed, x=(V/2)*t: (27.78 m/s / 2) * 8.67 s = 120.4 meters approximately (about 395 feet). Without the added weight, you stopped in 111.1 meters (approximately 364.5 feet). The difference in distance is about 9.3 meters, or 30.5 feet. The average length of a vehicle (according to a study done in Portland, Oregon measuring 390 vehicles) is roughly 13.52 feet, so 30.5 feet is roughly 2.3 car lengths.
If you used the same amount of force applied to the brakes on your vehicle with the 100 kg added weight as you did without the added weight, you would travel roughly 2.3 extra car lengths to stop, assuming you braked from 100 km/h to 0 km/h at a constant deceleration rate of 3.21 m/s^2 instead of 3.47 m/s^2. I hope this little physics lesson and scenario helps you in your decision on whether or not to add extra weight to your vehicle to help braking time and distance. There are many other factors that would also affect your braking distance and time, including the coefficient of kinetic friction between your tires and the road (the coefficient of friction will vary with the materials of the tires and the road and whether the road is wet, dry, or has sand, gravel, or snow on it), the coefficients of static and kinetic friction between your brake pads and rotors, the force of air resistance, and gravitational resistance due to an incline or decline (slope) of the road you are traveling on. Just remember that this scenario did not factor in friction or external forces, and acceleration (and deceleration) were assumed to be constant. As far as adding a few sandbags or bricks in your trunk: you will add more force against your rear axle (due to the force of gravity) and, indirectly, your rear wheels. Since your car is front wheel drive, the back axle is merely pulled whereas the front axle is pushed by the engine. More force will be required from the engine to pull the extra weight, meaning that your vehicle will accelerate at a lower rate than normal in a given gear at a given RPM. If you want to accelerate at the same rate as you normally do, you could compensate by increasing your RPMs and applying more force (pressing down harder on the gas pedal), but you could potentially lose traction on your front wheels and spin them until you regain traction. Think of this as towing: apply too much gas and you spin out.
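The arithmetic above can be reproduced with a short script. These numbers assume constant acceleration and deceleration, no friction or drag, and 100 km/h = 100/3.6, approximately 27.78 m/s:

```python
# Braking comparison for a light vs. a heavier car, constant deceleration.
v = 100 / 3.6                      # 100 km/h in m/s, ~27.78
t_stop = 8.0                       # seconds to go from v to rest (light car)
m_light, m_heavy = 1200.0, 1300.0  # masses in kg

a = v / t_stop                     # deceleration rate, ~3.47 m/s^2
f_brake = m_light * a              # braking force for the light car, ~4167 N

# Apply the SAME braking force to the heavier car:
a_heavy = f_brake / m_heavy        # ~3.21 m/s^2
t_heavy = v / a_heavy              # ~8.67 s to stop

# Constant deceleration from v to 0 means the average speed is v/2.
d_light = (v / 2) * t_stop         # ~111.1 m
d_heavy = (v / 2) * t_heavy        # ~120.4 m
extra_m = d_heavy - d_light        # ~9.3 m of extra stopping distance
```

Dividing `extra_m` (in feet) by an average car length of 13.52 ft gives a bit over two car lengths of extra stopping distance.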
Next, suppose you take a hard left turn (without braking) at a given velocity without the added weight in the trunk. In this scenario you do not lose traction in your rear wheels and spin out. The inertia of the vehicle wants to move forward, but your front axle is trying to force the vehicle to move left. The vehicle eventually curves to the left safely due to rotational inertia. Now suppose you take the same turn at the same velocity with the added weight. You could potentially sling your rear end around and fishtail. Why? As you try to curve your vehicle to the left, more force (I say force because in this scenario you are fighting wind resistance and friction from the tires and the surface.) and inertia is trying to keep the vehicle moving straight. Again, as you are turning, rotational inertia comes into play. Imagine a tennis ball tied to a string, and you are swinging the ball in circles from the string with your hand. Where the string is being held in your hand is the axis of the rotation. Your hand is a fixed axis. If you release the string, the ball will travel straight instead of curving. Now imagine swinging a bowling ball tied to a string. You swing the ball around and let the string go again. Notice how the bowling ball requires more force to rotate it? Now let's apply rotational inertia to your car with the extra weight added. Suppose your front axle is axis of rotation, the car itself is the string, and the extra weight in the trunk is the ball. More force is going to be required to rotate the vehicle around the left turn with the added weight instead of making the turn without the weight. Why? Inertia wants to keep the car moving forward. A certain amount of frictional force is applied between your tires and the road as you make a turn. Net force is the sum of all forces, and frictional force is considered negative since it is fighting your positive force. In this scenario, F(net) = F(applied) - F(frictional). 
While your vehicle with the added weight is making the turn, if the force needed to keep it turning exceeds the available frictional force (a net force greater than 0 Newtons in the equation above), chances are you will lose traction and slide in the direction your vehicle was traveling when the frictional force was overpowered. This can be related to letting go of the string attached to the ball. Remember that on snowy or rainy roads, less frictional force is available to your vehicle. Therefore, adding extra weight to the rear of your vehicle could well cause you to lose traction during a curve or turn at a given velocity on low-friction roads. Adding snow tires would help increase friction between the tires and the road, though. Anyway, like I said, I hope this helps! You guys chime in if you want!
[Numpy-discussion] What's wrong with matrices?
Ed Schofield schofield at ftw.at
Sun Jul 9 01:35:29 CDT 2006

On 08/07/2006, at 10:22 PM, JJ wrote:
> 3) In some operations, especially range selection operations, a N,1 matrix is
> turned into a 1,N matrix. This is confusing relative to matlab and problematic,
> in my view.

This sounds like a bug. Can you give any examples of this happening
with the latest release or SVN version?

> <snip>
> 5) If X is a 1,n matrix, then X.T should return a n,1 matrix, I think.

This should be the case. Could you post a code snippet that violates this?

Thanks for your feedback, JJ!

-- Ed
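A quick check of the behavior the thread expects, using the old `numpy.matrix` class being discussed (exact behavior may vary across NumPy versions, and `numpy.matrix` is deprecated in modern releases):

```python
import numpy as np

# A 1,3 matrix; per point 5 in the thread, X.T should be 3,1.
X = np.matrix([[1.0, 2.0, 3.0]])
assert X.shape == (1, 3)
assert X.T.shape == (3, 1)

# Per point 3, range selection on an N,1 matrix should stay N',1,
# not flip to 1,N'.
Y = np.matrix([[1.0], [2.0], [3.0]])
assert Y[0:2, :].shape == (2, 1)
```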
degrees / radians
January 10th 2007, 12:43 AM
degrees / radians
Ok, I have some questions; these all use radians.
Question 1
Convert the following angles to radians, giving your answers correct to 3 s.f.
a) 20 degrees
b) -72 degrees
c) 400 degrees
d) -140 degrees
e) 760 degrees
Question 2
An angle "p" subtends an arc of length 25 cm in a circle of radius R cm. The area of the sector POQ is 72 cm².
Formulate two equations in R and "p".
Find the values of R and "p".
Question 3
A cylindrical pipe of diameter 1.5 m contains water to a depth of 0.9 m.
a) Find the cross-sectional area of the water.
b) If the water is flowing at a rate of 60 litres per second, find the speed of the water in m/s.
I would really appreciate any help offered; these are the final pieces of some homework and I really need help. Thank you.
January 10th 2007, 04:41 AM
The relationship between degrees and radians is that $\pi$ rad = 180 degrees.
a) $\frac{20 \, degrees}{1} \cdot \frac{\pi \, rad}{180 \, degrees} \approx 0.349 \, rad$
The others are done the same way.
January 10th 2007, 05:10 AM
Hello, Kim!
You're expected to know these formulas. In a circle of radius $r$ with a central angle of $\theta$ radians:
the length of arc is: $s \:=\:r\theta$
the area of the sector is: $A \:=\:\frac{1}{2}r^2\theta$
2) An angle $\theta$ subtends an arc of length 25 cm in a circle of radius $R$ cm. The area of the sector POQ is 72 cm².
(a) Formulate two equations in $R$ and $\theta.$
(b) Find the values of $R$ and $\theta.$
Part (a)
We know that the arc length is 25 cm. So we have: $R\theta\:=\:25$ [1]
We know that the area of the sector is 72 cm². So we have: $\frac{1}{2}R^2\theta\:=\:72$ [2]
Part (b)
Divide [2] by [1]: $\frac{\frac{1}{2}R^2\theta}{R\theta} \:=\:\frac{72}{25}\quad\Rightarrow\quad\boxed{R \,=\,\frac{144}{25}\text{ cm}}$
Substitute into [1]: $\frac{144}{25}\theta\:=\:25\quad\Rightarrow\quad\boxed{\theta \,=\,\frac{625}{144}\text{ radians}}$
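The conversion in Question 1a and the simultaneous equations of Question 2 can be checked numerically (dividing equation [2] by [1] gives R/2 = 72/25):

```python
import math

# Question 1a: 20 degrees in radians, to 3 significant figures.
assert round(math.radians(20), 3) == 0.349

# Question 2: R*theta = 25 (arc length) and (1/2)*R**2*theta = 72 (area).
R = 2 * 72 / 25        # = 144/25 = 5.76 cm
theta = 25 / R         # = 625/144, about 4.34 radians

# Both original equations should be satisfied.
assert math.isclose(R * theta, 25)
assert math.isclose(0.5 * R**2 * theta, 72)
```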
Short Circuit Capacity: Basic Calculations and Transformer Sizing
Copyright © 2001 Francis J. Martino

Short circuit capacity calculation is used for many applications: sizing of transformers, selecting the interrupting capacity ratings of circuit breakers and fuses, determining whether a line reactor is required for use with a variable frequency drive, etc. The purpose of this presentation is to build a basic understanding of short circuit capacity. The application example deals with transformer sizing for motor loads. Conductor impedances and their associated voltage drop are ignored, not only to present a simplified illustration, but also to provide a method of approximation by which a plant engineer, electrician or production manager will be able to either evaluate a new application or review an existing application problem and resolve the matter quickly. Literature containing a detailed discussion of short circuit capacity calculations is available within the electrical power transmission industry. [1]

The following calculations will determine the extra kVA capacity required for a three phase transformer that is used to feed a single three phase motor that is started with full voltage applied to its terminals, or "across-the-line." Two transformers will be discussed, the first having an unlimited short circuit kVA capacity available at its primary terminals, and the second having a much lower input short circuit capacity available.

kVA of a single phase transformer = V x A
kVA of a three phase transformer = V x A x 1.732, where 1.732 = the square root of 3.

The square root of 3 is introduced because, in a three phase system, the phases are 120 degrees apart and therefore cannot be added arithmetically; they will add algebraically.
Transformer Connected To Utility Power Line

The first transformer is rated 1000 kVA, 480 secondary volts, 5.75% impedance. Rated full load amp output of the transformer is

1000 kVA / (480 x 1.732) = 1203 amps

The 5.75% impedance rating indicates that 1203 amps will flow in the secondary if the secondary is short circuited line to line and the primary voltage is raised from zero volts to the point at which 5.75% of 480 volts, or 27.6 volts, appears at the secondary terminals. Therefore, the impedance (Z) of the transformer secondary may now be calculated:

Z = V / I = 27.6 volts / 1203 amps = .02294 ohms

The transformer is connected directly to the utility power lines, which we will assume are capable of supplying the transformer with an unlimited short circuit kVA capacity. The utility company will always determine and advise of the short circuit capacity available at any facility upon request. With unlimited short circuit kVA available from the utility, the short circuit amperage capacity which the transformer can deliver from its secondary is

480 volts / .02294 ohms = 20,924 amps

An alternative method of calculating short circuit capacity for the above transformer is:

1203 amps x 100 / 5.75 = 1203 / .0575 = 20,922 amps

Another alternative is to consult a reference manual. The Cutler-Hammer Consulting Application Catalog, 12th Edition, gives the specifications for the above mentioned transformer and the value of its short circuit capacity in Table A25 on page A-59. The short circuit capacity is given as 20,900 amps.

Now we are ready to apply a motor to the terminals of the transformer secondary. We must determine the voltage drop which will be caused by the motor inrush on start. If the voltage remains within the rated voltage of the motor, then no oversizing of the transformer is required. Motors rated for 460 volts are for use with distribution systems that are rated at 480 volts.
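As a quick sketch of the arithmetic so far (the helper names here are mine, not the article's):

```python
import math

def full_load_amps(kva, volts):
    """Three phase full load current: kVA * 1000 / (V * sqrt(3))."""
    return kva * 1000 / (volts * math.sqrt(3))

def short_circuit_amps(fla, z_percent):
    """Secondary fault current with unlimited primary capacity: FLA / Z."""
    return fla * 100 / z_percent

fla = full_load_amps(1000, 480)        # ~1203 A
isc = short_circuit_amps(fla, 5.75)    # ~20,900 A
```

The small spread among the 20,900 / 20,922 / 20,924 amp figures in the text comes only from rounding intermediate values.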
The rating system allows a twenty volt drop in the distribution system, which may occur along the feeder cables that connect the power transformer to the load. The NEMA specification for a standard motor requires the motor to be capable of operating at plus or minus 10% of nameplate voltage. Therefore, the voltage on inrush should not be allowed to drop below 460 volts less 10%, or 414 volts.

The transformer will be asked to supply power to a motor which has a full load amp rating of 1203 amps, which will fully load the transformer. Therefore, we will rate the motor at 460 V x 1203 A x 1.732, or 958.5 kVA. We will assume that our motor will have an inrush of 600% of its full load rating, which will cause an inrush of

460 V x 1203 A x 600% x 1.732 = 5751 kVA

The voltage drop at the transformer terminals will be proportional to the motor load. The voltage drop will be expressed as a percentage of the inrush motor load compared to the maximum capability of the transformer. [2] The transformer has a maximum kVA capacity at its short circuit capability, which is

480 V x 20,924 A x 1.732 = 17,395 kVA

The voltage drop on motor inrush will be

5751 kVA / 17,395 kVA = .331, or 33.1%

The transformer output voltage will drop to 480 x .669, or 321 volts. Thus, we can see that the transformer is much too small to use with a motor that has a full load rating equal to the full load capacity of the transformer. The transformer must be sized so that its short circuit capability is equal to or greater than 5751 kVA times 10, or 57,510 kVA, in order to have a voltage drop of 10% or less. Therefore, the short circuit amperage capacity of the transformer to be used must be a minimum of

57,510 kVA / (480 V x 1.732) = 69,176 amps

A typical 2500 kVA, 5.75% impedance transformer will have a short circuit capacity of 52,300 amps.
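The voltage-sag figures above can be reproduced directly (a minimal sketch using the article's numbers):

```python
import math

SQRT3 = math.sqrt(3)
inrush_kva = 460 * 1203 * 6.0 * SQRT3 / 1000   # motor start at 600% FLA, ~5751 kVA
sc_kva = 480 * 20924 * SQRT3 / 1000            # transformer short circuit limit, ~17,395 kVA

drop = inrush_kva / sc_kva                     # ~0.331, i.e. a 33% sag
terminal_volts = 480 * (1 - drop)              # ~321 V, far below the 414 V floor
```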
The next highest standard size transformer, at 3750 kVA, will have a 6.5% impedance and a short circuit output capability of 69,395 amps, which will be adequate.

In the particular application discussed, the ratio of the selected standard size transformer kVA to motor kVA is 3750 kVA / 958.5 kVA = 3.91. Thus the transformer rating is 391% of, or nearly four times, the rating of the motor. Note the non-linear effect of the impedance rating of the transformers on their short circuit capacities.

Transformer Connected To An Upstream Transformer

The second transformer we will examine will have a finite short circuit capacity available at its primary rather than an unlimited capacity. We will assume that a facility derives its power from the same 1000 kVA transformer mentioned above and that the second transformer is connected directly to the terminals of the 1000 kVA transformer. Thus, feeder cables between the two transformers are eliminated and the impedance of cables is not taken into account. However, the smaller the motor leads, the less will be both the short circuit capacity and the voltage delivered to the motor terminals.

The second transformer, which will have a 480 volt primary and a 480 volt secondary, will be used to power a 20 HP, 3 phase, 460 volt motor which will be started at full voltage. The motor will be the only load on the transformer. Sales catalogs by various manufacturers will invariably recommend a "minimum transformer kVA" of 21.6 for use with a 20 HP motor. The minimum transformer kVA ratings are for use with multiple motors on a single transformer. A multiple motor configuration will be discussed in the next section of this article.
The 21.6 kVA is calculated as follows:

480 volts x 26 nominal amps x 1.732 = 21.6 kVA

The transformer manufacturers will give a 20 HP motor a nominal full load amp rating of 27 amps, thus allowing no extra capacity:

460 volts x 27 nominal amps x 1.732 = 21.5 kVA

One motor manufacturer has rated a 20 HP motor at 26 Full Load Amps, 460 VAC, 205 Locked Rotor Amps, 81% Power Factor. The motor will present a load of

460 volts x 26 amps x 1.732 = 20.7 kVA

The starting motor kVA load with inrush current will be

460 V x 205 A x 1.732 = 163.3 kVA

We will consider using a 30 kVA general purpose transformer to supply the 20 HP motor. The transformer will have a nominal impedance of 2.7% and an output of 36.1 amps at 480 volts. The short circuit current capacity that can be delivered to this transformer by the upstream 1000 kVA transformer is 20,924 amps, or 17,395 kVA.

The short circuit amperage capacity of a transformer with a limited system short circuit capacity available at its primary is:

transformer full load amps / (transformer impedance + upstream system impedance as seen by the transformer)

upstream system impedance as seen by the transformer = transformer kVA / available primary short circuit capacity kVA

36.1 amps / [2.7% + (30 kVA / 17,395 kVA)] = 36.1 / (2.7% + 0.17%) = 36.1 / .0287 = 1258 short circuit amps

The transformer output voltage drop upon motor inrush will be:

motor inrush kVA / short circuit kVA = 163.3 kVA / (480 V x 1258 A x 1.732) = 163.3 kVA / 1046 kVA = .156 = 15.6%

A 30 kVA transformer rating is therefore too small, as the motor voltage drop will exceed 10%. A 45 kVA transformer with a 2.4% impedance and an output of 54.1 amps at 480 volts would have a short circuit capacity of 2034 amps. The voltage drop upon motor inrush would be 9.66%. For a single motor and transformer combination, one transformer manufacturer recommends that the motor full load running current not exceed 65% of the transformer full load amp rating.
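The limited-source calculation generalizes as follows (a sketch; the function name is hypothetical):

```python
import math

def sc_amps_limited_source(fla, z_per_unit, kva, source_sc_kva):
    """Fault current when the primary system can deliver only source_sc_kva.
    The upstream impedance seen by the transformer is kva / source_sc_kva
    (per unit), in series with the transformer's own impedance."""
    return fla / (z_per_unit + kva / source_sc_kva)

# The 30 kVA, 2.7% Z candidate fed from the 17,395 kVA system above:
isc = sc_amps_limited_source(36.1, 0.027, 30, 17395)   # ~1257 A
sc_kva = 480 * isc * math.sqrt(3) / 1000               # ~1045 kVA
drop = 163.3 / sc_kva                                  # ~0.156 -- too much sag
```

Repeating the call with the 45 kVA unit's figures (54.1 A, 2.4% Z) reproduces the ~2034 A figure quoted above.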
[3] Thus, for our 26 amp motor the transformer rating should be a minimum of 40 amps, or 33.3 kVA.

Multiple Motors On A Single Transformer

The minimum transformer kVA is given by transformer manufacturers so that a transformer may be sized properly for multiple motors. If there are five motors on one transformer, add the minimum kVA ratings and then add transformer capacity as necessary to accommodate the inrush current of the largest motor. The transformer thus selected will be capable of running and starting all five motors, provided that only one motor is started at any one time. Additional capacity will be required for motors starting simultaneously. Also, if any motor is started more than once per hour, add 20% to that motor's minimum kVA rating to compensate for heat losses within the transformer.

Motor Contribution to Short Circuit Capacity

When a fault condition occurs, power system voltage will drop dramatically. All motors that are running at that time will not be able to sustain their running speed. As those motors slow in speed, the stored energy within their fields will be discharged into the power line. The nominal discharge of a motor will contribute to the fault a current equal to up to four times its full load current. With our 1000 kVA, 1203 amp transformer example given above, we will assume that all 1203 amps of load are from motors. The actual short circuit current will equal 20,924 amps plus 400% of 1203 amps, for a total of 25,736 short circuit amps.

When sizing the transformer for motor loads, the fault current contribution from the motors will not be a consideration for sizing. However, the motor contribution must be considered when sizing all branch circuit fuses and circuit breakers. The interrupting capacity ratings of those devices must equal or exceed the total short circuit capacity available at the point of application. Motor contribution to short circuit capacity must also be included when adding a variable frequency drive to the system.
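The fault-current total for the all-motor example works out as follows (a minimal sketch):

```python
# Running motors discharge into a fault at up to 4x their full load current.
utility_sc_amps = 20924        # transformer contribution from the example above
motor_contribution = 4 * 1203  # 4812 A from 1203 A of running motor load
total_fault_amps = utility_sc_amps + motor_contribution
print(total_fault_amps)        # 25736
```

It is this larger total, not the transformer figure alone, that breaker and fuse interrupting ratings must meet or exceed.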
See Variable Frequency Drives: Source Impedance and Line Reactors

[1] "Cutler-Hammer Consulting Application Catalog," published by Cutler-Hammer, Division of Eaton Corporation, Pittsburgh, PA.
[2] "Short Circuit Capacity and Voltage Sag," IEEE (Institute of Electrical and Electronic Engineers) Industry Applications Society (IAS) Magazine, July/August 2000, page 38. You might find this magazine in universities which have Electrical Engineering programs. Do not confuse it with IEEE Transactions on Industry Applications, which is a different publication.
[3] "Power Distribution Products" Catalog ATD-01, Acme Electric Corporation, 1995, page 125.

Power Quality and Drives LLC
Maintainer: diagrams-discuss@googlegroups.com
Safe Haskell: None

"Traces", aka embedded raytracers, for finding points on the edge of a diagram. See Diagrams.Core.Trace for internal implementation details.

data Trace v

Every diagram comes equipped with a trace. Intuitively, the trace for a diagram is like a raytracer: given a line (represented as a base point and a direction), the trace computes the distance from the base point along the line to the first intersection with the diagram. The distance can be negative if the intersection is in the opposite direction from the base point, or infinite if the ray never intersects the diagram. Note: to obtain the distance to the furthest intersection instead of the closest, just negate the direction vector and then negate the result. Note that the output should actually be interpreted not as an absolute distance, but as a multiplier relative to the input vector. That is, if the input vector is v and the returned scalar is s, the distance from the base point to the intersection is given by s * magnitude v.

Instances:
  Action Name (Trace v)
  Show (Trace v)
  Ord (Scalar v) => Semigroup (Trace v)
  Ord (Scalar v) => Monoid (Trace v)
  (Ord (Scalar v), VectorSpace v) => Traced (Trace v)
  HasLinearMap v => Transformable (Trace v)
  VectorSpace v => HasOrigin (Trace v)
  Newtype (QDiagram b v m) (DUALTree (DownAnnots v) (UpAnnots b v m) () (Prim b v))

class (Ord (Scalar (V a)), VectorSpace (V a)) => Traced a

Traced abstracts over things which have a trace.

Instances:
  Traced b => Traced [b]
  Traced b => Traced (Set b)
  (Ord (Scalar v), VectorSpace v) => Traced (Trace v)
  Traced t => Traced (TransInv t)
  (Ord (Scalar v), VectorSpace v) => Traced (Point v)
    The trace of a single point is the empty trace, i.e. the one which returns positive infinity for every query. Arguably it should return a finite distance for vectors aimed directly at the given point and infinity for everything else, but due to floating-point inaccuracy this is problematic. Note that the envelope for a single point is not the empty envelope (see Diagrams.Core.Envelope).
  Traced a => Traced (Located a)
    The trace of a Located a is the trace of the a, translated to the location.
  Traced (FixedSegment R2)
  Traced (Trail R2)
  Traced (Path R2)
  (Traced a, Traced b, V a ~ V b) => Traced (a, b)
  Traced b => Traced (Map k b)
  Traced (Segment Closed R2)
  (HasLinearMap v, VectorSpace v, Ord (Scalar v)) => Traced (QDiagram b v m)
  (Ord (Scalar v), VectorSpace v, HasLinearMap v) => Traced (Subdiagram b v m)

Diagram traces

Querying traces

traceV :: Traced a => Point (V a) -> V a -> a -> Maybe (V a)
  Compute the vector from the given point to the boundary of the given object in the given direction, or Nothing if there is no intersection.

traceP :: Traced a => Point (V a) -> V a -> a -> Maybe (Point (V a))
  Given a base point and direction, compute the closest point on the boundary of the given object, or Nothing if there is no intersection in the given direction.

maxTraceV :: Traced a => Point (V a) -> V a -> a -> Maybe (V a)
  Like traceV, but computes a vector to the *furthest* point on the boundary instead of the closest.

maxTraceP :: Traced a => Point (V a) -> V a -> a -> Maybe (Point (V a))
  Like traceP, but computes the *furthest* point on the boundary instead of the closest.

Subdiagram traces

boundaryFrom :: (HasLinearMap v, Ord (Scalar v)) => Subdiagram b v m -> v -> Point v
  Compute the furthest point on the boundary of a subdiagram, beginning from the location (local origin) of the subdiagram and moving in the direction of the given vector. If there is no such point, the origin is returned; see also boundaryFromMay.

boundaryFromMay :: (HasLinearMap v, Ord (Scalar v)) => Subdiagram b v m -> v -> Maybe (Point v)
  Compute the furthest point on the boundary of a subdiagram, beginning from the location (local origin) of the subdiagram and moving in the direction of the given vector, or Nothing if there is no such point.
find power series

Find the power series representation of the function. Here's the equation: http://img260.imageshack.us/img260/9021/untitledph1.jpg

Need help. Thanks.

We know that, for $-1<x\leq 1$,

$\tan^{-1}x=x-\frac{x^3}{3}+\frac{x^5}{5}-\dots$

Now if you multiply this term by term by $(1+x^2)$ we have,

$x-\frac{x^3}{3}+\frac{x^5}{5}-\dots+x^3-\frac{x^5}{3}+\frac{x^7}{5}-\dots$

$x+\frac{2x^3}{3\cdot 1}-\frac{2x^5}{5\cdot 3}+\frac{2x^7}{7\cdot 5}-\frac{2x^9}{9\cdot 7}+\dots$

I just need to make the restriction $-1<x<1$ rather than including the endpoint, because the Cauchy product applies only on the interval of absolute convergence.
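The term-by-term multiplication can be checked exactly with rational arithmetic (a quick sketch of my own; it multiplies the arctangent series by $1+x^2$, which is what the reply above does):

```python
from fractions import Fraction

# Coefficients of arctan x = x - x^3/3 + x^5/5 - ... (odd powers only)
N = 10
arctan = {2*k + 1: Fraction((-1)**k, 2*k + 1) for k in range(N)}

# Multiplying by (1 + x^2): the x^n coefficient of the product is
# arctan's x^n coefficient plus its x^(n-2) coefficient.
product = {n: arctan.get(n, Fraction(0)) + arctan.get(n - 2, Fraction(0))
           for n in range(1, 2 * N, 2)}

print(product[3], product[5], product[7])   # 2/3 -2/15 2/35
```

These coefficients match the pattern $\frac{2}{(2k+1)(2k-1)}$ with alternating signs, as in the expansion above.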
Topic: Entropy of linear function?
Posted by weschrist, Dec 22, 2012, 4:35 PM

Hi all,

Full disclosure, I'm a hydrogeologist... I play in the dirt and water. I love math and use it often, but I'm by no means an expert. My question is in regards to Shannon entropy, H(x). As I understand it, it is basically a measure of the information contained in a signal:

H(x) = - sum(pi * log(pi)), where pi is the probability of getting value i.

A continuous series of the same value (i.e. all 1's) would have pi = 1, which gives H(x) = 0. A series of N values would have pi = ni/N, where ni is the number of times value i occurs. I hope that's right, otherwise I'm in worse shape than I thought. A random set of values where no two values are the same would have pi = 1/N. As N increases, the entropy increases.

So my question is: if you have a linearly increasing set of values where every value is greater than the previous, do you really have high entropy (high information content)? Or is there some assumption of stationarity, or a transformation to ensure stationarity (i.e. take out the linear trend and drop the entropy to zero)?

Any insight would be greatly appreciated.
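A quick numerical sketch (my own, not the poster's) illustrates the question: the empirical entropy only sees value frequencies, not their order, so a strictly increasing series of N distinct values maximizes H even though it is perfectly predictable — which is exactly why detrending (enforcing stationarity) changes the answer:

```python
import math
from collections import Counter

def shannon_entropy(values):
    """H = -sum(p_i * log2(p_i)) over the empirical value frequencies."""
    n = len(values)
    return -sum((c / n) * math.log2(c / n) for c in Counter(values).values())

constant = [1] * 100          # one value with p = 1 -> H = 0
linear = list(range(100))     # every p_i = 1/100    -> H = log2(100)

print(abs(shannon_entropy(constant)))       # 0.0
print(round(shannon_entropy(linear), 3))    # 6.644

# Remove the linear trend and the residuals are all zero: H drops back to 0.
residuals = [y - x for x, y in enumerate(linear)]
print(abs(shannon_entropy(residuals)))      # 0.0
```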
Theoretical Mechanics big ideas

I've taught Theoretical Mechanics before using Standards-Based Grading, but I think I want to make some tweaks to the focus of the standards. Here's the old list. In this post I want to get my current thoughts down about where the big areas should be. Note that the work Danny Caballero has done at the University of Colorado has helped me with these thoughts a lot (though there the course is joined with math methods and spread over two semesters).

Ok, here's my list of big ideas, that I hope, once refined, will be the main categories of standards for the course (they're numbered for ease in referral in the comments).

1. Differential equations
   1. Equations of motion are very compact ways of describing physical laws
   2. I think students should understand the form, solutions, approaches, visual representations, and numeric work surrounding these
   3. Typical introductions to this include air resistance trajectories and oscillators
2. Calculus of variations
   1. I did this very early last time, and I think it worked out. Having the Lagrangian approach as an early tool comes in handy for oscillations, gravitation, etc.
   2. I usually only do Lagrangian, not Hamiltonian, as I'm still unconvinced that it provides a tool that can do a lot more than the Lagrangian approach, given the level of abstraction needed to learn it.
3. Momentum is king
   1. Last time I enjoyed the class-wide conversation we had about whether momentum and kinetic energy are both needed to describe motion. This is really a focus on Newton's laws.
4. Oscillations
   1. damped, driven, etc.
5. Central force potentials
   1. Kepler's laws
   2. I don't like that students don't know an ellipse when they see the mathematical evidence for it, but it's a tour-de-force of physics.
   3. I love Hohmann transfer problems.
6. Systems of particles
   1. usefulness of center of mass
   2. conservation issues
7. Noninertial frames
8. Rigid bodies
   1. Linear algebra is not a pre-requisite for this class, so I've got some awkward work arounds for this.
9. Normal modes
   1. Linear algebra is not a pre-requisite for this class, so I've got some awkward work arounds for this.

The linear algebra thing is really a problem for some of the cool stuff that happens with rigid bodies and normal modes, but I'm pretty happy with my normal modes solution, and slightly happy with my rigid body approach. One notable absence is gravitation as a main topic. It's a chapter in the main books I've looked at, but I like putting the focus on general central force stuff. Any thoughts would be greatly appreciated.

16 Responses to Theoretical Mechanics big ideas

1. Here's one big idea that I think encompasses some of these: advanced mechanics is the story of r(t), v(t) and a(t) (or you could replace v(t) and a(t) with p(t)). What I found last quarter is that students were familiar with r, v, and a, but not as functions of time that could be manipulated or that formed the basis of other functions like the Lagrangian and Hamiltonian. I disagree with your thoughts on H; I would never stop at L. In my experience, the Hamiltonian is much, much more useful than L for working with anything but the simplest problems. Moving away from second order equations to sets of first order equations is a big win both conceptually (e.g. phase space) and practically (numerical solutions). It also ties in nicely with QM, of course.

□ I really like the r, v, and a approach you're suggesting here. The idea that they now need to be considered functions is very important, and perhaps can be considered part of my "differential equations" section. As for L vs H, here are 2 issues: 1) 2nd vs 1st order doesn't matter if most of what you're dealing with needs computational approaches, and 2) the connection to QM is not something I have time for in this course.
Do you think that connection should be done in a single course, or have the students see the two ends of the connection in two different courses? ☆ For computational purpose, maybe Mma doesn’t care, but most methods I’m familiar with are for sets of first order equations. Is that not true? Maybe it’s just my bias because of my background in dynamical systems, but I really think that getting to H buys you a lot. I don’t spend much time on the connection to QM, but most of are students take QM and CM together (or in close succession), so I think seeing H in a second context (and seeing q and p on equal footing) is very valuable. ☆ Yeah, I should have been more clear: Mathematica doesn’t care what the order is when numerically integrating differential equations. I’m still not convinced that the H connection between classical and quantum mechanics is done well in either course. What book do you think takes a stab? ☆ For example, here’s how Griffith’s QM book introduces the Hamiltonian: “They (solutions to the TI-SWE) are states of difinite total energy. In classical mechanics, the total energy (kinetic plus potential) is call the Hamiltonian.” Eventually things like commutators are developed, but I don’t think there’s really a strong connection with the Hamiltonian approach in classical mechanics. ☆ “Chaos in Classical and Quantum Mechanics” by Martin Gutzwiller? :) (http://www.amazon.com/Chaos-Classical-Quantum-Mechanics-Gutzwiller/dp/3540971734) You mean of undergraduate texts? None. But that’s not what I’m trying to say. Seeing H and q and p in two different contexts lends something of a spiral approach that would otherwise be I would say something similar about finding principal axes or normal modes and the connection to eigenvalues and wavefunctions in QM. I don’t spend a lot of time on it, but I do point out that the same sort of stuff goes on in QM. 2. I’m also interested that you don’t do the Hamiltonian. 
While it may not be crucial for the classical mechanics course as you teach it, I think tend to think it is valuable to expose students to the idea of the Hamiltonian before they hit quantum mechanics. I like students to see different pieces of a concept in different courses as it provides an opportunity to make connections, and revisit what they have learned before. □ I really don’t like the way that Thornton, Marion deal with the Hamiltonian. It doesn’t really motivate it other than to point out when things are conserved, and it definitely doesn’t give you much purchase on the slope to understanding the quantum application. ☆ Have you looked at how Taylor deals with the Hamiltonian? It’s much better IMO. ☆ I’m embarrassed to admit I haven’t because I don’t have that book. How far down the path to QM does it go? 3. One point in favor of the Hamiltonian formalism is that it takes you to conjugate variables, and those lead to the link between physical symmetries and conservation laws — Noether’s theorem. And, philosophically, that’s pretty heady stuff. Even if you don’t “do that” in “detail”, throwing it in as a forward-reference lecture bit can help students appreciate that the physics they’re learning is connected to bigger, deeper things that they (hopefully) look forward to learning some day. My very wise mentor and former teacher, Bill Gerace, likes to say that every course he teaches is 20% review (of stuff they should have learned before but probably don’t really get as well as they should), 60% new material they should get, and 20% forward-reference material that they won’t really get this time around but will get much better in a future course for having seen it now. Honor, don’t hide, the dangling ends at the edges of your course! □ I do like the connection with conjugate variables, especially when connecting to the Heisenberg Uncertainty Principle. I also like the 20-60-20 idea. As usual, I’m trying to bring focus and efficiency to my class. 
I’m so happy I use my blog now instead of blank sheets of paper for these brainstorming sessions. □ I also like the idea of introducing the Hamiltonian in mechanics, even if it’s not fully grasped. You can use Poisson brackets to write out the algebra of classical observables and to make the connection to quantum mechanics. The Poisson bracket makes it easy to find how observables change in time: $\frac{\mathrm{d}\mathcal{O}}{\mathrm{d}t} = \{f, \mathcal{H}\} + \frac{\partial \mathcal{O}}{\partial t}$ It’s interesting that quantization is like choosing a the “quadratic” part of the algebra. Hamiltonian mechanics also has so many beautiful connections to the symplectic geometry of phase space. There’s the Hamilton-Jacobi equations. Utility? Maybe not so much. ☆ Yeah, this is deeper than I would usually choose to go in this course, Brian. I’m really starting to wonder if I should do more of this in quantum, though I haven’t taught that in a This entry was posted in physics, syllabus creation, teaching. Bookmark the permalink.
6.2: Circles and Ellipses
Created by: CK-12

Learning Objectives
• Understand the difference between an "oval" and an ellipse.
• Recognize and work with equations for ellipses.
• Derive the focal property of ellipses.
• Understand the equivalence of different definitions of ellipses.
• Reconstruct Dandelin's sphere construction.
• Know some of the different ways people have approached ellipses throughout history.
• Understand some of the important applications of ellipses.

Let's begin with the first class of shapes discussed in the last section. When the plane makes a finite intersection with one side of the cone, we get either a circle or the "oval-shaped" object illustrated in the previous section. It turns out that this is no ordinary oval, but something called an ellipse, a shape with special properties.

Like parallelograms, or any other shape with lots of interesting properties, ellipses can be defined by some of these properties, and then the other properties necessarily follow from the definition. For example, a parallelogram is typically defined as a quadrilateral with each pair of opposite sides parallel. Once you define it this way, it follows that the opposite sides must also be equal in length, and that the diagonals must bisect each other. Well, if you instead started by defining a parallelogram by one of these other properties, for instance opposite sides having equal lengths, then you would end up with the same class of shapes. The same thing happens for ellipses.

One way to define an ellipse is as a "stretched out circle". It's the shape you would get if you sketched a circle on a deflated balloon and then stretched out the balloon evenly in two opposite directions:

It's also the shape of the surface of water that results when you tilt a round glass:

Or an ellipse could be thought of as the shape of a circle drawn on a piece of paper when it is viewed at an angle.
Equations of Ellipses

This "stretching" can be represented algebraically. For simplicity, take the circle of radius 1 centered at the origin (0,0). The distance formula tells us that this is the set of points $(x, y)$ at distance 1 from the origin:

$D = \sqrt{(x_1 - x_2)^2 + (y_1 - y_2)^2}$
$1 = \sqrt{(x - 0)^2 + (y - 0)^2}$
$1 = x^2 + y^2$

So the unit circle has the equation $x^2 + y^2 = 1$. To stretch the circle in the $x-$direction, replace $x$ with $\frac{x}{a}$ for some $a>1$:

$\left (\frac{x}{a} \right )^2 + y^2 = 1$

Why does this stretch the circle horizontally? Well, the effect of dividing $x$ by $a$ is that larger $x-$values are needed to satisfy the equation, while the $y-$values are unaffected: if a point $(x, y)$ lies on the unit circle, then $x$ and $y$ make the original equation true, so the point $(ax, y)$ makes the new equation true. The new curve is therefore the unit circle stretched horizontally by a factor of $a$. For example, taking $a = 2$ gives $\left (\frac{x}{2} \right )^2 + y^2 = 1$.

Generalizing the equation by allowing a stretch in the vertical direction as well, we get the following:

$\left (\frac{x}{a} \right )^2 + \left (\frac{y}{b} \right )^2 =1$

The factor $a$ stretches the circle horizontally and the factor $b$ stretches it vertically. If $a=b$ the curve is a circle; if $a \neq b$ it is an ellipse, wider than it is tall when $b < a$ and taller than it is wide when $a < b$. This equation is usually written

$\frac{x^2}{a^2} + \frac{y^2}{b^2} = 1$

This is called the standard form of the equation of an ellipse, assuming that the ellipse is centered at (0,0).

To sketch a graph of an ellipse with the equation $\frac{x^2}{a^2} + \frac{y^2}{b^2} = 1$, find the intercepts by setting $x = 0$ and $y = 0$ in turn, and plot the resulting points on the $y-$ and $x-$axes.

Example 1
Sketch the graph of $\frac{x^2}{4} + \frac{y^2}{9} = 1$.

Solution: This equation can be rewritten as $\frac{x^2}{2^2} + \frac{y^2}{3^2} = 1$. Setting $x = 0$ gives the $y-$intercepts $(0, \pm 3)$, and setting $y = 0$ gives the $x-$intercepts $(\pm 2, 0)$.

Example 2
Sketch the graph of $\frac{x^2}{16} + y^2 = 1$.

Solution: This can be rewritten as $\frac{x^2}{4^2} + \frac{y^2}{1^2} = 1$.

The segment spanning the long direction of the ellipse is called the major axis, and the segment spanning the short direction of the ellipse is called the minor axis. So in the last example the major axis is the segment from (-4,0) to (4,0) and the minor axis is the segment from (0,-1) to (0,1).

The major and minor axes are examples of what are sometimes called reference lines. Apollonius, the Ancient Greek mathematician who wrote an early treatise on conics, used these and other reference lines to orient conic sections.
Though the Greeks did not use a coordinate plane to discuss geometry, these reference lines offer a framing perspective that is similar to the Cartesian plane we use today. Apollonius' way of framing conics with reference lines was the closest mathematics came to the system of coordinate geometry that you know so well until Descartes' and Fermat's systematic work in the seventeenth century.

Example 3

Not all equations for ellipses start off in the standard form above. For example, $25x^2 + 9y^2 = 225$ is an ellipse. Put it in the proper form and graph it.

Solution: First, divide both sides by 225 to get $\frac{x^2}{9} + \frac{y^2}{25} = 1$, or equivalently $\frac{x^2}{3^2} + \frac{y^2}{5^2} = 1$. So the $x$-intercepts are (3,0) and (-3,0), the $y$-intercepts are (0,5) and (0,-5), and the ellipse is taller than it is wide.

Review Questions

1. It was mentioned above that when a round glass of water is tilted, the surface of the water is an ellipse. Using our working definition of an ellipse as a "stretched out circle", explain why you think the water takes this shape.

2. Sketch the following ellipse: $36x^2 + 25y^2 = 900$

3. Now try sketching this ellipse, where the numbers don't turn out to be so neat: $3x^2 + 4y^2 = 12$

Review Answers

1. Answers may vary, but should recognize that the resulting shape is a circle stretched in one direction: the width of the glass stays constant, while tilting elongates the surface in the other direction.

The Focal Property

In every ellipse there are two special points called the foci (foci is plural, focus is singular), which lie inside the ellipse and which can be used to define the shape. For an ellipse centered at (0,0) that is wider than it is tall, its major axis is horizontal and its foci are at $\left(\sqrt{a^2 - b^2}, 0\right)$ and $\left(-\sqrt{a^2 - b^2}, 0\right)$.

What is the significance of these points? The ellipse has a geometric property relating to these points that is similar to a circle's relationship with its center. Remember, a circle can be thought of as the set of points in a plane that are a certain distance from the center point. In fact, that is typically the definition of a circle.
Well, the foci act like the center, except that there are two of them. An ellipse is the set of points for which the sum of the distances from each point to the two foci is a constant number. In the diagram below, for any point $P$ on the ellipse, $F_1 P + F_2 P = d$, where $F_1$ and $F_2$ are the foci and $d$ is a constant.

This definition using the sum of the focal distances gives us a great way to draw ellipses. Sure, you can just graph them on your calculator. But why not utilize a simpler technology that does the job just as well? As you know, a circle can be drawn by fixing a string to a piece of paper, tying the other end to a pencil, and then drawing the curve that keeps the string taut. Similarly, an ellipse can be drawn by taking a string that is longer than the distance between two fixed points, attaching one end of the string to each point, and then drawing all the points the pencil can reach while keeping the string taut. In the diagram above, the dotted line represents the string of fixed length, which is attached at the foci $F_1$ and $F_2$; the pencil at $P$ traces out the ellipse.

Review Questions

4. Use string and tacks to draw ellipses that
  1. are nearly circles
  2. are very different than circles
For each of these, what can you say about how the distance between the foci compares to the length of the string?

Review Answers

4. Drawings may vary. For ellipses that are nearly circles, the distance between the foci is small compared to the length of the string; for very elongated ellipses, the distance between the foci is nearly as long as the string itself.

The foci can also be used to measure how far an ellipse is "stretched" from a circle. The symbol $\varepsilon$ stands for the eccentricity of an ellipse, and it is defined as the distance between the foci divided by the length of the major axis. For a horizontally oriented ellipse ($b < a$) this is $\frac{\sqrt{a^2 - b^2}}{a}$, and for a vertically oriented ellipse ($a < b$) it is $\frac{\sqrt{b^2 - a^2}}{b}$. When $a = b$, the foci coincide at the center, and the eccentricity is 0: the ellipse is a circle.

Review Questions

5. What is the full range of the eccentricity of an ellipse? What does the ellipse look like near the extremes of this range?

Review Answers

5.
The interval of possible values is $\varepsilon \in [0,1)$. When $\varepsilon = 0$, the ellipse is a circle; as $\varepsilon$ gets close to 1, the ellipse becomes extremely long and thin.

Turning the Definition of Ellipses on its Head

Often, this focal property is not thought of as a property of ellipses, but rather as a defining feature. To see that the two definitions are equivalent, we have some work to do. Let's start by proving that stretched-out circles actually have this focal property.

We want to prove that for a "stretched out circle" defined by the equation $\frac{x^2}{a^2} + \frac{y^2}{b^2} = 1$, with $a > b$, the sum of the distances from each point on the curve to the two points $\left(\sqrt{a^2 - b^2}, 0\right)$ and $\left(-\sqrt{a^2 - b^2}, 0\right)$ is a constant. (These two points will turn out to be the foci.)

Proof: Suppose a point $(x, y)$ satisfies $\frac{x^2}{a^2} + \frac{y^2}{b^2} = 1$. Solving for $y$ in terms of $x$:

$$\frac{y^2}{b^2} = 1 - \frac{x^2}{a^2}$$
$$y^2 = \frac{a^2 b^2 - b^2 x^2}{a^2}$$
$$y = \sqrt{\frac{a^2 b^2 - b^2 x^2}{a^2}}$$

(For points below the $x$-axis, $y$ is the negative of this expression; since $y$ enters the distance formula only through $y^2$, the computation below is unchanged.)

Now, use the distance formula to compute the sum of the distances from the point $\left(x, \sqrt{\frac{a^2 b^2 - b^2 x^2}{a^2}}\right)$ to $\left(\sqrt{a^2 - b^2}, 0\right)$ and to $\left(-\sqrt{a^2 - b^2}, 0\right)$:

$$\sqrt{\left(x - \sqrt{a^2 - b^2}\right)^2 + \frac{a^2 b^2 - b^2 x^2}{a^2}} + \sqrt{\left(x + \sqrt{a^2 - b^2}\right)^2 + \frac{a^2 b^2 - b^2 x^2}{a^2}}$$

This algebraic expression looks daunting, but simplifying the expressions inside the square roots leads to a surprising result.
This becomes

$$\sqrt{\frac{a^2\left(x - \sqrt{a^2-b^2}\right)^2 + a^2 b^2 - b^2 x^2}{a^2}} + \sqrt{\frac{a^2\left(x + \sqrt{a^2-b^2}\right)^2 + a^2 b^2 - b^2 x^2}{a^2}}$$

$$= \frac{1}{a}\left(\sqrt{a^2\left(x - \sqrt{a^2-b^2}\right)^2 + a^2 b^2 - b^2 x^2} + \sqrt{a^2\left(x + \sqrt{a^2-b^2}\right)^2 + a^2 b^2 - b^2 x^2}\right)$$

$$= \frac{1}{a}\left(\sqrt{a^2 x^2 - 2a^2 x\sqrt{a^2-b^2} + a^4 - a^2 b^2 + a^2 b^2 - b^2 x^2} + \sqrt{a^2 x^2 + 2a^2 x\sqrt{a^2-b^2} + a^4 - a^2 b^2 + a^2 b^2 - b^2 x^2}\right)$$

$$= \frac{1}{a}\left(\sqrt{a^2 x^2 - 2a^2 x\sqrt{a^2-b^2} + a^4 - b^2 x^2} + \sqrt{a^2 x^2 + 2a^2 x\sqrt{a^2-b^2} + a^4 - b^2 x^2}\right)$$

$$= \frac{1}{a}\left(\sqrt{\left(a^2 - x\sqrt{a^2-b^2}\right)^2} + \sqrt{\left(a^2 + x\sqrt{a^2-b^2}\right)^2}\right)$$

$$= \frac{1}{a}\left(a^2 - x\sqrt{a^2-b^2} + a^2 + x\sqrt{a^2-b^2}\right)$$

$$= \frac{1}{a}\left(2a^2\right)$$

$$= 2a$$

(In the next-to-last step we may drop the square roots of the squares without absolute values because $|x| \leq a$ and $\sqrt{a^2-b^2} < a$, so both quantities inside the parentheses are positive.)

What a miraculous collapse! One of the gnarliest algebraic expressions that I have ever encountered turned into the simple expression $2a$, which does not depend on $x$ or $y$ at all. In other words: the sum of the distances between every point on the ellipse and the two foci is always $2a$. Does the quantity $2a$ have any other significance? Yes: it is exactly the length of the major axis, as you can check by computing this sum for the $x$-intercept $(a,0)$.

Review Questions

6. Compute the sum of the distances from the $x$-intercept $(a,0)$ to the two foci, and verify that it equals $2a$.

7. What is the sum of the distances to the foci of the points on a vertically-oriented ellipse?

Review Answers

6. The distance between the $x$-intercept $(a,0)$ and $\left(\sqrt{a^2 - b^2}, 0\right)$ is $a - \sqrt{a^2 - b^2}$, and the distance between $(a,0)$ and $\left(-\sqrt{a^2 - b^2}, 0\right)$ is $a + \sqrt{a^2 - b^2}$. Adding these gives $a - \sqrt{a^2 - b^2} + a + \sqrt{a^2 - b^2} = 2a$.

7. $2b$

Defining an Ellipse by Focal Distance

So all this computing simply means that "stretched-out circles", what we've been calling ellipses, satisfy the focal property. What would be great is if we could define ellipses by the focal property. This would be a nice generalization of the way we define circles.
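Before continuing, the collapse to $2a$ can be spot-checked numerically. The sketch below is illustrative (the values $a = 5$, $b = 3$ are arbitrary choices, not from the text); it samples points around the ellipse and confirms that the sum of the distances to the two foci never strays from $2a$:

```python
import math

a, b = 5.0, 3.0
c = math.sqrt(a**2 - b**2)          # distance from center to each focus
f1, f2 = (c, 0.0), (-c, 0.0)

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

for k in range(100):
    t = 2 * math.pi * k / 100
    p = (a * math.cos(t), b * math.sin(t))   # point on x^2/a^2 + y^2/b^2 = 1
    s = dist(p, f1) + dist(p, f2)
    assert abs(s - 2 * a) < 1e-9             # focal sum is always 2a
```

Every sampled point gives the same sum, 10, which is $2a$ for this choice of $a$.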
Recall that circles are defined as the set of points in a plane that are a constant distance from a center point. Analogously, ellipses could be defined as the set of points in a plane for which the sum of the distances to two focus points is a constant. In other words, this definition would yield the exact same set of shapes as the "stretched out circle" definition that we started with.

Before proceeding we need to make sure of one thing. We've already proved that stretched-out circles satisfy the focal property, but how do we know that any shape satisfying the focal property is in turn a stretched-out circle? Well, the calculation above can simply be read backwards. In other words, suppose you have a set of points satisfying the focal property: each point's distances to the points $(f,0)$ and $(-f,0)$ sum to a fixed number $d$, where $2f < d$. Then we can choose $a$ and $b$ such that $d = 2a$ and $f = \sqrt{a^2 - b^2}$, and reading the argument in reverse shows that any point $(x, y)$ whose distances to $\left(\sqrt{a^2 - b^2}, 0\right)$ and $\left(-\sqrt{a^2 - b^2}, 0\right)$ sum to $2a$ must satisfy $\frac{x^2}{a^2} + \frac{y^2}{b^2} = 1$.

Review Questions

8. Explain why for any two positive numbers $d$ and $f$ with $2f < d$, we can always find $a$ and $b$ such that $d = 2a$ and $f = \sqrt{a^2 - b^2}$.

9. We just told you that the above proof could be read backwards. But you need to be careful when following algebraic steps backwards, especially ones involving squares or square roots.
  1. For example, what happens when you follow this argument backwards?
  $$x = -2$$
  $$x^2 = 4$$
  2. Write a convincing argument that it is okay to follow the steps backwards in the above proof that every stretched circle has the focal property.

Review Answers

8. Set $a = \frac{d}{2}$. Since $2f < d = 2a$ and both are positive, we have $f < a$, so we can draw a right triangle with hypotenuse $a$ and one leg $f$; let $b$ be the other leg. Then the Pythagorean Theorem tells us that $f^2 + b^2 = a^2$, so $f = \sqrt{a^2 - b^2}$, as desired.

9.
  1. Reading the argument backwards starts with $x^2 = 4$, but taking the square root of both sides gives $x = \pm 2$, not $x = -2$. The original statement cannot be recovered, since $(-2)^2$ and $2^2$ are both equal to 4: squaring loses information.
  2. The argument should include the fact that no information is lost (through squaring both sides or other operations) in any of the steps of the proof, so that each step is completely reversible.
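The recipe from Review Question 8, recovering the semi-axes from the focal data, can be written as a short sketch (illustrative code; the function name and sample values are arbitrary): given the constant sum $d$ and the focus distance $f$ with $2f < d$, take $a = d/2$ and $b = \sqrt{a^2 - f^2}$.

```python
import math

def semi_axes(d, f):
    """Given the constant focal sum d and the center-to-focus distance f
    (with 2*f < d), return the semi-axes (a, b) satisfying d = 2a and
    f = sqrt(a**2 - b**2)."""
    if not 2 * f < d:
        raise ValueError("need 2f < d for an ellipse")
    a = d / 2
    b = math.sqrt(a**2 - f**2)
    return a, b

a, b = semi_axes(10, 4)                          # d = 10, f = 4
assert a == 5 and abs(b - 3) < 1e-12
assert abs(math.sqrt(a**2 - b**2) - 4) < 1e-12   # recovers f
```

For example, a string of length 10 tacked at two points 8 units apart ($f = 4$) traces the ellipse with $a = 5$ and $b = 3$.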
Equation of an Ellipse Not Centered at the Origin

All the ellipses we've looked at so far are centered at the origin (0,0). To find an equation for an ellipse centered at another point, say $(h, k)$, replace $x$ with $x - h$ and $y$ with $y - k$. This shifts the graph $h$ units horizontally (to the left if $h < 0$) and $k$ units vertically (downward if $k < 0$):

$$\frac{(x-h)^2}{a^2} + \frac{(y-k)^2}{b^2} = 1$$

This ellipse is centered about the point $(h,k)$. If $b < a$, the foci are at $\left(h + \sqrt{a^2 - b^2}, k\right)$ and $\left(h - \sqrt{a^2 - b^2}, k\right)$; if $a < b$, the foci are at $\left(h, k + \sqrt{b^2 - a^2}\right)$ and $\left(h, k - \sqrt{b^2 - a^2}\right)$.

Example 4

Graph the equation $4x^2 + 8x + 9y^2 - 36y + 4 = 0$.

Solution: We need to get the equation into the form of the general equation above. The first step is to group the $x$ terms and the $y$ terms, move the constant to the other side, and factor out the coefficients of $x^2$ and $y^2$:

$$4(x^2 + 2x) + 9(y^2 - 4y) = -4$$

Now, we "complete the square" by adding the appropriate terms to the $x$ expression and the $y$ expression, balancing the right side accordingly. (See http://authors.ck12.org/wiki/index.php/Algebra_I-Chapter-10#Solving_Quadratic_Equations_by_Completing_the_Square for more on completing the square.)

$$4(x^2 + 2x + 1) + 9(y^2 - 4y + 4) = -4 + 4 + 36$$

Now we factor and divide both sides by 36 to get:

$$\frac{(x+1)^2}{9} + \frac{(y-2)^2}{4} = 1$$

And there we have it. Once the equation is in this form, we see this is an ellipse centered at (-1,2) that stretches 3 units horizontally and 2 units vertically from its center (so its major axis is horizontal, with length 6, and its minor axis is vertical, with length 4), and from this we can make a sketch of the ellipse.

Review Questions

10. Explain why subtracting $h$ from the $x$-coordinate and $k$ from the $y$-coordinate in the equation shifts the graph $h$ units horizontally and $k$ units vertically.

11. Graph this ellipse: $x^2 - 6x + 5y^2 - 10y - 66 = 0$

12. Now try this one that doesn't have such nice numbers! $16x^2 - 48x + 125y^2 + 150y + 61 = 0$

13. Now try this one: $3x^2 - 12x + 5y^2 + 10y - 3 = 0$

14. What about this one? $5x^2 - 15x - 2y^2 + 8y - 50 = 0$

Review Answers

10. If $(x,y)$ is a solution of $\frac{x^2}{a^2} + \frac{y^2}{b^2} = 1$, then $(x+h, y+k)$ is a solution of $\frac{(x-h)^2}{a^2} + \frac{(y-k)^2}{b^2} = 1$, so every point of the graph moves $h$ units horizontally and $k$ units vertically.

13. Completing the square gives $3(x-2)^2 + 5(y+1)^2 = 20$; dividing by 20, this is the ellipse $\frac{(x-2)^2}{20/3} + \frac{(y+1)^2}{4} = 1$, centered at $(2,-1)$.

14.
After completing the square, the $x^2$ and $y^2$ terms have opposite signs, so the equation cannot be put in the standard form of an ellipse: this is not an ellipse.

Difference Between an Ellipse and an Oval, or Proving the Sliced-Cone Definition of an Ellipse

There is still one critical step missing in our exploration of ellipses. We showed that "stretched-out circles" satisfy the focal property, and that any shape satisfying this property is in fact a "stretched-out circle". So these are actually the same class of shapes, and they are called ellipses. But not just any oval-shaped curve is an ellipse: draw a random oval and you're not likely to be able to find two points that satisfy the focal property. In particular, when we cut a cone with a tilted plane, how do we know that the oval-shaped curve that results is a "stretched-out circle" satisfying the focal property?

Amazingly, the Ancient Greeks had an argument for this fact over two millennia ago. While it is impressive that this problem was solved so long ago, the Greek argument involves an intricate construction that isn't as illuminating as the more modern one I'm going to show you instead, an argument that is simply stunning in its simplicity. This modern argument isn't fancy, and the Greeks had all the tools they needed to understand it; they just didn't happen to think of it. It wasn't until 1822 that the French mathematician Germinal Dandelin thought of this very clever construction.

Dandelin found a way to locate the foci and prove the focal property in one fell swoop. Here's what he said. Take the conic section in question. Then choose a sphere that is just the right size so that when it's dropped into the cone, it touches the intersecting plane, as well as fitting snugly against the cone on all sides. If you prefer, you can think of the sphere as a perfectly round balloon that is blown up until it "just fits" inside the cone, still touching the plane. Then do the same on the other side of the plane.
After we’ve drawn both of these spheres we have this picture: These spheres are often called “Dandelin spheres”, named after their discoverer. It turns out that not only is our shape an ellipse (which, like all ellipses satisfies the focal property), but these spheres touch the ellipse exactly at the two foci. To see this, consider this geometric argument. The first thing to notice is that the circles $C_1$$C_2$$l$$d$$C_1$$C_2$$C_1$$C_2$ The next thing to remember is a property of tangents to spheres that you may have learned in geometry. If two segments are drawn between a point and a sphere, and if the line containing each segment is tangent to the sphere, then the two segments are equal. In the diagram below, $AB=AC$this description for more about this property.) Now consider the point $P$$\overline{QR}$$d$$C_1$$C_2$$P$$d_1$$d_2$$d_1 =RP$$d_2 =PQ$$d_1 + d_2 = RP + PQ = QR = d$$d$$P$ Review Questions 15. What do the Dandelin spheres look like in the case of a circle? 16. What is the area of an ellipse with the equation $\frac{(x-h)^2}{a^2} + \frac{(y - k)^2}{b^2} = 1$ 17. What is the perimeter of an ellipse with the equation $\frac{(x-h)^2}{a^2} + \frac{(y - k)^2}{b^2} = 1$ Review Answers 15. The Dandelin spheres for a circle lie directly above one another, and both touch the circle at the center point. 16. The area of an ellipse is $ab \pi$$\pi$$a$$x-$$b$$y-$$a$$b$$ab$$ab \pi$ 17. This is actually a much more difficult question than the previous one. You’re on your own! Even the great Indian mathematician Ramanujan could only come up with an approximation: $p \approx \pi \ left[3 (a+b) - \sqrt{(3a+b)(a+3b)}\right]$ The number of places ellipses appear in the natural world is immense. Consider, for instance, how often the simplest kind of ellipse, the circle, appears in your life. You see circles emanating as waves when you throw a rock into a pond. You see circular pupils when you look at a set of eyes. 
You see what is roughly a circle as the image of the sun or moon in the sky. The path of an object swung around on a string is a circle. When any of these circles is viewed at an angle, you see an elongated circle, an ellipse. So keep in mind that the discussion that follows covers only a few of the instances of ellipses in our daily lives.

Planetary Motion

When a planet orbits the sun (or when any object orbits any other), it takes an elliptical path and the sun lies at one of the two foci of the ellipse. Johannes Kepler first proposed this at the beginning of the seventeenth century, as one of his three laws of planetary motion, after analyzing the observational data of Tycho Brahe. His law is accurate enough that modern computations based on it are still used to predict the motion of artificial satellites. A century later, Newton's law of gravity offered an explanation of why this law holds.

Review Questions

18. Though planets take an elliptical path around the sun, these ellipses often have very low eccentricity, meaning they are close to being circles; the diagram above exaggerates the elliptical shape of a planet's orbit. The Earth's orbit has an eccentricity of 0.0167, and its minimum distance from the sun is 146 million km. What is its maximum distance from the sun? The sun's diameter is 1.4 million km; do both foci of the Earth's orbit lie within the sun? Recall that the eccentricity of an ellipse is $\varepsilon = \frac{\sqrt{a^2 - b^2}}{a}$.

19. While the elliptical paths of planets are closely approximated by circles, comets and asteroids often have orbits that are ellipses with very high eccentricity. Halley's comet has an eccentricity of 0.967, and it comes within 54.6 million miles of the sun at its closest point, or "perihelion". What is the farthest point it reaches from the sun?

Review Answers

18. Assume that the Earth's orbit is an ellipse centered at (0,0) with a horizontal major axis, so the sun sits at one focus.
Then we can use the distance from the center to the focus, $c = \sqrt{a^2 - b^2}$. The minimum distance from the sun is $a - c = 146$, so the major axis satisfies $146 + 146 + 2c = 2a$, and the eccentricity gives $0.0167 = \frac{\sqrt{a^2 - b^2}}{a} = \frac{c}{a}$. Solving these two equations: $a(1 - 0.0167) = 146$, so $a \approx 148.5$ million km and $c \approx 2.5$ million km. The maximum distance from the sun is therefore $a + c \approx 151$ million km. The two foci are $2c \approx 5$ million km apart; since the sun sits at one focus and its diameter is only 1.4 million km, the other focus lies well outside the sun.

19. $\sim 3.25$ billion miles. The perihelion distance is $a(1 - \varepsilon) = 54.6$ million miles, so $a = \frac{54.6}{0.033} \approx 1655$ million miles, and the farthest (aphelion) distance is $a(1 + \varepsilon) \approx 1655 \times 1.967 \approx 3255$ million miles.

Echo Rooms

The National Statuary Hall in the United States Capitol Building is an example of an ellipse-shaped room, sometimes called an "echo room", which provides an interesting application of a property of ellipses. If a person whispers very quietly at one of the foci, the sound echoes in such a way that a person at the other focus can often hear the whisper very clearly. Rumor has it that John Quincy Adams took advantage of this property to eavesdrop on conversations in this room.

The property of ellipses that makes echo rooms work is called the "optical property". So why echoes, if this is an optical property? Well, light rays and sound waves bounce around in similar ways. In particular, they both bounce off walls at equal angles. In the diagram below, $\alpha = \beta$.

For a curved wall, they bounce at equal angles to the tangent line at that point.

So the "optical property" of ellipses is that the lines between a point on the ellipse and the two foci form equal angles with the tangent at that point; in other words, whispers coming from one focus bounce directly to the other focus. In the diagram below, for each point $Q$ on the ellipse, $\angle{\alpha} \cong \angle{\beta}$.

This seems reasonable, given the symmetry of the ellipse, but how do we know it is true? First, let's prove an important property that is a bit more general.
Suppose you have two points, $P_1$ and $P_2$, lying on the same side of a line $l$, and you want to find the point $Q$ on $l$ for which the total path length $P_1 Q + Q P_2$ is as small as possible.

A nice way to find this point is to reflect $P_2$ across $l$, producing the point $P_2'$.

Since $l$ is the perpendicular bisector of the segment $\overline{P_2 P_2'}$, every point on $l$ is the same distance from $P_2$ as it is from $P_2'$. So for any point on $l$, the path length to $P_1$ and $P_2$ equals the path length to $P_1$ and $P_2'$, and the latter is smallest when the point lies on the straight segment from $P_1$ to $P_2'$, because a straight line is the shortest path between two points. So the minimizing point $Q$ is the intersection of $l$ with the segment $\overline{P_1 P_2'}$, and the shortest total path is the bent path $P_1 Q$ followed by $Q P_2$.

The part of the above diagram that is going to help us prove the optical property is the angles: $\angle{1} \cong \angle{2}$ because reflection preserves angles, and $\angle{2} \cong \angle{3}$ because they are vertical angles, so $\angle{1} \cong \angle{3}$. In other words, the path from $P_1$ to $P_2$ that minimizes the total distance meets $l$ at equal angles.

Now, to prove the optical property of the ellipse, apply the above situation to the ellipse. In the picture below, $P_1$ and $P_2$ are the foci, $Q$ is a point on the ellipse, and $l$ is the tangent line at $Q$.

The optical property states that $\angle{\alpha} \cong \angle{\beta}$. Since $l$ is tangent to the ellipse, every point of $l$ other than $Q$ lies outside the ellipse and so has a greater sum of distances to the two foci than $Q$ does. So $Q$ is exactly the point on $l$ that minimizes the total distance $P_1 Q + Q P_2$, and by the argument above, this minimizing path meets $l$ at equal angles. Therefore $\angle{\alpha} \cong \angle{\beta}$.

Review Questions

20. Design the largest possible echo room given the following constraints: you would like to spy on someone who will be standing 3 m from the tip of the ellipse, and the room cannot be more than 100 m wide in any direction. How far from the person you're spying on will you be standing?

Review Answers

20. The echo room has a major axis of 100 m and a minor axis of about 34.12 m. Situating the room in the coordinate plane with its center at the origin, the room can be represented by the equation

$$\frac{x^2}{2500} + \frac{y^2}{291} = 1$$

The person being spied on stands at one focus, 3 m from the tip, so the foci lie at $(\pm 47, 0)$; you stand at the other focus, $2(47) = 94$ m away.

Sundials

Conic sections help us solve the problem of making a sundial. Depending on the season, the sun shines at a different angle. However, due to the elliptical nature of the Earth's orbit about the sun, a shadow-casting stick can be placed in such a way that its shadow always tells the correct time of day, no matter what the time of year, as long as the stick is lined up with the Earth's pole of rotation.

Review Questions

21. No matter what the orientation of the stick, if you trace out the path that the shadow of the tip makes on a flat surface, you will find it is an ellipse.
Describe why this is true. (HINT: for simplicity, you can assume that you are making the measurements throughout the course of one day, and that with the exception of the Earth rotating about its pole, the sun and the Earth are fixed with respect to one another.)

22. It was mentioned earlier in the chapter that when a round glass of water is tilted, the surface of the water is an ellipse; in other words, the cross section of a cylinder is an ellipse. Prove that this is true. (HINT: to prove that the cross section is an ellipse, all you need to do is show that for any cross section of a cylinder there exists a cone that has the same cross section.)

23. From the exercise above, it appears that there is some overlap between "conic sections" and "cylindrical sections". Are any of the classes of conic sections we found in the last section not cylindrical sections? Are there any cylindrical sections that are not conic sections?

Review Answers

21. Answers may vary.

22. Answers may vary.

23. Answers may vary.

Vocabulary

Ellipse
A conic section that can be equivalently defined as: 1) any finite conic section, 2) a circle which has been dilated (or "stretched") in one direction, or 3) the set of points for which the sum of the distances to two special points, called the foci, is constant.

Major axis
The segment spanning an ellipse in the longest direction.

Minor axis
The segment spanning an ellipse in the shortest direction.

Focus (plural: foci)
One of the two points that define an ellipse in the third definition above.

Eccentricity
A measure of how "stretched out" an ellipse is. Formally, it is the distance between the two foci divided by the length of the major axis. The eccentricity ranges from 0 (a circle) up to, but not including, 1; values close to 1 correspond to very elongated ellipses.
{"url":"http://www.purplemath.com/Gibbstown_Math_tutors.php","timestamp":"2014-04-19T05:01:29Z","content_type":null,"content_length":"23680","record_id":"<urn:uuid:7759b060-9792-4b20-8aaf-e4f7711bc66f>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00482-ip-10-147-4-33.ec2.internal.warc.gz"}
Vectors, Matrices, Systems, Calculator++ Fx December 7th 2013, 08:01 AM #1 Dec 2013 Vectors, Matrices, Systems, Calculator++ Fx check out this advanced calculator: https://itunes.apple.com/us/app/calc...fx/id761929893 - Writes expressions in their mathematical form so that they look as on a blackboard or a textbook - Basic and advanced calculations (trigonometric, series, numerical integrations, finding roots, converting values between rectangular, polar form’s, etc) 
- Complex numbers calculations 
- Vector calculations (norm, unit vector, cross & dot products, angle two between vectors, scalar projection of vector onto another vector, vector projection of vector onto another vector, converting between polar, cylindrical, spherical, rectangular forms) 
- Matrix calculations (identity matrix, determinant, transpose, inverse, rank, ref, rref, LU, QR, cumulativ, per row operations (swap, add, multiply, multiply and add), augment
) - Solving systems of linear equations (systems can be exported as images or pdf files)
 - Graphing up to 8 expressions (graphs can be exported as images or pdf files) 
- Saving all calculations in history (history can be exported) 
- 1200+ constants
 - Storing and recalling data in/from memory slots or ANS(answer)
 - 15 memory slots for storing data (can store real or complex numbers, vectors and matrices)
 - Support for octal, hexadecimal and binary calculations Follow Math Help Forum on Facebook and Google+
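For reference, the basic vector operations in the feature list above can be written out in a few lines of plain Python; this is an independent sketch of the underlying formulas, not code from the app:

```python
import math

# 3-component vectors represented as tuples of floats.

def dot(a, b):
    """Dot product of two vectors."""
    return sum(x * y for x, y in zip(a, b))

def norm(a):
    """Euclidean norm (length) of a vector."""
    return math.sqrt(dot(a, a))

def unit(a):
    """Unit vector in the direction of a."""
    n = norm(a)
    return tuple(x / n for x in a)

def cross(a, b):
    """Cross product of two 3-D vectors."""
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def angle(a, b):
    """Angle between two vectors, in radians."""
    return math.acos(dot(a, b) / (norm(a) * norm(b)))

def scalar_projection(a, b):
    """Signed length of the projection of a onto b."""
    return dot(a, b) / norm(b)

def vector_projection(a, b):
    """Projection of a onto b, as a vector."""
    s = dot(a, b) / dot(b, b)
    return tuple(s * x for x in b)

u, v = (1.0, 0.0, 0.0), (1.0, 1.0, 0.0)
print(cross(u, v))                        # (0.0, 0.0, 1.0)
print(round(math.degrees(angle(u, v))))   # 45
```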
{"url":"http://mathhelpforum.com/math-software/224890-vectors-matrices-systems-calculator-fx.html","timestamp":"2014-04-19T07:29:35Z","content_type":null,"content_length":"31829","record_id":"<urn:uuid:00a4ce4a-3604-48b1-b2f8-f0fea3392604>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00049-ip-10-147-4-33.ec2.internal.warc.gz"}
[Edu-sig] Basic dictionary question Kirby Urner urnerk at qwest.net Sun Oct 9 07:54:39 CEST 2005 Talk about fishing for expert help! Thanks Guido. It's a pruning algorithm where I strip away pieces that don't meet up at (x,y,z) bridge points. Lots of symmetry about the origin so just pure distance won't work (not unique enough). I think I might still get away with a tuple-indexed dict (tallying for pairs, pruning the rest) if I boil my floating points down to fixed decimals -- a chance to play with the new decimal type perhaps. I appreciate the quick lecture on the intransitive nature of fuzzy equality and __hash__ vs. __eq__. Helps me think. > You never said what you wanted to do with these points -- are you > looking to find which points are in fact close enough to be considered > the same point? I suppose you could sort them by the vector length > (sqrt(x**2 + y**2 + z**2), that sort of thing) and then you only have > to compare points that are close together in the sorted list (as long > as the abs values are closer than your epsilon). Or you could divide > 3-space up into cubes with sides equal epsilon, and then you'd only > have to look for points that end up in the same or in adjacent cubes. > --Guido
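Both ideas in the exchange — boiling coordinates down to fixed decimals for use as dict keys, and Guido's suggestion of dividing 3-space into epsilon-sized cubes — amount to quantizing each point onto a grid before hashing. A minimal sketch (the tolerance and names here are illustrative, not from Kirby's actual code):

```python
from collections import defaultdict

EPS = 1e-6  # illustrative tolerance: points within ~EPS land in the same cell

def cell(point, eps=EPS):
    """Quantize an (x, y, z) tuple of floats onto a grid of cell size eps,
    giving a hashable integer-tuple key for a dict."""
    return tuple(round(c / eps) for c in point)

def tally(points, eps=EPS):
    """Count how many points fall into each grid cell."""
    counts = defaultdict(int)
    for p in points:
        counts[cell(p, eps)] += 1
    return counts

# Two nearly identical points and one distinct point:
pts = [(0.1 + 1e-9, 0.2, 0.3), (0.1, 0.2, 0.3 - 1e-9), (1.0, 0.0, 0.0)]
counts = tally(pts)
bridges = [k for k, n in counts.items() if n >= 2]  # cells hit more than once
print(len(bridges))  # 1
```

Note the caveat implicit in Guido's adjacent-cubes remark: two points straddling a cell boundary can still hash to different keys, so a robust version also checks the neighboring cells (or, as in the email, sorts by vector length first and compares nearby entries).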
{"url":"https://mail.python.org/pipermail/edu-sig/2005-October/005293.html","timestamp":"2014-04-18T13:24:27Z","content_type":null,"content_length":"3737","record_id":"<urn:uuid:0209c548-0ea4-47cc-9cdc-e2d2a491a6bc>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00496-ip-10-147-4-33.ec2.internal.warc.gz"}
Principles of hydrometer analysis The physical principles of sedimentation underlying the hydrometer analysis are presented in a number of texts including Das (2002); we briefly review them here. The hydrometer analysis applies Stokes’s law, which governs the terminal velocity at which spherical particles settle through a column of fluid (Craig, 1992). Stokes’s law assumes particles that (1) are rigid, spherical, and smooth; (2) have similar density; (3) are separated from each other; (4) do not interact during sedimentation; and (5) are large enough so that settlement is not governed by Brownian motion. The law is also strictly applicable to slow fluid movements that display laminar flow patterns (i.e., Reynolds number < 1) (Wen et al., 2002). Hydrometer analysis begins after thoroughly mixing the sediment and water, after which particles settle out of the water column according to Stokes’s law. The density of a sediment-water suspension depends on the concentration and specific gravity of the sediments present in the mixture. If the suspension is allowed to stand, particles will settle out of the suspension and the density of the sediment-water suspension will decrease. A hydrometer measures the density of the suspension at a known depth below the surface. The two basic calculations made during a hydrometer analysis are the particle diameter at a specific time and depth and the percentage of the original sample mass still left in suspension. We calculate the particle diameter according to the following equation: • D = equivalent sedimentation diameter of particle (millimeters), • η = viscosity of water (grams seconds per square centimeter), • G[s] = specific gravity of sediment, • L = effective depth measured from water surface to center of gravity of hydrometer bulb (centimeters), and • t = time measured from start of sedimentation (seconds). 
The percentage of particles remaining in suspension finer than particle diameter, D, is • G[s] = specific gravity of sediment, • V = total water-sediment volume (1000 mL), • M = dry sample mass (grams), • R[h] = corrected hydrometer reading of slurry mixture (grams per liter), and • B = hydrometer reading of reference mixture of dispersing agent and distilled water (grams per liter).
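As a hedged sketch of the computations described above, the standard Stokes-law diameter formula and an ASTM-style percent-finer formula consistent with the listed variables are shown below. SI units are used here instead of the mixed laboratory units in the variable lists, and the exact forms are standard textbook expressions supplied as assumptions, not quoted from this text:

```python
import math

# Stokes's law for the terminal velocity of a small sphere:
#   v = (rho_s - rho_w) * g * D**2 / (18 * eta)
# With the observed settling velocity v = L / t, solving for D gives the
# equivalent sedimentation diameter.

def stokes_diameter(eta, rho_s, rho_w, L, t, g=9.81):
    """Equivalent sedimentation diameter (m) of a particle settling a
    depth L (m) in time t (s) through fluid of viscosity eta (Pa*s)."""
    v = L / t  # settling velocity (m/s)
    return math.sqrt(18.0 * eta * v / ((rho_s - rho_w) * g))

def percent_finer(G_s, V, M, R_h, B):
    """Percentage of the dry sample mass M (g) still in suspension, from
    corrected hydrometer reading R_h and blank reading B (g/L) in a
    total volume V (mL). Standard ASTM-style form, assumed here."""
    return 100.0 * (G_s / (G_s - 1.0)) * (R_h - B) * (V / 1000.0) / M

# Quartz-like grain (rho_s = 2650 kg/m^3) settling 10 cm in 60 s through
# water at 20 degrees C (eta = 1.002e-3 Pa*s):
D = stokes_diameter(eta=1.002e-3, rho_s=2650.0, rho_w=1000.0, L=0.10, t=60.0)
print(f"D = {D * 1e6:.1f} um")  # tens of micrometres: silt-sized
```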
{"url":"http://publications.iodp.org/proceedings/308/205/205_4.htm","timestamp":"2014-04-17T12:36:03Z","content_type":null,"content_length":"10497","record_id":"<urn:uuid:0a1dacb6-acf1-4af0-bfc0-fdaea10027ed>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00538-ip-10-147-4-33.ec2.internal.warc.gz"}
Miami ACT Tutor Find a Miami ACT Tutor ...My objective in grammar is to make my student adopt good habits and generate valuable knowledge and skills by following rules, which will lead to academic success. This requires regular and rigorous practice, and willingness to know, and respect those rules, and also an understanding that applyi... 20 Subjects: including ACT Math, reading, English, ESL/ESOL ...I am required to maintain a 3.0 average and have a 3.7 unweighted and a 6.06 weighted GPA. I also have over 500 community service hours which I obtained from volunteering at a daycare in Georgia over the course of 3 consecutive summers. I have always gotten A's in all my math courses and am a D... 12 Subjects: including ACT Math, calculus, algebra 2, algebra 1 ...I am very patient and believe in teaching by example. My general teaching strategy is the following: I generally cover the topic, then explain in detail, make the student do some problems or write depending on the subject, and finally I make them explain and teach the topic back to me. The following meeting, we review the topic once again to prove they mastered it. 30 Subjects: including ACT Math, chemistry, ESL/ESOL, English ...I am an experienced tutor in most high school and college-level Math and Science courses, particularly in the areas of General Chemistry, Organic Chemistry, Physics, Biology, Calculus and Statistics. I am also able to tutor for standardized exams including the ACT, SAT Reasoning Test, and the MC... 32 Subjects: including ACT Math, chemistry, physics, statistics ...I've been a professional tutor for over three years now, and I have significant experience in providing academic aid to my students in a variety of subjects. I began tutoring because I noticed during my years in high school that I was able to help my classmates and younger students understand ch... 42 Subjects: including ACT Math, reading, Spanish, English
{"url":"http://www.purplemath.com/Miami_ACT_tutors.php","timestamp":"2014-04-18T13:53:57Z","content_type":null,"content_length":"23572","record_id":"<urn:uuid:2d4f6ebe-d687-4226-bb3c-9235f01f6de7>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00514-ip-10-147-4-33.ec2.internal.warc.gz"}
Download Page Paper Models of Polyhedra (PDF-files)
A selection of the paper models is available in one document:
- Thin lines (PDF) (2587K)
- Medium thick lines (PDF) (2587K)
Zipped PDF:
- Thin lines (PDF-Zip) (300K)
- Medium thick lines (PDF-Zip) (300K)
I advise you to use the 'thin lines' version. The lines are almost invisible on the completed model.
Platonic Solids Pictures of Platonic Solids dodecahedron on pedestal tetrahedron on pedestal cube on pedestal octahedron on pedestal icosahedron on pedestal Archimedean Solids Pictures of Archimedean Solids truncated tetrahedron truncated octahedron truncated cube truncated cuboctahedron truncated icosidodecahedron snub cube snub dodecahedron truncated icosahedron (football) black and white truncated icosahedron (soccerball) truncated dodecahedron Kepler-Poinsot Polyhedra Pictures of Kepler-Poinsot Polyhedra great icosahedron great dodecahedron small stellated dodecahedron small stellated dodecahedron on pedestal great stellated dodecahedron Other Uniform Polyhedra Pictures of Other Uniform Polyhedra small rhombihexahedron small cubicuboctahedron small dodecicosidodecahedron small rhombidodecahedron small dodecahemidodecahedron 1. small ditrigonal icosidodecahedron 2. small ditrigonal icosidodecahedron. No inner lines in the pentagrams 3. small ditrigonal icosidodecahedron.
Like the gif images (with inner lines in the pentagrams) small snub icosicosidodecahedron small icosihemidodecahedron Pictures of Compounds stella octangula compound of cube and octahedron compound of dodecahedron and icosahedron compound of two cubes compound of three cubes compound of four cubes compound of five cubes compound of five octahedra compound of five tetrahedra compound of five tetrahedra (mirror image) compound of truncated icosahedron and pentakisdodecahedron compound of truncated icosahedron and pentakisdodecahedron (color) Pictures of Pyramids square pyramids cheops pyramid regular pentagonal pyramid multi side base pyramids LED-light lampshade hexagonal pyramid pyramids of the same height pyramids same height and diameter Concave Pyramids Pictures of Concave Pyramids star pyramids Pentagrammic pyramid (high) standing pentagrammic pyramid Illuminated standing pentagrammic pyramid asymmetric pentagrammic pyramid hexagrammic pyramid Truncated Pyramids Pictures of Truncated Pyramids truncated pyramids of the same height truncated square pyramid truncated star pyramids oblique truncated pyramid vertical truncated square pyramid Pictures of Dipyramids star dipyramids pentagrammic star on pedestal pentagrammic star on pedestal (2) pentagrammic star on hexagonal pedestal pentagrammic star on hexagrammic pedestal regular pentagonal dipyramid icosagonal dipyramid and other tetracontahedra Other Pyramids Pictures of Other Pyramids asymmetric square pyramids rhombic pyramids imperfect pyramids twisted pyramid compound of two asymmetric pyramids compound of two asymmetric pyramids version 2 Compound of two different asymmetric pyramids Compound of two asymmetric pyramids of different sizes Compound of two asymmetric pyramids (color) compound of two truncated asymmetric pyramids three pyramids that form a cube six square pyramids that form a cube six triangular pyramids that form a cube bent pyramid bent pyramid in color step pyramid (pyramid of Djoser) step pyramid of
Djoser in color five triangular pyramids in a cube five square pyramids in a cube pentagonal-decagonal pyramid square-octagonal pyramid Pictures of Prisms triangular prism triangular prism in color rectangular prism rectangular prism in color pentagonal prism hexagonal prism heptagonal prism octagonal prism enneagonal prism decagonal prism hendecagonal prism dodecagonal prism Pictures of Antiprisms rectangular antiprism pentagonal antiprism hexagonal antiprism heptagonal antiprism octagonal antiprism enneagonal antiprism decagonal antiprism Concave Prisms Pictures of Concave Prisms pentagrammic prism hexagrammic prism concave hexagonal prism Concave Antiprisms Pictures of Concave Antiprisms pentagrammic antiprism hexagrammic antiprism Other Prisms Pictures of Other Prisms trapezoidal prism oblique rectangular prism oblique pentagonal prism oblique rhombic prism twisted rectangular prism truncated hexagonal prism twisted decagonal prism twisted hexagonal prisms twisted pentagonal prisms twisted triangular prisms twisted dodecagonal prisms twisted octagonal prisms Other Polyhedra Pictures of Other Polyhedra pentagonal hexecontahedron pentagonal icositetrahedron square trapezohedron rhombic dodecahedron great rhombihexacron small triakisoctahedron small triambic icosahedron faceted sphericons (nine models) faceted sphericon (40 faces) and other tetracontahedra hybrid faceted sphericon half faceted sphericons third stellation of the icosahedron sixth stellation of the icosahedron seventh stellation of the icosahedron eighth stellation of the icosahedron ninth stellation of the icosahedron final stellation of the icosahedron Faces of stellations of the icosahedron Seven obelisks (square pentagonal hexagonal heptagonal octagonal decagonal conical) cubic shape 1 cubic shape 2 cubic shape 3 cubic shape 4 cubic shape 5 cubic shape 6 half archimedean solids black and white half truncated-icosahedron (half football) decagonal dipyramidal antiprism and other tetracontahedra 
pentagonal-pentagrammic shape prolate heptacontadihedron semi prolate heptacontadihedron prolate hectohexecontadihedron semi prolate hectohexecontadihedron faceted oloid tetrakis hexahedron truncated icosahedron variations football star decagonal solids Pictures of Kaleidocycles hexagonal kaleidocycle hexagonal kaleidocycles in color half closed hexagonal kaleidocycle open hexagonal kaleidocycle quarter closed hexagonal kaleidocycle seven twelfths closed hexagonal kaleidocycle octagonal kaleidocycle closed octagonal kaleidocycle closed octagonal kaleidocycle in color half closed octagonal kaleidocycle decagonal kaleidocycle closed decagonal kaleidocycle half closed decagonal kaleidocycle tetrakaidecagonal kaleidocycle closed tetrakaidecagonal kaleidocycle half closed tetrakaidecagonal kaleidocycle closed dodecagonal kaleidocycle dodecagonal kaleidocycle half closed dodecagonal kaleidocycle Other Paper Models Pictures of Other Paper Models tapered cylinder asymmetric cone oval cone square cone square truncated cone cubotruncated cone three quarter sphericon matryoshka house matryoshka house (half size) houses with vanishing point half oloid quarter oloid three quarter oloid oblique cylinder oblique truncated cylinder football house paper model Christmas tree paper model Christmas tree (no holes) paper model hexagrammic Christmas tree paper model hexagrammic Christmas tree (no holes) top star for paper model Christmas tree top star Christmas tree icosahedron Christmas tree topper Trapezohedron Christmas baubles Seven obelisks (square pentagonal hexagonal heptagonal octagonal decagonal conical) football trophy faceted sphericons (nine models) faceted sphericon (40 faces) and other tetracontahedra hybrid faceted sphericon Polyhedra Collections Tetrahedra Collection (more than 35 tetrahedra models): (zipped pdf-file 1.3 Mb) Tetrahedra Collection (more than 35 tetrahedra models): (7 Mb) Coloring tetrahedra Dodecahedra Collection (18 dodecahedra): (zipped pdf-file 5 Mb)
Dodecahedra Collection (18 dodecahedra): (22 Mb) LED-light lampshade dodecahedron All Platonic Solids and Archimedean solids in color (19 models): (pdf-file 380 Kb) All Platonic Solids and Archimedean solids in light color (19 models): (pdf-file 170 Kb) Seven Archimedean solids and the matching Platonic solids (16 models): (pdf-file 340 Kb) Kepler-Poinsot Polyhedra in color (4 polyhedra in small and large size): (pdf-file 320 Kb) Compounds of Cubes and Compounds of Platonic Solids with Duals: (pdf-file 750 Kb) Platonic Solids in color divided in two: (pdf-file 100 Kb) Platonic Solids divided in two: (pdf-file 70 Kb) Platonic Solids in color with duals inside: (pdf-file 330 Kb) Platonic Solids with duals inside: (pdf-file 90 Kb) 5 Stellations of the Icosahedron: (pdf-file 350 Kb) Prism Collections Collection of prisms (850 kB) Collection of Antiprisms (635 kB) Collection of Concave Prisms and Antiprisms (PDF: 430 kB) Copyright © 1998-2014 Gijs Korthals Altes All rights reserved. It's permitted to make prints of the nets for non-commercial purposes only.
{"url":"http://korthalsaltes.com/selecion.php?sl=download","timestamp":"2014-04-18T05:30:39Z","content_type":null,"content_length":"58331","record_id":"<urn:uuid:28ba2b31-d7e3-4f7a-a81e-5252f1d4b9bf>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00051-ip-10-147-4-33.ec2.internal.warc.gz"}
Traveling Wavefronts of Competing Pioneer and Climax Model with Nonlocal Diffusion
Abstract and Applied Analysis, Volume 2013 (2013), Article ID 725495, 12 pages
Research Article
School of Mathematics, South China Normal University, Guangzhou 510631, China
Received 17 November 2012; Revised 11 March 2013; Accepted 21 March 2013
Academic Editor: Dumitru Baleanu
Copyright © 2013 Xiaojing Yu et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
We study a competing pioneer-climax species model with nonlocal diffusion. By constructing a pair of upper-lower solutions and using an iterative technique, we establish the existence of traveling wavefronts connecting the pioneer-existence equilibrium and the coexistence equilibrium. We also discuss the asymptotic behavior of the wave tail for the traveling wavefronts as .
1. Introduction
As we know, the interactions among species are important in determining the process of evolution for an ecosystem, and modeling accompanied with mathematical analysis can help people understand and control the propagation of species. In general, the per capita growth rate (i.e., fitness) of a species in the model is assumed to be a function of a weighted total density of all interacting species. A well-known example is the standard Lotka-Volterra model, in which the fitness of a species is a linear function. It is natural to consider fitness functions other than the linear one, because of the variety of species and interaction rules. In this paper, we analyze a reaction-diffusion model describing pioneer and climax species. This model describes interaction among species with peculiar fitness functions.
A species is called a pioneer species if it thrives best at low density and its fitness decreases monotonically with total population density as the habitat becomes overcrowded. Thus, the fitness function of a pioneer species is assumed to be a decreasing function. Pine and yellow poplar are species of this type. A species is called a climax species if its fitness increases up to a maximum value and then decreases as its total density grows. Hence, a climax population is assumed to have a nonmonotone, “one-humped” smooth fitness function. Oak and maple are climax species. A typical reaction-diffusion model for pioneer-climax species is given by the following system: where and represent densities of the pioneer and climax species, respectively, and and denote the pioneer fitness function and climax fitness function, respectively, . By making the changes of variables , , system (1) changes into the form (the tildes of and are dropped) where we still use and as the new coefficients without confusion. From the previous introduction, we assume that the pioneer fitness function satisfies for some , and the climax fitness function satisfies Ricker [1] used the fitness function , Hassell and Comins [2] used the fitness function , and Cushing [3] used the fitness function . It is obvious that these and have the curves shown in Figure 1. There are some existing results about stability and traveling wave solutions for (2) ([4–6]). Regarding traveling wave solutions, Brown et al. [4] studied traveling waves of (1) connecting two boundary equilibria by a singular perturbation technique, and Yuan and Zou [6] obtained the existence of traveling wave solutions connecting a monoculture state and a coexistence state by the upper-lower solution method combined with the Schauder fixed point theorem. Also see van Vuuren [7] for the existence of traveling plane waves in a general class of competition-diffusion systems and Murray [8] for more biological description of traveling wave solutions.
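The qualitative assumptions on the fitness functions — a monotonically decreasing pioneer fitness and a one-humped climax fitness — can be checked numerically with representative exponential-family functions. The particular formulas below are illustrative stand-ins, not the specific forms cited from Ricker, Hassell-Comins, or Cushing:

```python
import math

def pioneer_fitness(y):
    """Illustrative pioneer fitness: strictly decreasing in total density y."""
    return math.exp(1.0 - y)

def climax_fitness(y):
    """Illustrative climax fitness: rises to a single hump (at y = 1), then falls."""
    return y * math.exp(1.0 - y)

ys = [0.1 * i for i in range(1, 50)]
p = [pioneer_fitness(y) for y in ys]
c = [climax_fitness(y) for y in ys]

# Pioneer fitness decreases at every step of the density grid:
assert all(a > b for a, b in zip(p, p[1:]))

# Climax fitness increases up to its hump and decreases afterwards:
peak = c.index(max(c))
assert all(a < b for a, b in zip(c[:peak], c[1:peak + 1]))
assert all(a > b for a, b in zip(c[peak:], c[peak + 1:]))
```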
For the system without spatial diffusion, the model will be Selgrade and Roberds [9] and Sumner [10] analyzed the Hopf bifurcation of (5), and Selgrade and Namkoong [11] and Sumner [12] considered the stable periodic behavior of (5). Because of the rich set of equilibria and the various ranges of parameters, the dynamics of the ordinary differential system (5) are complex, and a detailed review of all equilibrium types can be found in Buchanan [13, 14]. Although the Laplacian operator is commonly used to model the diffusion of a species, it implies that the population at a location can only be influenced by the variation of the population near . Since individuals can move freely, their movement is bound to affect other, distant individuals. So, the Laplacian operator has some shortcomings in describing the diffusion. One way to deal with this problem is to replace the Laplacian operator with a convolution diffusion term This implies that the probability distribution function for the population at location moving to the location is . At time , the total number of individuals that move from the whole space into the location will be . Therefore, one may call it a nonlocal diffusion and, correspondingly, call a local diffusion. In recent years, models with nonlocal diffusion have attracted much attention (see [15–18]). In this paper, instead of (2), we will concentrate on the following pioneer-climax system with nonlocal diffusion: where , are positive constants accounting for the diffusivity, is a kernel function which is continuous on satisfying We are interested in traveling wavefronts accounting for a mild invasion of the two species (traveling wavefronts connecting a boundary equilibrium and the coexistence equilibrium).
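In the standard notation for nonlocal dispersal, writing the kernel as $k$ and the diffusivity as $d$ (symbols assumed here for illustration), the convolution diffusion term described above reads:

```latex
% Nonlocal (convolution) diffusion operator replacing d\,\Delta u:
d\,\bigl[(k * u)(x,t) - u(x,t)\bigr]
  \;=\; d\left[\int_{\mathbb{R}} k(x - y)\,u(y,t)\,dy \;-\; u(x,t)\right],
% where k(x - y) is the probability that an individual at y moves to x.
```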
For system (2), the sufficient condition for (2) to have a mild invasion is (see [6]), but our condition in this paper for system (6) is , which reveals the fact that the nonlocal diffusion of either the pioneer species or the climax species does affect the climax invasion and wave propagation. Please see Section 5 for the discussion. The remainder of this paper is organized as follows. In Section 2, we give some preliminaries about the equilibria, and the system is transformed into a cooperative one. In Section 3, we prove the existence of traveling wavefronts by using an iteration scheme combined with a pair of admissible upper and lower solutions, which can be constructed explicitly, and thus a criterion for the existence of traveling wavefronts is obtained. We also give a discussion of the asymptotic behavior of the traveling wavefront tail as in Section 4. Finally, we give some concluding discussion in Section 5. 2. Preliminaries It is evident that is a trivial equilibrium of (6). The system (6) has at least four equilibria and at most six equilibria. The existence of nonnegative steady states depends on the locations of the three nullclines: The long-term behavior of solutions to (6) can be qualitatively different depending on the number, distribution, and types of equilibria. The dynamics of the system (6) are of course very rich and complex. However, in this paper, we will only consider the following case: The condition follows as a consequence. Under the previous assumption, (6) has four nontrivial equilibria: , , , and except for , where It is obvious that . We further assume that for technical reasons. See Figure 2 for this situation. As mentioned in the introduction, we are interested in the coexistence of the two species. That means that we will seek traveling wavefronts connecting equilibrium and equilibrium . By making changes of variables , and dropping the tildes, system (6) becomes and the equilibria , are changed into , , respectively, where , .
3. Existence of Traveling Wavefronts A traveling wavefront of (6) connecting equilibria and can be changed into a traveling wavefront of (11) connecting and . Therefore, we consider the system (11) hereby. A traveling wave solution of (11) is a solution with the form and , where and is a wave speed. A traveling wavefront is a traveling wave solution which has finite limits . Denoting the traveling wave coordinate still by , we derive the wave profile system from (11): Associated with (12), we consider its solutions subject to the following boundary value conditions: For , , implies ; implies but ; implies . Furthermore, the norm in is the Euclidean norm. Define For some constants , , letting , , we define an operator by Then, (12) can be written as an equivalent form: Denote by It is obvious that a traveling wave solution of the problem (12) and (13) is a fixed point of and vice verse. The following lemma states the monotone property of . Lemma 1. Assume that holds, for sufficiently large and , with and , ; one has(i),, (ii),for all . Proof. In order to prove (i), let , . Then For , and sufficiently large , it follows that , . Thus, if and for , we have For (ii), we know that and for . It follows that From , , we have Note that for . This leads to The proof is complete. The conclusion of the following lemma is direct. Lemma 2. Assume that , are sufficiently large. For with nondecreasing on , , are also nondecreasing on . We can easily see that also enjoys the same properties as those for settled in Lemmas 1 and 2. Let and It is clear that is a Banach space equipped with the norm defined by . Definition 3. A pair of continuous functions , is called an upper solution and a lower solution of (12), respectively, if there exists a set such that and are differentiable in and the essential bounded functions satisfy for . In what follows, we assume that (12) has an upper solution and a lower solution , such that(P1) for ;(P2), ;(P3) and are nondecreasing. 
Define the following profile set by It is obvious that is nonempty. For and , define , and Lemma 4. For , the functions , and , defined by (26) satisfy(i); (ii) for ;(iii) and is a pair of upper and lower solutions of (12);(iv) and are continuously differentiable on . Proof. We only give the argument for , and the situation for can be obtained by mathematical induction. From Definition 3, we obtain Let , . For any , there exists some such that or . We then derive where . By similar arguments, we can get The previous arguments implies that the conclusion (ii) holds. From the monotone property of , we can easily obtain that , are nondecreasing for , and therefore the conclusion (i) holds. For , by Lemma 1, we have In a similar way, we can prove that for . This indicates that and are a pair of upper and lower solutions of (12). The conclusion (iv) is obvious. The proof is complete. Lemma 5. , and the convergence is uniform with respect to the decay norm . Proof. We have from Lemma 4 that the following limit exists: It is easy to know that is a closed and convex set. By the nondecreasing property of , we have . In the following, we prove that the convergence is uniform with respect to the decay norm. Since for any , there exists a , such that for all , Now, we consider the sequences for . Note is nondecreasing on , and thus By , there exists a positive constant , such that for . Similarly, we can prove that there exists a positive number , such that for . From the previous estimates, we know that is equicontinuous on with respect to the supremum norm. On the other hand, we have from Lemma 4 (ii) that is uniformly bounded. By Arzéla-Ascoli theorem, there exist subsequences of which are uniformly convergent in . Without loss of generality, we still express this subsequence as . Thus, there exists a positive integer , such that Furthermore, we Summarizing the previous arguments, for we have The proof is complete. Theorem 6. 
Assume that holds; if (12) has a pair of upper and lower solutions that satisfy (P1)–(P3), then the system (11) has a traveling wavefront satisfying (13). Proof. By the Lebesgue's dominated convergence theorem and the iteration scheme (26), we have Therefore, is a fixed point of , which also satisfies (12). Furthermore, (P2) indicates that satisfy . On the other hand, we have from the monotonicity of that exists. Furthermore, since , we know that . By using L’Hôspital’s rule, we obtain Similarly, one can obtain . That is, is an equilibrium of (12). Note that the assumption (10) implies that there is only one positive equilibrium of (12) satisfying . Therefore, and is a traveling wavefront satisfying (13). The proof is complete. In order to construct a pair of admissible upper-lower solutions for (12), we linearize (12) at and obtain Thus, we consider the following characteristic equation: Note that The convex property of leads to for any . In view of the previous observation, we have the following lemma directly. Lemma 7. The following conclusions are true. (i)There exists a , , such that (ii)For , for .(iii)For , the equation has two zeros such that Now we are ready to construct the upper solution of (12). Lemma 8. Define Then for , is an upper solution of (12). Proof. Let be such that Notice that , we have . If , , , we have from the fact for and that Similarly, we have from the fact for , and for that If , , , we have The proof is complete. Let satisfying , we can obtain from (45) that Lemma 9. Define where is a constant to be chosen later. Then is a lower solution of (12). Furthermore, is a lower solution of (12) satisfying Proof. Let be such that , it follows that If , , , and for , we have Let By the fact that we have Note that (58) leads to for ; it follows that and thus Therefore, Let sufficiently large; we can have hence, If , , , and for , we have From the previous arguments, we obtain that is a lower solution of (12). 
By Lemma 4, we can get that is also a lower solution. Furthermore, for , , by direct calculation, we have for . Therefore, for . Similarly, we have for . The proof is complete.

Theorem 10. Assume that and hold. Then for any , the system (11) has a traveling wavefront with speed , which connects and .

Proof. The conclusion for can be obtained from the previous discussions. We only need to establish the existence of wave fronts when . Let with . For , (12) with admits a nondecreasing solution such that Without loss of generality, we assume that . Obviously, , , and satisfy By the same argument as in Lemma 5, we can obtain that is uniformly bounded and equicontinuous on ; using the Arzelà-Ascoli theorem and the standard diagonal method, we can obtain a subsequence of , still denoted by , such that uniformly for in any bounded subset of , as . Clearly, is nondecreasing and . By the dominated convergence theorem and (67), it follows that Since and exist, using L'Hôpital's rule leads to Thus, is a traveling wavefront of the system (11) connecting and .

Remark 11. We say that the is the minimal wave speed in the sense that (11) has no traveling wavefront with . We briefly explain this in the following. In fact, the linearization of (12) at the zero solution is (41), and the function is obtained by substituting in the second equation of (41). For , we know from (ii) of Lemma 7 that for any . We have from the second equation of (12) and the second equation of (41) that (12) cannot have a solution that satisfies .

Theorem 12. Assume that and hold. Then for any , the system (6) has a traveling wavefront with speed c, which connects and .

4. Asymptotic Behavior for Traveling Wavefronts

In this section, we discuss the asymptotic behavior of the traveling wavefronts obtained in the previous section as .

Theorem 13. Let be a traveling wavefront of (11) determined by Theorem 10; then where is the smallest root of the characteristic equation (42).

Proof.
Note that Then we have which implies that Note that we have from and that which is uniformly on . By the second equation of (12), we have from (73), and convergence theorem that
{"url":"http://www.hindawi.com/journals/aaa/2013/725495/","timestamp":"2014-04-17T10:25:10Z","content_type":null,"content_length":"1048595","record_id":"<urn:uuid:b0ee6ff9-3574-4158-bbc8-8c78d56092c6>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00643-ip-10-147-4-33.ec2.internal.warc.gz"}
Student Learning Outcomes

Student Learning Outcomes (SLOs) are a means to determine what students know, think, feel, or do as a result of a given learning experience.

Computer Science

CSCI 110
1. Students will be able to use and differentiate between basic concepts of computer hardware and software.
2. Students will be able to use data representation for the fundamental data types and perform conversions between binary, hexadecimal, and decimal representations.
3. Students will be able to read, understand, and trace the execution of programs written in the C language.
4. For a given algorithm, students will be able to write the C code using a modular approach.

CSCI 140
1. Students will be able to analyze problems and design algorithms in pseudocode.
2. Students will be able to read, understand, and trace the execution of programs written in the C++ language.
3. Students will be able to use given classes and virtual functions in a class hierarchy to create new derived classes and the code that uses them.
4. For a given algorithm, students will be able to write modular C++ code using classes in an OOP approach.

CSCI 145
1. Students will be able to analyze problems and design appropriate algorithms.
2. Students will be able to code provided algorithms using the Java language.
3. Students will be able to provide code for a Java class given objects' attributes and behaviors.
4. Students will be able to use existing Java classes to perform required tasks.

CSCI 150
1. Students will be able to manipulate data at the bit and byte levels.
2. Students will be able to identify the components of a computer and the organization of those components.
3. Students will be able to describe disk storage systems and file systems.
4. Students will be able to use assembly language instructions to write small programs.

CSCI 170
1. Students will be able to do basic UNIX OS administration tasks, including account management.
2. Students will be able to use the UNIX file system.
3. Students will be able to perform basic UNIX networking tasks, including setting up a LAN using NIS.
4. Students will be able to use UNIX programming tools: compilers, the Make utility, debugger, profiler, and version control.
5. Students will be able to read, understand, and write short scripts in a UNIX shell.

CSCI 190
1. Students will be able to use truth tables for propositional calculus.
2. Students will be able to use mathematical induction and recursive definitions and algorithms.
3. Students will be able to understand the terminology of finite graphs and trees and use the basic algorithms for traversal, shortest path, and graph coloring.
4. Students will be able to use basic counting techniques, combinatorics concepts, and binomial coefficients.

CSCI 210
1. Students will be able to use Boolean algebra for algebraic simplification.
2. Students will be able to use truth tables, maps, and tabular reduction methods in combinational network design.
3. Students will be able to use state tables and diagrams in sequential network design.
4. Students will be able to differentiate between combinational and sequential logic networks.

CSCI 220
1. Students will be able to analyze problems and select the appropriate data structure.
2. Students will be able to estimate running time given an algorithm.
3. Students will be able to implement and use linear data structures, including sets, stacks, queues, and lists.
4. Students will be able to implement and use trees, including binary trees, binary search trees, and heaps.

CSCI 230
1. Students will be able to implement efficient searching techniques, including hash tables and skip lists.
2. Students will be able to implement and analyze running time for various sorting algorithms.
3. Students will be able to represent graphs and implement well-known graph algorithms.
4. Students will be able to differentiate between the costs of memory access and disk access.
{"url":"http://math.mtsac.edu/slo_cs.html","timestamp":"2014-04-18T05:42:40Z","content_type":null,"content_length":"15352","record_id":"<urn:uuid:1958b5af-76f6-4fd1-a8cf-e443f06c6573>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00178-ip-10-147-4-33.ec2.internal.warc.gz"}
A measure of the likelihood of a given process occurring in an accelerator, i.e., of particles colliding or otherwise interacting. Cross-section is expressed as an effective target area, which can be related to the quantum-mechanical probability of interacting by multiplying by factors such as the flux of particles entering the interaction region. The basic unit of cross-section for particle physics is the barn (b), equal to 10^-24 cm^2. Typical hadron collision cross-sections are measured in millibarns. Typical neutrino interaction cross-sections are far smaller, of order 10^-38 cm^2 at GeV energies.
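The conversion from a cross-section to an expected interaction count can be sketched numerically; the function name and the beam numbers below are illustrative assumptions, not values from the entry:

```python
BARN_CM2 = 1e-24  # 1 barn in cm^2, as defined above

def expected_events(sigma_barns, integrated_flux_per_cm2):
    """N = sigma * (integrated flux), with sigma converted from barns to cm^2."""
    return sigma_barns * BARN_CM2 * integrated_flux_per_cm2

# A hypothetical 50 mb hadronic cross-section exposed to 10^36 particles/cm^2:
n = expected_events(50e-3, 1e36)
print(n)  # about 5e10 expected interactions
```

Because the barn is an area and the integrated flux is an inverse area, the product is a dimensionless count.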
{"url":"http://www.daviddarling.info/encyclopedia/C/cross-section.html","timestamp":"2014-04-18T08:48:27Z","content_type":null,"content_length":"5988","record_id":"<urn:uuid:badd45fa-b283-47a9-b8e0-4fc4d99ab1dd>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00482-ip-10-147-4-33.ec2.internal.warc.gz"}
Patent US5068788 - Quantitative computed tomography system 1. Field of the Invention This invention relates to the field of quantitative computed tomography and, in particular, to a system for reproducibly measuring the CT number of in vivo tissue samples. 2. Description of the Related Art Computed Tomography (CT) was developed as a method for qualitatively analyzing soft tissues by non-invasively obtaining three-dimensional pictures of affected areas in a patient. Diagnosis of many pathological conditions, however, requires quantitative analysis of the affected organs or tissues. This has historically necessitated painful and often dangerous invasive methods of obtaining tissue samples for in vitro analysis, for example to determine the relative density or composition of the affected organs. Because of the non-invasiveness of CT analysis and the ability to scan a precise three-dimensional cross section of the patient, it has been proposed to use the data resulting from a CT scan quantitatively as well as qualitatively. One type of condition recognized as suitable for quantitative CT analysis is osteoporosis. Osteoporosis evaluation involves the measurement of bone mineral density in affected spinal tissues. However, quantitative CT analysis of osteoporosis has suffered from practical disadvantages which have severely limited its accuracy, despite the potential advantages obtained by such non-invasive quantitative CT analysis, and the widespread availability of CT scanners. The primary problem in utilizing quantitative CT analysis has been calibration of the CT numbers obtained in a CT scan to account for patient variability and positioning of the patient. The attenuation of the X-ray spectrum used in a CT scan is affected not only by tissues in the region of interest, for example a particular vertebra, but also by the thickness and density of tissues around the region of interest which also attenuate the X-ray beam. 
Since the density of the affected bone tissues is itself variable, it is not possible to calibrate the CT scan without an additional reference standard.

Recently, attempts have been made to correct for the problem of CT number variability of a tissue situated in a heterogeneous surrounding of other tissues by providing a reference, known as a "phantom", located in the path of the CT scan but external to the patient. Such reference "phantoms" have been partially successful in accounting for effects of X-ray beam "hardening" and scatter that are correlated with patient variability, but are nevertheless subject to such problems as local differences in X-ray spectra and in scatter distribution between the phantom and the target tissue, variations resulting from patient movement with respect to the reference phantom, and volume averaging difficulties resulting from the current practice of taking the mean of the CT numbers.

In order to overcome the problems associated with the use of external reference phantoms, it has been suggested that internal reference tissues located near the affected tissues could be utilized. Potential in vivo reference tissues include fat and muscle, whose relatively invariable densities and close proximity to vertebral bone tissues make them suitable candidates for use in calibrating CT scans without the need for external reference phantoms. However, it has heretofore been impossible to reproducibly obtain meaningful CT numbers for in vivo fat and muscle tissues because no method has been available for eliminating the inevitable effects of fat and muscle tissue mixing and for accounting for the nonuniform distribution of such tissues throughout the CT slice. Any quantitative distribution obtained from a CT scan for areas of fat and muscle will invariably include areas representing fat and muscle mixtures, as well as background from other tissues in the path of the scan.
Because the mixing is essentially random, CT numbers representing such mixtures are of no use in deriving a reference standard for correlating CT numbers with tissue density. The invention provides a method and means for solving the problems of the prior art by providing a way of analyzing a suitably chosen CT number histogram to delete the effects of background scatter and intermixing of tissues. CT numbers of individual tissues are obtained by locating leading edges of histogram distribution curves in regions of the histogram representing the individual tissues. The leading edge values are used as a starting point for construction of model curves representative of pure tissue samples against which the actual histogram distribution can be measured, for example by calculating and adjusting moments of the curve, following subtraction of assumed values for background and intermixing derived from the leading edge values. The adjusted CT numbers are used to create a reference plot by which other CT numbers can be converted to a physical quantity such as density for use in analyzing other tissues. FIGS. 1(a) through 1(e) illustrate the process by which a meaningful curve is obtained from an initial histogram distribution. FIGS. 2(a) and 2(b) illustrate the process of converting CT numbers to known quantities for use as a reference standard. FIGS. 3(a) and 3(b) are flowcharts detailing the method of the preferred embodiment. FIGS. 1(a) to 1(e) and FIGS. 2(a) and 2(b) illustrate the effects of implementing the preferred method, shown in FIGS. 3(a) and 3(b). Each of the method steps is implemented by means of appropriate software and hardware associated with a CT scanner. The first step (100) is to take a CT scan which includes an affected vertebra and proximate regions of fat and muscle tissue. The region of interest is selected to include as high a percentage of fat and muscle in the area of the affected bone tissue as possible. 
A physician or qualified lab technician will be able to select a likely region for analysis and enter the spatial parameters of the three-dimensional region into a computer. The higher the percentage of fat and muscle in the defined area, the more accurate the calibration. The computer then calculates a histogram of the pixel values resulting from the CT scan of the selected region, as indicated by step 101. The pixel values correspond to "CT numbers" which represent x-ray beam attenuation by the various tissues in the path of the beam. At this point, histogram values which are clearly the result of system errors and background effects can be filtered out by averaging the histogram over groups of adjacent CT numbers to eliminate sharp discontinuities in the histogram, as indicated by step 102. FIG. 1(a) shows an exemplary filtered histogram of CT number distribution in the area of a vertebra. The distribution of CT number frequencies results not only from individual fat and muscle tissues, but also from the inevitable mixing of fat and muscle, as well as the effects of attenuation and scatter of the X-ray by surrounding body tissues. Actual histograms will, of course, vary from patient to patient and according to the type of scanner employed. In a preferred embodiment of the invention, separated muscle and fat CT numbers are derived from this histogram for use in analyzing the vertebral tissues. The CT numbers obtained are then plotted against known density values for fat and muscle to obtain a CT number versus density reference curve 16, shown in FIG. 2(b). The manner in which the reference curve is obtained will be described in detail below, but first a description of the method by which the adjusted curves are obtained will be given. After obtaining a filtered histogram, the search parameters for locating fat and muscle CT numbers for the histogram must be defined (step 104). This is accomplished by recognizing certain general characteristics of such histograms. 
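The filtering in step 102, averaging the histogram over groups of adjacent CT numbers, can be read as a moving average; that reading, the window size, and the toy data are assumptions for illustration:

```python
def smooth_histogram(freqs, window=3):
    """Average each bin with its neighbors to suppress sharp discontinuities
    (step 102). Bins near the ends use a correspondingly shorter window."""
    half = window // 2
    out = []
    for i in range(len(freqs)):
        lo = max(0, i - half)
        hi = min(len(freqs), i + half + 1)
        out.append(sum(freqs[lo:hi]) / (hi - lo))
    return out

# An isolated spike is spread over its neighbors:
print(smooth_histogram([0, 0, 9, 0, 0]))  # [0.0, 3.0, 3.0, 3.0, 0.0]
```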
As shown in FIG. 1(a), the CT number for fat clearly lies in the area of the histogram curve designated by reference numeral 1, having CT numbers between approximately -140 and -20, while the CT numbers for muscle lie between approximately 20 and 100. It has been found that, depending on the scanner, the above-mentioned ranges will account for virtually all spinal muscle and fat peaks. Thus, the search is defined to include those areas of the histogram, as will be explained in more detail below. Furthermore, the search routine makes use of the fact that, in the absence of background effects and intermixing, the fat and muscle distributions would be separate "gaussian" curves. The effects of background attenuation and scatter can be seen in the non-zero CT number frequencies in areas along the CT number axis outside of the "fat" and "muscle" areas, and the influence of mixing is found in the deviation of the distributions from a "delta" or gaussian distribution. This is especially true on the right side of the fat "peak" and on the left side of the muscle "peak", as shown in FIG. 1(a), because the CT number range for muscle is located to the right of the range for fat.
In order to locate the leading edges, the search parameters must be defined, as noted above. The most critical search parameters are, in no particular order, threshold, search start, search direction, search end, and base width. The first parameter mentioned, "threshold" is a minimum slope expected to be indicative of the beginning of a peak. The threshold is defined according to the size of the region of interest. The larger the region of interest, the smaller the slope deemed necessary to define a peak. The search start is defined to be outside of the CT number region of the histogram in which potential fat and muscle peaks have been observed to lie. In the histogram illustrated in FIG. 1, a search start for fat is chosen as a number less than -140 and a search start for muscle is chosen as a number greater than +100, illustrated in FIG. 1. It should be noted that by starting the search outside of the area of the histogram which includes the target tissue, an important advantage is obtained over possible alternative methods of analysis. This is because the search does not require a precise starting point, as would be the case if one attempted to immediately inscribe the highest probability fat and muscle areas in order to obtain a mean CT number. The search at this point is merely for "leading" edges, and not for exact muscle or fat locations. The search direction is then selected, for example in terms of a search "increment" . In the preferred embodiment, a search increment for fat of +1 means that the search will retrieve each CT number frequency for analysis, moving to the right along the histogram as indicated by arrow 7 in FIG. 1(b). Similarly, the search increment for muscle would be negative as indicated by arrow 8. An end search point is also selected as part of step 103. If the search proceeds beyond the point at which a leading edge is likely to be found, the search is terminated. 
In the illustrated histogram, +20 would be a reasonable endpoint for a muscle search, while 60 would be a reasonable endpoint for a fat search. Finally, a "basewidth" is selected. The "basewidth" essentially is the width at twenty percent of the height of an ideal pure fat or muscle histogram distribution. For both fat and muscle, the base width is selected to be 80 for the histogram shown in FIG. 1. This "basewidth" will be used to locate a presumed trailing edge for further analysis. Once the search parameters have been selected and defined as variables in the computer program, the search begins (step 104). As the histogram values are retrieved from a memory buffer, the derivative of the distribution curve is calculated and compared with the "threshold" (step 105). If the derivative exceeds the threshold for a sufficient number of increments, a leading edge is deemed to be detected (step 106). Once the location of the leading edge is established, the CT number frequency to the left of the curve is used to establish a baseline indicative of the background contribution at the leading edge. As noted above, if the fat and muscle CT numbers were not effected by background effects, the baseline would be zero. Furthermore, if there were no background or fat and muscle mixing, the frequency on the trailing side of the curve would theoretically be zero. The method of the invention assumes that the effects of tissue mixing linearly decrease across the peak, and everything below the theoretical base of the base of the peak is in effect deleted. The baseline contribution of background effects is determined, in effect, by taking the frequency value or height 11 (See FIG. 1(c)) of the curve at the leading edge (step 107). The height of the leading edge would be zero if no background and no mixing were present. 
To ensure a more accurate value for the background baseline, an average of values immediately preceding the leading edge may be The value of the "basewidth", described above, is then added to the CT number of the leading edge to locate an assumed trailing edge (step 108). Again, it is presumed that the height of the trailing edge would be zero in the case of a pure tissue sample with no background or tissue intermixing. At this time, the presumption that the effects of intermixing of the CT numbers decrease linearly from the trailing edge to the leading edge is utilized . The height of the trailing edge minus the height of background baseline is taken to reflect the amount of tissue intermixing at the trailing edge of the tissue "peak". Essentially, in steps 109 and 110, a line is drawn from the leading to the trailing edge, and everything below the line is deleted from the histogram. This is done by calculating the slope which such a line would follow, incrementing along the X-axis and subtracting the base line value plus the Y-value from the Y-value of the histogram curve (for example, if the histogram curve is defined by the function f(x), the slope of the line is m, and the base line value is b, then the subtracted curve f'(x) would be given by f'(x)=f(x) -(mx +b)). The result is a uniquely adjusted histogram curve which substantially eliminates the effects of both background and intermixing. The same process of background subtraction, using appropriate search parameters, is performed for both fat and muscle regions of the histogram. By drawing a line as described for the fat region, the effect of intermixing is extracted, making accessible as much of the intermixed fat CT numbers as possible for subsequent analysis, together with the CT numbers representative of fat tissue in the pure fat region of the histogram, indicated by the portion of the fat curve which is above the line. 
Similarly, by drawing the line in the muscle region of the histogram, a corresponding extraction of the effect of intermixing occurs, resulting in a separation of the intermixed muscle CT numbers for subsequent analysis. The background-subtracted curve is then subjected to a curve fitting algorithm (step 112) to determine its deviation from a true gaussian curve. As explained above, the parameters or "moments" of the gaussian curve are indicative of the parameters of a curve uninfluenced by surrounding tissue and tissue intermixing. In step 116, the mode of the curve is selected on the basis of this analysis, and taken to be the measured CT number for use in calibrating the scanner. A variety of statistical analysis techniques are known which may be used to analyze the curve for fit and to find the mode. It is also possible to analyze the curve in terms of other gaussian moments, although no other moment is believed to be as meaningful as the mode, which is the CT number of highest frequency. The specific preferred method of analysis is the "chi-squared" test, details of which are known to those skilled in the art and form no part of this invention. Once the curve has been transferred to "chi-squared space," its width, defined to be the width of the curve at 20% of its height, is compared with a theoretically acceptable width, for example the above-described basewidth, and the curve is rejected if it is not deemed to be representative of pure fat or muscle. The width obtained during the curve fitting analysis can then be substituted for the originally presumed base width, and the process of subtraction can be repeated in an iterative process for greater accuracy (step 115). FIG. 1(e) shows the results of the iterative process and also includes a histogram of trabecular tissue for comparison. It is also possible to use parameters other than the base width for analyzing whether the curve is acceptable. 
For example, the actual slope of the leading edge may be a more reliable indicator of the acceptability of the curve than the width. Once a CT number for fat and muscle has been obtained, the numbers may be used to obtain a calibrated plot of CT number versus density (step 118). Referring to FIGS. 2(a) and 2(b), the CT numbers are plotted along the Y-axis of a graph of CT number versus "equivalent bone density." The equivalent bone densities of both muscle and fat are known and are plotted along the X-axis of the graph. A line 16 is then drawn which shows the relationship between measured CT number and equivalent bone density. The equivalent bone densities will of course be different depending on the energy of the scanning beam. Also, it is noted that while the preferred equivalent bone densities are expressed in units of mg/cc of cortical bone, numerous other units of equivalent bone density are also known. As shown in FIG. 2(b), line 16 can then be used to obtain the equivalent bone density of the subject trabecular tissue. The resulting density values are highly reproducible, and are not dependent on precise positioning of the patient, unlike values obtained by prior art methods. Reproducibility studies on 33 images with intentional region of interest mispositioning of 1.5 to 2.5 mm yielded a sample standard deviation of 1-2 CT numbers for trabecular bone. This translates to a precision of better than 1% for normal and 1-2% for osteoporotic patients. Prior art methods yielded a reproducibility error slightly less than 2% for normal patients and as high as 3-4% for osteoporotic patients. In conclusion, the invention provides a useful tool for convenient, reliable derivation of quantitative CT values for an in vivo tissue. The tool does not require external calibration phantoms, and is therefore practical for clinical use. The invention permits a significant improvement in reproducibility due to the decreased dependence on patient positioning. 
It is contemplated that the method and means of the invention will also be applicable to the evaluation of conditions involving such tissues as lung, kidney, or liver tissues, and to tumor evaluation. Numerous other modifications will also occur to those skilled in the art. Therefore, the invention is not to be limited to the described embodiment but is to be defined solely by the appended claims.
{"url":"http://www.google.com/patents/US5068788?ie=ISO-8859-1&dq=6188988","timestamp":"2014-04-18T17:18:50Z","content_type":null,"content_length":"99600","record_id":"<urn:uuid:34cf1452-be52-4b7c-a237-db42ca11c12f>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00237-ip-10-147-4-33.ec2.internal.warc.gz"}
Area of Circles - Concept
The area of a circle is derived by dividing a circle into an infinite number of wedges formed by radii drawn from the center. When these wedges are rearranged, they form a rectangle whose height is the radius of the circle and whose base length is half of the circumference of the circle. The area of a circle is also used in sectors, segments, and annuli.
If we want to find the area bound by a circle, well we're going to have to chop this up into pieces of something that we know. And we know the area of a parallelogram is equal to its base times its corresponding height. So what I'm going to do is I'm going to imagine that I can cut this circle up into an infinite number of pieces. But since I can't draw an infinite number of pieces, I'm just going to start by drawing in let's say about 16 different pieces here. So if I could cut out all these pieces and rearrange them, what I would be able to do is I could, let's say we just looked at one of these pieces. So you're going to have the little curvature on the outside from the circle and that both of these are going to be a radius. So what I'm going to do is I'm going to set all of these up in opposing order and if I just continue doing this, I would be able to form a parallelogram where each of these is going to be one of the wedges that I cut out. So I said that my height here is going to be r and I need to find out what is my corresponding base. Well, if I look at this, this outside is part of my circle and this outside right here is also the other part of the circle. So if the whole circumference is equal to 2 pi r, then one of these bases is going to be half of this. And, if I divide this by 2, then half of my circumference is just pi times r. So this distance right here is pi times my radius. So if I know my height is r, and I know my base is pi times r, then I can say the area is just r times r which is r squared times pi.
So if I could cut this up into an infinite number of infinitesimally small wedges, I could create a rectangle where I know that my height is r and one of my bases is pi times r, and these areas would be equal. So I can say that the area of my circle, where I know the radius, is pi times the radius squared.
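The limiting argument in the transcript can be checked numerically by treating each wedge as a triangle (an approximation assumed here) and summing their areas:

```python
import math

def wedge_area_sum(r, n):
    """Approximate a circle of radius r by n triangular wedges with central
    angle 2*pi/n; each triangle has area (1/2) * r^2 * sin(2*pi/n)."""
    return n * 0.5 * r * r * math.sin(2 * math.pi / n)

r = 3.0
print(wedge_area_sum(r, 16))      # already close to pi*r^2 with the 16 wedges drawn in the video
print(wedge_area_sum(r, 100000))  # essentially pi*r^2
print(math.pi * r * r)            # the limit the rearrangement argument predicts
```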
{"url":"https://www.brightstorm.com/math/geometry/area/area-of-circles/","timestamp":"2014-04-19T19:37:19Z","content_type":null,"content_length":"66764","record_id":"<urn:uuid:32bc1a6d-f6ca-4a23-8c46-9e5a93f7e1e7>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00522-ip-10-147-4-33.ec2.internal.warc.gz"}
Summary:

UNIVERSITY OF REGINA
Department of Mathematics and Statistics

Speaker: D. Farenick
Date: 17 February 2006
Time: 3.30 o'clock
Location: College West 307.20 (Math & Stats Lounge)
Title: Local multiplier algebras

Abstract: The multiplier algebra M(A) of a (nonunital) C*-algebra A is the maximal unitisation of A as a C*-algebra. In the late 1970s, a number of very important results were obtained by G.A. Elliott that characterised those separable C*-algebras A for which every derivation of A is determined by a multiplier of A--that is, is the restriction to A of an inner derivation of M(A). The notion of local multiplier applies to any C*-algebra A; in this case, one considers the multiplier algebras of essential ideals of A. Via direct limits on the partially ordered set of essential ideals of A, the local multiplier algebra of A is obtained. This lecture will
{"url":"http://www.osti.gov/eprints/topicpages/documents/record/038/2500754.html","timestamp":"2014-04-16T23:06:43Z","content_type":null,"content_length":"7964","record_id":"<urn:uuid:6b287ce1-4534-4cff-98f8-49e825b4b2b5>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00287-ip-10-147-4-33.ec2.internal.warc.gz"}
Archives of the Caml mailing list > Message from Jacques Garrigue

[Caml-list] Comments on type variables

From: Jacques Garrigue <garrigue@k...>
Subject: Re: [Caml-list] Comments on type variables

From: Alain Frisch <frisch@clipper.ens.fr>
> I'd like to make a few comments on the manipulation of type variables
> in OCaml. They range from some points that could be worth mentioning
> somewhere (FAQ, manual) to "features" that may be considered as bugs,
> IMHO.

At least I'll try to define what is a feature and what is a bug.

In Caml, named type variables are handled in the following way:

1) Annotations and coercions

Type annotations (expr : t), let ... : t = expr, etc. are constraints applied to inferred types. As such, type variables in them have just an existential meaning, and they only affect sharing. In an alias (t as 'a), 'a is handled as an instance variable. Since sharing can happen with any other annotation in the same expression, we have to be careful when generalizing types. Caml chooses to only allow generalizing them at the level of the whole expression/definition. That is, just like for recursive functions, named type variables inside an expression are monomorphic during the typing of this expression. If you don't need sharing, you can use "_" as variable name, which can be generalized as needed.

2) Type definitions

All type variables are universally bound in the head of the definition. They may be constrained, but only explicitly. That is, they are quantified universally:

  type 'a t = texp1 constraint 'a = texp2
  type t = /\'a < texp2. texp1

where 'a is bound to be an instance of texp2.

3) Interfaces

The semantics is the same as for type definitions, but the quantification is implicit. Constraints may appear from the use of

  val f : (#c as 'a) -> 'a
  f : /\'a < #c. 'a -> 'a

4) Classes
Inside a definition group the quantification is existential, even in interfaces (some constraints may be inherited, or appear due to mutual recursion). A small discrepancy appears in constraints on the class type: class [tvars] c : ctype = cexpr Here tvars are shared in both ctype and cexpr, but variables appearing in ctype are not shared with does appearing in cexpr. I'm not sure of why it is so, but Jerome Vouilon may be able to 5) Polymorphic methods They form an exception to above rules. While their types appear inside class definitions or descriptions, they are quantified explicitly at the level of the method: method fold : 'a. (t -> 'a -> 'a) -> 'a -> 'a = ... In interfaces, the quantification is implicit for newly introduced method fold : (t -> 'a -> 'a) -> 'a -> 'a As you can see the exception is in the location of the quantification, otherwise they behave like type of value descriptions. 6) Local modules They do not have any specific behaviour, but definitions inside them have their type variables quantified independently, as you would expect if the module were defined at toplevel. So there are lots of cases to consider, but I would not describe it as messy: the basic idea is the scope is the definition, and you cannot generalize before getting out of the scope. > Generalization of explicit variables > ------------------------------------ > Explicit variables are generalized only at toplevel let-declarations > (see bug report 1156: > http://caml.inria.fr/bin/caml-bugs/open?id=1156;page=3;user=guest): > # let f (x : 'a) : 'a * 'a = (x,x) in (f 1, f true);; > This expression has type bool but is here used with type int Yes. This is a side-effect of using named variables. > Variables do not add polymorphism > ----------------------------------- > Contrary to the intuition one could have, variable never allow > more polymorphism (exception for polymorphic methods); indeed, they > _restrict_ the polymorphism. Yes. 
This is a property of ocaml that dropping parts of a term can only improve its polymorphism. > One could expect polymorphic recursion to be allowed when > function interfaces are explicitely given; this is not the case: Yes. Polymorphic recursion is not a magical feature. The existential semantics of type annotations do not allow to infer anything about polymorphism from a type annotation. > A paragraph in the FAQ is somewhat misleading: > << Polymorphism appears when we introduce type schemes, that is types > with universally quantified type variables. > ... > The polymorphism is only introduced by the definition of names during > a let construct (either local, or global). > >> > The last sentence seems to contradict the first; the programmer doesn't > introduce type schemes. The first instance is about types, the second about terms. In fact, to recover symmetry, the first sentence should probably be << Polymorphism appears when we introduce values whose types are schemes, ... >> > Variables do not guaranty polymorphism > -------------------------------------- Fully yes for expressions. Partially yes for type definitions and interfaces: a variable can only be constrained explicitly. If their is no constraint on a variable, then it is guaranteed polymorphic. > We have to rely on the module system: > # module M : sig val f : 'a -> 'a end = struct let f x = x + 1 end;; > Would it be possible to declare variables that have to remain polymorphic > (I don't know how to formalize this; just an idea ...), so as to reject > the following ? > let 'a. f (x : 'a) : 'a = x + 1;; Perfectly possible. But I'm not sure this is sufficient, as we are also interested by constrained variables. Maybe allow let 'a = texp. f ..... Another way is to integrate it with explicit polymorphism: let f : 'a. 'a -> 'a = ... The real question is what we are going to do with this information. If we don't use it, then interfaces are probably sufficient. 
But we could use it for polymorphism recursion, or first class > Variable scoping rules > ---------------------- > The scoping rules for type variables do not seem uniform. There are only two exceptions, and they concern classes. Otherwise the scoping is the single phrase, or a group of phrase connected by and for term definitions. Maybe restricting to a single phrase even for simultaneous term definitions would be more uniform. > In a structure, the scope of a variable in an expression is the toplevel > element: > # module M = struct let f (x : 'a) = x let g (x : 'a) = x + 1 end;; > module M : sig val f : 'a -> 'a val g : int -> int end > ... whereas for objects, the scope is the class definition: > # class o = object method f (x : 'a) = x method g (x : 'a) = x + 1 end;; > class o : object method f : int -> int method g : int -> int end Yes, a class is not a structure. A class is both an expression and a type definition, and as such rules common to expressions and type definitions apply. > Interaction with local modules > ------------------------------ > Inside a local module, type variables introduced outside the module are > invisible: > # let f (x : 'a) = > let module M = struct type t = 'a list end in ();; > Unbound type parameter 'a To be expected by the rules above. Functions with local modules are not functors. 'a is not a type constructor. > # let f (x : 'a) = > let module M = Set.Make(struct type t = 'a end) in ();; > Unbound type parameter 'a > # let f (x : 'a) = > let module M = struct let g (y : 'a) = y end in M.g 2;; > val f : 'a -> int = <fun> Side-effect of the independence of members in a structure. Is there a better semantics? > Type declaration flushes the introduced variables: > # let f (x : 'a) = let module M = struct type t end in (2 : 'a);; > val f : 'a -> int = <fun> That one is a bug. Bad one. And not immediate to correct, as type variables are reset all over the place. Thanks for reporting. 
> Too bad, when one need such a variable to restrict polymorphism: Indeed. If class typing were principal, there would be no such > Objects > ------- > At least it is clear for polymorphic methods where type variables > are introduced. > For class types, the explicit quantification is optional: > # class type o = object method f : 'a -> 'a method g : 'b -> 'b end;; > class type o = object method f : 'a. 'a -> 'a method g : 'b. 'b -> 'b end > # class type ['a] o = object method f : 'a -> 'a method g : 'b -> 'b end;; > class type ['a] o = object method f : 'a -> 'a method g : 'b. 'b -> 'b end > # type 'a t = { x : 'a; y : 'b };; > type 'a t = { x : 'a; y : 'b. 'b; } That one is not valid any more. In records, one might expect quantification to be existential (at the level of the whole record), so it seemed better to make the quantification explicit. > BTW, I don't understand the following error message: > # class type o = object method f : 'a -> 'a method g : 'a -> 'a end;; > The method g has type 'a -> 'b but is expected to have type 'c Such error message occurs when 'a or 'b is a universal variable, and you try to unify it with a type variable, against quantification rules. I must find a way to get better error messages. This is the case here because 'a is universal after the typing of f, but is reused (and shared) in the typing of g. I'm undecided about whether reuse on non-explicitly quantified universal variables should be allowed. This might be confusing if the intent was really sharing. > Also, why are class parameters enclosed in brackets ? > type 'a t = ... > but: > class ['a] t = ... > and not: > class 'a t = ... That's a purely syntactic problem class c = [int] t must be distinguished from class c = int t -> ... at parsing time. 
> '_a variables > ------------- > It is not possible to specify "impure" type variables: > # let z = let g y = y in g [];; > val z : '_a list = [] > # let z : '_a list = [];; > val z : 'a list = [] Indeed, there is no such concept as "impure" variable in input type expressions. What it should mean is not so clear. > Then, how do we specify an interface for the module struct let x = ref [] end ? > # module M = (struct let x = ref [] end : sig val x : '_a list ref end);; You must choose a type. Anyway, you will have to choose one at some time, so why wait? Note that it is illegal to export a non-generic type variable from a compilation unit. > Variable name in error message > ------------------------------ > Error messages would be more readable if they retained (when possible) > variable names: > # let f (a : 'a) (b : 'b list) = b + 1;; > This expression has type 'a list but is here used with type int I see. As Francois Pottier pointed out, it's not clear what it would mean when type variables are only quantified existentially, and can be merged all over the place. Note also that the unification algorithm may introduce new type variables, or create type which were not in the input. Hard to follow Hope this helps. To unsubscribe, mail caml-list-request@inria.fr Archives: http://caml.inria.fr Bug reports: http://caml.inria.fr/bin/caml-bugs FAQ: http://caml.inria.fr/FAQ/ Beginner's list: http://groups.yahoo.com/group/ocaml_beginners
{"url":"http://caml.inria.fr/pub/ml-archives/caml-list/2002/06/46298db6c0d6b9f2d28acf57b7e2b631.en.html","timestamp":"2014-04-19T06:21:51Z","content_type":null,"content_length":"17446","record_id":"<urn:uuid:c7739aaa-07d3-41b2-a6d1-9bce62dbfbba>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00068-ip-10-147-4-33.ec2.internal.warc.gz"}
Optimization Problem: Find a point on the line y = 2x+5 that is closest to the origin

The distance of the origin from a pt. on the line is
sqrt[(x)^2 + (2x+5)^2]. Square it, since the distance and its square are
minimized simultaneously, and then differentiate with respect to x and
find the minimum; the corresponding values of x and y will give you the
required point.

Thanks, that is correct. Except I'm not sure why you differentiate D^2
instead of D. Will differentiating D give you the same answer? My
textbook mentions the same thing, but it doesn't really explain why.

Yeah, differentiating D would also yield the same answer, but
differentiating a square root is a tedious job, so instead we diff. D^2.
If D is min. for some value of x, its square will also be min. at the
same value of x as compared to the other values. Got it?

Oh okay, that's pretty intuitive I guess.
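The method described in the thread is easy to verify numerically. Minimizing D^2 = x^2 + (2x+5)^2 = 5x^2 + 20x + 25 gives 10x + 20 = 0, so x = -2 and y = 1; the sketch below also checks that minimizing D and minimizing D^2 pick the same x:

```python
import math

# Squared distance from the origin to the point (x, 2x+5) on the line.
def d_squared(x):
    return x**2 + (2*x + 5)**2    # = 5x^2 + 20x + 25

# Setting the derivative 10x + 20 to zero gives the critical point.
x_min = -20 / (2 * 5)             # vertex of the parabola 5x^2 + 20x + 25
y_min = 2 * x_min + 5
print((x_min, y_min))             # (-2.0, 1.0): the closest point

# Minimizing D and minimizing D^2 pick the same x (square root is monotone):
grid = [i / 100 for i in range(-500, 501)]
best_d2 = min(grid, key=d_squared)
best_d = min(grid, key=lambda x: math.sqrt(d_squared(x)))
print(best_d2 == best_d)          # True
```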
{"url":"http://openstudy.com/updates/4fa02a27e4b029e9dc313057","timestamp":"2014-04-19T22:46:46Z","content_type":null,"content_length":"35284","record_id":"<urn:uuid:ed89c0f1-1ca3-4b90-a7fb-4b89f382145a>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00460-ip-10-147-4-33.ec2.internal.warc.gz"}
Recall The Ac Circuit Of Problem H8. Copy It To ... | Chegg.com

Here is HW8; it is needed to solve the following problem. Here is the problem:

Image text transcribed for accessibility:

Recall the ac circuit of Problem H8. Copy it to the diagram below (onto the
dotted lines a-n and a'-n') and name it "phase a".

Make an exact copy of the phase a circuit (replace nodes a and a' with b and
b'), and copy it onto the diagram above, changing the voltage to:

Make an exact copy of the phase a circuit (replace nodes a and a' with c and
c'), and copy it onto the diagram above, changing the voltage to:

Solve for the currents (a, b, c).

Solve for the line voltages (ab, bc, ca).

Complete the phasor diagram below to scale (9 phasors: 6 voltages, 3
currents). Use three colors.

Compute the neutral current (n).

Now disconnect the neutral (n-n'). Did the phase currents change?

Based on (h), is the neutral really necessary?

Define the circuit of (a) to be a 4-wire (abcn) balanced 3-phase system.
Define the modified circuit with the neutral removed to be a 3-wire (abc)
balanced 3-phase system. Complete the 3-wire circuit diagram.

In the 3-wire (abc) balanced 3-phase system, the loads and sources are said
to be "wye-connected", where

Re-draw the 3-wire circuit, keeping the wye-connected sources, but replacing
the wye-connected load elements with delta-connected load elements. Note
that node n' disappears.

Compute the load currents:

Using the values in (k), compute the phase currents (a, b, c). Do these
values check with the results in (d)? Should they?

Compute the 3-phase load in the wye. Complete the 3-phase power triangle.

Electrical Engineering
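The balanced-circuit facts behind the neutral questions can be checked with complex phasors. This is only a sketch: the magnitude and load impedance below are made-up placeholders, since the actual source voltages and loads come from HW8, which is not reproduced here.

```python
import cmath
import math

V = 120.0                # placeholder phase-voltage magnitude (not from HW8)
Z = complex(8, 6)        # placeholder per-phase load impedance (not from HW8)

# Balanced three-phase source: equal magnitudes, 120 degrees apart.
phase_voltages = [cmath.rect(V, math.radians(a)) for a in (0, -120, 120)]

# With identical loads, the phase currents are also 120 degrees apart...
phase_currents = [v / Z for v in phase_voltages]

# ...so they sum to zero: the neutral carries (essentially) no current,
# which is why disconnecting it changes nothing in the balanced case.
neutral_current = sum(phase_currents)
print(abs(neutral_current))          # ~0, up to floating-point rounding

# Line (line-to-line) voltage magnitude is sqrt(3) times the phase magnitude.
v_ab = phase_voltages[0] - phase_voltages[1]
print(abs(v_ab) / V)                 # ~1.732
```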
{"url":"http://www.chegg.com/homework-help/questions-and-answers/recall-ac-circuit-problem-h8-copy-diagram-onto-dotted-lines-n-n-name-phase-make-exact-copy-q4380780","timestamp":"2014-04-20T00:55:07Z","content_type":null,"content_length":"25905","record_id":"<urn:uuid:806ad471-1e53-4d09-9118-1cebfa865c55>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00487-ip-10-147-4-33.ec2.internal.warc.gz"}
Theory Seminar

Friday Dec. 13: Adam Marcus (Yale), WWH 1314. A solution to Weaver's $KS_2$
Friday Dec. 6: CCI meeting
Friday Nov. 22: New York Area Theory Day, 9:30AM - 4:10PM, Auditorium 109. Program
Friday Nov. 15: Leonid Gurvits (CUNY), WWH 1314. Breaking e^n barrier for deterministic poly-time approximation of the permanent and settling Friedland's conjecture on the Monomer-Dimer Entropy
Friday Nov. 8: CCI meeting
Thursday Nov. 7: Ali Kemal Sinop (IAS), WWH 1314. Towards a better approximation for sparsest cut?
Friday Nov. 1: Thomas Vidick (Newton Institute), WWH 1314. Three-player entangled XOR games are NP-hard to approximate
Friday Oct. 25: Ravishankar Krishnaswamy (Princeton), WWH 1314. Capacitated Network Design: Algorithms and Hardness
Friday Oct. 18: CCI meeting
Thursday Oct. 17: Brendan Juba (Harvard), WWH 1314. Efficient reasoning in PAC Semantics
Friday Oct. 11: Zeev Dvir (Princeton), WWH 1314. Incidence theorems and their applications
Friday Oct. 4: Aleksandar Nikolov (Rutgers), WWH 1314. Approximating Hereditary Discrepancy (without privacy)
Friday Sept. 27: Clément Canonne (Columbia), WWH 1314. Testing probability distributions using conditional samples
Thursday Sept. 19: Ben Lee Volk (Technion), WWH 1314. Boolean functions with small spectral norm, sparse boolean functions and decision tree complexity
Thursday Sept. 5: Ronald de Wolf (CWI), WWH 1302. Approximate degree bounds for all and almost all Boolean functions
{"url":"http://cs.nyu.edu/web/Calendar/colloquium/cs_theory_seminar/Fall-13.html","timestamp":"2014-04-16T05:13:10Z","content_type":null,"content_length":"40567","record_id":"<urn:uuid:f3a34ced-53a7-44a8-ab94-70f26654d10a>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00543-ip-10-147-4-33.ec2.internal.warc.gz"}
Homework Help

Posted by Jon on Friday, March 7, 2008 at 1:38pm.

Should the triangle be solved beginning with the Law of Sines or the Law
of Cosines? Then solve the triangle. Round to the nearest tenth.
A = 56 degrees, B = 38 degrees, a = 13.
Sines. I get confused on the formula. I know C = 86 degrees.

• Trig - Reiny, Friday, March 7, 2008 at 2:18pm

  To solve a triangle you must be given 3 independent bits of
  information: 2 sides and 1 angle, 3 sides, or 1 side and 2 angles.
  (Note that 3 angles is not 3 independent pieces of information, since
  if you know 2 angles, you automatically know the third.)

  General simple rule: if you are given a side and its opposite angle,
  use the sine law; if not, use the cosine law.

  So for yours, clearly the sine law.

• Trig (I know all of that, I just need to know how to use the formula) - Jon, Friday, March 7, 2008 at 2:47pm

  I know I need to use the law of sines but I don't really know how and
  which one to use, like sinA/a = sinB/b, sinB/b = sinC/c, and
  sinA/a = sinC/c. That's where I get confused.

• Trig - Reiny, Friday, March 7, 2008 at 2:56pm

  Clearly you are given a side and its opposite angle, a and A, so that
  is obviously the ratio you are going to use:
  sinA/a = sinB/b
  sin56/13 = sin38/b
  Cross-multiply and solve for b:
  b = 13sin38/sin56 = 9.65
  Do the same to find c.

• Trig - Jon, Friday, March 7, 2008 at 3:23pm

  b = 9.7 (I had to round). I'm not sure if I'm doing this right.
  sinB/b = sinC/c
  sin38/9.7 = sin86/c
  c = 9.7sin38/sin86
  c = 5.98, or 6 since I have to round to the nearest tenth.

• Trig - Reiny, Friday, March 7, 2008 at 3:30pm

  Why would you not stick with the exact ratio sin56/13, which uses the
  original numbers given? You used an answer that you obtained after you
  rounded off that answer, so you are compounding your error.

• Trig - Jon, Friday, March 7, 2008 at 3:36pm

  So to find c all I have to do is
  sin56/13 = sin86/c
  c = 13sin56/sin86
  c = 10.8

• Trig - Reiny, Friday, March 7, 2008 at 3:50pm

  sin56/13 = sin86/c
  Now cross-multiply:
  c sin56 = 13sin86
  Divide by sin56:
  c = 13sin86/sin56
  c = 15.6

  Check: in any triangle, your smallest side should be across from the
  smallest angle, and the largest side should be across from the largest
  angle.
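Reiny's Law of Sines computation can be reproduced directly (a quick check of the thread's arithmetic):

```python
import math

A, B, a = 56.0, 38.0, 13.0
C = 180.0 - A - B          # the angles of a triangle sum to 180 degrees

def sin_deg(deg):
    return math.sin(math.radians(deg))

# Law of Sines: a / sin A = b / sin B = c / sin C
b = a * sin_deg(B) / sin_deg(A)
c = a * sin_deg(C) / sin_deg(A)

print(C)              # 86.0
print(round(b, 1))    # 9.7
print(round(c, 1))    # 15.6

# Reiny's sanity check: the largest side is opposite the largest angle.
print(max(a, b, c) == c and max(A, B, C) == C)   # True
```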
{"url":"http://www.jiskha.com/display.cgi?id=1204915102","timestamp":"2014-04-19T07:09:39Z","content_type":null,"content_length":"10930","record_id":"<urn:uuid:eca54625-17fc-4c44-8001-cac92efede6b>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00259-ip-10-147-4-33.ec2.internal.warc.gz"}
Types of Angles Study Guide

LearningExpress Editors
Updated on Oct 5, 2011

Introduction to Types of Angles

Lesson Summary: This lesson will teach you how to classify and name
several types of angles. You will also learn about opposite rays.

Many of us use the term angle in everyday conversations. For example, we
talk about camera angles, angles for pool shots and golf shots, and
angles for furniture placement. In geometry, an angle is formed by two
rays with a common endpoint. The symbol used to indicate an angle is ∠.

Naming Angles

People call you different names at different times. Sometimes, you are
referred to by your full name, and other times, only by your first name
or maybe even a nickname. These different names don't change who you
are, just the way in which others refer to you. You can be named
differently according to the setting you're in. For example, you may be
called your given name at work, but your friends might call you by a
nickname. Confusion can sometimes arise when these names are different.

Just like you, an angle can be named in different ways. The different
ways an angle can be named may be confusing if you do not understand the
logic behind the different methods of naming. If three letters are used
to name an angle, then the middle letter always names the vertex. If the
angle does not share the vertex point with another angle, then you can
name the angle with only the letter of the vertex. If a number is
written inside the angle that is not the angle measurement, then you can
name the angle by that number. You can name the following angle by any
one of these names.

Right Angles

Angles that make a square corner are called right angles (see p. 26 for
more details about what makes an angle a right angle). In drawings, a
small square drawn at the vertex is used to indicate a right angle.

Straight Angles

Opposite rays are two rays with the same endpoint that form a line. They
form a straight angle. In the following figure, the two rays shown are
opposite rays.

Classifying Angles

Angles are often classified by their measures. The degree is the most
commonly used unit for measuring angles. One full turn, or a circle,
equals 360°.

Acute Angles

An acute angle has a measure between 0° and 90°. Here are two examples
of acute angles.

Right Angles

A right angle has a 90° measure. The corner of a piece of paper will fit
exactly into a right angle. Here are two examples of right angles.

Obtuse Angles

An obtuse angle has a measure between 90° and 180°. Here are two
examples of obtuse angles.

Straight Angles

A straight angle has a 180° measure. ∠ABC is an example of a straight
angle.

Practice problems for these concepts can be found at: Types of Angles
Practice Questions.

From Geometry Success in 20 Minutes a Day. Copyright © 2010 by
LearningExpress, LLC. All Rights Reserved.
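The classification rules above translate directly into a small function (a quick sketch; measures are in degrees):

```python
def classify_angle(degrees):
    """Classify an angle by its degree measure, per the rules above."""
    if 0 < degrees < 90:
        return "acute"
    if degrees == 90:
        return "right"
    if 90 < degrees < 180:
        return "obtuse"
    if degrees == 180:
        return "straight"
    raise ValueError("measure must be between 0 and 180 degrees")

for measure in (45, 90, 135, 180):
    print(measure, classify_angle(measure))
```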
{"url":"http://www.education.com/study-help/article/types-angles/","timestamp":"2014-04-17T16:44:15Z","content_type":null,"content_length":"89303","record_id":"<urn:uuid:d9a4958c-fa3f-4a4f-97c0-43bf6179fb8e>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00644-ip-10-147-4-33.ec2.internal.warc.gz"}
An Array-Based Algorithm for Simultaneous Multidimensional Aggregates
Yihong Zhao, Prasad M. Deshpande, and Jeffrey F. Naughton
Summary by: Steve Gribble and Armando Fox

One-line summary: Efficient relational on-line analytical processing
(ROLAP) algorithms for computing the data cube have been developed; this
paper presents a detailed analysis of an efficient multidimensional
on-line analytical processing (MOLAP) algorithm for gleaming the cube.
Data cubes are relevant, ergo efficient data cube computation is
relevant, and this algorithm seems to work quite well.

• this paper is a bear to read - they could do much better with more
  diagrams and less text
• I'd like to see an analysis which results in the number of I/Os as a
  function of memory size, data size, dimension size, and number of
  dimensions. This seems like it should be possible (although not easy),
  and it would give me a better idea of the tradeoffs in this algorithm.

Overview/Main Points

• Check out the data cube summary to read all about data cubes.
• ROLAP: use relational tables as the data structure. Thus, a cell in a
  multidimensional space is a tuple, with some attributes representing
  location in the space, and one or more representing the value of that
  cell. Efficient ways to compute the cube given this structure:
  - use a grouping operation on the dimension attributes to cluster
    related tuples (e.g. sorting or hashing)
  - use grouping performed on behalf of one sub-aggregate as partial
    grouping to speed computation of another sub-aggregate
  - compute an aggregate from another aggregate, rather than from the
    much larger base table (e.g. compute a 1-d aggregate from one of the
    2-d aggregates rather than the N-d data).
• MOLAP: stores data as sparse arrays. The position of the value within
  the sparse array encodes the position of the value in the
  multidimensional space.
• All about arrays:
  - the array is most likely far too big to fit in memory. Must
    therefore split the array into chunks. Row-major or column-major
    order is not efficient; each chunk should be a small N-d piece of
    the N-d space, and chunk size should correspond to the block size of
    the disk.
  - even with chunking, many cells of chunks will be empty - use
    compression for these sparse arrays.
  - the array may need to be built from non-array (e.g. table) data. Use
    a 2-pass algorithm: the first pass assigns chunk numbers to tuples,
    and the second pass clusters together tuples by their chunk number.
• Simple data cube computing algorithm:
  - Let's say we have a 3-D cube (dimensions labelled A, B, C) and we
    want to compute the aggregation across dimension C, which
    corresponds to a plane of size |A| * |B| of aggregate values.
  - One possibility is to sweep the entire plane down the C dimension.
    Better to use chunking: sweep a chunk of size |Ac| * |Bc| (where Ac
    is the chunk size in the A dimension and Bc is the chunk size in the
    B dimension) down, then that chunk's neighbor, etc., until we have
    slowly built up the aggregation plane in patchwork fashion. Less
    memory is required for this.
  - For a data cube, we need multiple aggregates (AB plane, BC plane, AC
    plane, A edge, B edge, C edge, and the total aggregation point). Far
    better to compute A from AB than A from the entire data set ABC,
    etc. Think of aggregates in cube computation as a lattice, with ABC
    as the root, and ABC having children AB, BC, and AC; AC has children
    A and C, etc. The goal is to embed a tree in this lattice, and
    compute each aggregate from its parent in the tree. The trick is to
    find the most efficient tree.
  - array algorithm: be sure to compute any lower-dimensional group-by
    from its higher-dimensional parent. For example, read in each chunk
    of ABC and use it to produce an aggregate chunk of AB. Once this
    chunk of AB is finished, output it to disk, and repeat for the next
    chunk of AB.
  - This algorithm is careful to use parent aggregates to save
    computation for child subaggregates, but it computes subaggregates
    independently (e.g. scan the entire ABC to make AB, then rescan ABC
    entirely to produce BC). Can do better.
• multi-way algorithm:
  - an n-d data cube has 2^n group-bys, one for each subset of the set
    of dimensions.
  - ideally, we have large enough memory to store all group-bys, and
    finish the cube in one scan. Unfortunately this is usually not
    possible; the goal is then to minimize the memory needed for each
    computation to achieve maximum overlap in aggregation computations.
  - single-pass algorithm: define an order over which you will scan the
    entire cube (row-major order, scanned across each dimension in some
    order, e.g. for a 2-d table AB, scan
    a0b0,a1b0,a2b0,a0b1,a1b1,a2b1,a0b2,a1b2,a2b2). One can compute how
    much memory is required to compute aggregates across each dimension.
    If the dimension order is ABC, we only need 1 chunk for the BC
    aggregate (as we always scan down the A dimension), but need 4 for
    the B dimension, and 16 for the C dimension. We still need a
    structure to coordinate overlapping computation - use a lattice with
    a minimum spanning tree. Nasty formulae are given for computing the
    memory required for a given dimension order and resulting minimum
    spanning tree, from which one can deduce the optimal dimension
    order.
  - If we don't have enough memory for the optimal dimension order, we
    need to split up the aggregation into multiple passes using the
    multi-way algorithm. We end up storing incomplete trees, then
    filling them in on subsequent passes.
• Performance studies show that the array-based algorithm does better
  than the ROLAP algorithms, even! Factors that affect performance are
  the number of valid data entries, the dimension size (how many
  elements in each dimension), and the number of dimensions.
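The core trick (compute each group-by from its parent in the lattice, not from the base data) can be sketched in a few lines of Python (a toy dense 3-d cube; the paper's chunking and compression are elided):

```python
import random

# A toy 3-d data cube: cube[a][b][c] holds the measure for cell (a, b, c).
NA, NB, NC = 4, 3, 2
random.seed(0)
cube = [[[random.randint(0, 9) for _ in range(NC)]
         for _ in range(NB)] for _ in range(NA)]

# 2-d group-by AB: aggregate (sum) away the C dimension.
ab = [[sum(cube[a][b]) for b in range(NB)] for a in range(NA)]

# 1-d group-by A, computed from the much smaller AB plane rather than
# from the full ABC data -- this is the "compute A from AB" step.
a_from_ab = [sum(ab[a]) for a in range(NA)]

# Same answer as aggregating ABC directly, but it touches |A|*|B| cells
# instead of |A|*|B|*|C|.
a_direct = [sum(cube[a][b][c] for b in range(NB) for c in range(NC))
            for a in range(NA)]
print(a_from_ab == a_direct)   # True
```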
{"url":"http://carlstrom.com/stanford/quals/mirror/swig.stanford.edu/pub/summaries/database/array.html","timestamp":"2014-04-20T03:10:49Z","content_type":null,"content_length":"6904","record_id":"<urn:uuid:5e410c23-62be-4569-9f35-43281656e713>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00312-ip-10-147-4-33.ec2.internal.warc.gz"}
February 2013 Math Digest

Summaries of Media Coverage of Math
Edited by Mike Breen and Annette Emerson, AMS Public Awareness Officers

Contributors: Mike Breen (AMS), Claudia Clark (freelance science
writer), Lisa DeKeukelaere (2004 AMS Media Fellow), Annette Emerson
(AMS), Brie Finegold (University of Arizona), Baldur Hedinsson (2009 AMS
Media Fellow), Allyn Jackson (Deputy Editor, Notices of the AMS), and
Ben Polletta (Harvard Medical School)

February 2013

Brie Finegold summarizes blogs on girls and science and on green
mathematics:

"Girls and science: why the gender gap exists and what to do about it,"
by Emma G. Keller. The Guardian UK--US News Blog, 5 February 2013.

This ambitiously titled blog post opens by highlighting a recent
international study that found that on average, teenage girls outperform
their male peers in math and science, but not in the United States. The
study centers around a test that has been given every three years since
2000 to 15-year-olds in developed nations, most recently in 2009 when
teens in over 65 countries took the exam. The test, the Programme for
International Student Assessment, is administered by the Organisation
for Economic Cooperation and Development and covers mathematics,
science, and reading. The test also includes a questionnaire concerning
students' attitudes and backgrounds. You can view sample test questions.

The blog post quickly departs from a discussion of the study to give
advice on how to help readers' daughters improve their beliefs
concerning their abilities. Much of the advice seems overly general or
lacking in supporting evidence. For instance, the author encourages
parents to bring up mathematics when their girls want to "buy something
too expensive." Also, it conforms to stereotypes of girls being more
interested in cooking and shopping than in other, more academic,
pursuits.

A tongue-in-cheek response entitled "Boys and Science: The Gender Gap
and How to Maintain It" also appeared in a different Guardian blog. In
addition, there was a response entitled "Pseudoscience and
Gender-Stereotyping Won't Solve Gender Inequality in Science." While
Emma Keller may be aiming to offer parents some practical advice on ways
to encourage their daughters to pursue science and math, the measures
she suggests do not seem likely to close the gender gap. Even now that
we see ample evidence of strong female role models in science and math,
the media often struggle to portray females in science without resorting
to tired stereotypes like the ones we saw just a few months ago in the
video Science: It's a Girl Thing. Maybe we should focus less on
marketing the sciences to women as if they were a product and more on
educating the general public concerning the achievements of female
scientists who have enriched their fields.

"Prospects for a Green Mathematics," by John Baez and David Tanzer.
Azimuth, 15 February 2013.

John Baez, mathematical physicist, and David Tanzer, a computer
scientist, invite you to participate in the Azimuth Project, an
international collaboration to create a focal point for scientists and
engineers interested in saving the planet. While you may not see your
mathematical expertise as being pertinent to solving the world's most
pressing problems, the authors provide some interesting examples that
may alter your views. For example, leaves may grow according to network
optimization, and chemical reactions may be modeled by stochastic Petri
nets. Even those who study abstract ideas like quantum field theory may
be able to apply their knowledge to networks. There is plenty of new,
"green" mathematics to be found in the study of networks and how they
apply to biology.

Baez and Tanzer seem to truly believe that mathematicians and theorists
have something to offer in identifying those tipping points at which our
biosphere might drastically change. As 2013 is being called the year of
"Mathematics of Planet Earth," this post is apropos. The official start
of this special year is on March 5, and you can find the description of
this broad project on the website.

--- Brie Finegold
Baez and Tanzer seem to truly believe that mathematicians and theorists have something to offer in identifying those tipping points at which our biosphere might drastically change. As 2013 is being called the year of "Mathematics of Planet Earth," this post is apropos. The official start of this special year is on March 5, and you can find the description of this broad project on the website. --- Brie Finegold Return to Top "Newsmakers: Three Q's," Science, 22 February 2013, page 891. Science's news section asked MIT physicist Max Tegmark three questions about his symposium 'Is Beauty Truth?'. In the symposium professor Tegmark made the case for the universe and even reality itself being mathematical structures. "There is no evidence there is anything at all in our universe that is not mathematical," explained Tegmark. He described the mathematical universe hypothesis as a fundamentally optimistic way of looking at reality. If it is right, he said, "the road ahead is open and our future understanding is really only limited by our imagination." --- Baldur Hedinsson Return to Top "A mathematician puts Fermat's Last Theorem on an axiomatic diet," by Julie Rehmeyer. Science News, 20 February 2013. Nearly two decades after Andrew Wiles showed the world his complicated proof for Fermat's Last Theorem, mathematician Colin McLarty presented an alternative, pared-down proof at the 2013 Joint Mathematics Meetings requiring fewer axioms and mathematical leaps of faith. Wiles's proof relied heavily on principles of algebraic geometry, in particular an assumption that allows for building larger and larger sets while avoiding a well-known paradox. 
Given the simplicity of the theorem, and the fact that modern algebraic geometry was in its infancy when Fermat first penned the theorem for which he claimed he had a "marvelous demonstration" that unfortunately would not fit in the margin of the page, mathematicians have speculated about the existence of a simpler alternative to Wiles's proof, based instead on basic set theory. Enter McLarty, who has now shown that the theorem can be proven using only the basic axioms of mathematics.
--- Lisa DeKeukeleare
Return to Top

"Don't Let Economists and Politicians Hack Your Math," by Edward Frenkel. Slate, 8 February 2013.

Edward Frenkel writes about a cunning, behind-the-scenes political plan to raise taxes and slash social benefits, and how the public's mathematical knowledge is the only way to stop sneaky politicians from fixing the numbers. Taxes, Social Security and Medicare are all pegged to the Consumer Price Index (CPI) as a measure of inflation. A proposal to change how this cost-of-living index is calculated was part of the fiscal cliff deal reached by the White House and Congress late last year until it was dropped at the very last moment. This change would have raised taxes and cut social benefits with just a little tinkering with a mathematical formula. Frenkel then stresses the importance of the general public possessing the mathematical knowledge needed to protect citizens from arbitrary decisions by the powerful few in an increasingly math-driven world.
--- Baldur Hedinsson
Return to Top

"Honoring the Best and Brightest" and "Lubchenco Earns Spanish Research Prize." News. Science, 8 February 2013, page 633.

These two news items highlight recent award winners, including four mathematicians. The first is about the National Medal of Science winners who were honored at a White House ceremony in early February. Among the winners were Barry Mazur (Harvard University) and Solomon Golomb (USC).
As Notices Senior Writer and Math Digest contributor Allyn Jackson wrote in an AMS news item, Mazur "made important contributions early on to topology and has been an influential figure more recently in number theory and arithmetic algebraic geometry. Golomb has made significant contributions in mathematics and engineering, especially in coding theory." Photos: Barry Mazur (left) by Jim Harrison, Solomon Golomb (right) by Noe Montes/USC.

The second article is about the 400,000-Euro Frontiers of Knowledge Award, which is made by the BBVA Foundation in Madrid. Two of the six awardees were Ingrid Daubechies (Duke University) and David Mumford (Brown University), pictured above, who shared the award for their work in data compression and pattern recognition. Read more about Daubechies and Mumford and link to more information about the award. Photos: Ingrid Daubechies, Duke Photography; David Mumford, Brown University.
--- Mike Breen
Return to Top

"Mathematician discovers largest prime (so far)," by Tia Ghose. The Christian Science Monitor, 5 February 2013.

On January 25, the largest known prime number--and the 48th known Mersenne prime--was discovered on the computer of University of Central Missouri mathematician Curtis Cooper. (Mersenne primes have the form 2^p − 1, where p is a prime number.) The 17,425,170-digit number is 2 raised to the 57,885,161 power, minus 1. After the discovery, other researchers independently verified the prime using different software programs. Cooper is a volunteer with the Great Internet Mersenne Prime Search (GIMPS), a giant network of volunteers who run the GIMPS software on their computers in the hopes of finding the largest known of these numbers. The GIMPS project, formed in 1996 by computer scientist George Woltman, has been responsible for identifying all 14 of the largest known Mersenne primes.
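Candidates of the Mersenne form 2^p − 1 can be tested with the classical Lucas–Lehmer test. The sketch below is purely illustrative and is not the highly optimized code that GIMPS actually runs:

```python
# Lucas-Lehmer primality test for Mersenne numbers M_p = 2**p - 1.
# Illustrative sketch only; GIMPS uses heavily optimized FFT arithmetic.

def is_mersenne_prime(p):
    """Return True if 2**p - 1 is prime (p must be an odd prime)."""
    m = 2**p - 1
    s = 4
    for _ in range(p - 2):
        s = (s * s - 2) % m
    return s == 0

# Exponents of the first Mersenne primes beyond p=2: 3, 5, 7, 13, 17, 19
print([p for p in (3, 5, 7, 11, 13, 17, 19) if is_mersenne_prime(p)])
# Note that 11 is prime, yet 2**11 - 1 = 2047 = 23 * 89 is composite.
```

Running this for p = 57,885,161 would in principle confirm Cooper's record prime, though it would take far longer with naive big-integer arithmetic than with the specialized GIMPS software.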
The article reports that Cooper, who has found two other of the largest known Mersenne primes since 2005, is eligible for a $3,000 GIMPS award for this discovery.
--- Claudia Clark
Return to Top

"Hard to Put Red-Light Violations Under a Lens," by Carl Bialik. The Wall Street Journal, 1 February 2013.

The Wall Street Journal's Numbers Guy looks into the statistics of whether cameras at red lights improve traffic safety. These cameras record motorists who run red lights and who are consequently fined. The cameras can be found in over 500 U.S. urban areas and are intended to deter drivers from rushing through intersections after the light has just turned red, thus preventing dangerous crashes. But depending on how you do the numbers, the effect of these cameras on traffic safety varies. Some cities have found that crashes increased at intersections where cameras were installed. For example, including rear-end hits in the statistics can mean that setting up cameras at an intersection increases the total number of accidents, since drivers are more likely to slam on the brakes when they notice the cameras or see warning signs. Determining whether the pros outweigh the cons can be tricky for city administrators, but many researchers find that if the number of severe crashes declines overall, the increase in minor collisions is worthwhile.
--- Baldur Hedinsson
Return to Top
Physics Forums - View Single Post - Entropy: Heat addition to surrounding.

If 12007 kJ of heat is lost to the surroundings with an ambient temperature of 25 degrees centigrade during a cooling process, and the ambient temperature of the surroundings is unaffected by the heat addition, what is the entropy change of the surroundings? If Δs=∫δQ/T, then Δs=ΔQ/T=12007 kJ/(25+273.15)K= 40.272 kJ/K. Is my thinking process here correct?
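The arithmetic in the post can be checked directly. A quick sketch (the 12007 kJ and 25 °C figures are taken from the post; the reservoir assumption means T stays constant, so the integral reduces to Q/T):

```python
# Entropy change of the surroundings when they absorb heat Q at constant
# temperature T (ideal reservoir): dS = Q / T.
Q = 12007.0        # heat rejected to the surroundings, kJ (from the post)
T = 25.0 + 273.15  # ambient temperature converted to kelvin

dS = Q / T
print(round(dS, 3))  # → 40.272 kJ/K, matching the value in the post
```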
Summary: A VECTORIAL VARIATIONAL MODE SOLVER AND
O. (Alyona) Ivanova, Remco Stoffer, Manfred Hammer, E. (Brenny) van Groesen
University of Twente, MESA+ Institute for Nanotechnology, P.O. Box 217, 7500 AE Enschede, The Netherlands

Abstract – A Variational Vectorial Mode Solver for 3-D dielectric waveguides with arbitrary 2-D cross-sections is proposed. It is based on expansion of each component of a mode profile as a superposition of some a priori defined functions defined on one coordinate axis times some unknown continuous coefficient functions, defined on the other axis. By applying a variational restriction procedure the unknown coefficient functions are determined as an optimum approximation of the true vectorial mode profile. A couple of examples illustrate the performance of the method.

Optical waveguides are main ingredients in many integrated optical circuits, and design tools for these waveguides are of great importance. Detailed overviews of such techniques are presented in [1, 2]. In this paper we extend the scalar Variational Mode Expansion Method [3] to fully vectorial simulations of lossless isotropic dielectric waveguides with – in principle – arbitrary refractive index distribution. The validity of the method is checked for a waveguide with non-rectangular piecewise-constant refractive index distribution and a diffused waveguide.

A. Problem Formulation
Pico Rivera Geometry Tutor

...If you or your child are struggling in math and/or Spanish, feel free to contact me so we can make arrangements! Algebra is the foundation through which students can better understand higher levels of mathematics such as Algebra 2, Trigonometry, and Precalculus. It is important that students atta...
22 Subjects: including geometry, English, Spanish, reading

...Usually, students will learn Algebra 2 in their high school years. Understanding Algebra 2 is quite important since success in College Algebra and/or Pre-Calculus depends on understanding Algebra 2. Also, ACT and SAT Math both test Algebra 2.
38 Subjects: including geometry, reading, English, writing

...I know how to teach students who love math and how to make it simple for students who hate math. I am a National Master of Chess with a 2215 USCF rating and have been playing in tournaments for 8 years now. My expertise is middle game and endgame. I also know the classic chess games of Capablanca and Fischer well.
11 Subjects: including geometry, calculus, algebra 1, algebra 2

...I have tutored many students over the years. In high school I tutored students in grades below me in math. During college I tutored students in German and as a senior in college I tutored students below me in finance and statistics.
24 Subjects: including geometry, statistics, finance, economics

...I am familiar with importing and exporting many different forms of data, and also merging data sets. Some of the PROC functions that I've used include the following: PROC REG PROC CORR PROC MEANS PROC GPLOT PROC G3D PROC ANOVA PROC GLM PROC TABULATE PROC SQL PROC PLOT PROC UNIVARIATE PROC GREPLA...
34 Subjects: including geometry, chemistry, writing, English
A comparative study of survival models for breast cancer prognostication based on microarray data: does a single gene beat them all?

Bioinformatics. Oct 1, 2008; 24(19): 2200–2208.

Motivation: Survival prediction of breast cancer (BC) patients independently of treatment, also known as prognostication, is a complex task since clinically similar breast tumors, in addition to being molecularly heterogeneous, may exhibit different clinical outcomes. In recent years, the analysis of gene expression profiles by means of sophisticated data mining tools emerged as a promising technology to bring additional insights into BC biology and to improve the quality of prognostication. The aim of this work is to assess quantitatively the accuracy of prediction obtained with state-of-the-art data analysis techniques for BC microarray data through an independent and thorough framework.

Results: Due to the large number of variables, the reduced amount of samples and the high degree of noise, complex prediction methods are highly exposed to performance degradation despite the use of cross-validation techniques. Our analysis shows that the most complex methods are not significantly better than the simplest one, a univariate model relying on a single proliferation gene. This result suggests that proliferation might be the most relevant biological process for BC prognostication and that the loss of interpretability deriving from the use of overcomplex methods may be not sufficiently counterbalanced by an improvement of the quality of prediction.

Availability: The comparison study is implemented in an R package called survcomp and is available from http://www.ulb.ac.be/di/map/bhaibeka/software/survcomp/.
Contact: bhaibeka/at/ulb.ac.be

Supplementary information: Supplementary data are available at Bioinformatics online.

During the last two decades, several clinical and pathological indicators such as histological grade, tumor size and lymph node involvement have been used for the survival prediction of breast cancer (BC) patients independently of treatment, also known as prognostication. Examples of clinical guidelines to the selection of patients who should receive adjuvant therapy are the St Gallen consensus criteria (Goldhirsh et al., 2003), the NIH guidelines (Eifel et al., 2001), the Nottingham prognostic index (NPI, Galea et al., 1992) and Adjuvant! Online (AOL, Olivotto et al., 2005). Although BC prognostication has been the object of intense research, a still open challenge is how to detect patients who need adjuvant systemic therapy.

The advent of array-based technology and the sequencing of the human genome brought new insights into breast cancer biology and prognosis. Interestingly, several research teams conducted comprehensive genome-wide assessments of gene expression profiling and identified prognostic gene expression signatures. Examples of gene signatures which were obtained by studying the relationship between gene expression profiles and clinical outcome are the 70-gene (van't Veer et al., 2002) and 76-gene (Wang et al., 2005) signatures. With respect to clinical guidelines, these signatures were shown to correctly identify a larger group of low-risk patients not requiring treatment. This is particularly relevant for clinicians, since reducing treatment also means reducing potential side effects and cutting costs.

Another example of a gene signature is reported in Sotiriou et al. (2006b). This study is focused on histological grade, a well-established pathological indicator rooted in the cell biology of breast cancer. In fact, clinicians encounter problems when confronted with patients with intermediate-grade tumors (Grade 2).
These tumors, which represent 30–60% of cases, are a major source of inter-observer discrepancy and may display intermediate phenotype and survival, making treatment decisions for these patients a great challenge, with subsequent under- or over-treatment. By means of a supervised analysis, the authors developed a gene expression grade index based on 128 probes. The associated genes were mainly involved in cell-cycle regulation and proliferation and were consistently differentially expressed between low- and high-grade breast carcinomas. This signature, which essentially quantifies the degree of similarity between the tumor expression pattern of these genes and the tumor grade, was able to separate patients labeled with histological Grade 2 tumors into two groups having distinct clinical outcomes similar to those of histological Grades 1 and 3, respectively. Other research groups have proposed gene expression signatures that are predictive of the clinical outcome in breast cancer [see Sotiriou and Piccart (2007) for a review]. However, since different risk prediction methods, different accuracy measures and different validation sets were used, it is not easy to compare their performance in terms of BC prognostication.

The purpose of this work is 2-fold: first, to set up a common and independent assessment framework to compare the performance of existing gene signatures and several state-of-the-art risk prediction methods; second, to compare the prediction accuracy of these methods in several BC microarray prognostication tasks to elucidate the key characteristics of a successful risk prediction method and to bring additional insights into BC biology.

Every risk prediction model aims to assign risk values or survival probabilities to patients on the basis of the information that is available at the time of diagnosis. This is known to be a difficult task because of several issues specific to survival microarray data.
First of all, censored information cannot be exploited by traditional supervised classification and regression methods, but demands the adoption of specific survival analysis techniques, like the semi-parametric Cox's proportional hazards model (Cox, 1972).

A second issue is the high dimensionality of microarray data. When the number of explanatory variables exceeds by far the number of patients in the sample cohort (high feature-to-sample ratio), overfitting of naively applied data mining methods and overoptimistic performance assessment lie in wait. At the same time, it is very difficult to select the most relevant variables for prediction, because of their interdependency and the reduced power of the statistical inference procedure for high feature-to-sample ratio datasets (Bontempi, 2007). As a consequence, it is common to select variables that fit nicely the training set and fail dramatically on independent validation sets, thus leading to unstable gene signatures (Ein-Dor et al., 2005; Michiels et al., 2005) and poor prediction models.

A third issue is the lack of standards in performance assessment for risk prediction models. Indeed, there exist few accuracy measures for risk prediction, and, to the best of our knowledge, no articles studied their agreement on the same set of methods and datasets.

Lastly, the validation and the comparison of BC microarray prognostication methods are made difficult due to the lack of independent data. In this work, we compare the performance of 13 risk prediction methods on more than 1000 patients. This is made possible thanks to the recent publications of several large microarray datasets in gene expression databases, such as the Gene Expression Omnibus (GEO, Barrett et al., 2005).

An important outcome of the analysis is that, in spite of the large number of samples, there is no statistical evidence that complex methods outperform the simplest BC prognostication techniques.
This result suggests that the loss of interpretability deriving from the use of overcomplex data analysis strategies may not be sufficiently counterbalanced by an improvement in the quality of prediction. Finally, it is worth mentioning that the present article complies with the research reproducibility guidelines proposed in Gentleman (2005) in terms of availability of the code and reproducibility of results and figures.^1 A list of acronyms used throughout the article is given in Supplementary Table 1.

2 METHODS

2.1 Notations for survival analysis

Throughout the article we will adopt the following notation: upper case and lower case letters represent random variables and their realization, respectively, while bold letters denote vectors or matrices. Let us denote the time as t. We suppose that a sample cohort of n patients is available and that for each patient we observed a p-dimensional vector of covariates x[i] with 1≤i≤n at the time of diagnosis t=0, as well as the evolution of the survival status. Since we limit our study to microarray data, the covariate x[i] denotes the expression of the whole genome of the i-th patient. Survival data for the i-th patient are denoted as follows: t[i] stands for the event time, c[i] for the censoring time and δ[i] for the censoring indicator (δ[i]=1 if t[i]≤c[i] and δ[i]=0 if t[i]>c[i]). We introduce the counting process d[i](t)=1 if t[i]≤t and d[i](t)=0 if t[i]>t to denote survival status at any time t, where d[i](t)=1 indicates that patient i experienced an event prior to time t.

2.2 Risk prediction methods

The aim of a risk prediction model is to predict future survival status for all patients in the cohort. All the risk prediction models considered in this study return a risk score denoted by R, that is a continuous value which quantifies the risk of a patient to experience an event. Clinicians often use the risk score to derive risk groups, denoted by G, on the basis of quantiles of the risk score distribution.
Although the discretization of individual risk scores into a finite (and often small) set of risk groups may introduce bias (Gerds and Schumacher, 2001), this approach is very intuitive and conforms to the daily doctors' decision making process, e.g. the attribution of either low or high risk to patients. In the following, the quantities r[i] and g[i] will denote the risk score and the risk group for patient i, respectively. G is either 0 or 1 for a low- or high-risk patient, respectively.

In this experimental study, we decided to focus on a set of 13 state-of-the-art methods (summarized in Table 1) with the ambition of being representative of a large number of risk prediction strategies. The first risk prediction method is also the simplest one and defines the risk score as the expression of a single proliferation gene (AURKA) well studied in the literature (Hanahan and Weinberg, 2000). The following 10 methods (from 2 to 11) are characterized by the type of observed genotype (input data), the dimension reduction strategy, the structure of the model, the learning algorithm and the predicted phenotype (outcome variable).

Table 1. Characteristics of the risk prediction methods studied in this work

Genotype: it can be the expression of a single proliferation gene (AURKA), the expression of a biologically driven selection of genes of interest (BD) or the expression of the whole genome (GW). AURKA and the small set of genes in BD were selected to represent several biological processes in BC (Hanahan and Weinberg, 2000). The selected genes were AURKA (also known as STK6, 7 or 15), PLAU (also known as uPA), STAT1, VEGF, CASP3, ESR1 and ERBB2, representing the proliferation, tumor invasion/metastasis, immune response, angiogenesis, apoptosis phenotypes and the ER and HER2 signaling.

Dimension reduction strategy: we use either a simple univariate ranking (RANK) of the k most relevant features or a selection of the first k principal components (PCA).
Univariate ranking uses the Wilcoxon rank sum test (Wilcoxon, 1945) in the case of binary outcome or Cox's proportional hazards model (Cox, 1972) in the case of survival outcome. The signature size k is either fixed or tuned by cross-validation (CV) as described in Section 2.2.1. It is worth noting that no dimension reduction was performed for BD input data due to the low dimensionality of the input space.

Structure of the model: we adopt either a multivariate (MULTIV) model or a linear combination of univariate models (COMBUNIV), particularly interesting in a high-dimensional setting (Haibe-Kains et al., 2008; Kittler et al., 1998).

Learning algorithm: we consider four types of learning algorithms: (i) the linear combination of gene expressions weighted by the significance computed from the Wilcoxon rank sum test (WILCOXON) that allowed for identifying the most relevant genes to discriminate the patients with histological Grades 1 and 3 tumors, (ii) the multivariate linear regression model (LM), (iii) the linear combination of gene expressions weighted by the significance computed from the univariate Cox's proportional hazards model (COX) and (iv) the multivariate Cox's model with L1 regularization (RCOX) as implemented in Park and Hastie (2007).

Phenotype: we use three different types of phenotypic information to fit the prediction models: (i) the binary class defined by histological grades 1 and 3 (HG), (ii) the censored survival data (SURV) and (iii) the time of events (TOE), i.e. the times from diagnosis until the patients experienced an event.

In the following, we will denote each of the 10 models with a unique label obtained by concatenating the acronyms referring to its characteristics (Table 1). For instance, BD.COMBUNIV.COX.SURV refers to a combination of univariate Cox's proportional hazards models fitted from a biologically driven selection of genes.
The last two models taken into consideration are the published GENE76 (Wang et al., 2005) and GGI (Sotiriou et al., 2006b) models. The GENE76 model is defined as a hierarchical model using two linear combinations of the top gene expressions with respect to a ranking based on Cox's proportional hazards model. The choice of the linear combination to compute the risk score depends on the estrogen receptor status of the patient. The GGI model consists of a linear combination of the expressions of the top probes ranked according to their standardized mean difference (Hedges and Olkin, 1987) between patients with histological grades 1 and 3 tumors. The weights of the linear combination are simply the signs of the ranking statistics.

2.2.1 Tuning of hyperparameters

The GGI and GENE76 models did not require any tuning of hyperparameters since they were fully defined in previous publications. Only the models based on dimension reduction and regularization required the tuning of a hyperparameter. For GW models, a simple ranking or a principal components analysis was used to select the k most relevant features. The hyperparameter k was either set to 30, this signature size being reported as a good trade-off between relevance and model complexity in the comparison study of Dudoit et al. (2002), or tuned using a 5-fold CV procedure (see Section 1 in Supplementary Material). The dimension reduction strategies using the latter procedure are referred to as RANKCV and PCACV for the univariate ranking and the principal component analysis, respectively. For methods using RCOX as learning algorithm, the hyperparameter for the penalty term was tuned by using a 5-fold CV as in Park and Hastie (2007).
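The GGI-type score described in Section 2.2, a sum of selected gene expressions weighted by the sign of the grade-1 versus grade-3 ranking statistic, can be sketched in a few lines. Gene names, signs and expression values below are invented for illustration and are not the actual GGI probe list:

```python
# Sketch of a GGI-style risk score: sum of selected gene expressions,
# each weighted by the sign (+1/-1) of its grade-1 vs grade-3 ranking
# statistic. All names and numbers below are hypothetical.

def ggi_style_score(expression, sign):
    return sum(sign[gene] * expression[gene] for gene in sign)

sign = {"AURKA": +1, "MKI67": +1, "ESR1": -1}        # hypothetical signs
patient = {"AURKA": 2.1, "MKI67": 1.5, "ESR1": 0.5}  # hypothetical expressions
print(ggi_style_score(patient, sign))  # → 3.1
```

A higher score corresponds to a more "grade-3-like" (proliferative) expression pattern.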
2.3 Performance assessment

In order to assess the performance of the risk prediction methods, we used five accuracy measures: the time-dependent receiver operating characteristic (ROC) curve (Heagerty et al., 2000), the sensitivity and specificity, the concordance index (Harrell et al., 1996), the Brier score (Brier, 1950; Graf et al., 1999) and the traditional hazard ratio (HR) from Cox's proportional hazards model (Cox, 1972).

2.3.1 Time-dependent ROC Curve

The ROC curve is a standard technique for assessing the performance of a continuous variable for binary classification (Swets, 1988). A ROC curve is a plot of sensitivity versus 1−specificity for all the possible cutoff values of the continuous variable, denoted by c. In survival analysis, the continuous variable is the risk score, denoted by R, and the binary class to predict is the event occurrence, denoted by D(t). As the event occurrence is time-dependent, time-dependent ROC curves are more appropriate than conventional ones. In Heagerty et al. (2000), the authors proposed to summarize the discrimination potential of a risk score R, estimated at the diagnosis time t=0, by calculating ROC curves for cumulative event occurrence by time t. Once we define the sensitivity SE and the specificity SP as

SE(c, t, r) = P(R > c | D(t) = 1),   (1)

SP(c, t, r) = P(R ≤ c | D(t) = 0),   (2)

the ROC curve ROC(t) at time t is the plot of SE(c, t, r) versus 1−SP(c, t, r), where the cutoff point c is the parameter. In order to estimate the conditional probabilities in (1) and (2), accounting for possible censoring, we used the nearest neighbor estimator for the bivariate distribution function proposed by Akritas (1994). From the ROC curve ROC(t) we can derive the area under the curve (AUC) quantity, denoted by AUC(t). Since AUC depends on time t, we define the integrated area under the curve (IAUC) as the area under AUC(t), ∀ t ∈ T. Note that, in this study, the larger the AUC at time t, the better is the predictability of time to event (TTE) at time t.
Similarly, the larger IAUC, the better is the average predictability of TTE.

2.3.2 Sensitivity and specificity

A widely used performance criterion for a clinical test is the pair {sensitivity, specificity} (Simon, 2005). However, the calculation of these values from survival data requires estimators accounting for TTE and possible censoring. We used the estimators defined in (1) and (2) for sensitivity and specificity, respectively. For risk score prediction, we estimated the specificity for a sensitivity of 90% in accordance with the St Gallen (Goldhirsh et al., 2003) and National Institutes of Health (Eifel et al., 2001) treatment guidelines. For risk group prediction, we estimated both the sensitivity and the specificity of the binary classification returned by all the methods. Note that the larger the sensitivity and the specificity, the better is the predictability of TTE.

2.3.3 Concordance index

The concordance index (C-index) computes the probability that, for a pair of randomly chosen comparable patients, the patient with the higher risk prediction will experience an event before the lower risk patient. The C-index takes the form

C = |{(i, j) ∈ Ω : r[i] > r[j]}| / |Ω|,

where r[i] and r[j] stand for the risk predictions of the i-th and the j-th patient, respectively, and Ω is the set of all the pairs of patients {i, j} who meet one of the following conditions: (i) both patients i and j experienced an event and time t[i] < t[j] or (ii) only patient i experienced an event and t[i] < c[j]. In the case of risk group prediction, an additional condition must be met, that is the risk predictions are different for patients i and j (no ties in r). Note that the C-index is a generalization of the AUC(t), though it is unable to represent the evolution of performance with respect to time (Harrell et al., 1996). Standard errors, confidence intervals and P-values for the C-index are computed by assuming asymptotic normality (Pencina and D'Agostino, 2004).
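The comparable-pairs definition of the C-index can be turned into a small sketch. This is an illustration under strict inequalities and without tie handling, not the survcomp implementation:

```python
# Concordance index: fraction of comparable pairs in which the
# higher-risk patient experiences the event first (Section 2.3.3).
# event[i] = 1 means an observed event; time[i] is then the event time,
# otherwise the censoring time.

def concordance_index(risk, time, event):
    num = den = 0
    n = len(risk)
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            # pair is comparable when i's event precedes j's observation
            if event[i] == 1 and time[i] < time[j]:
                den += 1
                num += risk[i] > risk[j]
    return num / den

risk = [0.9, 0.7, 0.4, 0.2]
time = [1.0, 2.0, 3.0, 4.0]
event = [1, 1, 0, 1]
print(concordance_index(risk, time, event))  # → 1.0 (perfect concordance)
```

A value of 0.5 corresponds to random predictions, and 1.0 to perfect ranking of event times by the risk score.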
Note that, in this study, the larger the C-index, the better is the predictability of TTE.

2.3.4 Brier score

The Brier score, denoted by BSC, is defined as the squared difference between an event occurrence and its predicted probabilities at time t. Probabilities of event, denoted by Q, can be derived from Cox's proportional hazards model fitted with the risk score R or risk group G predictions. Intuitively, if a patient experiences no event at time t, the event predicted probability should be close to zero. Symmetrically, if the patient experiences an event the probability should be close to one. The BSC formalizes this intuition by computing the time-dependent quantity

BSC(t) = (1/n) Σ[i=1..n] W[i](t) (d[i](t) − q[i](t))²,

where the weights W are used to remove a large sample censoring bias (Gerds and Schumacher, 2006; Graf et al., 1999). A summary of the predictability error over times is returned by the integrated Brier score, denoted by IBSC. Note that the lower the BSC, the better is the predictability of TTE at time t. Similarly, the lower the IBSC, the better is the average predictability of TTE. For judging the (I)BSC, we will rely on the score of a benchmark risk prediction model which is obtained with the overall Kaplan–Meier estimator (Kaplan and Meier, 1958) for the survival function (this model is called KM in further sections). This simple risk prediction model corresponds to a model which assigns the same risk prediction to all patients. It ignores the information contained in explanatory variables completely and thus provides a suitable benchmark value similar to the one obtained with the null model in linear regression.

2.3.5 Hazard ratio

In this work we used the HR as an accuracy measure for the risk group prediction in order to keep it interpretable and comparable between different risk prediction methods as the scale of predictions is well defined (see Section 2.1).
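Returning briefly to the Brier score of Section 2.3.4, the following minimal sketch handles only fully observed (uncensored) data, i.e. all weights W set to 1; the estimator used in the paper additionally reweights by the inverse probability of censoring (Graf et al., 1999):

```python
# Brier score at time t for uncensored data (all weights W = 1):
# mean squared difference between the event indicator d_i(t) and the
# predicted event probability q_i (Section 2.3.4, simplified).

def brier_score(event_prob, event_time, t):
    n = len(event_prob)
    return sum((float(et <= t) - q) ** 2
               for q, et in zip(event_prob, event_time)) / n

q = [0.9, 0.8, 0.2, 0.1]    # predicted P(event by time t) per patient
et = [1.0, 2.0, 8.0, 9.0]   # observed event times
print(round(brier_score(q, et, t=5), 3))  # → 0.025
```

Lower values are better; a benchmark such as the KM model of the text would yield a higher (worse) score on the same data whenever the risk predictions are informative.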
HR is a summary of the risk difference between several survival curves estimated by Cox's proportional hazards model (Therneau and Grambsch, 2000). Cox's model assumes that the relative risk of event between groups is constant at each interval of time. The hazard function for a patient i, as defined by Cox's proportional hazards model, can be written as

λ[i](t) = λ[0](t) exp(β g[i]).

Given the nature of the variable G, λ[0](t) is the hazard function for a patient in the low-risk group. Moreover, the hazard function for any patient in the high-risk group is ψλ[0](t) (proportional hazards), so ψ is the HR with ψ=exp(β). Note that, in this study, the larger the HR, the larger is the difference in survival probabilities between the groups of patients, and consequently the better is the discrimination between low- and high-risk groups.

2.4 Performance comparison

To test whether a method performs significantly better than another one, we used two types of statistical tests: (i) a paired Student t-test based on the assumption of normality for the natural logarithm of the hazard ratio (i.e. the coefficient β in Cox's proportional hazards model) and the concordance index; (ii) a paired Wilcoxon rank sum test of event occurrence with respect to the time t for the AUC(t) and BSC(t). We considered that a method performs significantly better than another one if its performance is significantly better with a P < 0.05 and the difference between the performance estimates of the two methods is larger than 1% of the lowest value. Note that we did not statistically compare the estimations of sensitivity and specificity due to the lack of a standard statistical test. The concordance indices and the hazard ratios for all the risk prediction methods are represented using a forest plot (Lewis and Clarke, 2001). The accuracy measures are shown as squares centered on the point estimate of the performance of each method. A horizontal line runs through the square to show its 95% confidence interval.
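The proportional-hazards relationship of Section 2.3.5 can be illustrated numerically: for the two-group model, the ratio of the high-risk to the low-risk hazard equals ψ = exp(β) at every time point. The baseline hazard below is an arbitrary made-up function, chosen only to show that it cancels out:

```python
import math

# Two-group Cox model: hazard(t, g) = lambda0(t) * exp(beta * g), with
# g = 0 (low risk) or 1 (high risk). lambda0 is an arbitrary illustrative
# baseline hazard; the hazard ratio psi = exp(beta) is constant in time.

def hazard(t, g, beta, lambda0=lambda t: 0.1 + 0.01 * t):
    return lambda0(t) * math.exp(beta * g)

beta = 0.7
ratios = [hazard(t, 1, beta) / hazard(t, 0, beta) for t in (1.0, 5.0, 10.0)]
print([round(r, 4) for r in ratios])  # same ratio at every t
print(round(math.exp(beta), 4))       # psi = exp(beta)
```

This constancy over time is exactly the proportional hazards assumption that the paired tests of Section 2.4 rely on when comparing log hazard ratios across methods.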
3 RESULTS

3.1 Breast cancer datasets

In order to compare different risk prediction methods with published gene signatures, we used four large microarray BC datasets collected with the Affymetrix microarray platform (22 283 common probes), called VDX (Wang et al., 2005), TBG (Desmedt et al., 2007), TAM (Haibe-Kains et al., 2008; Loi et al., 2007) and UPP (Miller et al., 2005). These datasets are publicly available from the GEO database^2 through accession numbers GSE2034, GSE7390, GSE6532/GSE9195 and GSE3494, respectively. VDX includes the gene expressions of 286 untreated node-negative BC patients and was used to build GENE76 and to validate GGI (see end of Section 2.2). Only TBG exhibited the same criteria for the selection of patients (198) as VDX, i.e. untreated node-negative BC patients, and was used as an official validation of GENE76 and GGI. TAM was composed of 354 ER-positive BC patients (the largest molecular group of BC) homogeneously treated with tamoxifen therapy. UPP was composed of 251 patients treated with heterogeneous therapies. Although the selection of patients in TAM and UPP was different from that in VDX and TBG, these datasets might contain important prognostic information as well and could, therefore, be used as additional validation sets. Due to their homogeneity in the selection of patients, we should consider VDX and TBG as the most important datasets in this comparative study. The results obtained with TAM and UPP made possible a more thorough assessment of the performance. We considered the distant metastasis free survival of BC patients as the survival endpoint for VDX, TBG and TAM.
This endpoint refers to the appearance of distant metastasis only. We considered the relapse free survival of BC patients, i.e. the appearance of local, regional or distant relapses, in UPP, as the information on distant metastasis was not available. All the survival data were censored at 10 years as in Desmedt et al. (2007). We used VDX as the training set and TBG, TAM and UPP as validation sets. Although this choice was guided by the original publications in BC prognostication, we also performed all the analyses using TBG as the training set to ensure that our results were not driven by the choice of the training set (Michiels et al., 2005). We obtained similar results, which leave the conclusions of this study unchanged (see Section 10 in Supplementary Material). We assessed the performance in the training set and in the three validation sets using all the risk prediction methods summarized in Table 1. The AURKA model was used as reference because of its low complexity. Both risk score and risk group predictions were compared.

3.2 Risk score prediction

This section presents the results for the performance assessment of the risk score predictions using the five performance criteria presented in Section 2.3. The specificity, for a sensitivity of 90%, is reported for all the risk prediction methods in Table 2. We observed values consistent with the literature (Buyse et al., 2006; Desmedt et al., 2007; Foekens et al., 2006; van de Vijver et al., 2002). GGI was the best method in two validation sets, yielding larger specificity values than the simplest model, AURKA. The increase was estimated at 17.4, 2.8 and 1.5% for the TBG, TAM and UPP datasets, respectively.

Specificity for a sensitivity of 90% for risk score prediction in the training set (VDX) and the three validation sets (TBG, TAM and UPP)

The performance assessments using the concordance index, the time-dependent ROC curve and the Brier score are given in Supplementary Figures 1, 2–5 and 6–9, respectively.
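The "specificity at 90% sensitivity" criterion used above can be sketched as follows: sweep thresholds over the risk score, keep those that classify at least 90% of the true events as high risk, and report the best specificity among them. Binary event labels are a simplification here; the paper works with censored survival data, and the function name is illustrative.

```python
# Specificity at a fixed target sensitivity: among all score thresholds
# that recover at least `target_sens` of the true events, report the
# largest attainable specificity.
def specificity_at_sensitivity(scores, events, target_sens=0.90):
    best_spec = 0.0
    n_pos = sum(events)
    n_neg = len(events) - n_pos
    for threshold in sorted(set(scores)):
        pred_high = [s >= threshold for s in scores]
        tp = sum(p and e for p, e in zip(pred_high, events))
        tn = sum((not p) and (not e) for p, e in zip(pred_high, events))
        if n_pos and tp / n_pos >= target_sens:
            best_spec = max(best_spec, tn / n_neg if n_neg else 0.0)
    return best_spec
```

Fixing the sensitivity makes the specificity values of different risk scores directly comparable, which is why Table 2 reports them at a common operating point.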
Looking at the most complex models, i.e. multivariate (MULTIV) survival (SURV) models using genome-wide data (GW) and GENE76, we observed overoptimistic performance estimates in the training set compared to the validation sets. Although we used advanced machine learning techniques to control overfitting, i.e. linear combination of univariate models or L1 regularization in Cox's proportional hazards model, these complex models failed to outperform simpler ones in a validation setting. The simple AURKA model was competitive in all the datasets. As mentioned earlier, we statistically compared the performance of all the models with AURKA (Table 3). Only GGI was significantly better in at least two validation sets whatever the accuracy measure. It is worth noting that AURKA outperformed KM, the benchmark model for the Brier score (see Section 2.3.4), in all the datasets except TBG.

Performance for risk score prediction in the training set (VDX) and the three validation sets (TBG, TAM and UPP)

As we did not observe a significant improvement using cross-validated dimension reduction strategies (see Section 2.2.1), we reported the performance of the methods using either RANKCV or PCACV in Supplementary Tables 3 and 4. We computed all the pairwise performance comparisons (see Section 9.1 in Supplementary Material) and observed again that complex models performed poorly in the validation sets compared to simpler ones. According to the IAUC performance criterion in the TBG dataset, the GW methods using the SURV phenotype are significantly better than the other risk prediction methods. However, these significant results are not confirmed in the other validation sets. We also noticed that, unlike GGI, GENE76 was consistently worse than most of the other considered methods.

3.3 Risk group prediction

This section presents the results for the performance assessment of the risk group predictions using the five performance criteria presented in Section 2.3.
We used the tertile to define the risk groups (see Section 2.2), leaving 33% of the patients in the low-risk group and the remaining 66% in the high-risk group. This proportion is usually observed in BC prognostication (Buyse et al., 2006; Desmedt et al., 2007; van't Veer et al., 2002; Wang et al., 2005). The sensitivity and the specificity for all the risk prediction methods are reported in Table 4. Again, we observed values for sensitivity and specificity that are consistent with the literature. GGI was the best method in two validation sets, yielding larger sensitivity and specificity values than the simplest model, AURKA. However, the difference is small except for the TBG dataset.

Sensitivity and specificity for risk group prediction in the training set (VDX) and the three validation sets (TBG, TAM and UPP)

The performance assessments of the risk group predictions using the concordance index, the HR and the Brier score are given in Supplementary Figures 10, 11 and 12–15, respectively. Similarly to the risk score prediction, the simple AURKA model was competitive in all the datasets. We statistically compared the performance of all the models with AURKA (see Table 5). In order to illustrate the gain of using GGI over AURKA for risk group prediction, we compared the survival curves for each dataset separately (see Supplementary Figs 16–19). Because neither AURKA nor GGI was fitted on VDX, this dataset is not considered as a training set. We observed a substantial improvement in risk group prediction in terms of survival probabilities in the low-risk group when using the risk groups predicted by GGI compared to AURKA. At 5 years, the increase in survival probabilities in the low-risk group was estimated at 2, 6, 0 and 1% for VDX, TBG, TAM and UPP, respectively.
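The tertile-based grouping described above can be sketched as: take (approximately) the 33rd percentile of the risk scores as a cutoff, label scores below it low risk and the rest high risk. This is an illustration only; percentile conventions differ between implementations, and the function name is hypothetical.

```python
# Tertile split of a continuous risk score into two groups: the lowest
# third of the scores is labelled "low" risk, the remaining two thirds
# "high" risk, matching the 33%/66% proportion used in the study.
def tertile_groups(scores):
    cutoff = sorted(scores)[len(scores) // 3]   # approximate 33rd percentile
    return ["low" if s < cutoff else "high" for s in scores]
```

Collapsing the continuous score to two groups makes the prediction clinically actionable, at the price of the information loss discussed later in the text.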
Performance for risk group prediction in the training set (VDX) and the three validation sets (TBG, TAM and UPP)

As we did not observe a significant improvement using cross-validated dimension reduction strategies, we reported the results of the methods using either RANKCV or PCACV in Supplementary Tables 5 and 6. We computed all the pairwise comparisons (see Section 9.2 in Supplementary Material) and made similar observations as for the risk score prediction. This holds true if we restrict the analysis to the TBG dataset. However, fewer significant differences between methods were detected. This can be explained by the loss of prognostic information due to the risk group creation.

We have recently witnessed intense research in BC prognostication due to a growing availability of high-dimensional genomic information that could potentially be used for risk prediction (Simon, 2005). The situation is often characterized by a relatively small number of patients and a large number of explanatory variables. This high-dimensional setting can be seen as an opportunity to create better risk prediction models compared to those based solely on clinical data and/or single markers. At the same time, it prevents a straightforward use of classical approaches of statistical modeling and data analysis. To the best of our knowledge, the present study is the first that has systematically compared the performance of state-of-the-art survival methods for BC prognostication from gene expression data by using a training/validation framework and several accuracy measures for survival prediction. The public availability of large microarray BC datasets and the recently introduced measures for performance assessment in survival analysis allow us to perform an in-depth comparative study in order to elucidate the key characteristics of a successful risk prediction method and to bring new insights into BC prognostication.
We used four large microarray BC datasets (one for training and three for validation) in order to compute unbiased estimates of five accuracy measures (see Section 2.3) for 13 risk prediction methods (see Section 2.2). As expected, we observed that complex methods, e.g. multivariate survival models fitted using dimension reduction from genome-wide data or GENE76, performed very well in the training set. However, the performances in the validation sets were poorer and they failed to consistently outperform the simplest model, i.e. AURKA, in spite of the use of machine learning strategies (namely combination of univariate models or regularization) to reduce the risk of overfitting. These results highlighted the fact that the loss of interpretability deriving from the use of overly complex methods in survival analysis of BC microarray data might not be sufficiently counterbalanced by an improvement in the quality of prediction. Interestingly, AURKA, the simplest model, defining the risk score as the expression of a single proliferation gene, performed well in all the survival prediction tasks. From Tables 3 and 5, we noticed that AURKA was significantly better than KM, the benchmark model ignoring all genetic information, except for the risk score prediction in TBG. Several other methods consistently outperformed KM, as shown in Supplementary Figures 22 and 25. These results are very encouraging for BC prognostication as it was shown that, in diffuse large B-cell lymphoma prognostication for instance, most prediction models were not better than KM in a validation setting [see discussion of Schumacher et al. (2007)]. Moreover, GGI was the only model that outperformed AURKA in at least two validation sets whatever the accuracy measure for risk score and risk group predictions.
As GGI is a linear combination of proliferation gene expressions (see Section 2.2), these results highlight the importance of proliferation, measured by gene expression profiling, in BC prognostication, and confirm the results of Sotiriou et al. (2006a, b). In order to go further in the comparison of the different risk prediction methods, we computed all the pairwise performance comparisons for all the methods (see Section 9 in Supplementary Material) and observed that models using only the biologically driven selection of genes of interest (BD) led to similar performance as models using genome-wide data (GW) with ranking (RANK) or principal components analysis (PCA). This suggests that finding a combination of relevant variables in a high-dimensional setting is a difficult task, since simple dimension reduction methods did not succeed in significantly improving the models built from genome-wide data compared to simpler models using a very small set of genes selected from the literature. Our results are consistent with earlier studies focused on the stability of feature selection for GW methods (Ein-Dor et al., 2005; Michiels et al., 2005). Moreover, we observed that models fitting the histological grade (HG) as phenotype performed globally better in the validation sets than models fitting survival data (SURV or TTE), suggesting that we did not succeed in capturing additional prognostic information by using survival models. The fact that the performance of GGI was the best in the validation sets reinforces this observation, as GGI was built using a method similar to our weighted combination of relevant genes for the histological grade (see Section 2.2). It is worth mentioning that all the accuracy measures were in good agreement in our comparative study.
Smaller significant differences in performance estimates were detected by the IAUC and IBSC criteria, probably due to the type of statistical test [paired Wilcoxon rank sum test for AUC(t) or BSC(t) compared to paired Student t-test for the C-index or HR, see Section 2.3]. A final analysis concerns the performance comparison with the classical indicators, such as the histological grade (Scarff and Torloni, 1968), AOL (Olivotto et al., 2005) and NPI (Galea et al., 1992). In our comparative study, GGI was consistently better than the histological grade (data not shown). We were not able to compute the risk scores for AOL and NPI on the TAM and UPP datasets due to lack of information. As shown in Supplementary Table 10, AURKA and GGI consistently outperformed AOL in the VDX and TBG datasets, except for the IAUC in TBG. However, this is not the case for NPI. These results suggest that the superiority of microarray-based risk prediction methods is not obvious and needs further investigation. In conclusion, our results challenge the use of microarray technology to screen the whole genome for BC prognostication in global populations of patients. Indeed, we found that models using a single gene or a small set of biologically driven selected genes yielded similar or even better performance than models fitted from genome-wide data. Although GGI, the model yielding the best performance in the validation sets, uses a set of 128 probes, it can be considered as a simple extension of AURKA, i.e. a quantification of proliferation using more genes. Moreover, the authors recently showed that similar performance could be achieved using only a small subset of these 128 probes (Durbecq et al., 2007). The use of high-sensitivity gene expression profiling technologies such as the reverse transcription polymerase chain reaction, in addition to being cheaper and more user-friendly, might improve the performance of these risk prediction models.
The relevance of proliferation for BC prognostication was previously reported by several other research groups. Indeed, Thomassen et al. (2007) found that cell cycle and cell proliferation represented the predominant overlaps in gene ontology categories of the nine prognostic signatures they compared. Yu et al. (2007) also conducted pathway analyses of five published prognostic gene signatures and found that the signatures had many pathways in common, such as cell cycle, regulation of cell cycle, mitosis, apoptosis, etc. Our group also investigated, in a large meta-analysis of publicly available gene expression data, how different gene lists may give rise to signatures with equivalent prognostic performance, and found, by dissecting these signatures according to the main molecular processes involved in breast cancer, that proliferation may be the common driving force of several prognostic signatures (Sotiriou et al., 2006a). Until now, the generation of prognostic signatures has been done on global populations of BC patients. However, since it is clear that breast cancer is a molecularly heterogeneous disease, with subgroups defined primarily by the estrogen (ER) and HER2 receptors (Perou et al., 2000; Sotiriou et al., 2003), prognosis could be refined for these molecularly homogeneous subgroups of patients. We showed, for example, in a meta-analysis recently published by our group, that proliferation is the strongest parameter predicting clinical outcome in the ER+/HER2− subgroup of patients only (a group of patients representing more than 66% of the global population), whereas immune response and tumor invasion appear to be the main biological processes associated with prognosis in the ER−/HER2− and HER2+ subgroups, respectively (Desmedt et al., 2008; Sotiriou et al., 2007). These recent results suggest that we could improve BC prognostication by restricting the genome-wide analysis to specific molecular subtypes. This will be the subject of further investigations.
Supplementary Material

The authors would like to thank Martin Schumacher and Thomas Gerds for providing the R package, and Yann-Aël Le Borgne for his constructive comments.

Funding: This work was supported by the Belgian National Foundation for Scientific Research FNRS (B.H.-K., C.D., C.S.), and by the MEDIC Foundation (C.S.).

Conflict of Interest: C. Sotiriou, M. Delorenzi and M. Piccart are named inventors on a patent application for the Gene expression Grade Index used in this study. There are no other conflicts of interest.

^1 Raw gene expression and clinical data are publicly available in the GEO (Gene Expression Omnibus) public database, and the Sweave version of the article, including the standalone R code (R Development Core Team, 2007), is available from http://www.ulb.ac.be/di/map/bhaibeka/survcompaper/.

• Akritas MG. Nearest neighbor estimation of a bivariate distribution under random censoring. Ann. Stat. 1994;22:1299–1327.
• Barrett T, et al. NCBI GEO: mining millions of expression profiles – database and tools. Nucleic Acids Res. 2005;33:D562. [PMC free article] [PubMed]
• Bontempi G. A blocking strategy to improve gene selection for classification of gene expression data. IEEE/ACM Trans. Comput. Biol. Bioinform. 2007;4:293–300. [PubMed]
• Brier GW. Verification of forecasts expressed in terms of probabilities. Mon. Weather Rev. 1950;78:1–3.
• Buyse M, et al. Validation and clinical utility of a 70-gene prognostic signature for patients with node-negative breast cancer. J. Natl. Cancer Inst. 2006;98:1183–1192. [PubMed]
• Cox DR. Regression models and life tables. J. R. Stat. Soc. Ser. B. 1972;34:187–220.
• Desmedt C, et al. Strong time-dependency of the 76-gene prognostic signature for node-negative breast cancer patients in the transbig multi-centre independent validation series. Clin. Cancer Res. 2007;13:3207–3214. [PubMed]
• Desmedt C, et al. Biological processes associated with breast cancer clinical outcome depend on the molecular subtypes. Clin. Cancer Res. 2008; in press. [PubMed]
• Dudoit S, et al. Comparison of discrimination methods for the classification of tumors using gene expression data. J. Am. Stat. Assoc. 2002;97:77–87.
• Durbecq V, et al. Transforming genomic grade index (GGI) into a user-friendly qRT-PCR tool which will assist clinicians and patients in optimizing treatment of early breast cancer. J. Clin. Oncol. 2007;25:21058.
• Eifel P, et al. National Institutes of Health consensus development conference statement: adjuvant therapy for breast cancer. J. Natl. Cancer Inst. 2001;93:979–989. [PubMed]
• Ein-Dor L, et al. Outcome signature genes in breast cancer: is there a unique set? Bioinformatics. 2005;21:171–178. [PubMed]
• Foekens JA, et al. Multicenter validation of a gene expression–based prognostic signature in lymph node–negative primary breast cancer. J. Clin. Oncol. 2006;24 [PubMed]
• Galea MH, et al. The Nottingham prognostic index in primary breast cancer. Breast Cancer Res. Treat. 1992;22:207–219. [PubMed]
• Gentleman R. Reproducible research: a bioinformatics case study. Stat. Appl. Genet. Mol. Biol. 2005;4 [PubMed]
• Gerds TA, Schumacher M. On functional misspecification of covariates in the Cox regression model. Biometrika. 2001;88:572–580.
• Gerds TA, Schumacher M. Consistent estimation of the expected Brier score in general survival models with right-censored event times. Biometrical J. 2006;6:1029–1040. [PubMed]
• Goldhirsch A, et al. Meeting highlights: updated international expert consensus on the primary therapy of early breast cancer. J. Clin. Oncol. 2003;21:3357–3365. [PubMed]
• Graf E, et al. Assessment and comparison of prognostic classification schemes for survival data. Stat. Med. 1999;18:2529–2545. [PubMed]
• Haibe-Kains B, et al. Computational intelligence in clinical oncology: lessons learned from an analysis of a clinical study. In: Smolinski TG, editor.
Applications of Computational Intelligence in Biomedicine and Bioinformatics: Current Trends and Open Problems. Studies in Computational Intelligence. Berlin/Heidelberg: Springer; 2008. pp. 237–268.
• Hanahan D, Weinberg RA. The hallmarks of cancer. Cell. 2000;100:57–70. [PubMed]
• Harrell FE, et al. Tutorial in biostatistics: multivariable prognostic models: issues in developing models, evaluating assumptions and adequacy, and measuring and reducing errors. Stat. Med. 1996;15:361–387. [PubMed]
• Heagerty PJ, et al. Time-dependent ROC curves for censored survival data and a diagnostic marker. Biometrics. 2000;56:337–344. [PubMed]
• Hedges L, Olkin I. Statistical methods for meta-analysis. J. Am. Stat. Assoc. 1987;82:350–351.
• Kaplan EL, Meier P. Nonparametric estimation from incomplete observations. J. Am. Stat. Assoc. 1958;53:457–481.
• Kittler J, et al. On combining classifiers. IEEE Trans. Pattern Anal. Mach. Intell. 1998;20:226–238.
• Lewis S, Clarke M. Forest plots: trying to see the wood and the trees. Brit. Med. J. 2001;322:1479–1480. [PMC free article] [PubMed]
• Loi S, et al. Definition of clinically distinct molecular subtypes in estrogen receptor positive breast carcinomas through use of genomic grade. J. Clin. Oncol. 2007;25:1239–1246. [PubMed]
• Michiels S, et al. Prediction of cancer outcome with microarrays: a multiple random validation strategy. Lancet. 2005;365:488–492. [PubMed]
• Miller LD, et al. An expression signature for p53 status in human breast cancer predicts mutation status, transcriptional effects, and patient survival. Proc. Natl Acad. Sci. USA. 2005;102:13550–13555. [PMC free article] [PubMed]
• Olivotto IA, et al. Population-based validation of the prognostic model Adjuvant! for early breast cancer. J. Clin. Oncol. 2005;23:2716–2725. [PubMed]
• Park MY, Hastie T. L1 regularization path algorithm for generalized linear models. J. R. Stat. Soc. 2007;69:659–677.
• Pencina MJ, D'Agostino RB.
Overall C as a measure of discrimination in survival analysis: model specific population value and confidence interval estimation. Stat. Med. 2004;23:2109–2123. [PubMed]
• Perou CM, et al. Molecular portraits of human breast tumours. Nature. 2000;406:747–752. [PubMed]
• R Development Core Team. R: A language and environment for statistical computing. Vienna, Austria: R Foundation for Statistical Computing; 2007.
• Scarff RW, Torloni H. Histological typing of breast tumors. International histological classification of tumours. 1968;2:13–20.
• Schumacher M, et al. Assessment of survival prediction models based on microarray data. Bioinformatics. 2007;23:1768–1774. [PubMed]
• Simon R. Roadmap for developing and validating therapeutically relevant genomic classifiers. J. Clin. Oncol. 2005;23:7332–7341. [PubMed]
• Sotiriou C, Piccart MJ. Taking gene-expression profiling to the clinic: when will molecular signatures become relevant to patient care? Nat. Rev. Cancer. 2007;7:545–553. [PubMed]
• Sotiriou C, et al. Breast cancer classification and prognosis based on gene expression profiles from a population-based study. Proc. Natl Acad. Sci. USA. 2003;100:10393–10398. [PMC free article] [PubMed]
• Sotiriou C, et al. Comprehensive molecular analysis of several prognostic signatures using molecular indices related to hallmarks of breast cancer: proliferation index appears to be the most significant component of all signatures. In: Lippman ME, editor. Breast Cancer Research and Treatment. Vol. 100. Netherlands: Springer; 2006a. p. S86.
• Sotiriou C, et al. Gene expression profiling in breast cancer: understanding the molecular basis of histologic grade to improve prognosis. J. Natl Cancer Inst. 2006b;98:262–272. [PubMed]
• Sotiriou C, et al. Biological mechanisms that trigger breast cancer (bc) tumor progression are molecular subtype dependent. ASCO Annual Meeting Proceedings. J. Clin. Oncol. 2007;25:10581.
• Swets JA. Measuring the accuracy of diagnostic systems. Science.
1988;240:1285–1293. [PubMed]
• Therneau TM, Grambsch PM. Modeling Survival Data: Extending the Cox Model. In: Gail M, editor. Statistics for Biology and Health Series. New York: Springer; 2000.
• Thomassen M, et al. Comparison of gene sets for expression profiling: prediction of metastasis from low-malignant breast cancer. Clin. Cancer Res. 2007;13:5355–5360. [PubMed]
• van de Vijver MJ, et al. A gene expression signature as a predictor of survival in breast cancer. N. Engl. J. Med. 2002;347:1999–2009. [PubMed]
• van Houwelingen H, et al. Cross-validated Cox regression on microarray gene expression data. Stat. Med. 2006;25:3201–3216. [PubMed]
• van't Veer LJ, et al. Gene expression profiling predicts clinical outcome of breast cancer. Nature. 2002;415:530–536. [PubMed]
• Varma S, Simon R. Bias in error estimation when using cross-validation for model selection. BMC Bioinformatics. 2006;7:91. [PMC free article] [PubMed]
• Wang Y, et al. Gene-expression profiles to predict distant metastasis of lymph-node-negative primary breast cancer. Lancet. 2005;365:671–679. [PubMed]
• Wilcoxon F. Individual comparisons by ranking methods. Biometrics Bull. 1945;1:80–83.
• Yu J, et al. Pathway analysis of gene signatures predicting metastasis of node-negative primary breast cancer. BMC Cancer. 2007;7:182. [PMC free article] [PubMed]

Articles from Bioinformatics are provided here courtesy of Oxford University Press
Source: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2553442/?tool=pubmed (retrieved 2014-04-19)
Bounding the modular discriminant of an elliptic curve in the j-invariant

Consider an elliptic curve $X=\mathbf{C}/(\mathbf{Z}+\tau \mathbf{Z})$, where $\tau$ is an element of the complex upper half plane. We define $$\Vert \Delta\Vert(X) = (\Im \tau)^6 \vert q\prod_{k=1}^\infty (1-q^k)^{24}\vert,$$ where we write $q=\exp(2\pi i \tau)$ as usual. This is called the modular discriminant of $X$. Assume $j$ is algebraic, i.e., $X$ can be defined over a number field.

Question. Can the function $\log\Vert \Delta \Vert(-)$ (on the moduli space of elliptic curves over $\mathbf{C}$) be bounded (from above or below) in terms of the (height of the) $j$-invariant?

Firstly, one should be able to answer this question ineffectively. That is, to give a yes or no answer to the question. An effective bound (if it exists) might be a bit harder to obtain.

I heavily edited this old question. Therefore, the first four comments below might not make sense anymore.

elliptic-curves ag.algebraic-geometry riemann-surfaces

It doesn't seem that the morphism $\pi$ plays a role in your question, or am I missing something? – François Brunault Dec 7 '10 at 12:09

Actually it probably does. I first thought I had solved the question simply by taking the Weierstrass function and using that the modular discriminant is $s^2(s-1)^2$, but now I doubt this being correct. That's why I removed it again. The point is that the degree of $\pi$ should come into play at some point, right? – Ari Dec 8 '10 at 8:20

Ow, maybe I removed it too quickly and it was actually correct.... – Ari Dec 8 '10 at 8:24

Why do you take the absolute value of the imaginary part of $\tau$? It looks like it's already a positive real (which is more than I can say for that product in $q$). – S. Carnahan♦ Dec 8 '10

Height only makes sense if $j$ is algebraic. Your formulation implies $X$ is any elliptic curve over $\mathbb{C}$.
– Felipe Voloch Jul 25 '11 at 17:38

2 Answers

Accepted answer:

The new $\| \Delta \|$, defined as $\mathop{\rm Im}(\tau)^6$ times the absolute value of the usual modular form $\Delta$, is invariant under the full modular group $\Gamma = {\rm PSL}_2({\bf Z})$ acting on the upper half-plane $H$. This $\| \Delta \|$ is nonzero and continuous on the quotient $H / \Gamma$, and approaches zero exponentially as $\tau$ approaches the one cusp of $H / \Gamma$. Hence $\|\Delta\|$ is uniformly bounded above, without any hypothesis on $j$; and $\|\Delta\|$ is bounded below if we have an upper bound on $|j|$. The latter bound is completely effective, namely $$ \| \Delta \| \gg (\log|j|)^6 / |j| {\rm\ \ \ \ as\ \ \ \ } |j| \rightarrow \infty, $$ and indeed $\| \Delta \| \sim C (\log|j|)^6 / |j|$ for some universal constant $C$, which is $(2\pi)^{-6}$ if I did this right. Now if you bound the height of $j$ from above then you impose an upper bound on the absolute value of any conjugate of $j$, and thus on $\| \Delta \|$.

Whether and how this lower bound depends on the height of $j$ then hinges on which flavor of height you're using, i.e. whether you normalize according to the degree $[{\bf Q}(j) : {\bf Q}]$, and whether you take logarithms. There is no such bound in the other direction: large height of $j$ does not force small $\| \Delta \|$ because it does not force $j$ to have a large conjugate (e.g. $j$ could be $1 / 10^{100}$).

How does one obtain the effective lower bound for $\Vert \Delta\Vert$ in terms of the absolute value of $j$? – Ari Jul 26 '11 at 9:10

@Ariyan: As $j(\tau) \rightarrow \infty$ with $\tau$ in the usual fundamental domain, $q \rightarrow 0$ with $q \sim 1/j$. Thus $|q| \sim 1/|j|$. Also Im$(\tau) = \log(1/|q|)/(2\pi) \sim \log|j|/(2\pi)$. This accounts for two of the factors of $\|\Delta\|$, and the remaining factor $\prod_{n=1}^\infty (1-q^n)^{24}$ approaches $1$ as $q \rightarrow 0$.
This gives the asymptotic formula for $\|\Delta\|$ in terms of $|j|$, and all the error estimates are readily seen to be effective. – Noam D. Elkies Jul 26 '11 at 10:42

Thank you very much. This is a great answer. Do you think there's any hope in making the uniform upper bound on $\Vert \Delta \Vert$ explicit? – Ari Jul 26 '11 at 13:00

You're welcome! Getting an explicit upper bound on $\|\Delta\|$ is a calculus exercise. We may assume $\tau = x+iy$ with $y^2 \geq 3/4$ (fundamental domain). Then the factor $\prod_{n=1}^\infty (1-q^n)^{24}$ of $\|\Delta\|$ is at most $\prod_{n=1}^\infty (1+e^{-\sqrt{3}\pi n})^{24}$, while the rest is $y^6 e^{-2\pi y}$, which is maximized at $y = 3/\pi$, etc. It seems (and is probably known) that in fact the max occurs at the sharper corner $\tau = (\pm 1 + \sqrt{3}i)/2$ of the fundamental domain, where $\|\Delta\| = .002+ = (2\pi^2/9)^6 / \Gamma(2/3)^{36}$. – Noam D. Elkies Jul 27 '11 at 1:55

Second answer:

As it stands, I think this question is still too vague to be answerable in generality. What kind of expression are you permitting for the bound? Certainly one can construct an artificial bound which doesn't even involve $j$ at all, which would be completely silly and certainly not what you have in mind.

As a first step, it's easy to see that no rational function of $j$ bounds $\Delta$. Indeed, if $R$ is a rational function such that $|\Delta| \leq |R(j)|$ everywhere on $X=\mathbb{H}/{\rm PSL}(2, \mathbb{Z})$, then the bounded meromorphic function $\Delta/R(j)$ must be a constant. But this implies that $\Delta$ has weight $0$, which is false.

Thank you for this very important remark. It made me realize that I'm actually looking for a bound on the logarithm of the modular discriminant. Sorry for the vagueness. I'm just interested in this particular discriminant $\log \Vert \Delta \Vert$, because from an Arakelov-theoretic point of view it coincides up to a constant with the Faltings delta invariant.
– Ari Jul 25 '11 at 16:52 add comment Not the answer you're looking for? Browse other questions tagged elliptic-curves ag.algebraic-geometry riemann-surfaces or ask your own question.
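The closed form in the last comment can be checked numerically; here is a quick sanity check of my own (not from the thread), taking $\Delta$ in the normalization $q\prod_{n\ge 1}(1-q^n)^{24}$ used above:

```python
import math

# At the corner tau = (1 + sqrt(3) i)/2 we have q = exp(2 pi i tau) = -exp(-pi sqrt(3)),
# a negative real number, so q * prod (1 - q^n)^24 is real.
y = math.sqrt(3) / 2.0
q = -math.exp(-2.0 * math.pi * y)

prod = 1.0
for n in range(1, 200):          # |q| is about 4e-3, so 200 terms is far more than enough
    prod *= (1.0 - q ** n) ** 24

delta_norm = (y ** 6) * abs(q * prod)      # ||Delta|| = Im(tau)^6 |Delta(tau)|
closed_form = (2.0 * math.pi ** 2 / 9.0) ** 6 / math.gamma(2.0 / 3.0) ** 36
```

Both numbers agree to machine precision, at about 0.00203, consistent with the quoted value $.002+$.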
whole wheat percentage
February 22, 2009 - 10:36am

I was thinking about how we often describe mixtures of flour in ratios, such as 50/50 whole wheat/white, 20/80, etc. This is referring to the ratio of a measure of whole wheat flour to a measure of white flour.

In reality, though, whole wheat flour is white flour -- 100%, in fact -- plus a little extra. Or, perhaps more accurately, white flour is 70% of the whole wheat grain.

So, I'm curious as to the most accurate way to describe a "50/50" mix. What about a 20/80 mix (white/ww)? 80/20? Does anyone have the math power to figure these out?

This is purely an academic exercise, and probably not helpful for most home bakers. However, I have learned that when my left brain cries out for sustenance, it is best to feed him before he gets out of hand and does things like making lists and organizing.
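For what it's worth, here is one way to make the question precise, as a sketch (my own framing; the 70% figure is the approximate extraction rate mentioned above, not an exact constant): treat white flour as the roughly 70% of the grain that remains after the bran and germ are removed, so whole wheat flour is white flour plus a roughly 30% share of bran and germ. Any blend can then be described by its bran-and-germ fraction:

```python
# Sketch: describe a whole-wheat/white blend by how much bran + germ it
# contains, assuming white flour is a 70% extraction of the grain (so
# bran + germ are the remaining 30% of whole wheat flour by weight).
def bran_germ_fraction(ww_parts, white_parts, extraction=0.70):
    total = ww_parts + white_parts
    return ww_parts * (1.0 - extraction) / total
```

By this accounting a 50/50 ww/white mix is 15% bran and germ, so it carries half the bran and germ of straight whole wheat flour, and a 20/80 ww/white mix carries only a fifth (6% vs 30%).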
[SciPy-Dev] chi-square test for a contingency (R x C) table Neil Martinsen-Burrell nmb@wartburg.... Thu Jun 17 19:43:18 CDT 2010 On 2010-06-17 11:59, Bruce Southey wrote: > On 06/17/2010 10:45 AM, josef.pktd@gmail.com wrote: >> On Thu, Jun 17, 2010 at 11:31 AM, Bruce Southey<bsouthey@gmail.com> wrote: >>> On 06/17/2010 09:50 AM,josef.pktd@gmail.com wrote: >>>> On Thu, Jun 17, 2010 at 10:41 AM, Warren Weckesser >>>> <warren.weckesser@enthought.com> wrote: >>>>> Bruce Southey wrote: >>>>>> On 06/16/2010 11:58 PM, Warren Weckesser wrote: >>>>>> The handling for a one way table is wrong: >>>>>> >>>print 'One way', chisquare_nway([6, 2]) >>>>>> (0.0, 1.0, 0, array([ 6., 2.])) >>>>>> It should also do the marginal independence tests. >>>>> As I explained in the description of the ticket and in the docstring, >>>>> this function is not intended for doing the 'one-way' goodness of fit. >>>>> stats.chisquare should be used for that. Calling chisquare_nway with a >>>>> 1D array amounts to doing a test of independence between groupings but >>>>> only giving a single grouping, hence the trivial result. This is >>>>> intentional. >>> In expected-nway, you say that "While this function can handle a 1D >>> array," but clearly it does not handle it correctly. >>> If it was your intention not to do one way tables, then you *must* check >>> the input and reject one way tables! >>>>> I guess the question is: should there be a "clever" chi-square function >>>>> that figures out what the user probably wants to do? >>> My issue is that the chi-squared test statistic is still calculated in >>> exactly the same way for n-way tables where n>0. So it is pure >>> unnecessary duplication of functionality if you require a second >>> function for the one way table. I also prefer the one-stop shopping approach >> just because it's chisquare doesn't mean it's the same kind of tests. 
>> This is a test for independence or association that only makes sense >> if there are at least two random variables. > Wrong! > See for example: > http://en.wikipedia.org/wiki/Pearson's_chi-square_test > "Pearson's chi-square is used to assess two types of comparison: tests > of goodness of fit and tests of independence." > The exact same test statistic is being calculated just that the > hypothesis is different (which is the user's problem not the function's > problem). So please separate the hypothesis from the test statistic. It is only the exact same test statistic if we know the expected cell counts. How these expected cell counts are determined depends completely on the type of test that is being carried out. In a goodness-of-fit test (chisquare_oneway) the proportions of each cell must be specified in the null hypothesis. For an independence test (chisquare_nway), the expected cell counts are computed from the given data and the null hypothesis of independence. The fact that the formula involving observed and expected numbers is the same should not obscure the fact that the expected numbers come from two completely different assumptions in the n=1 and n>1 cases. Can you explain how the expected cell counts should be determined in the 1D case without the function making assumptions about the user's null hypothesis? I believe that we CANNOT separate the test statistic from the user's null hypothesis and that is the reason that chisquare_oneway and chisquare_nway should be separate functions. The information required to properly do a goodness-of-fit test is qualitatively different than that required to do an independence test. I support your suggestion to reject 1D arrays as input for chisquare_nway. (With appropriate checking for arrays such as np.array([[[1, 2, 3, 4]]].) >> I don't like mixing shoes and apples. > Then please don't. Great. I'm glad to see that we all agree that chisquare_oneway and chisquare_nway should remain separate functions. 
:) More information about the SciPy-Dev mailing list
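The point debated in this thread, that the formula sum((O-E)^2/E) is shared while the expected counts come from different assumptions, can be sketched in a few lines (an illustration only; the names here are mine, not the proposed scipy API):

```python
# Shared formula: X^2 = sum over cells of (observed - expected)^2 / expected.
def chisq_stat(observed, expected):
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# Goodness of fit (1-D): the expected counts come from proportions the
# user must supply in the null hypothesis.
obs = [6, 2]
null_props = [0.5, 0.5]                       # H0 chosen by the user
exp_gof = [sum(obs) * p for p in null_props]  # [4.0, 4.0]

# Independence (2-D): the expected counts are computed from the table's
# own margins; no proportions are supplied by the user.
table = [[10, 20], [30, 40]]
row_tot = [sum(row) for row in table]
col_tot = [sum(col) for col in zip(*table)]
grand = sum(row_tot)
exp_ind = [[r * c / grand for c in col_tot] for r in row_tot]
```

This is why the information required from the user differs between the two tests even though the statistic looks identical.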
[Scipy-tickets] [SciPy] #1466: More accurate roots/weights for Gauss-Hermite quadrature
SciPy Trac scipy-tickets@scipy....
Mon Jun 27 23:28:03 CDT 2011

#1466: More accurate roots/weights for Gauss-Hermite quadrature
Reporter: Bogdan | Owner: pv
Type: enhancement | Status: new
Priority: normal | Milestone: Unscheduled
Component: scipy.special | Version: devel
Keywords: | System: OSX 10.6.7, Python 2.6.1 (the one provided with system), Numpy 2.0.0.dev-3071eab and SciPy 0.10.0dev (installed via SciPy Superpack)

I am using scipy.special.orthogonal.hermite and .h_roots to perform decomposition into basis of harmonic oscillator eigenfunctions (which is essentially G-H quadrature). I noticed that decomposition is not behaving well when the number of modes is big (>40 in my case). After some testing I narrowed it down to inaccurate polynomial roots provided by h_roots(). As far as I understand from the code, the scipy implementation uses the Golub-Welsch algorithm, which is general and fast because it employs linear algebra. But in our imperfect world with finite precision numbers it starts to lose stability with large matrix sizes. So I took the recursive root finding algorithm from Numerical Recipes (3rd ed., section 4.6.1) and it showed much better accuracy (see attached script with comparison). I understand that this algorithm is less general (because it uses good guesses for first roots of Hermite polynomials) and definitely slower than G-W, but there are applications when its accuracy is necessary. I think it should be present in SciPy, at least as an alternative to the current

Ticket URL: <http://projects.scipy.org/scipy/ticket/1466>
SciPy <http://www.scipy.org>
SciPy is open-source software for mathematics, science, and engineering.

More information about the Scipy-tickets mailing list
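The recursive approach the reporter refers to can be sketched briefly (my own illustration, not the attached comparison script or the Numerical Recipes code): evaluate H_n and its derivative via the three-term recurrence and polish a nearby initial guess with Newton's method.

```python
# Sketch of recurrence-based root polishing for (physicists') Hermite
# polynomials: H_{k+1}(x) = 2x H_k(x) - 2k H_{k-1}(x), and H_n'(x) = 2n H_{n-1}(x).
def hermite_and_deriv(n, x):
    h_prev, h = 1.0, 2.0 * x                 # H_0, H_1
    for k in range(1, n):
        h_prev, h = h, 2.0 * x * h - 2.0 * k * h_prev
    return h, 2.0 * n * h_prev               # H_n(x), H_n'(x)

def polish_root(n, x0, iters=60):
    x = x0
    for _ in range(iters):
        h, dh = hermite_and_deriv(n, x)
        x -= h / dh                          # Newton step
    return x
```

A production version would also supply the good initial guesses for all n roots and compute the quadrature weights; this only shows the refinement step that avoids the large eigenvalue problem.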
Random walk on the hypercube

Let $H_N=\{0,1\}^N$ be the $N$-dimensional hypercube. We define the following random walk $X_n$ on $H_N$:
• start from a point $x \in H_N$
• pick at random an integer $k$ in $[1,N-1]$ and exchange $x(k)$ and $x(k+1)$
• go on like that...

For instance, with $N=4$, the random walk looks like: $X_0=0011$, $X_1=0101$, $X_2=0110$, ...

Do you know any article/textbook where this random walk is studied in detail? I have been looking in many classical textbooks without success... Beyond this general request, I am interested in the following question: take a given point $x^* \in H_N$ and define $d_n$ the Hamming distance between $X_n$ and $x^*$. Is it possible to estimate the mean of the first time when $d_n$ hits $0$ (as a function of $x^*$ and the starting point of the chain $X_0$)? Of course we suppose that $X_0$ and $x^*$ have the same number of ones... I hope you will enjoy this problem. Thank you!

This looks like the sort of thing that R. Graham et al would have studied at Bell Labs. Does a paper such as www-stat.stanford.edu/~cgates/PERSI/papers/… provide any guidance? – Benjamin Dickman Mar 15 '13 at 21:27

"Go on like that" is not entirely clear to me. Do we pick a new random $k$ at each step? Or do we proceed $k$, $k+1$, ..., which is consistent with the example you gave? Also, I suggest that it will probably help to visualize the problem to think of these 0,1 strings as lattice paths from $(0,0)$ to $(t,n-t)$, where we read 1's as horizontal steps and 0's as vertical steps. The basic swap move which you describe looks at two adjacent steps. If they are both horizontal or both vertical, nothing happens; otherwise, a single box is added or subtracted from the region under the path. – Hugh Thomas Mar 16 '13 at 0:42

One upper bound would be the time to reach a particular permutation by adjacent transpositions in the symmetric group. However, this value should be much lower, since you only need to hit one of $k!(n-k)!$ permutations. – Douglas Zare May 18 '13 at 22:56

Tagged: random-walk
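Since the walk conserves the number of ones, it is easy to experiment with numerically; here is a minimal simulation sketch (mine, not from the thread):

```python
import random

# The move swaps a uniformly chosen adjacent pair x(k), x(k+1), so the
# number of ones in the string is conserved along the walk.
def step(x, rng):
    k = rng.randrange(len(x) - 1)        # positions k and k+1, 0-indexed
    y = list(x)
    y[k], y[k + 1] = y[k + 1], y[k]
    return tuple(y)

def hamming(x, y):
    return sum(a != b for a, b in zip(x, y))

def hitting_time(x0, target, seed=0, max_steps=10 ** 6):
    # first time the Hamming distance to `target` hits 0, on one sample path
    rng = random.Random(seed)
    x, t = x0, 0
    while x != target:
        if t >= max_steps:
            return None
        x = step(x, rng)
        t += 1
    return t
```

Averaging hitting_time over many seeds gives a Monte Carlo estimate of the mean first-passage time asked about, which can be compared against any conjectured formula in $x^*$ and $X_0$.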
Autonomous perturbations of LISA orbits
by Pucacco, Giuseppe and Bassan, Massimo and Visco, Massimo
27 pages, 20 figures

We investigate autonomous perturbations on the orbits of LISA, namely the effects produced by fields that can be expressed only in terms of the position, but not of time, in the Hill frame. This first step in the study of the LISA orbits has been the subject of recent papers which implement analytical techniques based on a “post-epicyclic” approximation in the Hill frame to find optimal unperturbed orbits. The natural step forward is to analyze the perturbations to purely Keplerian orbits. In the present work a particular emphasis is put on the tidal field of the Earth, assumed to be stationary in the Hill frame. An accurate interpretation of the global structure of the perturbed solution sheds light on possible implications for injection in orbit when the time base-line of the mission is longer than that assumed in previous papers. Other relevant classes of autonomous perturbations are those given by the corrections to the Solar field responsible for a slow precession and a global stationary field, associated to sources like the interplanetary dust or a local dark matter component. The inclusion of simple linear contributions in the expansion of these fields produces secular solutions that can be compared with the measurements and possibly used to evaluate some morphological property of the perturbing components.
FOM: Hersh's incoherent attack on formalism and logicism
Stephen G Simpson simpson at math.psu.edu
Thu Oct 1 12:16:38 EDT 1998

Reply to Hersh 30 Sep 1998 16:19:42.

1. Frege and modern logic

Hersh writes:
> You say "you must dismiss Frege's work as a failure." Not at all.
> I wrote, page 141 of W.I.M.R.: "Frege's introduction of quantifiers is
> considered the birth of modern logic."

Do *you* consider Frege's work to be the birth of modern logic? Do *you* think modern logic is of value for philosophy of mathematics? You don't seem to think so.

2. The axiom of infinity: Hilbert's program and formalism

> Your research program doesn't respond to my remark.

Which remark, and which research program? One of your remarks (12 Sep 1998 18:06:45) was as follows:

> One famous difficulty is [the] axiom of infinity. You can't do
> modern math without it.

One of "my" research programs (actually it involves a number of people) responds directly to your remark. It does so by examining the extent to which modern mathematics is reducible to finitism. This is in the context of Hilbert's program, which you have dismissed as a failure. Are you willing to rescind your dismissal? Are you willing to examine evidence against your remark?

> You say, "If you are unwilling to study the role of infinity in
> mathematics, then how can you expect anyone to take your comments
> on it seriously?"
> My comment, that the axiom of infinity is not intuitively
> plausible as an axiom of logic, is not mine. It has been made by
> others ....

Even if the comment has been made by others, you repeated it, so you must take some responsibility for it. You can't hide behind others. However, I wasn't referring to that particular comment. (More on that comment below, in connection with logicism.) Rather, I was referring to another of your comments concerning the axiom of infinity:

> One famous difficulty is [the] axiom of infinity. You can't do
> modern math without it.
I say again: How can you expect anyone to take this comment seriously, if you are not willing to examine evidence for and against it?

> You say, "You present a caricature of Hilbert's work, then
> attack the caricature." No. I used the word "formalism" in the
> common, colloquial sense, not in Hilbert's sense. There is no
> caricature and no attack.

Here you seem to be evading the fact that Hilbert is generally regarded as the originator of formalism. Do you dispute this conventional view of the history of formalism? But, all right, let's take you at your word and assume that you never attacked Hilbert's formalism. Let's assume that you were attacking somebody else's formalism. Who are these hitherto unnamed formalists? Do you recognize a difference between their views and those of Hilbert? Or are you merely attacking coffee-room chatter, as Martin Davis suggested?

> You say, "You were arguing that it's OK to dismiss Hilbert's
> views without a hearing." As I keep trying to explain, I never
> referred to Hilbert's views at all. The word formalism has more
> than one meaning. I can't believe you're unaware of that.

I'm *not* aware of that. I accept the conventional view that Hilbert is the originator of formalism. If you have some other kind of formalism in mind, please tell me who originated it and how it differs from Hilbert's formalism.

Here are the real questions: Do *you* think Hilbert's program is of any actual or potential value for philosophy of mathematics? Do *you* think the research of your other formalists (who are they?) has any actual or potential value for philosophy of mathematics?

3. The axiom of infinity: set theory and logicism

Hersh writes:
> You seem incapable of dealing with this well known fact.

Here you are referring to the well known fact that the axiom of infinity is not generally regarded as a logical axiom. I accept that fact, and I understand the reasons for it, at least in the context of Russell's type theory and ZF set theory.
In this sense, one could say that these theories do not represent a *total* vindication of the logicist program. But it's going too far to say, as you do, that the logicist program as a whole is a mistake or a failure.

By the way, there is an alternative set theory known as New Foundations (= NF), going back to Quine. I don't know too much about it, but my impression is that it attempts to carry out the logicist program by deriving the axiom of infinity and others from some logical principles. Naturally there are costs to this. As I say, I am not an expert on this. The FOM subscriber list includes some experts on NF: Thomas Forster, Randall Holmes.

Also, there is some recent work of Harvey Friedman about motivating the axioms of set theory in a more logical way, as an outcome of a theory of mathematical predication. Do you regard this kind of f.o.m. research as legitimate? Do you regard it as having potential interest for philosophy of mathematics?

4. Demonization

Hersh writes:
> what do you mean, "demonize"? When you attribute such motives to
> me, it's I who am being demonized. To criticize or even reject
> foundationalism isn't demonizing anything. It's what people do
> in the course of finding their philosophical beliefs.

You have gone beyond what I regard as legitimate philosophical criticism. You have attacked foundationalism as anti-"humanistic", anti-life in a sense, and you have tried to artificially link foundationalism with religion and with authoritarian or totalitarian politics. I don't think "demonize" is too strong a term to describe your behavior.

> It's weird to tell me I regard the pursuit of certainty as "evil
> incarnate."

It's not at all weird to tell you this, in light of your attempt to demonize foundationalism, on the explicit grounds that the foundationalists (Frege, Brouwer, Hilbert, ...) were motivated by a quest for certainty.

> to be fair, you'll have to accuse Sol of demonizing, attacking,
> and being "so hostile" to fom!!

Not at all.
Sol Feferman has never attempted to demonize f.o.m. by saying that it is anti-humanistic and linking it to totalitarian politics, as you routinely do. 5. Misinterpretation Reuben Hersh writes: > I asked why you consistently persist in misinterpreting me. > You didn't answer, of course. OK, I'll answer. The answer is that I don't accept the premise of your question. The premise of your question is that I am misinterpreting you. I don't accept that premise. I don't think I am misinterpreting you. I think my interpretation of you is correct. To put it colloquially, I think I've "got your number". -- Steve More information about the FOM mailing list
Figure 4 shows an example of a cross-correlation function for a sum of squared differences. The reference image is a bright galaxy, and the compared image is a shifted version of the reference with a translation vector of (+5,-5). This plot clearly shows that the cross-correlation criterion is minimal at the expected point. A search for a local minimum will yield the right translation vector. At this point, a subpixel offset detection is applied by fitting a parabola to the cross-correlation signal in x and y, and looking for the subpixel minimum. This method is theoretically precise to about 1/100 pixel in the ideal case of noiseless images. The precision obtained with SOFI data is about 1/10 pixel.

Nicolas Devillard
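In one dimension, the parabola fit mentioned above reduces to a three-point vertex formula; here is a minimal sketch (my reconstruction of the standard formula, not the actual SOFI/jitter code):

```python
# Given the criterion values at the integer minimum and at its two
# neighbours, fit a parabola through the three points and return the
# fractional offset of the vertex, which lies in [-0.5, 0.5] when the
# centre sample really is the discrete minimum. Applied once along x
# and once along y to get the subpixel translation.
def parabolic_subpixel_offset(f_left, f_center, f_right):
    denom = f_left - 2.0 * f_center + f_right
    if denom == 0.0:                 # flat triple: no curvature to fit
        return 0.0
    return 0.5 * (f_left - f_right) / denom
```

On noiseless parabolic data this recovers the offset exactly, which is why the ideal-case precision is far better than the roughly 1/10 pixel achieved on real SOFI frames.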
HUP in QFT and QM: virtual particles and

I was thinking that Feynman rules are derived first, then the gauge is fixed (which is supposed to be the key advantage of path integral formalism)

You have to be careful; there are the so-called [itex]R_\xi[/itex] gauges which are a generalization of the Lorentz gauge [itex]\partial_\mu A^\mu = 0[/itex]; in the [itex]R_\xi[/itex] gauges one adds a gauge term to the Lagrangian in the path integral

[tex]\delta\mathcal{L}_\xi = -\frac{(\partial_\mu A^\mu)^2}{2\xi}[/tex]

which is a Gaussian located at [itex]\partial_\mu A^\mu = 0[/itex] with width [itex]\xi[/itex] in the "gauge field space". Via this mechanism one has a "family of gauge fixings" labelled by the continuous parameter [itex]\xi[/itex]; for [itex]\xi\to 0[/itex] the gauge breaking term in the action reduces to a delta functional in the PI fixing the theory to the ordinary Lorentz gauge.

Another possibility is to introduce the axial gauge condition [itex]n_\mu A^\mu = 0[/itex] where the global direction [itex]n_\mu[/itex] remains as a free parameter in the theory on the level of Feynman diagrams.

It is true that via this mechanism one introduces a free parameter into the Feynman rules and that choosing a specific gauge (i.e. a specific value for [itex]\xi[/itex], [itex]n^\mu[/itex], ...) is done after deriving the Feynman rules. But this is not what I refer to. What I mean is that one first fixes a family of gauges, which may depend on a free parameter, and then derives the Feynman rules for this family.
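To spell out the [itex]\xi \to 0[/itex] limit mentioned above (my paraphrase of the standard argument, not part of the quoted post): the gauge term contributes the weight

[tex]\exp\left[-i\int d^4x\,\frac{(\partial_\mu A^\mu)^2}{2\xi}\right][/tex]

to the path integral, a Gaussian of width of order [itex]\sqrt{\xi}[/itex] around the surface [itex]\partial_\mu A^\mu = 0[/itex] in field space. As [itex]\xi \to 0[/itex] it therefore tends, up to normalization, to the delta functional [itex]\delta[\partial_\mu A^\mu][/itex], which is the sense in which the [itex]R_\xi[/itex] family recovers a sharp Lorentz-gauge condition in that limit.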
Fundamentals of College algebra (10th Ed) Why Rent from Knetbooks? Because Knetbooks knows college students. Our rental program is designed to save you time and money. Whether you need a textbook for a semester, quarter or even a summer session, we have an option for you. Simply select a rental period, enter your information and your book will be on its way! Top 5 reasons to order all your textbooks from Knetbooks: • We have the lowest prices on thousands of popular textbooks • Free shipping both ways on ALL orders • Most orders ship within 48 hours • Need your book longer than expected? Extending your rental is simple • Our customer support team is always here to help
ACT Section 5: Writing Test Name: Section Five: Writing # of Questions: 65 Study Guide Pages Last Update: Feb 23, 2014 I Found A Saviour in Certkiller's act testsI can't believe how lucky I was to pass my test on the first try. My friends told me it would take at least a couple of tries before I succeeded. Thanks to Certkiller's act test questions I was able to do it in one sitting. I can't thank them enough for enabling me to pass quickly and just with the use of their act sample tests. With just a little investment and work, the ACT practice math will allow anyone to pass their math test with ease. Thanks again and I'll be sure to let my friends know where they can go for help. Rebecca Simmons I Got An Amazing Score In The Math SectionI am glad the ACT exam is over and I didn't ruin it because of the math section. I actually did well in the math section. Thanks to the act test tutoring which I got from certkiller. The act practice test 2011has a large collection of question and answers, which provided me with ample opportunities to practice. Well, practice makes perfect and thanks to the practice act math. I somewhat perfected my math skills. I got an amazing score in all sections and now, I can go to college too thanks to act test practice questions. Kathy Thank You Certkiller for act full practice testYES thank you! I would surely have flunked the math section if it hadn't been for the a practice tests for act, which I got from certkiller. This resource is really helpful and it solved all my problems, which I was facing in the math section before. act math test practice is perfect for those who have problems with maths, like I have. I am really thankful to the practice act math material that certkiller provided me with. Thanks to it. I got a good score in the ACT exam. Ben Thanks Certkiller You Are The BestCertkiller is the best thing that ever happened to me! Math was my weakest subject but I also needed to pass it in the ACT exam. 
I didn't know what to do and then a friend who had just achieved a perfect score in ACT exam advised me to use the act math practice test by certkiller's act math practice tests. I made use of it and I did WELL in the math section too! Guys, you have to check out the the act practice test. I loved the official act practice test act math material. Caleb I Performed Well In The sample act test questions Too!I cannot believe it! Certkiller is just amazing, because I, of all people was able to get a good result in the maths section. This is all because of the act practice test answers, which I got from Certkiller website. The study guide for act test is especially designed to ensure maximum practice, and that helps prepare for all areas. I am really happy with the ACT practice maths test, because even I did well in the maths section,and got a good overall score in the exam. Emma act classes act exam act online prep act practice act practice test act practice tests act prep act prep classes act prep online act prep test act questions act sample questions act sample test act study guide act test prep act tests act tips and tricks practice act AACD ACLS ACT ASSET ASVAB BLS CAHSEE CBEST CCE-CCC CDL CFA CGFNS CHSPE COMPASS CPA CPAT CPR CPT CRCT CSCS CUNY DMV EMT FCAT FIRST AID FORKLIFT GACE GED GMAT GRADE 7 GRE HESI HOBET HSPT ISAT LEED LSAT MAT MCAT MTEL NCCT NCLEX NET NREMT NSCA OGT PCAT PERMIT TEST PMP PRAXIS PRAXIS 2 PSAT PTCB QRZ SAT SOL SSAT TAKS TCAP TCLEOSE TEAS THEA TOEFL USMLE Get 10% Discount on Your Purchase When You Sign Up for Email Instant Discount 10% OFF Enter Your Email Address to Receive Your 10% OFF Discount Code Plus... Our Exclusive Weekly Deals * We value your privacy. We will not rent or sell your email address Instant Discount 10% OFF Save 10% Today on all IT exams. Instant Download. Use Discount Code:
{"url":"http://www.certkiller.com/admissiontest/act-tests.htm","timestamp":"2014-04-19T01:49:08Z","content_type":null,"content_length":"44095","record_id":"<urn:uuid:1fb09fe0-c867-4e95-b6ec-f8e96682c232>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00168-ip-10-147-4-33.ec2.internal.warc.gz"}
Variable name | Definition and construction | Source
GDP growth rate | Percentage growth rate of real GDP per capita in constant prices. Reference year: 1996, Laspeyres index. | Penn World Tables 6.3
GDP volatility | Standard deviation of real GDP growth rate. Yearly series. | Penn World Tables 6.3
Consumption volatility | Standard deviation of real consumption growth rate (real GDP times consumption share of GDP). Yearly series. | Penn World Tables 6.3
Investment volatility | Standard deviation of real investment growth rate (real GDP times investment share of GDP). Yearly series. | Penn World Tables 6.3
Gov. consumption volatility | Standard deviation of real public consumption growth rate (real GDP times government cons. share of GDP). Yearly series. | Penn World Tables 6.3
Investment share of GDP | Log level of the investment share of real GDP. Yearly series. | Penn World Tables 6.3
Initial GDP | Log level of GDP of the 1st year of the window over which the corresponding volatility is computed. Yearly series. | Penn World Tables 6.3
Population growth rate | Percentage growth rate of population. Yearly series. | Penn World Tables 6.3
Education | Logarithm of the percentage of secondary schooling attained in population aged 25 years and over. | Barro-Lee dataset (2010)
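As a sketch of how the constructed series in the table are computed (illustrative only; the function names are mine and the toy numbers are not Penn World Tables data):

```python
import math
import statistics

def growth_rates(series):
    # percentage growth rate of a yearly series
    return [100.0 * (b - a) / a for a, b in zip(series, series[1:])]

def volatility(series):
    # standard deviation of the yearly growth rates, as in the table
    return statistics.stdev(growth_rates(series))

def log_level(x):
    # "log level" entries (initial GDP, investment share, education)
    return math.log(x)
```

A series growing at a constant rate has (essentially) zero volatility by this construction, so the volatility variables capture fluctuations around trend growth rather than the growth level itself.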
Oak Brook Calculus Tutor

Find an Oak Brook Calculus Tutor

I earned High Honors in Molecular Biology and Biochemistry as well as an Ancient History (Classics) degree from Dartmouth College. I then went on to earn a Ph.D. in Biochemistry and Structural Biology from Cornell University's Medical College. As an undergraduate, I spent a semester studying Archeology and History in Greece.
41 Subjects: including calculus, chemistry, physics, English

...In addition I have taken 12 hours of graduate level mathematical computer science, including combinatorics, advanced combinatorics, and graph theory. I have taken two semesters of undergraduate differential equations as well as 1 semester of ODE's (ordinary differential equations) and elementary ...
24 Subjects: including calculus, physics, geometry, GRE

...Since then I have worked as a TA for "Finite Mathematics for Business" which had a major component of counting (combinations, permutations) problems, and linear programming, both of which are common in discrete math. Other topics in which I am well versed are formulation of proofs, which is a ma...
22 Subjects: including calculus, geometry, statistics, precalculus

...I have taught Pre-algebra, Elementary Algebra, and Intermediate Algebra at various colleges. These courses correspond with material covered in Algebra 1. I have also written a high school textbook for Algebra 1.
25 Subjects: including calculus, writing, GRE, geometry

...This way I create a desire to learn math rather than have to learn it. For example: I would teach the basic Multiplication facts by memorizing them in fun musical rhymes. I have taught my own son who is just starting his 2nd grade.
11 Subjects: including calculus, geometry, algebra 2, trigonometry
Got Homework? Connect with other students for help. It's a free community. • across MIT Grad Student Online now • laura* Helped 1,000 students Online now • Hero College Math Guru Online now Here's the question you clicked on: find the slope and y-intercept of each line: 3x-4y=12 • one year ago • one year ago Your question is ready. Sign up for free to start getting answers. is replying to Can someone tell me what button the professor is hitting... • Teamwork 19 Teammate • Problem Solving 19 Hero • Engagement 19 Mad Hatter • You have blocked this person. • ✔ You're a fan Checking fan status... Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy. This is the testimonial you wrote. You haven't written a testimonial for Owlfred.
{"url":"http://openstudy.com/updates/5060e12ee4b02e13941114ea","timestamp":"2014-04-19T10:27:26Z","content_type":null,"content_length":"60798","record_id":"<urn:uuid:91f8ee83-5628-46e1-9bd0-6d413ecc688d>","cc-path":"CC-MAIN-2014-15/segments/1398223206770.7/warc/CC-MAIN-20140423032006-00091-ip-10-147-4-33.ec2.internal.warc.gz"}
The following text was written a few years ago, but much of it never got published. So, I thought that this might be a good opportunity to make it available, since what it says is still true today.

Since a phylogenetic tree is interpreted in terms of the monophyletic groups that it hypothesizes, it is important to quantitatively assess the robustness of all of these groups (i.e. the degree of support for each branch in the tree) — is the support for a particular group any better than would be expected from a random data set? This issue of clade robustness is the same as assessing branch support on the tree, since each branch represents a clade. Many different techniques have been developed, including:
1. analytical procedures, such as interior-branch tests (Nei et al. 1985; Sneath 1986), likelihood-ratio tests (Felsenstein 1988; Huelsenbeck et al. 1996b), and clade significance (Lee 2000);
2. resampling procedures, such as the bootstrap (Felsenstein 1985), the jackknife (Lanyon 1985), topology-dependent permutation (Faith 1991), and clade credibility or posterior probability (Larget and Simon 1999); and
3. non-statistical procedures, such as the decay index (Bremer 1988), clade stability (Davis 1993), and spectral signals (Hendy and Penny 1993).
Of these, far and away the most popular and widely used method has been the bootstrap technique (Holmes 2003; Soltis and Soltis 2003).

The bootstrap

This method was first introduced by Efron (1979) as an alternative method to jackknifing for producing standard errors on estimates of central location other than the mean (e.g. the median), but it has since been expanded to cover probabilistic confidence intervals as well (Efron and Tibshirani 1993; Davison and Hinkley 1997). It was introduced into phylogenetic studies by Penny et al.
(1982) and then formalized by Felsenstein (1985), who suggested that it could be implemented by holding the taxa constant and resampling the characters randomly with replacement, the tree-building analysis then being applied to each of the bootstrap resamples. Bootstrapping is a Monte Carlo procedure, in that it generates "pseudo" data sets from the original data, and uses these new data sets for its inferences. That is, it tries to derive the population inferences (i.e. the "true" answer) from repeated generation of new samples, each sample being constrained by the characteristics of the original data sample. It thus relies on an explicit analogy between the sample and the appropriate population: that sampling from the sample is the same as sampling from the population. Clearly, the strongest requirement for bootstrapping to work is that the sample be a reasonable representation of the population.

Bootstrap confidence intervals are only ever approximate, especially for complex data structures, as they are a fundamentally more ambitious measure of accuracy than is a simple standard error (SE). For example, the usual formula for calculating a confidence interval (CI) when the population frequency distribution is assumed to be normal is: CI = t * SE, where t is the Student t-value associated with the particular sample size and confidence percentage required. However, the main use of bootstrapping is in situations where the population frequency distribution is either indeterminate or is difficult to obtain empirically, and so this simple formula cannot be applied. Getting from the standard error to a confidence interval is then not straightforward. As a result, there are actually several quite distinct procedures for performing bootstrapping (Carpenter and Bithell 2000), with varying degrees of expected success.

Types of bootstrap

The original technique is called the percentile bootstrap.
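In the phylogenetic setting, the percentile bootstrap is exactly Felsenstein's scheme: hold the rows (taxa) of the data matrix fixed, draw the columns (characters) with replacement, and report the proportion of resamples in which the group of interest is recovered. The sketch below illustrates only the mechanics: the data matrix is invented for the example, and toy_infer_groups is a deliberately trivial stand-in for a real tree-building analysis.

```python
import random

def bootstrap_characters(matrix, rng):
    """Resample the character columns of a taxa-by-characters matrix
    with replacement, holding the taxa (rows) constant."""
    n_chars = len(matrix[0])
    cols = [rng.randrange(n_chars) for _ in range(n_chars)]
    return [[row[c] for c in cols] for row in matrix]

def clade_support(matrix, clade, infer_groups, n_resamples=1000, seed=1):
    """Percentile bootstrap proportion: the fraction of resampled
    matrices in which the inference procedure recovers the clade."""
    rng = random.Random(seed)
    hits = sum(
        clade in infer_groups(bootstrap_characters(matrix, rng))
        for _ in range(n_resamples)
    )
    return hits / n_resamples

def toy_infer_groups(matrix):
    """Deliberately trivial stand-in for tree building: pair up any two
    taxa that share the same state at a majority of characters."""
    groups = []
    n_taxa, n_chars = len(matrix), len(matrix[0])
    for i in range(n_taxa):
        for j in range(i + 1, n_taxa):
            shared = sum(a == b for a, b in zip(matrix[i], matrix[j]))
            if shared > n_chars / 2:
                groups.append(frozenset({i, j}))
    return groups

# Four taxa scored for ten binary characters; taxa 0 and 1 agree at
# nine of the ten characters (data invented for the illustration).
data = [
    [0, 0, 0, 0, 0, 0, 0, 1, 1, 1],
    [0, 0, 0, 0, 0, 0, 0, 0, 1, 1],
    [1, 1, 1, 1, 1, 0, 0, 0, 0, 1],
    [1, 1, 1, 1, 1, 1, 1, 0, 0, 0],
]
support_01 = clade_support(data, frozenset({0, 1}), toy_infer_groups)
```

With these data, taxa 0 and 1 agree at nine of the ten characters, so almost every resample recovers the pair and the reported support is close to 1.0.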
It is based on the principle of using the minimum number of ad hoc assumptions, and so it merely counts the percentage of bootstrap resamples that meet the specified criteria. For example, to estimate the standard error of a median, the median can be calculated for each bootstrap resample and then the standard deviation of the resulting frequency distribution will be the estimated standard error of the original median. The method is thus rather simplistic, and is often referred to as the naïve bootstrap, because it assumes no knowledge of how to calculate population estimates. It is a widespread method, as it can be applied even when the other methods cannot. However, it is known to have certain problems associated with the estimates produced, particularly for confidence intervals, such as bias and skewness (especially when the parent frequency distribution is not symmetrical). These were pointed out right from the start (Efron 1979), and efforts have subsequently been made to deal with them. Nevertheless, this is the form of bootstrap introduced by Felsenstein (1985), and it is the one used by most phylogeny computer programs. It is therefore the one that will be discussed in more detail below.

These known problems with the naïve bootstrap can be overcome by using bias-corrected (BC) bootstrap estimates — that is, the bias is estimated and removed from the calculation of the confidence interval. Possible dependence of the standard error on the parameter being estimated, which creates skewness, can be dealt with by using bias-corrected and accelerated (BCa) bootstrap estimates, so that the bias and skewness are both estimated and removed from the calculation of the confidence interval. The BCa method is the one usually recommended for use (Carpenter and Bithell 2000), because it corrects for both bias and skewness.
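The BCa recipe just described can be sketched with the standard-library NormalDist, shown here in its common single-jackknife form. This is a minimal illustration only, on an invented sample; a production implementation would guard degenerate cases (such as a zero-variance jackknife) more carefully. The bias correction z0 comes from the proportion of bootstrap replicates falling below the point estimate, and the acceleration a comes from a jackknife over the original sample; the two are then used to adjust which percentiles of the bootstrap distribution form the interval.

```python
import random
from statistics import NormalDist, mean

def bca_interval(data, stat, alpha=0.05, n_boot=2000, seed=7):
    """Bias-corrected and accelerated (BCa) bootstrap confidence
    interval for stat(data), using the percentile-adjustment form."""
    rng = random.Random(seed)
    nd = NormalDist()
    theta_hat = stat(data)
    n = len(data)

    boots = sorted(
        stat([data[rng.randrange(n)] for _ in range(n)])
        for _ in range(n_boot)
    )

    # Bias correction, guarded away from 0 and 1 so inv_cdf is finite.
    prop = sum(b < theta_hat for b in boots) / n_boot
    prop = min(max(prop, 1.0 / n_boot), 1.0 - 1.0 / n_boot)
    z0 = nd.inv_cdf(prop)

    # Acceleration from jackknife (leave-one-out) estimates.
    jack = [stat(data[:i] + data[i + 1:]) for i in range(n)]
    jbar = mean(jack)
    num = sum((jbar - j) ** 3 for j in jack)
    den = 6.0 * sum((jbar - j) ** 2 for j in jack) ** 1.5
    a = num / den if den else 0.0

    def adj(q):
        """Shift a nominal quantile q by the bias and acceleration."""
        z = nd.inv_cdf(q)
        return nd.cdf(z0 + (z0 + z) / (1.0 - a * (z0 + z)))

    lo = boots[int(adj(alpha / 2) * (n_boot - 1))]
    hi = boots[int(adj(1 - alpha / 2) * (n_boot - 1))]
    return lo, hi

sample = [2.1, 2.4, 2.7, 3.0, 3.1, 3.4, 3.8, 4.2, 4.9, 6.5]
lo, hi = bca_interval(sample, mean)
```

The jackknife step is what makes BCa slower than the naïve percentile method, and the whole procedure has to be repeated for every quantity of interest.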
This method is much slower to calculate than the simple percentile bootstrap, because it requires an extra parameter to be estimated for each of the bias and skewness corrections, and the latter correction is actually estimated by performing a separate jackknife analysis on each bootstrap resample (which means that the analysis can take 100 times as long as a naïve analysis). There have been several attempts to apply this form of correction methodology to bootstrapping in a phylogenetic context (Rodrigo 1993; Zharkikh and Li 1995; Efron et al. 1996; Shimodaira 2002), but while these can be successful at correcting bias and skewness (Sanderson and Wojciechowski 2000), they have not caught on, possibly because of the time factor.

Alternatively, we can decide not to be naïve when calculating confidence intervals, and therefore to calculate them in the traditional manner, using the standard error and the t-distribution. However, we then need to overcome any non-normal distribution problems of these two estimates by estimating both of them using bootstrapping. That is, bootstrapped-t confidence intervals are derived by calculating both the standard error and the t-value using bootstrapping, and then calculating the confidence interval as ±t * SE. To many people, this is the most natural way to calculate confidence intervals, since it matches the usual parametric procedure, and thus it is frequently recommended (Carpenter and Bithell 2000). Once again, this method is much slower to calculate than the percentile bootstrap, because the t-value is actually estimated by performing a separate bootstrap analysis on each bootstrap resample (which means that the analysis can take 100 times as long as a naïve analysis). This methodology seems not to have yet been suggested in a phylogenetic context, and in any case the time factor may be restrictive.

It is also possible to calculate test-inversion confidence intervals.
This idea is based on the reciprocal relationship of statistical tests and confidence intervals, where (for example) non-overlapping 95% confidence intervals indicate statistically significant patterns at p<0.05 and vice versa. Thus, if we work out the situation where the pattern has a probability of p=0.05 of occurring by chance, then this defines the 95% confidence limit of the pattern. Clearly, this can be a complex process, especially for two-sided tests (which double the required number of calculations), as it can only be done by iteratively modifying the pattern and re-calculating the probability until the solution is reached. Once again, no-one yet seems to have suggested this methodology in a phylogenetic context, which is not unexpected given the general problems in deciding how to test branches statistically. The above methods all count as non-parametric bootstrap methods. More recently, parametric bootstrapping methods have also been developed, which make the more restrictive assumption that a parametric model can be applied to the data (e.g. that the standard deviation of the parameter can be reliably estimated). In parametric bootstrapping we generate simulated datasets based on the assumed frequency distribution of the data, rather than by resampling from the data set itself. That is, instead of sampling from the sample, we sample from the assumed theoretical distribution to generate the set of bootstrap samples. We can then apply the percentile, BCa or bootstrap-t methods, described above, in the usual way. Clearly this method assumes that we know the appropriate frequency distribution; and the method will only be appropriate if this assumption is true, but not otherwise. However, if the assumption is correct, then this can be the most powerful method (Huelsenbeck et al. 1996a; Newton 1996) because it is not dependent on the representativeness of the data sample. 
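A minimal sketch of the parametric idea, assuming (purely for illustration) that a normal distribution is the appropriate model for an invented sample: the fitted model, not the data themselves, is resampled. For branch support no such simple parametric summary exists, which is why the approach is hard to apply there.

```python
import random
from statistics import mean, stdev

def parametric_bootstrap_ci(data, stat, alpha=0.05, n_boot=2000, seed=11):
    """Parametric bootstrap: instead of resampling the data, fit an
    assumed model (here a normal distribution -- an assumption that
    must be justified for real data) and simulate new datasets from
    the fitted model, then take percentile limits."""
    rng = random.Random(seed)
    mu, sigma, n = mean(data), stdev(data), len(data)
    reps = sorted(
        stat([rng.gauss(mu, sigma) for _ in range(n)])
        for _ in range(n_boot)
    )
    lo_i = int((alpha / 2) * (n_boot - 1))
    hi_i = int((1 - alpha / 2) * (n_boot - 1))
    return reps[lo_i], reps[hi_i]

sample = [9.8, 10.1, 10.4, 9.6, 10.0, 10.3, 9.9, 10.2]
lo, hi = parametric_bootstrap_ci(sample, mean)
```

If the assumed model is correct, the simulated datasets are draws from the population itself rather than from the observed sample, which is the source of the extra power noted above.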
The method has been introduced into phylogenetics in several contexts (Goldman 1993; Adell and Dopazo 1994; Huelsenbeck et al. 1996a), but the appropriate frequency distribution for branch support is not obvious (i.e. a phylogeny is a complex structure and cannot be represented by a single number but rather requires a model of sequence evolution and a model tree) and so it is not used for this purpose.

Issues with the bootstrap

Thus, for several reasons, all of the best bootstrapping methods are not likely to be available when assessing the robustness of a phylogenetic tree, and we are left with the naïve percentile bootstrap, which can be expected a priori to provide biased and skewed estimates of confidence intervals (because the frequency distribution associated with tree branches will not be symmetrical). Sadly, these problems have been repeatedly confirmed for the assessment of branch support in phylogenetic tree-building, both theoretically (Zharkikh and Li 1992a, 1992b; Felsenstein and Kishino 1993; Li and Zharkikh 1994; Sitnikova et al. 1995; Berry and Gascuel 1996; Efron et al. 1996; Huelsenbeck et al. 1996a; Newton 1996; Sanderson and Wojciechowski 2000; Suzuki et al. 2002; Alfaro et al. 2003; Erixon et al. 2003; Galtier 2004; Huelsenbeck and Rannala 2004) and empirically (Sanderson 1989; Hillis and Bull 1993; Buckley et al. 2001; Buckley and Cunningham 2002; Wilcox et al. 2002; Taylor and Piel 2004).

An example of the relationship between naïve bootstrap probabilities and the true probability of a false positive result, showing that percentile bootstrap indices >75% tend to be underestimates of the amount of support while they are overestimates below this level. The graph is based upon 1000 bootstrap resamples of 100 simulated characters for a clade of three taxa plus outgroup (based on data presented by Zharkikh and Li 1992a).
The true probability represents the amount of character support for the clade in the simulated data, while the bootstrap probability is the proportion of resamples that included the clade.

These studies have demonstrated that the probability of bootstrap resampling supporting the true tree may be either under- or overestimated, depending on the particular situation. For example, bootstrap values >75% tend to be underestimates of the amount of support, while they may be overestimates below this level, as shown in the first graph (above). That is, when the branch support is strong (i.e. the clade is part of the true tree) there will be an underestimation, and when the support is weak (i.e. the clade is not part of the true tree) there will be an overestimation. This situation has been reported time and time again, with various theoretical explanations (e.g. Felsenstein and Kishino 1993; Efron et al. 1996; Newton 1996), although there are dissenting voices (e.g. Taylor and Piel 2004), as would be expected for a complex situation. Unfortunately, practitioners seem to ignore this fact, and to assume incorrectly that bootstrap values are conservative.

Just as importantly, the theoretical studies show that the pattern of over- and underestimation depends on (i) the shape of the tree and the branch lengths, (ii) the number of taxa, (iii) the number of characters, (iv) the evolutionary model used, and (v) the number of bootstrap resamples. This was first reported by Zharkikh and Li (1992a), and has been reconfirmed since then. For example, with few characters the bootstrap index tends to overestimate the support for a clade and to underestimate it for more characters. This is particularly true if the number of phylogenetically informative characters is increased or the number of non-independent characters is increased; and the index becomes progressively more conservative (i.e. lower values) as the number of taxa is increased.
Moreover, these patterns of under- and overestimation are increased with an increasing number of bootstrap replications, as shown in the next graph — this is called "being wrong, with confidence".

An example of the relationship between the true clade probability and the observed non-parametric bootstrap proportion for two simulated data sets with different numbers of characters (as shown). The lines are based on data presented by Zharkikh & Li (1995) for 1000 bootstrap resamples of a clade of three taxa plus outgroup.

The following pair of graphs shows the effect of varying the evolutionary model used to generate the data, where under-specification of the analysis model leads to a general over-estimate of the true probability (cross-over at p=0.8, as shown in the first graph of the pair), while matching the generating and analysis models leads to a general under-estimation (cross-over at p=0.3, as shown in the second graph of the pair).

An example of the relationship between the true tree probability and the difference between the observed percentile bootstrap proportion and the true probability for two simulated data sets. The label in the bottom corner shows the substitution model used to simulate the data, then the model assumed in the bootstrap analysis (the sequence length is 100 nucleotides); JC69 = Jukes-Cantor, GTRG = general time-reversible + gamma-distributed among-site rate variation. The points are based on data presented by Huelsenbeck & Rannala (2004).

These are serious issues, which seem to be often ignored by practitioners. We can't just assume that the "true" support value is larger than our observed bootstrap value. In particular, this means that bootstrap values are not directly comparable between trees, even for the same taxa, and thus there can be no "agreed" level of bootstrap support that can be considered to be "statistically significant".
A bootstrap value of 90% on a branch on one tree may actually represent less support than a bootstrap value of 85% on another tree, depending on the characteristics of the dataset concerned and the bootstrapping procedure used (although within a single tree the values should be comparable). This complex situation means that we have to consider carefully how best to interpret bootstrap values in a phylogenetic context (Sanderson 1995). The bootstrap proportion (i.e. the proportion of resampled trees containing the branch/clade of interest) has variously been interpreted as (Berry and Gascuel 1996):
1. a measure of reliability, telling us what would be expected to happen if we repeated our experiment;
2. a measure of accuracy, telling us about the probability of our experimental result being true; and
3. a measure of confidence, interpreted as a conditional probability similar to those in standard statistical hypothesis tests (i.e. measuring Type I errors or false positives).
The bootstrap was originally designed for purpose (1), and all of the problems identified above relate to trying to use it for purposes (2) and (3). The values derived from the naïve bootstrap need correcting for purposes (2) and (3), and the degree of correction depends on the particular data set being examined (Efron et al. 1996; Goloboff et al. 2003).

The issue of support values depending on the number of bootstrap replicates is also of interest. It is usually recommended that at least 1,000–2,000 bootstrap resamples are taken for estimating confidence intervals, and this generality has been applied to phylogenetic trees (Hedges 1992). However, it is important to recognize that these suggestions relate to the precision of the confidence estimates, not to their accuracy. Accuracy refers to how close the estimates are to the true value (i.e. correctness), while precision refers to how variable the estimates are (i.e. repeatability).
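If the reported bootstrap proportion is treated as a binomial estimate, its precision for a given number of replicates follows directly from the usual normal approximation. A short stdlib sketch (assuming a 95% confidence level, i.e. z = 1.96):

```python
import math

def bootstrap_precision(p, n_replicates, z=1.96):
    """Half-width of the normal-approximation confidence band around an
    estimated bootstrap proportion p: z * sqrt(p * (1 - p) / B)."""
    return z * math.sqrt(p * (1.0 - p) / n_replicates)

def replicates_needed(p, precision, z=1.96):
    """Invert the formula: replicates needed for a target half-width."""
    return math.ceil(p * (1.0 - p) * (z / precision) ** 2)

# For a bootstrap value of 95%:
half_width_100 = bootstrap_precision(0.95, 100)    # ~0.043, i.e. about 4.3%
half_width_2000 = bootstrap_precision(0.95, 2000)  # ~0.010, i.e. about 1%
```

For a bootstrap value of 95%, 100 replicates give a half-width of about 4.3% and roughly 1,800-2,000 replicates are needed to reach 1%, which matches the replicate numbers discussed here.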
Accuracy depends on a complex set of characteristics, many of which have nothing to do with bootstrap replication. Precision, on the other hand, is entirely to do with the number of bootstrap replicates and the expected accuracy of the estimates. As shown in the next graph, 100 replicates at a conventional level of accuracy produces estimates that are expected to be within ±4% of the "true" values, while 2,000 replicates produces estimates within ±1%. This needs to be borne in mind when deciding whether to call a particular value "significant support" or not.

The number of bootstrap replicates needed to achieve a specified amount of precision, given statistical testing at two different levels of probability. For example (as shown by the dotted line), 100 bootstrap replicates means that, if the bootstrap value is accurate at the 95% confidence level, then the estimated bootstrap percentage will be precise to ±4.3%. In order to get ±1% precision, nearly 2,000 bootstrap replicates are needed.

There have also been attempts to overcome some of the practical limitations of bootstrapping for large data sets by adopting heuristic procedures, including resampling estimated likelihoods for maximum-likelihood analyses (Waddell et al. 2002) and reduced tree-search effort for the bootstrap replicates. However, approaches using reduced tree-search effort produce even more conservative estimates of branch support, and the magnitude of the effect increases with decreasing bootstrap values (DeBry and Olmstead 2000; Mort et al. 2000; Sanderson and Wojciechowski 2000).

References

Adell J.C., Dopazo J. 1994. Monte Carlo simulation in phylogenies: an application to test the constancy of evolutionary rates. J. Mol. Evol. 38, 305-309. Alfaro M.E., Zoller S., Lutzoni F. 2003. Bayes or bootstrap? A simulation study comparing the performance of bayesian markov chain monte carlo sampling and bootstrapping in assessing phylogenetic confidence. Mol. Biol. Evol. 20, 255-266. Berry V., Gascuel O. 1996.
On the interpretation of bootstrap trees: appropriate threshold of clade selection and induced gain. Mol. Biol. Evol. 13, 999-1011. Bremer K. 1988. The limits of amino acid sequence data in angiosperm phylogenetic reconstruction. Evolution 42, 795-803. Buckley T.R., Cunningham C.W. 2002. The effects of nucleotide substitution model assumptions on estimates of nonparametric bootstrap support. Mol. Biol. Evol. 19, 394-405. Buckley T.R., Simon C., Chambers G.K. 2001. Exploring among-site rate variation models in a maximum likelihood framework using empirical data: effects of model assumptions on estimates of topology, branch lengths and bootstrap support. Syst. Biol. 50, 67-86. Carpenter J., Bithell J. 2000. Bootstrap confidence intervals: when, which, what? A practical guide for medical statisticians. Stat. Med. 19, 1141-1164. Davis J.I. 1993. Character removal as a means for assessing the stability of clades. Cladistics 9, 201-210. Davison A.C., Hinkley D.V. 1997. Bootstrap Methods and Their Applications. Cambridge University Press, Cambridge. DeBry R.W., Olmstead R.G. 2000. A simulation study of reduced tree-search effort in bootstrap resampling analysis. Syst. Biol. 49, 171-179. Efron B. 1979. Bootstrap methods: another look at the jackknife. Ann. Stat. 7, 1-26. Efron B., Halloran E., Holmes S. 1996. Bootstrap confidence levels for phylogenetic trees. Proc. Nat. Acad. Sci. U.S.A. 93, 7085-7090. Efron B., Tibshirani R.J. 1993. An Introduction to the Bootstrap. Chapman & Hall, London. Erixon P., Svennblad B., Britton T., Oxelman B. 2003. Reliability of bayesian probabilities and bootstrap frequencies in phylogenetics. Syst. Biol. 52, 665-673. Faith D.P. 1991. Cladistic permutation tests for monophyly and nonmonophyly. Syst. Zool. 40, 366-375. Felsenstein J. 1985. Confidence limits on phylogenies: an approach using the bootstrap. Evolution 39, 783-791. Felsenstein J. 1988. Phylogenies from molecular sequences: inference and reliability. Annu. Rev. Genet. 22, 521-565.
Felsenstein J., Kishino H. 1993. Is there something wrong with the bootstrap on phylogenies? A reply to Hillis and Bull. Syst. Biol. 42, 193-200. Galtier N. 2004. Sampling properties of the bootstrap support in molecular phylogeny: influence of nonindependence among sites. Syst. Biol. 53, 38-46. Goldman N. 1993. Statistical tests of models of DNA substitution. J. Mol. Evol. 36, 182-198. Goloboff P.A., Farris J.S., Källersjö M., Oxelman B., Ramírez M.J., Szumik C.A. 2003. Improvements to resampling measures of group support. Cladistics 19, 324-332. Hedges S.B. 1992. The number of replications needed for accurate estimation of the bootstrap P value in phylogenetic studies. Mol. Biol. Evol. 9, 366-369. Hendy M.D., Penny D. 1993. Spectral analysis of phylogenetic data. J. Classific. 10, 5-24. Hillis D.M., Bull J.J. 1993. An empirical test of bootstrapping as a method for assessing confidence in phylogenetic analysis. Syst. Biol. 42, 182-192. Holmes S. 2003. Bootstrapping phylogenetic trees: theory and methods. Statist. Sci. 18, 241-255. Huelsenbeck J.P., Hillis D.M., Jones R. 1996a. Parametric bootstrapping in molecular phylogenetics: applications and performance. In: Ferraris, J.D., Palumbi, S.R. (Eds), Molecular Huelsenbeck J.P., Hillis D.M., Nielsen R. 1996b. A likelihood ratio test of monophyly. Syst. Biol. 45, 546-558. Huelsenbeck J.P., Rannala B. 2004. Frequentist properties of bayesian posterior probabilities of phylogenetic trees under simple and complex substitution models. Syst. Biol. 53, 904-913. Lanyon S.M. 1985. Detecting internal inconsistencies in distance data. Syst. Zool. 34, 397-403. Larget B., Simon D.L. 1999. Markov chain monte carlo algorithms for the bayesian analysis of phylogenetic trees. Mol. Biol. Evol. 16, 750-759. Lee M.S.Y. 2000. Tree robustness and clade significance. Syst. Biol. 49, 829-836. Li W.-H., Zharkikh A. 1994. What is the bootstrap technique? Syst. Biol. 43, 424-430. Mort M.E., Soltis P.S., Soltis D.E., Mabry M.L. 2000.
Comparison of three methods for estimating internal support on phylogenetic trees. Syst. Biol. 49, 160-171. Nei M., Stephens J.C., Saitou N. 1985. Methods for computing the standard errors of branching points in an evolutionary tree and their application to molecular data from humans and apes. Mol. Biol. Evol. 2, 66-85. Newton M.A. 1996. Bootstrapping phylogenies: large deviations and dispersion effects. Biometrika 83, 315-328. Penny D., Foulds L.R., Hendy M.D. 1982. Testing the theory of evolution by comparing phylogenetic trees constructed from five different protein sequences. Nature 297, 197-200. Rodrigo A.G. 1993. Calibrating the bootstrap test of monophyly. Int. J. Parasitol. 23, 507-514. Sanderson M.J. 1989. Confidence limits on phylogenies: the bootstrap revisited. Cladistics 5, 113-129. Sanderson M.J. 1995. Objections to bootstrapping phylogenies: a critique. Syst. Biol. 44, 299-320. Sanderson M.J., Wojciechowski M.F. 2000. Improved bootstrap confidence limits in large-scale phylogenies, with an example from Neo-Astragalus (Leguminosae). Syst. Biol. 49, 671-685. Shimodaira H. 2002. An approximately unbiased test of phylogenetic tree selection. Syst. Biol. 51, 492-508. Sitnikova T., Rzhetsky A., Nei M. 1995. Interior-branch and bootstrap tests of phylogenetic trees. Mol. Biol. Evol. 12, 319-333. Sneath P.H.A. 1986. Estimating uncertainty in evolutionary trees from Manhattan-distance triads. Syst. Zool. 35, 470-488. Soltis P.S., Soltis D.E. 2003. Applying the bootstrap in phylogeny reconstruction. Statist. Sci. 18, 256-267. Suzuki Y., Glazko G.V., Nei M. 2002. Overcredibility of molecular phylogenies obtained by bayesian phylogenetics. Proc. Nat. Acad. Sci. U.S.A. 99, 16138-16143. Taylor D.J., Piel W.H. 2004. An assessment of accuracy, error, and conflict with support values from genome-scale phylogenetic data. Mol. Biol. Evol. 21, 1534-1537. Waddell P.J., Kishino H., Ota R. 2002.
Very fast algorithms for evaluating the stability of ML and Bayesian phylogenetic trees from sequence data. Genome Informatics 13, 82-92. Wilcox T.P., Zwickl D., Heath T.A., Hillis D.M. 2002. Phylogenetic relationships of the dwarf boas and a comparison of bayesian and bootstrap measures of phylogenetic support. Mol. Phylogenet. Evol. 25, 361-371. Zharkikh A., Li W.-H. 1992a. Statistical properties of bootstrap estimation of phylogenetic variability from nucleotide sequences. I. Four taxa with a molecular clock. Mol. Biol. Evol. 9, 1119-1147. Zharkikh A., Li W.-H. 1992b. Statistical properties of bootstrap estimation of phylogenetic variability from nucleotide sequences. II. Four taxa without a molecular clock. J. Mol. Evol. 35, 356-366. Zharkikh A., Li W.-H. 1995. Estimation of confidence in phylogeny: the complete-and-partial bootstrap technique. Mol. Phylogenet. Evol. 4, 44-63.

In Australia at the time I was born, the most popular first name for boys was "David" and the second most popular was "Andrew". Not unexpectedly, the most popular middle name was "Andrew" and number two was "David". It then comes as no surprise to you that I ended up with this pair of given names. Names come and go in popularity (these are called fads), and if your parents have no imagination then you will grow up knowing that you are not unique, because half the people in your classroom will have the same name as yourself. You may even end up being numbered (David #1, David #2, etc.). What's worse, if you are not careful then you may end up doing the same thing to your own children. Indeed, having a common name has only one known advantage — no matter where you go in the world everyone can recognize it, although they may not always spell it and pronounce it the way you expect (David, Davide, Dawit ...). Therefore, you will have no problems making restaurant bookings wherever you happen to be (see Leonard S. Bernstein. 1981. Never Make a Reservation in Your Own Name. Rand McNally).
These days in Australia, "David" struggles to be in the top 100 in popularity for boys. However, currently it appears to be in the top 10 in places like Armenia, Austria, Hungary, Italy, Spain and Israel (in 2012 or 2013), as well as the top 20 in Poland and Portugal. This information comes from The Baby Name Wizard. This site has current lists for many countries (Popular Names From Around the World), but has historical data only for the USA. So, let's look at the U.S. data in more detail. As for Australia, the peak popularity in the USA was from 1955-1965, as shown in the first graph. Note that the peak is truncated from 1950-1960. The site's Name Mapper web page has annual data for each state from 1960-2009, which is precisely 50 years. These data show the ranking of names by popularity within each state. The average rank for the name "David" across the 50 states is shown in the next graph. "David" was one of the top 10 names for boys born from 1936-1992, the #1 name in 1960, and it remains inside the top 20 to this day. We can also look at the data for each state individually, as shown in the next graph, where darker shading represents greater popularity of the name. From the peak in the 1960s there was a steady decrease in almost every state until 1995, after which the popularity has been more erratic. For example, in 1960 "David" was the #1 boys name in 28 of the 50 states (and in the top 5 in every state), but by 1968 it was not #1 anywhere. The last time it was ranked #1 was in Utah in 1970, which was also the last year in which it was in the top 6 in every state. Note that the states are grouped and colored geographically / culturally. Only in California and Texas has the name stayed in the top 10 over the past 50 years. In the other states it has stayed in the top 50 or so, except for North Dakota, where it is currently struggling to stay in the top 100. In Nevada and Alaska it has even made a bit of a comeback in the past 10 years.
We can look at the relationship between the states using a phylogenetic network. The next graph is a NeighborNet (based on the Manhattan distance) of the 1960-2009 data for the popularity ranking of "David" as a boy's name. States near each other in the network have a similar naming popularity, while states further apart are progressively more different from each other. The network shows a simple trend of increasing average popularity of "David" from the top-left to the bottom-right. I have also colored the states using the same color scheme as for the previous graph (i.e. geographically / culturally). Note that the orange, red, yellow and blue states are fairly neatly grouped, indicating that their alleged geographical / cultural similarity extends to the popularity of given names ("David" has continued to be popular in all of these states). The purple, brown and green states are not grouped very much, indicating much more diversity in the popularity of "David". For example, "David" has continued to be popular in New York and New Jersey but not in Maine, New Hampshire or Vermont. The extreme disinterest of North Dakotans in the name is very clear. The fall of "David" is not as bad as that of "James" and "John", which were in the top 3 most popular names in the USA all the way from the 1880s to the 1950s, but which are now in 17th and 27th place, respectively (see the timeline graph in the Name Voyager). I am not sure what has led to the eclipse of these names, other than the whims of faddishness. For example, in Britain and Ireland the name "Harry" has shot to the top in recent years (guess why!), while it still languishes near #700 in the USA. Otherwise, "Noah" and "Liam" seem to have the most widespread popularity for boys in the western world at the moment. Footnote: I actually got the name Andrew because it is my father's middle name, and his father's before him, and his father's first name.
Sorting Algorithm Breaks Giga-Sort Barrier, With GPUs

Posted from the quick-like-double-time dept.

An anonymous reader writes "Researchers at the University of Virginia have recently open sourced an algorithm capable of sorting at a rate of one billion (integer) keys per second using a GPU. Although GPUs are often assumed to be poorly suited for algorithms like sorting, their results are several times faster than the best known CPU-based sorting implementations."

The video card in question.. (Score:5, Informative) by black3d (1648913) on Sunday August 29, 2010 @10:33PM (#33412038)

Specifically, a GTX480 (just over 1 B keys/sec), followed up by a Tesla 2050 at around 75% of the speed of the GTX480. (745 M keys/sec)

Link to Technical Paper (Score:5, Informative) by PatPending (953482) on Sunday August 29, 2010 @10:59PM (#33412126)

Revisiting Sorting for GPGPU Stream Architectures [virginia.edu] (PDF)

Re:x86 (Score:5, Informative) by emmons (94632) on Monday August 30, 2010 @12:14AM (#33412374)

GPUs are highly parallel processors, but most of our computing algorithms were developed for fast single core processors. As we figure out how to implement new solutions to old problems to take advantage of these highly parallel processors, you'll continue to see stories like this one. But, there's a limit to how good they can be at certain types of problems. Read up on Amdahl's law. Basically, traditional x86 processors are good at lots of stuff. Modern GPUs are great at a few things.

No (Score:5, Informative) by Sycraft-fu (314770) on Monday August 30, 2010 @12:29AM (#33412408)

GPUs are special kinds of processors, often called stream processors. They are very efficient at some kinds of operations, and not efficient at others. Some things, they run literally a thousand times faster than the CPU.
Graphics rasterization would be one of these (no surprise, that's their prime job). However, other things they run much slower. For something to run fast on a GPU it has to meet the following requirements; the more it matches them, the faster it is:

1) It needs to be parallel to a more or less infinite extent. GPUs are highly parallel devices. The GTX 480 in question has 448 shaders, meaning for max performance it needs to be working on 448 things in parallel. Things that are only somewhat parallel don't work well.

2) It needs to not have a whole lot of branching. Modern GPUs can branch, but they incur a larger penalty than CPUs do. So branching in the code needs to be minimal. It needs to mostly be working down a known path.

3) When a branch happens, things need to branch the same way. The shaders work in groups with regards to data and instructions. So if you have half a group branching one way, half the other, that'll slow things down as it'll have to be split out and done separately. So branches need to be uniform for the most part.

4) The problem set needs to fit in to the RAM of the GPU. This varies; 1GB is normal for high end GPUs and 4-6GB is possible for special processing versions of those. The memory on board is exceedingly fast, over a hundred gigabytes per second in most cases. However the penalty for hitting the main system RAM is heavy, the PCIe bus is but a fraction of that. So you need to be able to load data in to video RAM and work on it there, only occasionally going back and forth with the main system.

5) For very best performance, your problem needs to be single precision floating point (32-bit). That is what GPUs like the best. Very modern ones can do double precision as well, but at half the speed. I don't know how their integer performance fares over all, they can do it, but again not the same speed as single precision FP.

Now this is very useful. There are a lot of problems that fall in that domain.
As I said, graphics would be one of the biggest, hence why they exist. However there are many problems that don't. When you get ones that are way outside of that, like, say, a relational database, they fall over flat. A normal CPU creams them performance-wise. That's why we have the separate components. CPUs can't do what GPUs do as well, but they are good at everything. GPUs do particular things well, but other things not so much.

In fact this is taken to the extreme in some electronics with ASICs. They do one and only one thing, but are fast as hell. Gigabit switches are an example. You find that tiny, low-power chips can switch amazing amounts of traffic. Try it on a computer with gigabit NICs and it'll fall over. Why? Because those ASICs do nothing but switch packets. They are designed just for that, with no extra hardware. Efficient, but inflexible.

• Re:Ugh. (Score:3, Informative)
by metacell (523607) on Monday August 30, 2010 @02:26AM (#33412784)
Dude, an algorithm which is O(n*log(n)) is not faster than O(n) just because n*log(n) < n.

When an algorithm is O(n*log(n)), it means the actual time requirement is p*n*log(n)+q, where p and q are constants specific to the algorithm. The O(n*log(n)) algorithm is faster than the O(n) one when p1*n*log(n)+q1 < p2*n+q2 ... and for any n, it is possible to choose p1, p2, q1 and q2 so that the O(n) algorithm becomes faster.

This means, for example, that an algorithm which is O(n*log(n)) isn't automatically faster than an algorithm which is O(n) on lists with three elements or more. The O(n*log(n)) algorithm may take a hundred times longer to sort a list of two elements than the O(n) one (due to each step being more complex), and in that case the lists will need to grow some before the O(n*log(n)) algorithm becomes faster.

• Re:The video card in question..
(Score:2, Informative)
by FilipeMaia (550301) on Monday August 30, 2010 @02:36AM (#33412818) Homepage
The reason for the GTX480 being faster is that it has 15 SM compared to 14 from the Tesla 2050. Also the GTX 480 runs at a higher clock speed (700 compared to 575). Put together this is 575/700*14/15 = 76.7%, which comes pretty close to the 75%.

• Re:Um... (Score:3, Informative)
by Anonymous Coward on Monday August 30, 2010 @02:43AM (#33412832)
Typically, I hear researchers describe the parallelism of an algorithm separately from its computational complexity (big-O notation) using the idea of "speedup." The perfect scaling in your first example has linear speedup, and the second example has logarithmic speedup (that is, the speedup is log(p)). Here is the relevant Wikipedia article [wikipedia.org].

• Re:Also (Score:1, Informative)
by Anonymous Coward on Monday August 30, 2010 @03:32AM (#33412986)
Doesn't matter what the theory says; if the hardware can do X faster than Y, then X is better according to users. Normally big-O notation is applied on a purely theoretical level where all operations are assumed to have the same base cost in terms of time to execute. This does not make the notation invalid for real-world applications and implementations, however. But when doing so you have to adjust your formulas by adding the proper weighting to execution time. In practice this is usually a waste of effort as it's generally faster to just write an implementation and time it on various pieces of hardware.

In the article we're talking about, they are comparing a single implementation of a single algorithm on several pieces of hardware. So first of all, the summary shouldn't be shouting about breaking any kind of record; they weren't trying to hit any particular benchmark. It's a relative test of the hardware, not the algorithm or implementation.
The reason why using big-O and making comparisons is useful is because if all we use is this type of test, the answer to any speed problem is simply "Get faster hardware, or buy a piece of hardware which runs this code faster". In all likelihood, there are other methods which, when implemented on the same hardware, may yield much faster results. Heck, it's possible someone else's implementation of the same algorithm may yield faster results as well.

In regards to your comments about raytracing vs. polygon rendering, all I'm going to say is that you don't have a very good concept of what raytracing really is if you think a sphere created from a million triangles will raytrace faster than one modeled as a mathematical sphere. It won't; those demonstrations are a pure head-to-head comparison operating on a scene which has actually been optimized for a non-raycasting technique.

• Re:No (Score:2, Informative)
by FlawedLogic (1062848) on Monday August 30, 2010 @03:33AM (#33412992)
The GTX480 can actually do a double precision op per clock cycle. Fermi was designed with DP supercomputing in mind, which is why it's so bloody expensive. To get the price down for consumer cards they removed that ability, since graphics doesn't generally need it. Consumer cards need four ticks to do the equivalent DP op.

• Re:Excel Charts (Score:4, Informative)
by w0mprat (1317953) on Monday August 30, 2010 @05:51AM (#33413324)
Amen. Some tools like that would be a godsend. It could be coming. http://en.wikipedia.org/wiki/Linked_Data [wikipedia.org] http://linkeddata.org/ [linkeddata.org] - Not what you are talking about, but what you describe may result from it.

• Re:The video card in question.. (Score:2, Informative)
by ericcj (1574601) on Monday August 30, 2010 @09:52AM (#33414430)
Chips on the GTX 480, C2050, and C2070 come from the exact same die and wafer. The C20XX GPUs actually run at a lower clock speed for 32-bit floating-point and integer operations than a GTX 480.
C20XX series hardware is intended for physics/science/engineering calculations, where double-precision is preferred. The C20XX series is 4 times faster at double-precision calculations than the GTX 480. This is the sweet spot it is tuned for.

• by Anonymous Coward on Monday August 30, 2010 @11:05AM (#33415228)
His naivety hit the real world and exploded.
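Several of the comments above lean on Amdahl's law to explain why a 448-shader GPU does not deliver a 448x speedup. The relation is short enough to sketch directly; the shader count is the GTX 480 figure quoted in the thread, while the parallel fractions are illustrative assumptions, not measurements of the sorting code:

```python
def amdahl_speedup(parallel_fraction, workers):
    # Amdahl's law: overall speedup when only `parallel_fraction`
    # of the work can be spread across `workers` execution units.
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / workers)

# With the GTX 480's 448 shaders, even a 1% serial portion caps the gain:
print(amdahl_speedup(0.99, 448))   # ~82x, far below the 448x ideal
print(amdahl_speedup(1.00, 448))   # 448x only if everything parallelizes
```

The takeaway is that the serial fraction, not the shader count, tends to dominate the achievable speedup in practice.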
{"url":"http://developers.slashdot.org/story/10/08/30/0133203/Sorting-Algorithm-Breaks-Giga-Sort-Barrier-With-GPUs/informative-comments","timestamp":"2014-04-24T13:32:16Z","content_type":null,"content_length":"109972","record_id":"<urn:uuid:bdc960f2-53e5-41ca-851e-0ca4072ad07a>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00585-ip-10-147-4-33.ec2.internal.warc.gz"}
Deptford Township, NJ Calculus Tutor Find a Deptford Township, NJ Calculus Tutor ...As a Pennsylvania certified teacher in Mathematics, I was recognized by ETS for scoring in the top 15% of all Praxis II Mathematics test takers. In high school I scored 1550/1600 (780M, 770V) on the SAT and in January 2013 I scored 2390/2400 (800M, 790R, 800W). Yes, I still take the tests to mak... 19 Subjects: including calculus, statistics, algebra 2, geometry ...I have a degree in mathematics. This included taking calculus 1, 2, 3, 4 and a class called Advanced Calculus. I also have experience tutoring students in calculus 1 and 2. 16 Subjects: including calculus, English, physics, geometry ...Peter is always willing to offer flexible scheduling to suit the client's needs. He is also prepared to be responsive to any budgetary concerns.My qualification for tutoring GMAT is based upon (1) my academic record and (2) my workplace experience. Academic achievements include a BS (with honors)in Applied Physics and a Doctorate in Engineering Physics from Oxford University. 10 Subjects: including calculus, GRE, algebra 1, GED ...I can do the same for you. Probability is perhaps the most fascinating topic in all of mathematics. As a young graduate student, I had a probability professor who was a professional gambler and a world-class backgammon player. 23 Subjects: including calculus, English, geometry, statistics ...Also, I have tutored students in ODE's for over ten years. I worked for close to three years as a pension actuary and have passed the first three exams given by the Society of Actuaries, which rigorously cover such topics as calculus, probability, interest theory, modeling, and financial derivat... 19 Subjects: including calculus, geometry, trigonometry, statistics
{"url":"http://www.purplemath.com/Deptford_Township_NJ_calculus_tutors.php","timestamp":"2014-04-17T15:26:30Z","content_type":null,"content_length":"24479","record_id":"<urn:uuid:098a9ee1-37d4-4c85-82e0-ab94725f7d24>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00451-ip-10-147-4-33.ec2.internal.warc.gz"}
Nano and Giga Abstract

Large Scale Dynamics with Quantum Mechanical Forces: The Transfer Hamiltonian

The question of 'predictability' in multi-scale materials simulations is a very important one, since we want the results to be reliable, qualitatively or quantitatively. The first step in achieving this largely depends upon having a source of accurate quantum chemical forces that underlie the classical dynamics. However, unlike small molecules, where highly sophisticated and predictive methods like coupled-cluster theory can be applied, for any multi-scale simulation hundreds to thousands of atoms need to be described by the quantum chemical methods with an efficiency that permits coupling to the dynamics. To solve this problem we have embarked upon the concept of a 'transfer Hamiltonian', which is formally defined by coupled-cluster theory, whose eigenfunction is a single determinant, yet whose energy and forces are exact. Furthermore, it also permits the treatment of different electronic or ionized states, permitting optical properties and state specificity to be described as part of the simulation. This transfer Hamiltonian can be evaluated purely from first principles, or from determining parameters that define the Hamiltonian for the particular phenomena of interest. We will illustrate this approach in comparison with others like density functional theory in problems involving the fracture of silica, including the presence of water.
{"url":"http://www.asdn.net/moscow/software-abstracts/bartlett/","timestamp":"2014-04-16T21:52:00Z","content_type":null,"content_length":"3789","record_id":"<urn:uuid:1b3ebe8e-7f5f-4272-b5a3-222ecf809f52>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00602-ip-10-147-4-33.ec2.internal.warc.gz"}
Bessel Function Zeros - File Exchange - MATLAB Central

Comments and Ratings (21)

14 Jul
I used the routine to compute the analytical solution of the 2D diffusion equation. It turned out very well.

01 Mar
What are the advantages of using this over fzero? Thanks.

02 Feb 2010
Mostly seems to work. However, the zeros for the Bessel functions of the second kind are missing the first zero (0.893576966279). That is, the first Y_n zero it gives is actually the second, the second is actually the third, and so on.

24 Jul 2009
Hi all, I am using the function in Matlab R2008a. I entered the command besselzero(1/2,1,2) and the result is different than the value from Mathematica's BesselYZero[1/2,1]. I also tried some other combinations which seemed fine. I just wonder if this is a special case or if I should check the numbers given by this Matlab function with some other online sources. Thanks.

08 Oct
I don't have comments, hehe, sorry, but this information is good! Thank you!

02 Mar
Thank you, very useful.

29 Jan 2008
The algorithm is fascinating and the speed is satisfactory. But there is a small defect: for the Y function, the initial guess is not correctly chosen and the routine misses the first root in some cases.

02 Oct
05 May
01 May
It works just fine.

23 Apr
I'm having trouble finding the solution of the equation for the finite and infinite cladding of the fiber. Could you help me please?

20 Mar
Nice work. You've saved me a lot of time.

26 Nov
Works great.

15 Nov
Good -- more commenting / discussion would be nice.

13 Oct
I am using it. But not sure about its accuracy.

18 Aug
Very good. Works perfectly.

11 Jul 2006
Thanks. T.S, BGU University, Israel

20 Apr
05 Dec 2005
French student, thank you. We hope you'll have a good life.

I used this m-file to generate the zeros for a bessel function of the first kind and order zero and it worked just fine.
03 Dec 2005
When you run the function, you should use besselzero(n,k,kind). I interpreted the documentation to suggest that the function needs only 2 inputs, but it really requires 3 to work.

04 Aug 2005
It seems the besselzero(n,k,1) and besselzero(n,k,2) work fine. Checked with multiple plot(besselzero(n,100,kind)). Good work ...
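Several commenters above cross-check besselzero against outside sources. The leading zero of J0 can be verified independently with nothing but a power series and bisection. This sketch is stdlib-only Python rather than MATLAB, is not part of the File Exchange submission itself, and relies on the series for J0 being accurate on the small bracket [2, 3]:

```python
def j0(x, terms=30):
    # Power series J0(x) = sum_k (-1)^k (x/2)^(2k) / (k!)^2,
    # accurate for the small x values bracketed below.
    s, term = 0.0, 1.0
    for k in range(terms):
        s += term
        term *= -(x / 2.0) ** 2 / ((k + 1) ** 2)
    return s

def bisect(f, a, b, tol=1e-12):
    # Plain bisection; assumes f(a) and f(b) straddle a sign change.
    fa = f(a)
    while b - a > tol:
        m = 0.5 * (a + b)
        if fa * f(m) <= 0.0:
            b = m
        else:
            a, fa = m, f(m)
    return 0.5 * (a + b)

first_zero = bisect(j0, 2.0, 3.0)
print(first_zero)  # ~2.404825557695773, the tabulated first zero of J0
```

The same bracketing idea extends to later zeros, provided each bracket contains exactly one sign change.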
{"url":"http://www.mathworks.com/matlabcentral/fileexchange/6794-bessel-function-zeros","timestamp":"2014-04-21T04:49:26Z","content_type":null,"content_length":"36033","record_id":"<urn:uuid:4d6af810-3c91-4afb-9301-c9a2db62643e>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00575-ip-10-147-4-33.ec2.internal.warc.gz"}
How TeleConverters Affect Magnification

Prepared 2007-02-28 (169/12917) by Bill Claff

TeleConverters (TCs) operate by the same lens combination principles as close-up lenses. The basic formulas are:

P = P[1] + P[2] - d[m] * P[1] * P[2]

where P, P[1], and P[2] are lens powers in diopters and d[m] is the distance between the lens nodes of P[1] and P[2] in meters, together with:

H[1]H = d[m] * P[2] / P
H[2]H' = -d[m] * P[1] / P

for the distances the resulting front and rear lens nodes shift as a result of a non-zero d[m].

For close-up lenses we often disregard d[m], which simplifies calculations considerably. (See How Close-up Lenses Affect Magnification)

However, the design of a TC is more complex, since in addition to changing focal length the rear node must also move to maintain infinity focus. Because of the placement and movement of the rear node, the term d[m] in the power equations cannot be disregarded. So without complete lens and TC information, only the effect on focal length at infinity focus and the effect on magnification at any focus position can be easily calculated.
{"url":"http://home.comcast.net/~NikonD70/GeneralTopics/Close-up/How_TeleConverters_Affect_Magnification.htm","timestamp":"2014-04-20T09:20:46Z","content_type":null,"content_length":"6749","record_id":"<urn:uuid:7d17d038-1eea-4c83-9bda-ccb659c010e6>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00372-ip-10-147-4-33.ec2.internal.warc.gz"}
Parkside, PA Math Tutor Find a Parkside, PA Math Tutor ...At the advanced level, I like pointing out grammatical constructions to students and asking them questions of syntax so that they can begin to understand how the language works. I served as a math tutor during my first two years of college. Before that, I tutored students in high school math as well as elementary math. 10 Subjects: including algebra 1, algebra 2, vocabulary, grammar ...It upsets me when I hear students say, 'I'm just not good in math!' Comments like that typically mean that a math teacher along the way wasn't able to present the material in a way that made sense to the student. I've never met a student who didn't understand once we as a team figured out how t... 9 Subjects: including geometry, Microsoft Outlook, algebra 1, algebra 2 ...I have passed all of the required PRAXIS tests for elementary education in both states. I student taught in a 3rd grade classroom for 16 weeks, tutored many students at the elementary level, substitute taught in the elementary level, and completed fieldwork in 1st and 4th grade. I have taught middle school math for 6 years. 21 Subjects: including algebra 1, prealgebra, statistics, special needs Hello, My name is Kelly and I am a Kindergarten/1st grade teacher for Education Plus Academy Charter School. I am a graduate from West Chester University with a Bachelor's degree in Elementary and Special Education. Throughout college I attended field classes where I was required to observe and work with students in classes on different levels. 9 Subjects: including prealgebra, reading, grammar, special needs ...I value your feedback, and I look forward to hearing from you. If you are looking for someone who will think outside of the box and is completely dedicated to your success, I am that someone. Please feel free to contact me if you'd like to know more about my experience and tutoring capabilities. 
11 Subjects: including prealgebra, reading, grammar, English
{"url":"http://www.purplemath.com/Parkside_PA_Math_tutors.php","timestamp":"2014-04-18T18:35:37Z","content_type":null,"content_length":"24028","record_id":"<urn:uuid:92ef2a94-9efc-41c0-a028-442cb76c684b>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00182-ip-10-147-4-33.ec2.internal.warc.gz"}
Browse by Author

Number of items: 3.

Chapman, S
Franz, B. and Flegg, M. B. and Chapman, S. J. and Erban, R. (2013) Multiscale reaction-diffusion algorithms: PDE-assisted Brownian dynamics. SIAM Journal on Applied Mathematics, 73 (3). pp.
Franz, B. and Flegg, M. B. and Chapman, S. J. and Erban, R. (2012) Multiscale reaction-diffusion algorithms: PDE-assisted Brownian dynamics. SIAM Journal on Applied Mathematics. (Submitted)

Erban, R
Franz, B. and Flegg, M. B. and Chapman, S. J. and Erban, R. (2013) Multiscale reaction-diffusion algorithms: PDE-assisted Brownian dynamics. SIAM Journal on Applied Mathematics, 73 (3). pp.
Franz, B. and Flegg, M. B. and Chapman, S. J. and Erban, R. (2012) Multiscale reaction-diffusion algorithms: PDE-assisted Brownian dynamics. SIAM Journal on Applied Mathematics. (Submitted)
Franz, B. and Erban, R. (2011) Hybrid modelling of individual movement and collective behaviour. In: Dispersal, individual movement and spatial ecology: A mathematical perspective. Springer.

Flegg, M
Franz, B. and Flegg, M. B. and Chapman, S. J. and Erban, R. (2013) Multiscale reaction-diffusion algorithms: PDE-assisted Brownian dynamics. SIAM Journal on Applied Mathematics, 73 (3). pp.
Franz, B. and Flegg, M. B. and Chapman, S. J. and Erban, R. (2012) Multiscale reaction-diffusion algorithms: PDE-assisted Brownian dynamics. SIAM Journal on Applied Mathematics. (Submitted)

Franz, B
Franz, B. and Flegg, M. B. and Chapman, S. J. and Erban, R. (2013) Multiscale reaction-diffusion algorithms: PDE-assisted Brownian dynamics. SIAM Journal on Applied Mathematics, 73 (3). pp.
Franz, B. and Flegg, M. B. and Chapman, S. J. and Erban, R. (2012) Multiscale reaction-diffusion algorithms: PDE-assisted Brownian dynamics. SIAM Journal on Applied Mathematics. (Submitted)
Franz, B. and Erban, R. (2011) Hybrid modelling of individual movement and collective behaviour. In: Dispersal, individual movement and spatial ecology: A mathematical perspective. Springer.
This list was generated on Sun Apr 20 01:23:45 2014 BST.
{"url":"http://eprints.maths.ox.ac.uk/view/author/Franz=3AB=2E=3A=3A.html","timestamp":"2014-04-20T00:39:17Z","content_type":null,"content_length":"11739","record_id":"<urn:uuid:ede7a497-bb3b-4084-b55e-d01841de2539>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00220-ip-10-147-4-33.ec2.internal.warc.gz"}
The Fetus.Net : Cases Winners of the Case of the Week, 2014 Winners of the case of the week for 2013 - Winners' list Winners of the Case of the Week, 2013 Winners of the case of the week for 2012 - Winners' list Winners of the Case of the Week, 2012 Winners of the case of the week for 2011 - Winners' list Winners of the case of the week for 2011 Winners of the case of the year for 2010 - Winners' list - Philippe Jeanty, MD PhD. Winners of the case of the week for 2010 Winners of the case of the year for 2009 - Winners' list Winners of the case of the week for 2009 Winners of the case of the week for 2008 - Winners' list Winners of the case of the week for 2008 Winners of the case of the week for 2007 Winners of the case of the week for 2006 Winners of the case of the week for 2005 Winners of the case of the week for 2004 Winners of the case of the week for 2003 Winners of the case of the week for 2002 - Winners of the case of the week for 2001 - Winners of the case of the week for 2000 - Winners of the case of the Week 1999 - Winners of the case of the week for 1999 - Submit a Case Case of the week # 367 - Moshe Bronshtein, Segal Yeoushua Case of the week # 366 - Fabrice Cuillier Case of the week # 365 - Lilit Hovsepyan Case of the week # 364 - F Grochal Case of the week # 363 - F Grochal Case of the week # 362 - Alberto Sosa Olavarría_Eduardo Caleiras Case of the week # 361 - Frantisek Grochal Case of the week # 360 - Philippe Deblieck Case of the week # 359 - Philippe Deblieck Case of the week # 358 - Juan Pablo Gallo Case of the week # 357 - Emmanuel Julien Case of the week # 356 - A Krasnov, A Averyanov, I Glazkova, A Kurkevich, L Golovakha, A Malov Case of the week # 355 - M Bronshtein, I Naroditsky, F Grochal Case of the week # 354 - Moshe Bronshtein Case of the week # 353 - AS Olavarria, JLD Corral, A Cortez, F Boscaja Case of the week # 352 - Martin Juhas; Jozef Janovsky Case of the week # 351 - Othman Al-Asali; F Yassen Case of the week # 350 - Frantisek 
Grochal Case of the week # 349 - Alberto Sosa Case of the week # 348 - Mayank Chowdhury Case of the week # 347 - Moshe Bronshtein Case of the week # 346 - Moshe Bronshtein Case of the week # 345 - F Cuillier, Mardamootoo D, Balu M, Alessandri JL Case of the week # 344 - Elena Andreeva Case of the week # 343 - Shona Whitmell, MRT(N), RDMS, CRGS Case of the week # 342 - Murad Esetov Case of the week # 341 - Tudor Iacovache Case of the week # 340 - Murad Esetov, MD, Gylnabat Bekeladze, MD, Elina Gyseinova, MD Case of the week # 339 - Murad Esetov Case of the week # 338 - Fabrice Cuillier, MD. Case of the week # 337 - Nguyen Ha, MD Case of the week # 336 - Francesco Contarin Case of the week # 335 - Moshe Bronshtein, MD Case of the week # 334 - Moshe Bronshtein, MD Case of the week # 333 - F Ventriglia; V Martucci; A Cerekja; A Caiaro; I Orsola; L Manganaro Case of the week # 332 - Fabrice Cuillier Case of the week # 331 - Elena Andreeva Case of the week # 330 - Elena Andreeva, MD. Case of the week # 329 - Vu D. Nguyen Case of the week # 328 - Nguyen Ha Case of the week # 327 - Marcos Antonio Velasco Sanchez Case of the week # 326 - Moshe Bronshtein Case of the week # 325 - Claudio Gomes, MD Case of the week # 324 - Hanane Saadi Case of the week # 323 - Fabrice Cuillier, Mardamootoo. Case of the week # 322 - Emmanuel Julien, MD. Case of the week # 321 - J. Van Keirsbilck, MD., Ph. Moerman, MD. PhD, K. Devriendt, MD.PhD, L. De Catte, MD. PhD Case of the week # 320 - Fabrice Cuillier, MD Case of the week # 319 - Fabrice Cuillier, MD.1, Michel J.L.,MD.2, Harper,MD., Alessandri J.L,MD Case of the week # 318 - Fabrice Cuillier ,MD., Arrazola G.,MD., Cartault F.,MD., Alessandri J.L.,MD. Case of the week #317 - F Grochal Case of the week #316 - M Bronshtein Case of the week #315 - F Grochal, J Krsiakova, E Kaufmann, D Mardamootoo, F Cuillier, F Cartault Case of the week # 314 - Fabrice Cuillier ,MD.*,Mardamootoo D. , Midwife *,Alessandri J.L.,MD.**,Balu M.,MD.***.
Case of the week # 313 - Fabrice Cuillier Case of the week #312 - M Bronshtein; I Naroditsky Case of the week #311 - A Krasnov, E Buryachenko, A Averyanov, I Glazkova, R Abdullin Case of the week #310 - F Grochal, A Cunderlik, P Klesken, P Szwarc, P Calda, R Vlk, P Babjak, D Murgasova, V Frisova Case of the week #309 - Philippe Jeanty Case of the week #308 - E Andreeva; L Zhuchenko; N Odegova; F Lagkueva Case of the week #307 - Philippe Deblieck Case of the week #306 - Fabrice Cuillier; JL Alessandri Case of the week #305 - F. Cuillier MD, Y Rio MD, J.L. Alessandri MD. Case of the week #304 - Moshe Bronshtein, MD. Case of the week #303 - Amr Hamza, MD. Case of the week #302 - Cuillier F. MD, Alessandri J.L.MD, Dr Cartault F.MD. Case of the week # 301 - Kozeta Mustafaraj, MD, Shuaip Beqiri, MD, Arben Haxhihyseni, MD. Case of the week #300 - Philippe Jeanty, MD PhD., Eva Leinart, MD, PhD. Case of the week #299 - E. Andreeva, MD, L. Zhuchenko, MD, I. Geilis, MD. Case of the week #298 - Philippe Jeanty, MD PhD., Eva Leinart, MD, PhD. Case of the week #297 - WL Lau, MD, LL Chan, MD, KS Chan, MD, WC Leung MD Case of the week #296 - Sivasamy Manohar, MD. Case of the week #295 - Moshe Bronshtein, MD Case of the week #294 - Elena Andreeva, MD. Case of the week #293 - A. Mazzocco MD, R. Manara MD, A.de Nardi MD, A.Cerekja MD PhD Case of the week #292 - Vu Dinh Nguyen, MD. Case of the week #291 - Fabrice Cuillier, MD, A.Bertha Case of the week #290 - Elena Andreeva, MD., Ludmila Juchenko, MD. Case of the week #289 - A. Sosa Olavarria Case of the week #288 - Elena Andreeva, MD. Case of the week #287 - D. Markov MD PhD, E. Pavlova MD, D. Atanassova MD, P. Markov, MD Case of the week #286 - O. Al-Asali MD, I. Abo-Almaged MD, A. Al Taher MD, M Al-Hakeem MD Case of the week #285 - Fabrice Cuillier, MD, J.L. Alessandri, MD, F.Cartault, MD, L.Vinatier MD. Case of the week #284 - Binodini M. Chauhan, MD. Case of the week #283 - G. Perez-Canto Ch, A. Sosa Olavarria, H. 
Parra Venero Case of the week #282 - Nguyen Ha, M.D. Case of the week #281 - Héctor Gonzalo Quiroga Pacheco, MD. Case of the week #280 - Moshe Bronshtein, MD Case of the week #279 - Ajit Gandhi MD, Anish Dekhane MD Case of the week #278 - Nguyen Ha, M.D. Case of the week #277 - Elena Andreeva, MD., Ludmila Juchenko, MD. Case of the week #276 - Tihonenko I.MD, Venchikova N.MD, Nerovnya A. MD, Yushchenko O. Case of the week #275 - Ilyina H.G. MD, Novikova I.V. MD, Lishtvan L.M. MD, Savenko L.A. MD, Kasetskaya T.V. MD, Miheeva N. MD. Case of the week #274 - Elena Andreeva, MD. Case of the week #273 - Cuillier F.MD,Elad T.MD, Prost F.MD, Possati P.MD Case of the week #272 - Nattinee Srisantiroj, M.D. Case of the week #271 - Sheryl Rodts-Palenik, MD. Case of the week #270 - A Krasnov, MD, , I Glazkova MD, A.Averyanov MD, I Mokryk MD. Case of the week #269 - N. Srisantiroj MD, P.Padungkiatwattana MD,S.Sirijareonthai MD,P.Jaruyawong MD. Case of the week #268 - S. Manohar, MD, M.Mohan Karthikeyan, MD Case of the week #267 - Christa Faschingbauer, M.D. Case of the week #266 - S.Manohar, M.D, M.Mohan Karthikeyan, DMRD Case of the week #265 - Natalya Miheeva, MD. Case of the week #264 - Tibisay G. Chin Aleong, MD., Montserrat Alegre, MD., Maria V.Huerta Anaya, MD. Case of the week #263 - Emmanuel Julien, MD., Philippe Juhel, MD. Case of the week #262 - C Gar El, MD, AB Gelot, MD PhD, M Moutard, MD, R Grigorescu, MD, L Gourand, MD. Case of the week #261 - Binodini M. Chauhan, MD. Case of the week #260 - Trinh Nguyen, M.D. Case of the week #259 - JZ. Peralta, MD., FL.Garcia, MD., OA. López, MD. Case of the week #258 - P. Budziszewska, MD., D.Kuka, MD., K. Sodowski, MD., P.Jeanty, MD., M. Hernanz-Schulman, MD. Case of the week #257 - Ha To Nguyen, MD. Case of the week #256 - Elena Andreeva, MD. 
Case of the week #255 - A Krasnov, MD, G Gorokhova, L Golovakha, I Glazkova Case of the week #254 - F Cuillier, JL Alessandri, KC Dillon Case of the week #253 - Andrés Arencibia Molina, MD. Case of the week #252 - Elena Andreeva, MD. Case of the week #251 - Irina Tihonenko, MD, Tatiyana Koipish, MD. Case of the week #250 - Emmanuel Julien Case of the week #249 - Fabric Cuillier Case of the week #248 - S. Manohar, DMRD, MD. Case of the week #247 - Adam Paer, MD, Philippe Jeanty, MD. Case of the week #245 - Margarita Alvarez de la Rosa Rodriguez, MD Case of the week #246 - Irina Tihonenko Case of the week #244 - Héctor Gonzalo Quiroga Pacheco, MD. Case of the week #243 - Andrea L. Gonzales ,Luis A. Izquierdo-Encarnacion ,Samantha Long Case of the week #242 - S. Manohar, DMRD, MD. Case of the week #241 - S. Manohar, DMRD, MD. Case of the week #240 - Nattinee Srisantiroj ,Prapat Wanitpongpan ,Vitaya Titapant Case of the week #239 - Binodini M. Chauhan, MD. Case of the week #238 - Nguyen Ha ,Francois Manson Case of the week #237 - Nickolay Petrovich Veropotvelyan, MD. Case of the week #236 - F Cuillier ,H M’Lamali ,X Baron ,A Berta Case of the week #235 - Elena Andreeva Case of the week #234 - E Racanska ,R Gerychova ,P Janku. Case of the week #233 - Calvo A, MD ,Azumendi Guillermo, MD ,J. R. Herrero, MD ,Marisa Borenstein, MD. Case of the week #232 - Fabrice Cuillier ,J. Bideault ,D. Daguindeau. Case of the week #231 - F Cuillier ,D Daguindeau ,J Bideault ,JL Travers Case of the week #230 - WL Lau ,HSW Lam ,WC Leung ,RKH Chin. Case of the week #229 - Francois Manson, MD. Case of the week #228 - Alexander Krasnov ,Irina Glazkova ,Andrey Averyanov. 
Case of the week #227 - Jayprakash Shah Case of the week #226 - A Volkov ,A Rymashevsky ,A Lukach Case of the week #225 - F Cuillier ,D Daguindeau ,J Bideault ,A Bertha Case of the week #224 - Nguyen Ha ,Philippe Jeanty Case of the week #223 - Elena Andreeva ,Natalia Odegova ,Natalia Bortnovskaya ,Alexandr Michin Case of the week #222 - Trinh Nguyen, MD. Case of the week #221 - Fabrice Cuillier ,J. Chouchani ,A. Hamas. Case of the week #220 - Fabrice Cuillier ,A. S. Charpentier ,L. Lagarde. Case of the week #219 - F Cuillier ,P Dussautoir ,G Kah ,JP Riviere Case of the week #218 - Trinh Nguyen, MD. Case of the week #217 - Luis Guillermo Diaz Guerrero, MD. Case of the week #216 - Meyer Serrano Riano, MD. Case of the week #215 - Andrés Arencibia Molina, MD ,Ludmila Ocón, MD. Case of the week #214 - Irina Glazkova, MD. Case of the week #213 - Maha Tulbah, MD. Case of the week #212 - Ha To Nguyen, MD Case of the week #211 - F Cuillier ,T Gervais ,J M Scemama ,JM Laville ,F Salmeron ,JP Riviere ,A Bertha Case of the week #210 - A Volkov ,E Andreeva ,A Rymashevsky Case of the week #209 - F Cuillier ,J Bideault ,P Lemaire ,M Deshayes ,JM Travers Case of the week #208 - Christa Faschingbauer Case of the week #207 - Elena Andreeva, MD ,Svetlana Tchuchvaga, MD. Case of the week #206 - Elena Andreeva, MD ,Svetlana Tchuchvaga, MD. Case of the week #205 - BH Syla ,S Fetiu ,S Tafarshiku Case of the week #204 - Trinh Nguyen, MD Case of the week #203 - Fabrice Cuillier, MD ,J. Narboni, MD ,J. L. Alessandri, MD. Case of the week #202 - C Dalmon ,G Brodaty ,B Bessières ,V Mirlesse ,L Gourand Case of the week #201 - Francois Manson, MD. Case of the week #200 - Sheryl Rodts-Palenik, MD. Case of the week #199 - Breck Collins, MD ,Philippe Jeanty, MD, PhD ,Ian and Kristi Hay, Case of the week #198 - Robert Dankovcik, MD, PhD. Case of the week #197 - F Contarin, MD ,J Suárez, MD ,J Rangel, MD ,J Visconti, MD ,B Rodriguez, MD. Case of the week #196 - Fabrice Cuillier, MD ,K. 
Comalli Dillon, BA, RDMS ,Edouard Kaufman, MD ,H. Randrianaivo, MD Case of the week #195 - Carlos Canetti, MD ,Alejandra Medina, MD Case of the week #194 - Heron Werner, MD ,Iugiro Kuroki, MD ,Erick Malheiro L. Martins, MD Case of the week #193 - Vavilala Suseela, MD. Case of the week #192 - Héctor Gonzalo Quiroga Pacheco, MD. Case of the week #191 - Joan Acosta Diez, MD. Case of the week #190 - Kathleen Comalli Dillon, BA, RDMS ,Fabrice Cuillier, MD ,C, P. Lemaire, MD. Case of the week #189 - S. Manohar, DMRD, MD Case of the week #188 - Miguel Merino, MD Case of the week #187 - S. Manohar, MD, DMRS Case of the week #186 - F. Cuillier, MD ,J. L. Alessandri ,J. M. Laville ,D. Durandeu ,J. P. Bourdil ,K. C. Dillon Case of the week #185 - Giuseppe Cali, MD ,Francesco Labate, MD Case of the week #184 - Montse Alegre, MD Case of the week #183 - Alvaro Teran, MD Case of the week #182 - Cuillier F*, MD ,J. Chouchani**, MD ,Abdou M**, MD ,Bideault J***, MD ,Fossati P****, MD Case of the week #181 - Mariliza Volpe, MD Case of the week #180 - Luc Gourand, MD Case of the week #179 - Fabrice Cuillier, MD Case of the week #177 - Cuillier F, MD ,Carasset G, MD ,Lemaire P ,Deshayes M ,DeNapoli S, MD Case of the week #176 - Debabrata Das, MD, Vaijayanthi Raja, MD Case of the week #175 - Luc Gourand, MD, Veronique Mirlesse, MD Case of the week #174 - Lachlan De Crespigny, MD Case of the week #173 - Debabrata Das, MD, Sulochana Neelakandam, MBBS Case of the week #172 - Philippe Jeanty, MD, PhD ,Cheryl Turner, BS, RDMS, Juliana Leite, MD Case of the week #171 - Ron Auslander, MD, Goldberg Yael, MD Case of the week #170 - David Fox, MD Case of the week #169 - Luis Diaz Guerrero, MD ,Thuy Van Nguyen, MD Case of the week #168 - Cuillier F, MD, Avignon MS, MD Case of the week #167 - Novakov Mikic A, MD ,Ivanoviæ Lj, MD ,Luèiæ M, MD ,Kiralj A, MD ,Dobriæ Lj, MD ,Koprivšek , MD Case of the week #166 - Cuillier, MD ,Alessandri JL, Bideault J, MD ,Rabenja A, MD Case of the week #165 - 
Cuillier F, MD ,Baroche G, MD ,Alessandri JL, MD Case of the week #164 - Fabrice Cuillier, MD Case of the week #163 - Heron Werner, MD Case of the week #162 - Fabrice Cuillier, MD Case of the week #161 - Heron Werner, MD Case of the week #160 - Carlos Elorza, MD Case of the week #159 - Graham Parry, MD Case of the week #158 - Giuseppe Calì, MD ,Francesco Labate, MD ,Adriano Cipriani, MD ,Sergio Di Liberto, MD Case of the week #157 - Montse Alegre, MD Case of the week #156 - Yagel Simcha, MD Case of the week #155 - João Mendes, MD Case of the week #154 - Ilan Timor, MD Case of the week #153 - Fabrice Cuillier, MD ,Elad T., MD, Fossati P.,MD Case of the week #152 - Fabrice Cuillier, MD ,Deshayes M.,MD, Michel J.L., MD Case of the week #151 - S. Manohar, DMRD, MD Case of the week #150 - Fabrice Cuillier, MD Case of the week #149 - Philippe Jeanty ,Pam Ross ,Dorris Baier Case of the week #148 - Juan Carlos Mejia Quintero, MD Case of the week #147 - Drs. Gui T.Mazzoni Jr ,Marcos M.L.Faria ,César C.Sabbaga ,Linei A.B.D.Urban, et al Case of the week #146 - Hector Quiroga, MD Case of the week #145 - Emmanuel Julien, MD Case of the week #144 - Dr.Cuillier F ,Dr Bideault J. ,Dr Alessandri J.L. ,Dr Cartault J.F. Case of the week #143 - Philippe Jeanty, MD, PhD Case of the week #142 - Fabrice Cuillier, MD Case of the week #141 - Philippe Jeanty, MD, PhD Case of the week #140 - Rodolpho Lambruschini, MD Case of the week #139 - Philippe Jeanty, MD, PhD Case of the week #138 - Claudia Elena Teodorescu, MD Case of the week #137 - Sheryl Rodts-Palenik, MD Case of the week #136 - Ghada Mansour, MD ,Sobhi Abou-Louz, MD Case of the week #135 - S.Manohar, MD, DMRD Case of the week #134 - Philippe Jeanty, MD, PhD ,Chaitali Shah, MD ,Cerine Jeanty Case of the week #133 - Philippe Jeanty, MD, PhD Case of the week #132 - José Luis Volpacchio, MD ,Pablo Giuliani, MD, et al. 
Case of the week #131 - Patrick Bailleul, MD Case of the week #130 - Helio Camargo, MD ,Marcia Camargo, MD Case of the week #129 - Ivan Lee, MD Case of the week #128 - Fabrice Cuillier, MD Case of the week #127 - Wayne Persutte, MS, RDMS ,John Hobbins, MD Case of the week #126 - Montse Alegre, MD Case of the week #125 - Fabrice Cuillier, MD Case of the week #124 - François Jacquemard*, MD ,Danièle Pariente MD ,Hélène Martelli, MD ,Luc Gourand*, MD Case of the week #123 - Lívia Rios,MD Case of the week #122 - Rajesh Garge, MD Case of the week #121 - Miguel Octavio Sosa Case of the week #120 - Luc Druart MD ,Helene Dessuant MD ,Luc Gourand MD ,Bettina Bessieres MD ,Fernand Daffos MD Case of the week #119 - Fabrice Cuillier, MD Case of the week #118 - Luc Gourand, MD Case of the week #117 - Raul Martinez, MD Case of the week #116 - Fernando Velazquez, MD Case of the week #115 - Juan Carlos Quintero MD ,Eduardo Romero, MD Case of the week #114 - Adrian Clavelli, MD ,Horacio Ahielo, MD Case of the week #113 - Alberto Sosa Olavarria, MD Case of the week #112 - Maher Sarraf, MD Case of the week #111 - Fabrice Cuillier, MD ,Philippe Lemaire, MD Case of the week #110 - Cheryl Turner, RDMS, BS, Sefa Kelekci, MD ,Jos Offermans, MD, PhD ,Philippe Jeanty, MD, PhD. Case of the week #109 - Ana Bircher, Philippe Jeanty Case of the week #108 - Adrian Clavelli MD Case of the week #107 - Giuseppe Calì, MD , Francesco Labate, MD Case of the week #106 - François Duchatel Case of the week #105 - Claudia Scarpetta, MD, Gustavo Vasquez, MD and Juan Carlos Quintero MD Case of the week #104 - A. Sosa Olavarría ,L. Díaz Guerrero ,G. 
Giugni Chalbaud Case of the week #103 - Hector G Quiroga, MD Case of the week #102 - Alberto Sosa Olavarría Case of the week #101 - Miguel Octavio Sosa Case of the week #100 - Raul Martinez, MD Case of the week # 99 - Fabrice Cuillier Case of the week # 98 - Eleni Tzachrista Case of the week # 97 - Boopathy Vijayaraghavan, MD, DMRD ,R.Lalitha, MBBS, DGO ,Geethanjali, MBBS, DMRD Case of the week # 96 - Luc Gourand Case of the week # 95 - Nayana Parange, MD Case of the week # 94 - Waldo Sepulveda, MD and Sebastian Illanes, MD Case of the week # 93 - Ronaldo Levy MD ,Luc Gourand MD ,Jean Simon Arfi MD ,Bettina Bessières MD, Case of the week # 92 - Luis Izquierdo Case of the week # 91 - Alberto Sosa, MD Case of the week # 90 - Moshe Bronshtein MD and Etan Z. Zimmer MD Case of the week # 89 - Boopathy Vijayaraghavan.S, MD., DMRD, ,Suma Natarajan, MD, DGO, ,Ravikumar.V.R.MS, MCh. Case of the week # 88 - Thomas ,Goolaertsl ,Watkins ,Autin ,Barlow Case of the week # 87 - Duchatel Case of the week # 86 - Johns ,Bircher ,Jeanty ,Heredia ,Dev Case of the week # 85 - Mirlesse ,Levy ,Brodaty ,Sonigo ,Teillac ,Gourand ,Daffos ,Heredia Case of the week # 84 - Luis Izquierdo ,Amanda Cotter Case of the week # 83 - Fernando Heredia ,Hector Figueroa Case of the week # 82 - Gourand ,Jacquemard ,Bessières ,Fallet ,Daffos Case of the week # 81 - Ana Bircher ,Fernando Heredia ,Philippe Jeanty Case of the week # 80 - Raúl Martínez ,Fernando Heredia Case of the week # 79 - Ana Maria BIrcher, MD, Philippe Jeanty, MD, PhD Case of the week # 78 - Guillermina M. Ochua ,Ricardo De Loredo ,Roque A. Carpio ,Javier B. Gardey ,Silvina Fernàndez Case of the week # 77 - Sebastian Illanes, MD ,Cristian Kottmann, MD ,Waldo Sepulveda, MD Case of the week # 76 - Jose Sierra, MD ,Philippe Jeanty, MD, PhD Case of the week # 75 - Alberto Sosa Olavarría, MD, PhD ,Gelsy Giugni Chalbaud ,Luis Díaz Guerrero, MD ,Miguel A. 
Granda MD Case of the week # 74 - Philippe Jeanty, MD, PhD ,Jose Sierra, MD ,Eleni Tzachrista, MD Case of the week # 73 - Pavel Vlašin, MD, Pavel Eliáš, MD* Case of the week # 72 - Renato Ximenes, MD Case of the week # 71 - Moshe Bronshtein MD ,Etan Z. Zimmer MD Case of the week # 70 - Alberto Sosa Olavarría, MD, PhD ,Luis Díaz Guerrero, MD ,Maria Miraz MD ,Aldo Reigosa MD ,Emilio Mac Case of the week # 69 - Philippe Jeanty, MD, PhD Case of the week # 68 - Geneviève Brodaty, MD ,Bettina Bessières, MD ,Luc Gourand, MD Case of the week # 67 - Alberto Sosa Olavarría, MD ,PhD. Luis Díaz Guerrero, MD ,Reigosa Yanis, MD Case of the week # 66 - Raul Martinez MD Case of the week # 65 - Luís Flávio Gonçalves, MD Case of the week # 64 - Philippe Jeanty, MD PhD ,Ana Bircher, MD Case of the week # 63 - Fabrice Cuillier, MD ,Jacques Bidault, MD Case of the week # 62 - Alberto Sosa Olavarría, MD, PhD ,Gonzalo Pérez Canto, MD ,Luis Díaz Guerrero, MD Case of the week # 61 - Philippe Jeanty, MD, PhD Case of the week # 60 - Moshe Bronshtein, MD ,Etan Zimmer, MD Case of the week # 59 - Rama Murthy, MD Case of the week # 58 - Marcos Tawil, MD ,Norma Tello, MD Case of the week # 57 - Luiz Eduardo Machado, MD Case of the week # 56 - Philippe Jeanty, MD, PhD ,Cheryl Turner, RDMS Case of the week # 55 - Luis A. Izquierdo, MD, RDMS ,Victor H. Gonzalez-Quintero, MD ,Mahsomeh Haghayegh, RDMS Case of the week # 54 - Angela Regina Capelanes, MD ,Gloria Valero, MD ,Philippe Jeanty, MD, PhD Case of the week # 53 - Alessandro Giuffrida, MD ,Saveria Cantone, MD ,Fabiola Galvani, MD ,Claudio Giorlandino, MD. Case of the week # 52 - Philippe Jeanty, MD, PhD Case of the week # 51 - Laurie Briare, RDMS ,Dianne Glassford, RDMS ,Dianna Heidinger, RDMS ,Van R. Bohman, MD Case of the week # 50 - Connie Jörgensen, MD, PhD Case of the week # 49 - Gustavo Malinger, MD ,Tally Lerman-Sagie, Dorit Lev ,Mordechai Tamarkin ,Debora Kidron. 
Case of the week # 48 - Juan Carlos Quintero Mejia MD ,Philippe Jeanty, MD, PhD Case of the week # 47 - Christine Comstock, MD Case of the week # 46 - Bettina Bessieres, MD ,François Jacquemard, MD ,Luc Gourand, MD ,Fernand Daffos, MD Case of the week # 45 - Elke Sleurs, MD ,Luc De Catte, MD Case of the week # 44 - Dominique Thomas, MD ,Isabelle Mahillon, MD, ,France Hayez-Delatte, MD, ,Françoise Rypens, MD, Joelle Case of the week # 43 - Elke Sleurs, MD ,Luc De Catte, MD Case of the week # 42 - Elke Sleurs, MD ,Luc De Catte, MD Case of the week # 41 - Maria Verônica Muñoz Rojas, MD ,Luís Flávio Gonçalves, MD ,Rodrigo Dias Nunes, MD ,Jorge Abi Saab N Case of the week # 40 - Anupama Patil, MD ,Philippe Jeanty, MD, PhD Case of the week # 39 - Daniel Margulies, MD ,Nestor Matarasso, MD Case of the week # 38 - Fernando M. Heredia, MD ,Víctor G. Quiroz, MD ,Eliana C. Selman, MD ,Carlos B. Henríquez, MD, Ph Case of the week # 37 - Elke Sleurs, MD ,Luc de Catte, MD Case of the week # 36 - Alberto Sosa Olavarria, MD, PhD ,Luis Díaz Guerrero, MD Case of the week # 35 - L. Gourand MD ,V. Mirlesse MD ,B. Bessieres MD ,F. Jacquemard MD ,R. Levy MD ,F. Daffos MD Case of the week # 34 - Phillip Ramm Case of the week # 33 - Cheryl Turner, RDMS Philippe Jeanty, MD, PhD Case of the week # 32 - Luiz Eduardo.Machado, MD ,L. Chamusca, MD ,K. Chagas, MD ,C. Cadete, MD ,AF.Santos, MD Case of the week # 31 - Philippe Jeanty, MD, PhD ,Ghada Mahmoud Mansour, MD Case of the week # 30 - Carlos Alberto Mejia Escobar, MD ,Jorge Ramirez, MD ,Jaime Gomez, MD ,Oscar Medina, MD Case of the week # 29 - Fernando Heinen MD ,Diego Elias, MD ,Marcelo Pietrani MD ,P. 
Verdaguer MD Case of the week # 28 - Luis Flavio Goncalves Case of the week # 27 - Luc de Catte, MD Case of the week # 26 - Elke Sleurs, MD, and Luc de Catte, MD Case of the week # 25 - Peter Twining Case of the week # 24 - Alberto Sosa Olavarría, MD, PhD ,Luis Díaz Guerrero, MD ,A Reigosa Yanis, MD ,Perinatology Unit, Car Case of the week # 23 - Marcos Tawil, MD, Mexico, ,Philippe Jeanty Case of the week # 22 - Sosa Olavarría, A. MD, PhD ,Luis Díaz Guerrero, MD ,García M.MD ,Pereira D. MD ,Martines M.MD, Perin Case of the week # 21 - Philippe Jeanty, MD, PhD Case of the week # 20 - Sandra Silva, MD, Sao Paolo Brazil , Philippe Jeanty, MD, PhD Case of the week # 19 - Guillermo Diaz Guerrero Case of the week # 18 - Gerald Mulligan, MD Case of the week # 17 - Claudio Giorlandino, MD Case of the week # 16 - Luc de Catte Case of the week # 15 - Adrian Clavelli Case of the week # 14 - Philippe Jeanty, MD, PhD ,Gianluigi Pilu, MD Case of the week # 13 - Philippe Jeanty, MD, PhD Case of the week # 12 - Alberto Sosa Olavarria, MD ,Luis Guillermo Diaz Guerrero, MD, Valencia, Venezuela Case of the week # 11 - Submitted by Luis Izquierdo, MD, Pensacola, Florida Case of the week # 10 - Peter Twining, Nottingham, MD Case of the week # 9 - Luc de Catte, Brussels, Belgium Case of the week # 8 - Luis A. Izquierdo, Pensacola, Florida Case of the week # 7 - Luc De Catte, MD,Brussel, Belgium Case of the week # 6 - Cheryl Norris ,Philippe Jeanty Case of the week # 5 - Adrian Clavelli ,Buenos Aires , Argentina Case of the week # 4 - Hytham M. Imseis, MD, MAHEC Case of the week # 3 - Peter Twining Case of the week # 2 - Glynis Sacks, MD Case of the week # 1 - Jeanty
{"url":"http://www.sonoworld.com/TheFetus/Listing.aspx?Id=2","timestamp":"2014-04-18T11:01:42Z","content_type":null,"content_length":"391793","record_id":"<urn:uuid:f98ee882-e4f9-42fa-822a-718184a546fc>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00085-ip-10-147-4-33.ec2.internal.warc.gz"}
Ernie's 3D Pancakes

Congratulations and thank you for your contribution to Clinical Psychology. Now that the book is published, we need your help to get some 5 star reviews posted to both Amazon and Barnes & Noble to help support and promote it. As you know, these online reviews are extremely persuasive when customers are considering a purchase. For your time, we would like to compensate you with a copy of the book under review as well as a $25 Amazon gift card. If you have colleagues or students who would be willing to post positive reviews, please feel free to forward this e-mail to them to participate.

We share the common goal of wanting Clinical Psychology to sell and succeed. The tactics defined above have proven to dramatically increase exposure and boost sales. I hope we can work together to make a strong and profitable impact through our online bookselling channels.

it was all a mistake
{"url":"http://3dpancakes.typepad.com/ernie/2009/06/index.html","timestamp":"2014-04-23T09:11:34Z","content_type":null,"content_length":"36973","record_id":"<urn:uuid:dd49c27d-72a1-49f0-8497-c0111ef6a27b>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00371-ip-10-147-4-33.ec2.internal.warc.gz"}
Lunar Perigee and Apogee Calculator

To display the date, time, and distance of lunar perigees and apogees for a given year, enter the year in the box below and press “Calculate”. Depending on the speed of your computer, it may take a while for the results to appear in the text boxes.

This page requires your browser to support JavaScript, and that JavaScript be enabled; all computation is done on your own computer so you can, if you wish, save this page in a file and use it even when not connected to the Internet.

All dates and times are Universal time (UTC); to convert to local time add or subtract the difference between your time zone and UTC, remembering to include any additional offset due to summer time for dates when it is in effect.

For each perigee and apogee the distance in kilometres between the centres of the Earth and Moon is given. Perigee and apogee distances are usually accurate to within a few kilometres compared to values calculated with the definitive ELP 2000-82 theory of the lunar orbit; the maximum error over the years 1977 through 2022 is 12 km in perigee distance and 6 km at apogee.

The closest perigee and most distant apogee of the year are marked with “++” if closer in time to full Moon or “--” if closer to new Moon. Other close-to-maximum apogees and perigees are flagged with a single character, again indicating the nearer phase.

Following the flags is the interval between the moment of perigee or apogee and the closest new or full phase; extrema cluster on the shorter intervals, with a smaller bias toward months surrounding the Earth's perihelion in early January. “F” indicates the perigee or apogee is closer to full Moon, and “N” that new Moon is closer. The sign indicates whether the perigee or apogee is before (“−”) or after (“+”) the indicated phase, followed by the interval in days and hours. Scan for plus signs to find “photo opportunities” where the Moon is full close to apogee and perigee.
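The mean spacing behind the perigee/apogee and phase tables can be sketched with just the leading (mean) terms of Meeus's polynomials; the calculator itself applies the full algorithms from the chapters cited in the references, with many periodic correction terms. The specific coefficients below are quoted from memory of Meeus and should be treated as assumptions to verify against the book; with mean terms only, individual events can be off by a day or more, though the average spacing is right.

```python
# Mean-term-only sketch of perigee/apogee and phase timing (NOT the
# calculator's algorithm).  ASSUMED coefficients, quoted from memory of
# Meeus's "Astronomical Algorithms" -- verify before relying on them.
MEAN_ANOMALISTIC_MONTH = 27.55454989    # days, mean perigee-to-perigee
MEAN_SYNODIC_MONTH = 29.530588861       # days, mean new-moon-to-new-moon

def mean_extreme_jde(k):
    """Julian Ephemeris Date of mean perigee k; k + 0.5 gives mean apogee."""
    return 2451534.6698 + MEAN_ANOMALISTIC_MONTH * k

def mean_phase_jde(k):
    """JDE of mean new moon k; k + 0.5 gives mean full moon."""
    return 2451550.09766 + MEAN_SYNODIC_MONTH * k

# The "photo opportunity" flags mark full moons near perigee.  On average
# the full moon drifts relative to perigee by the difference of the two
# months (about 2 days per month), so close alignments recur on a cycle of
# roughly (synodic * anomalistic) / (synodic - anomalistic) ~ 412 days.
drift = MEAN_SYNODIC_MONTH - MEAN_ANOMALISTIC_MONTH
cycle = MEAN_SYNODIC_MONTH * MEAN_ANOMALISTIC_MONTH / drift
print(f"drift: {drift:.3f} days/month, realignment cycle: {cycle:.1f} days")
```

This is why the closest perigees of a year cluster near full or new Moon: the tidal (periodic) terms omitted here pull perigee earlier or later by up to a day and deepen it when the alignment is close.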
This table gives the time of all new and full Moons in the indicated year, as well as the last phase of the preceding year and the first phase of the next year.

Click on titles to order books on-line from

Meeus, Jean. Astronomical Algorithms. Richmond: Willmann-Bell, 1998. ISBN 0-943396-63-8. The essential reference for computational positional astronomy. The calculation of perigee and apogee time and distance is performed using the algorithm given in Chapter 48.

Meeus, Jean. Astronomical Formulæ for Calculators, Fourth Edition. Richmond: Willmann-Bell, 1988. ISBN 0-943396-22-0. This book, largely superseded by the more precise algorithms given in Astronomical Algorithms, remains valuable when program size and speed are more important than extreme precision. The date and time of the phases of the Moon are calculated using the method given in Chapter 32, and are accurate within 2 minutes, more than adequate for our purposes here. The more elaborate method in Chapter 47 of Astronomical Algorithms reduces the maximum error to 17.4 seconds (and mean error to less than 4 seconds), but would substantially increase the size and download time for this page, and the calculation time for each update.

Chapront-Touzé, Michelle and Jean Chapront. Lunar Tables and Programs from 4000 B.C. to A.D. 8000. Richmond: Willmann-Bell, 1991. ISBN 0-943396-33-6. If you need more precise calculation of the Moon's position than given in the references above, you're probably going to end up here. This book presents the ELP 2000-85 theory which, while less accurate than ELP 2000-82, has been tested for stability over a much longer time span. ELP 2000-85 generates predictions of lunar longitude accurate to 0.0004 degrees for the years 1900 through 2100, and 0.0054 degrees for the period 1500 through 2500.

Chapront-Touzé, Michelle and Jean Chapront. Lunar solution ELP 2000-82B. This is the most precise semi-analytical theory of the Moon's motion for observations near the present epoch. Machine-readable files for all of the tables and a sample FORTRAN program which uses them to compute lunar ephemerides may be obtained from the Astronomical Data Center at the NASA Goddard Space Flight Center by FTP across the Internet, or on CD-ROM, along with a wide variety of other astronomical catalogues and tables. This material is intended for experts in positional astronomy and computation. If you can't figure it out, don't ask me for help.

by John Walker
May 5, 1997

This document is in the public domain.
{"url":"http://fourmilab.ch/earthview/pacalc.html","timestamp":"2014-04-18T23:15:05Z","content_type":null,"content_length":"9417","record_id":"<urn:uuid:80b0f412-7ee1-4289-95be-7a79f1b83ffb>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00557-ip-10-147-4-33.ec2.internal.warc.gz"}
FOM: Recursion theory question
Harvey Friedman friedman at math.ohio-state.edu
Sat Feb 16 16:58:14 EST 2002

The following is well known.

THEOREM. "The set of all n such that there exists e < n with phi_e(0) = n" is of Turing degree 0'.

Normally, one easily proves that this is effectively simple, and quotes Donald Martin's theorem that every effectively simple set is of Turing degree 0', whose usual proof is to use Donald Martin's theorem about DNR. However, there is a simple direct proof of this Theorem that does not involve this interesting machinery. This is also probably well known.

But now what about

CONJECTURE. "The set of all n such that there exists e < n such that n is the largest element of W_e" is of Turing degree 0'.

Is this well known, or at least known? If not, then can you prove this? I am confident that an expert in recursion theory can do this.

More information about the FOM mailing list
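In standard notation — with $\varphi_e$ the $e$-th partial computable function and $W_e = \operatorname{dom}\varphi_e$ its domain, the usual conventions the post assumes — the theorem and conjecture above concern the sets

```latex
\[
  A \;=\; \{\, n : (\exists e < n)\;\ \varphi_e(0)\!\downarrow\, = n \,\},
  \qquad
  B \;=\; \{\, n : (\exists e < n)\;\ n = \max W_e \,\}.
\]
```

The theorem asserts that $A$ has Turing degree $\mathbf{0}'$; the conjecture asserts the same for $B$. (Note that $n \in B$ requires $W_e$ to have a largest element, i.e., to be finite and nonempty.)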
{"url":"http://www.cs.nyu.edu/pipermail/fom/2002-February/005289.html","timestamp":"2014-04-17T04:22:39Z","content_type":null,"content_length":"3048","record_id":"<urn:uuid:9ab22c0c-2d85-446a-88ca-b35bf8a98adc>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00025-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Forum Discussions

Views expressed in these public forums are not endorsed by Drexel University or The Math Forum.

Topic: [ap-calculus] Re: From the Moderator + AP versus non-AP calculus + An AP myth
Replies: 1   Last Post: Nov 27, 2004 4:28 PM

[ap-calculus] Re: From the Moderator + AP versus non-AP calculus + An AP myth
Posted: Nov 27, 2004 4:28 PM

Hello all,

There have been a variety of comments on the above three subjects over the past few days. Here are mine:

1: The article quoted in the Moderator's message should be viewed in light of the data at http://measuringup.highereducation.org/2000/. For example, in 2002 the five-year completion rate for college students in Texas was 41%; this should be compared with the rate on the chart in the article. Go to the web page to see the various qualifiers listed, qualifiers which are not included in the article. I have not read the book on which the article is based, so I cannot in fairness judge the quality of its statistics. However, the numbers in the Post article in and of themselves have no statistical value; they are just a table of numbers. I would be hesitant to draw any conclusions from them.

2: If calculus is going to be offered in high school, it should be AP calculus. A course in STATISTICS (Freedman, Aliaga, ...) or FINITE MATHEMATICS (COMAP, Hathaway, ...) would be a much better choice for both the academic and civic futures of the students. In particular, a course of either type would require students to set up models, make judgments, and draw conclusions. In other words, they would have to think about what they are doing. In my experience, students who take AP calculus, but do not get a 4 or a 5 on the AP test, are very similar to students who do take a non-AP calculus class in high school; too many of them think they know calculus. Their performance on their first college calculus examination is usually a shock and, hopefully, a wake-up call. As for the 4's and 5's, their performance in more advanced courses, again in my experience, ranges from very good to very bad. I realize that these comments run contrary to many of the submissions to this list but, again, why are schools reexamining the way they award AP calculus credit?

3: Students who get college credit for two or three AP classes typically are not going to save any money. Most institutions of higher learning charge "full-time" students the same tuition whenever they take a full course load, which may range from 12 to 18 hours per term. On the other hand, such a student can shave a semester off of their undergraduate work through the appropriate use of summer classes, which usually cost less, per course, than full-time tuition during the regular academic year. Of course, the structure of their major courses has to be amenable to this approach. Similar comments apply to students who get credit for six or more AP courses except, again their major requirements willing, they can get out a year earlier with the help of summer courses.

Richard J. Maher
Mathematics and Statistics
Loyola University Chicago
6525 N. Sheridan Rd.
Chicago, Illinois 60626

You are subscribed to ap-calculus as: archive@mathforum.org
To unsubscribe send a blank email to
To update your preferences, search the archives or post messages online, visit http://lyris.collegeboard.com/cgi-bin/lyris.pl?site=collegeboard&enter=ap-calculus
If you need help using Lyris web interface go to: http://www.lyris.com/lm_help/4.0/WebInterfaceforUsers.html

Visit AP Central(tm) - The online resource for teachers, schools, colleges, and education professionals -- http://apcentral.collegeboard.com

The College Board
45 Columbus Avenue
New York, NY 10023-6992
{"url":"http://mathforum.org/kb/thread.jspa?threadID=1107896&messageID=3640212","timestamp":"2014-04-21T04:49:56Z","content_type":null,"content_length":"18412","record_id":"<urn:uuid:05df1d7b-6cb0-4ef6-aa43-8371f32c86b8>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00305-ip-10-147-4-33.ec2.internal.warc.gz"}
series - converge or diverge

November 10th 2009, 02:56 AM   #1
Junior Member, Mar 2008

series - converge or diverge

Hi all,
I am trying to determine if the following series converges or diverges:

$\sum_{n=1}^{\infty}\frac{\sin^2(n)}{n^2}$

Not sure if what i did is correct:

-1 <= sin(n) <= 1
1 <= sin^2(n) <= 1
1/n^2 <= [sin^2(n)]/n^2 <= 1/n^2

Note: sin^2(n) = [sin(n)]^2

I then compared the series to 1/n^2 and determined that the series 1/n^2 is a p-series, where p = 2 > 1, and so it converges. Thus, the series

$\sum_{n=1}^{\infty}\frac{\sin^2(n)}{n^2}$

converges. Could someone tell me if i am doing this correctly?
Thanks in advance,

November 10th 2009, 02:59 AM   #2
MHF Contributor, Apr 2005

Quote:
  1 <= sin^2(n) <= 1

You mean -1 <= sin^2(n) <= 1, but since the comparison test uses absolute values that doesn't matter.

Quote:
  1/n^2 <= [sin^2(n)]/n^2 <= 1/n^2
  Note: sin^2(n) = [sin(n)]^2
  I then compared the series to 1/n^2 and determined that the series 1/n^2 is a p-series, where p = 2 > 1, and so it converges. Thus, the series converges. Could someone tell me if i am doing this correctly? Thanks in advance,

Except for the unimportant sign error, yes.
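An aside beyond the original thread: the comparison test settles convergence, and this particular limit even has a closed form. Writing $\sin^2 n = (1-\cos 2n)/2$ and using the standard Fourier series $\sum_{n\ge 1}\cos(n\theta)/n^2 = \theta^2/4 - \pi\theta/2 + \pi^2/6$ for $0 \le \theta \le 2\pi$ at $\theta = 2$ gives $\sum_{n\ge 1}\sin^2(n)/n^2 = (\pi-1)/2 \approx 1.0708$. A quick numerical check in Python:

```python
import math

def partial_sum(N):
    """Partial sum of sum_{n>=1} sin^2(n)/n^2 up to n = N."""
    return sum(math.sin(n) ** 2 / n ** 2 for n in range(1, N + 1))

# Tail bound: sum_{n>N} sin^2(n)/n^2 <= sum_{n>N} 1/n^2 < 1/N, so
# N = 100_000 determines the limit to within about 1e-5.
limit = (math.pi - 1) / 2
print(partial_sum(100_000), "vs closed form", limit)
```

The tail bound in the comment is exactly the comparison used in the thread, so the numerics double as a sanity check on that argument.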
{"url":"http://mathhelpforum.com/calculus/113625-series-converge-diverge.html","timestamp":"2014-04-18T21:52:50Z","content_type":null,"content_length":"35080","record_id":"<urn:uuid:ca31aea8-9390-4f1f-a232-f309f06ec2eb>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00252-ip-10-147-4-33.ec2.internal.warc.gz"}
Paterson, NJ ACT Tutor

Find a Paterson, NJ ACT Tutor

...I have also assisted them in preparing for such exams as the NY State Regents exam and the SATs. I have tutored dozens of students to help them improve in the math section of the SAT exam. To accomplish this, I review test taking strategies, as well as the basic math rules that help them solve the required problems.
8 Subjects: including ACT Math, geometry, algebra 1, algebra 2

...As for my tutoring subjects, I am very confident in my ability to tutor: biology, math, INCLUDING PSAT and SAT Math, and chemistry. I have taken intro to upper level chemistry courses, general biology, and up to calculus II in math. I have always taken pride in my ability to succeed in my educa...
14 Subjects: including ACT Math, chemistry, biology, algebra 1

...I enjoy helping students succeed in academics. There is no better feeling than seeing that light bulb go off in a student's head. I graduated from Immaculate Heart Academy and went on to continue my educational and softball career at Manhattan College.
28 Subjects: including ACT Math, English, reading, geometry

...I favor a dual approach, focused on both understanding concepts and going through practice problems. Let me know what concepts you're struggling with before our session, so I can streamline the session as much as possible! In my free time, I like to play with my pet chickens, play Minecraft, code up websites, and write sci-fi creative stories.
26 Subjects: including ACT Math, English, physics, calculus

Hi parents and students, My name is Natalie and I am a forthcoming high school mathematics teacher. I graduated from a NYC specialized high school and I am currently studying at New York University, majoring in Mathematics Secondary Education. I have been a volunteer math tutor for the last 5 years, and have grown to work quickly and effectively on any mathematics subject.
19 Subjects: including ACT Math, calculus, geometry, biology
{"url":"http://www.purplemath.com/Paterson_NJ_ACT_tutors.php","timestamp":"2014-04-19T02:47:52Z","content_type":null,"content_length":"23891","record_id":"<urn:uuid:048331db-690d-4eca-8131-fa62b5103c04>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00138-ip-10-147-4-33.ec2.internal.warc.gz"}