content
stringlengths
86
994k
meta
stringlengths
288
619
discretization of wave equation ("technical") October 26th 2010, 07:21 AM #1 discretization of wave equation ("technical") I have a question regarding the discretization of the wave equation. Consider the wave equation in one dimension for a function $u(x,t)$ $\frac{d^2u}{dt^2}=\alpha^2 \frac{d^2u}{dx^2}$ with the discretization ( $\Delta t=1$ and $\Delta x=1$) $u_i(t+1) = \alpha^2 [u_{i+1}(t)+u_{i-1}(t)] + 2u_{i}(t)[1-\alpha^2]-u_i(t-1)$ Now the "technical" question. It starts by adding, to the right hand side of the discrete version, a function $v_i(t-1)$: $u_i(t+1) = \alpha^2 [u_{i+1}(t)+u_{i-1}(t)] + 2u_{i}(t)[1-\alpha^2]-u_i(t-1)\mathbf{+v_i(t-1)}$ Such that when I choose $v_i(t-1) \equiv u_i(t-1)$ I cancel the terms with time dependence $t-1$. Doing that modifies the discretization of the 2nd time derivative, and the resulting discrete equation is not anymore a "genuine" wave equation. But then, how would the differential form of the last equation look like? Any help would be highly appreciated!! Thanxxxxxxxxx! Follow Math Help Forum on Facebook and Google+
{"url":"http://mathhelpforum.com/differential-equations/161038-discretization-wave-equation-technical.html","timestamp":"2014-04-18T22:05:55Z","content_type":null,"content_length":"32722","record_id":"<urn:uuid:ac94e7fe-6371-4fa8-a81c-faede7316075>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00303-ip-10-147-4-33.ec2.internal.warc.gz"}
Pelham Manor, NY Calculus Tutor Find a Pelham Manor, NY Calculus Tutor ...The TAKS exam is very similar to the upper level ISEE exam. It too consists of verbal and mathematical sections. The math sections also include information about learning sequences. 15 Subjects: including calculus, chemistry, physics, geometry ...Allow me to use the experience that I have gained by teaching and tutoring hundreds of precalculus students to help you gain further knowledge of such basic functions as the absolute value, polynomial, rational, exponential, logarithmic, trigonometric and inverse trigonometric functions. My prec... 21 Subjects: including calculus, physics, statistics, geometry ...I finished my degree at Princeton in 2006 majoring in Politics specializing in Political Theory and American Politics so I'm very well equipped to tutor social studies and history along with related fields. I also took many classes in English at college so I can work with students in that area t... 40 Subjects: including calculus, chemistry, English, reading ...I have 3 years of experience tutoring organic chemistry students at Hunter, Columbia, and Stonybrook. I've developed methods of organizing reagents and mechanisms by using study guides and worksheets customized to suit college curricula. I've also devised study strategies to maximize study time and to reduce frustrations. 24 Subjects: including calculus, chemistry, physics, biology ...I have experience in tutoring just never professionally. I have lead many study sessions for fellow students in many of my classes, all of which received higher test grades based on the study sessions we had. I enjoy teaching others and have been told that the way I explain material and concepts are very understandable. 11 Subjects: including calculus, physics, geometry, accounting Related Pelham Manor, NY Tutors Pelham Manor, NY Accounting Tutors Pelham Manor, NY ACT Tutors Pelham Manor, NY Algebra Tutors Pelham Manor, NY Algebra 2 Tutors Pelham Manor, NY Calculus Tutors Pelham Manor, NY Geometry Tutors Pelham Manor, NY Math Tutors Pelham Manor, NY Prealgebra Tutors Pelham Manor, NY Precalculus Tutors Pelham Manor, NY SAT Tutors Pelham Manor, NY SAT Math Tutors Pelham Manor, NY Science Tutors Pelham Manor, NY Statistics Tutors Pelham Manor, NY Trigonometry Tutors Nearby Cities With calculus Tutor Bronxville calculus Tutors Englewood Cliffs calculus Tutors Great Neck Estates, NY calculus Tutors Hillside, NY calculus Tutors Kensington, NY calculus Tutors Kings Point, NY calculus Tutors Larchmont calculus Tutors Mamaroneck calculus Tutors Mount Vernon, NY calculus Tutors Mt Vernon, NY calculus Tutors New Rochelle calculus Tutors Pelham, NY calculus Tutors Rye Brook, NY calculus Tutors Sands Point, NY calculus Tutors Tuckahoe, NY calculus Tutors
{"url":"http://www.purplemath.com/Pelham_Manor_NY_calculus_tutors.php","timestamp":"2014-04-21T04:43:13Z","content_type":null,"content_length":"24281","record_id":"<urn:uuid:101d0bbe-5bbe-4904-928b-394b48a367e6>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00332-ip-10-147-4-33.ec2.internal.warc.gz"}
rotation matrices September 25th 2011, 06:24 AM rotation matrices Hi, I am trying to find some rotation matrices of $R^3$. So for angle $\frac{\pi}{2}$ and axis $e_2$ I think the rotation matrix is $R_{e_2} (\frac{\pi}{2}) = \left(\begin{matrix} cos(\frac{\pi}{2}) &0 &sin(\frac{\pi}{2})\\0 &1 &0\\-sin(\frac{\pi}{2}) &0 &cos(\frac{\pi}{2})\end{matrix}\right) = \left(\begin{matrix} 0 &0 &1\\0 &1 &0\\-1 &0 &0\end{matrix}\right)$ I just used the given rotation matrix in $R^3$ from the text with no real working out needed. Is this correct? Need help on this one: For the angle $\frac{\pi}{6}$ and axis containing the vector $(1,1,1)^t$. Not sure how to use the $(1,1,1)^t$ in this question. Thanks for any help. September 27th 2011, 06:30 PM Re: rotation matrices I think I have misunderstood what the axis e2 is and I think my working out is wrong. Any help would be nice. September 29th 2011, 03:38 AM Re: rotation matrices Anyone have any ideas as to how I should interpret the axis $e_2$ and use $(1,1,1)^t$? I have thought about this question but still can't get it. And sorry for making another post, wanted to edit a previous post but I couldn't. September 29th 2011, 07:33 AM Re: rotation matrices Hi, I am trying to find some rotation matrices of $R^3$. So for angle $\frac{\pi}{2}$ and axis $e_2$ I think the rotation matrix is $R_{e_2} (\frac{\pi}{2}) = \left(\begin{matrix} cos(\frac{\pi}{2}) &0 &sin(\frac{\pi}{2})\\0 &1 &0\\-sin(\frac{\pi}{2}) &0 &cos(\frac{\pi}{2})\end{matrix}\right) = \left(\begin{matrix} 0 &0 &1\\0 &1 &0\\-1 &0 &0\end{matrix}\right)$ I just used the given rotation matrix in $R^3$ from the text with no real working out needed. Is this correct? Need help on this one: For the angle $\frac{\pi}{6}$ and axis containing the vector $(1,1,1)^t$. Not sure how to use the $(1,1,1)^t$ in this question. Thanks for any help. This is a much more complicated question, and requires more advanced treatment. The best information I can give you is to go to this website and use that procedure. As you can see, it's a multi-step procedure.
{"url":"http://mathhelpforum.com/advanced-algebra/188778-rotation-matrices-print.html","timestamp":"2014-04-18T15:28:32Z","content_type":null,"content_length":"10276","record_id":"<urn:uuid:d4b42942-3db3-4d54-ada4-7fcdb0a9046e>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00361-ip-10-147-4-33.ec2.internal.warc.gz"}
Independent Subspace Analysis Is Unique, Given Irreducibility Gutch, Harold W. and Theis, Fabian J. (2007) Independent Subspace Analysis Is Unique, Given Irreducibility. In: Davies, Mike E. and James, Christopher J., (eds.) Independent Component Analysis and Signal Separation. 7th International Conference, ICA 2007, London, UK, September 9-12, 2007. Proceedings. Lecture notes in computer science, 4666. Springer, Berlin, pp. 49-56. ISBN 978-3-540-74493-1 (Printausgabe), 978-3-540-74494-8 (e-book). Full text not available from this repository. Other URL: http://www.springerlink.com/content/rl17454m1135n676/fulltext.pdf Independent Subspace Analysis (ISA) is a generalization of ICA. It tries to find a basis in which a given random vector can be decomposed into groups of mutually independent random vectors. Since the first introduction of ISA, various algorithms to solve this problem have been introduced, however a general proof of the uniqueness of ISA decompositions remained an open question. In this ... Export bibliographical data
{"url":"http://epub.uni-regensburg.de/16856/","timestamp":"2014-04-17T01:01:45Z","content_type":null,"content_length":"30158","record_id":"<urn:uuid:bc52c0a2-6de5-49c7-88c9-e8e8e6460f79>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00355-ip-10-147-4-33.ec2.internal.warc.gz"}
infinity, in mathematics, that which is not finite; it is often indicated by the symbol ∞art/infinity.gif. A sequence of numbers, a [1], a [2], a [3],…, is said to "approach infinity" if the numbers eventually become arbitrarily large, i.e., are larger than some number, N, that may be chosen at will to be a million, a billion, or any other large number (see limit). The term infinity is used in a somewhat different sense to refer to a collection of objects that does not contain a finite number of objects. For example, there are infinitely many points on a line, and Euclid demonstrated that there are infinitely many prime numbers. The German mathematician Georg Cantor showed that there are different orders of infinity, the infinity of points on a line being of a greater order than that of prime numbers (see transfinite number). In geometry one may define a point at infinity, or ideal point, as the point of intersection of two parallel lines, and similarly the line at infinity is the locus of all such points; if homogeneous coordinates ( x [1], x [2], x [3]) are used, the line at infinity is the locus of all points ( x [1], x [2], 0), where x [1] and x [2] are not both zero. (Homogeneous coordinates are related to Cartesian coordinates by x = x [1]/ x [3] and y = x [2]/ x [3].) See A. D. Aczel, The Mystery of the Aleph (2000); D. F. Wallace, Everything and More (2003). The Columbia Electronic Encyclopedia, 6th ed. Copyright © 2012, Columbia University Press. All rights reserved. More on infinity from Fact Monster: See more Encyclopedia articles on: Mathematics
{"url":"http://www.factmonster.com/encyclopedia/science/infinity.html","timestamp":"2014-04-16T05:19:41Z","content_type":null,"content_length":"21732","record_id":"<urn:uuid:8fd03cfa-1a9e-4e2c-a49b-35e88943c981>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00541-ip-10-147-4-33.ec2.internal.warc.gz"}
line equation February 18th 2007, 11:13 AM #1 Nov 2006 line equation cannot work this thing out ... 6400 bricks have been delivered to a site by the first day of a brick laying contract. subsequently 1600 bricks are delivered daily. brick laying commences on the first day and proceeds regularly, the bricklayers using 2400 bricks per day. write down two equations of the form of y=mx+c the first in respect of brick delivery, the second of bricklaying. use these equations to find out how many days after the commencement of the contract the brick layers will have been run out of bricks. cannot work this thing out ... 6400 bricks have been delivered to a site by the first day of a brick laying contract. subsequently 1600 bricks are delivered daily. brick laying commences on the first day and proceeds regularly, the bricklayers using 2400 bricks per day. write down two equations of the form of y=mx+c the first in respect of brick delivery, the second of bricklaying. use these equations to find out how many days after the commencement of the contract the brick layers will have been run out of bricks. This has been answered here already, where the answer appears to be at the end of day 6. February 18th 2007, 12:21 PM #2 Grand Panjandrum Nov 2005
{"url":"http://mathhelpforum.com/pre-calculus/11708-line-equation.html","timestamp":"2014-04-16T16:07:00Z","content_type":null,"content_length":"33379","record_id":"<urn:uuid:b3d41a7f-74a2-48ac-ba86-151347cd26f4>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00600-ip-10-147-4-33.ec2.internal.warc.gz"}
Ted Bunn’s Blog Stephen Hawking says that we shouldn’t try to contact aliens, lest they come and attack us for our resources: Hawking believes that contact with such a species could be devastating for humanity. He suggests that aliens might simply raid Earth for its resources and then move on: "We only have to look at ourselves to see how intelligent life might develop into something we wouldn't want to meet. I imagine they might exist in massive ships, having used up all the resources from their home planet. Such advanced aliens would perhaps become nomads, looking to conquer and colonise whatever planets they can reach." He concludes that trying to make contact with alien races is "a little too risky". He said: "If aliens ever visit us, I think the outcome would be much as when Christopher Columbus first landed in America, which didn't turn out very well for the Native Americans." I can’t get too worried about this. It seems to me that any alien civilization with the technology to get here and attack us would also have the technology to search telescopically for planets with useful resources. We’ll probably be able to do a decent job on that ourselves within the next decade or two. To be specific, we’ll be able to do spectroscopy on the atmospheres of lots of planets, which would give us a good idea of which ones to go to and mine – if only we could get there. For anyone who wants to find us, get to us, and exploit us, finding will be by far the easiest step, so this doesn’t strike me as a good argument for hiding. Of course, there may be other reasons for not broadcasting our presence to aliens, the most obvious being that it’s a poor use of resources. It all comes down to a cost-benefit analysis: Hawking doesn’t want to do it because of the potential cost (alien attack); I’m more concerned about the (overwhelmingly likely) lack of benefit.
{"url":"http://blog.richmond.edu/physicsbunn/2010/04/","timestamp":"2014-04-16T13:03:29Z","content_type":null,"content_length":"48667","record_id":"<urn:uuid:22759800-3824-4ab4-b234-d991bd91b9f7>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00118-ip-10-147-4-33.ec2.internal.warc.gz"}
Herding Cats: Quote of the Day - More Statistics for Decision Making The notion that we can make decisions about the future without somehow knowing the possible behaviours we will encounter in this future is naive at best and is doomed to fail at worse. Here's some advice from others on this topic. • The only certainty is uncertainty - Pliny the Elder (Gaius Plinius Secundus) - if we some how think the future will emerge and we'll be able to react to it, then the simple finance concept of a nonrecoverable sunk cost will be forced on our projects. • The Public ... demands certainties ... But there are not certainties - Henry Louis Mencken - when we encounter the "Dilbert Boss" and accept that as the norm, then it's time to look for new work, not accept that as our lot in life. • If chance is the antithesis of law, then we need to discover the laws of chance - C. R. Rao (mentor to Ramanujan) - all numbers in project work are random variables. Without knowing the underlying statistical process and the probabilistic outcomes, no credible forecast for future performance is possible. Deciding anything in the absence of probabilistic confidence is simply not • There is only one thing about which I am certain, and this is that there is very little about which one can be certain - W. Sommerset Maugham - uncertainty comes in two types reducible and irreducible. We can only have confidence in the future when we deal with both of those on the project. We can also only have confidence in success for work in the short term - withing our planning horizon. Beyond that uncertainty increases and lowers the probability of success for any decision making. • Obviously, a man's judgement cannot be better than the information on which he bases it - Arthur Hays Sulzberger - to refuse to look into the future, to refuse to make an estimate of the possible outcomes, is to refuse to acknowledge your obligation as a project manager to be the steward of your customers money. • Lest men suspect your tale untrue. Keep probability in view - John Gay, 1688-1732, English Poet - all numbers in the project domain are random numbers and these random numbers are drawn from a probability distribution. Anyone seeking certainty is fooling themselves. So when Dilert bosses are quoted, it may be common, but that person is simply uninformed about the processes driving a project. On the other side anyone claiming the future cannot be forecast is equally uninformed. Both sides are wrong and both sides fail to understand the solution is to use the probability and statistics skills they should have learned in high school to make decisions. • A reasonable probability is the only certainty - Edger Watson Howe, Country Town Savings (1911) - like the quote above all project work is probabilistic. Learn to think, act, and make decisions based on probabilistic thinking. • Our wisdon and deliberation for the most part follow the lead of chance - Michel Eyquem de Montaigne, Essays (1580) - chance drives all decision making. Change is not an unknown random outcome. Chance can be a forecast, drawn from an underlying probability generated by a statistical process. Just learn to do this through probabilty and statistical decision making. • All uncertainty is fruitfull ... so long as it is accompanied by the wish to understand - Antinio Machado, Juan De Mairena (1943) - if you really want to understand the behaviours of dynamic systems, study probability and statistics. • Life is the art of drawing sufficient conclusions from insufficient premises - Samuel Butler, Lord, What is Man? Note Books, (1912) - those who naively assert we cannot know something about the future need to study further how to make decisions in the presence of uncertainty. What Does All This Tell Us? When we hear words about our inability to predict the future, make a forecast of something in the future, or make a decision in the presence of uncertainty, we must first assume that person has failed to understand the foundtaions of probability and statistics and how to apply these principles to making decisions. Next we might assume that person doesn't actually want to know how to do If the former is true, start reading Books for Cost and Schedule Forecasting Recent Comments
{"url":"http://herdingcats.typepad.com/my_weblog/2013/09/quote-of-the-day-more-statistics-for-decision-making.html","timestamp":"2014-04-21T15:45:15Z","content_type":null,"content_length":"64862","record_id":"<urn:uuid:ae7bf11a-cdf1-4aed-97c7-20e7ac460a4b>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00552-ip-10-147-4-33.ec2.internal.warc.gz"}
Burtonsville Geometry Tutor ...I teach students to: (1) consider the audience; (2)develop a theme and supporting ideas; (3) enrich the content with memorable stories, facts, and other materials that engage the audience; and (4) practice a comfortable delivery. I have taught grades 4-12. Two things help a child feel confident and prepared to do his or her best on the HSPT and future standardized tests. 32 Subjects: including geometry, reading, English, chemistry ...Currently, I am working toward my Ph.D. in Chemical Education at the Catholic University of America in Washington, D.C.. This doctoral degree involves the in-depth study of learning and teaching in the chemistry classroom. It is a combination of education and chemistry research, and it is alrea... 5 Subjects: including geometry, chemistry, algebra 1, algebra 2 I am currently teaching at a High School and tutoring at a Community College in Maryland. I have had more than 30 years of teaching mathematics and other science courses. As an instructor, I take pride in using the most effective instructional approach that suits the students' learning style and available resources. 3 Subjects: including geometry, algebra 1, trigonometry ...I minored in economics and went on to study it further in graduate school. My graduate work was completed at the University of Maryland College Park, where I specialized in international development and quantitative analysis. I currently work as a professional economist. 16 Subjects: including geometry, calculus, econometrics, ACT Math ...I enjoy working with teenagers and I also enjoy science and math. I teach thinking skills, organizational skills, and help students learn to ask for what they need to succeed. I have an undergraduate degree in physics from Carnegie Mellon University and a masters in molecular genetics and biochemistry from the University of Pittsburgh. 31 Subjects: including geometry, chemistry, reading, physics
{"url":"http://www.purplemath.com/Burtonsville_Geometry_tutors.php","timestamp":"2014-04-19T00:00:53Z","content_type":null,"content_length":"24180","record_id":"<urn:uuid:af4b231e-7391-4a23-abc2-bbf4135756dd>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00064-ip-10-147-4-33.ec2.internal.warc.gz"}
From Encyclopedia of Mathematics A formula of the language of propositional calculus taking the truth value "true" independently of the truth values "true" or "false" taken by its propositional variables. Examples: In general one can check whether a given propositional formula is a tautology by simply examining the finite set of all combinations of values of its propositional variables. [a1] Yu.I. Manin, "A course in mathematical logic" , Springer (1977) pp. 31, 54 (Translated from Russian) How to Cite This Entry: Tautology. V.N. Grishin (originator), Encyclopedia of Mathematics. URL: http://www.encyclopediaofmath.org/index.php?title=Tautology&oldid=16024 This text originally appeared in Encyclopedia of Mathematics - ISBN 1402006098
{"url":"http://www.encyclopediaofmath.org/index.php/Tautology","timestamp":"2014-04-19T09:38:04Z","content_type":null,"content_length":"15758","record_id":"<urn:uuid:3b1c3f05-9fa2-4883-9f92-6c0b6b3a1cc3>","cc-path":"CC-MAIN-2014-15/segments/1398223204388.12/warc/CC-MAIN-20140423032004-00203-ip-10-147-4-33.ec2.internal.warc.gz"}
The Economics of Pest Control The scientific study of pests and pest control strategies is often called economic entomology in recognition of the financial impact insects have on industry, agriculture, and human society in general. To be sure, economically important insects are not always pests; we have already stressed their value as pollinators, natural enemies, producers of silk, honey, etc. But wherever pest populations develop, their impact always results in monetary loss, either directly or indirectly. In most cases, losses from insect pests are directly proportional to the density of the pest population -- high density increases the extent or severity of damage and makes the need for control more critical. Injury vs. Damage Many people use the terms "damage" and "injury" interchangeably, but entomologists usually make an important distinction between them. • Injury is defined as the physical harm or destruction to a valued commodity caused by the presence or activities of a pest (e.g., consuming leaves, tunnelling in wood, feeding on blood, etc.). • Damage is the monetary value lost to the commodity as a result of injury by the pest (e.g., spoilage, reduction in yield, loss of quality, etc.). Any level of pest infestation causes injury, but not all levels of injury cause damage. Plants often tolerate small injuries with no apparent damage and sometimes even overcompensate by channelling more energy or resources into growth terminals or fruiting structures. A low level of injury may not cause enough damage to justify the time or expense of pest control operations. These sub-economic losses are simply part of the cost of doing business. But at some point in the growth phase of a pest population it reaches a point where it begins to cause enough damage to justify the time and expense of control measures. But how does one know when this point is reached? (How many boll weevils, for example, does it take to make a cotton farmer hook up his sprayer?) To a great extent, the answer depends on two fundamental pieces of economic • A. How much financial loss is the pest causing? and • B. How much will it cost to control the pest? A pest outbreak, by definition, occurs whenever the value of "A" is greater than the value of "B". Actual losses are relatively easy to measure in agricultural or industrial settings because commodity values are well established by commerce and trade. But losses from household insects, vectors of human disease, and nuisance pests can be much harder to quantify. In these cases, estimates of damage are often based on potential loss (from disease, contamination, etc.) rather than on actual or expected loss. Economic Injury Level The break-even point, where "A"="B", is known as the economic injury level. This is the population density at which the cost to control the pest equals the amount of damage it inflicts (actual or potential). Below the economic injury level, it is not cost-effective to control the pest population because the cost of treatment (labor plus materials) would exceed the amount of damage. Above the economic injury level, however, the cost of control is compensated by an equal or greater reduction in damage by the pest. The economic injury level (EIL) is often expressed mathematically by the formula: • "C" is the unit cost of controlling the pest (e.g., $20/acre) • "N" is the number of pests injuring the commodity unit (e.g., 800/acre) • "V" is the unit value of the commodity (e.g., $500/acre) • "I" is the percentage of the commodity unit injured (e.g., 10% loss) For the example given above, the economic injury level would equal 320 insects per acre: The economic injury level is usually expressed as a number of insects per unit area or per sampling unit. Occasionally, when the insects themselves are difficult to count or detect, the economic threshold may be based on a measurement of injury (e.g., leaf area consumed or number of dead plants). It is important to recognize that the economic injury level is a function of both the cost of pest control and the value of a commodity or product. Some commodities may be worth so little that it is never worth saving them from insect injury (my wife feels this way about the books I store in my attic). The value of other commodities may be so great that any level of infestation is worthy of control (my wife feels this way about the food she stores in her cupboard). Because of its dependence on both cost and value, the economic injury level can be calculated only after establishing a value for the damaged commodity or product. In practice, this is a difficult task because different people have different values. Economic Thresholds The economic injury level is a useful concept because it quantifies the cost/benefit ratio that underlies all pest control decisions. In practice, however, it is not always necessary or desirable to wait until a population reaches the economic injury level before initiating control operations. Once it is determined that a population will reach outbreak status, prompt action can maximize the return on a control investment. Since there is usually a lag time between the implementation of a control strategy and its effect on the pest population, it is always desirable to begin control operations before the pest actually reaches the economic injury level. Consequently, entomologists define a point below the economic injury level at which a decision is made to treat or not treat. This decision point is called the economic threshold, or sometimes the action threshold. It is the decision point for action -- the pest density at which steps are first taken to ensure that a potential pest population never exceeds its economic injury level. The economic threshold, like the economic injury level, is usually expressed in units of insect density or in terms of an injury measurement. The economic threshold is always lower than the economic injury level in order to allow for sufficient time to enact control measures. Surveillance of Pest Populations Effective use of economic thresholds in the management of insect populations depends on accurate measurements of population density as well as reliable predictions of population growth trends. Since it is not practical to count all the flies in the barnyard or all the boll weevils in the cotton field, entomologists depend on sampling strategies to estimate density and distribution. Hundreds of sampling methods have been devised and entomologists continue to develop and refine their techniques. An "ideal" sampling strategy requires minimal effort and gives an accurate and reproducible measure of the density and/or distribution of an insect population. In practice, such "ideal" methods do not exist. Every technique is inherently biased in some way. One method may be better than another for a particular pest or situation, but no sampling process is totally random, objective, and repeatable. The most widely used techniques, such as sweep nets or bait traps, do not measure absolute density of pest populations, they are only relative measures (yardsticks, in a sense) that may be used as estimates of population density once they are properly "calibrated" through exerimentation and comparison with other sampling techniques. Sex pheromone traps, for example, may attract male peachtree borers from several miles downwind. Without compensating for such immigration, trap catch data would greatly exaggerate the size of a local moth population. Similarly, sweepnet samples of alfalfa weevils tend to underestimate the numbers of small larvae in a field (relative to adults and large larvae) because early instars hide within the plant's terminal growth and are not easily knocked out during sweeping. Analysis of Sampling Data Simple, descriptive statistics are essential for interpreting data collected in any replicated sampling scheme. Regardless of how data is gathered, whether as continuous measurements (e.g., leaf area consumed), in the form of numerical counts (e.g., number of beetles per plant), as ordinal ratings (e.g., on a scale from 1 to 10), or in binomial form (e.g., presence/absence), there is always some degree of uncertainty about its accuracy. Statisticians call this uncertainty "variance". It arises both from experimental error (inability to precisely replicate all conditions in each sample) and from the natural variability that is a characteristic of all biological systems (e.g., the number of leafhoppers collected in 25 sweeps at dawn may be quite different from a similar sample taken that evening in the same field). Good sampling strategies are designed to minimize variance in order to give the most reasonable "estimate" of population size. The mean, variance, and standard error are the calculations most commonly used to evaluate sampling results. The mean is simply an arithmetic average of data values. It is one of several ways to describe a range of numbers. The variance (sum of squared deviations from the mean divided by number of observations), and the standard error (square root of the variance divided by the mean) are measures of how far the other data values tend to stray from the mean. │Column A │Column B │ │ 51 │ 12 │ │ 49 │ 79 │ │ 51 │ 23 │ │ 48 │ 67 │ │ 50 │ 31 │ │ 49 │ 47 │ │ 52 │ 91 │ Statistical tools provide a way to find and measure the variance in many different types of data. Columns A and B (at right) have the same "average" values (50), but the variance of column A is obviously much lower than column B. If these numbers represent sample data collected from a single population, an estimate of population size based on the numbers in column B would be regarded with a great deal more skepticism than a similar estimate based on the numbers in column A. In general, larger numbers of samples provide more trustworthy estimates of population density. By knowing the amount of variance in sample data, it is possible to calculate a range of values, a confidence interval, that includes the upper and lower boundaries of our faith in the reliability of the samples. A 99% confidence interval means that the probability of data falling outside a given range of values is only 1 in 100. Confidence intervals can be set at any level of certainty, but in practice, most pest management decisions are based on 95-99% confidence levels. A lower confidence level is associated with an increased risk of uncertainty in the development or outcome of a pest outbreak. The farmer's decision to treat or not treat a pest population has to be made with as little risk as possible. If the pest population is larger than indicated by sample data, failure to treat could result in total destruction of a crop. On the other hand, if the population is smaller than indicated by sample data, money would be wasted by a decision to treat. Sequential Sampling Although it is fairly easy to sample for some insects, many pest management systems utilize sampling protocols that are fairly time consuming and labor intensive. Whenever large numbers of samples are needed to achieve an adequate level of confidence, it may be possible to use a sequential sampling system that saves time and effort by concentrating mostly on populations that are closest to the economic threshold. Sequential sampling systems are relatively new in pest management, but they are based on well-established rules for determining confidence intervals for sample data. Unlike regular sampling protocols that require a fixed number of replications (usually 10-100), sequential sampling systems are designed to evaluate the data at the end of each sampling step. The total number of samples is variable, depending upon whether the cumulative data falls inside or outside of predetermined confidence intervals. Relatively few samples would be needed to recognize that a population is very small (well below the economic threshold) or very large (well above the economic threshold). But a larger number of samples (higher confidence) would be needed to decide whether an intermediate population should be treated or not treated. In most sequential sampling systems, there are three different outcomes possible at the end of each sampling step: 1. If the cumulative total of pests exceeds an upper threshold value, then conclude that the population is large enough to warrant control actions. Stop sampling and prepare to enact control 2. If the cumulative total of pests is beneath a lower threshold value, then conclude that the population is small and warrants no control actions. Stop sampling (at least for awhile) and leave the population untreated. 3. If the cumulative total of pests is between the upper and lower threshold values, then no conclusion is possible yet. Sampling should continue until cumulative values reach the upper or lower
{"url":"http://www.cals.ncsu.edu/course/ent425/library/tutorials/applied_entomology/economics_pest_control.html","timestamp":"2014-04-20T23:45:02Z","content_type":null,"content_length":"32594","record_id":"<urn:uuid:de9ec2cb-88ea-4d80-bcd7-075e40c7c53e>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00326-ip-10-147-4-33.ec2.internal.warc.gz"}
Isotropic crystal and energy band The problem is that there are no isotropic crystals. Even cubic crystals are only symmetric with respect to four and threefold rotations about some special axes. However this is sufficient to render second order (but not higher order) tensors isotropic. Do you mean that using the isotropic approximation we in fact disregard the crystal?
{"url":"http://www.physicsforums.com/showthread.php?s=06ba088eaf7340c2c7091f47f6aacc65&p=4569662","timestamp":"2014-04-19T04:41:28Z","content_type":null,"content_length":"28423","record_id":"<urn:uuid:99fd8863-bf0b-4a43-a0cd-e5411e23c256>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00330-ip-10-147-4-33.ec2.internal.warc.gz"}
I am trying to figure out the most appropriate MPI calls for a certain portion of my code. I will describe the situation here: Each cell (i,j) of my array A is being updated by a calculation that depends on the values of 1 or 2 of the 4 possible neighbors A(i+1,j), A(i-1,j), A(i,j+1), and A(i,j-1). Say, for example, A(i,j)= A(i-1,j)*A(i,j-1). The thing is, the values of the neighbors A(i-1,j) and A(i,j-1) cannot be used until an auxiliary array B has been updated from 0 to 1. The values B(i-1,j) and B(i,j-1) are changed from 0 -> 1 after the values A(i-1,j) and A(i,j-1) have been communicated to the proc that contains cell (i,j), as cells (i-1,j) and (i,j-1) belong to different procs. Here is pseudocode for how I have the algorithm implemented (in fortran): do while (B(ii,jj,kk).eq.0) if (probe_for_message(i0,j0,k0,this_sc)) then end if end do The function 'probe_for_message' uses an 'MPI_IPROBE' to see if 'MPI_ANY_SOURCE' has a message for my current proc. If there is a message, the function returns a true logical and calls 'MPI_RECV', receiving (i0,j0,k0,this_sc) from the proc that has the message. This works! My concern is that I am probing repeatedly inside the while loop until I receive a message from a proc such that ii=i0, jj=j0, kk=k0. I could potentially call MPI_IPROBE many many times before this happens... and I'm worried that this is a messy way of doing this. Could I "break" the mpi probe call? Are there MPI routines that would allow me to accomplish the same thing in a more formal or safer way? Maybe a persistent communication or something? For very large computations with many procs, I am observing a hanging situation which I suspect may be due to this. I observe it when using openmpi-1.4.4, and the hanging seems to disappear if I use mvapich. Any suggestions/comments would be greatly appreciated. Thanks so much!
{"url":"http://www.open-mpi.org/community/lists/devel/att-11990/attachment","timestamp":"2014-04-19T23:21:19Z","content_type":null,"content_length":"3358","record_id":"<urn:uuid:08b7a486-2dc9-4ced-b852-df8f6ecf5fa3>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00287-ip-10-147-4-33.ec2.internal.warc.gz"}
Bronx ACT Tutor Find a Bronx ACT Tutor ...Sound interesting? I coach students in three core areas: 1) High school coursework: Mathematics, Physics, Biology 2) Standardized testing: SAT Mathematics (general, and subject tests); 3) Competitions: Science Fair and Model UN Send me a message describing your needs, and I'll do my best to r... 32 Subjects: including ACT Math, reading, calculus, physics ...Geometry is one of those subjects that people either love or hate. Most of the time when you mention geometry, people sigh and say "I hate proofs." To combat this, I bring logic puzzles and Sudoku problems to kids to show them the applications of the proofs they are learning. I do a lot of hand... 11 Subjects: including ACT Math, Spanish, algebra 2, geometry ...I have tutored junior high and high school students in Math, English, and Biology, and have aided several seniors in the college application process. I also help foreign graduate students perfect their grammar and delivery in writing. I believe in building confidence while teaching material. 25 Subjects: including ACT Math, English, reading, biology ...I have complete experience teaching any level of math from grades 1 to 12. A bit more information on my experience:I graduated from Manhattan College with a degree in High School Math Education. My first job soon after graduation was at a public high school in the country of England, UK. 15 Subjects: including ACT Math, Spanish, calculus, statistics ...In addition, I am in the process of enrolling in a teaching certification program with the state of NY so that I can take the LAST, ATS-W and CTS exams. I will also be taking further education classes to be able to teach in NYC schools. I have been tutoring and preparing students for High Schoo... 22 Subjects: including ACT Math, Spanish, chemistry, physics
{"url":"http://www.purplemath.com/bronx_act_tutors.php","timestamp":"2014-04-19T02:13:15Z","content_type":null,"content_length":"23492","record_id":"<urn:uuid:2f8eb730-6d53-4bee-9707-bc463db8ce8b>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00530-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Resources Note: I only link to pages that I use myself. If you want a page with dozens of math-related links, you could just do a Google search on "math"; I'm being selective to avoid overwhelming people with too many choices. But the fact that I haven't linked to a page doesn't mean I think badly of it. Some of the notes are web pages, usually accompanied by PDF (Adobe Acrobat) files for printing. Some of the older notes are in PostScript format; I plan to convert them to web pages with PDF as time Please read the PostScript help file before you download any of the notes in PostScript format. I appreciate hearing about mistakes, which I'll correct as soon as I can. Some of the notes are very incomplete; there may be a few worked-out examples, with little exposition. The Mathematica Notebooks require at least Mathematica 2.1. (At this point, I think I'm required to remind you that some of the links above take you out of Millersville University Web space, and that we aren't responsible for the content you reach by following those links. If you wind up in the Arabica Coffee Shop on Coventry Road in Cleveland Heights, Ohio, or get kidnapped by aliens in a space ship, it's not our fault.) Send comments about this page to: Bruce.Ikenaga@millersville.edu.
{"url":"http://www.millersville.edu/~bikenaga/math-resources.html","timestamp":"2014-04-21T12:16:33Z","content_type":null,"content_length":"7813","record_id":"<urn:uuid:7092afe2-dce0-4b4f-872a-1fd1a8ef13b0>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00431-ip-10-147-4-33.ec2.internal.warc.gz"}
another DE problem (well more of a parts prob) June 7th 2007, 10:45 AM #1 Junior Member Jun 2007 another DE problem (well more of a parts prob) Me agian, catch up on my DE HW...(150ish DE's in 4days Question: x^2(dy/dx) + x(x+2)y = e^x I have found roe (p) and now im trying to intergrate the right side....i come up with... Dx[2x(e^x)y] = 2 (intergal sign)(x^-1)(e^2x) ive tryed to use intergrating by parts....but it helps not as i get down to (above intregal) (intregal sign) (-x^(-2))x((1/2x)e^2x) and im back where i started.. The answer i need to get it to is....y= 1/2(x^-2)e^x+cx^-2(e^-x) I'd appreciate any help...also any hints to what i did wrong $x^2 y' + x(x + 2)y = e^x \implies y' + x^{ - 1} (x + 2)y = x^{ - 2} e^x$ Now find the integrating factor. i got to there....my intergrating factor came out to be....2x(e^x) I think the integrating factor should be $e^xx^2$ Show your process to find it. the biggest problem i get is when i go to intergrate the right side and use intergration by parts...i get u = x^-1 du= -x^-2 dv= e^2x V = 1/2 e^2x but that brings me back to x^-1(1/2 e^2x) (intragal sign) (1/2e^2x)(x^-1) which just gives me another parts intregal....almost the same one minus the constant multiplyer which wont really help anything not really sure if i just doing my math wrong or which (x+2)/x -> 1+2/x e^(intregal) 1+2/x -> x+ 2ln lxl -> e and ln cancel leaving 2xe^x $\exp \int {\frac{{x + 2}}<br /> {x}~dx} = e^{x + 2\ln \left| x \right|} = e^x \cdot e^{\ln x^2 } = e^x x^2$ oh duh.....sorry, i havent used log, ln and "e" in ages....forgot all the rules. know a good place to brezze back up on em? It's quite dangerous solving differential equations if you don't consider the basic properties. I've got to go now, but I hope someone could give you a 'refresh' June 7th 2007, 12:20 PM #2 June 7th 2007, 12:24 PM #3 Junior Member Jun 2007 June 7th 2007, 12:30 PM #4 June 7th 2007, 12:30 PM #5 Junior Member Jun 2007 June 7th 2007, 12:47 PM #6 Junior Member Jun 2007 June 7th 2007, 12:52 PM #7 June 7th 2007, 12:55 PM #8 Junior Member Jun 2007 June 7th 2007, 01:00 PM #9
{"url":"http://mathhelpforum.com/calculus/15731-another-de-problem-well-more-parts-prob.html","timestamp":"2014-04-18T07:11:11Z","content_type":null,"content_length":"54882","record_id":"<urn:uuid:6a603c66-8f0a-456a-a65a-2deb3eea299c>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00362-ip-10-147-4-33.ec2.internal.warc.gz"}
When Should Potentially False Research Findings Be Considered Acceptable? Ioannidis estimated that most published research findings are false [1], but he did not indicate when, if at all, potentially false research results may be considered as acceptable to society. We combined our two previously published models [2,3] to calculate the probability above which research findings may become acceptable. A new model indicates that the probability above which research results should be accepted depends on the expected payback from the research (the benefits) and the inadvertent consequences (the harms). This probability may dramatically change depending on our willingness to tolerate error in accepting false research findings. Our acceptance of research findings changes as a function of what we call “acceptable regret,” i.e., our tolerance of making a wrong decision in accepting the research hypothesis. We illustrate our findings by providing a new framework for early stopping rules in clinical research (i.e., when should we accept early findings from a clinical trial indicating the benefits as true?). Obtaining absolute “truth” in research is impossible, and so society has to decide when less-than-perfect results may become acceptable. Citation: Djulbegovic B, Hozo I (2007) When Should Potentially False Research Findings Be Considered Acceptable? PLoS Med 4(2): e26. doi:10.1371/journal.pmed.0040026 Published: February 27, 2007 Copyright: © 2007 Djulbegovic and Hozo. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited. Funding: This work was funded by the Department of Interdisciplinary Oncology of the H. Lee Moffitt Cancer Center and Research Institute at the University of South Florida. Competing interests: The authors have declared that no competing interests exist. Abbreviations: B/H, net benefits/harms; CI, confidence interval; combined R[x], radiotherapy plus chemotherapy; EUT, expected utility theory; PPV, posterior probability; p[r], acceptable regret threshold probability; p[t], threshold probability; R[0], acceptable regret; RT, radiotherapy alone As society pours more resources into medical research, it will increasingly realize that the research “payback” always represents a mixture of false and true findings. This tradeoff is similar to the tradeoff seen with other societal investments—for example, economic development can lead to environmental harms while measures to increase national security can erode civil liberties. In most of the enterprises that define modern society, we are willing to accept these tradeoffs. In other words, there is a threshold (or likelihood) at which a particular policy becomes socially acceptable. In the case of medical research, we can similarly try to define a threshold by asking: “When should potentially false research findings become acceptable to society?” In other words, at what probability are research findings determined to be sufficiently true and when should we be willing to accept the results of this research? Defining the “Threshold Probability” As in most investment strategies, our willingness to accept particular research findings will depend on the expected payback (the benefits) and the inadvertent consequences (the harms) of the research. We begin by defining a “positive” finding in research in the same way that Ioannidis defined it [1]. A positive finding occurs when the claim for an alternative hypothesis (instead of the null hypothesis) can be accepted at a particular, pre-specified statistical significance. The probability that a research result is true (the posterior probability; PPV) depends on: (1) the probability of it being true before the study is undertaken (the prior probability), (2) the statistical power of the study, and (3) the statistical significance of the research result. The PPV may also be influenced by bias [1,4], i.e., by systematic misrepresentation of the research due to inadequacies in the design, conduct, or analysis [1]. However, the calculation of PPV tells us nothing about whether a particular research result is acceptable to researchers or not. Nevertheless, it can be shown that there is some probability (the “threshold probability,” p[t]) above which the results of a study will be sufficient for researchers to accept them as “true” [3]. The threshold probability will depend on the ratio of net benefits/ harms (B/H) that is generated by the study [3,5,6]. Mathematically the relationship between p[t] and B/H can be expressed as (see Appendix, Equation A1): We define net benefit as the difference between the values of the outcomes of the action taken under the research hypothesis and the null hypothesis, respectively (when in fact the research hypothesis is true). Net harms are defined as the difference between the values of the outcomes of the action taken under the null and the research hypotheses, respectively (when in fact the null hypothesis is true) [3]. It follows that if the PPV is above p[t] we can rationally accept the results of the research findings. Similarly, if the PPV is below p[t] we should accept the null hypothesis. Note that the research payoffs (the benefits) and the inadvertent consequences (harms) in Equation 1 can be expressed in a variety of units. In clinical research these units would typically be length of life, morbidity or mortality rates, absence of pain, cost, and strength of individual or societal preference for a given outcome [3]. We can now frame the crucial question of interest as: What is the minimum B/H ratio for the given PPV for which the research hypothesis has a greater value than the null hypothesis? Mathematically, this will occur when (see Appendix, Equations A1 and A2): Calculation of the Threshold Probability of “Accepted Truth” Figure 1 shows the threshold probability of “truth” (i.e., the probability above which the research findings may be accepted) as a function of B/H associated with the research results. The graph shows that as long as the probability of “accepted truth” (a horizontal line) is above the threshold probability curve, the research findings may be accepted. The higher the B/H ratio, the less certain we need to be of the truthfulness of the research results in order to accept them. Figure 1. The Threshold Probability Above (P[t] in Red) Which We Should Accept Findings of Research Hypothesis as Being True The horizontal yellow line indicates the actual conditional probability that the research hypothesis is true in the case of positive findings. This means that for benefit/harm ratios above the threshold (1.5 in this example), the research hypothesis can be accepted. Note that we are following the classic decision theory approach to the results of clinical trials, which states that a rational decision maker should select the research versus the null hypothesis depending on which one maximizes the value of consequences [7–9]. In the parlance of expected utility decision theory, this means that we should choose the option with the higher expected utility [3, 5,7–12]. (Expected utility is the average of all possible results weighted by their corresponding probabilities—see Appendix). In other words, the results of the research hypothesis should be accepted when the benefit of the action outweigh its harms. A Practical Example: When Should We Stop a Clinical Trial? Interim analyses of clinical trials are challenging exercises in which researchers and/or data safety monitoring committees have to make a judgment as to whether to accept early promising results and terminate a trial or whether the trial should continue [13,14]. If the interim analysis shows significant benefit in efficacy for the new treatment over the standard treatment, continuing to enroll patients into the trial may mean that many patients will receive the inferior standard treatment [13,14]. The first randomized controlled trial of circumcision for preventing heterosexual transmission of HIV, for example, was terminated early after the interim analysis showed that circumcised men were less likely to be infected with HIV [15]. However, if a study is wrongly terminated for presumed benefits, this could result in adoption of a new therapy of questionable efficacy [13,14]. We now illustrate these issues by considering a clinical research hypothesis: is radiotherapy plus chemotherapy (combined R[x]) superior to radiotherapy alone (RT) in the management of cancer of the esophagus? (see Box 1). We consider two scenarios: (1) the best-case scenario (B/H = 13.5), and (2) the worst-case scenario (B/H = 1.4). The probability that the research finding is true [16,17] (i.e., that combined treatment is truly better than radiotherapy alone) under the best-case scenario is 95% [95% confidence interval (CI), 89%–99.9%]. Under the worst-case scenario, the probability that combined treatment is better than radiotherapy alone is 80% [95% CI, 61%–99%]. The threshold probability above which these findings should be accepted is 7% [95% CI, 0%–30%] if we assume that B/ H = 13.5, or 41% [95% CI, 11%–72%] if we assume B/H = 1.4 (Table 1). Table 1. How True Is the Research Hypothesis that Combined Chemotherapy Is Superior To Radiotherapy Alone in the Management of Esophageal Cancer? Box 1. Is Combined Chemotherapy Plus Radiotherapy Superior To Radiotherapy Alone for Treating Esophageal Cancer? The Radiation Oncology Cooperative Group conducted a randomized controlled trial to evaluate the effects of combined chemotherapy and radiotherapy versus radiotherapy alone in patients with cancer of the esophagus [28]. A sample size of 150 patients was planned to detect an improvement in the two-year survival rate from 10%–30% in favor of combined R[x] (at α = 0.05 and ß = 0.10). At the interim analysis, 88% of patients in the control group (RT) had died while only 59% in the experimental arm (combined R[x]) had died, resulting in a survival advantage of 29% in favor of combined R[x] (p < 0.001). For this reason, the trial was terminated prematurely after enrolling 121 patients. Two percent of patients died as a result of treatment in the combined R[x] group versus 0% in the RT arm. Thus, the observed net benefit/harm ratio in this trial was [88-59-2]/2 = 13.5 [29] (the best-case scenario). For our worst-case scenario we assume that two-thirds of patients who experienced life-threatening toxicities with combined R[x] (12%) will have died. This will result in the worst-case net benefit/ harms ratio = (88-59-12)/12 = 1.4. The trial was stopped using classic inferential statistics which indicated that the probability of the observed results, assuming the null hypothesis that combined R[x] is equivalent to RT, was extremely small (p < 0.001). This, however, tells us nothing about how true the alternative hypothesis is [16,17], i.e., in our case, what is the probability that combined R[x] is better than RT? The probability that the research finding is true [16,17] (i.e., that combined R[x] is truly better treatment than RT) under the best-case scenario is 95% [95% CI, 89%–99.9%]. Under the worst-case scenario, the probability that combined R[x] is better than RT is 80% [95% CI, 61%–99%]. The results indicate that in the best-case scenario, the probability that the research findings are true far exceeds the threshold above which the results should be accepted (i.e., PPV is greater than p[t]). Therefore, rationally, in this case we should not hesitate to accept the findings from this study as truthful. However, in the worst-case scenario, the lower limit of the PPV's 95% confidence interval intersects with the upper limit of the threshold's 95% confidence interval, indicating that under these circumstances the research hypothesis may not be acceptable (since PPV is possibly less than p[t]). Had the investigators made a mistake when they terminated the trial early? Dealing with Unavoidable Erroneous Research Findings Mistakes are an integral part of research. Positive research findings may subsequently be shown to be false [18]. When we accept that our initially positive research findings were in fact false, we may discover that another alternative (i.e., the null hypothesis) would have been preferable [7,19–21]. When an initially positive research finding turns out to be false, this may bring a sense of loss or regret [19,20,22,23]. However, abundant experience has shown that there are many situations in which we can tolerate wrong decisions, and others in which we cannot [2]. We have previously described the concept of acceptable regret, i.e., under certain conditions making a wrong decision will not be particularly burdensome to the decision maker [2]. Defining Tolerable Limits for Accepting Potentially False Results We now apply the concept of acceptable regret to address the question of whether potentially false research findings should be tolerated. In other words: which decision (regarding a research hypothesis) should we make if we want to ensure that the regret is less than a predetermined (minimal acceptable) regret, R[0] [2]? (R[0] denotes acceptable regret and should be expressed in the same units as benefits and harms). It can easily be shown that we should be willing to accept the results of potentially false research findings as long as the posterior probability of it being true is above the acceptable regret threshold probability, p[r] (see Equation 3, Appendix, and Equations A3 and A4): where r is the amount of acceptable regret expressed as a percentage of the benefits that we are willing to lose in case our decision proves to be the wrong one (i.e., R[o] = r · B). This equation describes the effect of acceptable regret on the threshold probability (Equation 1) in such a way that the PPV now also needs to be above the threshold defined in Equation 3 for the research results to become acceptable. Note that actions under expected utility theory (EUT) and acceptable regret may not necessary be identical, but arguably the most rational course of action would be to select those research findings with the highest expected utility while keeping regret below the acceptable levels. The supplementary material (a longer version of the paper and Appendix) show that the maximum possible fraction of benefits that we can forgo (and still be wrong) while at the same time adhering to the precepts of EUT is given by (see Appendix, Equations A3–A6): A practical interpretation of this inequality is that some research findings may never become acceptable unless we are ready to violate the axioms of EUT, i.e., accept value r to be larger than defined in Equation 4 (Table 2). Table 2. Probability that Research Findings Are True and Benefit/Harms Ratio Above Which Findings May Become Acceptable We return now to the “real life” scenario above, i.e., the dilemma of whether to stop a clinical trial early. In our worst-case analysis (Box 1), we found that the probability that combined R[x] is better than radiotherapy alone could potentially be as low as 80% [95% CI, 61%–99%]. This figure overlaps with the probability of the threshold of 41% [95% CI, 11%–72%] above which research findings are acceptable under the worst case scenario (see Table 1) (i.e., PPV is possibly less than p[t]; see Equations 1 and 2). Thus, it is quite conceivable that the investigators made a mistake when they closed the trial prematurely. One way to handle situations in which evidence is not solidly established is to explicitly take into account the possibility that one can make a mistake and wrongly accept the results of a research hypothesis. Accepting this possibility can, in turn, help us determine “decision thresholds” that will take into account the amount of error which may or may not be particularly troublesome to us if we wrongly accept research findings. Let us assume that the investigators in the esophageal cancer trial are prepared to accept that they may be wrong and that they were willing to forgo 10%, 30%, or 67% of benefits. Using Equation 3, the calculations in Box 2 and Figure 2 show that for any willingness to tolerate loss of net benefits of greater than 10%, the probability that combined R[x] is superior to R[T] is above all decision thresholds (since p[r] = 0 in best-case scenario; Equation 3). Therefore the investigators seemed to have been correct when they terminated the trial earlier than originally anticipated. Figure 2. The Threshold Probability (P[t]) Above Which We Should Accept Findings of Research Hypothesis as Being True (Pink Line) as a Function of Benefit/Harm Ratio The calculated (acceptable regret) threshold above which we should accept research findings is shown for the worst-case scenario (B/H = 1.4; see text for details) with a (hypothetical) assumption that we are willing to forgo 30% of the benefits (slanted line). The calculated threshold probability (acceptable regret threshold) has a value of 58% when B/H = 1.4 (the horizontal line). This means that as long as the probability that research findings are true is above this acceptable regret threshold, these research findings could be accepted with tolerable amount of regret in case the research hypothesis proves to be wrong (for didactic purposes only one acceptable regret threshold is shown). See Box 2 and text for details. Box 2. Determining the Threshold Above Which Research Findings Are Acceptable When Acceptable Regret Is Taken Into Account You will recall (in Box 1) that the Radiation Oncology Cooperative Group investigators hoped to detect an absolute difference of 10%–30% in survival in favor of combined R[x]. By finding that combined R[x] improved survival by 29%, they appeared to have realized their most optimistic expectations [28]. This implies that the investigators would consider their trial a success even if the survival was improved by 10% instead, i.e., less than 67% of the realized, but most optimistic outcome [1-(.10/.30) × 100% = 67%]. Therefore, we assume that the investigators in the esophageal cancer trial are prepared to accept that they may be wrong and that they were willing to forgo 10%, 30%, or 67% of benefits. We applied Equation 3 to calculate acceptable regret thresholds above which we can accept research findings as true (i.e., when PPV > p[r]). Best-case scenario (benefit/harm ratio: 13.5). The calculated thresholds above which we should accept the findings are zero, regardless of whether our tolerable loss of benefits was 10%, 30%, or 67%. Note that these thresholds (p[r] = 0) are well below calculated probability that the research hypothesis is true [PPV = 95% (88%–99.9%)] ( i.e., PPV > p[r] = 0 for all acceptable regret assumptions; Equation 3, Table 1) and hence the research hypothesis should be accepted. Worst-case scenario (benefit/harm ratio: 1.4). The calculated threshold above which we should accept the findings from this study is 86% [95% CI, 84%–88%] for a loss of 10% of benefits, 58% [95% CI, 52%–64%] for a loss of 30% of net benefits, and 6% [95% CI, 0%–19%] if we are willing to tolerate a loss of 67% of net benefits. This means that, except in the case when acceptable regret is 10% or less, the probability that combined R[x] is superior to RT [80% (61%–99%)] is above all other decision thresholds and its “truthfulness” can be accepted (because PPV [= 80% (61%–99%)] > acceptable regret threshold [= 58% (52%–64%)] and PPV > acceptable regret threshold [= 6% (0%– 19%)]). Note that in case of our willingness to tolerate loss of 30% of benefits for being wrong, the upper limit of the acceptable regret CI (=64%) still overlaps with the lower limit of PPV's CI (=61%), but that is not the case if we are willing to forgo 67% of treatment benefits. See Equation 3, Table 1. Threshold Probabilities in Various Types of Clinical Research Table 2 summarizes the results of most types of clinical research showing the probabilities that the research findings are true and the benefit/harms ratio above which the findings become acceptable. For each type of research, the table shows these probabilities with and without acceptable regret being taken into account. What is remarkable is that depending on the amount of acceptable regret, our acceptance of potentially false research findings may dramatically change. For example, in the case of a meta-analysis of small inconclusive studies, we can accept the research hypothesis as true only if B/H > 1.44. However, if we are willing to forgo, say, only 1% of the net benefits in case we prove to be mistaken, the B/H ratio for accepting the findings from the meta-analysis of small inconclusive studies dramatically increases to 59. In the final analysis, the answer to the question posed in the title of this paper, “When should potentially false research findings be considered acceptable?” has much to do with our beliefs about what constitutes knowledge itself [24]. The answer depends on the question of how much we are willing to tolerate the research results being wrong. Equation 3 shows an important result: if we are not willing to accept any possibility that our decision to accept a research finding could be wrong (r = 0), that would mean that we can operate only at absolute certainty in the “truth” of a research hypothesis (i.e., PPV = 100%). This is clearly not an attainable goal [1]. Therefore, our acceptability of “truth” depends on how much we care about being wrong. In our attempts to balance these tradeoffs, the value that we place on benefits, harms, and degrees of errors that we can tolerate becomes crucial. However, because a typical clinical research hypothesis is formulated to test for benefits, we have here postulated a relationship between acceptable regret and the fraction of benefits that we are willing to forgo in the case of false research findings. Unfortunately, when we move outside the realm of medical treatments and interventions, the immediate and long-term harms and benefits are very difficult to quantify. On occasion, wrongly adopting some false positive findings may lead to the adoption of other false findings, thus creating fields replete with spurious claims. One typical example is the use of stem cell transplant for breast cancer, which resulted in tens of thousands of women getting aggressive, toxic, and very expensive treatment based on strong beliefs obtained in early phase I/II trials until controlled, randomized trials demonstrated no benefits but increased harms of stem cell transplants compared with conventional chemotherapy [25]. Therefore, even for clinical medicine, where benefits and harms are more typically measured, we should acknowledge that often the quality of the information on harms is suboptimal [26]. There is no guarantee that the “benefits” will exceed the “harms.” Although (as noted in Text S1) there is nothing to prevent us from relating R[0] to harms, or both benefits and harms, one must acknowledge that there is much more uncertainty, often total ignorance, about harms (since data on harms is often limited). As a consequence, under these circumstances research may become acceptable only if we relax our criteria for acceptable regret, i.e., accept value r to be larger than defined in Equation 4. In other words, unless we are ready to violate the precepts of rational decision making (see the figures in red in Table 2), a research finding with low PPV (the majority of research findings) should not be accepted [1]. We conclude that since obtaining the absolute “truth” in research is impossible, society has to decide when less-than-perfect results may become acceptable. The approach presented here, advocating that the research hypothesis should be accepted when it is coherent with beliefs “upon which a man is prepared to act” [27], may facilitate decision making in scientific research. Supporting Information Text S1. Longer version of the paper (657 KB DOC). Text S2. Appendix (163 KB DOC). We thank Drs. Stela Pudar-Hozo, Heloisa Soares, Ambuj Kumar, and Madhu Behrera for critical reading of the paper and their useful comments. We also wish to thank Dr. John Ioannidis for important and constructive insights particularly related to the issue of quality of data on harms and overall context of this work.
{"url":"http://www.plosmedicine.org/article/info:doi/10.1371/journal.pmed.0040026","timestamp":"2014-04-18T06:24:54Z","content_type":null,"content_length":"120581","record_id":"<urn:uuid:b7cfbf2e-b384-4424-a23c-90dff7f18a82>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00622-ip-10-147-4-33.ec2.internal.warc.gz"}
Licensed to: iChapters User 7th edition College Algebra and Trigonometry College Algebra and Trigonometry Richard N. Aufmann Vernon C. Barker Richard D. Nation 7th edition Cengage Learning developed and published this special edition for the benefit of students and faculty outside the United States and Canada. Content may significantly differ from the North American college edition. If you purchased this book within the United States or Canada, you should be aware that it has been imported without the approval of the publisher or the author. Aufmann Barker Nation Thank you for choosing a Cengage Learning International Edition. Cengage Learning’s mission is to shape the future of global learning by delivering consistently better learning solutions for students, instructors, and institutions worldwide. This textbook is the result of an innovative and collaborative global development process designed to engage students and deliver content and cases with global relevance. NOT AUTHORIZED FOR SALE IN THE U.S.A. OR CANADA For product information: www.cengage.com/international Visit your local office: www.cengage.com/global Visit our corporate website: www.cengage.com 1439049394_ise_cvr.indd 1 ISE/Aufmann/Barker/Nation, College Algebra and Trigonometry, 7th Edition ISBN-1-4390-4939-4 ©2011 Designer: Denise Davidson Text printer: Quebecor World/Taunton Cover printer: Quebecor World/Taunton Binding: Case Trim: 8.5" x 10" CMYK 1/9/10 5:07 PM 49394_00_FM.qxd 1/9/10 Licensed to: iChapters User 12:09 PM Page iv College Algebra and Trigonometry, Seventh Edition Richard N. Aufmann, Vernon C. Barker, Richard D. Nation Acquisitions Editor: Gary Whalen Developmental Editor: Carolyn Crockett Assistant Editor: Stefanie Beeck © 2011, 2008 Brooks/Cole, Cengage Learning ALL RIGHTS RESERVED. No part of this work covered by the copyright herein may be reproduced, transmitted, stored, or used in any form or by any means graphic, electronic, or mechanical, including but not limited to photocopying, recording, scanning, digitizing, taping, Web distribution, information networks, or information storage and retrieval systems, except as permitted under Section 107 or 108 of the 1976 United States Copyright Act, without the prior written permission of the publisher. Editorial Assistant: Guanglei Zhang Media Editor: Lynh Pham Marketing Manager: Myriah Fitzgibbon Marketing Assistant: Angela Kim Marketing Communications Manager: Katy Malatesta For product information and technology assistance, contact us at Cengage Learning Customer & Sales Support, 1-800-354-9706. For permission to use material from this text or product, submit all requests online at www.cengage.com/permissions. Further permissions questions can be e-mailed to permissionrequest@cengage.com. Content Project Manager: Jennifer Risden Creative Director: Rob Hugel Library of Congress Control Number: 2009938510 Art Director: Vernon Boes International Student Edition: Print Buyer: Karen Hunt ISBN-13: 978-1-4390-4939-6 Rights Acquisitions Account Manager, Text: Roberta Broyer ISBN-10: 1-4390-4939-4 Rights Acquisitions Account Manager, Image: Don Schlotman Production Service: Graphic World Inc. Text Designer: Diane Beasley Brooks/Cole 20 Davis Drive Belmont, CA 94002-3098 USA Photo Researcher: Prepress PMG Copy Editor: Graphic World Inc. Illustrator: Network Graphics; Macmillan Publishing Solutions Cengage Learning is a leading provider of customized learning solutions with office locations around the globe, including Singapore, the United Kingdom, Australia, Mexico, Brazil, and Japan. Locate your local office at www.cengage.com/global. Cover Designer: Lisa Henry Cover Image: Chad Ehlers, Getty Images Compositor: Macmillan Publishing Solutions Cengage Learning products are represented in Canada by Nelson Education, Ltd. To learn more about Brooks/Cole, visit www.cengage.com/brookscole Purchase any of our products at your local college store or at our preferred online store www.CengageBrain.com. Printed in Canada 1 2 3 4 5 6 7 14 13 12 11 10 Copyright 2011 Cengage Learning, Inc. All Rights Reserved. May not be copied, scanned, or duplicated, in whole or in part. 48610_01_ch0p_s01_001-016.qxd Licensed to: iChapters User CHAPTER 10/14/09 5:16 PM P Page 1 PRELIMINARY CONCEPTS AFP/Getty Images P.1 The Real Number System P.2 Integer and Rational Number Exponents P.3 Polynomials P.4 Factoring P.5 Rational Expressions P.6 Complex Numbers Albert Einstein proposed relativity theory more than 100 years ago, in 1905. Martial Trezzini/epa/CORBIS Relativity Is More Than 100 Years Old The Large Hadron Collider (LHC). Atomic particles are accelerated to high speeds inside the long structure in the photo above. By studying particles moving at speeds that approach the speed of light, physicists can confirm some of the tenets of relativity theory. Positron emission tomography (PET) scans, the temperature of Earth’s crust, smoke detectors, neon signs, carbon dating, and the warmth we receive from the sun may seem to be disparate concepts. However, they have a common theme: Albert Einstein’s Theory of Special Relativity. When Einstein was asked about his innate curiosity, he replied: The important thing is not to stop questioning. Curiosity has its own reason for existing. One cannot help but be in awe when he contemplates the mysteries of eternity, of life, of the marvelous structure of reality. It is enough if one tries merely to comprehend a little of this mystery every day. Today, relativity theory is used in conjunction with other concepts of physics to study ideas ranging from the structure of an atom to the structure of the universe. Some of Einstein’s equations require working with radical expressions, such as the expression given in Exercise 139 on page 31; other equations use rational expressions, such as the expression given in Exercise 64 on page 59. 1 Copyright 2011 Cengage Learning, Inc. All Rights Reserved. May not be copied, scanned, or duplicated, in whole or in part. 48610_01_ch0p_s01_001-016.qxd Licensed to: iChapters User 2 CHAPTER P 10/14/ 09 5:16 PM Page 2 PRELIMINARY CONCEPTS SECTION P.1 Sets Union and Intersection of Sets Interval Notation Absolute Value and Distance Exponential Expressions Order of Operations Agreement Simplifying Variable Expressions The Real Number System Sets Human beings share the desire to organize and classify. Ancient astronomers classified stars into groups called constellations. Modern astronomers continue to classify stars by such characteristics as color, mass, size, temperature, and distance from Earth. In mathematics it is useful to place numbers with similar characteristics into sets. The following sets of numbers are used extensively in the study of algebra. 51, 2, 3, 4, Á 6 Natural numbers 5 Á , -3, -2, -1, 0, 1, 2, 3, Á 6 Integers 5all terminating or repeating decimals6 Rational numbers 5all nonterminating, nonrepeating decimals6 Irrational numbers 5all rational or irrational numbers6 Real numbers If a number in decimal form terminates or repeats a block of digits, then the number is a rational number. Here are two examples of rational numbers. 0.75 is a terminating decimal. 0.245 is a repeating decimal. The bar over the 45 means that the digits 45 repeat without end. That is, 0.245 = 0.24545454 Á . p , where p and q are inteq gers and q Z 0. Examples of rational numbers written in this form are Rational numbers also can be written in the form 3 4 Note that Math Matters Archimedes (c. 287–212 B.C.) was the first to calculate p with any degree of precision. He was able to show that 3 10 1 6 p 6 3 71 7 from which we get the approximation 3 1 22 = L p 7 7 The use of the symbol p for this quantity was introduced by Leonhard Euler (1707–1783) in 1739, approximately 2000 years after Archimedes. 27 110 - 5 2 7 1 -4 3 7 n = 7, and, in general, = n for any integer n. Therefore, all integers are rational 1 1 numbers. p , the decimal form of the rational q number can be found by dividing the numerator by the denominator. When a rational number is written in the form 3 = 0.75 4 27 = 0.245 110 In its decimal form, an irrational number neither terminates nor repeats. For example, 0.272272227 Á is a nonterminating, nonrepeating decimal and thus is an irrational number. One of the best-known irrational numbers is pi, denoted by the Greek symbol p . The number p is defined as the ratio of the circumference of a circle to its diameter. Often in applications the rational number 3.14 or the rational 22 number is used as an approximation of the irrational number p. 7 Every real number is either a rational number or an irrational number. If a real number is written in decimal form, it is a terminating decimal, a repeating decimal, or a nonterminating and nonrepeating decimal. Copyright 2011 Cengage Learning, Inc. All Rights Reserved. May not be copied, scanned, or duplicated, in whole or in part. 48610_01_ch0p_s01_001-016.qxd Licensed to: iChapters User 10/14/09 5:16 PM Page 3 P.1 The relationships among the various sets of numbers are shown in Figure P.1. Math Matters Sophie Germain (1776–1831) was born in Paris, France. Because enrollment in the university she wanted to attend was available only to men, Germain attended under the name of Antoine-August Le Blanc. Eventually her ruse was discovered, but not before she came to the attention of Pierre Lagrange, one of the best mathematicians of the time. He encouraged her work and became a mentor to her. A certain type of prime number is named after her, called a Germain prime number. It is a number p such that p and 2p + 1 are both prime. For instance, 11 is a Germain prime because 2(11) + 1 = 23 and 11 and 23 are both prime numbers. Germain primes are used in public key cryptography, a method used to send secure communications over the Internet. Alternative to Example 1 For each number, check all that apply. N = Natural I = Integer Q = Rational R = Real N Ϫ57 3.3719 7.42917 0 1.191191119 . . . 101 3 THE REAL NUMBER SYSTEM I Q R Positive integers (natural numbers) 7 1 103 Integers Zero 0 −201 7 0 Rational numbers 3 4 −5 Real numbers 3 4 3.1212 −1.34 −5 3.1212 −1.34 7 Irrational numbers Negative integers −201 −8 1 −5 −0.101101110... √7 π −0.101101110... √7 π −5 0 103 −201 Figure P.1 Prime numbers and composite numbers play an important role in almost every branch of mathematics. A prime number is a positive integer greater than 1 that has no positiveinteger factors1 other than itself and 1. The 10 smallest prime numbers are 2, 3, 5, 7, 11, 13, 17, 19, 23, and 29. Each of these numbers has only itself and 1 as factors. A composite number is a positive integer greater than 1 that is not a prime number. For example, 10 is a composite number because 10 has both 2 and 5 as factors. The 10 smallest composite numbers are 4, 6, 8, 9, 10, 12, 14, 15, 16, and 18. EXAMPLE 1 Classify Real Numbers Determine which of the following numbers are a. integers b. rational numbers c. irrational numbers d. real numbers e. prime numbers f. composite numbers -0.2, 0, 0.3, 0.71771777177771 Á , p, 6, 7, 41, 51 Solution a. Integers: 0, 6, 7, 41, 51 b. Rational numbers: -0.2, 0, 0.3, 6, 7, 41, 51 c. Irrational numbers: 0.71771777177771..., p d. Real numbers: -0.2, 0, 0.3, 0.71771777177771 Á , p, 6, 7, 41, 51 e. Prime numbers: 7, 41 f. Composite numbers: 6, 51 Try Exercise 2, page 14 Each member of a set is called an element of the set. For instance, if C = 52, 3, 56, then the elements of C are 2, 3, and 5. The notation 2 ʦ C is read “2 is an element of C.” 1 A factor of a number divides the number evenly. For instance, 3 and 7 are factors of 21; 5 is not a factor of 21. Copyright 2011 Cengage Learning, Inc. All Rights Reserved. May not be copied, scanned, or duplicated, in whole or in part...
{"url":"http://www.doocu.co/pdf/search/soalan+integer+matematik+tingkatan+2","timestamp":"2014-04-18T12:20:04Z","content_type":null,"content_length":"71519","record_id":"<urn:uuid:1e50df53-39a6-42a1-b3e2-45f053d5deca>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00105-ip-10-147-4-33.ec2.internal.warc.gz"}
Siu decomposition up vote 0 down vote favorite hello. I have the following question: Let $T$ be a positive closed current of bidimension $(p,p)$ then one has the Siu decomposition $T=R+S$ where $R$ is a positive closed current such that its Lelong number is zero along analytic sets of dimension greater or equal $p$. Now my question is why this current $R$ is the bigger with this property? $R$ is called the residual part while $S$ is the singualr one and $S$ is also positive. thank you I think you are talking about the result in Siu's paper "Analyticity of sets associated to Lelong numbers and extension of closed positive currents, Invent. Math. 27 (1974) 53–156." But I do not quite understand what you are asking. What do you mean by "$R$ is bigger with this property"? – Paul Jul 6 '11 at 12:30 2 He wrote not "is bigger" bit "is the bigger", so probably meant "is the biggest" (e.g. in Spanish "el" = the, "más grande" = bigger, but "el más grande" = the biggest). – Noam D. Elkies Jul 6 '11 at 15:15 add comment 3 Answers active oldest votes In my opinion, the point is not that $R$ is the biggest in whatever sense you may give to this, but that the decomposition $T=R+S$ with $S=\sum \lambda_j [A_j]$ ($A_j$ being a $p$-dimensional analytic set), and $R$ having zero Lelong number along any $p$-dimensional analytic set. The uniqueness is clear because for $x$ generic in $A_j$, $\nu(R+S,x)=\lambda_j \nu([A_j],x)=\lambda_j$, which determines thus uniquely $\lambda_j$, and therefore $S$ and $R$. up vote 2 Now, if you really want to see that $R$ is the biggest current (in the sense of positivity of currents) such that $R$ has zero Lelong number along any $p$-dimensional analytic set, then you down vote can proceed this way: assume that $T=S'+R'$ is another such decomposition. Then for $x$ generic in $A_j$, $\nu(T,A_j)=\nu(T,x)=\nu(S',x)=\nu(S',A_j)$. But it is a classical fact (see e.g Demailly, Complex analytic and differential Geometry, Proposition 8.16) that $S'-\nu(S', A_j) [A_j]$ is a closed positive current, so that $S' \geqslant S$, and therefore $R' \leqslant R$, which concludes. Indeed, the crucial property that Henri points out in his answer that if T is a closed positive current of bidimension $(p,p)$ and $A$ an irreducible analytic set of dimension $p$ then $T-\nu(T,A)[A]$ is again positive is exactly what you need in order show the existence part of the decomposition above (more precisely the convergence of the series in the weak topology). Thus, I really think that our answers are more or less equivalent! – diverietti Jul 7 '11 at 8:01 add comment I am not sure if I understand correctly your question, but anyway I'll try to answer: the point is that Siu's decomposition is in fact unique, but let me explain better. Let $X$ be a complex manifold and $T$ a closed positive current of bidimension $(p,p)$. Then, there is a unique decomposition of $T$ as a (possibly finite) weakly convergent series $$ T=\ sum_{j\ge 1}\lambda_j[A_j]+R,\quad\lambda_j\ge 0, $$ where $[A_j]$ is the current of integration over an irreducible $p$-dimensional analytic set $A_j\subset X$ and where $R$ is a closed positive current with the property that $\dim E_c(R)< p$ for every $c>0$. Here $E_c(R)$ is the set of point $x\in X$ such that the Lelong number $\nu(R,x)$ of $R$ at $x$ is greater than or equal to $c$. up vote 1 down vote To prove the uniqueness, assume that $T$ admits such a decomposition. Then, the $p$-dimensional components of $E_c(T)$ are $(A_j)_{\lambda_j\ge c}$, for $\nu(T,x)=\sum\lambda_j\nu([A_j],x)+ \nu(R,x)$ is non zero only on $\bigcup A_j\cup\bigcup E_c(R)$, and is equal to $\lambda_j$ generically on $A_j$. In particular $A_j$ and $\lambda_j$ are unique. Thus $R$ is neither the biggest nor the littlest... It is just unique! Hi Simone! You've been faster than I on this one ;-) But I think the OP's question makes sense. Indeed, you could write $T=(S+R/2)+R$, which would give a decomposition in the OP's sense. – Henri Jul 6 '11 at 22:13 Hi Henri! Did you mean $T=(S+R/2)+R/2$? Anyway, I don't think so, because $R/2$ is not of the form "integration on analytic set of dimension $p$". – diverietti Jul 6 '11 at 22:17 Yes I mean $R/2$ sorry! Of course $R/2$ is not of the form you say because it is the residual part, thus has zero Lelong number along any $p$-dim analytic set. It is just a matter of definition here, cf the OP question. – Henri Jul 6 '11 at 22:21 Right, a matter of definitions... That's why I tried to fix also notations in my answer! – diverietti Jul 6 '11 at 22:36 add comment R is the biggest with such property since S is the sum of currents of integration on varieties of dimension p. Moreover, a dimension (p,p) current can not have positive Lelong number up vote 0 down along varieties of dimension greater than p. 1 There is still something to say so far, cf the third § of my answer. – Henri Jul 6 '11 at 22:10 add comment Not the answer you're looking for? Browse other questions tagged complex-geometry or ask your own question.
{"url":"http://mathoverflow.net/questions/69621/siu-decomposition/69676","timestamp":"2014-04-17T12:38:30Z","content_type":null,"content_length":"65870","record_id":"<urn:uuid:2b5283e2-c410-4816-9139-6926c9f43844>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00394-ip-10-147-4-33.ec2.internal.warc.gz"}
Graterford, PA Algebra 1 Tutor Find a Graterford, PA Algebra 1 Tutor ...All of my students have seen grade improvement within their first two weeks of tutoring, and all of my students have reviewed me positively. Through WyzAnt, I have tutored math subjects from prealgebra to precalculus; I have also tutored English writing, English grammar, and economics, and I am ... 38 Subjects: including algebra 1, English, reading, calculus ...What helped me have those results with my students were a number of factors: 1. a very good understanding of the subject2. trying to understand my students’ needs by trying to figure out what they do not understand, what they do not know3. trying to explain everything in a way they can understand... 7 Subjects: including algebra 1, chemistry, geometry, organic chemistry ...I have more than five years experience in using Outlook for creating email accounts, managing incoming and outgoing emails, and using the calendar efficiently. I can help you obtain or sharpen these skills. I worked as a Tax Specialist in the past three years and prepared federal, state, and local tax returns for over 100 people. 24 Subjects: including algebra 1, calculus, accounting, Chinese ...It also emphasizes writing proofs to solve (prove) properties of geometric figures. Microsoft Word is a full-featured word processing program. Word contains rudimentary desktop publishing capabilities and is the most widely used word processing program on the market. 39 Subjects: including algebra 1, English, writing, geometry ...I believe that tutoring will prepare me for my future profession, as well as allow me to increase a young student’s confidence in their ability to perform better in mathematics. Some of the students I work with are from the following schools: * Daniel Boone Area High School in Birdsboro * Exeter... 13 Subjects: including algebra 1, calculus, geometry, GRE Related Graterford, PA Tutors Graterford, PA Accounting Tutors Graterford, PA ACT Tutors Graterford, PA Algebra Tutors Graterford, PA Algebra 2 Tutors Graterford, PA Calculus Tutors Graterford, PA Geometry Tutors Graterford, PA Math Tutors Graterford, PA Prealgebra Tutors Graterford, PA Precalculus Tutors Graterford, PA SAT Tutors Graterford, PA SAT Math Tutors Graterford, PA Science Tutors Graterford, PA Statistics Tutors Graterford, PA Trigonometry Tutors Nearby Cities With algebra 1 Tutor Congo, PA algebra 1 Tutors Creamery algebra 1 Tutors Englesville, PA algebra 1 Tutors Eshbach, PA algebra 1 Tutors Fagleysville, PA algebra 1 Tutors Gabelsville, PA algebra 1 Tutors Gulph Mills, PA algebra 1 Tutors Linfield, PA algebra 1 Tutors Morysville, PA algebra 1 Tutors Niantic, PA algebra 1 Tutors Passmore, PA algebra 1 Tutors Rahns, PA algebra 1 Tutors Schultzville, PA algebra 1 Tutors Valley Forge algebra 1 Tutors Zieglersville, PA algebra 1 Tutors
{"url":"http://www.purplemath.com/Graterford_PA_algebra_1_tutors.php","timestamp":"2014-04-18T04:21:22Z","content_type":null,"content_length":"24347","record_id":"<urn:uuid:c5bc4ac4-4dee-4f8c-ae26-624d286f59da>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00025-ip-10-147-4-33.ec2.internal.warc.gz"}
Compute the roots of a polynomial. Return the roots (a.k.a. “zeros”) of the polynomial c : 1-D array_like Parameters : 1-D array of polynomial coefficients. out : ndarray Returns : Array of the roots of the polynomial. If all the roots are real, then out is also real, otherwise it is complex. The root estimates are obtained as the eigenvalues of the companion matrix, Roots far from the origin of the complex plane may have large errors due to the numerical instability of the power series for such values. Roots with multiplicity greater than 1 will also show larger errors as the value of the series near such points is relatively insensitive to errors in the roots. Isolated roots near the origin can be improved by a few iterations of Newton’s method. >>> import numpy.polynomial.polynomial as poly >>> poly.polyroots(poly.polyfromroots((-1,0,1))) array([-1., 0., 1.]) >>> poly.polyroots(poly.polyfromroots((-1,0,1))).dtype >>> j = complex(0,1) >>> poly.polyroots(poly.polyfromroots((-j,0,j))) array([ 0.00000000e+00+0.j, 0.00000000e+00+1.j, 2.77555756e-17-1.j])
{"url":"http://docs.scipy.org/doc/numpy-dev/reference/generated/numpy.polynomial.polynomial.polyroots.html","timestamp":"2014-04-19T05:20:23Z","content_type":null,"content_length":"10207","record_id":"<urn:uuid:7b46874d-a9b7-48ec-8dea-197a672e65d1>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00543-ip-10-147-4-33.ec2.internal.warc.gz"}
[Numpy-discussion] Tuning sparse stuff in NumPy Robert Cimrman cimrman3@ntc.zcu... Tue Mar 27 12:27:32 CDT 2007 David Koch wrote: > Ok, > I did and the results are: > csc * csc: 372.601957083 > csc * csc: 3.90811300278 a typo here? which one is csr? > csr * csc: 15.3202679157 > csr * csr: 3.84498214722 > Mhm, quite insightful. Note, that in an operation X.transpose() * X, > where X > is csc_matrix, then X.tranpose() is automatically cast to csr_matrix. A > re-cast to csc make the whole operation faster. It's still about 1000 times > slower than Matlab but 4 times faster than before. ok. now which version of scipy (scipy.__version__) do you use (you may have posted it, but I missed it)? Not so long ago, there was an effort by Nathan Bell and others reimplementing sparsetools + scipy.sparse to get better usability and performance. My (almost latest) version is More information about the Numpy-discussion mailing list
{"url":"http://mail.scipy.org/pipermail/numpy-discussion/2007-March/026841.html","timestamp":"2014-04-18T01:22:26Z","content_type":null,"content_length":"3561","record_id":"<urn:uuid:ba79d91c-3014-46e6-8e8b-8f0762ed8957>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00516-ip-10-147-4-33.ec2.internal.warc.gz"}
c05rbc Solution of a system of nonlinear equations using first derivatives (easy-to-use) c05rcc Solution of a system of nonlinear equations using first derivatives (comprehensive) c05rdc Solution of a system of nonlinear equations using first derivatives (reverse communication) c05zdc Check user's function for calculating first derivatives of a set of nonlinear functions of several variables d04aac Numerical differentiation, derivatives up to order 14, function of one real variable e01aec Interpolating functions, polynomial interpolant, data may include derivative values, one variable e01bgc Evaluation of interpolant computed by e01bec, function and first derivative e02agc Least squares polynomial fit, values and derivatives may be constrained, arbitrary data points e02ahc Derivative of fitted polynomial in Chebyshev series form e02bcc Evaluation of fitted cubic spline, function and derivatives e02dhc Evaluation of spline surface at mesh of points with derivatives e04bbc Minimizes a function of one variable, requires first derivatives e04fcc Unconstrained nonlinear least squares (no derivatives required) e04gbc Unconstrained nonlinear least squares (first derivatives required) e04hdc Checks second derivatives of a user-defined function e04jbc Bound constrained nonlinear minimization (no derivatives required) e04kbc Bound constrained nonlinear minimization (first derivatives required) e04lbc Solves bound constrained problems (first and second derivatives required) e04ufc Minimum, function of several variables, sequential QP method, nonlinear constraints, using function values and optionally first derivatives (reverse communication, comprehensive) e04yac Least squares derivative checker for use with e04gbc g01rtc Landau derivative function φ^′(λ) g02hlc Calculates a robust estimation of a correlation matrix, user-supplied weight function plus derivatives s14adc Scaled derivatives of ψ(x) s14aec Derivative of the psi function ψ(x) s14afc Derivative of the psi function ψ(z) © The Numerical Algorithms Group Ltd, Oxford UK. 2012
{"url":"http://www.nag.com/numeric/CL/nagdoc_cl23/pdf/INDEXES/KWIC/derivative.html","timestamp":"2014-04-18T00:45:21Z","content_type":null,"content_length":"7100","record_id":"<urn:uuid:7c847134-a629-4eb1-99af-270b5089c3ef>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00139-ip-10-147-4-33.ec2.internal.warc.gz"}
I think I can read it correctly. In latex (when it works again) $\int_{0}{1} \sqrt{9+4t^2}dx$. In easier to read type: int sqrt(9+4t^2)dt, evaluated from 0 to 1. Before I do this, have you studied trig substitution, because if you haven't this might be slightly difficult to get at first? i know some trig identities if you mean that is trig substitution? Alright. Well draw a right triangle with the horizontal leg being 3 and the vertical leg being 2t. So it follows that the hypotenuse is sqrt(9+4^2). Now let's make some substitutions. Call the angle between the horizontal leg and the hypotenuse theta. tan(theta) = (2t)/3, and taking the derivative... sec^2(theta)d(theta) = (2/3)dt. So t = [3tan(theta)]/2 and dt = [3sec^2(theta)]/2 Now plug this back into the original integral. int sqrt(9+4t^2)dt = int sqrt[(9/4)tan^2(theta)+9] *(3/2)sec^2(theta)d(theta). Is this making sense?
{"url":"http://mathhelpforum.com/calculus/11717-antiderivative.html","timestamp":"2014-04-20T04:53:03Z","content_type":null,"content_length":"40170","record_id":"<urn:uuid:be6a8db3-4782-4980-92d5-9962f79b26d9>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00464-ip-10-147-4-33.ec2.internal.warc.gz"}
Spherical Measures without Spherical Trigonometry A common misconception suggests that it is necessary to understand spherical trigonometry in order to calculate properties of a spherical surface. That this is false can be demonstrated by deriving the formula for distance and direction on a sphere without any knowledge of spherical trigonometry. However, some knowledge of the properties of a sphere is required. Suppose one wants to know the great circle distance between two locations given by their names in latitude and longitude. Call these P[1 ](f[1], l[1]) and P[2 ](f[2], l[2]). Using plane trigonometry one can compute the location of these points in a three dimensional rectangular (Euclidean) coordinate system, centered at the origin of the sphere. This yields the locations as vectors in space as V[1] (X[1], Y[1], Z[1]) and V [2] (X[2], Y[2], Z[2]). The straight line distance between these two points is given by the well known formula: D[12] = [(X[1]-X[2])^2 + (Y[1]-Y[2])^2+ (Z[1]-Z[2])^2]^1/2 . This is the chord distance between the two points, penetrating the sphere. The chord distance between the two points can be considered to be one side of a triangle, with origin at the center of a circle, whose other two sides have a length equal to the radius of the sphere. Take this radius to be one unit, and call it R. Divide the triangle into two equal parts, by constructing the perpendicular to the chord from the center. Now we know two sides of a right triangle, and the length of the third side (from the center to the chord) can be calculated. From this the angle at the center can be calculated, using any of the trigonometric formulae relating points in a right triangle. Twice this angle is the angle that spans the distance between the two points, that is, it is the angular separation of the points on the sphere. In more detail, use standard spherical coordinates: X[1] = R cos f[1] cos l[1] Y[1] = R cos f[1] sin l[1] Z[1] = R sin f[1] X[2] = R cos f[2] cos l[2] Y[2] = R cos f[2] sin l[2] Z[2] = R sin f[2] The length, L, of the line from the center of the circle to the perpendicular to the chord is, using C as the length of the chord, L = [R^2 - (C/2)^2]^1/2 . Now, from elementary trigonometry, C/2L = tan a, L/R = cos a, C/2R = sin a. Doubling a gives the angle subtended at the center of the circle, and at the center of the sphere. Use the simplest (the third) of these relations to calculate the distance on a sphere. Near Santa Barbara, CA: f[1 ]= 34°, l[1] = -120° . Near Zürich, CH: f[2 ]= 47.5°, l[2] = 8.5° X[1] = R cos f[1] cos l[1 ]= -0.414529 Y[1] = R cos f[1] sin l[1]= -0.717978 Z[1] = R sin f[1]=0.559193 X[2] = R cos f[2] cos l[2 ]= 0.668169 Y[2] = R cos f[2] sin l[2]= 0.099859 Z[2] = R sin f[2]= 0.737277 C = [(-.414529 - .668169)^2 + (-.717978 - .099859)^2 + (.559193 - .737277)^2]^1/2= 1.368491, L = 0.729252 but is not really needed, and continue to use R = 1. Then, from a = sin^–1(C/2), a = 43.176 degrees of arc. Twice this value gives 86.35 degrees for the length of the arc on the circle. Now take R= 6378 kilometers to approximate the radius of the sphere. Then the distance between Santa Barbara and Zürich, on the spherical assumption, is p*a*R/90 or 9612 km. No spherical trigonometry has been used. But if one is familiar with vector algebra the result can be obtained more immediately, namely by calculating the angle between the two vectors directly as 2a = cos^-1(V[1]·V[2]/|V[1]||V[2]|). For the direction of the second point from the first recall that directions are measured between great circles. In this case we need the direction of the great circle from Santa Barbara to Zürich with respect to the reference direction from North. The plane passing through Santa Barbara, the North Pole, and the origin, cuts the sphere along the North-South great circle and has equation -.717978 X + .414529 Y = 0, from the three-point equation of a plane. The plane spanning the region between Santa Barbara, Zürich, and the origin intersects the sphere along the connecting great circle and has equation -.5851891 X + .6792581 Y + .4383362 Z = 0. The angle between these two planes is 31.9 degrees East of North. (see C. Oakley, 1954, “Analytic Geometry”, Barnes & Noble, pages 179, 185). The area of a polygon on a sphere is equal to R^2{p(n-2)+Sb[i]} where the sum is taken over the n interior angles. Again, no spherical trigonometry has been used.
{"url":"http://www-personal.umich.edu/~copyrght/image/solstice/win01/tobler/tobler.html","timestamp":"2014-04-18T18:15:02Z","content_type":null,"content_length":"11606","record_id":"<urn:uuid:1fd79c93-0dfb-4c91-8529-3ddab078a50d>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00010-ip-10-147-4-33.ec2.internal.warc.gz"}
st: re: how to run zandrews as a postestimation command [Date Prev][Date Next][Thread Prev][Thread Next][Date index][Thread index] st: re: how to run zandrews as a postestimation command From Kit Baum <baum@bc.edu> To statalist@hsphsun2.harvard.edu Subject st: re: how to run zandrews as a postestimation command Date Wed, 16 Apr 2008 15:21:07 -0400 Purba said I would like to test for the structural stability of an estimation of the form y=a+bx+error. Specifically want to test using the "zandrews" command if "b" which is the coefficient of x, changes overtime and exactly where the breakpoints are. The data is 40 years time series. Can anyone please help. It seems like the "zandrews" command only allows me to select one variable to test in this case I can pick either y or x but there seems to be no method to specify the actual equation that I am zandrews is a unit root test that allows for structural breaks. Unit root tests are applied to individual time series (or panels of individual time series). You could, of course, apply it to the residuals of a regression that you have run, but that does not sound like what you're trying to do. If you want to test for structural stability of a relationship, then you should use a test for structural stability. -qll- will do that, but it will not identify where breaks occur (only if the relationship appears to violate the null of structural stability). To test for breaks in a y on x regression, it is usual to compute split-sample estimates (moving the breakpoint through, for instance, the interior 85% of the sample), allowing for a break in every feasible period and keeping track of which period registers the lowest sum of squared errors. See the code for -clemio1-, which does something like that. Kit Baum author of zandrews, qll, clemao1 Kit Baum, Boston College Economics and DIW Berlin An Introduction to Modern Econometrics Using Stata: * For searches and help try: * http://www.stata.com/support/faqs/res/findit.html * http://www.stata.com/support/statalist/faq * http://www.ats.ucla.edu/stat/stata/
{"url":"http://www.stata.com/statalist/archive/2008-04/msg00690.html","timestamp":"2014-04-16T13:29:08Z","content_type":null,"content_length":"6767","record_id":"<urn:uuid:b3650d3f-e3cb-4f1a-90be-63f675846916>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00281-ip-10-147-4-33.ec2.internal.warc.gz"}
Satisfying a Zeta Jones For a vivid picture of prime fetishism, check out Chris Caldwell's encyclopedic and surprisingly addictive Prime Pages (www.utm.edu/research/primes). Among the contents: a list of the largest known primes (updated weekly), a primality test for numbers smaller than 2^53 - 1, and abundant prime trivia ("17 was the original title of the Beatles song 'I Saw Her Standing There' "). Over at the Aesthetics of the Prime Sequence (2357.a-tu.net), a counter reels off primes as you listen to spooky minimalist compositions derived by feeding prime numbers into an algorithm, or take a prime intuition test (a crude de facto autism test, perhaps Oliver Sacks once recounted the bizarre case of autistic twin brothers whose private language consisted of exchanging primes of up to 20 digits). Prime number theory dates to 350 B.C., when Euclid proved that there are an infinite number of primes. But it was Karl Friedrich Gauss, a future teacher of Riemann's, who in the late 18th century laid the groundwork for the latter's breakthrough. At 14, Gauss observed that while individual primes are randomly dispersed, there is an overall pattern to their density: Not only do they thin out the higher you count, the proportion of primes seems to follow a logarithmic function. (For Gauss, according to Sabbagh's book, counting primes was a pastime it's said that by the time he died, he'd gotten to 3 million.) His conjecture, that the number of primes less than any given number n is approximately n/log n (proved more than a century later in 1896), is called the Prime Number Theorem. Gauss's formulation provided only an estimate. Riemann, using the zeta function, derived an exact formula for the number of primes smaller than any given number. The Swiss mathematician Leonhard Euler had discovered that the zeta function, which we defined above as an infinite sum involving all the natural numbers (1, 2, 3, . . . ), could also be expressed as an infinite product involving just the prime numbers. This was, as du Sautoy writes, "the first sign that the zeta function might reveal unexpected links between seemingly disparate parts of the mathematical canon." Riemann's insight was to plug complex numbers into the zeta function complex numbers have real and imaginary parts, the latter being numbers that can be expressed in terms of i, where i is the square root of Related Stories More About These suspicious-sounding numbers may seem like a flight of mathematical fancy they were first "imagined" in the 16th century and found to have enormous computational significance, but even mathematicians needed a few hundred years to wrap their heads around the concept, just as it took the ancient Greeks a while to get used to the initially horrifying idea of fractions. (Barry Mazur's recent Imagining Numbers offers an eccentric road map through this abstract terrain.) Riemann relocated the enigma of prime distribution to the complex plane (a 2-D plane with the x-axis real and the y-axis imaginary). This magical shift, which du Sautoy likens to a beyond-the-looking-glass maneuver, revealed an astonishing regularity. The complex numbers for which the zeta function equals zero are called the Riemann zeros; the frequency of the Riemann zeros mirrors the frequency of the primes. Riemann's famous hypothesis, which he declared sehr wahrscheinlich ("very probable"), was that, mapped on the complex plane, these solutions fall on a straight line to be exact, the vertical line that runs through the point 1/2 on the real axis. Convoluted as it sounds, this gloss on the problem grossly simplifies the heady mathematics at its core. All three new books are careful not to outpace the general reader. Du Sautoy, a professor at Oxford, provides a panoramic history of prime-number crunching, rich with anecdote and unfailingly patient with the mathematical fine points. Sabbagh, a documentary producer, uses the R.H. to tunnel into the mathematical mind, folding in copious interviews with number theorists. His approach to the math is at once reassuring and deflationary. Chapter 16 opens: "After fifteen chapters of a book on the Riemann Hypothesis, I have to break some bad news to you. You know almost nothing about the Riemann Hypothesis compared with what there is to know." Derbyshire, a National Review columnist, has written the most mathematically detailed of the trio. Those looking for a quick, lucid R.H. initiation should consult Keith Devlin's evocative chapter in The Millennium Problems (Basic Books, 2002), which also takes on the other six Clay puzzles. Like many other conjectures, the Riemann Hypothesis could conceivably be disproved by brute tenacity repeatedly testing it until a counterexample is found. Indeed, the R.H. has been checked and holds true for the first 100 billion Riemann zeros. At the AT&T Labs in Princeton, supercomputers keep cranking out zeros, an alarm ready to go off if and when the first zero strays from the so-called critical line. You'd think this would count as overwhelming evidence in favor of the hypothesis, but mathematics is a science of absolutes, insisting on the concreteness of universal truth as if to compensate for some of its more vaporous abstractions. Du Sautoy says in an e-mail interview, "For the other sciences, evidence in the lab is everything. In the mathematician's lab, such evidence can be completely misleading." In other words, extrapolation is a fool's game when you're dealing with infinity. "The primes are a malicious bunch of numbers," du Sautoy continues. "They're the masters of disguise. People have made conjectures about primes supported by billions of pieces of computer evidence, but theoretical analysis has later revealed the evidence to be a charade."
{"url":"http://www.villagevoice.com/2003-04-15/art/satisfying-a-zeta-jones/2/","timestamp":"2014-04-20T17:21:58Z","content_type":null,"content_length":"110555","record_id":"<urn:uuid:4d1067dc-d817-4571-a9a5-93e9a242e332>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00332-ip-10-147-4-33.ec2.internal.warc.gz"}
Comments on It's the Top Talent that Helps You WinBob Morris commented on 'It's the Top Talent that Helps You Win' TypePad 2009-07-24T20:11:12Z Gerhard Gschwandtner http://blog.sellingpower.com/gg/ tag:typepad.com,2003:http://blog.sellingpower.com/gg/2009/07/mondays-video/comments/atom.xml/ tag:typepad.com,2003:6a011571fbc6ed970b012876e25afd970c 2010-01-16T18:02:04Z 2010-01-16T18:02:04Z Bob Morris Mary Delaney, President of Personified, a division of CareerBuilder has hired thousands of salespeople during her career. ------------------------------------- VERY UNLIKELY... <p>Mary Delaney, President of Personified, a division of CareerBuilder has hired thousands of salespeople during her career.<br /> -------------------------------------<br /> VERY UNLIKELY</p> <p>Let&#39;s assume the number she hired is 2000 over 25 years or 80 a year. Let&#39;s assume she interviewed 7 people to hire one. So she spent 30 minutes on each doing a &quot;screening&quot; interview(3.5 hours)and then 3 hours with the top two (6 hours)and and 1 additional hour with the hired candidate. That comes to 10.5 hours per hire x 80 hires = 840 hours a year.</p> <p>Now let&#39;s assume she worked 48 weeks a year (2 weeks vacation and 2 weeks of holidays, sick days etc). 48 weeks x 40 hours = 1920 hours of which 840 (44%)were spent on hiring. </p> <p>Perhaps you meant to say she PLACED (got jobs for)thousands of people in her career. Then the 2000 number is believable assuming she was a placer/recruiter. Many in our profession make 2 placements a week. In fact one month I made 16. But the time spent interviewing the candidate was less than an hour. </p>
{"url":"http://blog.sellingpower.com/gg/2009/07/mondays-video/comments/atom.xml","timestamp":"2014-04-20T23:28:37Z","content_type":null,"content_length":"3456","record_id":"<urn:uuid:35fdf42e-216a-4196-82a0-86dd1b7ed9e6>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00284-ip-10-147-4-33.ec2.internal.warc.gz"}
Glendale, CO SAT Math Tutor Find a Glendale, CO SAT Math Tutor ...I especially love working with students who have some fear of the subject or who have previously had an uncomfortable experience with it.I have taught Algebra 1 for many years to middle and high school students. We have worked on applications and how this relates to things in real life. I realize that Algebra is a big step up from the math many have worked at previously with great 7 Subjects: including SAT math, geometry, GRE, algebra 1 ...I've used various versions of Excel including 2003, 2004 (on a MacIntosh), 2007, and 2010 for a variety of business, engineering, and home applications. If you need a good solid introduction to the basic capabilities of Excel, or you need to know how to do more advanced things in Excel, give me ... 18 Subjects: including SAT math, calculus, geometry, statistics ...I enjoy working with students of all ages, and strive to adapt to the learning style of each individual student. I look forward to working with you, or your children!In June 2013 I completed a Bachelor of Science in physics from University of California, Santa Barbara. The required math courses... 13 Subjects: including SAT math, calculus, physics, geometry ...The statistic course also let me skilled at SPSS. I have tutored graduate level statistic in CO. I am now in the master program of accounting. 34 Subjects: including SAT math, reading, Chinese, ACT Math ...I have my Colorado Secondary Teaching License, 7-12 grade. Currently, I teach developmental math classes at a Community College and College Algebra at a University. I am also getting my Master's Degrees in Statistics, Probability and Optimization. 26 Subjects: including SAT math, calculus, geometry, ASVAB Related Glendale, CO Tutors Glendale, CO Accounting Tutors Glendale, CO ACT Tutors Glendale, CO Algebra Tutors Glendale, CO Algebra 2 Tutors Glendale, CO Calculus Tutors Glendale, CO Geometry Tutors Glendale, CO Math Tutors Glendale, CO Prealgebra Tutors Glendale, CO Precalculus Tutors Glendale, CO SAT Tutors Glendale, CO SAT Math Tutors Glendale, CO Science Tutors Glendale, CO Statistics Tutors Glendale, CO Trigonometry Tutors Nearby Cities With SAT math Tutor Aurora, CO SAT math Tutors Bow Mar, CO SAT math Tutors Cherry Hills Village, CO SAT math Tutors Columbine Valley, CO SAT math Tutors Denver SAT math Tutors Edgewater, CO SAT math Tutors Englewood, CO SAT math Tutors Greenwood Village, CO SAT math Tutors Lakeside, CO SAT math Tutors Littleton City Offices, CO SAT math Tutors Littleton, CO SAT math Tutors Lone Tree, CO SAT math Tutors Lonetree, CO SAT math Tutors Lowry, CO SAT math Tutors Sheridan, CO SAT math Tutors
{"url":"http://www.purplemath.com/Glendale_CO_SAT_Math_tutors.php","timestamp":"2014-04-21T00:16:49Z","content_type":null,"content_length":"24036","record_id":"<urn:uuid:009df911-1aaf-4686-adbc-9328f12f9735>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00293-ip-10-147-4-33.ec2.internal.warc.gz"}
Next: Maximum Group Velocity Up: Applications in Vibrational Mechanics Previous: Longitudinal and Torsional Waves The equations of motion of a stiff plate are the (2+1)D generalization of those of a beam. We assume the plate to lie, when at rest, in the plane, and to be of thickness ; the deflection of the plate from its equilibrium state is assumed to be perpendicular to the plane. The plate material has density , as well as Young's modulus and Poisson's ratio , all of which are assumed, for the sake of generality, to be smooth positive functions of and . In particular, must be less than one-half. The classical development depends on neglecting rotational inertia effects and makes various assumptions analogous to the ``plane sections remain plane and perpendicular to the neutral axis'' hypothesis that was used as the basis for the Euler-Bernoulli beam model [77]. The resulting equation of motion [6,113] can be written as and used . If the material parameters and the thickness are constant, then which is easily seen to be a direct generalization of (5.2). As such, we expect to find the same anomalous behavior of the resulting propagation velocities, which can become infinitely large in the high-frequency limit. Numerical integration of these equations via a waveguide mesh proceeds along exactly the same lines as in the case of the Euler-Bernoulli beam; in particular, we find a restriction on the space step/time step ratio similar to those that resulted in §5.1.2. Because the development is so similar to the (1+1)D case, we will proceed directly to the more refined model of plate motion, which is a direct generalization of the Timoshenko theory for beams. First proposed by Mindlin, the model [77,120], can be written as system of eight PDEs [173]: Here, we have written where is the transverse displacement of the plate, and is the pair of angles giving the orientation of the sides of a deformed differential element of the plate with respect to the perpendicular. (In the classical theory, for which cross-sections of the plate are assumed to remain parallel to the plate normal, we have .) In addition, we have the shear forces and moments , which are the (2+1)D generalizations of and . The system (5.31)-(5.32) as a whole is known as Mindlin's system, although it is more commonly written s a system of three second-order equations in the variables , and [77]. We have written Mindlin's system so that it is easy to see the decomposition into two separate subsystems, one in and the other in , with the coupling occurring via constant-proportional terms in , , and . In particular, subsystem (5.31) is similar to the lossless parallel-plate system (see §4.4), except for the coupling terms. It is easy to see that this system is not, as written, symmetric hyperbolic. It is easy to symmetrize it by taking sums and differences of (5.32c) and (5.32d), in which case we get, in terms of the variable , where the stands for zero entries, and The system defined by (5.33) is lossless, due to the anti-symmetry of . Also, note that is positive definite (recall that is positive, and less than one-half), but not diagonal^; this did not come up in any of the systems we have looked at previously, and will have interesting consequences in the circuit representations in the next section. Next: Maximum Group Velocity Up: Applications in Vibrational Mechanics Previous: Longitudinal and Torsional Waves Stefan Bilbao 2002-01-22
{"url":"https://ccrma.stanford.edu/~bilbao/master/node172.html","timestamp":"2014-04-24T22:13:50Z","content_type":null,"content_length":"22690","record_id":"<urn:uuid:d6c755ba-303a-47ac-a61a-2b513bf3a1cb>","cc-path":"CC-MAIN-2014-15/segments/1398223206770.7/warc/CC-MAIN-20140423032006-00008-ip-10-147-4-33.ec2.internal.warc.gz"}
so... n/(n+1) = 1-[1/(n+1)] ? I have tried to work out the proof of this myself, but I can't seem to get the two to equal each other/manipulate the first to equal the second. Could someone point me towards what i have to do to the first fraction? Should be an easy thing but I can't put my finger on it. The easier way of proving this would be to work with the second fraction, then hopefully come up with how you can reverse the procedure. Why we want to work with the end result is because the first fraction is just one fraction, which means to get from [tex]1-\frac{1}{n+1}[/tex] to [tex]\frac{n}{n+1}[/tex] we need to start by taking the common denominator, that is, putting the fraction all under the same denominator.
{"url":"http://www.physicsforums.com/showpost.php?p=3818005&postcount=4","timestamp":"2014-04-16T13:54:40Z","content_type":null,"content_length":"8426","record_id":"<urn:uuid:87060ccd-5e4a-4507-8b5f-8115982612b4>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00343-ip-10-147-4-33.ec2.internal.warc.gz"}
${\bf METHOD}$ ${\bf 1}$ First she applied Brahmagupta's formula for the area of a quadrilateral: $ \delta = \sqrt{(s - a)(s - b)(s - c)(s - d) - abcd \cos^2 \beta}$ where $ s = \frac{1}{2}(a + b + c + d)$ and $\beta = \frac{1}{2}(A + C)$ or $ \frac{1}{2}(B + D)$. Hence the area is clearly the greatest when $abcd \cos^2 \beta$ is least. Since $\cos^2 \beta$ is always positive, this value is least when $\beta $ is $90$ degrees as $\cos 90^{\circ} = 0$. Hence $\ frac{1}{2}(A + C) = 90 $ and so $A + C = 180$ showing that the opposite angles in the quadrilateral add up to $180^{\circ}$ and so the area of a quadrilateral with fixed lengths of sides is greatest when it is cyclic. This method gives a proof of the required result but you have to assume Brahmagupta's formula and Sue's second method uses only the formula for the area of a triangle. ${\bf METHOD}$ ${\bf 2}$ The area of the quadrilateral $ABCD$ can be expressed as the sum of the areas of triangle $ABD$ and $BCD$. Let the area of $ABCD$ be $\Delta$. Then \[ \Delta = \frac{1}{2} ad\sin A + \frac{1}{2} bc \ sin C \] Thus $$\frac {d\Delta}{dA} = \frac{1}{2} ad\cos A + \frac{1}{2} bc \cos C \frac {dC}{dA}.\quad (1)$$ From the Cosine Rule, $$a^2 + d^2 - 2ad\cos A = b^2 + c^2 - 2bc\cos C,$$ Hence, (differentiating both sides with respect to $A$), $$2ad\sin A = 2bc\sin C \frac {dC}{dA}.\quad (2)$$ From (1) and (2), $$\eqalign{ \frac{d\Delta}{dA} = \frac{1}{2} ad\cos A + \frac{1}{2} bc \cos C{ad\sin A \over bc\sin C}\\ = \frac {ad\cos A \sin C + \sin A \cos C}{2\sin C}\\ = \left(\frac{ad}{2}\ right) \frac{\sin (A + C)}{\sin C} }.$$ Hence, for the maximum area, $\sin (A + C)=0$ and $A+C=\pi$ which makes the quadrilateral cyclic. We can show that this gives the maximum and not the minimum value of $\Delta$ by finding the second derivative.
{"url":"http://nrich.maths.org/317/solution?nomenu=1","timestamp":"2014-04-19T17:08:37Z","content_type":null,"content_length":"5473","record_id":"<urn:uuid:254d014b-cdd5-4008-b4fa-cd410f57ad20>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00473-ip-10-147-4-33.ec2.internal.warc.gz"}
OpenGL SDK: Simple Tessellation Shader OpenGL SDK: Simple Tessellation Shader By Evan Hart, posted Jan 10 2013 at 01:52PM In the last OpenGL SDK post I introduced the new SDK, now let me introduce our first new sample, “Simple Tessellation Shader”. As the name implies, it demonstrates the minimum parts necessary to use tessellation shaders in OpenGL. The sample only demonstrates tessellating the quad domain (rather than the triangular domain), and it shows two modes: one with a flat plane and one where the plane is wrapped into a torus. About Tessellation Shading OpenGL recently added the concept of tessellation shading to its definition of the graphics pipeline. It expands the pipeline by two additional programmable stages and a single fixed-function stage that handle the specification and evaluation of a tessellated surface. It all fits into the pipeline like the diagram below. Diving in a bit deeper, the TCS is one of the more complex shader stages in the pipeline due to all of the options it controls. The TCS specifies the number of vertices it creates in a patch via a layout statement. (e.g. layout (vertices=1) out ) In the TCS, each output will have its own thread to generate the output vertices. One thing that makes the TCS special, is that the threads can all see all of the input data, and at the end of the shader, they can share their results to allow group computations for items like patch-wide tessellation levels. (These are more advanced features that we’ll skip in this sample) Moving on the tessellator, it is very much a black-box from the point of view of the graphics programmer. All it really does is generate a sequence of u.v coordinates and an associated topology map to control how the patch is converted to triangles. While the amount of tessellation is controlled by data output by the TCS, the TES declares an input which controls the pattern of triangles generated. Once the points are generated, the tessellator launches one thread per point to the TES. Finally, the TES evaluates and transforms points into clip space to define the surface. It has as input the output from the TCS defining the patch as a whole, and a u,v coordinate from the tessellator defining where on the surface this particular point should lie. In many ways, it operates much like a vertex shader with an additional set of data that is uniform per patch rather than uniform for all patches. Sample Shader Overview The first shader to consider in our sample today is the vertex shader. It has been descriptively named passthrough_vertex.glsl, as it does no real work. The vertex shader simply declares a single input that defines the location of the patch, and it copies it through to the next shader stage. Had this been a more complex example, work such as animation transforms (skinning for skeletal animation) would be best applied in the vertex shader, because it would create a final object-space position. Additionally, since patches likely share adjacent vertices, just like triangles, performing shared work here can reduce redundant computations. The TCS shader in this example is not much more complex than the vertex shader. Because the sample just tessellates to a user-controlled constant, there are no real computations to be performed at the patch level. The big thing for users unfamiliar with tessellation shaders to notice is the introduction of a whole bunch of new built-in variables. The gl_TessLevelOuter built-in controls the tessellation level for the four outer edges of the patch. In the diagram below, OL0 corresponds to element 0 of the array of out edge tessellation factors. Similarly, the gl_TessLevelInner[0] corresponds to IL0 marking and represents the interior divisions in the u parameter space. (0,1) OL3 (1,1) | | | +--------+ | | | IL0 | | OL0| |IL1 | |OL2 | | | | | +--------+ | | | (0,0) OL1 (1,0) Now that the TCS has setup how finely to tessellate the patch, the TES is where all the real interesting work occurs. The first thing to notice in the TES is the input layout statement. It defines that this TES expects to be operating on a quadrilateral grid parameterized by u and v, that the tessellation subdivisions will happen at equal steps, and that the resulting triangles should be ordered in the standard OpenGL winding of counter-clockwise. The input array gl_in holds the array of patch parameters. Since our TCS only had a single parameter defining the center of the patch, this is the only data it contains. The built-in gl_TessCoord contains the u,v parameters of where on the patch this point belongs. This simple plane shader is just expanding in the x and y dimensions from [-1.0, -1.0] to [1.0, 1.0]. As a result, the math required to modify the parametric u,v coordinates into offsets is clearly very simple. Finally, the position and an outward facing normal are transformed to produce both eye-space and projection-space values. A more complex version of the TES is also included in the sample. This version uses sin and cos to wrap he u,v space around to form a torus. It also demonstrates how you may need to perform additional math to compute proper normal for the surface generated. (In the case of the torus, it is just more sin and cos calls) Beyond Shaders Outside the tessellation shaders there are a couple other bits of the sample worth noting. First is the call to glPatchParameteri. This is what sets the number of input vertices per invocation of the TCS. Since the sample is using one point to define a patch, the value is set to 1. Next, rather than having all shader stages linked into a single shader program. The shaders are compiled as separate shader objects and bound into a shader pipeline. This allows switching the torus TES in and out with the plane TES without changing any of the others. It offers a moderate amount of convenience, especially for those used to a DirectX-centric pipeline. Finally, some readers might have noticed that my shaders were using user-defined uniforms, with no matching declarations in the shader code above. This is because the uniforms were all centralized into a single uniform buffer object defined in an include file. Using the recent GL_ARB_shading_language_include, the shaders were all able to share the same definitions. Additionally, with a bit of macro and typing work, the same include defines a struct with the necessary alignments to match the uniform buffer. With this, the uniform buffer update is as a single copy with glBufferSubdata from the program struct. While this is clearly a very simple sample, it should get those interested in learning tessellation shaders with OpenGL a start. Also, this sample gives us a chance to ease into the new SDK. We’ll be back soon with more introductory material for new features like this, as well as some material to dive deeper into what you can do with OpenGL today. • You can download the sample code here • If you have questions or comments about the sample, please discuss it on this DevTalk thread. Thanks!
{"url":"https://developer.nvidia.com/content/opengl-sdk-simple-tessellation-shader?display[%24ne]=defaultnppnppnsight-visual-studio-edition-22-new-featurestegra-hardware-sales-inquiries","timestamp":"2014-04-23T15:22:13Z","content_type":null,"content_length":"26609","record_id":"<urn:uuid:e553ff82-cab5-4f5d-a728-a5a90751253d>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00398-ip-10-147-4-33.ec2.internal.warc.gz"}
Got Homework? Connect with other students for help. It's a free community. • across MIT Grad Student Online now • laura* Helped 1,000 students Online now • Hero College Math Guru Online now Here's the question you clicked on: please help me with Q5 gain and Q6 • one year ago • one year ago Your question is ready. Sign up for free to start getting answers. is replying to Can someone tell me what button the professor is hitting... • Teamwork 19 Teammate • Problem Solving 19 Hero • Engagement 19 Mad Hatter • You have blocked this person. • ✔ You're a fan Checking fan status... Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy. This is the testimonial you wrote. You haven't written a testimonial for Owlfred.
{"url":"http://openstudy.com/updates/5089a309e4b077c2ef2deb19","timestamp":"2014-04-18T16:12:01Z","content_type":null,"content_length":"39497","record_id":"<urn:uuid:5ac959de-f6f3-4b52-a08d-68d651ac06a5>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00182-ip-10-147-4-33.ec2.internal.warc.gz"}
Mathematical explanations come in two general varieties: mathematical explanations of mathematical facts and mathematical explanations of physical facts, henceforth called internal and external explanations respectively. The debate in internal explanations boils down to identifying and understanding the difference between a demonstration of a mathematical fact, and an explanation of said fact. Meanwhile, the debate within external explanations attempts to determine whether or not mathematics can genuinely explain physical, non-mathematical phenomena. One important question that has received little attention is how internal and external mathematical explanations relate to each other, if at all. It seems clear that if one type of explanation is related to the other it would be external explanations that would depend on internal explanations. Mark Steiner (1978) was the first to propose such a relationship. Steiner claimed that an external explanation is such that if you ‘remove the physics’ from the explanation, what remains is an internal explanation of a mathematical truth. The suggestion here is that external explanations owe some of their explanatory power to the existence of a good internal explanation. In this paper I will survey several different theories on internal explanation and examine whether or not Steiner’s suggestion of dependence is tenable. Recent accounts of external explanation seem to assume that there is an intimate relationship between internal and external explanation. I aim to show that such an assumption is unjustified as not all accounts of internal explanation are suitable for such a dependence.
{"url":"http://cms.math.ca/Reunions/hiver11/abs/hpm.html","timestamp":"2014-04-16T16:13:24Z","content_type":null,"content_length":"25645","record_id":"<urn:uuid:c3f80bbd-64d6-4d52-bfd8-6f46495ddffa>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00109-ip-10-147-4-33.ec2.internal.warc.gz"}
Biomechanics is the science concerned with the internal and external forces acting on the human body and the effects produced by these forces. Kinematics is the branch of biomechanics about the study of movement with reference to the amount of time taken to carry out the activity. Distance and displacement Distance (length of the path a body follows) and displacement (length of a straight line joining the start and finish points) are quantities used to describe a body's motion. e.g. in a 400m race on a 400m track the distance is 400 metres but their displacement will be zero metres (start and finish at the same point). Speed and velocity Speed and velocity describe the rate at which a body moves from one location to another. Average speed of a body is obtained by dividing the distance by the time taken and average velocity is obtained by dividing the displacement by the time taken e.g. a swimmer in a 50m race in a 25m length pool who completes the race in 71 seconds - distance is 50m and displacement is 0m (swimmer is back where they started) so speed is 50/71= 0.70m/s and velocity is 0/71=0 m/s • Speed and Velocity = distance travelled ÷ time taken Acceleration is defined as the rate at which velocity changes with respect to time. • average acceleration = (final velocity - initial velocity) ÷ elapsed time From Newton's 2nd law: • Force = Mass x Acceleration • Acceleration = Force ÷ Mass If the mass of a sprinter is 60kg and the force exerted on the starting blocks is 600N then acceleration = 600 ÷ 60 = 10 msec² Acceleration due to gravity Whilst a body is in the air it is subject to a downward acceleration, due to gravity, of approximately 9.81m/s² Vectors and scalars Distance and speed can be described in terms of magnitude (amount) and are known as scalars. Displacement, velocity and acceleration require magnitude and direction and are known as vectors. Components of a vector Let us consider the horizontal and vertical components of velocity of the medicine ball in Figure 1. Figure 2 indicates the angle of release of the medicine ball is 35° and the velocity at release as 12 metres/second. • Vertical component Vv = 12 x sin 35° = 6.88 m/sec • Horizontal component Vh = 12 x cos 35° = 9.82 m/sec Let us now consider the distance the medicine ball will travel horizontally (its displacement). Distance (D) = ((v² × sinØ × cosØ) + (v × cosØ × sqrt((v × sinØ)² + 2gh))) ÷ g Where v = 12, Ø = 35, h = 2m (height of the shot above the ground at release) and g = 9.81 • D = ((12² × sin35 × cos35) + (12 × cos35 × sqrt((12 × sin35)² + 2x9.81x2))) ÷ 9.81 • D = 16.22m The time of flight of the shot can be determined from the equation: • Time of flight = Distance (D) ÷ velocity (Vh) • Time of flight = 16.22 ÷ 9.82 = 1.65 seconds Uniformly accelerated motion When a body experiences the same acceleration throughout an interval of time, its acceleration is said to be constant or uniform and the following equations apply: • Final velocity = initial velocity + (acceleration x time) • Distance = (initial velocity x time) + (½ x acceleration x time²) Moment of force (torque) The moment of force or torque (τ) is defined as the application of a force at a perpendicular distance to a joint or point of rotation. Torque (τ = rFsin θ ) depends on three quantities: • the length of the lever arm connecting the axis to the point of force application (r) • the force applied (F) • the angle between the force vector and the lever arm (sin θ) Angular Kinematics Angular distance and displacement When a rotating body moves from one position to another, the angular distance through which it moves is equal to the length of the angular path. The angular displacement that a rotating body experiences is equal to the angle between the initial and final position of the body. Angular movement is usually expressed in radians where 1 radian = 57.3° Angular speed, velocity and acceleration • Angular speed = angular displacement ÷ time • Angular velocity = angular displacement ÷ time • Angular acceleration = (final angular velocity - initial angular velocity) ÷ time Angular Momentum Angular momentum is defined as: angular velocity x moment of inertia The angular momentum of a system remains constant throughout a movement provided nothing outside of the system acts with a turning moment on it. This is known as the Law Conservation of Angular Momentum. (e.g. if a skater, when already spinning, moves their arms out to the side, then the rate of spin will change but the angular momentum will stay the same). Linear Kinetics Kinetics is concerned with what causes a body to move. Momentum, inertia, mass, weight and force • Momentum: mass x velocity • Inertia: the reluctance of a body to change whatever it is doing • Mass: the quantity of matter of which a body is composed of - not affected by gravity - measured in kilograms (kg) • Weight: force due to gravity -9.81m/s² • Force: a pushing or pulling action that causes a change of state (rest/motion) of a body is proportional to mass x acceleration. It is measured in Newtons (N) where 1N is the force that will produce an acceleration of 1 m/s² in a body of 1kg mass The classification of external or internal forces depends on the definition of the 'system'. In biomechanics, the body is seen as the 'system' so any force exerted by one part of the system on another part of the 'system' is known as an internal force all other forces are external. Newton's Laws of Motion^[1] • First Law: Every body continues in its state of rest or motion in a straight line unless compelled to change that state by external forces exerted upon it. • Second Law: The rate of change of momentum of a body is proportional to the force causing it and the change takes place in the direction in which the force acts • Third Law: To every action there is an equal and opposite reaction OR for every force that is exerted by one body on another there is an equal and opposite force exerted by the second body on the Newton's law of gravitation^[1] • Any two particles of matter attract one another with a force directly proportional to the product of their masses and inversely proportional to the square of the distance between them Kinetic Energy and Power Kinetic energy is the mechanical energy possessed by a moving object. Kinetic Energy = ½ x mass x velocity² (joules) Power is defined as the rate at which energy is used or created from other forms • Power = energy used ÷ time taken • Power = (force x distance) ÷ time taken • Power = force x velocity Angular Kinetics Translation and couple A force that acts through the centre of a body result in movement (translation). A force whose line of action which does not pass through the body's centre of gravity is called an eccentric force and results in movement and rotation. Example - if you push through the centre of an object it will move forward in the direction of the force. if you push to one side of the object (eccentric force) it will move forward and rotate. A couple is an arrangement of two equal and opposite forces that cause a body to rotate. A lever is a rigid structure, hinged at one point and to which forces are applied at two other points. The hinge is known as the fulcrum. The two forces forces that act on the lever are the weight that opposes movement and a force that causes movement. For more details see the page on Levers. Bernoulli Effect If an object has a curved top and flat bottom (e.g. the wing of an aircraft), the air will have further to travel over the top of the wing than the bottom. For the two airflows to reach the rear of the wing at the same time the air flowing over the top of the wing will have to flow faster resulting in less pressure above the wing (air is thinner) than below it and the aircraft will lift. This is known as the Bernoulli effect. Referenced Material 1. HAY, J.G. (1993) The Biomechanics of Sports Technique. 4th Ed. London, Prentice-Hall International (UK) Ltd. p. 64-68 Page Reference The reference for this page is: • MACKENZIE, B. (2004) Biomechanics [WWW] Available from: http://www.brianmac.co.uk/biomechanics.htm [Accessed Associated Pages The following Sports Coach pages should be read in conjunction with this page: • Speed • Levers
{"url":"http://www.brianmac.co.uk/biomechanics.htm","timestamp":"2014-04-20T11:54:31Z","content_type":null,"content_length":"34703","record_id":"<urn:uuid:c97db785-52a6-4f25-8958-e0b5b00dfc21>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00206-ip-10-147-4-33.ec2.internal.warc.gz"}
Efficiency Measurements of Portable-Handset Antennas Authors: Darioush Agahi, William Domino Conexant Systems Inc. Newport Beach, CA Note: This article originally appeared in the June 2000 edition of "Applied Microwave & Wireless" (now out of print) Both authors now work for Skyworks Solutions Inc. In the design of wireless portable devices, antenna efficiency is a variable that can have a great effect on overall system performance, and yet may not always receive the attention it deserves. As an example, RF engineers must frequently make critical tradeoffs in receiver design in order to improve sensitivity by mere fractions of a dB, but a poor antenna efficiency can easily cause a degradation of several dB. This pitfall can occur in systems such as GSM, where many tests are performed using a cable connection to the antenna port; a handset may easily pass such tests, only to be later hampered by its antenna in the field. This paper is targeted at the very important parameter of antenna efficiency, and a measurement technique that can be used to quantify it. Antenna “efficiency” must be distinguished from antenna “gain”. Antenna gain is a directional quantity that refers to the signal strength that can be derived from an antenna relative to a reference dipole. Efficiency, on the other hand, quantifies the resistive loss of the antenna, in terms of the proportion of power that is actually radiated versus the power that is first delivered to it. It is not a directional quantity. The Model We model the antenna’s loss as a resistor placed in series with the radiation resistance, as shown in Figure 1. Since the model includes no reactances, there is an implicit assumption that the measurements must be taken at resonance. The equations to be derived later require this assumption. Figure1 Model of Antenna Loss The antenna efficiency (see appendix) is h = Note that it is immaterial whether the antenna is matched to the source resistance R . While it is certainly desirable and necessary to match the antenna in actual use, the match is not part of the problem of finding the above resistance ratio. Therefore we need only relate the radiated power to that which is transferred forward at the point shown in Figure 1. What is needed, then, is a way to effectively separate the resistances R and R by way of measurement, so that the efficiency can be calculated. The Wheeler Cap Wheeler [1] sets forth just such a method, where a hollow conductive sphere is placed over the antenna at the radius of transition between the antenna’s energy-storing near-field and its radiating far-field. This transition radius occurs at a distance of , and thus the sphere is referred to as the “radiansphere”. The role of the conductive sphere is to reflect all of the antenna’s radiation while causing minimal disturbance to the near-field. In theory, a complete sphere is appropriate for reflecting the radiation of a small dipole, which is an approximation of an isotropic antenna, while in practice a monopole with a ground plane can be capped with a half-sphere. The half-spherical “Wheeler cap” is shown in Figure 2. For 900MHz, the cap’s radius is 5.3cm. Figure 2 Half-Spherical Wheeler Cap If all of the power radiated by the antenna is reflected back by the cap and not allowed to escape, then in our model this is the equivalent of setting R to zero. By making separate S measurements with the cap in place and with the cap removed, we gather enough information to find the resistances and the antenna’s efficiency. The spherical or half-spherical cap is intended for physically small antennas; simple dipole or monopole antennas must therefore also be electrically short. Given the cap’s radius, it is not possible to fit a monopole of length /4 under it. To test such an antenna we replace the half-spherical cap with a cylindrical cap, keeping the radius at . Such a cylindrical cap is shown in Figure 3. Figure 3 Cylindrical Wheeler Cap Efficiency of Short Antennas: The Constant-Power-Loss Method For an electrically short antenna (< /10), the radiation resistance is typically small in comparison to the 50 source resistance of the measuring system. The radiation resistance of an ideal short monopole [2] is R[RAD MONO] = (2) So, for example, a 1/20-wavelength monopole, which fits comfortably under the half-spherical cap, exhibits about a 4 radiation resistance. With such a small value of R , the power lost in the resistance R is about the same whether the cap is in place or removed, that is, with zero or finite R . With the assumption of constant power loss, we can make use of S magnitude measurements with the cap on and off as follows: Cap on. The radiation resistance is zero, and the antenna reflection coefficient is measured and referred to as S [11WC. ] Cap off. The radiation resistance is that of the antenna radiating into free space, and the antenna reflection coefficient is measured and referred to as S We need only measure the magnitudes (and not the signs) of S and S . The antenna efficiency becomes h = (5) h = (6) The efficiency can therefore be found directly from the reflection coefficient magnitude measurements, without any need to actually determine R [RAD ] and R . It should still be noted that the measurements must be made at resonance, because the loss model is based on vector S values that are all-real, even though only their magnitudes are needed in the above equations. Eq. (6) appears in [3] in a survey of previous techniques. This method was likely developed to make it possible to obtain an antenna efficiency measurement using only reflectometer-type reflection-coefficient measurements, where only the magnitude of S is measured. In this case the method further depends on the close proximity of the minimum of |S | to its all-real point. For our example of a 1/20-wavelength monopole, suppose the measurements are S[11WC] = -0.966 S[11FS] = -0.823 Then the efficiency calculated from eq. (6) is h = 79.3% This antenna causes a performance penalty to the radio of 10log(79.3%) = -1.0 dB Such an efficiency is not atypical, and some antennas have been measured to be even less than 50% efficient, which corresponds to a power loss of more than 3dB. This power loss is a direct degradation of the receiver sensitivity and the transmitter output power, relative to a cable-connection test. Such a transmitter power loss would greatly degrade the handset’s battery life, and, in cellular systems that tend to be uplink-limited, it would affect the ability of the handset to obtain service in marginal areas. In a 1/8-duty-cycle GSM system operating at full transmit power of 2W with a 50% efficient antenna, the resistive heating of the antenna would amount to 1/8W! Efficiency of Moderate-Length Antennas: The Constant-Loss-Resistor Method As the antenna becomes longer, its radiation resistance increases, and the assumption of constant power loss with and without the cap breaks down. In this case a method of efficiency measurement that directly makes use of the quantities R and R is preferred. Fortunately, modern vector network analyzers can provide a direct display of the impedance of a measured device when performing a reflection coefficient measurement. So we make use of the resistance ratio in (1) rather than the power ratio: h = (7) Here the key assumption is that R itself, rather than the power lost in it, remains constant with the cap in place or removed. If desired, we can still express the efficiency in terms of the reflection coefficients: Cap on. The radiation resistance is zero, and the antenna reflection coefficient is S [11WC. ] Cap off. The radiation resistance is that of the antenna radiating into free space, and the antenna reflection coefficient is S (7) and (8) are transposed to become And the efficiency is h = (12) h = (13) Eq. (13) is actually more accurate then eq. (6) regardless of the absolute value of R (and thereby of the antenna length). Its one disadvantage lies in the fact that the signs of the (all-real) S measurements must be retained and accounted for in the calculation, making for somewhat less convenience. But it exactly reproduces the resistance ratio at any level of antenna efficiency, as opposed to eq. (6) which becomes less accurate at lower efficiencies. As an example, consider a low-efficiency monopole, where the S measurements are: S[11WC] = -0.626 S[11FS] = -0.325 Then the efficiency calculated from eq. (13) is h = 54.9% = -2.6dB while that calculated from eq. (6) is h = 32.0% = -4.9dB The value of efficiency calculated by the constant-power-loss method is unnecessarily pessimistic. This discrepancy between the methods occurs with longer antennas that inherently exhibit a large R Next we plot the calculated efficiency vs. a swept value of R , in order to further compare the two methods of calculation. Figure 4 is plotted for an R of 4 (our short-antenna case) while Figure 5 is for an R of 14 (a longer antenna). In the 4 case the curves agree down to about 75%, while in the 14 case they quickly diverge. In either case the constant-loss-resistor method is more accurate, as it agrees with the resistance ratio. This illustrates that the constant-power-loss method is accurate only for small radiation resistances and high efficiency. Figure 4 Efficiency vs. R[LOSS] for Antenna with 4W Radiation Resistance Figure 5 Efficiency vs. R[LOSS] for Antenna with 14W Radiation Resistance Making the Measurements: Practical Considerations In the above derivations it was assumed that the radiation and loss resistances are not accompanied by any reactive impedances; therefore the S measurements need to be made at the antenna’s actual resonance, as defined by the point where S is all-real. For an ideal lossless antenna and perfectly-reflecting Wheeler cap, the S measurements would be -1 with the cap on and zero with the cap off, and we need to find the points of all-real impedance that most closely approach these. They may or may not be precisely the same as where the magnitude |S | is minimized, as illustrated in Figure 6, and so it is advisable to view the measurements on a Smith chart display rather than a log-magnitude grid. These points should not be too far apart. Figure 6 Smith Chart Display of Free-Space S[11] It is especially true that the free space measurement should be done at the actual resonance instead of the antenna’s nominal operating frequency F . This is because the antenna is normally loaded down when installed on the handset, and it is expected that there should be a small shift when it is removed and placed on a different ground plane. A rule of thumb that is normally used for the maximum limit of this shift is 10% of F . De-tuning beyond this could impact the accuracy of the measurement, as it means the actual-usage environment differs too much from the measuring setup. Figure 7 Allowable De-tuning for Measurement The Wheeler cap provides a convenient and reasonably accurate method of determining antenna efficiency. For practical use it consists of a cap over a ground plane, usually of a simpler shape than the ideal half-sphere, such as a cylinder, keeping the radius. The efficiency is best determined by measurement of the antenna resistance with the cap in place and removed, each taken at the resonance defined as the all-real impedance point. The derivation of equation (1), which also appears in ref. [2] page 48, is duplicated here. In the ideal case, where there are no disturbances to the antenna, and a perfect match, the input resistance represents the dissipation loss of the antenna. This resistance represents the sum of radiation resistance and ohmic resistance. R[in] = R [RAD] + R[LOSS ](a1) [ ] Given that the peak current flowing in to the antenna is I then the average power dissipated in an antenna is Pin = ½ R[in] | I[in] |^2 (a2) Inserting (a1) into (a2) yields, Pin = ½ ( R [RAD] + R[LOSS]) | I[in] |^2 = ½ R [ RAD] | I[in] |^2 + ½ R[LOSS] | I[in] |^2 Or equivalently, P[RAD] = ½ R [RAD] | I[in] |^2 (a3) P[LOSS] = ½ R[LOSS] | I[in] |^2 (a4) Using the efficiency equation and inserting (a3) and (a4) yields, h = = = After canceling similar terms, it yields, h = 1) H. A. Wheeler, “The radiansphere around a small antenna,” Proc. Of the IRE, vol.47, pp.1325-1331, Aug. 1959. 2) W. L. Stutzman and G. A. Thiele, Antenna Theory and Design, Wiley, New York, 1981. 3) R. H. Johnston, L. P. Ager, and J. G. McRory, “A new small antenna efficiency measurement method,” IEEE 1996 Antennas and Propagation Society International Symposium, vol.1, pp. 176-179 Darioush Agahi, P.E. is director of GSM RF systems engineering at Conexant Systems Inc. in Newport Beach CA. He has 17 years of industry experience of which the last 4 years he has been with Conexant and 9 years with Motorola’s (GSM) cellular subscriber division. Darioush received his BS in electronics (1981) and MS in Medical Engineering from the George Washington University in Washington DC (1983). Also he received an MSEE from Illinois Institute of Technology in Chicago Illinois (1993) and an MBA from National University in 1997. Darioush holds ten US patents and several more pending, he is a Professional Engineer (P.E.) registered in the state of Wisconsin. William Domino is Principal Engineer, GSM RF Systems, at Conexant Systems Inc. in Newport Beach, CA, where he has been employed since 1992 in the area of digital-radio system architecture development. He received the BSEE degree from the University of Southern California in 1979 and the Master of Engineering from the California State Polytechnic University, Pomona, in 1985. His interests currently include receiver and transmitter system design for various cellular standards, as well as filter design. In these areas he has one patent issued and ten patents pending.
{"url":"http://www.rfcafe.com/references/articles/Efficiency-Measurement-Antenna-Wheeler-Cap.htm","timestamp":"2014-04-19T04:24:51Z","content_type":null,"content_length":"46466","record_id":"<urn:uuid:7148193d-17ac-4bef-be4a-4ec2b7e722c9>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00079-ip-10-147-4-33.ec2.internal.warc.gz"}
st: Interpretation of interaction term in log linear (non linear) model Notice: On March 31, it was announced that Statalist is moving from an email list to a forum. The old list will shut down at the end of May, and its replacement, statalist.org is already up and [Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index] st: Interpretation of interaction term in log linear (non linear) model From Suryadipta Roy <sroy2138@gmail.com> To statalist@hsphsun2.harvard.edu Subject st: Interpretation of interaction term in log linear (non linear) model Date Sat, 8 Jun 2013 12:12:04 -0400 Dear Statalisters, I was wondering if some one would be kind enough to clarify if I am on the right track in clarifying the coefficient of the interaction term when the dependent variable is in logarithm. The estimated model is of the form: log(Trade) = constant + 0.15dummy - 0.15x1 + 0.12dummy*x1, where dummy is (0,1) categorical variable, x1 is a continuous variable (standardized 0 - 1), and dummy*x1 is the interaction term. The result has been obtained from a fixed effects panel regression using -areg- with robust standard error option, and all the variables are statistically significant. Based on readings of Maarten's Stata tip 87: Interpretation of interactions in non-linear model, several Statalist postings, and the following link , I wanted to make sure if any of the following interpretation of the above result is correct: 1. The coefficient of "dummy" indicates that this category (dummy variable = 1) has 16% (= exp(0.15) - 1) more of "Trade" compared to the base category (dummy variable = 0). 2. The effect of being in this category on "Trade" increases when the value of x1 increases. For every standard deviation increase in x1, the effect of "dummy" increases by about 13% (exp(0.12) - 1), OR there is a statistically significant 13% increase in "Trade" to countries having more of x1 relative to countries that have one standard deviation lower value of x1, OR the effect of being in the "dummy = 1" category in a country with one standard deviation more of x1 than average is exp(0.12)*exp(0.15) = 1.31, which means that "dummy=1" category has about 31% more "Trade" than "dummy=0" category. Following suggestions elsewhere in the Statalist, I have pursued other non-linear estimation strategies (and have asked questions to that effect earlier), but there is a tradition in this literature to use log-linear models. Any suggestion is greatly appreciated. Suryadipta Roy. * For searches and help try: * http://www.stata.com/help.cgi?search * http://www.stata.com/support/faqs/resources/statalist-faq/ * http://www.ats.ucla.edu/stat/stata/
{"url":"http://www.stata.com/statalist/archive/2013-06/msg00321.html","timestamp":"2014-04-16T13:32:46Z","content_type":null,"content_length":"9933","record_id":"<urn:uuid:78b335d5-50a6-4a82-9654-8c6422a6a3ea>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00620-ip-10-147-4-33.ec2.internal.warc.gz"}
Existence of Primitive Roots December 7th 2011, 10:32 PM #1 Super Member Feb 2008 Existence of Primitive Roots By definition, the positive integers that have a primitive root are 2, 4 and integers of the form $p^t$ and $2p^t$ where p is a prime and t is a positive integer. Then why does 16 NOT have a primitive root? $16 = 2^4$ and 2 is prime and 4 is a positive integer. I'm confused... Re: Existence of Primitive Roots You are missing one key word in your definition. There exists a primitive root (mod n) if and only if n = 1, 2, 4, $p^\alpha, 2p^\alpha$, where p is an odd prime. December 8th 2011, 06:13 PM #2 Junior Member Dec 2011
{"url":"http://mathhelpforum.com/number-theory/193749-existence-primitive-roots.html","timestamp":"2014-04-18T12:11:00Z","content_type":null,"content_length":"31757","record_id":"<urn:uuid:a26804ea-8b32-4db9-9e7b-d04648c62928>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00122-ip-10-147-4-33.ec2.internal.warc.gz"}
April 2 In the last post we saw that Rpython-to-LLVM can be 200X faster than Python in a tight loop. What happens when the loop gets more complicated? This next test introduces a Vector class, and using a new decorator, the LLVM backend can translate instances of this class into the optimized LLVM vector type. @rpy.vector( type='float32', length=4 ) class Vector(object): def __init__(self, x=.0, y=.0, z=.0): self.x = x self.y = y self.z = z def __getitem__(self, index): r = .0 if index == 0: r = self.x elif index == 1: r = self.y elif index == 2: r = self.z return r def __setitem__(self, index, value): if index == 0: self.x = value if index == 1: self.y = value if index == 2: self.z = value def __add__( self, other ): x = self.x + other.x y = self.y + other.y z = self.z + other.z return Vector( x,y,z ) The new decorator is "rpy.vector( type, length )" and for best SSE performance it should be of type float32 with length 4 (even if you only use 3). Test Function: def test(x1, y1, z1, x2, y2, z2): a = Vector(x1, y1, z1) b = Vector(x2, y2, z2) i = 0 c = 0.0 while i < 16000000: v = a + b c += v[0] + v[1] + v[2] i += 1 return c Test Results: • Python2 = 51 seconds • Rpython-to-LLVM = 0.019 seconds How could LLVM be 2,680X faster than standard Python? It turns out in this case LLVM is able to optimize the while-loop by moving many operations into the "function entry" and reducing the work the while-loop needs to do (see the optimized LLVM ASM below). define float @test(float %x1_0, float %y1_0, float %z1_0, float %x2_0, float %y2_0, float %z2_0) { %0 = insertelement <4 x float> , float %x1_0, i32 0 %1 = insertelement <4 x float> %0, float %y1_0, i32 1 %2 = insertelement <4 x float> %1, float %z1_0, i32 2 %3 = insertelement <4 x float> , float %x2_0, i32 0 %4 = insertelement <4 x float> %3, float %y2_0, i32 1 %5 = insertelement <4 x float> %4, float %z2_0, i32 2 %vecadd = fadd <4 x float> %2, %5 %element = extractelement <4 x float> %vecadd, i32 0 %element3 = extractelement <4 x float> %vecadd, i32 1 %v5 = fadd float %element, %element3 %element4 = extractelement <4 x float> %vecadd, i32 2 %v7 = fadd float %v5, %element4 br label %while_loop %st_c_0.0 = phi float [ 0.000000e+00, %entry ], [ %v8, %while_loop.while_loop_crit_edge ] %st_i_0.0 = phi i32 [ 0, %entry ], [ %v9, %while_loop.while_loop_crit_edge ] %v8 = fadd float %st_c_0.0, %v7 %v9 = add i32 %st_i_0.0, 1 %v10 = icmp ult i32 %v9, 16000000 br i1 %v10, label %while_loop.while_loop_crit_edge, label %else br label %while_loop %v8.lcssa = phi float [ %v8, %while_loop ] ret float %v8.lcssa Part2: Escaping the GIL llvm-py contains an example "call-jit-ctypes.py" that shows you how to bypass the LLVM Execution Engine and instead call your compiled function via ctypes. The advantage of using ctypes over the Execution Engine is that ctypes will release the GIL and allows your Python threads to run in parallel. The next test simply calls the same function four times from four threads at the same time. Test 4 Threads: • LLVM Execution Engine = 0.086 seconds • Ctypes = 0.025 seconds As we can see in this test with 4 threads, ctypes is 3.4X faster on a quad core CPU. Note that another way to escape the GIL is the multiprocessing module, there are pros and cons for both processes and threads. Rpythonic now uses ctypes by default to call the compiled LLVM functions, so its up to you to decide if you want to take advantage of threads or not. Psyco and Unladen Swallow were the first to try to make a just-in-time compiler (JIT) for Python, but these projects have stopped, leaving standard Python with no good JIT solution. So I started investigating how hard would it be to make a JIT for Python using Rpython and LLVM. The results of my first highly experimental implementation of Rpython-to-LLVM show very fast JIT performance: 4x faster than PyPy, 200x faster than Python2, and 260x faster than Python3. Test Function def simple_test(a, b): c = 0 while c < 100000*100000: c += a + b return c The test function is simply a huge loop that adds-to and returns a 64bit integer. The test was performed on a AMD 2.4ghz Quad with 4GB of RAM, average test result times are: • Rpython-to-LLVM = 2 seconds • PyPy1.8 (with warm JIT) = 8 seconds • Python2.7.2 = 400 seconds • Python3.2.2 = 530 seconds Building The JIT The first challenge in this project was building the code that traverses the Rpython flow-graph ("flow object space") and converts it into LLVM format. For each Rpython flow-graph block a new LLVM basic-block is created, and for each operation in the block a new LLVM instruction is created. Blocks that loop and modify a variable require some extra work, these mutable variables are treated as stack allocations, and then the LLVM optimization pass PROMOTE_MEMORY_TO_REGISTER replaces the costly stack allocations with fast register memory. It is interesting to see what LLVM IR looks like for the simple function used in this test, before and after the PROMOTE_MEMORY_TO_REGISTER optimization. Raw LLVM IR define i64 @simple_test(i64 %a_1, i64 %b_1) { %st_a_1 = alloca i64 ; [#uses=2] store i64 %a_1, i64* %st_a_1 %st_b_1 = alloca i64 ; [#uses=2] store i64 %b_1, i64* %st_b_1 %st = alloca i64 ; [#uses=1] store i64 0, i64* %st %st_v2 = alloca i64 ; [#uses=4] store i64 %a_1, i64* %st_v2 br label %while_loop while_loop: ; preds = %while_loop, %entry %a_0 = load i64* %st_a_1 ; [#uses=1] %b_0 = load i64* %st_b_1 ; [#uses=1] %v0 = add i64 %a_0, %b_0 ; [#uses=1] %v1 = load i64* %st_v2 ; [#uses=1] %v2 = add i64 %v1, %v0 ; [#uses=2] store i64 %v2, i64* %st_v2 %v3 = icmp ult i64 %v2, 10000000000 ; [#uses=1] br i1 %v3, label %while_loop, label %else_return else_return: ; preds = %while_loop %0 = load i64* %st_v2 ; [#uses=1] ret i64 %0 LLVM IR (after PROMOTE_MEMORY_TO_REGISTER) define i64 @simple_test(i64 %a_1, i64 %b_1) { br label %while_loop while_loop: ; preds = %while_loop, %entry %st_v2.0 = phi i64 [ %a_1, %entry ], [ %v2, %while_loop ] ; [#uses=1] %v0 = add i64 %a_1, %b_1 ; [#uses=1] %v2 = add i64 %st_v2.0, %v0 ; [#uses=3] %v3 = icmp ult i64 %v2, 10000000000 ; [#uses=1] br i1 %v3, label %while_loop, label %else_return else_return: ; preds = %while_loop ret i64 %v2 LLVM Advantages LLVM is more than just a JIT, because LLVM IR is platform independent, it becomes the best solution for making Python extension modules that need to support all platforms and all Python versions. A classic Python extension module is written in C, and must be compiled for each Python version, each OS, and each OS type (32bit and 64bits)! (Python2+Python3+PyPy)*(Linux+OSX+Windows)*(32bits+64bits) = 18 targets. How is anybody supposed to compile their Python extension for all 18 targets? LLVM IR can be generated on any platform any bit-depth, saved to a file, and later loaded and run on any target that PyLLVM supports. PyLLVM works with Python2 and Python3; and is easily portable to PyPy using cpyext. In other words, LLVM IR can easily hit all 18 targets - no problem. Extra Advantages: • LLVM easily calls into C libraries • LLVM has a SIMD accelerated vector type • LLVM has powerful optimizations like: PROMOTE_MEMORY_TO_REGISTER • Rpython and LLVM are a natural fit Still not convinced? Read what Intel has to say about LLVM. source code requires Mahadevan's PyLLVM Restricted Python (Rpython) has all the limitations you would expect from a static language, and not very hard to adapt to. However, some key language features remain missing, like: managed attributes and operator overloading. As you will see in this post, meta-programming can easily lift these limitations, and redefine what Rpython is. The most direct type of meta-programming is to simply write Python code that generates other Python code and eval or exec the new code, I use this technique in this post. Another technique is to parse and modify byte-code directly, using a library like BytePlay. The most powerful meta-programming technique is to use PyPy's flow-graph and directly modify it. When using the interactive translator (pypy.translator.interactive.Translation) it parses the byte-code of a function into a flow-graph. The flow-graph has a very simple design that was laid out almost decade ago in October 2003 at a PyPy sprint in Berlin. The flow-graph is composed of: blocks, operations, and links. The primary feature of the flow-graph is that it is easy to modified in-place, and traverse it to find all the types of each instance. This provides a way to find the class of an instance, and introspect it to check for managed attributes and operator overloading methods like: __call__, __getitem__, __setitem__. Example - Managed Attributes class A(object): def set_myattr(self,v): self.myattr = v def get_myattr(self): return self.myattr myattr = property( get_myattr, set_myattr ) def func(arg): a = A() a.myattr = 'foo' s = a.myattr a.myattr = s + 'bar' return 1 T = pypy.translator.interactive.Translation( func ) Initial FlowGraph When a Translation instance is created it parses the byte-code into the initial flow-graph, at this stage the flow-graph needs to be static, but it is not yet required to be strict-Rpython. v0 = simple_call((type A)) v1 = setattr(v0, ('myattr'), ('foo')) v2 = getattr(v0, ('myattr')) v3 = add(v2, ('bar')) v4 = setattr(v0, ('myattr'), v3) Trying to translate the above flow-graph will fail at the annotation stage, throwing an error that "myattr" degenerated to SomeObject. The degeneration error is caused by the class using "property( get_myattr, set_myattr )" to manage the "myattr" attribute. To make this work we must do the following: 1. traverse the initial flow-graph and check all setattr/getattr operations 2. modify the flow-graph in place to make it strict-Rpython compatible 3. delete the "property" from the class. Modified FlowGraph The flow-graph below has been modified to make it strict-Rpython. The "setattr(v, myattr, foo)" operation is replaced by two operations: 1. get a pointer to the setter method 2. call the method passing "foo" v0 = simple_call((type A)) v5 = getattr(v0, ('set_myattr')) v1 = simple_call(v5, ('foo')) v6 = getattr(v0, ('get_myattr')) v2 = simple_call(v6) v3 = add(v2, ('bar')) v4 = simple_call(v5, v3) The flow-graph is now ready to pass the next steps in the translation process: annotation and rtyping. Using this same technique of operation swapping, it becomes easy to add support for operator overloading. Example - Operator Overloading class A(object): def __getitem__(self, index): return self.array[ index ] def __setitem__(self, index, value): self.array[ index ] = value def __init__(self): self.array = [ 100.0 ] def func(arg): a = A() a[0] = a[0] + a[0] return 1 Initial FlowGraph v0 = simple_call((type A)) v1 = getitem(v0, (0)) v2 = getitem(v0, (0)) v3 = add(v1, v2) v4 = setitem(v0, (0), v3) Modified FlowGraph v0 = simple_call((type A)) v5 = getattr(v0, ('__getitem__')) v1 = simple_call(v5, (0)) v2 = simple_call(v5, (0)) v3 = add(v1, v2) v6 = getattr(v0, ('__setitem__')) v4 = simple_call(v6, (0), v3) source code
{"url":"http://pyppet.blogspot.com/2012_04_01_archive.html","timestamp":"2014-04-18T19:31:52Z","content_type":null,"content_length":"85668","record_id":"<urn:uuid:37767d0c-d106-4b40-a168-ef0a92e4bdee>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00599-ip-10-147-4-33.ec2.internal.warc.gz"}
Summary: COMPLEXITY: Exercise No. 6 due in two weeks 1. (Test 99)Consider the following problem: Input: Sets S 1 ; : : : ; S k ` N where jS i j = 3 for all i. Goal: Find a set I ` f1; 2; : : : ; kg with maximum size such that S i `` S j = OE for all i; j 2 I. Give a polynomial time approximation algorithm to this problem with a constant approxima­ tion ratio. 2. (Test 99)Consider the MINIMUM STEINER TREE problem: Input: A complete graph G = (V; E), a subset of the vertices X ` V , and a length function l(e) ? 0 defined on the edges. The lengths satisfy the triangle inequality. Goal: Find a tree T = (W; F ) such that X ` W ` V , F ` E and Length(T ) = l(e) is Give an approximation algorithm for this problem whose approximation ratio is Ÿ 2. Note: If X = V then the optimal tree is a minimum spanning tree, but this is not true if X ae V . 3. Give an approximation algorithm for MAXIMUM CLIQUE whose inverse ratio is O(n= log n) where n is the number of vertices in the input graph (i.e., there is a constant c such that
{"url":"http://www.osti.gov/eprints/topicpages/documents/record/402/3766080.html","timestamp":"2014-04-20T16:24:24Z","content_type":null,"content_length":"8124","record_id":"<urn:uuid:99f39fd2-2d8f-45d1-af5a-44ecd91e3816>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00405-ip-10-147-4-33.ec2.internal.warc.gz"}
shlomif / project-euler - b287af6 Add the notes for euler_188. +The hyperexponentiation or tetration of a number a by a positive integer b, denoted by a↑↑b or ba, is recursively defined by: +Thus we have e.g. 3↑↑2 = 33 = 27, hence 3↑↑3 = 327 = 7625597484987 and 3↑↑4 is roughly 103.6383346400240996*10^12. Tip: Filter by directory path e.g. /media app.js to search for public/media/app.js. Tip: Use camelCasing e.g. ProjME to search for ProjectModifiedEvent.java. Tip: Filter by extension type e.g. /repo .js to search for all .js files in the /repo directory. Tip: Separate your search with spaces e.g. /ssh pom.xml to search for src/ssh/pom.xml. Tip: Use ↑ and ↓ arrow keys to navigate and return to view the file. Tip: You can also navigate files with Ctrl+j (next) and Ctrl+k (previous) and view the file with Ctrl+o. Tip: You can also navigate files with Alt+j (next) and Alt+k (previous) and view the file with Alt+o.
{"url":"https://bitbucket.org/shlomif/project-euler/commits/b287af69e02e0f05dbd6f2b6a45ed8a371d9d1d0?at=default","timestamp":"2014-04-20T03:33:13Z","content_type":null,"content_length":"62980","record_id":"<urn:uuid:1d94e469-4b18-4983-9c36-97bde6e05486>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00241-ip-10-147-4-33.ec2.internal.warc.gz"}
Mod(%) operator Author Mod(%) operator Ranch Hand Joined: Jan 05, Posts: 477 When taking y value as 0 in i1,it gives y1 as true.(0==0).When taking y value as 1 in i1,1 mod 2====>1/2=quotient 0.5 and reminder 0.that way I had taken.(calculator return quotient value 0.5.but compiler return quotient value 0.0 and remainder 0.). I know that mod operator will give remainder part only.But when I take 1%2 working,I always take 0 as the remainder and 0.5 as the quotient.How comw the 1%2 value is quotient 0.0 and remainder 0? Ranch Hand Joined: Jan 03, Because they are integer values, not floating point values. Posts: 490 "Computer science is no more about computers than astronomy is about telescopes" - Edsger Dijkstra Joined: Oct 13, Posts: 36499 . . . because you are using integer division, which loses its remainder.does not produce a double result. It is integer division which produces an integer result and it is then cast to a double. Joined: Oct 13, Posted by Rusty Shackleford Posts: 36499 Because they are integer values, not floating point values. Ranch Hand Because it's not the mod (or modulus) operator nor does it behave like the modulus operator (which Java does not have). Joined: Sep 24, 2003 It's the remainder operator, which is described in the JLS. The rest is explanatory from that point on. Posts: 1608 Tony Morris Java Q&A (FAQ, Trivia) subject: Mod(%) operator
{"url":"http://www.coderanch.com/t/404536/java/java/Mod-operator","timestamp":"2014-04-19T12:54:38Z","content_type":null,"content_length":"27603","record_id":"<urn:uuid:f31b5137-ba3e-4689-bada-71af9e7185b3>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00278-ip-10-147-4-33.ec2.internal.warc.gz"}
Chapter 3. Exercises In this exercise you have to solve a given task. Therefore you have to enter the integer part of a fraction and the numerator and the denominator. The difficulty of the generated task can be adjusted by using some options on the left. There are several parameters which influence the difficulty of the generated tasks: Mixed number Set if the fractions will appear as mixed numbers or not in the question expression ( mixed number example: 1 4/5 = 9/5 ). Number of terms The number of terms (separate fractions) given in each task. From 2 to 5, inclusive. Maximum denominator The highest number KBruch will use as the main denominator in the tasks it sets. From a minimum of 10 to a maximum of 50. Mixed number Set if the fractions will appear as mixed numbers or not in the answer ( mixed number example: 1 4/5 = 9/5 ). Reduced Form Check this to force the use of the reduced form. Operations which should be used in the task: Addition, Subtraction, Multiplication or Division. Check all operations you want to use. After you have changed the parameters you have to click on the button in the toolbar to generate a task which uses the new parameters. You can also call this action from the menubar with → . This will reset the statistics. To avoid that, click the button to proceed with the changed parameters. The chosen parameters will be saved on KBruch's termination and restored on next startup. After you have solved a given task, you need to enter the result into the three input boxes. In the left box you enter the integer part of the fraction, in the upper box the numerator and in the lower box the denominator. If the option Mixed number in the Answer section is unchecked, the left box for the integer part of the fraction is hidden. Then you use only the numerator box and the denominator box for your input. If the result is negative, you can enter a minus sign in front of the numerator or denominator. If the result is 0, just type a 0 in the numerator input field. If the result has a denominator of 1, you can leave the lower box empty. After you have entered the result you should click the button below the input boxes. KBruch will check your input and present the correct result on the right below the Incorrect! string: This task was solved incorrectly. The correct value is shown in 2 different forms: normal (reduced) and mixed number. If you checked the in the Options in the Answers section then you always have to enter the result reduced. KBruch will show you a short message like the one in the screenshot below, if you enter the correct result unreduced. The answer will then be counted as incorrect. To continue with the next task, click on the button. If you want to change the parameters for the next task please do this before clicking on the button.
{"url":"http://docs.kde.org/stable/en/kdeedu/kbruch/exercises.html","timestamp":"2014-04-20T05:46:15Z","content_type":null,"content_length":"10481","record_id":"<urn:uuid:c6484fe1-ff0a-4747-997b-2f0a6c39271d>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00370-ip-10-147-4-33.ec2.internal.warc.gz"}
Learning the Unscented Kalman Filter - File Exchange - MATLAB Central Please login to add a comment or rating. Comments and Ratings (56) 11 Very good example! 28 This contribution can be used as an optimizer but is not very efficient then. Thank you Prof. Yi Cao. I have a problem at this point: X=sigma(x,P,c) %sigma points around x It keeps saying: "Index exceeds matrix dimensions. Error in sigma (line 97) 24 a=varargin{1}; b=varargin{2}; c=varargin{3}; d=varargin{4};" 2013 when i run this instead: X=sigma(x,P,c,[],1), it says: "Error using sigma (line 107) The A matrix must be square" Please does anyone have a solution to this? I can't seem to get past this point. Thank you. Dear Yi Cao, I have a problem with the correlation matrix of the measurement. It becomes non-positive definite for some parameters. How can I handle this problem? 25 I have already tried same matrix validations but they do not work. Mar I have to calibrate model's parameters with MLE. 2013 Any comment is appreciated Dear Prof. Yi Cao, Thank you very much for the posted Matlab code. I tried to modify the process function of this code as I want. for example: 16 s=[1;2;3]; Nov f=@(x)[x(2);x(3);2*x(1)*(x(2)+x(3))]; 2012 h=@(x)x(1) ; But when I run the program it shows an error of computing sigma points. It says that the matrix P shoud me positive definite. How we can modify this UKF program for any kind of defined Thank you. Dear Yi Cao, 29 According to the paper'performance evaluation of UKF-based nonlinear filtering',choose:f=@(x)[x(1)+tao*x(2);x(2)-tao*x(1)+tao*(x(1)^2+x(2)^2-1)*x(2)]; Jul h=@(x)x(1),with covariance of the process noise w(k)given as:Q=0.003^2I,the covaiance of the noise v(k) is given by:R=0.001^2I,initial state:x0=[2.3;2.2],P0=I,the true value of the initial 2012 state:x=[0.8;0.2]. The paper proof that when given all these,UKF tends to be divergent.However,based on this code,it seems that the estimator is stable.Does it owe to the weights chosen when doing the prediction? Dear Dr. Cao, I am relatively new to Kalman filtering, and I am very happy to have found your Excellent, heavily commented UKF function and example “ beginners”: It seems that your nonlinear function “f” in this code - that you use as an example could be modified from f=@(x)[x(2);x(3);0.05*x(1)*(x(2)+x(3))]; % nonlinear state equations relatively easily to a nonlinear function that describes different nonlinear or time-varying features, like a battery’s state-of-charge. I would most grateful if you could direct me to further literature, that might further guide me, e.g., • notes from your class that gives more background on your unscented Kalman filter example, above . • how to generally select 07 ki=0; %default, tunable May beta=2; %default, tunable 2012 alpha=1e-3; %default, tunable • Do state variables x(1:3) in your example above represent states of an actual physical process or is x used purely a as a numerical example ? • I understand that your Matlab function UKF.m, describes a simplified unscented KF with added process noise and measurement noise: % x_k+1 = f(x_k) + w_k % z_k = h(x_k) + v_k Do you have a more general unscented KF, UKF2.m, that that propagates the nonlinear noises, e.g.,: % x_k+1 = f(x_k,w_k) % z_k = h(x_k,v_k) ? Thanks again for hour excellent work! i need help. i have the above system, but i dont know how to fuse the equations into the filter. can anyone help me?? the system model has three states: X, Y, Th. the equations are: Xk+1 = Xk + cos(Thk)*u*Dt Yk+1 = Yk + sin(Thk)*u*Dt Thk+1 = Thk + w*Dt the measurement model is: X1=Xk+1+r*cos(Thk+1 - 90); //r is a constant Y1=Yk+1+r*sin(Thk+1 - 90); X2=Xk+1+m*cos(Thk+1); //m is a constant X3=Xk+r*cos(Thk+1 + 90); Y3=Yk+r*sin(Thk+1 + 90); P1=(-a*X1-b*Y1-c*H1-d) / (a*(XL-X1)+b*(YL-Y1)+c*(ZL-H1)); // a, b, c, d, constants P2=(-a*X2-b*Y2-c*H2-d) / (a*(XL-X2)+b*(YL-Y2)+c*(ZL-H2)); P3=(-a*X3-b*Y3-c*H3-d) / (a*(XL-X3)+b*(YL-Y3)+c*(ZL-H3)); A1[0] = XL-X1; // XL, YL, ZL, constants 24 A1[1] = YL-Y1; Apr A1[2] = ZL-H1; A2[0] = XL-X2; A2[1] = YL-Y2; A2[2] = ZL-H2; A3[0] = XL-X3; A3[1] = YL-Y3; A3[2] = ZL-H3; Z1[0] = A1[0]*P1 + X1; Z1[1] = A1[1]*P1 + Y1; Z1[2] = A1[2]*P1 + H1; Z2[0] = A2[0]*P2 + X2; Z2[1] = A2[1]*P2 + Y2; Z2[2] = A2[2]*P2 + H2; Z3[0] = A3[0]*P3 + X3; Z3[1] = A3[1]*P3 + Y3; Z3[2] = A3[2]*P3 + H3; Z[0] = Z1[0] + Z3[0]; // Z[i], are the measurements needed for the UKF!!! Z[1] = Z1[1] + Z3[1]; Z[2] = atan((Z2[1]-Z[1])/(Z2[0]-Z[0])); please can you help me implement the UKF with these equations? 13 Great works! Take note of the point made by Haijun Shen if you are planning on using this filter as a basis for an augmented system (where the noise is part of the state vector). Not including the process 05 noise in the function "f" will cause significant bias in the filter results if your noise is not additive or your state vector is augmented. 2011 Otherwise, thanks so much for a great way to learn about unscented filtering! 27 what does the x(1),x(2),x(3) represent Feb f=@(x)[x(2);x(3);0.05*x(1)*(x(2)+x(3))]; % nonlinear state,I don't konw how does the f function come out? 04 Please send your comments to majordavuramus@gmail.com..as I am not very frequent visitor.Thanks again 04 I have implemented a highly non-linear aircraft tracking app with both an EKF and your UKF. The initial state and state error covariance matrices are the identical as are the observation and Jan process errors. However, I get a decent result with the EKF, but NOT with your UKF... it should be the reverse... Any suggestion? hi Dr.Cao have some problem with my dynamic model. would you help me to apply my model in your "UKF". could i get your email addres. Dec Best, Sepuluh Nopember Institute of technology 23 I have an input function also ("u"). How can this be added to the UKF code? Do i simply pass "u" through to the fstate() function via the ut() function? Hi Jordan, Dec Thanks for the comments. A reference has been added to the updated code. It should be available within a few days. Dr. Cao, 11 I really appreciate your submission, it was a great help. I would only suggest listing a reference or two in your m-file, e.g. (R van der Merwe and EA Wan, 2002). I didn't know about the Dec square-root implementation of the ukf and was, just at first, a bit confused about your implementation. Otherwise everything was very clear and helpfull. Thanks again! Hi Yi, In propagating the process by You include Q in the covariance P1, but the propagated states in matrix X1 does not include any process noise because you are assuming additive noise and your f function does not account for process noise. In turn, when you feed X1 into 08 to get the measurement matrix Z1, Z1 does not include the effect of any process noise. Therefore, when you use Z1 and z1 to calculate P2, even though you add R onto P2, P2 is not a true Jul representation of Pyy. In your example, Pyy is off by Q. In linear terms, your X1 consists of Ak*xkhat instead of Ak*xkhat+wk even though your P1 is Ak*Pkhat*Ak'+Qk. And your P2 is C_{k+1}*Ak*Pkhat*AK'*C_{k+1}' + R. The correct P2 should be C_{k+1} *(Ak*Pkhat*AK'+Q)*C_{k+1}' + R. By the way, I think the augmented version is still applicable to cases with additive noises, although one may choose not to use it because of added complexity. Looking forward to your opinion. Haijun Shen Hi holland, Yes, we can. Please check the following two FEX entries for details. Jun Yi Hi Dr Cao Jun why we can not use this UKF algorithm for parameter and state estimation both just like EKF algorithm. What needs to be done to play with this UKF algorithm for state-parameter estimation. 2010 It seems this UKF algorithm is useless and much touted advantage over EKF is not true. 02 Hi Yi Cao, May i still can't turn the program, please can you tell me how may i do it, since the dowloading of the file to the right execution. matlab always returns errors 19 Hello Dr Yi Cao 2010 are you planing to post square root UKF for parameter estimation? 15 Is it possible to use the UFK when the non-linear function 'f' is unknown. But instead there is a 'map' (non deterministic) which is known. I want to filter the measurement signal using this Mar non-deterministic 'map' which is only a set of samples and is seen periodically in the measurement signal. ok for some reasion my previous posting got lost , so once again : I am using UKF to estimate distances from radio signal strength. Eventhough the RSSI error (measurement equation) is gauss distributed UKF performs very poorly and I cannot understand why as it seems the perfect choice for this kind of problem. I set the measurment nois to the std I got from the training data. My system equations are f=@(x)[abs(x(1)+x(2));abs(x(3)-x(1));x(1)] ; Jun for f : 2009 x1: new distance = old distance + velocity x2: velocity = difference(old distance, new distance) x3: old distance h is simply a given transformation from distance to radio singal strength I cannot find any reason for the poor performance as it should be the best filter for this kind of application. Am I missing some important issues ? 23 sorry...ekf should be ukf in the previous posting Jun This means the iteration of ukf is unstable. You have to adjust P, Q, etc to make it stable. Hi Yi Cao, First of all, thanks for your contribution here. I do have a question though, I do get for some parameter combinations a complex covariance matrix, the parameters look like this : z = -78 c = 1.7321e-004 P = 1.2500 + 0.0000i 0.0000 - 0.0000i 0.0000 - 0.0000i 0.0000 + 0.0000i 0.4438 + 0.0000i 0.0000 + 0.0000i 0.0000 + 0.0000i 0.0000 - 0.0000i 1.2500 + 0.0000i x = 0.5807 - 0.0000i -5.5018 + 3.7078i 20 0.7954 - 0.0000i 2009 Then I get this error : ??? Error using ==> chol Matrix must be positive definite with real diagonal. I assume that this is due to the complex covariance matrix. I have no idea how this matrix can become complex as in my oppinion the only way it can become complex is if c would be negative which it isn't here... Additionally, I would like to measure distances using radio signal strength, therefore I have actually the distances from RSSI values and additional velocity from the last step to the current step, is it possible to process these information with this implementation as well ? Thanks for any help! Right. However, K=P12*inv(P2). Hence, K*P2 = P12. 06 This leads to K*P2*K' = P12*K'. Therefore, P=P1-P12*K'. 2009 HTH I think the covariance updat should be: Jun P=P1-K*P2*K' 20 It seems that your model is not stable. You may wish to adjust P, Q and R matrices to see if this helps. 2009 Yi hello Dr.Yi 20 This code is working good for N<=150 Apr but when N exceeds this limit, a nonsense happens 2009 Is there any improvement to the code considering this error? 12 The same question as Adam. 2009 Thank you 03 well i'm doing my research project and the topic is comparison of EKF and UKF in non-linear state estimation. The non-linear model which i have used gives correct results for EKF but i'm not Dec able to get the correct results for UKF. I don't know what the problem is. I was wondering if you could look at my model and suggest a solution to it. 26 The same question as Loki, Nov if the measurement equation is nonlinear in state variable, the estimated state variable does not change with actural (simulated) state variable. Any suggestion? Hi, Dr. Cao, Is the covariance update correct? P=P1-K*P12'; %covariance update Nov Some paper wrote: While I understand it is no longer necessary to augment the states when you consider additive noise, it is also apparent that you then only have to use the first L weights, and not the 2L+1 weights. Can you comment on this? Is anything lost or gained by using L weights or 2L+1 weights in the additive noise case? Nov I ask only because I saw degraded performance when I switched from using all 2L+1 weights to using only the first L weights in my program (I am not augmenting the states). NC State Univ 31 ???????????? There are a few different versions of UKF. I used unaugmented one, where Q and R have to be explicitly used. There are some augmented versions, where Q and R are included in P. May hth 2008 Yi I'm student in france and i have seen your program about UKF (unscented kalman filter) in the page : http://www.mathworks.com/matlabcentral/fileexchange/loadFile.do?objectId=18217&objectType= file . I don't understand why the function UKF nead the covariance R and Q coz in the algorithm UKF we can find in the paper : http://mi.eng.cam.ac.uk/~cipolla/publications/inproceedings/ 21 2001-BMVC-Stenger-kalman.pdf (page 4) the UKF just need the covariance P and the state x. May Sorry for my english if it was difficult to understand my question. 2008 Can you please help to understand the UKF. Thank you 30 Hi, Kavin 2008 Thanks for comments. chol is more efficient and robust than sqrtm. 28 hey Yi cao,why do u use chol function instead of sqrt ,what is the advantage of doing so,pls clarify chol function u use. Apr States is not evolved by the UKF. You should have another simulation model to evolve states, then send output of the model to UKF to estimate the states. If you send me you model through 2008 email I may be able to see what is you problem. Hi; i tried your function with this, f=@(x)[-x(2);-exp(-a*x(1))*x(2)^2*x(3);0]; % nonlinear state equations 03 h=@(x)[sqrt(m^2+(x(1)-Z)^2)]; % measurement equation Apr s=[3*10^5 2*10^4 1*10^-3]'; 2008 but the state xV seems to do not evolve after the first step. Any suggestion? Did i make something wrong? Dear Atilla Aydogdu, Thank you for your comments. The augmented state variables are only applable if the process noise and measurement noise are non-additive, i.e. they are in the nonlinear functions: In this case, we have to propagate w and v through the nonlinear functions, hence have to have extra augmented dimensions in state space to evaluate these nonlinear functions. That is why the state space dimention becomes 2L+1. As I stated in the description of my UKF submission, for tutorial purpose, we only consider a simple case, i.e. the noises are additive, where the equarions are: 04 x(k+1)=f(x(k),u(k))+w(k) Feb y(k+1)=h(x(k),u(k))+v(k) In this case, both w and v are not a part of these nonlinear functions, hence, do not need to propagate through these functions. Hence, we do not need the state space augmentation. We can prove, in this case, the non-augment state space formulation is equivalent to the augmented state space formulation. The reason to assume additive noises is that normally, we do not know how exactly noises influence a system, hence do not realy know how to represent them in nonlinear functions. In this case, it is sensible to assume noises are additive. In the Julier's paper, since it is an academic article, certainly, it makes sense to discuss a more general case, that is to include noises within these nonlinear functions. Conclusion: if we know how to represent noises in nonlinear functions, then use augmented formulation. Other wise, we can assume additive noises and use the simplified formulation without the state space augmentation. Hi, Hao Li The function "[z1,Z1,P2,Z2]=ut(hmeas,X1,Wm,Wc,m,R)" is the subfunction included in the file from Line 72 to Line 95. Hi, Terry Zeng, Jan The line you mentioned is line 69. 'z1' is calculated in line 66: 2008 [z1,Z1,P2,Z2]=ut(hmeas,X1,Wm,Wc,m,R); i.e. the line mentioned by Hao Li. Hi,in your code. you write Jan 'x=x1+K*(z-z1); %state update' but the procedure calculating 'z1' has not been given. 30 where's the function "[z1,Z1,P2,Z2]=ut(hmeas,X1,Wm,Wc,m,R)" Dear Edwin, This is not a recursive code. Hence, this error should not happen. I believe this is due to the way you run the example. When you selected the example and pressed control-t to uncomment the 23 selection, you must have saved the change so that the ukf function is recursively called. The right way to run the example, after you uncomment the selection, you should not save the change, Jan just right-click to run the selection. To help other users may come with the same error, I modified the example with block-comments. Now, you can select the example, right-click to run the 2008 selection without accidently saving the change. Hope this help. ??? Maximum recursion limit of 500 reached. Use set(0,'RecursionLimit',N) to change the limit. Be aware that exceeding your available stack space can 23 crash MATLAB and/or your computer. 2008 Error in ==> ukf>create@(x)[x(2);x(3);0.05*x(1)*(x(2)+x(3))] at 25 f=@(x)[x(2);x(3);0.05*x(1)*(x(2)+x(3))]; % nonlinear state equations
{"url":"http://www.mathworks.com/matlabcentral/fileexchange/18217-learning-the-unscented-kalman-filter","timestamp":"2014-04-20T06:07:31Z","content_type":null,"content_length":"71993","record_id":"<urn:uuid:bfe74eba-66e4-4cf7-822e-9bef02f98cf6>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00173-ip-10-147-4-33.ec2.internal.warc.gz"}
AutoSignal: Tutorials - Spectral Analysis of unevenly spaced data This tutorial covers the spectral analysis capabilities of AutoSignal when the data are unevenly spaced. The main focus will be upon Fourier procedures that use data that have not been uniformly sampled. A secondary focus will be upon interpolation procedures that can generate uniform data without altering spectral properties. Importing An Unevenly Sampled Data Set Select the Import option in the File menu or main toolbar. Change the format to Excel (xls) and select sample.xls from the Signals subdirectory. Check the Import Preview option. Click on (5)Uneven! A: Uneven Time. This column in the Excel worksheet is used as the X or time variable. Click on (5)Uneven!C: S1,SN20dB. This column is used as the Y or signal variable. Click OK to accept the data and OK once again to accept the default titles. This data set contains three spectral peaks, one at a frequency of 2 (amplitude 100, phase 3p/2), another at a frequency of 5 (amplitude 100, phase p), and a third peak at the frequency of 8 (amplitude 100, phase p/2). 10% random Gaussian noise was added. The average Nyquist frequency is 4.75. There are thus two peaks beyond the average Nyquist limit. One important difference between unevenly sampled data relative to uniformly sampled data is that information beyond the average Nyquist limit is not automatically aliased to lower frequencies. It is thus possible to extract information beyond this average Nyquist frequency since some of the data are spaced more closely and support a much higher "local" Nyquist frequency. Similarly, there will be widely spaced points whose local Nyquist frequency is smaller than the overall average. The information within the average Nyquist range is thus incomplete. Detrending The Data Note that the data appear to evidence an upward trend. Further, the algorithm that generates a Fourier spectrum using unevenly spaced data does not generate a zero frequency channel. We will thus remove the upward trend and subtract the mean prior to processing the data. Select the Detrend option from the Time menu or toolbar. The following items should be checked: Subtract Fit, Subtract Mean, Linear (model), and Least Squares (minimization). Although AutoSignal offers a variety of background models, you should use the higher order and non-linear models very cautiously. Click OK to close the Detrend procedure and answer Yes to update the data table. Lomb-Scargle Periodogram The Lomb-Scargle periodogram is an algorithm that specifically generates a Fourier spectrum for the instance where data are not uniformly spaced. Note that the spectrum readily recovers all three components, including those that exist beyond the average Nyquist limit. Unlike a traditional FFT, there is no zero frequency channel. In most respects, there are few other differences between this type of spectrum and a conventional Fourier spectrum that uses uniformly sampled data. All of the expected power and amplitude plot formats are Data Windowing AutoSignal extends the algorithm to include windowing. All windows that can be created using unevenly sampled time values are included. The Chebyshev and Slepian (DPSS) windows are not available, although a special Chebyshev approximation window is available for creating the sharpest possible spectral peak for a given sidelobe level. The high dynamic range processing that is available to Fourier analysis using the better data tapering windows is thus available in this procedure. Critical Limits The Lomb-Scargle periodogram normally includes a traditional confidence limit based upon an exponential distribution. This is not used in AutoSignal. Instead, full critical limits are available. As with the evenly spaced Fourier procedures, separate critical limit models are used for each of the data windows and these are based on extensive Monte-Carlo trials using the exact algorithm in Since the distribution of abscissae can impact significance, these critical limits should be considered approximate. Click on the Show Significance Levels button to enable the critical limits. The first two peaks are shown to be significant beyond a 99.9% critical limit. The peak at frequency 8 is significant at a 95% critical limit. This means that of twenty white noise data sets having an equivalent variance, one would be expected to evidence a peak of this magnitude strictly due to random chance. Click OK to exit the Lomb procedure. Interpolation by Harmonic Retrieval We will now address the two alternatives available for interpolating a uniform data set from unevenly spaced data. The first involves fitting a parametric model to the time-domain data in order to extract the harmonic components. This approach is only useful when the data consist of one or more sinusoids or damped sinusoids. Select the Parametric Interpolation and Prediction option in the Process menu or toolbar. Change the algorithm to Lomb 2x. Be sure the model is Undamped, that the Signal Subspace is set to 6 (this resolves three harmonic components) and the NL Optimization is enabled. The values in the Data Processed fields start with the full data range. Since we are not interested in prediction, change the n in the Output to 1024 and change the x end value to 10. Click on the Set Confidence/Prediction Intervals button. Be sure Prediction Intervals is checked and that a 95% Confidence is selected. Click OK to close the Intervals dialog. The upper graph confirms the amplitude and frequency of the three peaks isolated by the Lomb procedure. The lower graph plots the three sinusoidal components on the Y axis, and the model (the sum of the three components) and the prediction intervals in the upper graph. Note that the prediction intervals look respectable. To be certain we have a valid interpolation, we will inspect the non-linear fit statistics and check the residuals to insure that they are normally distributed. Validating The Parametric Model Click the Numeric Summary button and inspect the fit statistics. Fitted Parameters │ r² Coef Det │ DF Adj r² │ Fit Std Err │ F-value │ │ 0.96429399 │ 0.96055732 │ 24.7429970 │ 293.695617 │ │ Data Power │ Model Power │ Error Power │ │ 157021.45272 │ 151414.84394 │ 5606.6087823 │ │ Comp │ Type │ Frequency │ Amplitude │ Phase │ Power │ % │ │ 1 │ Sine │ 1.99767236 │ 98.9505350 │ 4.80945274 │ 48899.2677 │ 36.3822307 │ │ 2 │ Sine │ 5.00589608 │ 99.3582574 │ 3.00200171 │ 49303.7360 │ 36.6831648 │ │ 3 │ Sine │ 7.99611939 │ 85.1101161 │ 1.69401426 │ 36201.2557 │ 26.9346045 │ Although the parameters are not recovered perfectly, the fit is an accurate one and the r² goodness of fit value is high. Close the Numeric Summary window. Click the View Residuals button. Be sure the Stabilized Normal Probability Plot option (the second from right in the toolbar) is selected. All of the residuals are shown to be within a 90% critical limit, an excellent indication that they are normally distributed. When residuals lack this Gaussian distribution, the model is often insufficient or incorrect. There may be a missing component, or the fit may have failed to achieve the true least-squares minimum. Close the Residuals window. Click OK to exit the Parametric Interpolation procedure. Answer Yes to update the data table. Answer No when asked to save the current data table. The 1024-point interpolated uniform data certainly bears little resemblance to the unevenly sampled data that were imported. Although there is no practical benefit to a Fourier analysis at this point, we will make one to confirm the presence of the three desired spectral components. Confirming The Parametric Interpolation Select the Fourier Spectrum with Data Window option in the Spectral menu or toolbar. Set the window to cs4 BHarris min, Nmin to 1024, set the plot to dB Norm, and set the signal count (sig) to 3. The three components are present as expected. Note that the parametric interpolation procedure also functions as a noise filter, preserving only the harmonics. Close the Fourier procedure. Non-Parametric Interpolation Time-domain alternatives are available for interpolating a uniform data sequence from unevenly sampled values. The Spline Estimation option offers a variety of interpolating and smoothing splines. The Non-Parametric Estimation option offers locally-weighted regression. Both procedures are of value only when high frequency components are absent. Time-domain interpolation can also introduce spurious spectral components. If only low frequency information is present, however, these forms of interpolation can be very straightforward and effective. In general, the harmonics should be in the lower quarter of the Nyquist range. To illustrate the danger of time-domain interpolation when high frequency components are present, we will reload the original data set and create uniformly spaced data using a cubic spline. Select sample.xls from the most recently used files list at the bottom of the File menu. Click on (8)Uneven!A: Uneven Time and (8)Uneven!C: S1,SN20dB. Click OK to accept the data and OK once again to accept the default titles. Select the Spline Estimation option from the Time menu or toolbar. Be sure the Cubic spline is selected and that the Function is being output. Set the output n to 1024. It should be readily apparent that time-domain interpolation requires a sufficient number of points to define each oscillation in the spectrum. That is clearly absent here. Click OK to exit the Spline Estimation procedure. Answer Yes to update the data table. Again select the Fourier Spectrum with Data Window option in the Spectral menu or toolbar. Zoom in the range between 0 and 10 to include the frequency band where the peaks are known to be present. Although the lowest frequency peak at 2.0 is preserved, the remainder of the spectrum is nonsense. Again, time-domain interpolation routines should only be used when the spectral content is in the lower quarter and ideally the lower eighth of the Nyquist range. Close the Fourier procedure.
{"url":"http://www.sigmaplot.com/products/autosignal/tutorials/tutorial3.php","timestamp":"2014-04-20T18:23:06Z","content_type":null,"content_length":"37012","record_id":"<urn:uuid:235eb731-f78c-4ca7-b21f-5458dc197ddb>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00146-ip-10-147-4-33.ec2.internal.warc.gz"}
Prove that all integers are of the form... February 5th 2010, 05:21 AM #1 Feb 2010 Prove that all integers are of the form 3k, 3k+1, or 3k+2 The problem is clearly easy and it's obvious why this is true, but I just don't understand how exactly to go about proving it...help? Because the reminder of division any number by 3 is 0, 1 or 2. so $n$ is $3k+0$ or $3k+1$ or $3k+2$ Follows from the division algorithm (using an integer and 3). For each integer $a$, there exists unique integers $k$ and $r$ such that $a=3k+r$, where $0 \le r <3$. February 5th 2010, 05:36 AM #2 Feb 2010 February 5th 2010, 05:37 AM #3
{"url":"http://mathhelpforum.com/number-theory/127297-prove-all-integers-form.html","timestamp":"2014-04-19T22:28:00Z","content_type":null,"content_length":"35333","record_id":"<urn:uuid:394005a2-776f-4aa0-a7aa-f625e9c769fa>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00050-ip-10-147-4-33.ec2.internal.warc.gz"}
Lines, Sines, and Curve Fitting 8 - D'Agostino > Lines, Sines, and Curve Fitting 8 – D'Agostino Lines, Sines, and Curve Fitting 8 – D'Agostino 2011 January 16 The eyeball and quick sigma population checks in the previous post provided some confidence that the global temperature anomalies are normally distributed over the mean. But there are more formal tests, including D’Agostino normality test. From wiki: In statistics, D’Agostino’s K2 test is a goodness-of-fit measure of departure from normality, that is the test aims to establish whether or not the given sample comes from a normally distributed population. The test is based on transformations of the sample kurtosis and skewness, and has power only against the alternatives that the distribution is skewed and/or kurtic. D’Agostino tests the skew and kurtosis of a distribution. Failing the test indicates that the distribution is skewed or kurtic to the point that it is not normal. Passing the test is not proof positive that the distribution is in fact normal. We’ll first take a look at three intentionally distorted distributions: a high kurtosis, a low kurtosis, and a skewed distribution. I had trouble skewing the distribution without also trigging kurtic indicators in the D’Agostino test. The results of each D’Agostino test follows the displayed distribution. The D’Agostino test is included in a financial basics package from rmetrics.org. D'Agostino Normality Test Test Results: Chi2 | Omnibus: 43.6439 Z3 | Skewness: -0.5324 Z4 | Kurtosis: 6.5849 P VALUE: Omnibus Test: 3.333e-10 Skewness Test: 0.5945 Kurtosis Test: 4.553e-11 D'Agostino Normality Test Test Results: Chi2 | Omnibus: 34.441 Z3 | Skewness: -0.3546 Z4 | Kurtosis: 5.8579 P VALUE: Omnibus Test: 3.321e-08 Skewness Test: 0.7229 Kurtosis Test: 4.687e-09 D'Agostino Normality Test Test Results: Chi2 | Omnibus: 41.7927 Z3 | Skewness: -5.8956 Z4 | Kurtosis: 2.6523 P VALUE: Omnibus Test: 8.41e-10 Skewness Test: 3.734e-09 Kurtosis Test: 0.007994 It appears that very low values of the p-value indicate that the distribution does not pass the given test for Omnibus (the overall D’Agostino test for ‘could be normal’) or the subcomponents for Skewness or Kurtosis. Now we can look at the four distributions that we looked at in the previous post. Each set contains more residual data points from the mean than the one before. 1) GISTEMP annual 1970-2010 2) GISTEMP annual 1880-2010 3) GISTEMP monthly 1970-2010 4) GISTEMP monthly 1880-2010 Test Results: Chi2 | Omnibus: 3.3396 Z3 | Skewness: -0.4131 Z4 | Kurtosis: -1.7801 P VALUE: Omnibus Test: 0.1883 Skewness Test: 0.6795 Kurtosis Test: 0.07505 Test Results: Chi2 | Omnibus: 0.6071 Z3 | Skewness: 0.0594 Z4 | Kurtosis: -0.7769 P VALUE: Omnibus Test: 0.7382 Skewness Test: 0.9526 Kurtosis Test: 0.4372 Test Results: Chi2 | Omnibus: 0.0225 Z3 | Skewness: -0.1197 Z4 | Kurtosis: 0.0904 P VALUE: Omnibus Test: 0.9888 Skewness Test: 0.9047 Kurtosis Test: 0.928 Test Results: Chi2 | Omnibus: 0.3223 Z3 | Skewness: 0.032 Z4 | Kurtosis: -0.5668 P VALUE: Omnibus Test: 0.8512 Skewness Test: 0.9745 Kurtosis Test: 0.5709 The tests results do not uniformly improve with increasing data points. The monthly 1880-2010 set of residuals are not an improvement over the monthly 1970-2010. This is probably an indication that the 1940-1970 cooling trend is throwing off the distribution around a linear mean. On the other hand, the 1880-2010 yearly is a definite improvement over the 1970-2010 yearly. Probably an improvement due to the increase in the number of points available.
{"url":"https://rhinohide.wordpress.com/2011/01/16/lines-sines-and-curve-fitting-8-dagostino/","timestamp":"2014-04-17T07:17:24Z","content_type":null,"content_length":"24785","record_id":"<urn:uuid:36e86ed7-2d06-48dd-94c0-e95b7a86c013>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00289-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Forum Discussions Math Forum Ask Dr. Math Internet Newsletter Teacher Exchange Search All of the Math Forum: Views expressed in these public forums are not endorsed by Drexel University or The Math Forum. Topic: RE: [mg18963] Packed Arrays in version 4 Replies: 1 Last Post: Aug 5, 1999 2:29 AM Messages: [ Previous | Next ] Re: [mg18963] Packed Arrays in version 4 Posted: Aug 5, 1999 2:29 AM In article <7o5fd0$rgp@smc.vnet.net>, Ersek, Ted R <ErsekTR@navair.navy.mil> writes >The following should do what you want for Times and you can make a similar >definition for Transpose. However, I don't recommend it because it will >slow done Times in every case when this rule isn't used. > HiddenSymbols`ModifyTimes:= > Block[{HiddenSymbols`ModifyTimes}, > Developer`ToPackedArray[Times[z,arr]] > ]; >How does it work? >The above definition is only used when >(HiddenSymbols`ModifyTimes=True). When the rule is used Block >temporarily clears the value of (HiddenSymbols`ModifyTimes), so >the kernel doesn't go back and use the rule again. >Ted Ersek Thank you, Ted: this is a good solution for Transpose (see also my reply to Bruno Daniel). Just by replacing Transpose for Times everywhere (and a little mixing of arguments) gave a good result. However I still have great difficulty with Times: I cannot make your function above give satisfactory results. I give an example below, it actually seems to be counter-productive. Is this because Transpose does not have Listable (etc.) Attributes like Times? It has been suggested to me by e-mail that using (N[I]*array) instead of (I*array) solves the problem, and indeed I had found that ((I+0.0)*array) also solved the problem. The real problem is that multiplying a real array by I is so common in electronics/microwaves! I have many, many notebooks with many,many,many occurrences of this and I do not want to start chasing them all down. Thus I have a real motivation for making this work as a redefinition of Times or \[ImaginaryI]. A global redefinition of Global`I := N[System`I] (see my reply to Bruno Daniel) is probably extremely risky! avbytesize[arr_] := ByteCount[arr]/Length[Flatten[{arr}]] // N tmp1 = Table[Random[], {100000}]; tmp2 = Developer`FromPackedArray[tmp1]; {avbytesize[#], Developer`PackedArrayQ[#]} & /@ {tmp1, tmp2, Times[tmp1, tmp1], tmp1*tmp2, I*tmp1, N[I]*tmp1, I*tmp2, N[I]*tmp2} {{8.00056, True}, {20.0002, False}, {8.00056, True}, {8.00056, True}, {60.0002, False}, {16.0006, True}, {60.0002, False}, {16.0006, True}} but I dont understand what happens after evaluating your function, especially mystifying since the results for Transpose were so good: HiddenSymbols`ModifyTimes = True; Times[z_Complex, arr_?Developer`PackedArrayQ] /; HiddenSymbols`ModifyTimes := Developer`ToPackedArray[Times[z, arr]]]; {avbytesize[#], Developer`PackedArrayQ[#]} & /@ {tmp1, tmp2, Times[tmp1, tmp1], tmp1*tmp2, I*tmp1, N[I]*tmp1, I*tmp2, N[I]*tmp2} {{8.00056, True}, {20.0002, False}, {20.0002, False}, {20.0002, False}, {60.0002, False}, {60.0002, False}, {60.0002, False}, {60.0002, False}} from - John Tanner home - john@janacek.demon.co.uk mantra - curse Microsoft, curse... work - john.tanner@gecm.com I hate this 'orrible computer, I really ought to sell it: It never does what I want, but only what I tell it. Date Subject Author 8/2/99 RE: [mg18963] Packed Arrays in version 4 Ersek, Ted R 8/5/99 Re: [mg18963] Packed Arrays in version 4 John Tanner
{"url":"http://mathforum.org/kb/thread.jspa?threadID=232432&messageID=781230","timestamp":"2014-04-17T18:39:43Z","content_type":null,"content_length":"20867","record_id":"<urn:uuid:62e1e961-aa77-4be3-b0d2-d8a31e77f924>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00462-ip-10-147-4-33.ec2.internal.warc.gz"}
, Computer Networking Encoding process, Computer Networking Encoding Process c= uG u: binary data sequence of length 4( input) G: Generator matrix which is 7*4 c: Codeword Decoding Process s= rH^T r: received data sequence of length 7 ( input) H: Parity-Check Matrix s: Syndrome to check error-position If some output values of the encoding and decoding multiplication is over than 1, the value should be changed into remainder on division of itself by 2. (Ex: (4 2 1 ) => (0 0 1)) s(Syndrome) detect error position. We can know the single-error-bit position in the table below. Fig. Decoding table for Hamming code(7,4) If s is [0 0 0] and r= [0 1 0 0 0 1 1], it supposes that there is no error and transmitted data is [0 0 1 1]. If s is [0 1 0] and r = [1 0 1 0 0 1 0], it supposes that 2^th bit is corrupted, so r should be changed to [1 1 1 0 0 1 0], and transmitted data is lower 4bits, [0 0 1 0]. Posted Date: 2/19/2013 1:12:08 AM | Location : United States Your posts are moderated ROLE OF DESIGN IN ACHIEVING RELIABILITY One of the most important methods for achieving high reliability is through design. Use reliability cannot be higher than the design re What are MAC addresses? MAC, or Media Access Control, uniquely identifies a device on the network. It is also called as physical address or Ethernet address. A MAC address is m Q. Show the Communication between Switches? Communication between Switches - Must know which station belongs to which VLAN as well as membership of stations connected to Network throughput It is a symptomatic measure of the message carrying capability of a network. It is termed as the total number of messages network can send in per unit time. 802.11a OFDM a) Orthogonal frequency-division multiplexing utilizing a 5-GHz band b) Same as FDM except all sub bands are used by only one source at a given time c) Secu What are the drawbacks of ARCNet? Here are some drawbacks of ARCNet: 1. Standard ARCNet is very slow (2.5 Mb/s). It is almost seven times slower than Token Ring. 2. ARC Internet Protocols Control Protocols ( IPCP) The internet protocols control protocols establishes configures and terminates the TCP/ IP network protocols layer in a PPP The LAN switch copies the whole frame into its onboard buffers and then looks up the destination address in its forwarding, or switching, table and verifies the outgoing interface University of Wolverhampton School of Technology 6CI007 Database Server Management Resit Assessment Hand in December 14th 2012 In this assignment you will build a small database t Determine about the Complete and proper security Complete and proper security configuration and administration is indeed a complex issue. One should think carefully about the
{"url":"http://www.expertsmind.com/questions/encoding-process-30134746.aspx","timestamp":"2014-04-18T23:15:51Z","content_type":null,"content_length":"31083","record_id":"<urn:uuid:7474e63e-8c97-40fa-8089-596e3d7b054b>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00295-ip-10-147-4-33.ec2.internal.warc.gz"}
188 helpers are online right now 75% of questions are answered within 5 minutes. is replying to Can someone tell me what button the professor is hitting... • Teamwork 19 Teammate • Problem Solving 19 Hero • Engagement 19 Mad Hatter • You have blocked this person. • ✔ You're a fan Checking fan status... Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy. This is the testimonial you wrote. You haven't written a testimonial for Owlfred.
{"url":"http://openstudy.com/users/vinrar/answered","timestamp":"2014-04-19T02:04:15Z","content_type":null,"content_length":"104359","record_id":"<urn:uuid:f6edfd70-f735-4f8f-ac27-28d16b661388>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00039-ip-10-147-4-33.ec2.internal.warc.gz"}
Greedy Algorithm for license purchase problem November 15th 2008, 09:19 PM #1 Feb 2008 Your friends are starting a security company that needs to obtain licenses for n different pieces of cryptographic software. Due to regulations, they can only obtain these licenses at the rate of at most one per month. Each license is currently selling for a price of $100. However, they are all becoming more expensive according to exponential growth curves: in particular, the cost of license j increases by a factor of rj > 1 each month, where rj is a given parameter. This means that if license j is purchased t months from now, it will cost 100×rtj . We will assume that all the price growth rates are The question is: Given that the company can only buy at most one license a month, in which order should it buy the licenses so that the total amount of money it spends is as small as possible?
{"url":"http://mathhelpforum.com/discrete-math/59768-greedy-algorithm-license-purchase-problem.html","timestamp":"2014-04-17T22:38:20Z","content_type":null,"content_length":"32656","record_id":"<urn:uuid:bff28305-5777-4868-9c1a-a1a4b63590d1>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00311-ip-10-147-4-33.ec2.internal.warc.gz"}
Closest Pair in the Plane with (2,4)-trees Web Page and Applets by Sam Sanjabi Presented to Prof. Godfried Toussaint The Closest Pair problem consists of finding a pair of points p and q from a set of n points such that the p and q are at a minimum distance from each other. The brute force solution to this problem takes O(n^2) comparisons to check the distance between each possible pair of points. However, there are faster methods which are presented in this document. We shall begin by presenting the problem in one-dimension. Upon generalizing to two dimensions we shall present two different techniques to efficiently solve the problem: a divide and conquer approach and a plane sweep algorithm. At the end of the document, we present an interactive java applet that demonstrates these techniques. Since a primary structure used to store the points in these applets is (2-4)-trees, we also present a complete tutorial on this data structure (complete with its own applet). Sam Bakhtiar SANJABI Last modified: Tue Apr 25 08:37:15 EDT 2000
{"url":"http://www.cs.mcgill.ca/~cs251/ClosestPair/index.html","timestamp":"2014-04-17T06:42:51Z","content_type":null,"content_length":"2564","record_id":"<urn:uuid:f1fe4bf2-cfb9-414d-80de-01ff9fe5a620>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00097-ip-10-147-4-33.ec2.internal.warc.gz"}
Quantization and Non-holomorphic Modular Forms (Lecture Notes in Mathematics Vol. 1742) von Andre Unterberger Kategorie: Distributionentheorie ISBN: 3540678611 Kurzbeschreibung This is a new approach to the theory of non-holomorphic modular forms, based on ideas from quantization theory or pseudodifferential analysis. Extending the Rankin-Selberg method so as to apply it to the calculation of the Roelcke-Selberg decomposition of the product of two Eisenstein series, one lets Maass cusp-forms appear as residues of simple, Eisenstein-like, series. Other results, based on quantization theory, include a reinterpretation of the Lax-Phillips scattering theory for the automorphic wave equation, in terms of distributions on R2 automorphic with respect to the linear action of SL(2,Z). Synopsis This is an approach to the theory of non-holomorphic modular forms, based on ideas from quantization theory or pseudodifferential analysis. Extending the Rankin-Selberg method so as to apply it to the calculation of the Roelcke-Selberg decomposition of the product of two Eisenstein series, one lets Maass cusp-forms appear as residues of simple, Eisenstein- like, series. Other results, based on quantization theory, include a reinterpretation of the Lax-Phillips scattering theory for the... Distributionentheorie > Quantization and Non-holomorphic Modular Forms (Lecture Notes in Mathematics Vol. 1742)
{"url":"http://www.uni-protokolle.de/buecher/isbn/3540678611/","timestamp":"2014-04-21T04:44:10Z","content_type":null,"content_length":"6701","record_id":"<urn:uuid:73bc4741-a424-4583-82b0-eace68fb9faf>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00586-ip-10-147-4-33.ec2.internal.warc.gz"}
The Math Teacher's Resource Site This picture inspires a wonderful volume project, and can easily have scientific notation and proportions integrated into the project as well. (1) Have students calculate the volume of the Earth. (2) Research the amount of water that’s on the Earth (about 326 million trillion gallons according to science.howstuffworks.com) (3) Have students calculate what size sphere would hold that volume of water (4) Either with a computer drawing program or just on a piece of paper, have students use proportions to show the size of the Earth compared to the sphere that would hold the world’s water. ** The same thing can be done with air (atmosphere), though I couldn’t find a specific number as to the exact volume of air. But considering the atmosphere extends (very roughly) out to about 300 km (there’s more atmosphere, I’m sure, but the density of the molecules would be very negligible), simply take the radius of the Earth (6,378.1 km) to figure out the volume of the Earth, then draw another sphere around the Earth that has a radius of 6,678.1 (radius of Earth + 300) and calculate the volume of that sphere, and the difference would be the volume of the atmosphere … albeit a very rough estimation. Students shouldn’t be told this, of course! Here’s a site with more information about the Amount of Water in/on/above the Earth. Google Earth and Complex Area Here is a project by realworldmath.org. Real World Math integrates Google Earth with various math topics, this one on Complex Area. Below is a very brief excerpt from their site, but you need to visit the site itself for the full project: Complex Area Problems – Real World Math •Measure distances •Find the area of complex polygons •Solve word problems involving rates Lesson Description This lesson consists of two parts. The first requires students to find the area of a complex shape using … the area formulas for a parallelogram and triangle. The second part … Students will need to be able to solve rate problems with a proportion for this section. World’s Largest … I’m a great fan of dy/dan’s philosophy of teaching math. But I’m also a fan of practical ideas to integrate into a lesson plan. Luckily, he provides wonderful resources in that regard. He has “3 Act” lessons which are lessons that incorporate multimedia. This particular lesson, as the title states, is about the world’s largest coffee cup. Here’s his complete coffee cup lesson: dy/dan’s World’s Largest Coffee Cup I categorized this post in the “Volume and Surface Area,” “Proportions,” and “Linear equations” because I’m considering using it as a “visual word problem” for those units. Volume and Surface Area - It could be a simple question about volume and surface area. How much paint did they need to paint the outside of the cup? How much coffee did they need to fill up the cup? - If the average person drinks x ounces of coffee, how many people will it take to drink all the coffee in the worlds largest coffee cup or -If the average person is 69 inches tall, what size would a person have to be for this cup to be a “normal” cup for him/her. (The typical regular sized coffee mug is 3.5 to 4 inches tall, but I wouldn’t have to tell the students this. I have enough coffee mugs that I could just give them a mug and a ruler…) Linear Equations - Given a rate of flow (and assuming a constant rate), students could calculate and graph how long it took to fill up the cup, or given the time of how long it took to fill up the cup, students could calculate the rate of flow, etc. For whatever reason, my laptop mini won’t play the videos from Dan’s links, but will play them directly from YouTube. Here’s two of the videos in the lesson that can be found on YouTube: Intro to Problem (The repetitive “music” gets annoying…) I’d still encourage you to go to Dan’s site, maybe his video links will work for you. You’ll also need the other information, such as the picture that shows the dimensions of the cup, etc. Lastly, you’ll see the whole picture of what Dan’s lesson is really about, which is more than my currently watered down version. Following the same concept is the world’s largest burger: I’ll list some of the stats that can be molded into questions below, but you can find them and the accompanying video here. 1,375,000 calories Over 600 pounds of beef 30 pounds of lettuce 12 pounds of pickles 20 pounds of onions 28 inch thick, 110 pound bun. (That’s only a total of 772 pounds so when they say over 600 pounds of beef, I’ll assume they mean 605 pounds of beef). 14 hours to cook 3 ft thick, 5 ft in diameter Paper Models, Scales and Proportions These paper toy cut-outs are instant replicas of everything from stealth airplanes to famous buildings. Some are very simple one page designs like the Great Pyramid, and others are more complex, such as some of the buildings that require up to 3 pages of cut outs. Here’s how I integrate this into the classroom. I print out a variety of different cutout patterns (for which the actual dimensions of the real life version is easily found with a quick google search). The students then select their pattern, put together the model, and research the dimensions. On one side of an index card, they write the scale and on the other side, they write the real-life dimensions. Both sides also have the dimensions of their model. (For the models that aren’t perfect replicas, they can scale only one dimension, such as height). (1) If we’re short of time, this is all there is to it, and it is done as a “project” to do at home. It can be done as a mobile, with their model and index card on a piece of string. These are wonderful to hang up on the classroom ceiling. Or simply staple a few to a bulletin board. (2) If we have more time, I will place the models around the room, and students walk around to various models with its corresponding index card on display. I flip the cards around so that some show the scale and others state the real life dimensions, and students can calculate the missing piece of information. For my advanced students who don’t require much time to go through the proportions unit, I simply print out some of the more complicated designs and let them work in small groups and just put them together. You know, for those kinds of days…
{"url":"http://algebrafunsheets.com/blog/tag/proportions/","timestamp":"2014-04-17T15:26:50Z","content_type":null,"content_length":"39243","record_id":"<urn:uuid:02e4fc73-8b78-4fb6-80a8-3bc44711a6ae>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00592-ip-10-147-4-33.ec2.internal.warc.gz"}
Berkeley, CA Precalculus Tutor Find a Berkeley, CA Precalculus Tutor ...My teaching methodology is to aid my students in developing an intuition that will allow them to easily tackle complex problems, understand underlying principles, and independently overcome difficulties when faced with challenging material. Feel free to contact me for information about my availa... 15 Subjects: including precalculus, Spanish, calculus, ESL/ESOL ...I always volunteered at my children's schools. Now that my children do not need me as much as before; I try to help other children more. I am quite knowledgeable because of my great education. 24 Subjects: including precalculus, calculus, physics, geometry ...References are available on request. As an undergraduate at New York University, I took two semesters of film production classes its renown Tisch School of the Arts Film & TV program. The courses covered pre-production, production and post-production on several different types of cameras. 41 Subjects: including precalculus, English, reading, Spanish ...Math is not easy. It takes time. It takes effort. 8 Subjects: including precalculus, reading, algebra 1, algebra 2 ...To accept these funds, I was required to administer standardized tests to the students before and after tutoring. Those students who completed a 12-19 hour course of tutoring universally improved their scores, most by more than an entire grade level! That's a whole grade level of improvement over the course of 6-10 weeks! 29 Subjects: including precalculus, English, reading, writing Related Berkeley, CA Tutors Berkeley, CA Accounting Tutors Berkeley, CA ACT Tutors Berkeley, CA Algebra Tutors Berkeley, CA Algebra 2 Tutors Berkeley, CA Calculus Tutors Berkeley, CA Geometry Tutors Berkeley, CA Math Tutors Berkeley, CA Prealgebra Tutors Berkeley, CA Precalculus Tutors Berkeley, CA SAT Tutors Berkeley, CA SAT Math Tutors Berkeley, CA Science Tutors Berkeley, CA Statistics Tutors Berkeley, CA Trigonometry Tutors
{"url":"http://www.purplemath.com/Berkeley_CA_Precalculus_tutors.php","timestamp":"2014-04-19T15:13:29Z","content_type":null,"content_length":"23861","record_id":"<urn:uuid:859630d4-4b4b-4810-9f43-067c41416d25>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00138-ip-10-147-4-33.ec2.internal.warc.gz"}
Hobbits and Prime Ministers: The Physics of DoorsHobbits and Prime Ministers: The Physics of Doors Over at Tor.com, Kate has begun a chapter-by-chapter re-read of The Hobbit, and has some thoughts on Chapter 1. It’s full of interesting commentary about characters and literary technique, but let’s get right to the important bit: Physics! Kate mentions in passing in the post that the Hobbit style round door with a knob in the middle seems a suboptimal design choice, however pretty it may look once Peter Jackson’s set designers get done with it. This draws a couple of comments noting that the doorknob-in-the-middle thing is an English affectation, and pointing to the Prime Minister’s residence at Number Ten Downing St. as an (Image from this blog post.) My comment to Kate when she mentioned this was “See, that’s why we had a Revolution– to get out from under people dumb enough to do that.” Why does this matter at all? Well, because when you push open a door, what you’re really doing is trying to make a solid object rotate about an axis defined by the hinges. For making something rotate, what matters is not just the force, but the torque you exert, which is the product of the force you exert and the distance from the rotation axis to the point where you exert that force. To most effectively open a door, then, you want to push on it as far away from the hinges as you can manage. Which is one reason why doorknobs are usually located on the opposite side of the dor from the hinges, and also why it’s really annoying when office buildings have those glass doors where you can’t easily tell the location of the hinge as you approach. If you guess wrong, you end up trying to open the door by pushing on the near edge, and end up looking like an idiot as you walk smack into the glass. I use this in class as an example of how physics turns up in unexpected places in the design of everyday objects. So, having established that putting the knob in the middle is kind of dumb from a physics perspective, how can we wring some more physics out of this. Well, Kate raised an interesting point: it might actually be the case that the knob-in-the-center for a round door is less dumb than for an ordinary rectangular door. A round door is, after all, exactly as wide as it is tall, which makes for dramatic framing of arriving dwarves: (Image from this collection of PR stills) It also gives you a longer lever arm– halfway across a round door is farther from the hinge than halfway across a narrower rectangular door. Then again, a round door probably has more surface area than a rectangular door of the same height, meaning it contains more material, and would thus be harder to move. So, which is it? The relevant quantity for our purposes is the change in the angular speed of the door (ω, the rate at which it’s rotating) for a given applied force. This is determined from the Angular Momentum Principle which relates torque to change in angular momentum: $\Delta L = \tau \Delta t$ Where τ is the torque exerted, Δ t is the time the torque is applied, and L is the symbol for angular momentum, for some obscure reason. We can write this in terms of force and distance and the “moment of inertia” I, which is the analogue of mass for a rotating system: $rF\Delta t = I \Delta \omega$ So, what’s this “moment of inertia” thing, I? Well, it depends on the mass of the object and also the distribution of that mass relative to the axis of rotation. You can find tables of the moments of inertia for common objects in lots of references, such as Hyperphysics and Wikipedia. These are usually listed for rotations about the center of mass, but you can use the parallel axis theorem to figure out the moment of inertia for doors of different shapes. For a circular door of radius R and mass M, then, we have: $\frac{R}{2}F \Delta t = \frac{1}{2}MR^{2} \Delta \omega$ Which gives us a change in rotational velocity: $\Delta \omega = \frac{F \Delta t}{MR}$ The corresponding calculation for a rectangular door of width W is: $\frac{W}{2}F \Delta t = \frac{1}{3}MW^{2} \Delta \omega$ Which gives us a change in rotational velocity: $\Delta \omega = \frac{3}{2} \frac{F \Delta t}{MW}$ The ratio of these two tells you which would be more efficient, assuming you used the same force for the same amount of time trying to open them: $\frac{\Delta \omega_{rect}}{\Delta \omega_{circ}} = \frac{3}{2} \frac{M_{rect}}{M_{circ}} \frac{R}{W}$ If the two doors have the same mass, you would need the round door to have a radius no more than 2/3rds the width of the rectangular door to break even. Any bigger than that, and the rectangle with a knob in the middle will open more easily than the round door with a knob in the middle. So, it turns out that British prime ministers have life a little easier than hobbits, at least when it comes to opening their front doors. But then both of them are working twice as hard as they would need to if they had the good physics sense to put the knob on the opposite side from the hinges… A couple of other issues relating to weird doors with knobs in the middle: 1) In the comments to the Tor post, it’s pointed out (comment #55) that putting hinges on a round door is kind of a hassle. It took a little while to find an image where you could see the hinges on the Bag End set door, but I eventually Googled up this collection of observations about the Fellowship of the Ring movie, which includes this image: This not only shows how the door is hung, but also that Jackson’s set designers have better sense than hobbits, and put the interior knob on the outer edge, where it belongs. 2) Of course, physics isn’t the only reason to put the knob at the edge of the door– it also allows you to easily integrate the latching mechanism to hold the door closed with the knob. If you want a centered knob, you have to either have the latch controlled elsewhere, or have some really long connections between the knob in the middle and the edge of the door. 3) For extra credit, use one of the images above to estimate the size and mass of the door to Bag End, and determine whether it would, in fact, be more difficult to open than a reasonable estimate of the size and mass of the door to Number Ten Downing St.. Show all your work for full credit. (Peter Jackson picture at the top of this post from this page.) 1. #1 Tim Eisele November 16, 2012 One way around the hinge problem would be to mount the door on a center pivot like a butterfly valve. Of course, that would change putting the knob in the middle from merely inconvenient, to downright unworkable. Plus it would cut the door opening in half. But at least the weight of the door wouldn’t tend to rip off the hinges. 2. #2 Paul November 16, 2012 In the case of Number 10 the central knob makes some sense; the door can’t be opened from the outside and is always opened by a policeman stationed inside, watching CCTV cameras to make sure he can let people straight in. 3. #3 Eric Lund November 16, 2012 Assuming the doors are made of wood of equal density ρ and thickness a [1], the mass of the round door will be ρπr^2a. Rectangular doors are typically a bit more than twice as tall as they are high (the door to my office measures about 2 m tall by 0.9 m wide)–call it a mass of 2.2ρw^2a for the rectangular door. That gives a ratio of (3.3/π)(w/r). So the Prime Minister [2] comes out ahead by about 5% if w and r are equal. But if w and r are equal, that would be a huge hobbit door. [1] In reality, I would expect the door at Number 10 to have some armor plating that a hobbit would not need. That means a higher density (or at least thickness) for the Prime Minister than for a [2] As Paul says above, the Prime Minister doesn’t actually open his door. He has a security detail to do that for him. 4. #4 HP November 16, 2012 With the round door and central knob arrangement, you could easily set up four or more bolts that extend radially into the frame, operated by a single twist of the knob, much like a naval bulkhead door. With heavy felt (assuming Hobbits lack natural rubber or polymers) weatherstripping, you’d have a nearly watertight seal. Perhaps Hobbiton is subjected to frequent flooding? 5. #5 Thomas Hahn United States November 16, 2012 We don’t even really need the picture, once we’ve looked up the necessary reference material – Tolkein writes that hobbits are between two and four feet (0.61–1.22 m) tall; We can assume that like humans, hobbits build doors for the height of the upper end of norm, giving that round door a 1.22m diameter. Using my monitor and a piece of lego I had lying around, I estimate the door to be 1/12th as thick as it’s diameter – .1 m. A cylinder with those dimensions encloses a volume of .11 cubic meters, Densities of woods range from 30-100 kg/m^3; Beechwood is a likely candidate (Native to New Zealand), and has a density of about 80 kg/m^3, giving us a weight of about 8.8 kg for the door of Bag End. Heavy, then, for a hobbit, but not unworkable. 6. #6 Kate Nepveu November 16, 2012 These comments are all brilliant. 7. #7 Lord November 16, 2012 The reason you might want a knob in the center is if it connects to a crossbar extending the width of the door, something for the security conscious. 8. #8 wereatheist Berlin, Germany November 17, 2012 Thomas Hahn, you´ve got your wood densities wrong by a factor of ten: Hard wood like oak, or ironwood, approaches or even surpasses 1000 kg/m^3, some don´t even float in seawater. 9. #9 Scott November 17, 2012 is 1/2 MR^2 the correct moment of inertia for the round door? For a thin disk rotating about its diameter, inertia = 1/4 MR^2, so I would expect the round door have a moment of inertia of 5/4 MR^ 2. I believe that would make angular velocity ratio 15R/4W, more than double 3R/2W. If I’m correct (and I have a cold, so I’m a bit foggy), the diameter of the round door would need to be less than half the width of a rectangular door to have the same torque at the outside edge. 10. #10 Derek Jones November 18, 2012 I don’t want people barging into my house and if putting the door knob in the center of the door makes it more difficult for them to enter then it is a good idea. Ten Downing Street has a policeman standing outside (not present in your picture) and when visitors turn up the door appear to magically open for them and close sharply after they enter. The circular entrance is itself another feature designed to reduce the probability that more than one person will enter at a time since walking in left/right of center requires stepping over a raised edge and potentially stooping so as not to gang one’s head. Back to door design, what really jumped out at me was how the forces on the door hinge were concentrated over such a small distance and are much closer to the floor than most doors. The surrounding frame would need to be very robust and the hinges very solid to prevent the door rotating and rubbing on the floor. Perhaps the door rubbing against the floor is another security feature making it hard to enter from outside (the door would need to be lifted as well as pushed). I don’t think the frame as shown would be capable of handling the torque generated by such closely spaced hinges, if made of wood, without some serious distortion. 11. #11 Hamish November 19, 2012 Regarding the last photo. There is no contradiction here. As is the case with Number 10 and many other doors in the UK, the knob is only for pushing and pulling — there is no locking/unlocking function. In that still from the film, you can’t actually see the far left of the door from the outside, where the keyhole should be! 12. #12 Rick Pikul November 19, 2012 IME, with a well hung door it doesn’t cause a problem. Yes you need some more force but it’s still not very much. I know this because I just checked by pushing a door closed[1] from the hinge side, (never mind the middle). The problem comes about when you have doors with an autoclosing mechanism and you have to press against a spring or counterweight. [1] Closed because that door isn’t quite perfectly hung and tends to open on its own. 13. #13 Mu November 29, 2012 Physics are nice, but the location of the door knob is really dictated by ergonomics. If you position the door knob on a round door in the traditional spot, you would be unable to open the door with the door knob more than about 20 degrees or so. You cannot follow the door in if you’re standing close to one side, unless you’re a very athletic contortionist. And unless you’re an orangutan, your arms will be too short to open the door all the way without stepping over the threshold. Door knob in the middle allows both options. 14. #14 Jeff Rubinoff December 12, 2012 Indeed, the round knobs in British doors of a certain period do not turn. They are for pulling and pushing only, and you open the door with a key. Also, I think the “suboptimal knob location” at No. 10 is being resident at No. 10. 15. #15 BPM December 13, 2012 Mu’s comment is spot on. Its the best spot from which to follow the door in. The inner handle can be on the edge because you are pulling the door and not stepping thru that impractical threshold.
{"url":"http://scienceblogs.com/principles/2012/11/16/hobbits-and-prime-ministers-the-physics-of-doors/","timestamp":"2014-04-16T22:06:11Z","content_type":null,"content_length":"99608","record_id":"<urn:uuid:8de1bf22-a8a8-4379-be0a-6a62e77a0693>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00230-ip-10-147-4-33.ec2.internal.warc.gz"}
st: RE: using the first n observations in a dataset w/o evaluating the w [Date Prev][Date Next][Thread Prev][Thread Next][Date index][Thread index] st: RE: using the first n observations in a dataset w/o evaluating the whole thing? From "Kieran McCaul" <kamccaul@meddent.uwa.edu.au> To <statalist@hsphsun2.harvard.edu> Subject st: RE: using the first n observations in a dataset w/o evaluating the whole thing? Date Fri, 4 Apr 2008 07:54:44 +0800 use in 1/100 using mydata, clear Kieran McCaul MPH PhD WA Centre for Health & Ageing (M573) University of Western Australia Level 6, Ainslie House 48 Murray St Perth 6000 Phone: (08) 9224-2140 Phone: -61-8-9224-2140 email: kamccaul@meddent.uwa.edu.au -----Original Message----- From: owner-statalist@hsphsun2.harvard.edu [mailto:owner-statalist@hsphsun2.harvard.edu] On Behalf Of Rodini, Mark Sent: Friday, 4 April 2008 7:32 AM To: statalist@hsphsun2.harvard.edu Subject: st: using the first n observations in a dataset w/o evaluating the whole thing? Suppose I have a large Stata dataset (e.g. 3,000,000 observations) and I only with to read in the first, say, 100 observations. I have tried the code, which works: use mydata if ( _N<100 ) However, evidently, this code goes through ALL 3 million observations to evaluate the expression in parentheses, which can be very time consuming (and sort of defeats the purpose). Is there a way to only read the first 100 observations without having to evaluate the entire dataset? Perhaps some application of the "set obs 100"? But I have not been Thank you. Mark Rodini * For searches and help try: * http://www.stata.com/support/faqs/res/findit.html * http://www.stata.com/support/statalist/faq * http://www.ats.ucla.edu/stat/stata/ * For searches and help try: * http://www.stata.com/support/faqs/res/findit.html * http://www.stata.com/support/statalist/faq * http://www.ats.ucla.edu/stat/stata/
{"url":"http://www.stata.com/statalist/archive/2008-04/msg00189.html","timestamp":"2014-04-18T01:26:36Z","content_type":null,"content_length":"7414","record_id":"<urn:uuid:0908a94c-1aea-45ea-93b5-cba5c86a64d9>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00596-ip-10-147-4-33.ec2.internal.warc.gz"}
Whole numbers and integers pls help Kidzworld Help Homework Help Need help with math, geography, science or any other school work? Post your questions here! Or maybe you are an expert. Share you knowledge here. Whole numbers and integers pls help Could someone explain me negative integers?Please help. Whole numbers and integers pls help I can help you. Do you need help adding them, subtracting them, multiplying them, or just comprehending them? All new users please search "Saveyou's unhelpful guide to KW", it will teach you all about what to expect. It should've gotten stickied, if you ask me. This sentence of my sig is dedicated to Tyco (tycotickles/tycolovesyou), James (HeyThereI'mJames), and the other James (NachozRule) May they always be apart of KW. Whole numbers and integers pls help Thanks. I don't understand adding, multiplying and subtracting. I wasn't on lesson when our teacher was explaining them. Whole numbers and integers pls help Okay, I'll start with multiplying. Personally, I think it's the easiest. When you multiply two negative integers it will always equal a positive integers. i.e. -2 times -2 = 4. When you multiply a positive integer and a negative integer, it will always equal a negative integer. i.e. -2 times 2 = -4. When you multiply a positive integer with a positive integer it will equal a positive integer. i.e. 2 times 2 = 4. Make sense so far? All new users please search "Saveyou's unhelpful guide to KW", it will teach you all about what to expect. It should've gotten stickied, if you ask me. This sentence of my sig is dedicated to Tyco (tycotickles/tycolovesyou), James (HeyThereI'mJames), and the other James (NachozRule) May they always be apart of KW. Whole numbers and integers pls help Posted By: sugarpetals Member since: August, 2013 Status: Offline Whole numbers and integers pls help "sugarpetals" wrote: -3(x +4)??/ Are you the same person on a different account?? All new users please search "Saveyou's unhelpful guide to KW", it will teach you all about what to expect. It should've gotten stickied, if you ask me. This sentence of my sig is dedicated to Tyco (tycotickles/tycolovesyou), James (HeyThereI'mJames), and the other James (NachozRule) May they always be apart of KW. Whole numbers and integers pls help Posted By: sugarpetals Member since: August, 2013 Status: Offline Whole numbers and integers pls help No, I have only one account. Whole numbers and integers pls help Thank you so much for explaining. What about adding? Whole numbers and integers pls help You're welcome! And for adding, I would use a number line if I were you. Just use logic. It's really hard to explain. Use a number line for adding and subtracting. All new users please search "Saveyou's unhelpful guide to KW", it will teach you all about what to expect. It should've gotten stickied, if you ask me. This sentence of my sig is dedicated to Tyco (tycotickles/tycolovesyou), James (HeyThereI'mJames), and the other James (NachozRule) May they always be apart of KW.
{"url":"http://www.kidzworld.com/forums/homework-help/t/1021830-whole-numbers-and-integers-pls-help","timestamp":"2014-04-19T16:04:51Z","content_type":null,"content_length":"81100","record_id":"<urn:uuid:b07ec047-c896-44e0-a31c-926720c13ff6>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00543-ip-10-147-4-33.ec2.internal.warc.gz"}
Prime Factors • Knowing rules for divisibility by 2, 3, 4, 5, 9 and 10 can be very helpful in finding the prime factorization of numbers. • When students seek to find the prime factorization of a number like 48, the process will be quicker and probably easier for them if they start with the composite factors like 6 and 8 than with a prime factor like 2 and 24. • Recognizing numbers which are the "basic-fact" products (the product of two numbers from 2 to 10) is also useful. For example, if the students are asked to find the prime factorization of numbers such as 27, 48, 56 and 63, knowing that they are part of the basic facts for multiplication can help them find the factors much more quickly. • When introducing the concept of prime factorization, work with two-digit numbers the first day and extend it to simple three-digit numbers in which they can apply their rules for divisibility like 240 or 189. • When students believe they have completely factored a number, have them check their answers by multiplying the factors to see if they get the original number.
{"url":"http://www.eduplace.com/math/mathsteps/5/b/5.primefact.tips.html","timestamp":"2014-04-16T16:16:20Z","content_type":null,"content_length":"5965","record_id":"<urn:uuid:42f3e25b-4b56-4e85-9568-826ef672c433>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00354-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Forum Discussions Math Forum Ask Dr. Math Internet Newsletter Teacher Exchange Search All of the Math Forum: Views expressed in these public forums are not endorsed by Drexel University or The Math Forum. Topic: Why does m = slope? Replies: 5 Last Post: Dec 2, 2004 5:30 PM Messages: [ Previous | Next ] Why does m = slope? Posted: Feb 2, 1997 4:33 PM I am an eighth grade teacher and was discussing slope with my Algebra I class and one of my students asked, "Why did they chose the letter m to represent slope? Why not s or another variable?" I know that s is referred to as distance in other mathematics, but why m? I have searched and all I can find is the explanation on how to derive slope, not why that specific variable was chosen. Does anyone know the answer? Please e-mail me the answer if you know why. My kids and I would really appreciate it! Also, if you know of a source I could use, please include that as well. Thanks in advance for your help! Janene Shearburn Sutherland Middle School Charlottesville, VA
{"url":"http://mathforum.org/kb/thread.jspa?threadID=369902","timestamp":"2014-04-20T21:52:29Z","content_type":null,"content_length":"22322","record_id":"<urn:uuid:18e62dfc-1cab-4323-abaf-f06f60e5dd45>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00365-ip-10-147-4-33.ec2.internal.warc.gz"}
Measures of Central Tendency Measures of central tendency are numbers that tend to cluster around the “middle” of a set of values. Three such middle numbers are the mean, the median, and the mode. For example, suppose your earnings for the past week were the values shown in Table 1. You could express your daily earnings from Table 1 in a number of ways. One way is to use the average, or mean, of the data set. The arithmetic mean is the sum of the measures in the set divided by the number of measures in the set. Totaling all the measures and dividing by the number of measures, you get $1,000 ÷ 5 = $200. Another measure of central tendency is the median, which is defined as the middle value when the numbers are arranged in increasing or decreasing order. When you order the daily earnings shown in Table 1, you get $50, $100, $150, $350, and $350. The middle value is $150; therefore, $150 is the median. If there is an even number of items in a set, the median is the average of the two middle values. For example, if we had four values—4, 10, 12, and 26—the median would be the average of the two middle values, 10 and 12; in this case, 11 is the median. The median may sometimes be a better indicator of central tendency than the mean, especially when there are outliers, or extreme values. Example 1 Given the four annual salaries of a corporation shown in Table 2, determine the mean and the median. The mean of these four salaries is $275,000. The median is the average of the middle two salaries, or $40,000. In this instance, the median appears to be a better indicator of central tendency because the CEO's salary is an extreme outlier, causing the mean to lie far from the other three salaries. Another indicator of central tendency is the mode, or the value that occurs most often in a set of numbers. In the set of weekly earnings in Table 1, the mode would be $350 because it appears twice and the other values appear only once. Notation and formulae The mean of a sample is typically denoted as x bar). The mean of a population is typically denoted as μ (pronounced mew). The sum (or total) of measures is typically denoted with a Σ. The formula for a sample mean is where n is the number of values. Mean for grouped data Occasionally, you may have data that consist not of actual values but rather of grouped measures. For example, you may know that, in a certain working population, 32 percent earn between $25,000 and $29,999; 40 percent earn between $30,000 and $34,999; 27 percent earn between $35,000 and $39,999; and the remaining 1 percent earn between $80,000 and $85,000. This type of information is similar to that presented in a frequency table. Although you do not have precise individual measures, you still can compute measures for grouped data, data presented in a frequency table. The formula for a sample mean for grouped data is where x is the midpoint of the interval, f is the frequency for the interval, fx is the product of the midpoint times the frequency, and n is the number of values. For example, if 8 is the midpoint of a class interval and there are ten measurements in the interval, fx = 10(8) = 80, the sum of the ten measurements in the interval. Σ fx denotes the sum of all the products in all class intervals. Dividing that sum by the number of measurements yields the sample mean for grouped data. For example, consider the information shown in Table 3. Substituting into the formula: Therefore, the average price of items sold was about $15.19. The value may not be the exact mean for the data, because the actual values are not always known for grouped data. Median for grouped data As with the mean, the median for grouped data may not necessarily be computed precisely because the actual values of the measurements may not be known. In that case, you can find the particular interval that contains the median and then approximate the median. Using Table 3, you can see that there is a total of 32 measures. The median is between the 16th and 17th measure; therefore, the median falls within the $11.00 to $15.99 interval. The formula for the best approximation of the median for grouped data is where L is the lower class limit of the interval that contains the median, n is the total number of measurements, w is the class width, f [med]is the frequency of the class containing the median, and Σ f [b] is the sum of the frequencies for all classes before the median class. Consider the information in Table 4. As we already know, the median is located in class interval $11.00 to $15.99. So L = 11, n = 32, w = 4.99, f [med] = 4, and Σ f [b] = 14. Substituting into the formula: Symmetric distribution In a distribution displaying perfect symmetry, the mean, the median, and the mode are all at the same point, as shown in Figure 1. Figure 1.For a symmetric distribution, mean, median, and mode are equal. Skewed curves As you have seen, an outlier can significantly alter the mean of a series of numbers, whereas the median will remain at the center of the series. In such a case, the resulting curve drawn from the values will appear to be skewed, tailing off rapidly to the left or right. In the case of negatively skewed or positively skewed curves, the median remains in the center of these three measures. Figure 2 shows a negatively skewed curve. Figure 2.A negatively skewed distribution, mean < median < mode. Figure 3 shows a positively skewed curve. Figure 3.A positively skewed distribution, mode < median < mean.
{"url":"http://www.cliffsnotes.com/math/statistics/numerical-measures/measures-of-central-tendency","timestamp":"2014-04-20T08:23:23Z","content_type":null,"content_length":"169078","record_id":"<urn:uuid:6fe21fbb-4b7b-43e2-bc08-69a8f341c5bc>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00422-ip-10-147-4-33.ec2.internal.warc.gz"}
Algebra 1 Tutors West Bloomfield, MI 48322 Master Certified Coach for Exam Prep, Mathematics, & Physics ...I look forward to speaking with you and to establishing a mutually beneficial arrangement in the near future! Best Regards, Brandon S. Algebra 1 covers topics such as linear equations, systems of linear equations, polynomials, factoring, quadratic equations,... Offering 10+ subjects including algebra 1
{"url":"http://www.wyzant.com/Detroit_MI_algebra_1_tutors.aspx","timestamp":"2014-04-20T21:12:34Z","content_type":null,"content_length":"62237","record_id":"<urn:uuid:1a4f3745-5b1c-4d68-b1a2-4fb12739f2a9>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00388-ip-10-147-4-33.ec2.internal.warc.gz"}
Temecula Algebra 1 Tutor Find a Temecula Algebra 1 Tutor ...My philosophy is that in order to help a student understand their problem areas, you must first understand how the student learns and gear the lesson/tutoring to the students style in order for them to comprehend the information that is being presented to them. My main goal when tutoring is to d... 35 Subjects: including algebra 1, Spanish, English, reading ...I have achieved various honors in each subject throughout my life, ranging from first place in Math Olympiads to full scores on standardized tests. Whenever my friends were struggling in a subject, I was always there to help them. Aside from academics, I have also been a lifeguard, so I can teach beginners how to swim. 38 Subjects: including algebra 1, chemistry, calculus, statistics I have always had a passion for learning and partnering with others in the learning process. I currently hold my AA in Liberal Arts from El Camino College and I have been tutoring in group settings and one-on-one for the past five years privately, at church, and through various home school academie... 16 Subjects: including algebra 1, reading, English, grammar ...I can help you with word choice, order and arrangement of sentences, fluidity, transitions, fiction/non-fiction, persuasion, expository, reference material, and much more. I have taught English for several years and can help you with spelling, vocabulary, word choice, grammar, syntax, word order... 25 Subjects: including algebra 1, English, writing, reading ...I have always believed that the best learning and study methods come from repeat exposure and extreme practice! My methods will be patient and comprehensive (and hopefully very relatable) lessons on any uncertainties and reinforcement of all necessary skills. I also realize how much of a chore ... 16 Subjects: including algebra 1, reading, calculus, writing
{"url":"http://www.purplemath.com/Temecula_algebra_1_tutors.php","timestamp":"2014-04-19T23:40:35Z","content_type":null,"content_length":"23930","record_id":"<urn:uuid:aa7e6b34-9b14-4860-a77f-df631cf77f7e>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00411-ip-10-147-4-33.ec2.internal.warc.gz"}
A fresh look at testing for asynchronous communication Testing is one of the fundamental techniques for verifying if a computing system conforms to its specification. Developing an efficient and exhaustive test suite is a challenging problem, especially in the setting of distributed systems. We take a fresh look at the theory of testing for message-passing systems based on a natural notion of observability in terms of input-output relations. We propose two notions of test equivalence: one which corresponds to presenting all test inputs up front and the other which corresponds to interactively feeding inputs to the system under test. We compare our notions with those studied earlier, notably the equivalence proposed by Tretmans. In Tretmans' framework, asynchrony is modelled using synchronous communication by augmenting the state space of the system with queues. We show that the first equivalence we consider is strictly weaker than Tretmans' equivalence and undecidable, whereas the second notion is incomparable. We also establish (un)decidability results for these equivalences. This is a joint work with Puneet Bhateja and Madhavan Mukund from the Chennai Mathematical Institute
{"url":"http://www.newton.ac.uk/programmes/LAA/gastin.html","timestamp":"2014-04-18T15:44:41Z","content_type":null,"content_length":"3067","record_id":"<urn:uuid:1c870d41-461b-4731-a754-7421daf097b8>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00334-ip-10-147-4-33.ec2.internal.warc.gz"}
Verifying solutions with partial derivatives. I have no idea how to do these two questions. I know you have to take partial derivatives with respect to certain variables and the like. Basically I understand the concepts to some degree but I wouldn't know how to go about starting. If anyone can give me a starting push that is all I really need I don't want the answer. This is the assignment page if there are troubles accessing it let me know. It is questions 3 c) and d). After talking with one of my peers He basically just told me to simply everything that wasn't of concern for me in the equation and treat them like constants. This is what I had originally assumed. If anyone can confirm this please let me know. EDIT: AHAHAHAHA that was so ridiculously easy once its simplified. Carry on good sirs and this thread shall be closed. :D
{"url":"http://www.physicsforums.com/showthread.php?p=3513373","timestamp":"2014-04-19T15:04:24Z","content_type":null,"content_length":"20601","record_id":"<urn:uuid:830809f4-2e56-4603-9b2b-872a14014fd4>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00147-ip-10-147-4-33.ec2.internal.warc.gz"}
Bayes' Theorem Ever wonder what happens to those amazing breakthroughs you hear about on the news, but never hear about again? Somehow, when they're finally released, the amazing qualities of, say, that new wonder drug, never seem to reduce the suffering the way most people hoped. Look through the reports on the test results of those breakthroughs, and you'll frequently find one line that says p < 0.05. In other words, the tests indicate that the results reported on in the report had only a 5% chance of happening randomly. If I flip a coin 20 times, and heads shows up 15 or more times (in other words, greater than 14 times), we can work out that there is roughly a 2.07% chance of that happening at random. Reporting on this, we'd note that p < 0.05, and use this to justify examining whether the coin is really fair. That works great for events dealing with pure randomness, such as coins, but how do you update the probabilities for non-random factors? In other words, how do you take new knowledge into account as you go? This is where Bayes' theorem comes in. It's named after Thomas Bayes, who developed it in the mid-1700s, but the basic idea has been around for some time. You should be familiar, of course, with the basic formula for determining the probability of a targeted outcome: $Probability=\frac{targeted \ outcome(s)}{total \ possibilities}$ The following video describes the process of Bayes' theorem without going into any more mathematics than the above formula, using the example of an e-mail spam filter: To get into the mathematical theorem itself, it's important to understand a few things. First, Bayes' theorem pays close attention to the differences between the event (an e-mail actually being spam or not, in the above video) and the test for that event (whether a given e-mail passes the spam test or not). It doesn't assume that the test is 100% reliable for the event. BetterExplained.com's post An Intuitive (and Short) Explanation of Bayes’ Theorem takes you from this premise and a similar example, all the way up to the formula for Bayes' theorem. It's interesting to note that it's effectively the same as the classic probability formula above, but modified to account for new knowledge. The following video uses another example, and is also simple to follow, but delves into the math as well as the process. Understanding the process first, and then seeing how the math falls into place helps make it clear: The tree structure used in this video helps dramatize one clear point. Bayes' theorem allows you to see a particular result, and make an educated guess as to what chain of events led to that result. The p < 0.05 approach simply says We're at least 95% certain that these results didn't happen randomly. The Bayes' theorem approach, on the other hand, says Given these results, here are the possible causes in order of their likelihood. If I shuffle a standard 52-card deck, probability tells us that the odds of the top card being an Ace of Spades is 1/52. If I turn up the top card and show you that it's actually the 4 of Clubs, our knowledge not only chance the odds of the top card being the Ace of Spades to 0/52, but gives us enough certain data we can switch to employing logic. Having seen the 4 of Clubs on top and knowing that all the cards in the deck are different, I can logically conclude that the 26th card in the deck is NOT the 4 of Clubs. We can switch from probability to logic in this manner because we've gone from randomness to certainty. What if I don't introduce certainty, however? What if I look at the top card without showing it to you, and only state that it's an Ace? This is the strength of Bayes' theorem. It bridges the ground between probability with logic, by allowing you to update probabilities based on your current state of knowledge, not just randomness. That's really the most important point about Bayes' theorem. There's much more to Bayes' theorem than I could convey in a short blog post. If you're interested in a more in-depth look, I suggest the YouTube video series Bayes' Theorem for Everyone. I think you'll find it surprisingly fascinating. No Response to "Bayes' Theorem"
{"url":"http://headinside.blogspot.com/2012/11/bayes-theorem.html","timestamp":"2014-04-21T14:59:06Z","content_type":null,"content_length":"658323","record_id":"<urn:uuid:32ad9ffc-c687-46e9-8387-f676a741ae5a>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00488-ip-10-147-4-33.ec2.internal.warc.gz"}
Numbers part one: Integers from the Course PHP with MySQL Essential Training Start learning with our library of video tutorials taught by experts. Get started PHP is a popular, reliable programming language at the foundation of many smart, data-driven websites. This comprehensive course from Kevin Skoglund helps developers learn the basics of PHP (including variables, logical expressions, loops, and functions), understand how to connect PHP to a MySQL database, and gain experience developing a complete web application with site navigation, form validation, and a password-protected admin area. Kevin also covers the basic CRUD routines for updating a database, debugging techniques, and usable user interfaces. Along the way, he provides practical advice, offers examples of best practices, and demonstrates refactoring techniques to improve existing code. Over the course of the next two movies, we're going to be talking about numbers. And we're going to start out by first talking about Integers. Integers are whole numbers, so that's numbers like 1, 2, 3, 4, 5, as well as negative 1, negative 2, negative 3, and so on. I think what a number is, is pretty intuitive. We all kind of have an idea of that by now. But we do still need to see how we work with them in PHP. Let's create a new document we can work with. I'm going to open up basic.HTML and I'm going to do just Save As. I'm going to change this one to be integers.PHP. Make sure you've got PHP at the end. Integers. Now, we've seen how we could just have our basic PHP tags and assign a variable. var1 equals 3, and var2 equals 4. And we saw how we can add those together and we can echo back the result. Let me go ahead and just show you a bit more math. We're going to be echoing the result back from adding 1 plus 2 plus the value of var1. That's 3, and then multiplying that by var2. This asterisk is what we use for multiplying when we're programming. And then, the result of all of that in parentheses is going to be divided by, that's what the forward slash here means, divided by 2 minus 5. Now, the parentheses, the order of operations between the multiplication and division and all that, it's going to still apply. It's just basic math that follows the basic math rules. So, let's try that out. Let's open that up in a browser. Instead of string functions, we're going to be looking for integers. And there it is, basic math, the answer is 7. So, it added all of those together and did a calculation for us. Let me show you a few other math things that we can do, and that's to use some functions. So, here's if you want to find absolute value, you use the abs function. So, 0 minus 300, that will, of course, return the absolute value, which would be 300 instead of negative 300. Exponential, raising something to the power of something. The function we're going to use is pow. So, you see the arguments are 2 and 8, so that's 2 to the 8th power, and then we have the square root, sqrt. The square root of 100, and then fmod is for modulo. Now, if you've never worked for modulo before, what it's going to do is it's going to take 20 and divide it by 7. And return just the remainder to me. What's left over from that, things don't divide evenly, it will tell us. That can be very handy for finding out whether one number divides evenly into another number. And then rand, of course, will return a random number to us. And rand with a minimum and maximum value will return a random number within that range. Let's try all of those out. Save it. Back, and here you go. And you can see the random number it gave me was a really large number. Whereas the random min max it gave me was between 1 and 10. If we reload that page, you'll see that it gives me different values for each of those this time around. And you can see what I was also talking about with modulo. We were dividing 20 by 7, 20 by 7, will divide into it twice, with 6 left over. So, the modulo is the remainder that's left after that. So, we could know that it does not divide evenly. In addition to these functions, I want to also show you how to increment and decrement the numbers a little bit. Let's do a new row here and let's, first, I want to show you how to do plus equals. So, we'll do some PHP, and we're going to say var2 plus equals 4. Now, what that's going to do is it's going to update variable 2 in place by adding 4 to it. It's the same way we were doing that concatenation in assignment at the same time when we were working with strings. And if the exact same thing as if we had actually typed out var2 equals var2 plus 4, it just saves us some typing to write it this way. Let's echo that value back just so we can see what it is, var2, we'll put a br tag at the end. Now, not only is there the plus version of this, but there's a minus version, a multiplication version and a division version. So ,we can do all of those as well. And just so they aren't all exactly the same, I'm going to change this four to a three. So, that'll multiply it by 3 and then divide it by 4. Let's try this out real quick. Save the document. Switch back. We'll reload the page. And you can see they took the last value. It starts out with a value of four It adds four to it to get 8, which subtracts four to get 4. Multiplies by 3 to get 12, and then divides by 4 to get 3. Now, incrementing and decrementing by one is super common, especially when we start working with loops. I'll just show you how that works, so we're going to call this increments. And if we want to increment plus equals 1, we could just do it like that. There is nothing wrong with that and it might actually be very clear what we're doing. But it's so common that we could just do it as var2++. That just mean adds one to it. Increment it. This actually comes from the world of C. C uses this when we're incrementing loops. It's just a nice handy short cut. We can do the same thing with decrementing with minus, minus. So, if you're going through a loop and you want to keep a variable incremented. Every time you go through the loop to count your iterations through the loop, you can do that by using this plus, plus or minus, minus. And we could try it out real quick. As you would expect, it comes up and tells us that it went from 3 to 4 and 4 back down to 3. The last thing that I want to show you about integers before we move on to floating point numbers. Is I just want to make sure that it's clear to you that there is a difference between the number 1 and the string 1, right? Those are not the same thing. This is just a character. It could might as well be x or y or z. It's just a character on the screen, it's not a number that's ready to be added. Now, in truth, PHP will do its best job to add it together. If we did something like this, PHP will say, all right, I realize this is not a number. So, it's not suitable for adding, but I wonder if I can convert it. I wonder if I can change it into an integer so that I can complete the operation. And it will do that. Let's just see that real quick, and you see it came up with two. But if I said this is 1 plus 2 houses, usually it comes up with 3. The word houses just disappears because it converts it. And the best way it can convert it to an integer is to pick the number that it see out of it. And throw the rest of it away. And that's the only way that it knows how to. In general, you should not rely on PHP to convert strings into integers for you. You should be working with one or the other, and you should intentionally switch from one to the other if you really need to. And we'll talk about how to do that later on. But, it's considered sloppy programming to combine types like this without explicitly converting them to the right type. Find answers to the most frequently asked questions about PHP with MySQL Essential Training. Access exercise files from a button right under the course name. Search within course videos and transcripts, and jump right to the results. Remove icons showing you already watched videos if you want to start over. Make the video wide, narrow, full-screen, or pop the player out of the page into its own window. Click on text in the transcript to jump to that spot in the video. As the video plays, the relevant spot in the transcript will be highlighted.
{"url":"http://www.lynda.com/MySQL-tutorials/Numbers-part-one-Integers/119003/136943-4.html?w=0","timestamp":"2014-04-20T04:29:43Z","content_type":null,"content_length":"245319","record_id":"<urn:uuid:9df225e9-bbaa-4f1a-b2a1-f047bfb58915>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00115-ip-10-147-4-33.ec2.internal.warc.gz"}
Results 1 - 10 of 21 "... The theme of this paper is profunctors, and their centrality and ubiquity in understanding concurrent computation. Profunctors (a.k.a. distributors, or bimodules) are a generalisation of relations to categories. Here they are first presented and motivated via spans of event structures, and the seman ..." Cited by 263 (33 self) Add to MetaCart The theme of this paper is profunctors, and their centrality and ubiquity in understanding concurrent computation. Profunctors (a.k.a. distributors, or bimodules) are a generalisation of relations to categories. Here they are first presented and motivated via spans of event structures, and the semantics of nondeterministic dataflow. Profunctors are shown to play a key role in relating models for concurrency and to support an interpretation as higher-order processes (where input and output may be processes). Two recent directions of research are described. One is concerned with a language and computational interpretation for profunctors. This addresses the duality between input and output in profunctors. The other is to investigate general spans of event structures (the spans can be viewed as special profunctors) to give causal semantics to higher-order processes. For this it is useful to generalise event structures to allow events which “persist.” - Proceedings of the Tenth International Conference on Application and Theory of Petri Nets , 1989 "... ABSTRACT The "stubborn set " theory and method for generating reduced state spaces is presented. The theory takes advantage of concurrency, or more generally, of the lack of interaction between transitions, captured by the notion of stubborn sets. The basic method preserves all terminal states and t ..." Cited by 155 (1 self) Add to MetaCart ABSTRACT The "stubborn set " theory and method for generating reduced state spaces is presented. The theory takes advantage of concurrency, or more generally, of the lack of interaction between transitions, captured by the notion of stubborn sets. The basic method preserves all terminal states and the existence of nontermination. A more advanced version suited to the analysis of properties of reactive systems is developed. It is shown how the method can be used to detect violations of invariant properties. The method preserves the liveness (in Petri net sense) of transitions, and livelocks which cannot be exited. A modification of the method is given which preserves the language generated by the system. The theory is developed in an abstract variable/transition framework and adapted to elementary - IEEE Transactions on Software Engineering , 1993 "... Abstmct- The original proposals of several stochastic Petri net modeling techniques and of generalized stochastic Petri nets (GSPN) in particular were based mainly on the characteristics of their underlying stochastic processes. This led to the use of GSPN only as a shortened notation for the descri ..." Cited by 55 (9 self) Add to MetaCart Abstmct- The original proposals of several stochastic Petri net modeling techniques and of generalized stochastic Petri nets (GSPN) in particular were based mainly on the characteristics of their underlying stochastic processes. This led to the use of GSPN only as a shortened notation for the description of stochastic models. Although already quite useful in practice, this approach did not fully exploit the benefits of a Petri net description; in particular, it did not use any of the results of classical net theory. The integration of qualitative net theory results, together with the probabilistic analysis approach, requires a deep structural foundation of the GSPN definition. In this paper, the class of Petri nets obtained by eliminating timing from GSPN models while preserving the qualitative behavior is identified. Structural results for those nets are also derived, thus obtaining the first structural analysis of Petri nets with priority and inhibitor arcs. A revision of the GSPN definition based on the structural properties of the models is then presented. The main advantage is that for a (wide) class of nets, the definition of firing probabilities of conflicting immediate transitions does not require the information on reachable markings (which was, instead, necessary with the original definition). Identification of the class of models for which the net-level specification is possible is also based on the structural analysis results. The new procedure for the model specification is illustrated by means of an example, which shows the usefulness of the new approach. A net level specification of the model associated with efficient structural analysis techniques can have a substantial impact on model analysis as well. Index Terms-Conflicts and concurrency, Markovian models, performance modeling, probabilistic specification, stochastic Petri nets, structural Petri net analysis, timed and immediate transi-tions, transition priorities. I. , 1993 "... We study the complexity of several standard problems for 1-safe Petri nets and some of its subclasses. We prove that reachability, liveness, and deadlock are all PSPACE-complete for 1-safe nets. We also prove that deadlock is NP-complete for free-choice nets and for 1-safe free-choice nets. Finally, ..." Cited by 44 (7 self) Add to MetaCart We study the complexity of several standard problems for 1-safe Petri nets and some of its subclasses. We prove that reachability, liveness, and deadlock are all PSPACE-complete for 1-safe nets. We also prove that deadlock is NP-complete for free-choice nets and for 1-safe free-choice nets. Finally, we prove that for arbitrary Petri nets, deadlock is equivalent to reachability and liveness. This paper is to be presented at FST&TCS 13, Foundations of Software Technology & Theoretical Computer Science, to be held 1517 December 1993, in Bombay, India. A version of the paper with most proofs omitted is to appear in the proceedings. 1 Introduction Petri nets are one of the oldest and most studied formalisms for the investigation of concurrency [33]. Shortly after the birth of complexity theory, Jones, Landweber, and Lien studied in their classical paper [24] the complexity of several fundamental problems for Place/Transition nets (called in [24] just Petri nets). Some years later, Howell,... - Theoretical Computer Science , 1989 "... : Concurrent transition systems (CTS's), are ordinary nondeterministic transition systems that have been equipped with additional concurrency information, specified in terms of a binary residual operation on transitions. Each CTS C freely generates a complete CTS or computation category C , whose ..." Cited by 40 (5 self) Add to MetaCart : Concurrent transition systems (CTS's), are ordinary nondeterministic transition systems that have been equipped with additional concurrency information, specified in terms of a binary residual operation on transitions. Each CTS C freely generates a complete CTS or computation category C , whose arrows are equivalence classes of finite computation sequences, modulo a congruence induced by the concurrency information. The categorical composition on C induces a "prefix" partial order on its arrows, and the computations of C are conveniently defined to be the ideals of this partial order. The definition of computations as ideals has some pleasant properties, one of which is that the notion of a maximal ideal in certain circumstances can serve as a replacement for the more troublesome notion of a fair computation sequence. To illustrate the utility of CTS's, we use them to define and investigate a dataflow-like model of concurrent computation. The model consists of machines, which ... , 1995 "... We extend labelled transition systems to distributed transition systems by labelling the transition relation with a finite set of actions, representing the fact that the actions occur as a concurrent step. We design an action-based temporal logic in which one can explicitly talk about steps. The log ..." Cited by 29 (5 self) Add to MetaCart We extend labelled transition systems to distributed transition systems by labelling the transition relation with a finite set of actions, representing the fact that the actions occur as a concurrent step. We design an action-based temporal logic in which one can explicitly talk about steps. The logic is studied to establish a variety of positive and negative results in terms of axiomatizability and decidability. Our positive results show that the step notion is amenable to logical treatment via standard techniques. They also help us to obtain a logical characterization of two well known models for distributed systems: labelled elementary net systems and labelled prime event structures. Our negative results show that demanding deterministic structures when dealing with a "noninterleaved " notion of transitions is, from a logical standpoint, very expressive. They also show that another well known model of distributed systems called asynchronous transition systems exhibits a surprising a... - IN APPLICATIONS AND THEORY OF PETRI NETS , 1996 "... The objective of this thesis is to give time Petri nets a partial order semantics, like the nonsequential processes of untimed net systems. A time process of a time Petri net is defined as a traditionally constructed causal process with a valid timing. This means that the events of the process are l ..." Cited by 16 (1 self) Add to MetaCart The objective of this thesis is to give time Petri nets a partial order semantics, like the nonsequential processes of untimed net systems. A time process of a time Petri net is defined as a traditionally constructed causal process with a valid timing. This means that the events of the process are labeled with occurrence times which must satisfy specific validness criteria. An efficient algorithm for checking validness of known timings is presented. Interleavings of the time processes are defined as linearizations of the causal partial order of events where also the time order of events is preserved. The relationship between firing schedules of a time Petri net and the interleavings of the time processes of the net is shown to be bijective. Also, a sufficient condition is given for when the invalidity of timings for a process can be inferred from its initial subprocess. An alternative characterization for the validness of timings results in an algorithm for constructing the set of all vali... , 1999 "... : Objects are studied as higher-level net tokens having an individual dynamical behaviour. In the context of Petri net research it is quite natural to also model such tokens by Petri nets. To distinguish them from the system net, they are called object nets. Object nets behave like tokens, i.e., the ..." Cited by 13 (2 self) Add to MetaCart : Objects are studied as higher-level net tokens having an individual dynamical behaviour. In the context of Petri net research it is quite natural to also model such tokens by Petri nets. To distinguish them from the system net, they are called object nets. Object nets behave like tokens, i.e., they are lying in places and are moved by transitions. In contrast to ordinary tokens, however, they may change their state (i.e. their marking) when lying in a place or when being moved by a transition. By this approach an interesting and challenging two-level system modelling technique is introduced. Similar to the object-oriented approach, complex systems are modelled close to their real appearance in a natural way to promote clear and reliable concepts. Applications in fields like workflow, agent-oriented approaches (mobile agents and/or intelligent agents as in AI research) or open system networks are feasible. This paper gives a precise definition of the basic model together with a suitab... - In Fourteenth ACM Symposium on Principles of Programming Languages , 1987 "... Using concurrent transition systems [Sta86], we establish connections between three models of concurrent process networks, Kahn functions, input /output automata, and labeled processes. For each model, we define three kinds of algebraic operations on processes: the product operation, abstractio ..." Cited by 9 (7 self) Add to MetaCart Using concurrent transition systems [Sta86], we establish connections between three models of concurrent process networks, Kahn functions, input /output automata, and labeled processes. For each model, we define three kinds of algebraic operations on processes: the product operation, abstraction operations, and connection operations. We obtain homomorphic mappings, from input/output automata to labeled processes, and from a subalgebra (called "input/output processes") of labeled processes to Kahn functions. The proof that the latter mapping preserves connection operations amounts to a new proof of the "Kahn Principle." Our approach yields: (1) extremely simple definitions of the process operations; (2) a simple and natural proof of the Kahn Principle that does not require the use of "strategies" or "scheduling arguments"; (3) a semantic characterization of a large class of labeled processes for which the Kahn Principle is valid, (4) a convenient operational semantics... , 1998 "... The main concern of this thesis is the formal reasoning about reactive systems, that is, systems that repeatedly act and react in interaction with their environment without necessarily terminating. When describing such systems the focus is not on what is computed but rather on the interaction capabi ..." Cited by 8 (0 self) Add to MetaCart The main concern of this thesis is the formal reasoning about reactive systems, that is, systems that repeatedly act and react in interaction with their environment without necessarily terminating. When describing such systems the focus is not on what is computed but rather on the interaction capabilities over time. Moreover, reactive systems are usually highly concurrent, typically spatially distributed, and often non-deterministic. Such systems include telecommunication protocols, telephone switches, air-tra#c controllers, circuits, and many more. The goal of formal reasoning is to achieve system with provably correct behaviour. The task of formal reasoning is to specify systems and properties of systems as mathematical objects and to supply methodologies and techniques supporting formal proofs of properties of systems. Numerous semantic formalisms such as synchronisation trees, event structures, transition systems, temporal logics, Petri nets, and process algebras, to mention a few, have been proposed for the specification of reactive systems. In particular, formalisms vary in the sort of reasoning methodologies they support and encourage. Some methods have little practical pertinence, others have more. Some methods are decidable, others are not. Hence, numerous methods for reasoning about systems have been proposed; ranging from manual methods for the analysis of the most simple isolated aspects of systems to automatic methods for the synthesis of complex systems from succinct logical specifications.
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=1623748","timestamp":"2014-04-20T07:03:35Z","content_type":null,"content_length":"41034","record_id":"<urn:uuid:7448d079-e377-4194-ad23-2d4a286977a3>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00270-ip-10-147-4-33.ec2.internal.warc.gz"}
Some distance properties of latent root and vector methods used in multivariate analysis Results 1 - 10 of 110 , 1997 "... Image rendering maps scene parameters to output pixel values; animation maps motion-control parameters to trajectory values. Because these mapping functions are usually multidimensional, nonlinear, and discontinuous, #nding input parameters that yield desirable output values is often a painful pr ..." Cited by 189 (3 self) Add to MetaCart Image rendering maps scene parameters to output pixel values; animation maps motion-control parameters to trajectory values. Because these mapping functions are usually multidimensional, nonlinear, and discontinuous, #nding input parameters that yield desirable output values is often a painful process of manual tweaking. Interactiveevolution and inverse design are two general methodologies for computer-assisted parameter setting in which the computer plays a prominent role. In this paper we present another such methodology. - Biometrics , 1971 "... Biometrics is currently published by International Biometric Society. Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at ..." Cited by 107 (0 self) Add to MetaCart Biometrics is currently published by International Biometric Society. Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at , 2005 "... An expression-invariant 3D face recognition approach is presented. Our basic assumption is that facial expressions can be modelled as isometries of the facial surface. This allows to construct expression-invariant representations of faces using the bending-invariant canonical forms approach. The re ..." Cited by 103 (22 self) Add to MetaCart An expression-invariant 3D face recognition approach is presented. Our basic assumption is that facial expressions can be modelled as isometries of the facial surface. This allows to construct expression-invariant representations of faces using the bending-invariant canonical forms approach. The result is an efficient and accurate face recognition algorithm, robust to facial expressions, that can distinguish between identical twins (the first two authors). We demonstrate a prototype system based on the proposed algorithm and compare its performance to classical face recognition methods. The numerical methods employed by our approach do not require the facial surface explicitly. The surface gradients field, or the surface metric, are sufficient for constructing the expression-invariant representation of any given face. It allows us to perform the 3D face recognition task while avoiding the surface reconstruction stage. , 1996 "... techniques for the analysis of polygenes ..." - IEEE Trans. Pattern Analysis and Machine Intelligence , 2004 "... Abstract—An optimization criterion is presented for discriminant analysis. The criterion extends the optimization criteria of the classical Linear Discriminant Analysis (LDA) through the use of the pseudoinverse when the scatter matrices are singular. It is applicable regardless of the relative size ..." Cited by 28 (8 self) Add to MetaCart Abstract—An optimization criterion is presented for discriminant analysis. The criterion extends the optimization criteria of the classical Linear Discriminant Analysis (LDA) through the use of the pseudoinverse when the scatter matrices are singular. It is applicable regardless of the relative sizes of the data dimension and sample size, overcoming a limitation of classical LDA. The optimization problem can be solved analytically by applying the Generalized Singular Value Decomposition (GSVD) technique. The pseudoinverse has been suggested and used for undersampled problems in the past, where the data dimension exceeds the number of data points. The criterion proposed in this paper provides a theoretical justification for this procedure. An approximation algorithm for the GSVD-based approach is also presented. It reduces the computational complexity by finding subclusters of each cluster and uses their centroids to capture the structure of each cluster. This reduced problem yields much smaller matrices to which the GSVD can be applied efficiently. Experiments on text data, with up to 7,000 dimensions, show that the approximation algorithm produces results that are close to those produced by the exact algorithm. Index Terms—Classification, clustering, dimension reduction, generalized singular value decomposition, linear discriminant analysis, text mining. 1 - DEPARTMENT OF COMPUTER SCIENCE AND ENGINEERING, THE UNIVERSITY OF CALIFORNIA, SAN DIEGO , 1994 "... This dissertation examines the use of adaptive methods to automatically improve the performance of ranked text retrieval systems. The goal of a ranked retrieval system is to manage a large collection of text documents and to order documents for a user based on the estimated relevance of the document ..." Cited by 26 (5 self) Add to MetaCart This dissertation examines the use of adaptive methods to automatically improve the performance of ranked text retrieval systems. The goal of a ranked retrieval system is to manage a large collection of text documents and to order documents for a user based on the estimated relevance of the documents to the user's information need (or query). The ordering enables the user to quickly find documents of interest. Ranked retrieval is a difficult problem because of the ambiguity of natural language, the large size of the collections, and because of the varying needs of users and varying collection characteristics. We propose and empirically validate general adaptive methods which improve the ability of a large class of retrieval systems to rank documents effectively. Our main adaptive method is to numerically optimize free parameters in a retrieval system by minimizing a non-metric criterion function. The criterion measures how well the system is ranking documents relative to a target ordering, defined by a set of training queries which include the users' desired document orderings. Thus, the system learns parameter settings which better enable it to rank relevant documents before irrelevant. The non-metric approach is interesting because it is a general adaptive method, an alternative to supervised methods for training neural networks in domains in which rank order or prioritization is important. A second adaptive method is also examined, which is applicable to a restricted class of retrieval systems but which permits an analytic solution. The adaptive methods are applied to a number of problems in text retrieval to validate their utility and practical efficiency. The applications include: A dimensionality reduction of vector-based document representations to a vector spa... , 1995 "... This paper considers numerical algorithms for finding local minimizers of metric multidimensional scaling problems. Both the STRESS and SSTRESS criteria are considered, and the leading algorithms for each are carefully explicated. A new algorithm, based on Newton's method, is proposed. Translational ..." Cited by 24 (3 self) Add to MetaCart This paper considers numerical algorithms for finding local minimizers of metric multidimensional scaling problems. Both the STRESS and SSTRESS criteria are considered, and the leading algorithms for each are carefully explicated. A new algorithm, based on Newton's method, is proposed. Translational and rotational indeterminancy is removed by a parametrization that has not previously been used in multidimensional scaling algorithms. In contrast to previous algorithms, a very pleasant feature of the new algorithm is that it can be used with either the STRESS or the SSTRESS criterion. Numerical results are presented. Key words: Metric multidimensional scaling, STRESS criterion, SSTRESS criterion, unconstrained optimization, Newton's method. Department of Computational and Applied Mathematics, Rice University, Houston, TX 77251-1892. This author was generously supported by a Patricia R. Harris Fellowship. y Department of Computational and Applied Mathematics and Center for Research in... "... Nonlinear dimensionality reduction methods are often used to visualize high-dimensional data, although the existing methods have been designed for other related tasks such as manifold learning. It has been difficult to assess the quality of visualizations since the task has not been well-defined. We ..." Cited by 24 (4 self) Add to MetaCart Nonlinear dimensionality reduction methods are often used to visualize high-dimensional data, although the existing methods have been designed for other related tasks such as manifold learning. It has been difficult to assess the quality of visualizations since the task has not been well-defined. We give a rigorous definition for a specific visualization task, resulting in quantifiable goodness measures and new visualization methods. The task is information retrieval given the visualization: to find similar data based on the similarities shown on the display. The fundamental tradeoff between precision and recall of information retrieval can then be quantified in visualizations as well. The user needs to give the relative cost of missing similar points vs. retrieving dissimilar points, after which the total cost can be measured. We then introduce a new method NeRV (neighbor retrieval visualizer) which produces an optimal visualization by minimizing the cost. We further derive a variant for supervised visualization; class information is taken rigorously into account when computing the similarity relationships. We show empirically that the unsupervised version outperforms existing unsupervised dimensionality reduction methods in the visualization task, and the supervised version outperforms existing supervised methods. - Neural Computation , 1999 "... We derive an efficient algorithm for topographic mapping of proximity data (TMP), which can be seen as an extension of Kohonen's SelfOrganizing Map to arbitrary distance measures. The TMP cost function is derived in a Baysian framework of Folded Markov Chains for the description of autoencoders. It ..." Cited by 24 (7 self) Add to MetaCart We derive an efficient algorithm for topographic mapping of proximity data (TMP), which can be seen as an extension of Kohonen's SelfOrganizing Map to arbitrary distance measures. The TMP cost function is derived in a Baysian framework of Folded Markov Chains for the description of autoencoders. It incorporates the data via a dissimilarity matrix D and the topographic neighborhood via a matrix H of transition probabilities. From the principle of Maximum Entropy a non-factorizing Gibbsdistribution is obtained, which is approximated in a mean-field fashion. This allows for Maximum Likelihood estimation using an EM algorithm. In analogy to the transition from Topographic Vector Quantization (TVQ) to the Self-organizing Map (SOM) we suggest an approximation to TMP which is computationally more efficient. In order to prevent convergence to local minima, an annealing scheme in the temperature parameter is introduced, for which the critical temperature of the first phase-transition is calcul... , 1997 "... Multidimensional scaling (MDS) is a collection of data analytic techniques for constructing configurations of points from information about interpoint distances. Such constructions arise in computational chemistry when one endeavors to infer the conformation (3dimensional structure) of a molecule fr ..." Cited by 23 (5 self) Add to MetaCart Multidimensional scaling (MDS) is a collection of data analytic techniques for constructing configurations of points from information about interpoint distances. Such constructions arise in computational chemistry when one endeavors to infer the conformation (3dimensional structure) of a molecule from information about its interatomic distances. For a number of reasons, this application of MDS poses computational challenges not encountered in more traditional applications. In this report we sketch the mathematical formulation of MDS for molecular conformation problems and describe two approaches that can be employed for their solution. 1 Molecular Conformation Consider a molecule with n atoms. We can represent its conformation, or 3-dimensional structure, by specifying the coordinates of each atom with respect to a Euclidean coordinate system for ! 3 . We store these coordinates in an n \Theta 3 configuration matrix X. Given X, we can easily compute the matrix of interatomic distan...
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=161079","timestamp":"2014-04-18T12:05:35Z","content_type":null,"content_length":"39815","record_id":"<urn:uuid:fd23c390-6c83-4b80-90ec-fb24719f06cc>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00013-ip-10-147-4-33.ec2.internal.warc.gz"}
almost function almost function In measure theory, an almost-everywhere-defined function, or almost function, is a partial function whose domain is a full set. We are usually only interested in measurable almost functions. Typically, we consider almost functions up to the equivalence relation of almost equality, whereby two almost functions are almost equal if their equaliser is also a full set, that is if they are equal almost everywhere. As we need to know what a full set is, there is no notion of almost function on an arbitrary classical measurable space. However, if the measurable space is equipped with such a notion (as is always the case with a Cheng measurable space or a localisable measurable space), then we have almost functions. Of course, a measure space also has plenty of structure for this. The morphisms between measurable locales also inherently correspond to measurable almost functions (up to almost equality). It is a commonplace that one really only cares about measurable functions up to almost equality. That one only needs measurable functions to be defined almost everywhere is the same idea. However, in classical mathematics, every almost function may be extended (using excluded middle) to an actual (everywhere-defined) function, and this extension is unique up to almost equality. Accordingly, the notion of almost function is only necessary in constructive mathematics. However, even classically, using them from the start can avoid annoying but trivial technicalities. (For example, one does not need the notion of essentially bounded function; bounded almost functions will do.) Besides measure theory, the concept applies whenever we have a notion of something being true (in this case, that a partial function is defined) almost everywhere. Revised on October 29, 2013 11:50:02 by Toby Bartels
{"url":"http://ncatlab.org/nlab/show/almost+function","timestamp":"2014-04-18T20:48:35Z","content_type":null,"content_length":"14152","record_id":"<urn:uuid:762fc10b-5c15-41a3-ace3-5588337c04f6>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00054-ip-10-147-4-33.ec2.internal.warc.gz"}
Standardizing regression inputs by dividing by two standard deviations - Statistical Modeling, Causal Inference, and Social Science Standardizing regression inputs by dividing by two standard deviations Interpretation of regression coefficients is sensitive to the scale of the inputs. One method often used to place input variables on a common scale is to divide each variable by its standard deviation. Here we propose dividing each variable by two times its standard deviation, so that the generic comparison is with inputs equal to the mean +/- 1 standard deviation. The resulting coefficients are then directly comparable for untransformed binary predictors. We have implemented the procedure as a function in R. We illustrate the method with a simple public-opinion analysis that is typical of regressions in social science. Here’s the paper, and here’s the R function. Standardizing is often thought of as a stupid sort of low-rent statistical technique, beneath the attention of “real” statisticians and econometricians, but I actually like it, and I think this 2 sd thing is pretty cool. 2 Comments 1. Is the sign switch on the income / z.income coefficient on pg 5 a typo? 2. Marc, I think the signs are right. What happens is, when you have an interaction with predictors that are not centered (as in the first regression), the coefficients for the main effect are hard to directly interpret.
{"url":"http://andrewgelman.com/2006/06/21/standardizing_r/","timestamp":"2014-04-20T20:58:47Z","content_type":null,"content_length":"22077","record_id":"<urn:uuid:2579d100-be86-4d24-9610-dc2a925edffb>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00216-ip-10-147-4-33.ec2.internal.warc.gz"}
Got Homework? Connect with other students for help. It's a free community. • across MIT Grad Student Online now • laura* Helped 1,000 students Online now • Hero College Math Guru Online now Here's the question you clicked on: 3(x + 1) + 6 = 33 • one year ago • one year ago Your question is ready. Sign up for free to start getting answers. is replying to Can someone tell me what button the professor is hitting... • Teamwork 19 Teammate • Problem Solving 19 Hero • Engagement 19 Mad Hatter • You have blocked this person. • ✔ You're a fan Checking fan status... Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy. This is the testimonial you wrote. You haven't written a testimonial for Owlfred.
{"url":"http://openstudy.com/updates/50930a57e4b0b86a5e52f38b","timestamp":"2014-04-16T13:41:36Z","content_type":null,"content_length":"41833","record_id":"<urn:uuid:a1fd6432-49cc-4423-8186-e214e4ee89c3>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00653-ip-10-147-4-33.ec2.internal.warc.gz"}
Behavioural Investing In my last post I questioned the long term relevance of some of the fundamental factors outlined in the Navellier's Little Book that makes you rich. However, I also noted that all eight of these factors only add up to a 30% weight in his final analysis. The other 70% is given to what Navellier calls his quantitative stock grade. He describes this as a measure of buying pressure amongst institutional investors. However, he also tells us exactly what this measure actually is. On page 88 he says " In basic form, we divide a stock's alpha (the return independent of the stock market that typically comes from buying pressure) by its standard deviation. We measure this over a 52-week period The 'alpha' Navellier calculates comes from a simple CAPM model. However, as we show in Chapter 35 of Behavioural Investing, the simple CAPM model is deeply flawed. It just doesn't work. In fact in general there seems to be a negative relationship between beta and return, rather than a positive one. Of course, Fama and French suggested a revised multifactor model of asset pricing - based on size, and price to book as well as the normal market factor. To help reduce the pricing errors in this model a momentum term was introduced by Cahart. A very recent modification has been proposed by . This adds a new factor to the equation of repurchases minus issuers (labeled UMO). This again reduces the significance of the alphas calculated from the FF4 model. In general the alphas become statistically insignificant under this five factor model. So effectively, Navellier is running a semi reduced form model, not specifying which factors matter, but rather taking the alpha as a catch all term (which could be broken down into more understandable elements - such as size, value, issuance and momentum). However, Navellier demonstrates that these alphas have persistence. He estimates them over the past 52 weeks, and then uses them going forward. To my mind this is consistent with recent work on style momentum. Chen and De Bondt have shown that style categories have a degree of persistence. They show that if you buy styles that have done well in the last year (in terms of size, price to book and dividend yield) they continue to do well over the next 12 months (but not beyond). The return achieved from a long short position based around this style momentum is around 7% p.a. using a 12 month holding period, and style past returns calculated over 12 months. In long only space a style momentum strategy generates a return of around 17% p.a. over the period 63-97. So Navellier's idea of alpha persistence certainly gets some support from this viewpoint. Some final thought I found much to agree with in Navellier's Little Book such as the over-reliance on stories, and the meeting with company management being a waste of time. His reliance on numbers based analysis echoes very much my own views on evidence based investing. However, ultimately I found the book couldn't stick with its own discipline. For instance, Navellier can't help but eulogize over the wonderful outlook for stocks that deliver out future. Despite his pronouncements that his eight factors and his quantitative grading system are really all you need to invest, he spends a considerable amount of time telling you to read the newspapers, whilst simultaneously ignoring the noise. Such overtly contradictory advice can do little but confuse the reader. Personally I am not convinced that Navellier puts together a coherent defense of growth investing. But then again that won't surprise those of you who know me! Firstly apologies for the recent lack of posts, I've been enjoying a sojourn visiting some of my family in New Zealand. I've recently been reading Louis Navellier's 'Little Book that makes you rich' subtitled "a proven market beating formula for growth investing". I'm generally skeptical of the benefits of growth investing. All too often growth investing simply seems to be a cover for buying the latest fad or fashion in the investing world. So when I saw a book purporting to offer a numbers based approach (what I have called evidence based investing) to growth investing I was intrigued. Navellier starts out by listing out his eight criteria for fundamental investing. 1. Earnings revisions 2. Earnings surprise 3. Sales growth 4. Operating margin growth 5. Cash flow to MV 6. Earnings growth 7. Earnings momentum 8. ROE When I looked at this list I was somewhat surprised. Many of these factors struck me as odd. For instance, I have never come across a single paper claiming that sales growth had any kind of positive relationship with returns, nor ROE. Others such as earnings revisions and surprises were less shocking. I decided to run a quick check on each of these factors, using a variety of sources ( I will run a full set of tests once I'm back at work, but for now I'll rely on others results). For each factor I tried to find the study with the longest history. The table below presents the results showing how much each factor added to a long only portfolio vs the market. 1. Earnings revisions 4.8% p.a 2. Earnings surprises 2.7% p.a. 3. Sales growth -13% p.a 4. Operating margin growth N/A 5. Cash flow to MV 4% p.a. 6. Earnings growth -2% p.a. 7. Earnings momentum 0% p.a. 8. ROE 0.8% p.a In fairness to Navellier, he does note that the importance of each of these factors waxes and wanes over time. However, with a number of his factors appearing to add no value over a consistent time horizon, one must wonder what these fundamental variables bring to the party? Interestingly, one of the best fundamental factors turns out to be a value factor! Although Navellier dresses up his use of cash flow as a growth variable, nothing can alter the fact that it is really a value variable. This is consistent with work that I have done which showed that value strategies did well within a growth universe (see Chapter 31 of Behavioural Investing). It is also noteworthy that despite spending around two thirds of the little book of these variables, they only get a 30% weight in the final system....so what is the this little book really doing? I'll examine this in my next post.
{"url":"http://behaviouralinvesting.blogspot.com/2007_11_01_archive.html","timestamp":"2014-04-17T15:49:56Z","content_type":null,"content_length":"42483","record_id":"<urn:uuid:41d59949-763a-48ca-bd2e-17f0c975792f>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00184-ip-10-147-4-33.ec2.internal.warc.gz"}
Lesson 23: Integer Exponents To motivate the convention for negative and zero exponents, the lesson begins by observing the halving pattern found in continuously decreasing the exponent of a power of two by 1. After an application looking at the formula for Body Mass Index calculation, power functions of the form f(x) = kx^p are introduced. Radiation Intensity application problems follow before pure algebraic manipulation of exponential expressions are presented. The lesson concludes with a review of scientific notation.
{"url":"http://www.curriki.org/xwiki/bin/view/Coll_Group_SanDiegoCommunityCollegesDevelopmentalMathExchange/Lesson23IntegerExponents?bc=;Coll_Group_SanDiegoCommunityCollegesDevelopmentalMathExchange.IntermediateAlgebra;Coll_Group_SanDiegoCommunityCollegesDevelopm","timestamp":"2014-04-16T21:52:43Z","content_type":null,"content_length":"47421","record_id":"<urn:uuid:ef1f91f7-7759-43e3-860c-7dc6babe1c22>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00061-ip-10-147-4-33.ec2.internal.warc.gz"}
Finding an intersect October 17th 2011, 05:43 AM #1 Oct 2011 Finding an intersect Hi guys, I'm trying to find the intersect between a linear function and a power (fractal) function. I thought all this would require is a simple bit of algebra. I've since broken my and several friends' heads over it and we keep getting stuck. linear function: 18886 -233.7*x power function: 3052000*x^-1.972 Bit of background: They are functions that describe product size from a ball mill for grinding minerals. So basically, what I'm trying to solve is: 18886-233.7*x = 3052000*x^-1.972 I've rewritten this as: 18886 = 3052000 / x^1.972 + 233.7*x 18886*x^1.972 = 3052000 + 233.7*x^2.972 18886*x^1.972 - 233.7*x^2.972 = 305200 80.183*x^1.972 -x^2.972 = 13059.481 And this is more or less where I get stuck. Hope it is correct to this point... Cheers in advance!!! Re: Finding an intersect Hi guys, I'm trying to find the intersect between a linear function and a power (fractal) function. I thought all this would require is a simple bit of algebra. I've since broken my and several friends' heads over it and we keep getting stuck. linear function: 18886 -233.7*x power function: 3052000*x^-1.972 Bit of background: They are functions that describe product size from a ball mill for grinding minerals. So basically, what I'm trying to solve is: 18886-233.7*x = 3052000*x^-1.972 I've rewritten this as: 18886 = 3052000 / x^1.972 + 233.7*x 18886*x^1.972 = 3052000 + 233.7*x^2.972 18886*x^1.972 - 233.7*x^2.972 = 305200 80.183*x^1.972 -x^2.972 = 13059.481 And this is more or less where I get stuck. Hope it is correct to this point... Cheers in advance!!! I would use a numerical method like Newtons. I've got $x \approx 14.5785$ I don't know if this is a plausible value because I don't know the dimension of the variable x. EDIT: There exists a 2nd solution at $x\approx 78.4131$ . Maybe this helps better. Last edited by earboth; October 17th 2011 at 06:29 AM. Re: Finding an intersect Yeah, the first answer is what I'm after! I've also managed to get Newton's method working for other test cases, same problem just different numbers. Thanks very much for your help!!!! Just out of curiosity though, is there actually a way of solving through normal algebra? Got half the office baffled by this seemingly simple problem.... Re: Finding an intersect Some equations, such as this, It isn't feasible to solve algebraically; (LHS) a linear equation on one side = (RHS) exponential equation on the other... I've tried doing this before using Logs and it doesn't work out... Only way to do this is using a GDC. (x=14.6 and x=78.4) Re: Finding an intersect Cheers for the quick answer. Not familiar with the abbreviation GDC, what does it stand for (it's going to be something obvious isn't it?). October 17th 2011, 06:09 AM #2 October 18th 2011, 02:50 AM #3 Oct 2011 October 18th 2011, 02:52 AM #4 October 18th 2011, 03:03 AM #5 Oct 2011
{"url":"http://mathhelpforum.com/algebra/190582-finding-intersect.html","timestamp":"2014-04-19T20:41:36Z","content_type":null,"content_length":"44595","record_id":"<urn:uuid:a21214f1-f5b1-4b6e-94bc-68cd09082a08>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00144-ip-10-147-4-33.ec2.internal.warc.gz"}
Fractional partial differential equations and modified Riemann-Liouville derivative new methods for solution Find out how to access preview-only content Fractional partial differential equations and modified Riemann-Liouville derivative new methods for solution Purchase on Springer.com $39.95 / €34.95 / £29.95* Rent the article at a discount Rent now * Final gross prices may vary according to local VAT. Get Access The paper deals with the solution of some fractional partial differential equations obtained by substituting modified Riemann-Liouville derivatives for the customary derivatives. This derivative is introduced to avoid using the so-called Caputo fractional derivative which, at the extreme, says that, if you want to get the first derivative of a function you must before have at hand its second derivative. Firstly, one gives a brief background on the fractional Taylor series of nondifferentiable functions and its consequence on the derivative chain rule. Then one considers linear fractional partial differential equations with constant coefficients, and one shows how, in some instances, one can obtain their solutions on by-passing the use of Fourier transform and/or Laplace transform. Later one develops a Lagrange method via characteristics for some linear fractional differential equations with nonconstant coefficients, and involving fractional derivatives of only one order. The key is the fractional Taylor series of non differentiable functionf(x + h) =E [ α ] (h ^ α D [ x ] ^ α )f(x). 1. V. V. Anh and N. N. Leonenko,Scaling laws for fractional diffusion-wave equations with singular initial data, Statistics and Probability Letters48 (2000), 239–252. CrossRef 2. E. Bakai,Fractional Fokker-Planck equation, solutions and applications, Physical review E63 (2001), 1–17. 3. M. Caputo,Linear model of dissipation whose Q is almost frequency dependent II, Geophys. J. R. Ast. Soc.13 (1967), 529–539. 4. L. Decreusefond and A. S. Ustunel,Stochastic analysis of the fractional Brownian motion, Potential Anal.10 (1999), 177–214. CrossRef 5. M. M. Djrbashian and A. B. Nersesian,Fractional derivative and the Cauchy problem for differential equations of fractional order (in Russian), Izv. Acad. Nauk Armjanskoi SSR,3(1) (1968), 3–29. 6. T. E. Duncan, Y. Hu and B. Pasik-Duncan,Stochastic calculus for fractional Brownian motion, I. Theory, SIAM J. Control Optim.38 (2000), 582–612. CrossRef 7. M. S. El Naschie,A review of E infinity theory and the mass spectrum of high energy particle physics, Chaos, Solitons and Fractals19 (2004), 209–236. CrossRef 8. A. El-Sayed,Fractional order diffusion-wave equation, Int. J. Theor. Phys.35 (1996), 311–322. CrossRef 9. A. Hanyga,Multidimensional solutions of time-fractional diffusion-wave equations, Proc. R Soc. London, A458 (2002), 933–957. CrossRef 10. Y. Hu and B. Øksendal,Fractional white noise calculus and applications to finance, Infinite Dim. Anal. Quantum Probab. Related Topics6 (2003), 1–32. CrossRef 11. F. Huang and F. Liu,The space-time fractional diffusion equation with Caputo derivatives, J. Appl. Math & Computing19(1–2) (2005), 179–190. 12. G. Jumarie,A Fokker-Planck equation of fractional order with respect to time, Journal of Math. Physics3310 (1992), 3536–3542. CrossRef 13. G. Jumarie,Stochastic differential equations with fractional Brownian motion input, Int. J. Syst. Sc.24(6) (1993), 1113–1132. CrossRef 14. G. Jumarie,Maximum Entropy, Information without Probability and Complex Fractals, 2000, Kluwer (Springer), Dordrecht. 15. G. Jumarie,Schrödinger equation for quantum-fractal space-time of order n via the complex-valued fractional Brownian motion, Intern. J. of Modern Physics A16(31) (2001), 5061–5084. CrossRef 16. G. Jumarie,Further results on the modelling of complex fractals in finance, scaling ob- servation and optimal portfolio selection, Systems Analysis, Modelling Simulation, 4510 (2002), 1483–1499. 17. G. Jumarie,Fractional Brownian motions via random walk in the complex plane and via fractional derivative, Comparison and further results on their Fokker-Planck equations, Chaos, Solitons and Fractals4 (2004), 907–925. CrossRef 18. G. Jumarie,On the representation of fractional Brownian motion as an integral with respect to (dt) ^ α , Applied Mathematics Letters18 (2005), 739–748. CrossRef 19. G. Jumarie,On the solution of the stochastic differential equation of exponential growth driven by fractional Brownian motion, Applied Mathematics Letters18 (2005), 817–826. CrossRef 20. G. Jumarie,Fractional Hamilton-Jacobi equation for the optimal control of non-random fractional dynamics with fractional cost function, J. Appl. Math. & Computing, in press. 21. H. Kober,On fractional integrals and derivatives, Quart. J. Math. Oxford11 (1940), 193–215. CrossRef 22. K. M. Kolwankar and A. D. Gangal,Holder exponents of irregular signals and local fractional derivatives, Pramana J. Phys.48 (1997), 49–68. 23. K. M. Kolwankar and A. D. Gangal,Local fractional Fokker-Planck equation, Phys. Rev. Lett.80 (1998), 214–217. CrossRef 24. A. V. Letnivov,Theory of differentiation of fractional order, Math. Sb.3 (1868), 1–7. 25. J. Liouville,Sur le calcul des differentielles á indices quelconques (in french), J. Ecole Polytechnique13 (1832), 71. 26. B. B. Mandelbrot and J. W. van Ness,Fractional Brownian motions, fractional noises and applications, SIAM Rev.10 (1968), 422–437. CrossRef 27. B. B. Mandelbrot and R. Cioczek-Georges,A class of micropuls es and antipersistent fractional Brownian motions, Stochastic Processes and their Applications60 (1995), 1–18. CrossRef 28. B. B. Mandelbrot and R. Cioczek-Georges,Alternative micropulses and fractional Brownian motion. Stochastic Processes and their Applications64 (1996), 143–152. CrossRef 29. E. Nelson,Quantum Fluctuations, 1985, Princeton University Press, Princeton, New Jersey. 30. L. Nottale,Fractal space-time and microphysics, 1993, World Scientific, Singapore. 31. L. Nottale,Scale-relativity and quantization of the universe I. Theoretical framework, S. Astronm Astrophys327 (1997), 867–889. 32. L. Nottale,The scale-relativity programme Chaos, Solitons and Fractals10(2-3) (1999), 459–468. CrossRef 33. G. N. Ord and R. B. Mann,Entwined paths, difference equations and Dirac equations, Phys. Rev A, 2003 (67): 0121XX3. 34. T. J. Osler,Taylor’s series generalized for fractional derivatives and applications, SIAM. J. Mathematical Analysis2(1) (1971), 37–47. CrossRef 35. N.T. Shawagfeh,Analytical approximate solutions for nonlinear fractional differential equations, Appl. Math. and Comp.131 (2002), 517–529. CrossRef 36. W. Wyss,The fractional Black-Scholes equation, Fract. Calc. Apl. Anal.3(1) (2000) (3), 51–61. Fractional partial differential equations and modified Riemann-Liouville derivative new methods for solution Cover Date Print ISSN Online ISSN Additional Links □ 26A33 □ 49K20 □ 44A10 □ Fractional PDE □ Riemann-Liouville derivative □ fractional Taylor series □ Mittag-Leffler function □ Lagrange characteristics □ Lagrange constant variation Author Affiliations □ 1. Department of Mathematics, University of Quebec at Montreal, Downtown St, P.O. Box 8888, H3C 3P8, Montreal, Qc, Canada
{"url":"http://link.springer.com/article/10.1007%2FBF02832299","timestamp":"2014-04-24T11:47:17Z","content_type":null,"content_length":"55523","record_id":"<urn:uuid:4724a60c-f623-4bdf-903c-d4700618c064>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00205-ip-10-147-4-33.ec2.internal.warc.gz"}
[SciPy-user] Array selection help Jose Luis Gomez Dans josegomez@gmx.... Wed Feb 11 09:03:15 CST 2009 > >> scipy.ndimage.mean(arr2, labels=arr1, index=np.unique(arr1)) > > > > True. The question is, how do I get the output of your code back into my > > original array? Presumably, there's another function that does that > quickly? > It is already in an array, so I'm not sure I understand. Maybe you mean > out[:] = scipy.ndimage.mean(...) ? Sorry, I was clumsy with my wording. What I meant is how to put together the results, so that I have a 2D array where the value of each element is the value that corresponds to the mean of the corresponding label. So if arr1[100,100] = 4 (say), and after running the mean of arr2 for elements that in arr1 are labeled as 4, the mean value is 2.3, I'd like to have an array (out, out.shape == arr1.shape) where the values of elements of out that share a common label are given the mean value (2.3 for those labeled as 4 in my previous example). In essence, I want to have an array where each element is the mean value for its corresponding class. many thanks! Jetzt 1 Monat kostenlos! GMX FreeDSL - Telefonanschluss + DSL für nur 17,95 Euro/mtl.!* http://dsl.gmx.de/?ac=OM.AD.PD003K11308T4569a More information about the SciPy-user mailing list
{"url":"http://mail.scipy.org/pipermail/scipy-user/2009-February/019877.html","timestamp":"2014-04-17T19:08:29Z","content_type":null,"content_length":"3780","record_id":"<urn:uuid:907e5c04-22e5-43d4-b379-4f005466b984>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00450-ip-10-147-4-33.ec2.internal.warc.gz"}
... Review of Trig Ratios Worksheet 2: 8 introduces the trig ratios of sine, cosine, and ... So sin 1 a=means sin=a The same rule applies to cos 1 a and t an 1 a. If you wish towrite ... Sine Cosine Rules Mark scheme Sine Cosine Rules Mark scheme 1. (a) 27.6 3 sin30 8.1 sin 7.5 B 8.1 7.5sin 30 ... A1 7.83 7.84 OR M1 for correct use of sine rule to find sin B or sin A Pythagoras rule and trigonometry ... it Podcast H-14 Higher GCSE Revision Pythagoras rule and trigonometry Topics Pythagoras rule - Trigonometry, finding sides and angles - Trig graphs - Sine and cosine rule Area ... Belmont High School The Definite Integral, Riemann Sums, evaluating Riemann Sums, Midpoint Rule ... We also use scientific (not graphing) calculators for square roots and sine, cosine and ... 5-04 The sine rule 166 5-05 Using the sine rule to nd an unknown angle 170 5-06 The cosine rule 173 5-07 Using the cosine rule to nd an unknown angle KS4 Maths Contents Graphs and transforming graphs, the area of a triangle, the sine rule, the cosine rule and using these rules. Using vectors 18 slides 4 Flash activities 1 worksheet In oblique triangles, we cant use the trigonometry of hypotenuse, adjacent, and opposite sides, but we can use the sine law and cosine law. Maths-it Podcast H-14 Maths-it Podcast H-14 Higher GCSE Revision Pythagoras rule and trigonometry Topics Pythagoras rule Trigonometry, finding sides and angles Trig graphs Sine ... Chapter 7 ... of logarithmic functions Need to be able to apply chain rule to ... radians Trigonometric functions and their graphs Values of sine, cosine ... by using the sine, cosine and area rule and by contracting and ... Rule. Textbook, Calculators and Worksheets. Exemplar papers Method: Class works Curriculum Overview Math - Grade 10 Term I (2011) Worksheets, group work, Individual investigation Angle Properties and systems Extra ... and apply the concept of correlation and draw lines of best fit Sine and Cosine Rule ... c2 = a2 + b2 b2 = c2 a2 a2 = c2 b2 ... Non-right angled triangles Formulae for non-right angled triangles are based on the diagram and notation below. You should be able to apply the sine rule, cosine rule, and ... Trigonometry WORKSHEETS The worksheets available in this unit DO NOT constitute a course since no ... T/33 Sine rule T/34 Cosine rule T/35 Herons formula and circles T/36 Problems The SineandCosine Functions The Properties of Sine and Cosine Let us now list in algebraic form the properties of the ... derivative of sin x also comes from an application of the constant multiple rule ... Pure Mathematics Unit C2 ATM Resources Student Worksheets Excel File Circles Lesson Starter ... taught by teachers of Mech. / Stats ) Trig Revision of the sine rule and the cosine rule ... Worksheet 4 8 Properties of Trigonometric Functions ... section reviews some of the material covered in Worksheets ... Using the cosine rule we have d2 = 12 +12 2(1)(1)cos(A B) ... lies in the rst quadrant, where sine is positive ... The Mother of All Integral Review Sheets (MAIRS) you used u-substitution, your derivative should involve the chain rule. If you used integration by parts, your derivative should involve the product rule. ) +r = p+(q+r) + q geometric methods eg. trigonometry, cosine rule, sine rule. Eg. Forces 12 newtons and 8 newtons acting at a point have a resultant of 9 newtons. Grade: 11 Trigonometry * Independent practice - textbook assignments and worksheets ... Given translated function be able to tell the rule ... Learner Activity: * Define sine, cosine, tangent * Identify ... Graph of the Trigonometric Functions point moves, the graph of either the sine or the cosine function is traced. Students are strongly encouraged to use this applet to understand why UCLA Math Content Programs for Teachers/LUCI Project PYTH - PP1 Pythagorean Theorem and Distance Formula PYTHAGOREAN THEOREM AND DISTANCE FORMULA Participants review ... TERM / WEEK - Solve word problems involving quadratic equations Investigation worksheets on GSP based on the angle properties ... Sine Rule, including ambiguous Apply the Cosine Rule for any triangle. Worksheet for Test #8 Use the area formulas you learned in the chapter to solve the following problems #15 -22. 15. ) Determine the area of the triangle shown in Problem #9. Worksheet for Test #8 Use the Law of Sines Trigonometry Worksheet for Test #8 Use the Law of Sines to solve for the missing dimension in each problem. NOTE: Use the generic triangle below to help you place ...
{"url":"http://www.cawnet.org/docid/sine+rule+and+cosine+rule+worksheets/","timestamp":"2014-04-18T10:39:57Z","content_type":null,"content_length":"57058","record_id":"<urn:uuid:564dd0bd-ffc8-416e-9951-693c290be7d5>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00455-ip-10-147-4-33.ec2.internal.warc.gz"}
free online math tutorial for grade 11 Best Results From Yahoo Answers Youtube From Yahoo Answers Question:Is there anything thats is free online where i can go over the whole grade 11 math course, so i can get ahead in class. its ontario,canada math MCR 3UG functions grade 11 university prep im taking the course now Answers:Try this. http://hippocampus.org/;jsessionid=6E7B1A02CEC629AB17E8F7F9A2B1BE08 Question:first two questions are free service Answers:I can help you for free.....Just send me an e-mail or add me on Yahoo! messenger.....My e-mail address is jesusfreak_012008@yahoo.com . I am a pro at upper level math. I am a sophmore in high school. I have already completed the necessary math requirements for graduation. It was easy. I am willing to help anyone I can. Math is FUN!! You just have to be patient and determined. Please reply back ASAP if interested/ Question:my finals are coming up, and i need to study for my math final! I know theres websites where you can play math games for like grade 2 level, but do you know of any for about grade 11? (math 20?) i feel like playing a game, will help give me more enthusiasm for studying my math... thanks!! :) Answers:give a try to http://www.meritnation.com Play study puzzles and see online study videos there Question:like reviews n those Answers:Self.study free at AplusClick From Youtube Free math tutorial tutoring lesson - learning ratio concept with word problems :free math tutor tutoring lesson tutorial on ratio concept or equivalent ratio with word problem grade GED preparation. How to do ratio word problem, how to solve equivalent ratio word problem. homeschooling, homeschool, home schooling, Grade 4 and up. Free online math tutoring on ratio and equivalent ratio with word problems. Grade 8 or 9 level. GED preparation. To get more out of this video, visit our website, EducateTube.com, and download free worksheets with answers. Learn math at your own pace for free. Please donate to help continue our website at: www.EducateTube.com Thanks! Free math tutorial tutoring lesson- learning ratio concept with word problems :free math tutorial, free math tutoring, free math lesson, homeschooling, homeschool, home schooling Free online math tutoring on ratio and equivalent ratio with word problems. Grade 8 or 9 level. GED preparation. To get more out of this video, visit our website, EducateTube.com, and download free worksheets with answers. Learn math at your own pace for free. Please donate to help continue our website at: www.EducateTube.com Thanks!
{"url":"http://www.edurite.com/kbase/free-online-math-tutorial-for-grade-11","timestamp":"2014-04-17T15:29:06Z","content_type":null,"content_length":"68459","record_id":"<urn:uuid:d8b11131-7f89-48ba-8d5f-896f8d681e65>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00526-ip-10-147-4-33.ec2.internal.warc.gz"}
October 14th 2008, 03:48 AM Given a triangle ABC, Let A',B',C' be midpoints of [BC],[CA] and [AB], respectively: a) Consider L element of BC-{B,C}, M element of CA-{C,A}, N element of AB-{A,B} such that AL,BM,CN are concurrent if P,Q,R are respective midpoints of [AL], [BM],[Cn] prove that PA',QB',RC' are concurrent or parallel b) Consider collinear points U element of BC-{B,C} V element of CA-{C,A} W element of AB-{A,B} Prove that respectively midpoints X,Y,Z of [AU],[BV],[CW] are collinear
{"url":"http://mathhelpforum.com/geometry/53611-midpoints-print.html","timestamp":"2014-04-23T16:32:05Z","content_type":null,"content_length":"3416","record_id":"<urn:uuid:95ab535b-01bb-469b-a1c5-ff1059807e21>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00643-ip-10-147-4-33.ec2.internal.warc.gz"}
The Michelson Interferometer For the same reason, any observer placed in the system's center O and using a radar signal would find by timing the echoes that all A, B, C and D distances are the same. However, he would not notice that the four radar echoes are two times longer because his clock also runs two times slower according to the gamma factor. Because all forces are transmitted by means of waves, which undergo the Doppler effect, all matter mechanisms are also slowed down in the same way. All happens with a slower rate of time. Most often, one speaks about time dilation. This is incorrect. Time is a concept, an idea; it does not really exist. But a clock does exist, and it can tick slower. This does not mean that time runs slower. Secondly, waves (or planes) would reach the A point in the rear much more rapidly. Because of the Doppler effect, the B point will be attained later. Henri Poincare showed how, inside such a system, the central O observer must use light or radio signals in order to synchronize A and B clocks. Because the signal speed is much faster backward, A will be in advance and B will be late. However, nobody would be able to notice the resulting time shift. Inside such a moving frame of reference, Poincare showed that such local hours lead to a sort of virtual simultaneity. One cannot detect his actual speed any more. All happens as if the system was perfectly at rest inside aether. In addition, waves propagating forward contract according to 1 – beta while those moving backward expand according to 1 + beta. The backward wavelength expansion ranges only from 1 to 2 while waves can be infinitely compressed forward. It turns out that the wave compression is much more severe; it increases matter's energy according to the gamma factor. So matter's mass/energy is doubled for .866 c and this was also discovered by Lorentz. As a matter of fact, this phenomenon could be regarded as Lorentz's fourth transformation: 1 – Distances along the displacement axis contract. 2 – All phenomena occur with a slower rate of time. 3 – A time shift appears between the front and the rear. 4 – Matter is gaining in mass and the mass gain becomes kinetic energy. This was the secret: matter is made of waves. 100 years were needed to solve this mystery. It was obvious, though, because Lorentz's equations are almost a copy of the Woldemar Voigt's ones on the Doppler effect (1887). Actually, the only difference was an unknown constant whose k = 1 value was finally discovered by Lorentz and Poincaré in 1904. It is that simple: matter behaves like waves. Standing wave contraction. Very few people noticed that because light waves are reflected back on the interferometer's mirrors, standing waves appear inside both of its branches. Moreover, in 1904, standing waves compression was not a well known phenomenon. As far as I know it has been discovered by Mr. Yuri Ivanov, and only me and Mr. Serge Cabala seem to have studied it in an acceptable manner. For more details about moving standing waves (Ivanov's "lively standing waves") see standing waves. So, in the animation, I managed to add some reference marks along both branches in order to locate the nodes' position. It should be pointed out that such nodes are still present in the direction of motion, notwithstanding the fact that the wavelength is two times longer while the waves are traveling backward. This is possible because those waves also seem to travel two times faster. One needs a little concentration to observe this, but it finally becomes obvious. The mean relative velocity of light in order to perform a complete round-trip can be calculated as shown above. Let's repeat that waves behave in exactly the same manner as planes in the presence of wind. So the compression rate is also given by g and g squared: Relative on-axis mean wave velocity: v = c g^ 2 On-axis standing wave compression: lambda' = lambda * g^ 2 Relative transverse constant wave velocity: v = c g Transverse standing wave compression: lambda' = lambda * g Those results can also be established according to the "relative" Doppler effect. I elaborated the first equation below around 1998. It derives from the regular Doppler effect 1 – beta * cos phi, but the number one gradually transforms to g until phi reaches 90°. The second formula yields the same results; I adapted it from Mr. Ivanov's web site. It should be pointed out that these equations also indicate the relative wave velocity according to Michelson's calculus. Using a 90° phi angle, one can easily note that transverse waves undergo a compression according to Lorentz's g factor. Because the relative transverse wave velocity is slowed down according to g, the time needed to perform the transverse go and return trip is also longer according to 1/g. The calculus for the on-axis whole round trip is a bit more complicated because the go and return trips are different, but one can use the simpler 1 – beta formula forward and 1 + beta backward. Finally, the standing wave compression ratio equals the change in the relative velocity of light on a go and return trip. Both calculi yield the same results. Michelson preferred velocity and did not use the standing wave method, which is simpler though, especially for studying the Kennedy-Thorndyke experiment. Actually, Kennedy and Thorndyke did not take the frequency reduction into account. The overall increased wavelength cancels the transverse contraction and reduces the on-axis contraction to g. So they were wrong.
{"url":"http://www.rhythmodynamics.com/Gabriel_LaFreniere/sa_Michelson.htm","timestamp":"2014-04-20T05:47:36Z","content_type":null,"content_length":"85956","record_id":"<urn:uuid:bed11638-0be5-4546-ab8e-5bf81149f42e>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00394-ip-10-147-4-33.ec2.internal.warc.gz"}
A Numerical Model for Railroad Freight Car-to-Car End Impact Discrete Dynamics in Nature and Society Volume 2012 (2012), Article ID 927592, 11 pages Research Article A Numerical Model for Railroad Freight Car-to-Car End Impact School of Traffic and Transportation, Beijing Jiaotong University, Beijing 100044, China Received 11 September 2012; Revised 9 November 2012; Accepted 21 November 2012 Academic Editor: Wuhong Wang Copyright © 2012 Chao Chen et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. A numerical model based on Lagrange-D'Alembert principle is proposed for car-to-car end impact in this paper. In the numerical model, the friction forces are treated by using local linearization model when solving the differential equations. A computer program has been developed for the numerical model based on Runge-Kutta fourth-order method. The results are compared with the Multibody Dynamics/Kinematics software SIMPACK results and they are close. The ladings' relative displacement to struck car and the relative displacement between two ladings get larger as impact speed increases. There is no displacement between two ladings when the contact surfaces have the same friction coefficient. 1. Introduction The freight damage incurred during railroad transportation is a serious economic and safety problem. The railroad freight car’s dynamic characteristic leads to most of the freight damage. The dynamics of railroad car and freight damage can be divided into two groups:(1)during the marshalling operation in train yard, the car-to-car end impacts from coupling cause high car and lading acceleration;(2)the car-body vibrations come from track irregularities and some extra forces, such as the wind, and so forth. Most of the damage is attributed to car-to-car end impacts in the marshalling yard, so more focus is given on it when working out the load support and load securement method. Railroad freight car impact tests are always carried out for checking if the method can ensure transportation safety and no damage to the ladings. A railroad freight car impact test usually needs a lot of work to do; it needs much workforce, material resources, and financial support. Most of the time, carrying out an impact test will lead to disorder and break-off transportation. Compared with impact test, numerical simulation is a more economical and faster method of investigating the effect on ladings when coupled. Car-to-car end impact is a special multibody dynamic problem between railroad freight cars. Investigation into the multibody dynamic has been carried out in the works [1–9]. Later, mathematical models are derived in [10] for studying the effect of impact on packaging. At the same time, numerical methods need to be developed for solving mathematical models. Euler tangent method, Newmark- method, Wilson- method, and Runge-Kutta fourth-order method are developed and widely applied in solving mathematical models [11–17]. Runge-Kutta fourth-order method means that the truncation error per step is . It is an important numerical method used extensively in engineering problems for solving first-order differential equations. Draft gear is the most important component of a freight car during impact. Its performance is investigated by mechanics dynamics software in [18–21]. The draft gear’s characteristic is analyzed under different impact speeds. The force versus draft gear travel of the Chinese MT-2 under impact speed of 58km/h is given by simulation and test. In the paper, the second-order differential equations of the car-to-car end impact are converted to first-order differential equations and solved by using Runge-Kutta 4th order method. 2. Draft Gear Interaction Process Railroad car-to-car end impacts usually occur in train yard, and most of the time the struck car is static when coupled with the striking car. The draft gear is an important component for reducing freight and car damage during car-to-car end impacts. MT-2 friction draft gear is widely used in the class 70t universal freight cars in China. This draft gear is composed by springs and friction mechanism; when it is compressed, part of kinetic energy is converted to friction energy and part of kinetic energy is converted to potential energy. MT-2 draft gear has different force versus travel characteristic curves when loading and unloading. In Figure 1, the irreversible force versus draft gear travel characteristic curve is shown [22]. As shown in Figure 1, is draft gear travel and is speed difference between striking car and struck car. means draft gear loading process and means unloading process. Figure 1 shows that the resistant force in loading process is larger than unloading process. In the numerical calculation program, draft gear force versus travel characteristic curve is based on the test results, and the force is calculated by linear interpolation. MT-2 draft gear force versus travel characteristic curves under impact speeds of 5km/h, 6km/h, 7km/h, and 8km/h are presented in the appendix. 3. Car-to-Car End Impact Dynamic Models 3.1. Railroad Freight Car Impact System Railroad freight car impact test is using a striking car with a certain speed running to a static struck car and collides. The longitudinal status of the ladings and struck car is mainly observed during impact for checking the loading support and loading secure method. The method must ensure transportation safety and no lading damage. The assumptions in models are(1)the wind acting on the striking car and struck car, and the rolling resistance between wheel and rail are neglected;(2)the car-body deformations during impact are neglected;(3)the car-body vertical bounce, yaw, pitch, and sway vibrations are neglected;(4)no lateral forces between ladings. Sometimes more than one lading are loaded on freight car. There are longitudinal forces between ladings and car, between adjacent ladings. Figure 2 shows the railroad freight car-to-car end impact dynamic system, where , represent mass and displacement, represents striking car, represents struck car, represent ladings, , are the stiffness and damping coefficients between ladings and car, , are the stiffness and damping coefficients between ladings and , is draft gear force, and is distance between two cars. 3.2. Longitudinal Dynamic Equations The striking car, struck car and ladings in the impact dynamic system are treated as mass elements. According to the assumptions in Section 3.1, the external forces, constraint forces and inertial forces are an equivalent static system based on Lagrange-D’Alembert principle. Then, universal longitudinal dynamic equations can be derived for each mass element. External force acting on the striking car is the draft gear force. So the differential equation for striking car is where is gross weight of the striking car and is calculated by the following The acceleration and velocity initial values of the striking car are , ; the initial value of the draft gear force is . The external forces acting on the struck car are the draft gear force and forces between ladings and car, where is the struck car tare weight and the acceleration and velocity initial values of struck car are , . External forces acting on lading are the forces between lading and car, between adjacent ladings, where the acceleration and velocity initial values of lading are , . 4. Double-Stack Loading Impact Models Double-stack loading method in gondola car is proposed as an example for analyzing longitudinal relation between ladings and car. It shows that the numerical method applied in railroad freight car-to-car end impact simulation. In the model, the striking car and struck car are the same type and have the same draft gears. 4.1. Load Support and Load Securement Method and Force Analysis Figure 3 shows the double-stack loading in a 70t class gondola car. The securement method is using friction cushion to enlarge friction force. The above struck car system includes three mass elements that are the struck car, 1st lading and 2nd lading. Forces acting on struck car are draft gear force and friction force from 1st lading; forces acting on the 1st lading are friction forces from struck car and the 2nd lading; force acting on the 2nd lading is friction force from 1st lading. The force analysis is illustrated in Figure 4 4.2. Dynamic Equations of Motion The universal longitudinal dynamic equations in Section 3.2 can be rewritten based on the force analysis As striking car and struck car have the same draft gear type, so the travel of one draft gear is given as The speed difference between striking car and struck car is given as The model has two one-dimensional friction elements: one is between 1st lading and car-body and the other is between 1st lading and 2nd lading. The one-dimensional friction element’s friction force direction is dependent on the direction of the relative sliding velocity. Figure 5(a) shows that the direction of friction force changes abruptly as the direction of relative velocity changes. More calculation time is needed near the zero-point, and even the differential equations cannot be integrated at zero-point. The friction element is treated by using a local linearization model [23], which uses a parameter called switching speed. Figure 5(b) illustrates linear relationship between friction force and relative velocity, and the friction force is given as 4.3. Solution Methodology For using Runge-Kutta fourth-order method, second-order differential equations need to be rewritten in the form of first-order differential equations. In general, to solve first-order differential equation , Runge-Kutta fourth-order method is given as where is the step size. Let in the striking car’s second-order differential equation, then the reduced-order differential equations are given as In the same way, the second-order differential equations of other mass elements are given as, 4.4. Numerical Results and Discussion The type of draft gear is MT-2 in the dynamic models; the impact speeds are 5km/h, 6km/h, 7km/h, and 8km/h. The simulation is for studying relative displacement of 1st lading and 2nd lading, which are secured by the friction cushion. Case 1 only has friction cushion between 1st lading and car floor; Case 2 has friction cushion between 1st lading and car floor, and between two ladings; the friction cushion in Case 3 is the same as Case 1, but two ladings have different weight. The parameters in numerical model are given in Table 1. A virtual model is built in Multibody Dynamics/Kinematics software SIMPACK which has the same parameters as Case 1 to validate the model and numerical method. It can be observed from Figure 6 that there is an excellent agreement between the results from Runge-Kutta fourth-order method and SIMPACK. The displacement between ladings and car-body and between 1st lading and 2nd lading increased with impact speeds. Figure 7 shows the displacement between ladings and car-body under Case 2 versus different impact speeds. The displacement between ladings and car-body increased with impact speeds, but the displacement between 1st lading and 2nd lading is zero as two surfaces have the same friction coefficient. In Case 3, the weight of 2nd lading is less than that in Case 1, so the gross weight of struck car reduced. The displacement between ladings and car-body versus impact speeds are illustrated in Figure 8. The displacements between ladings and car-body are increased compared with Case 1 under the same impact speed. 5. Conclusion In this paper, railroad freight car-to-car end impact system and the influence on ladings are considered. To derive differential equations of the system motion, forces acting in the system are analyzed and Lagrange-D’Alembert principle is used. The obtained solution of the differential equations by Runge-Kutta fourth-order method is close to the results from SIMPACK. Based on numerical results from double-stack model, it is concluded that the higher the impact speed, the larger the ladings’ relative displacement to struck car. The lower weight the ladings, the larger the ladings’ relative displacement to struck car. There is no displacement between two ladings if they have the same friction coefficient between struck car and 1st lading and between 1st lading and 2nd lading. See Figures 9(a)–9(d). This paper is supported by “the Fundamental Research Funds for the Central Universities” (2011JBM246). 1. W. Wang, Vehicle's Man-Machine Interaction Safety and Driver Assistance, China Communications Press, Beijing, China, 2012. View at Zentralblatt MATH 2. H. J. Fletcher, L. Rongved, and E. Y. Yu, “Dynamics analysis of a two-body gravitationally oriented satellite,” Bell System Technical Journal, vol. 42, no. 5, pp. 2239–2266, 1963. 3. W. W. Hooker and G. Margulies, “The dynamical attitude equations for an $n$-body satellite,” American Astronautical Society. Journal of the Astronautical Sciences, vol. 12, pp. 123–128, 1965. 4. W. W. Hooker, “A set of r-dynamical attitude equations for an arbitrary n-body satellite having rotational degrees of freedom,” American Institute of Aeronautics and Astronautics Journal, vol. 8, no. 7, pp. 1205–1207, 1970. View at Zentralblatt MATH · View at Scopus 5. R. Schwertassek and R. E. Roberson, “A state-space dynamical representation for multibody mechanical systems part I: systems with tree configuration,” Acta Mechanica, vol. 50, no. 3-4, pp. 141–161, 1984. View at Publisher · View at Google Scholar · View at Scopus 6. R. E. Roberson and R. Schwertassek, Dynamics of Multibody Systems, Springer, Berlin, Germany, 1988. 7. W. Wang, X. Jiang, S. Xia, and Q. Cao, “Incident tree model and incident tree analysis method for quantified risk assessment: an in-depth accident study in traffic operation,” Safety Science, vol. 48, no. 10, pp. 1248–1262, 2010. View at Publisher · View at Google Scholar · View at Scopus 8. R. E. Roberson and W. Wittenburg, “A dynamical formalism for an arbitrary number of interconnected rigid bodies with reference to the problem of satellite attitude control,” in Proceedings of the 3rd International Federation of Automatic Control Congress (IFAC '66), London, UK, 1966. View at Publisher · View at Google Scholar 9. J. Wittenburg, Dynamics of Systems of Rigid Bodies, Teubner, Stuttgart, Germany, 1977. 10. R. V. Dukkipati, Vehicle Dynamics, CRC Press, Boca Raton, Fla, USA, 2000. 11. N. M. Newmark, “A method of computation for structural dynamics,” Journal of the Engineering Mechanical Division, vol. 85, no. 2, pp. 67–94, 1959. 12. E. L. Wilson, I. Farhoomand, and K. J. Bathe, “Nonlinear dynamic analysis of complex structure,” Earthquake Engineering and Structural Dynamics, vol. 1, no. 3, pp. 241–252, 1973. View at Scopus 13. K. J. Bath and E. L. Wilson, Numerical Methods in Finite Element Analysis, Prentice Hall, New York, NY, USA, 1976. 14. E. Süli and D. F. Mayers, An Introduction to Numerical Analysis, Cambridge University Press, Cambridge, UK, 2003. 15. J. Stoer and R. Bulirsch, Introduction to Numerical Analysis, Springer, New York, NY, USA, 3rd edition, 2002. 16. G. Da-Hu and S. A. Meguid, “Stability of Runge-Kutta methods for delay differential systems with multiple delays,” IMA Journal of Numerical Analysis, vol. 19, no. 3, pp. 349–356, 1999. View at Publisher · View at Google Scholar 17. W. Zhiqiao, L-Stable Methods for Numerical Solution of Structural Dynamics Equations and Multibody Dynamics Equations, National University of Defense Technology, 2009. 18. T. Guangrong, Study on System Dynamics of Heavy Haul Train, Southwest Jiaotong University, 2009. 19. W. Chengguo, M. Dawei, W. Xuejun, and L. Lan, “Research on structures and performances of heavy haul buffer by numerical simulation,” Railway Locomotive & Car, vol. 29, no. 5, pp. 1–4, 2009. 20. H. Guoliang, L. Fengtao, W. Xuejun, and W. Chengguo, “Characteristics research in spring friction draft gear of heavy haul freight cars,” Lubrication Engineering, vol. 34, no. 7, pp. 69–73, 2009. 21. L. Ming, H. Guoliang, W. Xuejun, and W. Chengguo, “Simulation analysis and experiment research into two typical draft gear of heavy haul freight cars,” Coal Mine Machinery, vol. 30, no. 9, pp. 76–78, 2009. 22. H. Yunhua, L. Fu, and F. Maohai, “Research on the characteristics of vehicle buffers,” China Railway Science, vol. 26, no. 1, pp. 95–99, 2005. View at Scopus 23. D. Karnopp, “Computer simulation of stick-slip friction in mechanical dynamic systems,” Journal of Dynamic Systems, Measurement and Control, Transactions of the ASME, vol. 107, no. 1, pp. 101–103, 1985. View at Publisher · View at Google Scholar · View at Scopus
{"url":"http://www.hindawi.com/journals/ddns/2012/927592/","timestamp":"2014-04-20T18:47:45Z","content_type":null,"content_length":"158976","record_id":"<urn:uuid:b330c3c1-0154-4967-b2b3-cc50aad9ac4d>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00326-ip-10-147-4-33.ec2.internal.warc.gz"}
[Haskell-cafe] Rational and % operator remix Ryan Ingram ryani.spam at gmail.com Mon Mar 30 13:21:52 EDT 2009 2009/3/30 michael rice <nowgate at yahoo.com>: > I'm still not sure what some of the error messages I was getting were about. > As I wrote the function I tried to be aware of what "mixed mode" operations > were kosher ala This is a mistake, but understandable given your lispy background; there aren't really "mixed mode" operations in Haskell. One thing that might be confusing you: Numeric literals are really calls to "fromInteger"; that is, "5" is really "fromInteger (5 :: Generally you will find that :t in the interpreter is your friend: ghci> :t 1 1 :: (Num t) => t This says that the expression "1" can be of any type that is an instance of Num. ghci> :t (\a b -> a - b) (\a b -> a - b) :: (Num a) => a -> a -> a "-" takes two arguments that are of the same type, and returns something of that type. ghci> :m Data.Ratio ghci> :t (%) (%) :: (Integral a) => a -> a -> Ratio a "%" lifts objects from an integral type into a type that represents the ratio of two integers. ghci> :t floor floor :: (RealFrac a, Integral b) :: a -> b So "floor" is a "cast" operation that converts between any fractional type to any integral type. This is why you need to either use "fromInteger" or "%" on the result of "floor" to get it back as a Rational. -- ryan More information about the Haskell-Cafe mailing list
{"url":"http://www.haskell.org/pipermail/haskell-cafe/2009-March/058943.html","timestamp":"2014-04-24T06:24:24Z","content_type":null,"content_length":"4184","record_id":"<urn:uuid:0788f723-4ef8-4d44-b029-e91255a79a6a>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00562-ip-10-147-4-33.ec2.internal.warc.gz"}
Patent application title: dB-LINEAR VOLTAGE-TO-CURRENT CONVERTER Sign up to receive free email alerts when patent applications with chosen keywords are published SIGN UP A dB-linear voltage-to-current (V/I) converter that is amenable to implementation in CMOS technology. In a representative embodiment, the dB-linear V/I converter has a voltage scaler, a current multiplier, and an exponential current converter serially connected to one another. The voltage scaler supplies an input current to the current multiplier based on an input voltage. The current multiplier multiplies the input current and a current proportional to absolute temperature and supplies the resulting current to the exponential current converter. The exponential current converter has a differential MOSFET pair operating in a sub-threshold mode and generating an output current that is proportional to a temperature-independent, exponential function of the input voltage. A device, comprising:a current multiplier that multiplies a first current and a current proportional to absolute temperature to generate a second current; andan exponential converter that applies an exponential transfer function to the second current to generate an output current, wherein:the exponential transfer function depends on a thermal voltage; andtemperature dependence of the current proportional to absolute temperature counteracts temperature dependence of the thermal voltage to cause the output current to be proportional to a temperature-independent, exponential function of the first current over an operating range of the device. The invention of claim 1, wherein, on a scale of the output current, the operating range is at least about 40 dB. The invention of claim 1, further comprising a voltage scaler that generates the first current based on an input voltage and applies the first current, through a resistive load, to the current multiplier, wherein, over the operating range, the output current is proportional to a temperature-independent, exponential function of the input voltage. The invention of claim 3, further comprising a current limiter that imposes at least one of an upper limit and a lower limit on the output current. The invention of claim 3, wherein the voltage scaler comprises:an operational amplifier;a reference current source coupled to a first input of the operational amplifier; anda feedback loop that connects an output and a second input of the operational amplifier, wherein:the input voltage is coupled to the first input of the operational amplifier through a programmably variable resistor; andthe output of the operational amplifier is coupled to the resistive load. The invention of claim 5, wherein:the reference current source is a programmably variable current source; andsettings of the programmably variable resistor and of the programmably variable current source define relationship between the input voltage and the first current. The invention of claim 1, wherein the current multiplier comprises:first and second differential amplifiers, each having a corresponding current mirror as a load;a first current source coupled as a tail supply of the first differential amplifier; anda second current source coupled as a tail supply of the second differential amplifier, wherein:the second current source generates the current proportional to absolute temperature;the first differential amplifier receives the first current; andthe second differential amplifier outputs the second current. The invention of claim 7, wherein the first current source generates a reference current that is independent of technological process used for fabrication of the current multiplier and of The invention of claim 7, wherein:each of the first and second differential amplifiers comprises a corresponding first transistor and a corresponding second transistor;gates of the first transistors are electrically connected to a first common node; andgates of the second transistors are electrically connected to a second common node. The invention of claim 1, wherein the exponential converter comprises:a differential transistor pair comprising a first transistor and a second transistor;a resistor that electrically connects a gate of the first transistor and a gate of the second transistor; anda current source that drives a reference current through the second transistor, wherein:the second current flows through the resistor; andthe output current flows through the first transistor. The invention of claim 10, wherein each of the first and second transistors is a MOSFET transistor that operates in a sub-threshold mode to enable the exponential converter to apply said exponential transfer function. The invention of claim 1, wherein:the current multiplier comprises:first and second differential amplifiers, each having a corresponding current mirror as a load;a first current source coupled as a tail supply of the first differential amplifier; anda second current source coupled as a tail supply of the second differential amplifier, wherein:each of the first and second differential amplifiers comprises a corresponding first transistor and a corresponding second transistor;gates of the first transistors are electrically connected to receive a first common voltage;gates of the second transistors are electrically connected to receive a second common voltage;the first current source generates a reference current that is independent of technological process used for fabrication of the current multiplier and of temperature;the second current source generates the current proportional to absolute temperature;the first differential amplifier receives the first current; andthe second differential amplifier outputs the second current; andthe exponential converter comprises:a differential transistor pair comprising a third transistor and a fourth transistor;a resistor that electrically connects a gate of the third transistor and a gate of the fourth transistor; anda third current source that drives a corresponding reference current through the fourth transistor, wherein:the second current flows through the resistor;the output current flows through the third transistor; andeach of the third and fourth transistors is a MOSFET transistor operating in a sub-threshold mode to enable the exponential converter to apply said exponential transfer function. The invention of claim 1, further comprising a current limiter that imposes at least one of an upper limit and a lower limit on the output current. The invention of claim 13, wherein the current limiter comprises:a first reference-current source directly coupled to an output terminal;an operational amplifier;a second reference-current source coupled to a first input of the operational amplifier, wherein:an output and a second input of the operational amplifier are connected via a feedback loop; andthe output current is applied to the second input of the operational amplifier; andfirst and second current mirrors coupled between second input of the operational amplifier and the output terminal, wherein:the output terminal outputs a limited current;a reference current generated by the first reference current source sets the lower limit; anda reference current generated by the second reference current source sets the upper limit. A method, comprising:multiplying a first current and a current proportional to absolute temperature to generate a second current; andapplying an exponential transfer function to the second current to generate an output current, wherein:the exponential transfer function depends on a thermal voltage; andtemperature dependence of the current proportional to absolute temperature counteracts temperature dependence of the thermal voltage to cause the output current to be proportional to a temperature-independent, exponential function of the first current over a specified operating range. The invention of claim 15, further comprising generating the first current based on an input voltage, wherein, over the operating range, the output current is proportional to a temperature-independent, exponential function of the input voltage. The invention of claim 16, further comprising imposing at least one of an upper limit and a lower limit on the output current. The invention of claim 15, wherein the step of multiplying comprises:providing first and second differential amplifiers, each having a corresponding current mirror as a load;coupling a first current source as a tail supply of the first differential amplifier; andcoupling a second current source as a tail supply of the second differential amplifier, wherein:the second current source generates the current proportional to absolute temperature;the first differential amplifier receives the first current; andthe second differential amplifier outputs the second current. The invention of claim 15, wherein the step of applying comprises:providing a differential transistor pair that comprises a first transistor and a second transistor;providing a resistor that electrically connects a gate of the first transistor and a gate of the second transistor; anddriving a reference current through the second transistor, wherein:the second current flows through the resistor;the output current flows through the first transistor;each of the first and second transistors is a MOSFET transistor; andthe step of applying comprises configuring said MOSFET transistors to operate in a sub-threshold mode to enable said exponential transfer function. A device, comprising:means for multiplying a first current and a current proportional to absolute temperature to generate a second current; andmeans for applying an exponential transfer function to the second current to generate an output current, wherein:the exponential transfer function depends on a thermal voltage; andtemperature dependence of the current proportional to absolute temperature counteracts temperature dependence of the thermal voltage to cause the output current to be proportional to a temperature-independent, exponential function of the first current over an operating range of the device. BACKGROUND OF THE INVENTION [0001] 1. Field of the Invention The present invention relates to electronics and, more specifically, to voltage-to-current (V/I) converters having an exponential transfer function. 2. Description of the Related Art This section introduces aspects that may help facilitate a better understanding of the invention(s). Accordingly, the statements of this section are to be read in this light and are not to be understood as admissions about what is in the prior art or what is not in the prior art. An exponential (or dB-linear) voltage-to-current (V/I) converter is a key component for the design of automatic gain-control (AGC) circuits, which are used in a variety of applications, such as wireless communications devices, hearing aids, and disk drives. A representative AGC circuit employs an exponential V/I converter in the feedback loop that controls the gain of a variable-gain amplifier (VGA). The exponential characteristic of the V/I converter enables the AGC circuit to advantageously have a substantially constant settling time for a variety of initial input-signal conditions, which is very desirable for the above-specified applications. Additional details on the use of exponential V/I converters in AGC circuits can be found, e.g., in U.S. Pat. No. 6,369,618, which is incorporated herein by reference in its entirety. One problem with exponential V/I converters is that they are not straightforwardly amenable to implementation in CMOS technology. More specifically, unlike bipolar transistors, which have an inherent exponential transfer characteristic, MOSFET transistors have a square-law transfer characteristic in strong inversion. As a result, designing a CMOS V/I converter that exhibits an exponential transfer characteristic and has other desirable properties is relatively difficult. SUMMARY OF THE INVENTION [0007] Problems in the prior art are addressed by various embodiments of an exponential (or dB-linear) voltage-to-current (V/I) converter that is amenable to implementation in CMOS technology. In a representative embodiment, the exponential V/I converter has a voltage scaler, a current multiplier, and an exponential current converter serially connected to one another. The voltage scaler supplies an input current to the current multiplier based on an input voltage. The current multiplier multiplies the input current and a current proportional to absolute temperature and supplies the resulting current to the exponential current converter. The exponential current converter has a differential MOSFET pair operating in a sub-threshold mode and generating an output current that is proportional to a temperature-independent, exponential function of the input voltage. Advantageously, the exponential V/I converter can be implemented to have a dB-linear operation range as wide as about 40 dB. According to one embodiment, provided is a device having (A) a current multiplier that multiplies a first current and a current proportional to absolute temperature to generate a second current; and (B) an exponential converter that applies an exponential transfer function to the second current to generate an output current. The exponential transfer function depends on a thermal voltage. Temperature dependence of the current proportional to absolute temperature counteracts temperature dependence of the thermal voltage to cause the output current to be proportional to a temperature-independent, exponential function of the first current over an operating range of the device. According to another embodiment, provided is a method having the steps of: (A) multiplying a first current and a current proportional to absolute temperature to generate a second current; and (B) applying an exponential transfer function to the second current to generate an output current. The exponential transfer function depends on a thermal voltage. Temperature dependence of the current proportional to absolute temperature counteracts temperature dependence of the thermal voltage to cause the output current to be proportional to a temperature-independent, exponential function of the first current over a specified operating range. According to yet another embodiment, provided is a device having (A) means for multiplying a first current and a current proportional to absolute temperature to generate a second current; and (B) means for applying an exponential transfer function to the second current to generate an output current. The exponential transfer function depends on a thermal voltage. Temperature dependence of the current proportional to absolute temperature counteracts temperature dependence of the thermal voltage to cause the output current to be proportional to a temperature-independent, exponential function of the first current over an operating range of the device. BRIEF DESCRIPTION OF THE DRAWINGS [0011] Other aspects, features, and benefits of the present invention will become more fully apparent from the following detailed description, the appended claims, and the accompanying drawings in which: FIG. 1 shows a block diagram of an exponential (or dB-linear) voltage-to-current (V/I) converter according to one embodiment of the invention; FIG. 2 shows a circuit diagram of a voltage scaler that can be used in the exponential V/I converter of FIG. 1 according to one embodiment of the invention; [0014]FIG. 3 shows a circuit diagram of a current multiplier that can be used in the exponential V/I converter of FIG. 1 according to one embodiment of the invention; FIG. 4 shows a circuit diagram of an exponential current converter that can be used in the exponential V/I converter of FIG. 1 according to one embodiment of the invention; and FIG. 5 shows a circuit diagram of a current limiter that can be used in the exponential V/I converter of FIG. 1 according to one embodiment of the invention. DETAILED DESCRIPTION [0017] FIG. 1 shows a block diagram of an exponential (or dB-linear) voltage-to-current (V/I) converter 100 according to one embodiment of the invention. Converter 100 receives input voltage V and converts it into output current I so that there is an exponential relationship between the input voltage and the output current. As further described below, converter 100 is amenable to implementation in CMOS technology and is advantageously capable of maintaining the exponential transfer characteristic over a relatively wide (e.g., about 40 dB) output-current range. Converter 100 has a voltage scaler 110 that conditions input voltage V for further processing in the subsequent circuit blocks of the converter. More specifically, voltage scaler 110 scales input voltage V and adds bias voltage V to the scaled voltage according to Eq. (1): where V[110] is the output voltage of the voltage scaler, and γ is a scaling factor. In one embodiment, one or both of bias voltage V and scaling factor γ are programmable so that output voltage V remains in an optimal range for the entire variability range of input voltage V . In a representative embodiment, |V |≈0.2V and γ≈0.5. Converter 100 applies output voltage V to resistive load R , which drives current I through that load. In effect, the combination of voltage scaler 110 and resistive load R serves as a voltage-to-current converter that converts input voltage V into current I . The subsequent signal processing in converter 100 is current-based and converts current I into output current I Converter 100 further has a current multiplier 120 whose output current I is expressed according to Eq. (2): η is a constant, and I is a current proportional to absolute temperature (PTAT). In effect, current multiplier 120 generates output current I by multiplying the input current (i.e., current I ) and current I As further described below, the temperature proportionality of current I is utilized to make the exponential transfer characteristic of converter 100 substantially temperature independent and enable the converter to operate accurately and reliably in a variety of ambient conditions without a thermostat. Output current I produced by current multiplier 120 is applied to an exponential current converter 130 that converts output current I into output current I according to Eq. (3): 3 = A exp ( σ I 2 V T ) ( 3 ) ##EQU00001## where A and σ are constants, and V is the thermal voltage (=k /q, where k is the Boltzmann constant, T is temperature in Kelvin, and q is the electron charge). Eqs. (2) and (3), taken together, indicate that current multiplier 120 and exponential current converter 130 work together to provide a substantially temperature-independent, exponential transfer function between currents I and I . More specifically, according to Eqs. (2) and (3), the argument of the exponent in Eq. (3) contains current I and thermal voltage V in the nominator and denominator, respectively. Since both current I and thermal voltage V are linear functions of temperature, their temperature dependencies cancel each other, thereby causing the exponential transfer function between currents I and I to be substantially temperature independent. Exponential current converter 130 applies output current I to a current limiter 140, where it is processed to generate output current I . More specifically, current limiter 140 imposes lower limit I and upper limit I onto current I . If the magnitude of current I is smaller than lower limit I , then current limiter 140 forces I . If the magnitude of current I is larger than upper limit I , then current limiter 140 forces I . If the magnitude of current I is between lower limit I and upper limit I , then current limiter 140 forces I FIG. 2 shows a circuit diagram of a voltage scaler 200 that can be used as voltage scaler 110 according to one embodiment of the invention. Voltage scaler 200 has a reference current source 202, an operational amplifier 204, and four resistors R interconnected as shown in FIG. 2. If R , then voltage scaler 200 implements the transfer function defined by Eq. (1), wherein: V bias ≈ I ref R 2 ( R 3 + R 4 ) R 3 ( 4 a ) γ ≈ R 2 ( R 3 + R 4 ) R 1 R 3 ( 4 b ) ##EQU00002## where I[ref] is the current generated by reference current source 202. Note that resistor R is a programmably variable resistor, which enables programmability of scaling factor γ. In one embodiment, reference current source 202 is a programmably variable current source, which enables programmability of bias voltage V [0024]FIG. 3 shows a circuit diagram of a current multiplier 300 that can be used as current multiplier 120 according to one embodiment of the invention. Current multiplier 300 has two nested differential amplifiers, each having an active current-mirror load. The active devices of the first (outer) differential amplifier are MOSFET transistors M5 and M6, and the load of that differential amplifier is the current mirror formed by transistors MOSFET M1 and M2. Similarly, the active devices of the second (inner) differential amplifier are transistors MOSFET M7 and M8, and the load of that differential amplifier is the current mirror formed by MOSFET transistors M3 and M4. The gates of transistors M5 and M7 are both electrically connected to a common node having floating voltage V . The gates of transistors M6 and M8 are both electrically connected to a common node that receives reference voltage V In one embodiment, reference voltage V is supplied by a programmable reference-voltage source (not explicitly shown in FIG. 3 ). The voltage source is programmed to set a value of V so that there is a desired relationship between output current I and input voltage V . In particular, V is selected from a voltage range, wherein, if V changes, then the transfer function between output current I and input voltage V is translated along the voltage axis without changing its slope. Current multiplier 300 further has reference current sources 302 and 304 that function as tail supplies of the outer and inner differential amplifiers, respectively. Current source 302 is designed to generate reference current I that does not depend on the technological process used in the fabrication of current multiplier 300 or on the temperature of the current multiplier. In one embodiment, current source 302 can be implemented, as known in the art, using a conventional bandgap circuit. Current source 304 is designed to generate temperature-dependent PTAT current I (see also Eq. (2)). In one embodiment, current source 304 can be a PTAT current source disclosed, e.g., in U.S. Patent Application Publication No. 2008/0284493, which is incorporated herein by reference in its entirety. In one configuration, current source 304 generates current I so that, at room temperature (T =298 K), I Operation of transistors M5-M8 in current multiplier 300 is described by Eqs. (5a)-(5d), respectively: I bgap + I 1 2 = 1 2 μ n C ox W 1 l 1 ( V x - V a - V th ) 2 ( 5 a ) I PTAT + I 2 2 = 1 2 μ n C ox W 2 l 2 ( V x - V b - V th ) 2 ( 5 b ) I bgap - I 1 2 = 1 2 μ n C ox W 1 l 1 ( V ref - V a - V th ) 2 ( 5 c ) I PTAT - I 2 2 = 1 2 μ n C ox W 2 l 2 ( V ref - V b - V th ) 2 ( 5 d ) ##EQU00003## is the mobility of electrons; C is the capacitance of the oxide layer; W and l are the width and length, respectively, of transistors M5 and M6; W and l are the width and length, respectively, of transistors M7 and M8; V and V are the voltages indicated in FIG. 3 ; and V is the threshold voltage. If transistors M5-M8 are implemented so that ( W 1 / l 1 ) ( W 2 / l 2 ) = 1 , ##EQU00004## then V[x] and Eqs. (5a)-(5d) reduce to Eq. (2), wherein η=(I If current multiplier 300 is used in V/I converter 100 to implement current multiplier 120, then the following relationship exists between input voltage V and current I and i . Note that, for a given configuration of V/I converter 100, α and i are constants. FIG. 4 shows a circuit diagram of an exponential current converter 400 that can be used as exponential current converter 130 according to one embodiment of the invention. Exponential current converter 400 has a differential pair of MOSFET transistors M1 and M2 that are configured to operate in a sub-threshold mode (also referred to as a cut-off or weak-inversion mode). The gates of transistors M1 and M2 are electrically connected through resistor R . The gate of transistor M2 receives bias voltage V from a bias-voltage generator 410. Transistor M3 serves as a tail supply for the differential pair. A current source 402 drives reference current I through transistor M2. A current source 404 and transistor M4 are used to appropriately bias transistors M2 and M3. As known in the art, drain-to-source current I in a MOSFET transistor operating in a sub-threshold mode varies exponentially with gate-to-source voltage V , as expressed by Eq. (7): I ds ≈ I 0 exp ( V gs - V th nV T ) ( 7 ) ##EQU00005## where I[0] is a constant; and n=1+C , where C is the capacitance of the depletion layer. Applying Eq. (7) to transistor M2, one finds that: I ref 1 ≈ I 0 exp ( V bias 1 - V s - V th nV T ) ( 8 ) ##EQU00006## where V[s] is the voltage indicated in FIG. 4. Further applying Eq. (7) to transistor M1 and then using Eq. (8), one finds that: 3 ≈ I 0 exp ( V bias 1 + I 2 R 12 - V s - V th nV T ) = I ref 1 exp ( I 2 R 12 nV T ) ( 9 ) ##EQU00007## Note that Eq . (9) is equivalent to Eq. (3), wherein A=I and σ=R If current multiplier 300 and exponential current converter 400 are used in V/I converter 100 to implement current multiplier 120 and exponential converter 130, respectively, then the following relationship exists between input voltage V and current I 3 = B exp ( β V i n ) where B = I ref 1 exp ( R 12 I PTAT i c nV T I bgap ) and β = α R 12 I PTAT nV T I bgap . ( 10 ) ##EQU00008## Note that , for a given configuration of V/I converter 100, B and β are constants that do not depend on the temperature because the temperature dependencies of current I and thermal voltage V substantially cancel each other. Thus, Eq. (10) indicates that V/I converter 100 employing current multiplier 300 and exponential current converter 400 provides a temperature-independent, exponential transfer function between input voltage V and current I . In addition, current multiplier 300 and exponential current converter 400 are advantageously capable of exhibiting a dB-linear transfer function over a relatively wide (e.g., about 40 dB) operation range because the exponential current converter invokes the inherent exponential characteristic of MOSFET transistors in the sub-threshold operating mode. FIG. 5 shows a circuit diagram of a current limiter 500 that can be used as current limiter 140 according to one embodiment of the invention. Current limiter 500 has reference current sources 502 and 504, an operational amplifier 506 with a feedback network, and two current mirrors formed by transistors M1, M4, M5, and M6. Reference current I generated by current source 502 sets the minimum output current for current limiter 500. Reference current I generated by current source 504 sets the maximum output current for current limiter 500. Current source 502 sets the minimum output current for current limiter 500 because it is directly connected to an output terminal 508 of the current limiter. As a result, output current I has at least a current component corresponding to reference current I . Hence, output current I does not drop below the value of I even if current I becomes zero. Current source 504 sets the maximum output current for current limiter 500 in the following manner. Transistors M1 and M2 have substantially the same size, which causes the value of I to set the ON/OFF level for transistor M3. More specifically, if current I is smaller than I , then operational amplifier 506 holds transistor M3 in the OFF state. As a result, the two current mirrors formed by transistors M1, M4, M5, and M6 force output current I to mirror current I . However, if current I is greater than I , then operational amplifier 506 turns ON transistor M3, which sinks the excess current and causes the current flowing through transistor M1 to remain at the value of I . The two current mirrors then mirror the current flowing through transistor M1 onto output terminal 508, thereby substantially forcing output current I not to exceed the value of I While this invention has been described with reference to illustrative embodiments, this description is not intended to be construed in a limiting sense. Although certain embodiments of the invention have been described in reference to NMOS technology, the invention is not so limited. Various circuits of the inventions can also be implemented using the PMOS technology, the bipolar CMOS technology, and various non-MOS technologies, including implementations in an integrated circuit. Various modifications of the described embodiments, as well as other embodiments of the invention, which are apparent to persons skilled in the art to which the invention pertains are deemed to lie within the principle and scope of the invention as expressed in the following claims. As used herein, the term dB-linear means that, when plotted on a logarithmic scale over an operation range, the output current is a substantially linear function of the input voltage (or current), wherein slope of the linear function does not deviate from a specified value by more than about ±5%. Unless explicitly stated otherwise, each numerical value and range should be interpreted as being approximate as if the word "about" or "approximately" preceded the value of the value or range. It will be further understood that various changes in the details, materials, and arrangements of the parts which have been described and illustrated in order to explain the nature of this invention may be made by those skilled in the art without departing from the scope of the invention as expressed in the following claims. Although the elements in the following method claims, if any, are recited in a particular sequence with corresponding labeling, unless the claim recitations otherwise imply a particular sequence for implementing some or all of those elements, those elements are not necessarily intended to be limited to being implemented in that particular sequence. Reference herein to "one embodiment" or "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the invention. The appearances of the phrase "in one embodiment" in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments necessarily mutually exclusive of other embodiments. The same applies to the term "implementation." Also for purposes of this description, the terms "couple," "coupling," "coupled," "connect," "connecting," or "connected" refer to any manner known in the art or later developed in which energy is allowed to be transferred between two or more elements, and the interposition of one or more additional elements is contemplated, although not required. Conversely, the terms "directly coupled," "directly connected," etc., imply the absence of such additional elements. Transistors are typically shown as single devices for illustrative purposes. However, it is understood by those with skill in the art that transistors will have various sizes (e.g., gate width and length) and characteristics (e.g., threshold voltage, gain, etc.) and may consist of multiple transistors coupled in parallel to get desired electrical characteristics from the combination. Further, the illustrated transistors may be composite transistors. As used in the claims, the terms "source," "drain," and "gate" should be understood to refer either to the source, drain, and gate of a MOSFET or to the emitter, collector, and base of a bi-polar device when the present invention is implemented using bi-polar transistor technology. Patent applications by Bipul Agarwal, Irvine, CA US Patent applications by Dean Badillo, Laguna Beach, CA US Patent applications by Hasan Akyol, Newport Beach, CA US Patent applications by SKYWORKS SOLUTIONS, INC. Patent applications in class Converting input voltage to output current or vice versa Patent applications in all subclasses Converting input voltage to output current or vice versa User Contributions: Comment about this patent or add new information about this topic:
{"url":"http://www.faqs.org/patents/app/20100194443","timestamp":"2014-04-19T03:33:26Z","content_type":null,"content_length":"64414","record_id":"<urn:uuid:d68438ef-339e-4c2e-bb31-27c4df4f706f>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00564-ip-10-147-4-33.ec2.internal.warc.gz"}
Downey, CA Trigonometry Tutor Find a Downey, CA Trigonometry Tutor ...Three of my SAT Math II students this past year were boarding school students in Russia; I tutored these girls remotely via Skype-esque video conferencing and advanced document mark-up & screen-sharing. AL of my students saw their SAT Math II scores improve after just two weeks of rigorous tutor... 60 Subjects: including trigonometry, reading, Spanish, chemistry ...I am extremely patient and understanding, with an adaptable teaching style based on the student's needs. I specialize in high school math subjects like Pre-Algebra, Algebra, Algebra 2/ Trigonometry, Precalculus and Calculus. I can also tutor college math subjects like Linear Algebra, Abstract Algebra, Differential Equations, and more. 9 Subjects: including trigonometry, calculus, geometry, algebra 1 ...In general, I favor a "hands-on" approach to teaching that involves a lot of interaction between the tutor and student. I look forward to hearing from youHi, I have been teaching all math subjects for many years. I can improve your math skills in just a very short time. 11 Subjects: including trigonometry, geometry, algebra 1, algebra 2 ...I use AutoCAD for 20 years that required deep understanding of trigonometry. My programming involved constant trigonometric calculations (2D and 3D graphics' programming). Like in Latin America Armenia (where I was born) is a "soccer" country so, I played soccer from my early ages, and I do play... 21 Subjects: including trigonometry, reading, English, geometry ...I finished my undergraduate education this past May at the University of Michigan, where I graduated with degrees in music (clarinet performance) and Honors German Studies. During my undergraduate years, I also took an interest in mathematics and chose to enroll in calculus and physics-based cos... 28 Subjects: including trigonometry, reading, English, writing Related Downey, CA Tutors Downey, CA Accounting Tutors Downey, CA ACT Tutors Downey, CA Algebra Tutors Downey, CA Algebra 2 Tutors Downey, CA Calculus Tutors Downey, CA Geometry Tutors Downey, CA Math Tutors Downey, CA Prealgebra Tutors Downey, CA Precalculus Tutors Downey, CA SAT Tutors Downey, CA SAT Math Tutors Downey, CA Science Tutors Downey, CA Statistics Tutors Downey, CA Trigonometry Tutors
{"url":"http://www.purplemath.com/Downey_CA_Trigonometry_tutors.php","timestamp":"2014-04-18T23:57:54Z","content_type":null,"content_length":"24274","record_id":"<urn:uuid:00c2c184-4f61-4d83-af56-78f275e1c65c>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00071-ip-10-147-4-33.ec2.internal.warc.gz"}
The rows of this unitary matrix form an orthonormal set but the columns don't. Why? September 8th 2013, 02:54 PM #1 Super Member Nov 2008 The rows of this unitary matrix form an orthonormal set but the columns don't. Why? The rows of this ( {{1/3 - 2/3i,2/3i},{-2/3i,-1/3-2/3i}} - Wolfram|Alpha ) unitary matrix form an orthonormal set but, the columns of A do not form an orthonormal set despite this ( https:// en.wikipedia.org/wiki/Unitar...ent_conditions ) Wikipedia article saying that they should.Could someone please help me understand why the columns of A do not form an orthonormal set for this unitary matrix? Re: The rows of this unitary matrix form an orthonormal set but the columns don't. Wh Hmm...let's see: Let's take the inner product of the first two columns: $\left(\frac{1}{3} - \frac{2}{3}i\right)\left(\overline{\frac{2}{3}i} \right) + \left(-\frac{2}{3}i \right)\left(\overline{-\frac{1}{3} - \frac{2}{3}i} \right)$ $=\left(\frac{1}{3} - \frac{2}{3}i\right)\left(-\frac{2}{3}i \right) + \left(-\frac{2}{3}i \right)\left(-\frac{1}{3} + \frac{2}{3}i \right)$ $= -\frac{4}{9} - \frac{2}{9}i + \frac{4}{9} + \frac{2}{9}i = 0$. So it appears the columns are orthogonal. Taking norms, for the first column we have: $\sqrt{\left(\frac{1}{3} - \frac{2}{3}i \right)\left(\frac{1}{3} + \frac{2}{3}i \right) + \left(-\frac{2}{3}i \right)\left(\frac{2}{3}i \right)}$ $= \sqrt{\frac{1}{9} + \frac{4}{9} + \frac{4}{9}} = \sqrt{1} = 1$. I leave it to you to show that the second column is also a unit vector. I don't see the problem. Re: The rows of this unitary matrix form an orthonormal set but the columns don't. Wh I had just figured out that I was doing the dot products incorrectly and was going to mention it ... sorry that you had to type that out but, thank you! September 9th 2013, 08:15 AM #2 MHF Contributor Mar 2011 September 9th 2013, 08:28 AM #3 Super Member Nov 2008
{"url":"http://mathhelpforum.com/advanced-algebra/221826-rows-unitary-matrix-form-orthonormal-set-but-columns-don-t-why.html","timestamp":"2014-04-16T09:14:40Z","content_type":null,"content_length":"38186","record_id":"<urn:uuid:585a7d57-b028-4d66-937c-555107124c68>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00110-ip-10-147-4-33.ec2.internal.warc.gz"}
piecewise function August 9th 2009, 11:05 PM #1 Junior Member Aug 2009 Let y be a piecewise function given by y(r) =kr^2 − m, if r ≤ R; −2kR^3 / r, if r > R for some (k,m, r,R > 0) ∈ R. This function represents the gravitational potential of the Earth. Find the values of k for which y is continuous at r = R. any idea how to do this queation? thank you! You have some typo in your equation which makes it impossible for me to guess what the function is. I assume that you mean: $y(r) = \left\{\begin{array}{lcr}kr^2-m & if & r\leq R\\-\dfrac{2kR^3}{r} & if & r>R\end{array}\right.$ If $y_1(r)=kr^2-m ~ if ~ r\leq R$ and $y_2(r)= -\dfrac{2kR^3}{r} ~ if ~ r>R$ Then you have to solve the system of equations for k: $y_1(r)=y_2(r)~\wedge~y'_1(r) = y'_2(r)$ $y'_1(r) = y'_2(r): 2kr=\dfrac{2kR^3}{r^2}~\implies~\boxed{r=R}$ $y_1(r)=y_2(r): kR^2-m=-\dfrac{2kR^3}{R}~\implies~3kR^2=m~\implies~ \boxed{k=\dfrac m{3R^2}}$ Actually you have to calculate some limits to be exact. The final result will be the same which I've posted here. I assume that you mean: $y(r) = \left\{\begin{array}{lcr}kr^2-m & if & r\leq R\\-\dfrac{2kR^3}{r} & if & r>R\end{array}\right.$ If $y_1(r)=kr^2-m ~ if ~ r\leq R$ and $y_2(r)= -\dfrac{2kR^3}{r} ~ if ~ r>R$ Then you have to solve the system of equations for k: $y_1(r)=y_2(r)~\wedge~y'_1(r) = y'_2(r)$ $y'_1(r) = y'_2(r): 2kr=\dfrac{2kR^3}{r^2}~\implies~\boxed{r=R}$ $y_1(r)=y_2(r): kR^2-m=-\dfrac{2kR^3}{R}~\implies~3kR^2=m~\implies~ \boxed{k=\dfrac m{3R^2}}$ Actually you have to calculate some limits to be exact. The final result will be the same which I've posted here. thanks so much, you express the right queation, the only thing wrong is actually it is not y, I use y to take the place of "that symbol", I cant wrote or copy that symbol, it is o and I together, any idea what is it ? $\Phi$? or $\phi$? This is the greek letter phi, the first is capital the second is lower case. August 9th 2009, 11:14 PM #2 August 9th 2009, 11:35 PM #3 August 9th 2009, 11:55 PM #4 Junior Member Aug 2009 August 9th 2009, 11:59 PM #5 August 10th 2009, 12:05 AM #6 Junior Member Aug 2009
{"url":"http://mathhelpforum.com/calculus/97531-piecewise-function.html","timestamp":"2014-04-19T13:05:55Z","content_type":null,"content_length":"51023","record_id":"<urn:uuid:2c353f1a-29a0-43d4-803c-a6325a1beab1>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00566-ip-10-147-4-33.ec2.internal.warc.gz"}
UBC Mathematics Department Colloquium Abstract: Professor Walter Rudin, Department of Mathematics, University of Wisconsin-Madison Analytic aspects of the Bohr Topology If G is an abelian group, then G^# denotes G equipped with the weakest topology that makes every character of G continuous. This is the Bohr topology of G. If G= \Bbb Z, the additive group of the integers, and A is a Hadamard set in \Bbb Z, it is shown that: (i) A-A has 0 as its only limit point in \Bbb Z^#, (ii) No Sidon subset of A-A has a limit point in \Bbb Z^#, (iii) A-A is a \Lambda (p) set for all p<\infty. This leads to an explicit example of a set which is \Lambda (p) for all p<\infty and is dense in \Bbb Z^#. If f(x) is a quadratic or cubic polynomial with integer coefficients, then the closure of f(\Bbb Z) in the Bohr compactification of \Bbb Z is shown to have Haar measure 0. Return to this week's seminars
{"url":"http://www.math.ubc.ca/Dept/Events/colloquia/rudin.html","timestamp":"2014-04-18T03:22:01Z","content_type":null,"content_length":"1481","record_id":"<urn:uuid:762a70ff-1f20-401d-9535-4c5eae2ae8d3>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00106-ip-10-147-4-33.ec2.internal.warc.gz"}
Tech Briefs Improved Contact Solutions and Traction Calculations In the previous versions of ADINA, contact surfaces are identified by linear segments. For improved accuracy, in particular when quadratic elements are used, the actually-interpolated surfaces are employed in ADINA 8.3 to detect and calculate the contact conditions. Hence, in 3-D analysis, 10- and 11-node tet elements in contact use a 6-node contact segment, while 20- and 27-node brick elements use 8- and 9-node contact segments, respectively. This enhancement is also used for the 2-D and shell elements resulting frequently in a significant increase in accuracy for contact problems involving quadratic elements. A new, more accurate contact traction calculation algorithm is also used for all elements. These improved contact analysis procedures are available for linear and large deformation nonlinear solutions and are different from the "glueing algorithm" presented in the previous News, see ADINA News, Sep 15, 2005. In the glueing algorithm, surfaces are glued together, that is, separation and sliding are not possible, whereas in the new contact algorithm, the conditions of contact, sliding with friction, and separation are all automatically solved for. To demonstrate the accuracy of the improved ADINA 8.3 contact analysis procedure, the above shown 3-D patch test is solved using three different mesh combinations as shown in the figure below. In the first case, both bodies are meshed with 10-node tets. In the second case, both are meshed with 20-node bricks, and in the third case, the bottom body is meshed with 27-node bricks, while the top one with 11-node tets. Note that the top and bottom meshes are totally incompatible in each case, see also the last figure where the contact surfaces are separated and the top bodies have been rotated about the X axis by 180 degrees. In all three cases, the error in the calculated tractions is within ±6%. Note also that the band plotting of contact tractions is a new feature in ADINA 8.3. The glueing algorithm, referred to above, satisfies the patch test exactly, but the contact algorithm has much more general capabilities.
{"url":"http://www.adina.com/newsgC37.shtml","timestamp":"2014-04-18T11:20:07Z","content_type":null,"content_length":"15567","record_id":"<urn:uuid:dbeee1db-d932-49d7-af4e-8e34884c9053>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00510-ip-10-147-4-33.ec2.internal.warc.gz"}
This is the command window for XPPAUT. Click for a fullsize image. The data viewer, equation window, initial data window, and parameter window. This is a and the for a planar ODE along with the sets in blue and purple. An integrable system colored according to energy levels. The famous Lorenz equations projected along the coordinate axes. Click here to see it rotate. An animated Julia set for the complex quadratic map and of course the associated Mandelbrot set The basins of attraction for the solutions to Newton's method applied to the cube roots of unity. Linearly coupled pendula with no friction. Class IV cellular automata. Download the table ca5.tab before running ca100.ode The Grey-Scott PDE using the method of lines and the integrator CVODE. XPP allows you to make little cartoons using a simple scripting language so that you can create a physical representation of your ODE. Here are some same ODEs and the animation files as well as animated gifs of the simulations. Click on the miniature picture to bring up the full animation. Here is the simple undamped pendulum rendered with the animation file. This is a double pendulum rendered with the animation file. Here is another way to look at the Grey-Scott equations rendered with the animation file. The curves represent the spatial profiles as of the two chemical species. This shows the evolution of the relative phases for a 6x6 array of weakly coupled oscillators rendered with this animation file. The oscillators are nearly locked and have crossed a saddle-node bifurcation. Integrate for 80 time units and look at x33 drift. This is a model for a waterwheel with 10 cups. In the limit as the number of cups goes to infinity, the behavior is modeled by the Lorenz equations. This cartoon is rendered by this animation file.
{"url":"http://www.math.pitt.edu/~bard/xpp/ss/screen.html","timestamp":"2014-04-20T05:42:04Z","content_type":null,"content_length":"4449","record_id":"<urn:uuid:5becca9f-9b20-4d1b-b97c-c04d12c2cfb9>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00205-ip-10-147-4-33.ec2.internal.warc.gz"}
Collision polynomials up vote 5 down vote favorite Consider $P_n(x)$ polynomials defined through the recurrence relations $$P_n(x)=2(1-x)P_{n-1}(x)-(1+x)^2P_{n-2}(x),$$ with $P_0(x)=1$ and $P_1(x)=1-3x$. In fact, the explicit solution of these recurrence relations is given by the formula $$P_n(x)=\frac{1}{2}\left[ (1+i\sqrt{x})^{2n+1}+(1-i\sqrt{x})^{2n+1}\right].$$ What condition(s) should satisfy the number $n$, the polynomial $P_n(x)$ to be reducible? I have checked, that $P_n(x)$ is reducible for $n=4,7,10,12,13,16,17,19,\ldots$ What is special about these numbers? Another question: given a number $n$, for what numbers $0<m<n$ the polynomial $P_n(x)$ is divisible by $P_m(x)$? For example, $P_4$ is divisible by $P_1$; $P_7$ is divisible by $P_1$ and $P_2$; $P_ {10}$ is divisible by $P_1$ and $P_3$; $P_{12}$ is divisible by $P_2$; $P_{13}$ is divisible by $P_1$; $P_{16}$ is divisible by $P_1$ and $P_5$; $P_{17}$ is divisible by $P_2$ and $P_3$; $P_{19} $ is divisible by $P_1$ and $P_6$. These polynomials emerged in the solution of the certain collision problem (hence the name) from the book: David Morin, Introduction to Classical Mechanics With Problems and Solutions (Cambridge University Press, 2007). See problem 5.88 on page 192. It can be also found here (the problem for Week 19. The solution is also provided there, which, however, does not use the collision nt.number-theory polynomials factorization 3 It's always a good idea to try the OEIS when encountering a mysterious sequence: oeis.org/… – Felix Goldberg Jul 25 '13 at 11:59 Thanks! I was not aware of OEIS. So now we have a conjucture: $P_n$ is reducible if and only if $n=(m-1)/2$ for some odd nonprime integer $m$. Amusing! It remains to prove it (I have checked some other numbers from the OEIS list and all of them works). – Zurab Silagadze Jul 25 '13 at 12:39 add comment 2 Answers active oldest votes Writing $P_n(x)=\frac{1}{2}\left[ (i\sqrt{x}+1)^{2n+1}-(i\sqrt{x}-1)^{2n+1}\right]$ and using that $x^u-y^u$ divides $x^v-y^v$ once $u$ divides $v$ easily shows that $P_m$ divides $P_n$ once $2m+1$ divides $2n+1$. With a little more effort, one should see that this sufficient condition is necessary, too. up vote 4 down I'm not sure whether your first question would like to see an answer why $P_n$ is irreducible over $\mathbb Q$ when $2n+1$ is a prime. Anyway, this is the case because the vote accepted reciprocal $x^nP(1/x)$ is an Eisenstein polynomial with respect to the prime $2n+1$. add comment This is not an 'answer' per se, but a long comment with some observations which seem to pan out quite well. First notice that the sequence $4,7,10,12,13,16,17,19$ when passed through the function $\lambda n \rightarrow 2n+1$ (which is gotten from the closed-form expression you provided), one gets the following sequence: $$9,15,21,25,27,33,35,39$$ which factorizes as $$3*3,5*3,7*3,5*5,3*3*3,3*11,5*7,3*13.$$ The most important item about the above sequence is that it corresponds exactly to your observed sequence of factorizations. up vote 2 down vote I would then bet that the pattern of factorization is that only for those $n$ where $2n+1$ factorizes in only odd numbers do you get a divisibility relation in your $P_n$'s. Can $2n+1$ have even factors ...??? – Peter Mueller Jul 25 '13 at 12:42 @Peter - of course not, that was just sloppy phrasing on my part. I guess I could have said "when it factors". – Jacques Carette Jul 25 '13 at 12:45 add comment Not the answer you're looking for? Browse other questions tagged nt.number-theory polynomials factorization or ask your own question.
{"url":"http://mathoverflow.net/questions/137719/collision-polynomials","timestamp":"2014-04-20T03:42:55Z","content_type":null,"content_length":"60162","record_id":"<urn:uuid:accfe902-cfad-4b7e-8c33-b8f7b37ada9e>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00006-ip-10-147-4-33.ec2.internal.warc.gz"}
Reinhold Baer Born: 22 July 1902 in Berlin, Germany Died: 22 October 1979 in Zurich, Switzerland Click the picture above to see three larger pictures Previous (Chronologically) Next Main Index Previous (Alphabetically) Next Biographies index Reinhold Baer's parents were Emil Baer and Bianka Timendorfer. Emil Baer had been born in Kempen, Posen, Prussia (since 1920 this town is Kepno, Poznan, Poland) on 10 October 1861 but at the time Reinhold was born he was living in Berlin where he was a successful clothing manufacturer. He was the son of Jakob Baer, a manufacturer, and Charlotte Gallewski, both from Kempen. Bianka was born in Berlin on 30 March 1875, the daughter of Adolf Timendorfer and Sarah Loewy. Adolf Timendorfer was a merchant who had served in the Prussian army during the wars of 1864, 1866 and 1870-71. He died in Berlin in 1922. Emil and Bianka were married in Berlin on 28 August 1901 and their only child was Reinhold, the subject of this biography. Sadly, it is relevant to the events of Baer's life to say at this point that both his parents and his four grandparents were Jewish. Now ([5] and [6]):- .. the family was prosperous. They lived in Charlottenburg, an elegant part of Berlin. As a small child, Reinhold travelled a good deal with his parents, and remembered staying in the best hotels and eating exotic food. Baer began his studies at the Kaiser Friedrich Schule in Charlottenburg in 1908. He remained in this Humanistisches Gymnasium for twelve years until he took the Abitur examination and graduated at the age of 18 in 1920. These were dramatic years with the 1914-18 war having a major impact on German life. Also Baer's father had died in Berlin on 28 April 1917. When Baer began his schooling he had belonged to a prosperous family looking forward to a good future. By the time of his graduation, however, the family fortune was no more and the future looked difficult and uncertain. Baer felt that he could best support the family by training to be an engineer. He also decided that he would be more successful if he became a Christian so, in 1920, he adopted the Protestant faith. He studied mechanical engineering for a year at the Technische Hochschule at Hannover but, after an academic year studying the theory and the summer of 1921 spent undertaking practical engineering work in a factory, he realised that this was not the subject for him. He then changed to mathematics and philosophy at the Albert-Ludwigs University of Freiburg im Breisgau beginning his course on 1 October 1921. Wolfgang Krull had just been appointed as a Privatdocent in Freiburg and Krull's thesis advisor, Alfred Loewy, was an ordinary professor there. Friedrich Karl Schmidt had been an undergraduate at Freiburg for two years when Baer arrived but they became friends ([5] and [6]):- Baer found the town and surrounding Black Forest countryside very much to his liking and retained a great fondness for Freiburg throughout his life. He went to Göttingen in 1922 and was influenced by Emmy Noether and Hellmuth Kneser (the son of Adolf Kneser) who supervised Baer's doctoral thesis on the classification of curves on surfaces. Baer won a scholarship for specially gifted students in 1924 and this enabled him to study at Kiel for a year with Helmut Hasse, Ernst Steinitz and Otto Toeplitz. He gained much mathematically in working with Hasse but this year also gave him the opportunity to continue his interest in philosophy and the foundations of mathematics for he made friends with Heinrich Scholz, a theologian and philosopher. During this year in Kiel, Baer wrote up his doctoral dissertation Kurventypen auf Flächen, and presented it to Göttingen in 1925. It was published in Crelle's Journal in 1927 and presented a classification of the homotopy classes of closed curves on a closed orientable surface of genus > 1. This was not his only publication in 1927 for in that year he also published Über nicht-archimedisch geordnete K örper and Algebraische Theorie der differentiierbaren Funktionenk örper. A post at Freiburg was offered to him by Alfred Loewy who was looking for an assistant. Baer held this assistant position from 1926 until 1928 and, during this time, he turned towards algebra. Loewy clearly was one of the main influences in this change of direction. There was another influence on Baer at Freiburg, however, namely Wolfgang Krull. Influenced by Loewy and Krull, he began undertaking deep research into various aspects of algebra. But an event at Freiburg was to have a major influence on his personal life. In the autumn of 1927([5] and [6]):- ... Baer had been asked by a school friend to look up the daughter of a friend of his who was coming from Leipzig to continue mathematical studies at Freiburg. According to Baer's own account he complied, but grudgingly. Thus he met Marianne Kirstein, who was later to become his wife. Their marriage took place in 1929 and Reinhold and Marianne had one son, Klaus Baer who become an Egyptologist and professor at the University of Chicago's Oriental Institute and also in its Near Eastern Languages Department. Baer habilitated at Freiburg on 1 March 1928 and taught his first course there during the summer semester. However, Hasse offered him a post at Halle in 1928 and Baer accepted, partly because the prospect of working with Hasse excited him but also because he would then be living close to Leipzig where his wife's parents lived and her father published art books. Baer's marriage was an extremely happy one but he gained mathematically as well. This was because Friedrich Levi, who had been a friend of his wife's family since she was a young girl, was teaching at Leipzig University. Baer and Levi began a mathematical collaboration which lasted until Levi's death in 1966. While at Halle he undertook a joint project with Hasse, publishing Steinitz's Algebraische Theorie der Körper, which had been first published in Crelle's Journal in 1910. They turned Steinitz's paper into a book with a commentary on the text and an appendix on Galois theory written by Baer. While Baer was on holiday in Igls, near Innsbruck in Austria, with his wife, Hitler came to power and Baer was required to complete a questionnaire to comply with the "Law for the Restoration of the Professional Civil Service" enacted on 7 April 1933. He completed the form, giving details of his parents and grandparents, on 30 June 1933 while still in Igls. He was then informed that his services at Halle were no longer required. In a letter written on 7 September 1933 by the 'Prussian minister for science, art and education of the people' he was informed that:- Based on §3 of the Law for the Restoration of the Professional Civil Service, dated 7 April 1933, I hereby withdraw your right to teach at the University of Halle-Wittenberg. Previously received remunerations shall cease by the end of September 1933. Baer and his wife never returned to Halle but remained in Austria wondering what they should do. Helmut Hasse was fully aware of Baer's predicament and he contacted Louis Mordell in England. Mordell sent Baer an invitation to go to Manchester which he gladly accepted. British academics had set up the Academic Assistance Council to assist refugees and funded it from their own salaries. Baer was one of the first to get financial assistance from the Council and spent two academic years 1933-35 at Manchester as an Honorary Research Fellow ([5] and [6]):- When the Baer family arrived in England, they stayed for the first three weeks with the Mordells, who were extremely kind and helpful to them. Marianne and Reinhold knew hardly a word of English, but they quickly learned to speak effortlessly and fluently, and were happy to be in Manchester. They met interesting non-mathematicians, something they always valued greatly wherever they lived. Among these were Leonard Palmer, then assistant lecturer in classics and later professor of comparative philology at Oxford, and the historian A J P Taylor. The Taylors had a cottage in the Peak District and the Baers often spent week-ends with them there, as well as longer periods hiking in the Lake District. After going to Oxford to meet Weyl, who was based at the Institute for Advanced Study in Princeton but spending 1934-35 in Oxford, Baer received an invitation to Princeton which he accepted and spent two years, 1935-37, as a member of the Institute for Advanced Study. This was a wonderful time for Baer, both from the exciting mathematics and from the rural setting in hills and woods that he so Baer had begun studying infinite abelian groups while at Manchester and he continued his study of this topic at Princeton; in fact he lectured for a term in Princeton on infinite abelian groups. He moved to North Carolina at Chapel Hill as an assistant professor in 1937 but when he was offered a position as Associate Professor of Mathematics at the University of Illinois at Urbana-Champaign in 1938 by Arthur B Coble he accepted the post. He was promoted to full professor at Urbana-Champaign in 1944 and, in the same year, both Baer and his wife became American citizens. However, the Baers did not find Illinois the ideal place to live since, as we have seen from their time in Germany, they loved the mountains. Now the United States is not short of mountains and the Rockies provided exactly the type of scenery that Baer loved. He made his first trip to Estes Park, Colorado in the Rockies in the summer of 1939 and returned there every year until 1950. Often other friends stayed with the Baers at Estes Park, including Richard Brauer, Hermann Weyl and Max Dehn. Baer had a very positive impact on the mathematical life at the University of Illinois at Urbana-Champaign [4]:- Baer's time in Illinois was very productive and led to much important work in both commutative and non-commutative group theory. He had no less than 20 Ph.D. students ... Baer also had a very positive effect on the development of the Mathematics Department: in particular he was responsible for Michio Suzuki coming to Illinois - a crucial event that led to the Department becoming a centre of research in finite simple group theory. However, Baer did not enjoy certain aspects, particularly teaching low level undergraduate mathematics of a standard that he considered "high school mathematics". He wanted to return to Germany as soon as the war had ended but, since he had not held a permanent position when he was dismissed, there was no post to which he could return. He was shortlisted for the professorship of mathematics at the University of Münster in 1946 but was not successful. He attempted to get back into the German mathematical scene through visiting professorships and lecture tours, the first being in 1950. We see how he felt from the letter he wrote to Wilhelm Süss in June 1951:- Today I would like to ask your advice, as you had offered it to us so generously on the occasion of our visit last year. As you know I have for quite a while been entitled to a sabbatical year, and i would like to spend this year in the intellectual realm of central Europe. There are many reasons: some sentimental and aesthetic, some intellectual and mathematical. And in order to squeeze the greatest benefit from this year, particularly concerning the latter reasons, I feel that I should once again fully integrate myself into the local academic community. The memories I have in this respect need to be refreshed, as the approaches and values here are quite different - even though I am sure that, after the catastrophes and enticements of the last 18 years, European intellectual life has been exposed to ample Americanisation. I cannot quite estimate how such a temporary inclusion into German academia can be organised, and this is where I would be grateful for your advice. Apart from the intellectual problems, there is also a material problem. For the duration of such a sabbatical year the university will only pay half of my salary (and my wife's additional income will disappear completely). The remaining amount may be considered quite sufficient in Europe, but I do have the running expenses here that absorb quite a proportion of my income on a regular basis, not to mention travel expenses and a certain necessary, not only desirable, degree of mobility in Europe. All of this will require careful planning, and there are some problems for which I cannot adequately envision solutions, and for this reason I am turning to you for advice. ... He continued to hold his permanent position in the University of Illinois until 1956 when he returned to Frankfurt am Main in Germany, to the Johann Wolfgang Goethe-Universität where Ruth Moufang held a senior position. Despite wanting to return to Germany, he was sad to leave the United States and he returned for research visits over the following years; he was a visiting professor at the University of Chicago in 1958, and at the University of California at Berkeley in 1963. These were years during which he travelled world-wide, lecturing in New Zealand, South Africa, Japan and at many universities throughout North America. He visited the University of Warwick, England, in 1966, 1973 and 1977. I [EFR] met him for the first time when he visited Warwick in 1966 and we met at several conferences over the following years. Baer retired from his professorship at Frankfurt in 1967 and, again showing his love for the mountains, he settled in Zurich. The school he built at Frankfurt is described by Otto Kegel, one of Baer's doctoral students [8]:- In order to stimulate his research team and to kindle the curiosity of his students, Baer would bring in visitors for two weeks and Colloquium speakers working on problems of interest to at least one of us. Especially the visitors who stayed a little longer were of great value to all of us, as they gave seminar lectures and were available for detailed discussions. Once or twice a year Baer would transfer his seminar to Oberwolfach for week-ends or, occasionally, for a whole week. This, of course, was a very good occasion for closer scientific and personal contact within the group; the general climate was set by Reinhold and Marianne Baer. In a way, they considered his students as "their children". He believed that the only real way of learning mathematics is by doing mathematics; so he tried to be an exemplary mathematician teaching to do mathematics, at all levels. He was an infectious teacher, hiding a kind and very generous heart behind irony and ( occasionally sharp) sarcasm. His mathematical work, some of which has been mentioned above, was wide ranging; topology, abelian groups and projective geometry. This last mentioned subject, which he was led to by his study of abelian groups, he thought of as being the lattice of all linear subspaces of a vector space. He then generalised this to consider a new type of geometry, namely the lattice of subgroups of an abelian group. In 1940 he introduced the concept of an injective module, then began studying group actions in geometry. He applied group theory to the study of projective planes and his work in this area led to the topic of combinatorics as we know it today. His algebraic formulation of geometry appeared in his paper A unified theory of projective spaces and finite abelian groups (1942). His 1952 book Linear algebra and projective geometry presented a completely new approach to projective geometry. He wanted:- ... to establish the essential structural identity of projective geometry and linear algebra. Friedrich Levi writes in a review:- Geometers of the older type may wonder about a book which neither mentions the favourite subjects of classical geometry nor uses any method of analysis, but nevertheless gives a deep insight into the background of geometry. The results are obtained by a skilful combination of general algebra, lattice theory, and abstract set-theory with methods of classical synthetic geometry. A great number of "remarks'' inserted into the text provide a welcome help to the reader who studies this very valuable and interesting work. Probably Baer's most important work, however, was in group theory; on the extension problem for groups, finiteness conditions, soluble and nilpotent groups. From 1950 onwards his work turned more towards finiteness conditions on groups and generalisations of soluble and nilpotent groups. Many concepts in this area were introduced by him, in particular the Baer radical of a group and Baer groups (groups in which every cyclic subgroup is subnormal). He published the monograph Gruppen mit abzählbaren Automorphismengruppen (1970) on group theory. In it he studied groups whose factor groups all have countable automorphism groups, and groups every factor group of which has every abelian subgroup of its automorphism group countable. I (EFR) heard him lecture at the University of Warwick in 1977. This was a sad occasion as by this time Baer knew that he was seriously ill (with stomach cancer). He described work which he felt he wanted to communicate but felt that he would not live long enough to be able to polish it to his high standards. An operation in 1978 was successful and Baer enjoyed a while longer creating the mathematics he loved and communicating it to others showing his excitement and joy in his subject. He enjoyed a final Oberwolfach meeting in May 1979 where he was:- ... a trim and fit figure, dressed in an open-neck white shirt, grey flannel trousers and tennis shoes. A happy smile on his face... . Among the honours given to Baer, we mention that he received honorary degrees from the University of Giessen in 1974, The University of Kiel in 1976, and the University of Birmingham in 1978. Article by: J J O'Connor and E F Robertson Click on this link to see a list of the Glossary entries for this page List of References (9 books/articles) A Poster of Reinhold Baer Mathematicians born in the same country Cross-references in MacTutor Previous (Chronologically) Next Main Index Previous (Alphabetically) Next Biographies index History Topics Societies, honours, etc. Famous curves Time lines Birthplace maps Chronology Search Form Glossary index Quotations index Poster index Mathematicians of the day Anniversaries for the year JOC/EFR © January 2014 School of Mathematics and Statistics Copyright information University of St Andrews, Scotland The URL of this page is:
{"url":"http://www-history.mcs.st-and.ac.uk/Biographies/Baer.html","timestamp":"2014-04-20T20:59:01Z","content_type":null,"content_length":"32287","record_id":"<urn:uuid:e881c0be-59f5-42b2-9646-b39930bd5481>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00225-ip-10-147-4-33.ec2.internal.warc.gz"}
Law of an application March 14th 2010, 08:40 AM #1 Law of an application I'm currently reading a paper, and I'm having difficulties in grasping the concept of a tree, and more precisely this : Let $\mathbb{T}~:~ \Omega \to\Omega$ be the identity application. Let $(p_k,k\geq 0)$ be a probability. There exists a probability $\mathbf{P}$ on $\Omega$ such that the law of $\mathbb{T}$ under $\mathbf{P}$ is the law of the tree (with reproduction distribution $p_k$) there are 3 things I don't understand : - why would one be interested in the identity application ? - how would one explain with words what the law of $\mathbb{T}$ is ? - what does "the law of T under P" mean ? Does it have something related to... I don't know, we call it in French "mesure image", which is $\mu_X$ such that $\mu_X(A)=\mu(X^{-1}(A))=\mu(X\in A)$ Thanks for any input... And if there's something unclear, I'll try to correct the thing up Hi, Moo! here are a few pieces of answer... The law of T under P is just the law of T when the probability space (here, $\Omega$) is endowed with the probability measure P. In other words, this is indeed the image measure of P by T. - why would one be interested in the identity application ? There are always two possible viewpoints in any probabilistic setting: either a fixed undefined very large probability space $(\Omega,P)$ on which we define several random variables, this is the usual case in introductory courses; or a space $\Omega$ with several distributions and the identity map as a unique random variable. The interest in the first case is simplicity and universality (it doesn't matter what base space we use, so why specify); it assumes however that we rely on sometimes not obvious existence The second case is useful when one wants to study the same random variable under various distributions. For instance, if we want to vary a parameter (like the parameter of a Bernoulli), we simply introduce a family of probabilities indexed by this parameter, and we say: "under $P_p$, ...." to say that we consider the value p of the parameter. This is a very convenient setting for Markov chains, where the parameter is the starting point. We have one random variable $(X_n)_{n\geq 0}$ which is the identity map on $E^{\mathbb{N}}$, and one measure $P_x$ for every site $x\in E$, which is the law of the Markov chain (with some given transition matrix) starting at $x$. This allows to give meaning to expressions like $P_x(X_2=y)=\sum_z P_x(X_1=z)P_z(X_1=y)$ (i.e. the Markov - how would one explain with words what the law of $\mathbb{T}$ is ? This is called a Galton-Watson tree. From one ancestor (the root of the tree), we have a random number $Z_1$ of children, each of which itself has random numbers of children, etc., where the numbers of children are all independent of each other. This is a genealogic tree where the numbers of children is random: k children (possibly 0) with probability $p_k$. A basic question is: does the family eventually get extinct? i.e. is the tree finite? In this case, $\Omega$ would be the set of trees (or a larger set), or an equivalent representation of trees. A convenient choice is to let $U=\cup_n \mathbb{N}^n$, the set of all finite sequences of integers, and $\Omega=\mathbb{N}^U$, the set of positive-integer sequences indexed by elements of $U$. Intuitively, an individual corresponds to a sequence $u=a_1a_2\cdots a_n\in U$ if it is obtained from the root as the $a_n$-th child of the $a_{n-1}$-th child of .... of the $a_1$-th child of the root. And the number of children of this individual is encoded in the index $n_u$ of the tree $T=(n_u)_{u\in U}\in \mathbb{N}^U$ (with $n_u$ arbitrary if $u$ is not connected to the root, thus there are more codings than trees). With these notations, we can define the law of $\mathbb{T}$ as $\mu^{\otimes U}$ where $\mu$ is the probability distribution $\mu(\{k\})=p_k$. Actually, what I just did is "prove" your statement, assuming that the existence of an infinite-product measure is trivial, which it is not... This is even probably the reason why this statement is outlined by the author. NB: if the author starts with such a statement, you can expect sharp rigor in the following! here are a few pieces of answer... These don't look like pieces, they just explain what I had been looking for... This is called a Galton-Watson tree. From one ancestor (the root of the tree), we have a random number $Z_1$ of children, each of which itself has random numbers of children, etc., where the numbers of children are all independent of each other. This is a genealogic tree where the numbers of children is random: k children (possibly 0) with probability $p_k$. A basic question is: does the family eventually get extinct? i.e. is the tree finite? In this case, $\Omega$ would be the set of trees (or a larger set), or an equivalent representation of trees. A convenient choice is to let $U=\cup_n \mathbb{N}^n$, the set of all finite sequences of integers, and $\Omega=\mathbb{N}^U$, the set of positive-integer sequences indexed by elements of $U$. Intuitively, an individual corresponds to a sequence $u=a_1a_2\cdots a_n\in U$ if it is obtained from the root as the $a_n$-th child of the $a_{n-1}$-th child of .... of the $a_1$-th child of the root. And the number of children of this individual is encoded in the index $n_u$ of the tree $T=(n_u)_{u\in U}\in \mathbb{N}^U$ (with $n_u$ arbitrary if $u$ is not connected to the root, thus there are more codings than trees). Well, you got it absolutely right... I should've mentioned it, it would've spared you from too much writing, sorry It's a paper about GW trees, and it's defined almost the same way as you did... (a tree is defined to always contain the root element) Actually, what I just did is "prove" your statement, assuming that the existence of an infinite-product measure is trivial, which it is not... This is even probably the reason why this statement is outlined by the author. NB: if the author starts with such a statement, you can expect sharp rigor in the following! He doesn't start with this statement, but I think it's quite a rigorous paper... That's not nice from him to have outlined the statement, it bugged me a lot The law of T under P is just the law of T when the probability space (here, $\Omega$) is endowed with the probability measure P. In other words, this is indeed the image measure of P by T. Okay, I feel better now that I know that There are always two possible viewpoints in any probabilistic setting: either a fixed undefined very large probability space $(\Omega,P)$ on which we define several random variables, this is the usual case in introductory courses; or a space $\Omega$ with several distributions and the identity map as a unique random variable. The interest in the first case is simplicity and universality (it doesn't matter what base space we use, so why specify); it assumes however that we rely on sometimes not obvious existence The second case is useful when one wants to study the same random variable under various distributions. For instance, if we want to vary a parameter (like the parameter of a Bernoulli), we simply introduce a family of probabilities indexed by this parameter, and we say: "under $P_p$, ...." to say that we consider the value p of the parameter. Oh yeah, actually we work with that kind of things in statistics, defining the probability space $(\mathbb{R}^k,\mathcal{B}_{\mathbb{R}^k},P_\theta) _{\{\theta\in\Theta\}}$ This is a very convenient setting for Markov chains, where the parameter is the starting point. We have one random variable $(X_n)_{n\geq 0}$ which is the identity map on $E^{\mathbb{N}}$, and one measure $P_x$ for every site $x\in E$, which is the law of the Markov chain (with some given transition matrix) starting at $x$. This allows to give meaning to expressions like $P_x(X_2=y)=\ sum_z P_x(X_1=z)P_z(X_1=y)$ (i.e. the Markov property). But in this example of the Markov property, there isn't a unique random variable, is there ? Hmm I think it's likely that there will be more questions, but not because your excellent explanations weren't sufficient, it's just that I may need to read this and to be explained this several times, in different ways =) Thanks, as always... There is, considering $(X_n)_{n\geq 0}$ as a unique random variable. Another common way to write the Markov property, without using the trick of the identity map (i.e. in the "first viewpoint" fashion, cf. previous post), is to take $X$ to be a Markov chain starting at $x$ and write $P(X_2=z)=\sum_y P(X_1=y)P(X_2=z|X_1=y)$. The problem is that you actually would have to restrict the sum to the values $y$ such that $P(X_1=y)>0$ in order to write the conditioning ; that complicates the notation for nothing. Feel free to ask further questions, about this or your paper! Oh yeah, so it's just a matter of definition The problem is that you actually would have to restrict the sum to the values y such that P(X_1=y)>0 in order to write the conditioning I recall already having this problem... lol Feel free to ask further questions, about this or your paper! Thanks ! And well, I need to look at this viewpoint more and more times to grasp it, but I understand the general idea. See ya ! March 14th 2010, 01:05 PM #2 MHF Contributor Aug 2008 Paris, France March 15th 2010, 12:11 PM #3 March 17th 2010, 11:26 AM #4 MHF Contributor Aug 2008 Paris, France March 17th 2010, 11:40 AM #5
{"url":"http://mathhelpforum.com/advanced-statistics/133745-law-application.html","timestamp":"2014-04-19T02:23:49Z","content_type":null,"content_length":"70528","record_id":"<urn:uuid:56c0a9d2-9ea5-4364-b7cc-c922bf4eeda8>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00316-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Forum Discussions Math Forum Ask Dr. Math Internet Newsletter Teacher Exchange Search All of the Math Forum: Views expressed in these public forums are not endorsed by Drexel University or The Math Forum. Topic: Proposed Schema for Double Induction Replies: 4 Last Post: Jul 18, 2013 12:00 PM Messages: [ Previous | Next ] Re: Proposed Schema for Double Induction Posted: Jul 18, 2013 12:00 PM & ALL(a):[a in N => [P(a) => P(a+1)]] => ALL(a):[a in N => P(a)] where P is a unary predicate & ALL(a):[a in N => [P(1,a) => P(1,a+1)]] & ALL(a):ALL(b):[a in N & b in N => [P(a,b) => P(a+1,b)]] => ALL(a):ALL(b):[a in N & b in N => P(a,b)] where P is a binary predicate Download my DC Proof 2.0 software at http://www.dcproof.com Date Subject Author 7/18/13 Proposed Schema for Double Induction Dan Christensen 7/18/13 Re: Proposed Schema for Double Induction Dan Christensen 7/18/13 Re: Proposed Schema for Double Induction Dan Christensen 7/18/13 Re: Proposed Schema for Double Induction Dan Christensen 7/18/13 Re: Proposed Schema for Double Induction Dan Christensen
{"url":"http://mathforum.org/kb/thread.jspa?threadID=2582530&messageID=9173351","timestamp":"2014-04-17T01:10:48Z","content_type":null,"content_length":"21151","record_id":"<urn:uuid:4b640d79-ac2e-4a20-ae1c-629c80872a82>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00359-ip-10-147-4-33.ec2.internal.warc.gz"}
Course Hero has millions of student submitted documents similar to the one below including study guides, practice problems, reference materials, practice exams, textbook help and tutor support. Find millions of documents on Course Hero - Study Guides, Lecture Notes, Reference Materials, Practice Exams and more. Course Hero has millions of course specific materials providing students with the best way to expand their education. Below is a small sample set of documents: Hacettepe Üniversitesi - ELE - 204 Hacettepe Üniversitesi - ELE - 204 Hacettepe Üniversitesi - ELE - 203 Hacettepe Üniversitesi - ELE - 203 Hacettepe Üniversitesi - ELE - 203 Hacettepe Üniversitesi - ELE - 203 Hacettepe Üniversitesi - ELE - 203 Hacettepe Üniversitesi - ELE - 203 American River - BIO - 101 Cross B Wildtype Name Sarah Miller Morgan Chambers Lexi Chelsea Rameez, Tina, Paul Sarah Miller Sarah Miller Lexi Morgan Chambers Chelsea Anna Myrle Morgan Chambers Lexi Newcomers Dylan Males 16 5 7 0 62 3 0 7 6 10 18 14 9 4 13 8 Females 11 1 3 7 33 3 2 5 CUNY Baruch - ECON - 4000 Department of Economics and Finance ECO 4000 (MW6) Statistical Analysis for Economics and FinanceSpring 2011Professor: Seyhan Erden Arkonac, PhD seyhan.arkonac@nyu.edu Office hours: Immediately after each class I answer short questions in class room, if CUNY Baruch - ECON - 4000 Department of Economics and Finance Final Exam PRACTICEECO 4000 Fall 2010Instructions1. Do not turn this page until so instructed. 2. The exam has 30 questions. Please put your answers in the back page of the exam. 3. You are permitted to use a simple CUNY Baruch - ECON - 4000 REVIEW FOR FINAL EXAM (ECO 4000)bySeyhan Erden Arkonac, PhD1The Population Linear Regression Model general notationYi = 0 + 1Xi + ui, i = 1, n X is the independent variable or regressor Y is the dependent variable 0 = intercept 1 = slope ui = the reg CUNY Baruch - ECON - 4000 ECO 4000, Statistical Analysis for Economics and Finance Fall 2010Lecture 1 Prof: Seyhan Arkonac, PhDWhat is Econometrics?Why do I need to take this course? Is this a hard course? Why am I here? Will I learn any thing useful here? Will I have to study CUNY Baruch - ECON - 4000 ECO 4000, Statistical Analysis for Economics and Finance Fall 2010Lecture 2 Prof: Seyhan Arkonac, PhDWhat is a Random Variable? Gender of the next newborn baby Number of times your computer will crash while you are writing a term paper Number of times CUNY Baruch - ECON - 4000 ECO 4000, Statistical Analysis for Economics and Finance Fall 2010Lecture 3 Prof: Seyhan Arkonac, PhD Last Lecture we talked about expected value, mean and variance. Lets remember them:Expected Value, Mean and Variance Let X be a random variable (say CUNY Baruch - ECON - 4000 Linear Regression with One RegressorECO 4000, Statistical Analysis for Economics and Finance Fall 2010Lecture 4 Prof: Seyhan Arkonac, PhD1Linear Regression with One RegressorLinear regression allows us to estimate, and make inferences about, populati CUNY Baruch - ECON - 4000 Linear Regression with One RegressorECO 4000, Statistical Analysis for Economics and Finance Fall 2010Lecture 5 Prof: Seyhan Arkonac, PhD1The Least Squares Assumptions(SW Section 4.4)What, in a precise sense, are the properties of the OLS estimator? CUNY Baruch - ECON - 4000 Introduction to Multiple Regression ECO 4000, Statistical Analysis for Economics and Finance Fall 2010Lecture 7 Prof: Seyhan Erden Arkonac, PhD1Where we stopped last time, and what will we do today? Regression when X is a binary (dummy) variable (i.e. CUNY Baruch - ECON - 4000 Multiple Regression I (cont) &amp; II ECO 4000, Statistical Analysis for Economics and Finance Fall 2010Lecture 8 Prof: Seyhan Erden Arkonac, PhD1Where we stopped last time, and what will we do today?Regression when X is a binary (dummy) variable (i.e. X= CUNY Baruch - ECON - 4000 Introduction to Multiple Regression ECO 4000, Statistical Analysis for Economics and Finance Fall 2010Lecture 9 Prof: Seyhan Erden Arkonac, PhD1The Sampling Distribution of the OLS Estimator (SW Section 6.6)Under the four Least Squares Assumptions, Th CUNY Baruch - ECON - 4000 Multiple Regression (cont) Multicollinearity, Hypothesis Testing in Multiple Regression ECO 4000, Statistical Analysis for Economics and Finance Fall 2010Lecture 10 Prof: Seyhan Arkonac, PhD1Last thing we did last time was;Multicollinearity: (a) perfe CUNY Baruch - ECON - 4000 Multiple Regression (cont) Confidence Intervals in Multiple Regression for ECO 4000, Statistical Analysis for Economics and Finance Fall 2010Lecture 11 Prof: Seyhan Arkonac, PhD1The F-statistic testing 1 and 2:2 t12 t2 2 t1 ,t2 t1t2 1 F= 2 2 1 t1 ,t2 CUNY Baruch - ECON - 4000 Nonlinear Regression Functions ECO 4000, Statistical Analysis for Economics and FinanceLecture 12 by Seyhan Erden Arkonac, PhD1Nonlinear Regression Functions(SW Chapter 8)Everything so far has been linear in the Xs But the linear approximation is not CUNY Baruch - ECON - 4000 Nonlinear Regression Functions ECO 4000, Statistical Analysis for Economics and FinanceLecture 13 by Prof: Seyhan Erden Arkonac, PhD1Interpreting the estimated regression function:(a) Plot the predicted values TestScore = 607.3 + 3.85Incomei 0.0423(In CUNY Baruch - ECON - 4000 Nonlinear Regression III (cont) Assessing Regression Studies I, ECO 4000Lecture 14 by Seyhan Erden Arkonac, PhD1(c) Interactions between two continuous variablesYi = 0 + 1X1i + 2X2i + ui X1, X2 are continuous As specified, the effect of X1 doesnt depe CUNY Baruch - ECON - 4000 Assessing Regression ECO 4000, Statistical Analysis for Economics and FinanceLecture 15 by Prof: Seyhan Erden Arkonac, PhD1Assessing Studies Based on Multiple Regression (SW Chapter 9)Lets step back and take a broader look at regression: Is there a sy CUNY Baruch - ECON - 4000 LAST NAME:_ FIRST NAME:_ECO 4000 MIDTERM EXAM March 21st 2011Questions 1-6:Use the following STATA outputs to answer questions 1 through 6Regression 1. reg ahe age, robust Linear regression Number of obs = 7986 F( 1, 7984) = 187.92 Prob &gt; F = 0.0000 CUNY Baruch - ECON - 4000 Practice Midterm Exam for ECO 4000MULTIPLE CHOICE. Choose the one alternative that best completes the statement or answers the question. 1) Analyzing the behavior of unemployment rates across U.S. states in March of 2002 is an example of using A) cross- CUNY Baruch - FIN - 3610 CHAPTER 5 INTRODUCTION TO VALUATION: THE TIME VALUE OF MONEYAnswers to Concepts Review and Critical Thinking Questions 1. 2. The four parts are the present value (PV), the future value (FV), the discount rate ( r), and the life of the investment (t). Com CUNY Baruch - FIN - 3610 CHAPTER 9 NET PRESENT VALUE AND OTHER INVESTMENT CRITERIAAnswers to Concepts Review and Critical Thinking Questions 1. A payback period less than the projects life means that the NPV is positive for a zero discount rate, but nothing more definitive can b CUNY Baruch - FIN - 3610 CHAPTER 4 LONG-TERM FINANCIAL PLANNING AND GROWTHAnswers to Concepts Review and Critical Thinking Questions 1. The reason is that, ultimately, sales are the driving force behind a business. A firms assets, employees, and, in fact, just about every aspect CUNY Baruch - FIN - 3610 CHAPTER 6 DISCOUNTED CASH FLOW VALUATIONAnswers to Concepts Review and Critical Thinking Questions 1. 2. 3. 4. The four pieces are the present value (PV), the periodic cash flow ( C), the discount rate (r), and the number of payments, or the life of the CUNY Baruch - FIN - 3610 CHAPTER 7 INTEREST RATES AND BOND VALUATIONAnswers to Concepts Review and Critical Thinking Questions 1. 2. 3. No. As interest rates fluctuate, the value of a Treasury security will fluctuate. Long-term Treasury securities have substantial interest rate CUNY Baruch - FIN - 3610 CHAPTER 10 MAKING CAPITAL INVESTMENT DECISIONSAnswers to Concepts Review and Critical Thinking Questions 1. In this context, an opportunity cost refers to the value of an asset or other input that will be used in a project. The relevant cost is what the LaGuardia CC - HUW - 166 9/22/10 Paragraph &lt;p&gt; Single line break &lt;br /&gt; Heading &lt;h1&gt;-&lt;h6&gt; Strong emphasis &lt;strong&gt;emphasis &lt;em&gt; Bold &lt;b&gt; I talicize &lt;i&gt; underline &lt;u&gt; anchor (link) &lt;a&gt; unordered list &lt;ul&gt; ordered list &lt;ol&gt; list i tem &lt;li&gt; &lt;img src=_ alt=stuar t /&gt; &lt;i&gt; = &lt;em&gt; &lt;b&gt; = Syracuse - ECS - 102 Exam 3 ECS 102 Dr. Baruch Fall 2009Name:problem points score 1 13 2 16 3 10 4 10 5 10 6 14 7 17 8 10 Total 10011. Write a function void subString( char early[ ], char later[ ], int startIndex, int endIndex) that copies the piece of early from position Syracuse - ECS - 102 Exam 3 ECS 102 Dr. Baruch Spring 2009Name:problem points score 1 10 2 8 3 8 4 10 5 10 6 11 7 7 8 9 9 9 10 4 11 14 Total 10011. For each of the following, what will be the output? (a) char mom[] = & quot;Martha&quot;; char dad[] = &quot;George&quot;; mom[1] = dad[0]; print Syracuse - ECS - 102 Exam 3 ECS 102 Dr. Baruch Spring 2010Name:problem points score 1 12 2 16 3 12 4 12 5 12 6 14 7 14 8 8 Total 10011. Write a function printSomeStrings( char list[ ][80], int sizeOfList, int startindex, int stopindex); that will print the strings in the Syracuse - EEE - 212 MAT 397 - Calculus III Spring 2011 All sections Course Description: MAT 397 is the third course in a three semester sequence in Calculus. This sequence is designed for Mathematics, Science and Engineering majors and for those students in other majors who Syracuse - EEE - 212 0.0.1Hand-in Assignment 1 Solutions (Math 20300 DD, PP, ST) Your Name: Foster, Antony Your Student ID: 0000 (only last 4 digits) Your Instructor: Prof. A. Foster Due Date: Friday, February 8, 2008.SECTION 13.1 (pages 829834): DO Exercises 1, 3, 4, 5, 1 Syracuse - EEE - 212 ELE232: EE Fundamentals IPage 1HW #1 (Due: JAN-31-2011)NOTE: Please remember to STAPLE, use only ONE SHEET per problem, tear off any spiral notebook EDGES, BOX or UNDERLINE your final answer.90Problem #1: For the circuit below: a) Determine the time Syracuse - EEE - 212 Due: 02/07/2011ELE231 HOMEWORK # 2Fall 2011NOTE: Please remember to STAPLE, solve ONE PROBLEM per a single sheet of paper, make sure to tear off any spiral notebook EDGES, and BOX or UNDERLINE your final answer(s). Points will be deducted for incorrect Syracuse - EEE - 212 How t o bui l d a carI ns t r uc t i ons: f unc t i on d ef i ni t i on123d a r ow of car s spot 1;f unc t i on c al l sPr ogr am t o Bui l cfw_ b ui l d a car ; d r i ve i t t o p ar k i t ; b ui l d a car ; d r i ve i t t o p ar k i t ; b ui l d Syracuse - EEE - 212 function definition How to build a color house with number bedrooms. Build number bedrooms. Buy color paint. Paint the outside of the house.input parametersfunction definition input parametersHow to build a color house with number bedrooms. Build numbe Syracuse - EEE - 212 int abs(int); double fabs(double);Both compute the absolute value. abs is used for ints and fabs is used for doubles. abs(-3) fabs(-4.5) 3 4.5double ceil(double); double floor(double);ceil(3.2)=4.0 3.2 floor(3.2)=3.0ceil floor ceil(-3.2)=-3.0 -3.2 flo Syracuse - EEE - 212 Grading Scheme for Homework 1 Name: Section: 1 2 Grader: Introduction Submit schedule, 2 versions Computes grades, showing work Uploaded a TEXT file Self Check at end of section 1.3 .2 each Self Check at end of section 1.4 .2 each Review Question at end o Syracuse - EEE - 212 student XHW:75 (each out of 10 pts)LAB: 252EXAM 1: 89 (out of 100)EXAM 2: 85 (out of 100)EXAM 3: 86 (out of 100)FINAL PROJECT: 91 (out of 100)(10+9+9+9+10+10+10+8)/80*20%+ Syracuse - EEE - 212 ECS 102 Lab 3 Name(s)_ Section 1 (3:45)Spring 2011 Section 2 (12:45)(Lab 2 is online, if you need to recall how to do something you learned last time.) See the link for using Xcode if you are doing this on a Mac. I. Input, output, and format strings. 1. Syracuse - EEE - 212 ECS 102 Lab 7 Spring 2011 Name _ Section 1(3:45) 2(12:45) Anywhere that I ask you to observe what happens, write down your observations in the space provided. Before entering code, finish reading the instructions for that part! I. Experimenting with funct Syracuse - EEE - 212 ECS 102 Lab 8 Spring 2011 Name _ Sec 1(3:45) 2 (12:45) Anywhere that I ask you to observe what happens, write down your observations in the space provided. I. Here is a program that will introduce you to the if and the if . else statements. Run it with a Syracuse - EEE - 212 TimeMondayTuesday Wednesday Thursday Friday 08:00 09:00 275 275 10:00 232 232 232 232 11:00 232 232 232 232 12:00 261 261 397 261 13:00 397 397 397 261 14:00 397 211 397 211 397 15:00 16:00 102 102 Rutgers - ECON - 102 Book NotesTuesday, January 11, 2011 4:02 PM1. a. principle 1: People Face Trade -offs Because of the higher costs, these firms end up earning sm aller profits, paying lower wages, charging h igher prices, or som e com bination of these threeAnother tr Rutgers - ECON - 102 Class notesThursday, January 20, 2011 12:30 PMAllocating resources: peoples tastes in those resources Does it provide. Allocation of resources (value based on what they are willing to pay, people who value them the most will pay the most) Efficiency- pe Rutgers - ECON - 102 Book NotesTuesday, January 11, 2011 4:16 PM1. The Economist as a Scientist They devise theories, collect data, and then analyze these data in an attempt to verify or refute their theories The essence of science, however, is the scientific methodthe disp Rutgers - ECON - 102 Class NotesThursday, January 27, 2011 12:22 PMA. The Economist as Scientist 1) Economists play two roles: a. Scientists: try to explain the world b. Policy advisors: try to improve it 2) In the first, economists employ the scientific method: the dispass Rutgers - ECON - 102 Chapter 1: Ten Principles of EconomicsTuesday, January 04, 2011 6:33 PM business cycleeconomics efficiency equalityFluctuations in economic activity, such as employment and production. The study of how society manages its scarce resources. The proper Rutgers - ECON - 102 Chapter 2: Thinking Like an EconomistTuesday, January 11, 2011 4:16 PM circular -flow diagramA v isual model of the economy that shows how dollars flow through markets among h ouseholds and firms.macroeconomics The study of economy -wide phenomena, Rutgers - ECON - 102 Class NotesMonday, January 31, 2011 12:31 PMI. Interdependence A. Evert day you rely on many people from around the world, most of whom you've never met, to provide you with the goods and services you enjoy B. One of the Ten Principles from Chapter 1: T Rutgers - ECON - 102 Chapter 3Monday, January 31, 2011 12:31 PM a bsolute advantageThe ability to produce a good using fewer inputs than another producer.comparative advantage exportsThe ability to produce a good at a lower opportunity cost than another producer. Goods Rutgers - ECON - 102 Chapter 4Thursday, February 03, 2011 12:20 PM Competitive market A market with many buyers and sellers trading identical products so that each buyer and seller is a price taker.complementsTwo goods for which an increase in the price of one leads to a Rutgers - ECON - 102 Chapter 12 Taxes and Efficiency One tax system is more efficient than another if it raises the same amount of revenue at a smaller cost to tax payers The costs to taxpayers include; o o o The tax payment itself Deadweight losses Administrative burdenInco
{"url":"http://www.coursehero.com/file/6241383/ELE204-MT22006-7/","timestamp":"2014-04-16T04:33:59Z","content_type":null,"content_length":"48356","record_id":"<urn:uuid:e57c78d5-4ea6-4c83-b0a9-70d430b6f201>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00653-ip-10-147-4-33.ec2.internal.warc.gz"}
Graph of a function February 22nd 2009, 12:04 AM #1 Junior Member Feb 2009 List all the asymptotes of the graph of the function and the approximate coordinates of each local extremum. I've had a hard time trying to solve this graph , I used my graphing calculator, but it didn't help me. Acording to my book, the negative x axis is an asymptote. Can someone explain me how to interpret and graph the function properly? Thanks in advance! $f(x) = x\:2^x = x\:e^{2\ln x}$ $f'(x) = e^{2\ln x} (1 + x \ln 2)$ Therefore f is decreasing from $-\infty$ to $-\frac{1}{\ln 2}$ and increasing from $-\frac{1}{\ln 2}$ to $+\infty$ The minimum has coordinates $(-\frac{1}{\ln 2} ; -\frac{1}{e\:\ln 2})$ The limit in $-\infty$ is 0 which shows that x-axis is an asymptot The limit in $+\infty$ is $+\infty$ February 22nd 2009, 12:57 AM #2 MHF Contributor Nov 2008
{"url":"http://mathhelpforum.com/calculus/74961-graph-function.html","timestamp":"2014-04-19T05:36:04Z","content_type":null,"content_length":"33965","record_id":"<urn:uuid:c0a5833e-332a-486b-bd40-37e51142577c>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00548-ip-10-147-4-33.ec2.internal.warc.gz"}
Cauchy's Integral theorem March 18th 2010, 05:47 AM Cauchy's Integral theorem Quick question im trying to calculate the integral around the closed curve C of z(bar) dz where z(bar) is the complex conjugate of z where C is the unit circle. Using the subsitution z = e^it i calculated the integral to be 0. => z(bar) = e^-it However after finding the question in a maths book the answer is given to be 2pi(i). Can any of ye tell me which solution is correct March 18th 2010, 06:01 AM $\mathop\oint\limits_{z=e^{it}}\overline{z}dz=\int_ 0^{2\pi}e^{-it}ie^{it}=2\pi i$
{"url":"http://mathhelpforum.com/calculus/134435-cauchys-integral-theorem-print.html","timestamp":"2014-04-20T20:18:33Z","content_type":null,"content_length":"4025","record_id":"<urn:uuid:19f685cb-1439-4f47-b499-3ccfeb1345d6>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00318-ip-10-147-4-33.ec2.internal.warc.gz"}
The Mathematics of Marriage For Buying Options, Start Here Hardcover | $50.00 Short | £34.95 | ISBN: 9780262072267 | 500 pp. | 6 x 9 in | December 2002 Paperback | $28.00 Text | £19.95 | ISBN: 9780262572309 | 500 pp. | 6 x 9 in | January 2005 Essential Info The Mathematics of Marriage Divorce rates are at an all-time high. But without a theoretical understanding of the processes related to marital stability and dissolution, it is difficult to design and evaluate new marriage interventions. The Mathematics of Marriage provides the foundation for a scientific theory of marital relations. The book does not rely on metaphors, but develops and applies a mathematical model using difference equations. The work is the fulfillment of the goal to build a mathematical framework for the general system theory of families first suggested by Ludwig Von Bertalanffy in the 1960s. The book also presents a complete introduction to the mathematics involved in theory building and testing, and details the development of experiments and models. In one "marriage experiment," for example, the authors explored the effects of lowering or raising a couple's heart rates. Armed with their mathematical model, they were able to do real experiments to determine which processes were affected by their interventions. Applying ideas such as phase space, null clines, influence functions, inertia, and uninfluenced and influenced stable steady states (attractors), the authors show how other researchers can use the methods to weigh their own data with positive and negative weights. While the focus is on modeling marriage, the techniques can be applied to other types of psychological phenomena as well. About the Authors John M. Gottman is Professor of Psychology at the University of Washington. James D. Murray is Professor Emeritus of Applied Mathematics at the University of Washington. Catherine Swanson is a software engineer at the University of Washington. Rebecca Tyson is Research Scientist at the University of Arizona. Kristin R. Swanson is Senior Fellow in Pathology and Applied Mathematics at the University of Washington. "... neatly presents marriage as a process both mathematical and unpredictable, both stable and prone to catastrophe." , Jordan Ellenberg, Slate "The Mathematics of Marriage is a splendid, important, and extremely useful book. Gottman and colleagues set a new standard for psychological explanation with their exquisite conversation among theory, models, data, and clinical intervention. They also provide the most clear and accessible introduction to the mathematics I have seen. This work is compelling evidence of the power of nonlinear dynamic models for understanding complex psychological phenomena. It will also change forever the way you look at marriage." —Esther Thelen, Department of Psychology, Indiana University, Co-editor of A Dynamic Systems Approach to the Development of Cognition and Action "Dynamic systems theory is infiltrating psychology in a variety of ways, increasing the sensitivity, realism, and scope of psychological models and methods. But I know of no other application that covers so much ground, from theory-building and modeling to methodology and measurement, and finally to clinical interventions that actually work. Gottman's determination to heal marriages fuels a rigorous scientific enterprise, based on a sophisticated understanding of complex systems and the mathematics for decoding them." —Marc D. Lewis, Professor, University of Toronto, Co-editor of Emotion, Development, and Self-Organization: Dynamic Systems Approaches to Emotional Development
{"url":"https://mitpress.mit.edu/books/mathematics-marriage","timestamp":"2014-04-20T21:59:58Z","content_type":null,"content_length":"45815","record_id":"<urn:uuid:3210407c-cd00-48d2-81bd-5e0ce03d10a0>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00384-ip-10-147-4-33.ec2.internal.warc.gz"}
Got Homework? Connect with other students for help. It's a free community. • across MIT Grad Student Online now • laura* Helped 1,000 students Online now • Hero College Math Guru Online now Here's the question you clicked on: A triangle has side lengths of 12 cm, 15 cm, and 20 cm. Classify it as acute, obtuse, or right. (1 point) acute obtuse right There is not enough information. • one year ago • one year ago Your question is ready. Sign up for free to start getting answers. is replying to Can someone tell me what button the professor is hitting... • Teamwork 19 Teammate • Problem Solving 19 Hero • Engagement 19 Mad Hatter • You have blocked this person. • ✔ You're a fan Checking fan status... Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy. This is the testimonial you wrote. You haven't written a testimonial for Owlfred.
{"url":"http://openstudy.com/updates/511559fce4b09e16c5c7fcc3","timestamp":"2014-04-17T01:39:19Z","content_type":null,"content_length":"72704","record_id":"<urn:uuid:2ffe7e96-dca7-4d41-b034-971036edd2c6>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00429-ip-10-147-4-33.ec2.internal.warc.gz"}