Comparing ratios to the Golden Ratio
June 8th 2013, 03:23 AM #1
Jun 2013
Comparing ratios to the Golden Ratio
Hi all,
I'm currently trying to write an equation or piece of logic which will allow me to test ratios against the Golden Ratio to find the closest match. The ratios I need to test range from 1:255 to
255:1 (to several decimal places) and I want to convert these ratios into a decimal (z) between 0 & 1, such that 0 is a perfect match (1:1.618) and 1 is the furthest possible (255:1). All the z
values should be positive, such that a ratio of 10:1 will return a result similar to 1:11.
The application of this is in testing whether the brightness of two parts of an image conform to the Golden Ratio. In practice most of the ratios will be much closer, say between 5:1 and 1:5, so
it might suit a log(10) solution?
So, where x = left side of ratio and y = right side, how do I define z as this number between 0 & 1?
This is keeping me up at night!
Also mods, please move this thread if I'm in the wrong place.
Re: Comparing ratios to the Golden Ratio
Hey Pietbot.
One piece of advice I have regarding this is to use the definition of the ratio (in terms of square root of 5) and find a rational approximation to the square root factor.
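To make that concrete (an illustrative aside, not part of the original thread): the golden ratio φ = (1 + √5)/2 can be approximated rationally by ratios of consecutive Fibonacci numbers, which gives the square-root factor a rational approximation without ever computing √5 directly. A quick Python sketch:

```python
def fib_ratio(n):
    """Ratio of consecutive Fibonacci numbers after n steps.

    These ratios F(k+1)/F(k) converge to the golden ratio
    (1 + sqrt(5)) / 2 ~ 1.6180339887, giving a rational
    approximation without computing sqrt(5) directly.
    """
    a, b = 1, 1
    for _ in range(n):
        a, b = b, a + b
    return b / a
```

For instance, fib_ratio(10) already agrees with φ to three decimal places.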
Re: Comparing ratios to the Golden Ratio
Thanks chiro that's an interesting idea and I guess would help in terms of apportioning more of the scale to the lower ratios I expect to get.
Anyone have any ideas about the actual calculation needed here?
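One possible construction (an illustrative sketch, not a definitive answer to the thread): fold the ratio so that orientation does not matter, measure its distance from φ in log space, and normalise by the largest possible distance, which occurs at 255:1.

```python
import math

PHI = (1 + math.sqrt(5)) / 2  # the golden ratio, ~1.618

def golden_distance(x, y):
    """Map the ratio x:y to z in [0, 1].

    z = 0 when the ratio matches the golden ratio, z = 1 at the
    extreme 255:1. Folding with max() makes 10:1 score close to
    1:11, as requested, and log space compresses the large ratios.
    """
    r = max(x / y, y / x)          # fold: 10:1 and 1:10 treated alike
    d = abs(math.log(r / PHI))     # log-distance from the golden ratio
    return min(d / math.log(255 / PHI), 1.0)
```

Here golden_distance(255, 1) returns 1.0, a perfect golden rectangle returns 0, and golden_distance(10, 1) and golden_distance(1, 11) come out within about 0.02 of each other.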
June 8th 2013, 03:49 AM #2
MHF Contributor
Sep 2012
June 8th 2013, 04:24 AM #3
Jun 2013
Volume of a Cylinder - The Basics and Examples
If we have a cylinder with radius r and height h, the volume V of the cylinder is:
V = πr^2h
where π is a number that is approximately equal to 3.14.
The math video below will give more explanation about this formula. Also, we will see some examples on how to use it.
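The formula is also easy to check numerically; a short illustrative sketch (the function name is our own):

```python
import math

def cylinder_volume(r, h):
    """Volume of a cylinder: V = pi * r^2 * h."""
    return math.pi * r ** 2 * h

# A cylinder with radius 3 and height 5 has volume pi * 9 * 5,
# roughly 141.37 cubic units.
```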
Celebrating Pi Day
A mathematical formula discovered a decade ago in part by David H. Bailey (above), the Chief Technologist of the Computational Research Department at the Lawrence Berkeley National Laboratory, was
the basis for researchers to find the sixty-trillionth binary digit of Pi-squared. | Photo Courtesy of Lawrence Berkeley National Lab
You might recognize the date March 14 (in the format 3/14) as a number from classes long forgotten. But even for the not-so-math-conscious, 3.14(...) is a number universally recognized – and
celebrated on this day of Pi (and pie).
As a quick refresher, you would multiply Pi by the diameter of any circle to get the circumference. But as vital as Pi might be to architecture and engineering, it's really one of the most mysterious numbers in mathematics.
Let’s take a look at Pi as a number: It’s irrational, which means it can’t be expressed as a simple fraction. It’s transcendental, which means it is not a root of a non-zero polynomial equation with
rational coefficients.
Pi's decimal expansion is also infinite, meaning it goes on forever, which, along with being a homophone of the delicious baked good, may lend itself to its popularity. As of 2012, scientists have figured out Pi
to the 60 trillionth digit with supercomputers. To put this into perspective, a value of Pi to 40 digits would be more than enough to compute the circumference of the Milky Way galaxy to an error
less than the size of a proton.
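The refresher above (circumference = Pi × diameter) also shows why a handful of digits suffices in practice; a small illustrative sketch of our own:

```python
import math

def circumference(diameter, pi=math.pi):
    """Circumference of a circle: pi times the diameter."""
    return pi * diameter

# For a circle 1000 units across, truncating pi to 9 digits
# shifts the answer by only a few millionths of a unit.
c_full = circumference(1000.0)
c_short = circumference(1000.0, pi=3.14159265)
```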
The importance of Pi has long been known. Ancient Egyptians used this number in their design of the pyramids. Ancient scholars in Jerusalem, India, Babylon, Greece and China used these proportions in
their studies of architecture and symbols.
More recently, Pi has been used in machining parts, broadcasting radio signals, simulating load conditions, and even testing supercomputers. The digits of Pi are used to test the integrity of
computer hardware and software. Researchers check the computations of Pi by a new computer against the known digits to ensure new machinery is working appropriately.
Still it seems humanity may never have anything but approximations of Pi. In the meantime, let us celebrate Pi Day with talk of the irrational number and its baked homophone.
Results 1 - 10 of 31
- In 42nd FOCS , 2001
"... The simulation paradigm is central to cryptography. A simulator is an algorithm that tries to simulate the interaction of the adversary with an honest party, without knowing the private input of
this honest party. Almost all known simulators use the adversary’s algorithm as a black-box. We present t ..."
Cited by 214 (13 self)
The simulation paradigm is central to cryptography. A simulator is an algorithm that tries to simulate the interaction of the adversary with an honest party, without knowing the private input of this
honest party. Almost all known simulators use the adversary’s algorithm as a black-box. We present the first constructions of non-black-box simulators. Using these new non-black-box techniques we obtain several results that were previously proven to be impossible to obtain using black-box simulators. Specifically, assuming the existence of collision-resistant hash functions, we construct a new zero-knowledge argument system for NP that satisfies the following properties: 1. This system has a constant number of rounds with negligible soundness error. 2. It remains zero knowledge even
when composed concurrently n times, where n is the security parameter. Simultaneously obtaining 1 and 2 has been recently proven to be impossible to achieve using black-box simulators. 3. It is an
Arthur-Merlin (public coins) protocol. Simultaneously obtaining 1 and 3 was known to be impossible to achieve with a black-box simulator. 4. It has a simulator that runs in strict polynomial time,
rather than in expected polynomial time. All previously known constant-round, negligible-error zero-knowledge arguments utilized expected polynomial-time simulators.
- Journal of Cryptology , 1995
"... Constant-round zero-knowledge proof systems for every language in NP are presented, assuming the existence of a collection of claw-free functions. In particular, it follows that such proof
systems exist assuming the intractability of either the Discrete Logarithm Problem or the Factoring Problem for ..."
Cited by 157 (8 self)
Constant-round zero-knowledge proof systems for every language in NP are presented, assuming the existence of a collection of claw-free functions. In particular, it follows that such proof systems
exist assuming the intractability of either the Discrete Logarithm Problem or the Factoring Problem for Blum Integers.
- Journal of Cryptology , 2001
"... Abstract. In this paper we show that any two-party functionality can be securely computed in a constant number of rounds, where security is obtained against malicious adversaries that may
arbitrarily deviate from the protocol specification. This is in contrast to Yao’s constant-round protocol that e ..."
Cited by 76 (14 self)
Abstract. In this paper we show that any two-party functionality can be securely computed in a constant number of rounds, where security is obtained against malicious adversaries that may arbitrarily
deviate from the protocol specification. This is in contrast to Yao’s constant-round protocol that ensures security only in the face of semi-honest adversaries, and to its malicious adversary version
that requires a polynomial number of rounds. In order to obtain our result, we present a constant-round protocol for secure coin-tossing of polynomially many coins (in parallel). We then show how
this protocol can be used in conjunction with other existing constructions in order to obtain a constant-round protocol for securely computing any two-party functionality. On the subject of
coin-tossing, we also present a constant-round perfect coin-tossing protocol, where by “perfect ” we mean that the resulting coins are guaranteed to be statistically close to uniform (and not just
pseudorandom). 1
, 2001
"... We present session-key generation protocols in a model where the legitimate parties share only a human-memorizable password. The security guarantee holds with respect to probabilistic
polynomial-time adversaries that control the communication channel (between the parties), and may omit, insert and ..."
Cited by 75 (7 self)
We present session-key generation protocols in a model where the legitimate parties share only a human-memorizable password. The security guarantee holds with respect to probabilistic polynomial-time
adversaries that control the communication channel (between the parties), and may omit, insert and modify messages at their choice. Loosely speaking, the effect of such an adversary that attacks an
execution of our protocol is comparable to an attack in which an adversary is only allowed to make a constant number of queries of the form “is w the password of Party A”. We stress that the result
holds also in case the passwords are selected at random from a small dictionary so that it is feasible (for the adversary) to scan the entire directory. We note that prior to our result, it was not
clear whether or not such protocols were attainable without the use of random oracles or additional setup assumptions.
, 1993
"... We provide a treatment of encryption and zero-knowledge in terms of uniform complexity measures. This treatment is appropriate for cryptographic settings modeled by probabilistic polynomial-time
machines. Our uniform treatment allows us to construct secure encryption schemes and zero-knowledge proof s ..."
Cited by 73 (10 self)
We provide a treatment of encryption and zero-knowledge in terms of uniform complexity measures. This treatment is appropriate for cryptographic settings modeled by probabilistic polynomial-time
machines. Our uniform treatment allows us to construct secure encryption schemes and zero-knowledge proof systems (for all of NP) using only uniform complexity assumptions. We show that uniform variants
of the two definitions of security, presented in the pioneering work of Goldwasser and Micali, are in fact equivalent. Such a result was known before only for the non-uniform formalization.
- In 32nd STOC , 1999
"... We introduce the notion of Resettable Zero-Knowledge (rZK), a new security measure for cryptographic protocols which strengthens the classical notion of zero-knowledge. In essence, an rZK
protocol is one that remains zero knowledge even if an adversary can interact with the prover many times, eac ..."
Cited by 71 (7 self)
We introduce the notion of Resettable Zero-Knowledge (rZK), a new security measure for cryptographic protocols which strengthens the classical notion of zero-knowledge. In essence, an rZK protocol is
one that remains zero knowledge even if an adversary can interact with the prover many times, each time resetting the prover to its initial state and forcing him to use the same random tape.
- In 43rd FOCS , 2002
"... We construct the first constant-round non-malleable commitment scheme and the first constant-round non-malleable zero-knowledge argument system, as defined by Dolev, Dwork and Naor. Previous
constructions either used a non-constant number of rounds, or were only secure under stronger setup assumption ..."
Cited by 70 (4 self)
We construct the first constant-round non-malleable commitment scheme and the first constant-round non-malleable zero-knowledge argument system, as defined by Dolev, Dwork and Naor. Previous
constructions either used a non-constant number of rounds, or were only secure under stronger setup assumptions. An example of such an assumption is the shared random string model where we assume all
parties have access to a reference string that was chosen uniformly at random by a trusted dealer. We obtain these results by defining an adequate notion of non-malleable coin-tossing, and presenting
a constant-round protocol that satisfies it. This protocol allows us to transform protocols that are non-malleable in (a modified notion of) the shared random string model into protocols that are
non-malleable in the plain model (without any trusted dealer or setup assumptions). Observing that known constructions of non-interactive non-malleable zero-knowledge argument systems in the shared
random string model are in fact non-malleable in the modified model, and combining them with our coin-tossing protocol we obtain the results mentioned above. The techniques we use are different from
those used in previous constructions of non-malleable protocols. In particular our protocol uses diagonalization and a non-black-box proof of security (in a sense similar to Barak’s zero-knowledge
- STOC 2003 , 2003
"... ..."
, 2004
"... The notion of efficient computation is usually identified in cryptography and complexity with (strict) probabilistic polynomial time. However, until recently, in order to obtain constant-round
Cited by 43 (8 self)
The notion of efficient computation is usually identified in cryptography and complexity with (strict) probabilistic polynomial time. However, until recently, in order to obtain constant-round
- In CRYPTO 2004 , 2004
"... We consider the central cryptographic task of secure two-party computation: two parties wish to compute some function of their private inputs (each receiving possibly different outputs) where
security should hold with respect to arbitrarily-malicious behavior of either of the participants. Despit ..."
Cited by 33 (4 self)
We consider the central cryptographic task of secure two-party computation: two parties wish to compute some function of their private inputs (each receiving possibly different outputs) where security should hold with respect to arbitrarily-malicious behavior of either of the participants. Despite extensive research in this area, the exact round-complexity of this fundamental problem (i.e., the
number of rounds required to compute an arbitrary poly-time functionality) was not previously known.
Godfrey Harold Hardy
G. H. Hardy attended Cranleigh school up to the age of 12 with great success, but he did not appear to have the passion for mathematics that many mathematicians experience when young. He won a
scholarship to Winchester College in 1889, entering the College the following year. Like all public schools it was a rough place for a frail, shy boy like Hardy. While at Winchester, Hardy won an
open scholarship to Trinity College Cambridge, which he entered in 1896. He was turned on to mathematics by A. E. H. Love. Hardy was elected a fellow of Trinity in 1900 and, the next year, he was awarded a Smith's prize.
Hardy was a very private man. He never did marry, and he was always known as Hardy except to one or two close friends who called him Harold.
In the next 10 years, he wrote many papers on the convergence of series and integrals and allied topics. Although this work established his reputation as an analyst, his greatest service to
mathematics in this early period was A course of pure mathematics, published in 1908. This work was the first rigorous English exposition of number, function, limit, and so on, adapted to the
undergraduate, and it transformed university teaching.
A major change in Hardy's work came about in 1911 when he began his collaboration with Littlewood, which was to last 35 years. His long collaboration with Littlewood produced mathematics of the
highest quality. It was a collaboration in which Hardy acknowledged Littlewood's greater technical mathematical skills, but at the same time Hardy brought great talents of mathematical insight and a
great ability to write their work up in papers with great clarity. In early 1913, he received Ramanujan's first letter from India, which was to start his second major collaboration. Hardy instantly
spotted Ramanujan's genius from a manuscript sent to him by Ramanujan. Hardy brought Ramanujan to Cambridge and they wrote 5 remarkable papers together.
During World War I, Hardy was unhappy at Cambridge, and he took the opportunity to leave in 1919 when he was appointed as Savilian professor of geometry at Oxford. These are the years when he
produced his best mathematics in the collaboration with Littlewood. Despite having been unhappy at Cambridge, Hardy returned to the Sadleirian chair there in 1931 when Hobson retired, because he
still considered Cambridge the center of English mathematics, and the Sadleirian chair was the foremost mathematics chair in England.
Hardy's interests covered many topics of pure mathematics - Diophantine analysis, summation of divergent series, Fourier series, the Riemann zeta function, and the distribution of primes. He wrote
joint papers with Titchmarsh, Ingham, Landau, Pólya, Wright, Rogosinski and Riesz.
Hardy was a pure mathematician who hoped his mathematics could never be applied. However in 1908, near the beginning of his career, he gave a law describing how the proportions of dominant and
recessive genetic traits would be propagated in a large population. Hardy considered it unimportant but it has proved of major importance in blood group distribution.
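That law, now known as the Hardy-Weinberg principle, is just the expansion of (p + q)² = 1 for allele frequencies p and q = 1 − p; a small illustrative sketch (names are ours):

```python
def hardy_weinberg(p):
    """Genotype frequencies for allele frequencies p and q = 1 - p.

    Hardy's 1908 observation: under random mating, the proportions
    p^2 (AA), 2pq (Aa) and q^2 (aa) remain constant from one
    generation to the next.
    """
    q = 1 - p
    return p * p, 2 * p * q, q * q
```

With p = 0.3 this gives 9% dominant homozygotes, 42% heterozygotes and 49% recessive homozygotes, summing to 1 as the expansion requires.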
Hardy was known for his eccentricities. He could not endure having his photograph taken and only 5 snapshots are known to exist. He also hated mirrors and his first action on entering any hotel room
was to cover any mirror with a towel. He always played an amusing game of trying to fool God, which is also rather strange since he claimed all his life not to believe in God. For example, during a
trip to Denmark he sent back a postcard claiming that he had proved the Riemann hypothesis. He reasoned that God would not allow the boat to sink on the return journey and give him the same fame that
Fermat had achieved with his "last theorem".
Hardy's book A Mathematician's Apology was written in 1940. It is one of the most vivid descriptions of how a mathematician thinks and of the pleasure of mathematics.
Hardy received many honours for his work. He was elected a Fellow of the Royal Society in 1910. He received the Royal Medal of the Society in 1920, the Sylvester Medal of the Society in 1940, and the
Copley Medal of the Royal Society in 1947. He was president of the London Mathematical Society from 1926 to 1928, and again from 1939 to 1941. He received the De Morgan Medal of the Society in 1929.
Sum and product rule of measures
So P[i] are sets??
Then how did you define Sum P[i] and Prod P[i]??
OK. I wanted to leave even that part open in case there are particular definitions of Sum and Prod for which the measure distributes inside the Sum or the Prod. But initially my first guess is that
Sum is defined as traditional addition of numbers, and Prod is defined as traditional multiplication of numbers, and the P_i are sets.
For example, as I understand measure theory, in order that m(P1 union P2)=m(P1)+m(P2), there must be the restriction that P1 intersect P2 = empty set. In other words, P1 and P2 must be disjoint. In
all the books I've looked at, this seems to be given as an axiom that's not proven. Yet, I wonder if there is a similar or perhaps dual requirement for Prod?
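The disjointness requirement is easy to see with a toy example; the sketch below (our own illustration, not from the thread) uses the counting measure m(S) = |S|, for which additivity holds exactly when the sets are disjoint:

```python
def m(s):
    """Counting measure: the number of elements in a finite set."""
    return len(s)

P1 = {1, 2, 3}
P2 = {4, 5}            # disjoint from P1
assert m(P1 | P2) == m(P1) + m(P2)            # additivity holds

Q = {3, 4}             # overlaps P1, so plain additivity fails:
assert m(P1 | Q) == m(P1) + m(Q) - m(P1 & Q)  # inclusion-exclusion instead
```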
Also, my goal is to be able to somehow put a measure on the space of propositions so that disjunction and conjunction get translated to addition and multiplication of measures on propositions or on
sets of propositions. Can one get from the more traditional treatments of measures on sets to getting measures on propositions by letting the number of elements in a set go to 1 element? Then
propositions could be labeled synonymously with its set. Would this turn unions and intersections into disjunction and conjunction? Any help would be very much appreciated.
Texas Instruments TI-83
The equation y=ax²+bx+c is a mathematical representation of all parabolas with the axis of symmetry parallel to the y-axis. To select a particular parabola from the set as a whole, the parameters a,
b and c need to get the matching numerical values. So, if the program Parabola has to examine an individual case, it will ask the user to put in the three numbers.
At first the quadratic equation ax²+bx+c=0 is evaluated. Using the values of a, b and c the program calculates the discriminant D and the real roots x[1] and x[2] (as far as they exist).
By request a diagram is shown. The following lines are displayed:
- the parabola y = ax^2+bx+c (parabola Y1 at the screen);
- the derivative y' = dy/dx = 2ax+b (oblique line Y2);
- the directrix y = c-(b^2+1)/(4a) (horizontal line Y3);
- and both axes (y=0 and x=0).
From now all calculator functions are available. Move the cursor along a line (left/right arrow), jump to another line (up/down arrow), scroll the diagram by moving the cursor position to the centre
of the screen (ENTER) or put in an x-value and read off the new cursor coordinates at the bottom line. Perform calculations leading to for instance the coordinates of the parabola top or a point of
intersection, or determine an area as done below (CALC, TABLE).
The coordinate axes
The program pursues to compose diagrams in which the x- and y-axis have identical divisions. In several cases however, if the underlying procedure tends to generate a scarcely recognizable parabolic
curve, the program takes action to produce a more satisfying picture. After such an intervention the divisions of the axes aren't identical any more and the message "Axes differ" is displayed instead
of "Axes equal". Press WINDOW to see the actual settings.
Using D, x[1], x[2]
To keep any imaginable result, such as x2=-1.234567E-12, within one line at the screen, the roots to show, x[1] and x[2], are rounded to 7 'significant' figures. The discriminant holds at most 8 of
them (in the example above the noughts are omitted).
Even though hidden from view, the remaining decimals are still available: end the program and recover the unprocessed results from the memories D, E and F at the basic screen. In case of the given
example, pressing ALPHA, E, ENTER reveals an x[1] of -6.828427125 (-6.8284271247462).
In particular when working with an item selected from the CALC menu, the option may be very useful. The already mentioned area determination is a good example.
Finally an important note: if D<0 (no real roots) the values stored in E and F (1) are meaningless.
Additional formulas
D = b^2 - 4ac
x[1] = (-b - D^0.5)/(2a)
x[2] = (-b + D^0.5)/(2a)
x[top] = -b/(2a)
y[top] = -D/(4a)
directrix: y = -(D+1)/(4a)
distance from top to directrix = abs(1/(4a))
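The formulas above translate directly into code; an illustrative sketch (not the TI-BASIC program itself):

```python
def parabola_info(a, b, c):
    """Discriminant, real roots, top and directrix of y = ax^2 + bx + c."""
    D = b * b - 4 * a * c
    roots = None                       # no real roots when D < 0
    if D >= 0:
        roots = ((-b - D ** 0.5) / (2 * a), (-b + D ** 0.5) / (2 * a))
    top = (-b / (2 * a), -D / (4 * a))
    directrix = -(D + 1) / (4 * a)     # y-value of the horizontal line Y3
    return D, roots, top, directrix
```

For y = x^2 - 4 this gives D = 16, roots -2 and 2, top (0, -4) and directrix y = -4.25.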
Errors and peculiarities
• If the displayed directrix is a dotted line...
Your screen's resolution setting is greater than one (Xres>1). Press WINDOW to see the actual Xres value.
• If not all expected lines (Y1,Y2,Y3) are displayed...
Press the "Y=" key to view the equations Y1, Y2 and Y3, all of which should be highlighted. If not so, and in addition the WINDOW screen shows an option "SETTINGS", uninstalling the application
"Transfrm" may solve the problem. Start up the application. If a menu appears, select "1:Uninstall", press ENTER (this will not delete the application) and restart the program "PARABOLA". If the
initial screen does not offer an uninstall option, press 2nd, OFF (any other keystroke will install "Transfrm").
How to Perform Basic Calculations in Excel Spreadsheets
This Series of Excel Articles and Tutorials will teach you the fundamentals of using formulas and functions to perform basic calculations in Excel spreadsheets.
Formulas are one of the most useful features of the program. Formulas can be as simple as adding two numbers or can be complex calculations needed for high-end business projections. Once you learn the basic format of creating a formula, Excel does all the calculations for you.
This step by step tutorial shows you the steps for writing basic formulas in Excel 2010. The topics include using cell references to create formulas and how to copy formulas using the fill handle.
This step by step tutorial shows you the steps for writing basic formulas in Excel 2007. The topics include using cell references to create formulas.
This step by step tutorial shows you the steps for writing basic formulas, using cell references to create formulas, how to use formula operators and how to edit the data in formulas.
The Excel SUM function is probably the most often used function in Excel spreadsheets. This tutorial covers how to use the SUM function in Excel 2010.
This tutorial covers how to use the SUM function in Excel 2007.
One of the most useful functions in Excel is the IF function. The IF function works by testing to see if a certain condition is true. If it is, the function enters one result in a specific cell, if
it is not, it enters a different result in that cell.
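In the worksheet this looks like =IF(A1>100, "Over budget", "OK"). As an illustration of the same logic outside Excel (a sketch of our own, with Python standing in for the worksheet):

```python
def excel_if(condition, value_if_true, value_if_false):
    """Mimic Excel's =IF(condition, value_if_true, value_if_false)."""
    return value_if_true if condition else value_if_false

# With cell A1 holding 150, =IF(A1>100, "Over budget", "OK")
a1 = 150
result = excel_if(a1 > 100, "Over budget", "OK")   # "Over budget"
```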
"What if" questions involve changing the data used in Excel formulas to give different answers. Asking "What if" questions is very useful in business when planning new projects. Cost projections for
different scenarios can be quickly created and the results compared.
An Overview of Recent Research Results and Future Research Avenues Using Simulation Studies in Project Management
ISRN Computational Mathematics
Volume 2013 (2013), Article ID 513549, 19 pages
Review Article
An Overview of Recent Research Results and Future Research Avenues Using Simulation Studies in Project Management
^1Faculty of Economics and Business Administration, Ghent University, Tweekerkenstraat 2, 9000 Gent, Belgium
^2Technology and Operations Management Area, Vlerick Business School, Reep 1, 9000 Gent, Belgium
^3Department of Management Science and Innovation, University College London, Gower Street, London WC1E 6BT, UK
Received 21 July 2013; Accepted 11 September 2013
Academic Editors: R. A. Krohling and R. Pandey
Copyright © 2013 Mario Vanhoucke. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any
medium, provided the original work is properly cited.
This paper gives an overview of three simulation studies in dynamic project scheduling integrating baseline scheduling with risk analysis and project control. This integration is known in the
literature as dynamic scheduling. An integrated project control method is presented using a project control simulation approach that combines the three topics into a single decision support system.
The method makes use of Monte Carlo simulations and connects schedule risk analysis (SRA) with earned value management (EVM). A corrective action mechanism is added to the simulation model to measure
the efficiency of two alternative project control methods. At the end of the paper, a summary of recent and state-of-the-art results is given, and directions for future research based on a new
research study are presented.
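The earned value management measures referred to above reduce, at their core, to two simple ratios; a minimal illustrative sketch (variable names are ours):

```python
def evm_indices(pv, ev, ac):
    """Schedule and cost performance indices from EVM.

    pv = planned value, ev = earned value, ac = actual cost.
    SPI = EV / PV and CPI = EV / AC; values below 1 flag a project
    that is behind schedule or over budget, respectively.
    """
    return ev / pv, ev / ac
```

A project with PV = 100, EV = 80 and AC = 90 yields SPI = 0.8 and CPI ≈ 0.89: behind schedule and over budget.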
1. Introduction
Completing a project on time and within budget is not an easy task. Monitoring and controlling projects consists of processes to observe project progress in such a way that potential problems can be
identified in a timely manner and corrective actions can be taken, when necessary, to bring endangered projects back on track. The key benefit is that project performance is observed and measured
regularly to identify variances from the project baseline schedule. Therefore, monitoring the progress and performance of projects in progress using integrated project control systems requires a set
of tools and techniques that should ideally be integrated into a single decision support system. In this paper, such a system is used in a simulation study using the principles of dynamic scheduling.
The term dynamic scheduling is used to refer to an integrative project control approach using three main dimensions, which can be briefly outlined along the following lines.
(i) Baseline scheduling is necessary to construct a timetable that provides a start and finish date for each project activity, taking activity relations, resource constraints, and other project characteristics into account and aiming to reach a certain scheduling objective.
(ii) Risk analysis is crucial to analyse the strengths and weaknesses of the project baseline schedule in order to obtain information about the schedule sensitivity and the impact of potential changes that undoubtedly occur during project progress.
(iii) Project control is essential to measure the (time and cost) performance of a project during its progress and to use the information obtained during the scheduling and risk analysis steps to monitor and update the project and to take corrective actions in case of problems.
The contribution and scope of this paper is fourfold. First, the paper aims at introducing the reader into the three dimensions of the integrated dynamic scheduling approach. Secondly, a project
control simulation approach that can be used for testing existing and novel project scheduling and control techniques is presented. The study is based on a Monte Carlo simulation approach using the
three dimensions of dynamic scheduling. Third, the project control simulation approach is illustrated using three simulation experiments published in the literature. This paper provides a summary of
these three simulation studies and gives general results. Finally, a summary will be given, and directions for future research avenues will be highlighted.
For a recent overview of the integration between baseline scheduling, risk analysis, and project control, the reader is referred to the book written by Vanhoucke [2]. In the following sections, many
of the topics discussed in this paper will be used as illustrative example studies. The outline of this paper is as follows. In Section 2, a general framework for project control simulation studies
is presented, and the four steps are briefly discussed. Section 3 gives an overview of three simulation studies published in the literature that measure the importance and relevance of risk analysis
and project control in relation with the baseline schedule. Moreover, this section also reviews the main techniques to generate fictitious project data. In Section 4, some recommendations for future
research topics will be discussed. Section 5 draws general conclusions.
2. Monte Carlo Simulation
In this section, a general project control simulation algorithm that makes use of Monte Carlo simulation runs and aims at combining the three dimensions of dynamic scheduling into a single research
approach is presented. The pseudocode of the algorithm is given below, and details are displayed along the following subsections.
Algorithm 1 (Project Control Simulation).
Step 1. Construct Baseline Schedule.
Step 2. Define Activity Distributions.
Step 3. Run Simulation and Measure.
Step 4. Report Output Metrics.
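To make the four steps concrete, the following sketch runs them end-to-end on a small hypothetical four-activity network. The network, the triangular distribution parameters, and the number of runs are illustrative assumptions, not values taken from the studies discussed below.

```python
import random

# Hypothetical network: activity -> (planned duration, predecessors),
# listed in topological order (an assumption this sketch relies on).
ACT = {"A": (4, []), "B": (3, ["A"]), "C": (6, ["A"]), "D": (2, ["B", "C"])}

def makespan(durations):
    """Earliest-finish forward pass; returns the project duration."""
    ef = {}
    for a, (_, preds) in ACT.items():
        ef[a] = max((ef[p] for p in preds), default=0) + durations[a]
    return max(ef.values())

# Step 1: construct the baseline schedule from the deterministic estimates.
planned = {a: d for a, (d, _) in ACT.items()}
baseline = makespan(planned)

# Step 2: define an activity duration distribution (triangular, assumed here).
def sample_durations():
    return {a: random.triangular(0.8 * d, 1.5 * d, d) for a, d in planned.items()}

# Step 3: run the simulation and measure the deviation from the baseline.
random.seed(42)
delays = [makespan(sample_durations()) - baseline for _ in range(1000)]

# Step 4: report an output metric.
print(f"baseline: {baseline}, mean simulated delay: {sum(delays) / 1000:.2f}")
```

In the real studies, the toy network is replaced by thousands of generated project networks and many more measurement points are saved per run, as discussed in Section 3.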
2.1. Baseline Scheduling
The project baseline schedule plays a central role in any project control simulation study since it acts as a point of reference for all calculations done during the simulation runs of Step 3.
Constructing a baseline schedule is necessary to have an idea about the expected time and cost of a project. Indeed, by determining an activity timetable, a prediction can be made about the expected
time and cost of each individual activity and the complete project. This timetable will be used throughout all simulation studies as the point of reference from which every deviation will be
monitored and saved. These deviations will then be used to calculate the output metrics of Step 4 and to draw general conclusions as will be illustrated in later sections of this paper.
Project baseline scheduling can be defined as a mathematical approach to determine start and finish times for all activities of the project, taking into account precedence relations between these
activities as well as a limited availability of resources, while optimising a certain project objective, such as lead-time minimisation, cash-flow optimisation, levelling of resource use, and many others.
The early research endeavours on baseline scheduling stem from the late 1950s resulting in the two well-known scheduling techniques known as the critical path method (CPM) and the program evaluation
and review technique (PERT) [5–7]. These methods make use of activity networks with precedence relations between the activities and the primary project objective to optimise the minimisation of the
total project time. Due to the limited computer power at that time, incorporating resource constraints was largely ignored. However, these methods are still widely recognised as important
project management tools and techniques and are often used as basic tools in more advanced baseline scheduling methods. Since the development of the PERT/CPM methods, a substantial amount of research
has been carried out covering various areas of project baseline scheduling. The most important extensions of these basic scheduling methods are the incorporation of resource constraints, the
extension to other scheduling objectives, and the development of new and more powerful solution methods to construct these baseline schedules, as summarised along the following lines.
(i) Adding Resources. Resource-constrained project scheduling is a widely discussed project management topic which has roots in and relevance for both academic and practical oriented environments.
Due to its inherent problem complexity, it has been the subject of numerous research projects leading to a wide and diverse set of procedures and algorithms to construct resource feasible project
schedules. Thanks to its practical relevance, many of the research results have found their way into practical project management applications. Somewhat more than a decade ago, this overwhelming
amount of extensions inspired authors to bring structure to the chaos by writing overview papers [8–12] and summary books [2, 13] and by developing two different classification schemes [12, 14] on
resource-constrained project scheduling.
(ii) Changing Objectives. The PERT/CPM methods mainly focused on constructing baseline schedules aiming at minimising the total lead-time of the project. However, many other scheduling objectives can
be taken into account and the choice of an objective to optimise can vary between projects, sectors, countries, and so forth. Some of these scheduling objectives take the cost of resources into
account. The so-called resource availability cost project (RACP) aims at minimising the total cost of the resource availability within a predefined project deadline, and references can be found in
Demeulemeester [15], Drexl and Kimms [16], Hsu and Kim [17], Shadrokh and Kianfar [18], Yamashita et al. [19], Gather et al. [20], and Shahsavar et al. [21]. The resource
leveling project (RLP) aims at the construction of a precedence and resource feasible schedule within a predefined deadline with a resource use that is as level as possible within the project
horizon. Various procedures have been described in papers written by Bandelloni et al. [22], Gather et al. [20], Neumann and Zimmermann [23, 24], Coughlan et al. [25], Shahsavar et al. [21], and
Gather et al. [26]. The resource-constrained project with work continuity constraints (RCP-WC) takes the so-called work continuity constraints [27] into account during the construction of a project
schedule. The objective is to minimise the total idle time of bottleneck resources used in the project, and it has been used by Vanhoucke [28, 29]. The resource renting problem (RRP) aims at
minimising the total resource cost, consisting of time-dependent and time-independent costs. Time-dependent costs are incurred for each time unit a renewable resource is in the resource set, while time-independent costs are incurred every time a resource is added to the existing resource set. References to solution procedures for this problem can be found in Nübel [30], Ballestín [31, 32],
and Vandenheede and Vanhoucke [33]. Other scheduling objectives take the cost of activities into account to determine the optimal baseline schedule. The resource-constrained project with discounted
cash flows (RCP-DCF) optimises the timing of cash flows in projects by maximising the net present value. The basic idea boils down to shifting activities with a negative cash flow further in time
while positive cash flow activities should be scheduled as soon as possible, respecting the precedence relations and limited resource availabilities. Algorithms have been developed by Smith-Daniels
and Aquilano [34], Elmaghraby and Herroelen [35], Yang et al. [36], Sepil [37], Yang et al. [38], Baroum and Patterson [39], Icmeli and Erengüç [40], Pinder and Marucheck [41], Shtub and Etgar [42],
Özdamar et al. [43], Etgar et al. [44], Etgar and Shtub [45], Goto et al. [46], Neumann and Zimmermann [24], Kimms [47], Schwindt and Zimmermann [48], Vanhoucke et al. [49, 50], Selle and Zimmermann
[51], Vanhoucke et al. [52], and Vanhoucke [53, 54]. An overview is given by Mika et al. [55]. Vanhoucke and Demeulemeester [56] have illustrated the use and relevance of net present value
optimisation in project scheduling for a water company in Flanders, Belgium. When activities have a preferred time slot to start and/or end, and penalties have been defined to prevent these activities from starting/ending earlier or later than this preferred time slot, the problem is known as the resource-constrained project with weighted earliness/tardiness (RCP-WET) problem. This baseline
scheduling problem is inspired by the just-in-time philosophy from production environments and can be used in a wide variety of practical settings. Algorithmic procedures have been developed by
Schwindt [57] and Vanhoucke et al. [58]. The resource-constrained project with quality time slots (RCP-QTS) extends this RCP-WET scheduling objective: multiple quality-dependent time slots are defined rather than a single preferred start time, and earliness/tardiness penalties must be
paid when activities are scheduled outside one of these time slots. To the best of our knowledge, the problem has only been studied by Vanhoucke [59].
(iii) New Solution Methods. The vast majority of solution methods to construct resource-constrained baseline schedules can be classified in two categories. The exact procedures aim at finding the
best possible solution for the scheduling problem type and are therefore often restricted to small projects under strict assumptions. This class of optimisation procedures is widely available in the
literature but can often not be used in real settings due to the high computation times needed to solve the problems. Heuristic procedures aim at finding good, but not guaranteed to be optimal
schedules for more realistic projects (i.e., under different assumptions and for larger sizes) in a reasonable (computational) time. Although these procedures do not guarantee an optimal solution for
the project, they can be easily embedded in any scheduling software tool due to their simplicity and generality to a broad range of different projects. An extensive discussion of the different
algorithms is not within the scope of this paper. For a review of exact problem formulations, the reader is referred to Demeulemeester and Herroelen [13]. The use of heuristic procedures consists of
single pass and multipass algorithms as well as the use of metaheuristics and their extensions, and the number of published papers has exploded in recent years. An experimental investigation
of heuristic search methods to construct a project schedule with resources can be found in Kolisch and Hartmann [60].
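As an illustration of the critical path calculations that these baseline scheduling methods build on, the following sketch runs a CPM forward and backward pass on a small hypothetical network and derives the zero-float (critical) activities. The network itself is an assumption for illustration only.

```python
# Hypothetical activity-on-the-node network with durations and predecessors,
# listed in topological order.
DUR = {"A": 4, "B": 3, "C": 6, "D": 2}
PRED = {"A": [], "B": ["A"], "C": ["A"], "D": ["B", "C"]}
SUCC = {a: [b for b in PRED if a in PRED[b]] for a in PRED}

# Forward pass: earliest start (es) and earliest finish (ef).
es, ef = {}, {}
for a in DUR:
    es[a] = max((ef[p] for p in PRED[a]), default=0)
    ef[a] = es[a] + DUR[a]
makespan = max(ef.values())

# Backward pass: latest start (ls) and latest finish (lf).
ls, lf = {}, {}
for a in reversed(list(DUR)):
    lf[a] = min((ls[s] for s in SUCC[a]), default=makespan)
    ls[a] = lf[a] - DUR[a]

# Critical activities are those with zero total float (ls - es).
critical = [a for a in DUR if ls[a] - es[a] == 0]
print(makespan, critical)  # 12 ['A', 'C', 'D']
```

Resource-constrained extensions add availability checks on top of this precedence logic, which is what makes the scheduling problem computationally hard.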
Today, project baseline scheduling research continues to grow in the variety of its theoretical models, in its magnitude, and in its applications. Despite this ever-growing amount of research on
project scheduling, it has been shown in the literature that there is a wide gap between the project management discipline and the research on project management, as illustrated by Delisle and Olson
[61], among many others. However, research efforts of the recent years show a shift towards more realistic extensions trying to add real needs to the state-of-the-art algorithms and procedures. Quite
recently, a paper has been written in which a survey of variants and extensions of the resource-constrained project scheduling problem have been described [62]. This paper clearly illustrates that
the list of extensions to the basic resource-constrained project scheduling problem is long and could possibly lead to continuous improvements in the realism of the state-of-the-art literature,
bringing researchers closer to project management professionals. Moreover, while the focus of decades of research was mainly on the static development of algorithms to deal with the complex baseline
scheduling problems, the recent research activities gradually started to focus on the development of dynamic scheduling tools that make use of the baseline schedule as a prediction for future project
progress in which monitoring and controlling the project performance relative to the baseline schedule should lead to warning signals when the project tends to move into the danger zone [63].
2.2. Activity Distributions
The construction of a project baseline schedule discussed in the previous section relies on activity time and cost estimates as well as on estimates of time lags for precedence relations and use of
resources assigned to these activities. However, the constructed baseline schedule assumes that these deterministic estimates are known with certainty. Reality, however, is flavoured with
uncertainty, which renders the PERT/CPM methods and their resource-constrained extensions often inapplicable to many real life projects. Consequently, despite its relevance in practice, the PERT/CPM
approach often leads to underestimating the total project duration and costs (see e.g., Klingel [65], Schonberger [66], Gutierrez and Kouvelis [67], and many others), which obviously results in time
and cost overruns. This occurs for the following reasons.
(i) The activity durations in the critical path method are single point estimates that do not adequately address the uncertainty inherent to activities. The PERT method extends this to a three-point estimate, but still relies on a strict predefined way of analysing the critical path.
(ii) Estimates about time and cost are predictions of the future, and human beings often tend to be optimistic about them or, on the contrary, add some safety reserve to protect themselves against unexpected events.
(iii) The topological structure of a network often implies extra risk at points where parallel activities merge into a single successor activity.
Uncertainty in the activity time and cost estimates or in the presence of project activities, uncertainty in the time lags of precedence relations or in the network structure, and even uncertainty in
the allocation and costs of resources assigned to the activities can be easily modelled by defining distributions on the unknown parameters. These stochastic values must be generated from predefined
distributions that ideally reflect the real uncertainty in the estimates. The use of activity distributions on activity durations has been investigated in project management research since the early
developments of PERT/CPM. From the very beginning, project scheduling models have defined uncertainty in the activity durations by beta distributions. This is mainly due to the fact that the PERT
technique initially used these distributions [68]. Extensions to generalised beta distributions have also been recommended and used in the literature (see, e.g., AbouRizk et al. [69]). However, since
these generalised beta distribution parameters are not always easily understood or estimated, variation in activity durations is often simulated using the much simpler triangular distributions [
70], where practitioners base an initial input model on subjective estimates of the minimum, the most likely, and the maximum value of the activity duration distribution.
Although it has been mentioned in the literature that the triangular distribution can be used as a proxy for the beta distribution in risk analysis (see e.g., Johnson [71]), its arbitrary use in case
no empirical data is available should be treated with care (see, e.g., Kuhl et al. [72]). In the simulation studies of [64, 73], the generalised beta distribution has been used to model activity
duration variation, but its parameters have been approximated using the approximation rules of Kuhl et al. [72]. Other authors have used other distributions or approximations, resulting in a variety
of ways to model activity duration variation. This is also mentioned in a quite recent paper written by Trietsch et al. [74], where the authors argue that the choice of a probability distribution
seems to be driven by convenience rather than by empirical evidence. Previous research studies have revealed that the choice of distributions to model empirical data should reflect the properties of
the data. As an example, AbouRizk et al. [69] defend the importance of appropriate input models and state that their inappropriate use is suspect and should be dealt with carefully. To that purpose,
Trietsch et al. [74] advocate the use of lognormal functions for modelling activity times, based on theoretical arguments and empirical evidence. An overview of stochastic modelling in project
scheduling and the proper use of activity times distributions would lead us too far from the dynamic scheduling and project control simulation topic of this paper. Therefore, the reader is referred
to the recent paper of Trietsch et al. [74] as an ideal starting point on the use of activity time distributions to be used in the stochastic project scheduling literature.
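The distributions discussed above are straightforward to sample from. The sketch below draws activity durations from a triangular distribution (subjective minimum/most likely/maximum estimates) and from a lognormal distribution in the spirit of Trietsch et al. [74]; the concrete parameter values are illustrative assumptions, not values from the cited studies.

```python
import math
import random

random.seed(7)
planned = 10.0  # hypothetical planned activity duration

# Triangular: subjective estimates of the minimum, most likely, and maximum.
tri = [random.triangular(8.0, 18.0, 10.0) for _ in range(10_000)]

# Lognormal, parameterised here (an assumption of this sketch) so that the
# median of the sampled durations equals the planned duration.
sigma = 0.3
logn = [random.lognormvariate(math.log(planned), sigma) for _ in range(10_000)]

mean_tri = sum(tri) / len(tri)
mean_logn = sum(logn) / len(logn)
print(f"triangular mean ~ {mean_tri:.2f}")  # theoretical mean (8+10+18)/3 = 12
print(f"lognormal mean ~ {mean_logn:.2f}")  # theoretical mean 10*exp(0.045)
```

Note how both right-skewed input models yield an expected duration above the planned value, which is exactly the effect that single point estimates in the PERT/CPM approach fail to capture.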
In the remainder of this paper, activity distributions will only be used to model variation in the activity time estimates, and hence, no variation is modelled on the cost estimates or on the
estimates about precedence relation time lags or resource assignments. This also means that all experiments and corresponding results discussed in the further sections will only hold for controlling
the time performance of the projects, and the results can therefore not be generalised to cost controlling.
2.3. Run Simulation and Measure
In the third step, the project is subject to Monte Carlo simulations to imitate fictitious project progress. The literature on using Monte Carlo simulations to generate activity duration uncertainty
in a project network is rich and widespread and is praised as well as criticised throughout various research papers. In these simulation models, activity duration variation is generated using often
subjective probability distributions with limited accuracy in practical applications (see the previous section). Moreover, the inability of these simulation models to incorporate management's focus on corrective-action decision making to bring late-running projects back on track has undermined the credibility of these techniques. Despite the criticism, practitioners as well as
academics have used project network models within a general simulation framework to enable the generation of activity duration and cost uncertainties. For a discussion on the (dis)advantages of
project network simulation, the reader is referred to Williams [75].
Despite the shortcomings and criticism of using Monte Carlo simulations in project management, it is a powerful and easy to use tool to analyse the behaviour of projects in progress and to measure
the impact of changes in the initial estimates on the project objectives. Indeed, during each run of the simulation, a value for the activity duration is generated from the predefined distribution,
leading to differences between the planned durations and the simulated values. These differences between the baseline schedule key metrics and their corresponding simulated values must be measured
during each simulation step. Thanks to the enormous computer power and memory, many deviations can be measured and saved during each simulation run, such as differences in the activity criticality,
delays in the total project duration, variability in the control performance metrics, and corresponding forecasts. The specific choice of what type of measurement points will be saved during each
simulation run depends on the specific study. In the three example simulation studies of Section 3, it will be shown that the measurement points saved during each simulation run differ along the
scope of each simulation study. Afterwards, once the simulation runs have finished, these measurement points are analysed and some output metrics are calculated, as briefly discussed in Step 4.
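One typical measurement point is the activity criticality: the fraction of simulation runs in which an activity ends up with zero total float. A minimal sketch, again on a hypothetical network with assumed triangular distributions:

```python
import random

DUR = {"A": 4, "B": 3, "C": 6, "D": 2}
PRED = {"A": [], "B": ["A"], "C": ["A"], "D": ["B", "C"]}
SUCC = {a: [b for b in PRED if a in PRED[b]] for a in PRED}

def total_float(durations):
    """CPM forward + backward pass; returns (float per activity, makespan)."""
    es, ef, ls, lf = {}, {}, {}, {}
    for a in durations:                      # assumes topological key order
        es[a] = max((ef[p] for p in PRED[a]), default=0)
        ef[a] = es[a] + durations[a]
    makespan = max(ef.values())
    for a in reversed(list(durations)):
        lf[a] = min((ls[s] for s in SUCC[a]), default=makespan)
        ls[a] = lf[a] - durations[a]
    return {a: ls[a] - es[a] for a in durations}, makespan

random.seed(1)
runs = 2000
baseline = total_float(DUR)[1]
critical_runs = {a: 0 for a in DUR}
delays = []
for _ in range(runs):
    sampled = {a: random.triangular(0.8 * v, 1.6 * v, v) for a, v in DUR.items()}
    floats, mk = total_float(sampled)
    delays.append(mk - baseline)             # measurement: project delay
    for a, f in floats.items():              # measurement: activity criticality
        if f < 1e-9:                         # tolerance for rounding errors
            critical_runs[a] += 1

ci = {a: critical_runs[a] / runs for a in DUR}  # criticality index
print("criticality index:", ci)
```

In this toy example, activities A and D are critical in every run while B almost never is; exactly this kind of sensitivity information is validated in the schedule risk analysis study of Section 3.2.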
2.4. Report Output Metrics
The huge amount of data that has been saved during the simulation runs will now be analysed and summarised in key output metrics. These key output metrics differ from study to study and depend on the
definition, scope, and target of the simulation study. In the current paper, the simulations are used to perform a dynamic scheduling and integrated project control study. It will be shown in the
next section that the output metrics depend on the scope of the simulation study and the intended outcome of the research. An important aspect of the output metrics is that they require interpretation and understanding, so that conclusions can be drawn that add insight and enhance the project control approach.
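As a small illustration, the makespans saved during Step 3 can be condensed into a few key output metrics; the values below are made-up sample output, not results from the studies.

```python
import statistics

# Hypothetical makespans saved during the simulation runs of Step 3.
makespans = [12.0, 13.4, 12.8, 15.1, 12.2, 14.0, 13.1, 12.6, 13.9, 16.3]
baseline = 12.0

mean = statistics.mean(makespans)
stdev = statistics.stdev(makespans)
p90 = sorted(makespans)[int(0.9 * len(makespans)) - 1]  # simple empirical P90
on_time = sum(m <= baseline for m in makespans) / len(makespans)

print(f"expected duration: {mean:.2f} (baseline {baseline})")
print(f"stdev: {stdev:.2f}, P90: {p90}, P(on time): {on_time:.0%}")
```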
3. Simulation Studies
In this section, three illustrative project control simulation studies will be briefly presented, and references to interesting publications will be given for more details. For each simulation study,
the measurement points and output metrics will be discussed in line with the scope of the study. In Section 3.2, a schedule risk analysis study will be presented to validate the power and reliability
of risk metrics that measure the sensitivity of the activity durations. Section 3.3 gives an overview of an accuracy simulation study using earned value management and earned schedule predictors by
using three methods from the literature. Finally, in Section 3.4, an action oriented project control study is discussed in which two alternative project control methods are compared and benchmarked.
All simulation studies are carried out on a large set of fictitious projects generated under a controlled design. This generation process as well as the metrics to control the structure and design of
the data is discussed in Section 3.1.
3.1. Test Data
In this section, the generation process to construct a set of project networks that differ from each other in terms of their topological structure is described in detail. Rather than drawing
conclusions for a (limited) set of real life projects, the aim is to generate a large set of project networks that spans the full range of complexity [76]. This guarantees a very large and diverse
set of generated networks that might occur in practice, such that the results of the simulation studies can be generalised. The generation process relies on the project network generator
developed by Vanhoucke et al. [77] to generate activity-on-the-node project networks where the set of nodes represents network activities and the set of arcs represents the technological precedence
relations between the activities. These authors have proposed a network generator that allows generating networks with a controlled topological structure. They have proven that their generator is
able to generate a set of very diverse networks that differ substantially from each other from a topological structure point of view. Moreover, it has been shown in the literature that the structure
of a network heavily influences the constructed schedule [78], the risk for delays [79], the criticality of a network [80], or the computational effort an algorithm needs to schedule a project [76].
In the simulation experiments, the design and structure of the generated networks are varied and controlled, resulting in 4,100 diverse networks with 30 activities. For more information about the
specific topological structures and the generation process, the reader is referred to Vanhoucke et al. [77]. The constructed data set can be downloaded from http://www.or-as.be/measuringtime.
Various research papers dealing with network generators for project scheduling problems have been published throughout the academic literature. Demeulemeester et al. [81] have developed a random
generator for activity-on-the-arc (AoA) networks. These networks are so-called strongly random since they can be generated at random from the space of all feasible networks with a specified number of
nodes and arcs. Besides the number of nodes and the number of arcs, no other characteristics can be specified for describing the network topology. Kolisch et al. [82] describe ProGen, a network
generator for activity-on-the-node (AoN) networks which takes into account network topology as well as resource-related characteristics. Schwindt [83] extended ProGen to ProGen/Max which can handle
three different types of resource-constrained project scheduling problems with minimal and maximal time lags. Agrawal et al. [84] recognise the importance of the complexity index as a measure of
network complexity and have developed an activity-on-the-arc network generator DAGEN for which this complexity measure can be set in advance. Tavares [85] has presented a new generator RiskNet based
on the concept of the progressive level by using six morphological indicators (see later in this section). Drexl et al. [86] presented a project network generator ProGen/ based on the project
generator ProGen, incorporating numerous extensions of the classical resource-constrained project scheduling problem. Demeulemeester et al. [87] have developed an activity-on-the-node network
generator RanGen which is able to generate a large amount of networks with a given order strength (discussed later). Due to an efficient recursive search algorithm, RanGen is able to generate project
networks with exact predefined values for different topological structure measures. The network generator also takes the complexity index into account. Akkan et al. [88] have presented a constraint
logic programming approach for the generation of acyclic directed graphs. Finally, Vanhoucke et al. [77] have adapted RanGen to an alternative RanGen2 network generator which will be used for the
generation of the project networks of the studies that have led to the writing of this paper. It is based on the RiskNet generator of Tavares [85]. None of the networks generated by the previously
mentioned network generators can be called strongly random because they do not guarantee that the topology is a random selection from the space of all possible networks which satisfy the specified
input parameters.
In addition to the generation of project networks, numerous researchers have devoted attention to the topological structure of a project network. The topological structure of a network can be calculated in
various ways. Probably the best known measure for the topological structure of activity-on-the-arc networks is the coefficient of network complexity (CNC), defined by Pascoe [89] as the number of
arcs over the number of nodes and redefined by Davies [90] and Kaimann [91, 92]. The measure has been adapted for activity-on-the-node problems by Davis [93] as the number of direct arcs over the
number of activities (nodes) and has been used in the network generator ProGen [82]. Since the measure relies totally on the count of the activities and the direct arcs of the network and as it is
easy to construct networks with an equal CNC value but a different degree of difficulty, Elmaghraby and Herroelen [76] questioned the usefulness of the suggested measure. De Reyck and Herroelen [94]
and Herroelen and De Reyck [95] conclude that the correlation of the CNC with the complexity index is responsible for a number of misinterpretations with respect to the explanatory power of the CNC.
Indeed, Kolisch et al. [82] and Alvarez-Valdes and Tamarit [96] had revealed that resource-constrained project scheduling networks become easier with increasing values of the CNC, without considering
the underlying effect of the complexity index. In conclusion, the CNC, by itself, fails to discriminate between easy and hard project networks and can therefore not serve as a good measure for
describing the impact of the network topology on the hardness of a project scheduling problem.
Another well-known measure of the topological structure of an AoN network is the order strength, OS [97], defined as the number of precedence relations, including the transitive ones (when two direct or immediate precedence relations exist between activities i and j and between activities j and k, then there is also an implicit transitive relation between activities i and k), but not including the arcs connecting the dummy start or end activity, divided by the theoretical maximum number of precedence relations n(n - 1)/2, where n denotes the number of nondummy activities in the network. It is sometimes referred to as the density [
98] or the restrictiveness [99] and is equal to 1 minus the flexibility ratio [100]. Herroelen and De Reyck [95] conclude that the order strength OS, the density, the restrictiveness, and the
flexibility ratio constitute one and the same complexity measure. Schwindt [83] uses the order strength in the problem generator ProGen/Max and argues that this measure plays an important role in
predicting the difficulty of different resource-constrained project scheduling problems. De Reyck [101] verified and confirmed the conjecture that the OS outperforms the complexity index as a measure
of network complexity for the resource-constrained project scheduling problem.
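The order strength is straightforward to compute from the transitive closure of the precedence relations, as the following sketch shows for a small hypothetical network:

```python
# Hypothetical AoN network: activity -> direct predecessors,
# listed in topological order (an assumption of this sketch).
PRED = {"A": [], "B": ["A"], "C": ["A"], "D": ["B", "C"]}

def order_strength(pred):
    """OS = (# direct + transitive precedence relations) / (n(n-1)/2)."""
    n = len(pred)
    closure = {}
    for a, ps in pred.items():            # transitive closure of predecessors
        closure[a] = set(ps)
        for p in ps:
            closure[a] |= closure[p]
    relations = sum(len(s) for s in closure.values())
    return relations / (n * (n - 1) / 2)

# 5 relations (A-B, A-C, B-D, C-D, and transitive A-D) over 6 possible.
print(order_strength(PRED))
```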
The complexity index was originally defined by Bein et al. [102] for two-terminal acyclic activity-on-the-arc networks as the reduction complexity, that is, the minimum number of node reductions
which—along with series and parallel reductions—allow one to reduce a two-terminal acyclic network to a single edge. As a consequence, the complexity index measures the closeness of a network to a
series-parallel directed graph. Their approach for computing the reduction complexity consists of two steps. First, they construct the so-called complexity graph by means of a dominator and a
reverse-dominator tree. Second, they determine the minimal node cover through the use of the maximum flow procedure by Ford and Fulkerson [103]. De Reyck and Herroelen [94] adopted the reduction
complexity as the definition of the complexity index of an activity network and have proven the complexity index to outperform other popular measures of performance, such as the CNC. Moreover, they
also show that the OS, in turn, outperforms the complexity index. These studies motivated the construction of an AoN problem generator for networks where both the order strength OS and the
complexity index can be specified in advance, which has led to the development of the RanGen and RanGen2 generators (see earlier).
The topological structure of an activity-on-the-node network used in the three simulation studies is calculated based on four indicators initially proposed by Tavares et al. [79, 104] and further
developed by Vanhoucke et al. [77]. These indicators serve as classifiers of project networks by controlling the design and structure of each individual project network. All indicators have been
rescaled and lie between 0 and 1, inclusive, denoting the two extreme structures. The logic behind each indicator is straightforward and relies on general topological definitions from the project
scheduling literature. Their intuitive meaning is briefly discussed along the following lines.
(i) The serial/parallel indicator (SP) measures how closely the project network lies to a 100% parallel (SP = 0) or 100% serial (SP = 1) network. This indicator can be considered a measure for the number of critical and noncritical activities in a network and is based on the indicator proposed by Tavares et al. [79].
(ii) The activity distribution (AD) measures the distribution of the activities along the network, from a uniform distribution across the project network (AD = 0) to a highly skewed distribution (e.g., a lot of activities in the beginning, followed by only a few activities near the end) (AD = 1).
(iii) The length of arcs (LA) measures the length of each precedence relation between two activities as the distance between the two activities in the project network. A project network can have many precedence relations between activities lying far from each other (LA = 0), so that most activities can be shifted further in the network. When all precedence relations have a length of one (LA = 1), all project activities have only immediate successors, with little freedom to shift.
(iv) The topological float (TF) measures the degrees of freedom for each activity as the difference between the progressive and regressive level [105] of each activity in the project network. TF = 0 when the network structure is 100% dense and no activities can be shifted within its structure. A network with TF = 1 consists of one serial chain of activities without topological float (this chain defines the SP value) while the remaining activities have a maximal float value.
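For instance, the SP indicator can be sketched from the progressive levels of the activities. The formula SP = (m - 1)/(n - 1), with m the maximal progressive level and n the number of activities, is assumed here following the Tavares-style definition and should be checked against [77, 79] before serious use.

```python
# SP indicator sketch: 0 for a fully parallel network, 1 for a fully serial
# one (formula assumed, see the lead-in above).
def sp_indicator(pred):
    level = {}
    for a, ps in pred.items():  # assumes topological key order
        level[a] = 1 + max((level[p] for p in ps), default=0)
    n, m = len(pred), max(level.values())
    return (m - 1) / (n - 1) if n > 1 else 1.0

serial = {"A": [], "B": ["A"], "C": ["B"]}                  # m = 3, n = 3
parallel = {"A": [], "B": [], "C": []}                      # m = 1, n = 3
mixed = {"A": [], "B": ["A"], "C": ["A"], "D": ["B", "C"]}  # m = 3, n = 4
print(sp_indicator(serial), sp_indicator(parallel), sp_indicator(mixed))
```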
3.2. Study 1: Schedule Risk Analysis
Schedule risk analysis (SRA) [106] is a technique that relies on the project control simulation algorithm presented in Section 2. It generates sensitivity metrics for project activities that express
the relation between variation in the activity duration estimates and variation in the total project duration. The literature on sensitivity metrics for measuring the impact of variability in the
project activity estimates is wide and diverse. Typically, many papers and handbooks mention the idea of using Monte Carlo simulations as the most accessible technique to estimate a project’s
completion time distribution. These research papers often present simple metrics to measure a project’s sensitivity under various settings. Williams [70] reviews three important sensitivity measures
to measure the criticality and/or sensitivity of project activities. The author shows illustrative examples for three sensitivity measures and mentions weaknesses for each metric. For each
sensitivity metric, anomalies can occur which might lead to counterintuitive results. For these reasons, numerous extensions have been presented in the literature that (partly) address
these shortcomings and/or anomalies. Tavares et al. [80] present a surrogate indicator of criticality by using a regression model in order to offer a better alternative to the poor performance of the
criticality index in predicting the impact of an activity delay on the total project duration. Kuchta [107] presents an alternative criticality index based on network information. However, no
computational experiments have been performed to show the improvement of the new measure. In Elmaghraby [108], a short overview is given on the advantages and disadvantages of the three sensitivity
measures discussed in Williams [70]. The author conjectures that the relative importance of project activities should be assessed by considering a combined version of these three sensitivity measures and
reviews the more advanced studies that give partial answers to the mentioned shortcomings. More precisely, the paper reviews the research efforts related to the sensitivity of the mean and variance
of a project’s total duration due to changes in the mean and variance of individual activities. Cho and Yum [109] propose an uncertainty importance measure to measure the effect of the variability in
an activity’s duration on the variability of the overall project duration. Elmaghraby et al. [110] investigate the impact of changing the mean duration of an activity on the variability of the
project duration. Finally, Gutierrez and Paul [111] present an analytical treatment of the effect of activity variance on the expected project duration. Motivated by the heavy computational burden of
simulation techniques, various researchers have published analytical methods and/or approximation methods as a worthy alternative. An overview can be found in the study of Yao and Chu [112] and will
not be discussed in the current research paper. Although not very recently published, another interesting reference related to this topic is the classified bibliography of research related to project
risk management written by Williams [113]. A detailed study of all sensitivity extensions is outside the scope of this paper, and the reader is referred to the different sources mentioned above.
In this section, four sensitivity metrics for activity duration sensitivity will be used in the project control simulation study originally presented by Vanhoucke [73] and further discussed in
Vanhoucke [63] and Vanhoucke [2]. Three of the four activity duration sensitivity measures have been presented in the criticality study in stochastic networks written by Williams [70], while a fourth
sensitivity measure is based on the sensitivity issues published in PMBOK [114]. The four sensitivity metrics used in the simulation are described along the following lines.(i)Criticality index (CI)
measures the probability that an activity lies on the critical path. (ii)Significance index (SI) measures the relative importance of an activity taking the expected activity and project duration into
account as well as the activity slack. (iii)Schedule sensitivity index (SSI) measures the relative importance of an activity taking the CI as well as the standard deviations of the activity and
project durations into account. (iv)Cruciality index (CRI) measures the correlation between the activity duration and the total project duration in three different ways: (a)CRI(r): Pearson's
product-moment correlation coefficient; (b)CRI(ρ): Spearman's rank correlation coefficient; (c)CRI(τ): Kendall's tau rank correlation coefficient.
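As an illustration of how two of these metrics can be obtained from the Monte Carlo runs of Section 2, the sketch below estimates the criticality index and CRI(r) for a made-up two-path network; the network and the triangular parameters are invented for the example, not taken from the study.

```python
# Illustrative Monte Carlo estimation of the criticality index CI
# (fraction of runs in which an activity is critical) and CRI(r)
# (Pearson correlation between an activity's sampled duration and the
# simulated project duration). The two-path network is a made-up example.
import random

def simulate(n_runs=20000, seed=42):
    rng = random.Random(seed)
    crit_a = 0
    d_a_samples, pd_samples = [], []
    for _ in range(n_runs):
        # Two parallel paths: activity A alone versus B followed by C.
        d_a = rng.triangular(8, 14, 10)
        d_b = rng.triangular(4, 8, 5)
        d_c = rng.triangular(4, 8, 5)
        proj = max(d_a, d_b + d_c)
        if d_a >= d_b + d_c:
            crit_a += 1
        d_a_samples.append(d_a)
        pd_samples.append(proj)
    ci_a = crit_a / n_runs
    # Pearson's r for CRI(r), computed by hand.
    n = n_runs
    mx = sum(d_a_samples) / n
    my = sum(pd_samples) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(d_a_samples, pd_samples))
    vx = sum((x - mx) ** 2 for x in d_a_samples)
    vy = sum((y - my) ** 2 for y in pd_samples)
    return ci_a, cov / (vx * vy) ** 0.5

ci, cri = simulate()
print(f"CI(A) = {ci:.2f}, CRI(r)(A) = {cri:.2f}")
```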
The aim of the study is to compare the four sensitivity metrics in a project control setting and to test their ability to distinguish between highly and lowly sensitive activities such that they can
be used efficiently in a project control setting. Therefore, the scope of the study and the used measurement points and resulting output metrics can be summarised as follows.(i)Scope of the study is
to compare and validate four well-known sensitivity metrics for activity duration variations. (ii)Measurement points are activity criticality, activity slack, variability in and correlations between
the activity and project durations. (iii)Output metrics are values for the four sensitivity measures (6 values in total since three versions of CRI are used).
Figure 1 shows computational results of various experiments. The figure shows the six previously mentioned sensitivity metrics on the x-axis and displays their values between 0 and 1 on the y-axis. The
size of the bubbles in the graphs is used to display the frequency of occurrence as the number of activities in the project network with such a value. The three graphs display results for projects
with values of the SP indicator discussed in Section 3.1 equal to 0.25, 0.50, and 0.75.
The figure can be used to validate the discriminative power of the sensitivity metrics to make a distinction between highly sensitive activities (with a high expected impact) and the other less
important activities that require much less attention. Ideally, the number of highly sensitive activities should be low such that only a small part of the project activities require attention while
the others can be considered as safe. The criticality index and significance index do not report very good results on that aspect for projects with a medium (SP = 0.50) to high (SP = 0.75) number of
serial activities, since many (SP = 0.50) or most (SP = 0.75) activities are considered to be highly sensitive. The other sensitivity measures, the SSI and the three versions of the CRI, perform much
better since they show higher upward tails with a low number of activities.
The CRI metric has a more or less equal distribution of the number of observations between the lowest and highest values, certainly for the SP = 0.50 and SP = 0.75 projects. The SSI clearly shows that a
lot of activities are considered as less sensitive for SP = 0.25 and SP = 0.50 while only a few activities have much higher (i.e., sensitive) values. Consequently, the SSI and CRI metrics have a higher
discriminative power compared to the SI and CI metrics. Similar findings have been reported in Vanhoucke [73].
It should be noted that the picture does not evaluate the sensitivity metrics on their ability to measure the real sensitivity of the project activities to forecast the real impact of activity
duration changes on the project duration. Moreover, their applicability in a project control setting is also not incorporated in this figure. However, this topic is discussed in the project control
experiments of Section 3.4.
3.3. Study 2: Forecasting Accuracy
In this section, a simulation study to measure the accuracy of two earned value management (EVM) methods and one earned schedule (ES) method to forecast the final duration of a project in progress is
discussed, based on the work presented in Vanhoucke and Vandevoorde [115]. This study is a follow-up simulation study of the comparison made by Vandevoorde and Vanhoucke [116] where three forecasting
methods have been discussed and validated on a small sample of empirical project data. Results of this simulation study have also been reported in follow-up papers published by Vanhoucke and
Vandevoorde [117], Vanhoucke [118], Vanhoucke and Vandevoorde [119, 120], and Vanhoucke [121] and in the book by Vanhoucke [63] and have been validated using empirical project data from 8 Belgian
companies from various sectors [122].
Earned value management is a methodology used to measure and communicate the real physical progress of a project and to integrate the three critical elements of project management (scope, time, and
cost management). It takes into account the work completed, the time taken, and the costs incurred to complete the project and it helps to evaluate and control project risks by measuring project
progress in monetary terms. The basic principles and the use in practice have been comprehensively described in many sources [123]. EVM relies on the schedule performance index (SPI) to measure the
performance of the project duration during progress. Although EVM has been set up to follow up both time and cost, the majority of the research has been focused on the cost aspect (see e.g., the
paper written by Fleming and Koppelman [124] who discuss EVM from a price tag point of view). In 2003, an alternative method known as earned schedule was proposed by Lipke [125]. It relies on
principles similar to those of EVM but also measures the time performance of projects in progress through an alternative schedule performance index, SPI(t), which better reflects the real-time
performance of projects in progress.
The three forecasting methods to forecast the final project duration are known as the planned value method (PVM) [126], the earned duration method (EDM) [127, 128], and the earned schedule method
(ESM) [125]. A prediction for the final project duration made during the progress of the project using one of these three methods is known as the estimated duration at completion, abbreviated as
EAC(t). Each of the three methods can be used in three alternative ways, expressing the assumption about future expected project performance, resulting in 3 × 3 = 9 EAC(t) methods.
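A minimal sketch of the earned schedule method is given below, assuming the standard earned schedule formulas (ES found by linear interpolation on the cumulative planned value curve, SPI(t) = ES/AT, and EAC(t) = AT + (PD − ES)/SPI(t) when current performance is assumed to continue); the PV curve and progress numbers are invented for the illustration.

```python
# Hedged sketch of the earned schedule method (ESM). PD is the planned
# duration, AT the actual time, EV the earned value; the PV curve below
# is invented for illustration.

def earned_schedule(pv_curve, ev):
    """ES by linear interpolation on a cumulative planned value curve."""
    for t in range(len(pv_curve) - 1):
        if pv_curve[t] <= ev <= pv_curve[t + 1]:
            frac = (ev - pv_curve[t]) / (pv_curve[t + 1] - pv_curve[t])
            return t + frac
    return float(len(pv_curve) - 1)

pv = [0, 100, 250, 450, 700, 1000]   # cumulative PV; BAC = 1000, PD = 5
at, ev, pd_ = 3, 250, 5               # review at time 3, EV earned so far

es = earned_schedule(pv, ev)          # ES = 2: EV matches the PV planned at t = 2
spi_t = es / at
eac_t = at + (pd_ - es) / spi_t
print(f"ES = {es:.2f}, SPI(t) = {spi_t:.2f}, EAC(t) = {eac_t:.2f}")
```

With EV = 250 earned at time 3, ES = 2 and SPI(t) ≈ 0.67, so the ESM forecast stretches the remaining planned work and yields EAC(t) = 7.5 instead of the planned 5 periods.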
Unique to this simulation study is the use of the activity distribution functions to simulate activity variation, as discussed in Step 2 of the project control simulation algorithm of Section 2. The
simple and easy to use triangular distribution is used to simulate activity duration variation, but its parameters are set in such a way that nine predefined simulation scenarios could be tested.
These 9 simulation scenarios are defined based on two parameters. The first is the variation in activity durations, which can be defined on the critical and/or noncritical activities. The second
parameter is the project performance, measured by the schedule performance index SPI(t) at periodic time intervals along each simulation run. The use of these two parameters results in 9 simulation
scenarios that can be classified as follows.
True Scenarios. Five of the nine scenarios report an average project duration performance (ahead of schedule, on time, or delayed) measured by the periodic SPI(t) and finally result in a real project
duration that corresponds to the measured performance. These scenarios are called true scenarios since the measured performance metric SPI(t) reflects the true outcome of the project.
Misleading Scenarios. Two of the nine scenarios are somewhat misleading since they measure a project ahead of schedule or a project delay, while the project finishes exactly on time.
False Scenarios. Two of the nine scenarios are simply wrong since the performance measurement of SPI(t) reports the complete opposite of the final outcome. When a project is reported to be ahead of
schedule, it finishes late, while a warning of project delays turns out to result in a project finishing earlier than expected.
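One way such scenarios can be parameterised is sketched below: the skew of the triangular distribution is varied per activity class, so that, for example, right-skewed sampling on noncritical activities combined with left-skewed sampling on critical ones can produce the misleading and false scenarios described above. The parameter choices are our own illustrative assumptions, not the values used in the study.

```python
# Sketch of scenario parameterisation via skewed triangular sampling.
# A left-skewed triangle (mode below the plan) tends to produce early
# finishes; a right-skewed one tends to produce delays. The 0.5/1.5
# bounds and the 0.4 skew factor are invented for illustration.
import random

def sample_duration(planned, skew, rng):
    """Triangular sample around the planned duration.

    skew < 0: mode pulled towards early finishes,
    skew = 0: symmetric around the plan,
    skew > 0: mode pushed towards delays.
    """
    low, high = 0.5 * planned, 1.5 * planned
    mode = planned * (1.0 + 0.4 * skew)
    return rng.triangular(low, high, mode)

rng = random.Random(7)
planned = 10.0
early = [sample_duration(planned, -1.0, rng) for _ in range(50000)]
late = [sample_duration(planned, +1.0, rng) for _ in range(50000)]
print(sum(early) / len(early), sum(late) / len(late))
```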
The reason why different simulation settings are used for critical versus noncritical activities lies at the heart of EVM and is based on the comments made by Jacob [127]. This author argues that the
use of EVM and ES metrics on the project level is dangerous and might lead to wrong conclusions. The reason is that variation in noncritical activities has no real effect on the project duration but
is nevertheless measured by the SPI and SPI(t) metrics on the project level and hence might give a false warning signal to the project manager. Consequently, the author suggests using the SPI and
SPI(t) metrics on the activity level to avoid these errors, and certainly not at higher levels of the work breakdown structure (WBS). This concern has also been raised by other authors and has led to a
discussion summarised in papers such as Book [129, 130], Jacob [131], and Lipke [132].
Although it is recognised that, at higher WBS levels, effects (delays) of nonperforming activities can be neutralised by well performing activities (ahead of schedule), which might result in masking
potential problems, in the simulation study of this section, the advice of these authors has not been followed. Instead, in contradiction to the recommendations of Jacob [127], the SPI and SPI(t)
indicators are nevertheless measured on the project level, realising that this might lead to wrong conclusions. Therefore, the aim of the study is to test the impact of this error on the
forecasting accuracy when the performance measures are used at the highest WBS level (i.e., the project level). By splitting the scenarios between critical and noncritical activities, the simulation
study can be used to test this known error and its impact on the forecasting accuracy. The reason why these recommendations are ignored is that it is believed that the only approach that can be taken
by practitioners is indeed to measure performance on the project level. These measures are used as early warning signals to detect problems and/or opportunities in an easy and efficient way at
high levels in the WBS rather than a simple replacement of the critical path based scheduling tools. This early warning signal, if analysed properly, defines the need to eventually drill down into
lower WBS levels. In conjunction with the project schedule, this allows corrective actions to be taken on activities that are in trouble (especially those tasks that are on the critical path). A similar
observation has been made by Lipke et al. [133] who also note that detailed schedule analysis is a burdensome activity and, if performed, often can have disrupting effects on the project team. EVM
offers calculation methods yielding reliable results on higher WBS levels, which greatly simplify final duration and completion date forecasting.
The scope of the study and the used measurement points and resulting output metrics can be summarised as follows: (i)Scope of the study is to compare and validate three EVM/ES techniques (PVM, EDM,
and ESM) for forecasting the project duration. (ii)Measurement points are periodic performance metrics (SPI and SPI(t)) and the resulting 9 forecasting methods (EAC(t)). (iii)Output metrics are mean
absolute percentage error (MAPE) and mean percentage error (MPE) to measure the accuracy of the three forecasting methods.
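The two output metrics can be stated as code for clarity. The periodic EAC(t) values below are invented, and sign conventions for MPE differ across sources, so the version here (positive when the forecasts underestimate the real duration) is only one possible choice.

```python
# MAPE penalises the absolute relative error of each periodic forecast
# against the real final duration; MPE keeps the sign and thereby
# reveals systematic over- or underestimation.

def mape(forecasts, actual):
    """Mean absolute percentage error of periodic duration forecasts."""
    return 100.0 * sum(abs(actual - f) / actual for f in forecasts) / len(forecasts)

def mpe(forecasts, actual):
    """Mean percentage error; positive when forecasts underestimate."""
    return 100.0 * sum((actual - f) / actual for f in forecasts) / len(forecasts)

periodic_eac_t = [11.0, 10.5, 10.0, 9.5]  # invented EAC(t) values at four reviews
real_duration = 10.0

print(round(mape(periodic_eac_t, real_duration), 2))  # 5.0
print(round(mpe(periodic_eac_t, real_duration), 2))   # -2.5
```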
Table 1 presents partial results of the forecasting accuracy for the three methods (PVM, EDM, and ESM) along the completion stage of the project and for different project networks. The completion
stage is measured as the percentage completed EV/BAC, with EV the earned value and BAC the budget at completion. Early, middle, and late stages are defined as 0%–30%, 30%–70%, and 70%–100% percentage
completed, respectively. The project network structure is shown by the serial/parallel degree of a project and is measured by the SP indicator discussed in Section 3.1. The column with
label “P” represents networks with most activities in parallel while the column with label “S” consists of project networks with mainly serial activities. The “S/P” column is a mix of both and
contains both parallel and serial activities. The forecast accuracy is given in the body of the table. Since it is measured by the MAPE, lower numbers denote a higher forecast accuracy.
The table clearly shows that all three EVM/ES forecasting methods are more reliable as the number of serial activities increases. More serial projects have more critical activities, and hence,
potential errors of project performance on high WBS levels are unlikely to happen since each delay on individual (critical) activities has a real effect on the project duration. Moreover, the ESM
outperforms the PVM and EDM along all stages of completion to predict the duration of a project. The table also indicates that the accuracy of all EVM performance measures improves towards the final
stages. However, the PVM shows a low accuracy at the final stages, due to the unreliable SPI trend. Indeed, it is known that the SPI goes to one, even for projects ending late, leading to biased
results, which is not the case for the SPI(t) metric [63, 125].
3.4. Study 3: Project Control
The relevance of the two previous simulation studies lies in the ability of the two methods (SRA in Section 3.2 and EVM/ES in Section 3.3) to monitor and control projects and to generate warning
signals for actions when the project runs out of control. In the third simulation study, the two previously mentioned methods are integrated into a dynamic project control system. The simulation is
set up to test two alternative project control methods by using two types of dynamic information during project progress to improve corrective action decisions. Information on the sensitivity of
individual project activities obtained through schedule risk analysis (SRA) as well as dynamic performance information obtained through earned value/schedule management (EVM/ES) will be dynamically
used to steer the corrective action decision making process. The simulation study has been originally published by Vanhoucke [64] and further discussed in Vanhoucke [2, 3, 63]. Recently, a new study
on integrating SRA with EVM/ES has been published by Elshaer [134].
The two alternative project control methods are considered from two extreme WBS level starting points. Although they represent a rather black-and-white view on project control, they can be considered
as fundamentally different control approaches, both of which can easily be implemented in a less extreme way or can even be combined or mixed during project progress. Figure 2 graphically displays
the two extreme control methods along the WBS level which are known as the top-down project control approach and a bottom-up project control approach. Details are given along the following lines.
Bottom-Up Project Control. The sensitivity metrics used in the study discussed in Section 3.2 are crucial to the project manager since they provide information about the sensitivity of activity
duration variation and its expected impact on the project duration. This information steers a project manager's attention towards the subset of project activities with a high expected effect on the
overall project performance. These highly sensitive activities are subject to intensive control, while others require less or no attention during project execution. Since the activity information at
the lowest level of the WBS is used to control the project and to take actions that should bring projects in danger back on track, this approach is called bottom-up project control.
Top-Down Project Control. Project control using the EVM/ES systems discussed in Section 3.3 offers the project manager a tool for a quick and easy sanity check at the highest level of the WBS,
the project level. They provide early warning signals to detect problems and/or opportunities in an easy and efficient way that define the need to eventually drill down into lower WBS levels. In
conjunction with the project schedule, it allows taking corrective actions on those activities that are in trouble (especially those tasks which lie on the critical path).
The scope of the study and the used measurement points and resulting output metrics can be summarised as follows.(i)Scope of the study is the comparison between top-down project control approach
using EVM/ES and bottom-up project control approach using SRA. (ii)Measurement points include the number of control points as a proxy for the control effort and the result of corrective actions taken
by the project manager as a proxy for the quality of the actions. (iii)Output metrics are the efficiency of both project control approaches defined as a comparison between the effort of controlling
the project in progress and the results of the actions, as explained along the following lines.
Unique to this simulation study is the presence of corrective actions to bring projects in danger back on track. These simulated actions must be taken from the moment performance thresholds are
exceeded. The specific threshold depends on the used control approach. For the bottom-up control approach, only highly sensitive activities are controlled, and hence, action thresholds are set on the
values for the sensitivity metrics. As an example, from the moment the SSI is bigger than 70%, the activity is said to be highly sensitive, and it is expected that delays on this activity might have
a significant impact on the total project duration. Therefore, it is better to carefully control this activity when it is in progress. Activities with a low SSI value, on the contrary, are considered
to be safe and need no intensive control during progress (=lower effort). The top-down project control approach is done using schedule performance information at regular points in time, given by the
SPI and SPI(t). From the moment these values drop below a certain predefined threshold, say, for example, 70%, it is an indication that some of the underlying activities at the lowest WBS level might
be in danger. Therefore, the project manager has to drill down (=increasing effort), trying to detect the problem and find out whether corrective actions are necessary to improve the currently low
performance.
Figure 3 displays a graphical representation of the simulation approach of the project control study. The dynamic simulation starts at the project start (time 0) and gradually advances at each review
period until the project is finished. At each control point, the necessary information is calculated or simulated, and, once thresholds are exceeded, triggers for searching project problems and
actions on the activity level might be performed.
The two alternative control methods show one important difference. In the top-down approach displayed at the left of the picture, all EVM performance metrics are calculated at each time period, and
only when thresholds are exceeded, a drill down requires further attention in search for potential problems that might require action. In a bottom-up approach, displayed at the right of the picture,
a subset of activities in progress, determined by the thresholds, is subject to control and might require actions in case of problems. Consequently, the selection of the set of activities that
require intensive control in search of potential problems and the corresponding actions to bring problems back on track are different for the two control methods, as follows.(i)Top-down. At every
time period, all EVM performance metrics are calculated, and when thresholds are exceeded, all activities in progress will be scanned in search for potential problems (and corresponding actions).
Consequently, the search for project problems is triggered by thresholds on periodic EVM metrics and, once exceeded, is done on all activities in progress. (ii)Bottom-up. At every time period, all
SRA sensitivity metrics are simulated, and all activities in progress are scanned for their values. Only a subset of these activities, those that exceed the thresholds, will be further analysed in
search for problems (and corresponding actions). Consequently, the search for project problems is triggered by thresholds on activity sensitivity metrics and is performed only on a subset of those
activities in progress with a value exceeding the threshold value.
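The difference between the two triggers can be made concrete with a small sketch; the threshold values and metric inputs are arbitrary illustrations, not the values used in the study.

```python
# Schematic comparison of the two control triggers, assuming per-period
# metrics are already available. In the top-down loop a single
# project-level SPI(t) value decides whether ALL activities in progress
# are inspected; in the bottom-up loop each activity's own sensitivity
# value decides whether THAT activity is inspected.

def top_down_trigger(spi_t, in_progress, threshold=0.70):
    """Returns the activities to inspect this period."""
    return list(in_progress) if spi_t < threshold else []

def bottom_up_trigger(ssi, in_progress, threshold=0.70):
    """Returns only the highly sensitive activities in progress."""
    return [a for a in in_progress if ssi[a] > threshold]

in_progress = ["A", "B", "C"]
ssi = {"A": 0.85, "B": 0.10, "C": 0.40}

print(top_down_trigger(0.65, in_progress))   # ['A', 'B', 'C']
print(top_down_trigger(0.90, in_progress))   # []
print(bottom_up_trigger(ssi, in_progress))   # ['A']
```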
Figure 4 shows an illustrative graph of this control efficiency for both control approaches. The x-axis displays the closeness of each project to a complete serial or parallel network, as measured by
the SP indicator discussed in Section 3.1. The y-axis measures the control efficiency as follows.(i)The effort is measured by the number of control points in the simulation study. This number is equal
to the number of times the action thresholds are exceeded. Indeed, from the moment a threshold is exceeded, the project manager must spend time to find out whether there is a problem during progress.
Hence, the amount of control points is used as a proxy for the effort of control and depends on the value of the action thresholds. Obviously, the lower the effort, the higher the control efficiency,
and hence the effort is set in the denominator of the control efficiency output metric. In Figure 3, the effort is measured by the number of activities that require intense control at each review
period. For both approaches, this is equal to the number of times the "threshold exceeded" block gives a "yes" answer. (ii)Return. When corrective actions are taken, their impact should bring projects
in danger back on track and should therefore contribute to the overall success of the project. Therefore, the return of actions is measured as the difference between the project delay without actions
and the project duration with actions. Obviously, the return of the actions can be considered as a proxy for the quality of the actions and should be set in the numerator of the project control
efficiency output metric.
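Putting the two bullets together, the control efficiency output metric can be sketched as the return of the actions divided by the control effort; the numbers below are purely illustrative.

```python
# Control efficiency as described above: the return of corrective
# actions (delay avoided) in the numerator, the control effort (number
# of control points at which a threshold was exceeded) in the
# denominator.

def control_efficiency(delay_without_actions, delay_with_actions, control_points):
    if control_points == 0:
        return 0.0
    return (delay_without_actions - delay_with_actions) / control_points

# e.g. actions cut the delay from 12 to 4 days at the cost of 8 triggers
print(control_efficiency(12.0, 4.0, 8))   # 1.0
```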
The graph clearly demonstrates that a top-down project-based control approach using the EVM/ES systems provides highly accurate results when the project network contains more serial activities. The
bottom-up control approach using sensitivity information of activities obtained through a standard schedule risk analysis is particularly useful when projects contain a lot of parallel activities.
This bottom-up approach requires subjective estimates of probability distributions to define the activity risk profiles, but it simplifies the control effort by focusing on those activities with a
highly expected effect on the overall project objective.
4. Future Research
In this section, a short overview is given of ideas for improving current project control systems and/or developing novel techniques and integrating them further into decision support systems in order
to better control projects in progress. Most of the ideas presented in this section consist of work in progress funded by the concerted research actions (CRAs) funding. This funding has resulted in a
research project with a duration of six years that started in 2012. This "more than a million euro" research project, in collaboration with international universities and companies, will certainly
move the research in project management and dynamic scheduling towards a higher level.
4.1. Statistical Project Control
The project control approach of this paper is set up to indicate the direction of change in preliminary planning variables, set by the baseline schedule, compared with actual performance during
project progress. In case the project performance of projects in progress deviates from the planned performance, a warning is indicated by the system in order to take corrective actions.
In the literature, various systems have been developed to measure deviations between planned and actual performance in terms of time and cost to trigger actions when thresholds are exceeded. Although
the use of threshold values has been explained in the study of Section 3.4 to trigger the corrective action process, nothing has been said about the probability that real project problems occur once
these threshold values are exceeded. Indeed, little research is done on the use and setting of these threshold values and their accuracy to timely detect real project problems. Therefore, it is
believed that future research should point to this direction. The vast amount of data available during project progress should allow the project manager to use statistical techniques in order to
improve the discriminative power between in-control and out-of-control project progress situations. The use of these so-called Statistical Project Control (SPC) systems should ideally lead to an
improved ability to trigger actions when variation in a project’s progress exceeds certain predefined thresholds.
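As an illustration of the general idea (not of a specific published SPC model), tolerance limits for SPI(t) could be derived from simulated in-control runs and used as action thresholds; the normal distribution, its parameters, and the quantile-based limits below are all our own assumptions.

```python
# Sketch of statistically derived action thresholds: empirical
# quantiles of simulated "in-control" SPI(t) observations define a
# tolerance band, and a real observation outside the band triggers a
# drill-down. The simulated data is artificial.
import random

rng = random.Random(1)
# 1000 simulated in-control SPI(t) observations for one review period.
in_control = [rng.gauss(1.0, 0.08) for _ in range(1000)]

def tolerance_limits(samples, alpha=0.05):
    s = sorted(samples)
    lo = s[int(alpha / 2 * len(s))]
    hi = s[int((1 - alpha / 2) * len(s)) - 1]
    return lo, hi

lo, hi = tolerance_limits(in_control)

def out_of_control(spi_t, lo, hi):
    return not (lo <= spi_t <= hi)

print(out_of_control(0.78, lo, hi))   # likely True: far below the band
print(out_of_control(1.00, lo, hi))   # False: in the centre of the band
```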
The use of statistical project control is not new in the literature and has been investigated by Lipke and Vaughn [135], Bauch and Chung [136], Wang et al. [137], Leu and Lin [138], and National
Research Council [139]. These papers mainly focus on the use of statistical project control as an alternative for the Statistical Process Control used in manufacturing processes. Despite the fact
that both approaches have the same abbreviation SPC, the statistical project control approach should be fundamentally different from the statistical process control [140]. Therefore, it is believed
that the future research on SPC should go much further than the models and techniques presented in these papers. SPC should be a new approach to control projects based on the analysis of data that is
generated before the start of the project (static) as well as during project progress (dynamic) [141]. This data analysis should allow the user to set automatic thresholds using multivariate
statistics for EVM/ES systems and SRA systems in order to replace the often subjective action thresholds set by project managers based on wild guesses and experience. Fundamental research is
therefore crucial to validate the novel statistical techniques to investigate their merits and pitfalls and to allow the development of project control decision support systems based on a sound
methodology. Research on this relatively new project control topic is available in Colin and Vanhoucke [140, 142].
4.2. If Time Is Money, Accuracy Pays
The “if time is money, accuracy pays” [143] statement highlights the relevance and importance of accuracy in the simulation studies presented in this paper. Measuring and improving the accuracy of
predictive methods to forecast the final duration of a project in progress using EVM/ES systems is crucial for project control to monitor the project time objectives, and since time is money, also
the cost objectives. Recent research efforts in project duration forecasting have focused on improving the accuracy of forecasts by combining the existing EVM/ES forecasting techniques or even by
borrowing principles from the traditional forecasting literature and adapting them to a project control setting. Although researchers have recommended combined forecasts for over half a century,
their use in a project control environment is relatively new, and it is therefore believed that future research efforts should focus on forecasting improvement techniques. The use of composite
forecasting methods or extensions to, for example, the Kalman filter [144] or Bayesian inference [145] are excellent examples of future research avenues for project duration forecasting.
However, the quality of forecasting metrics does not only depend on the average accuracy measured by the sum of absolute or relative errors over all review periods, but it also depends on the
stability of the forecasts over the periods. Indeed, when project managers use the periodic forecasts to monitor and control the performance of projects in progress, it is very important to have a
reliable value for each period such that actions can be taken based on the well-considered view on the forecasts over the last few periods. Stability is an important aspect in this control process
since it avoids overreactions based on a single value for the forecast but instead puts focus on a series of forecasts having similar (stability) and reliable (accuracy) values. Various methods for
assessing the stability of forecasts have been discussed in the literature, and an overview would fall outside the scope of this paper. It is however believed that these efforts can and will be used
in a project control setting in future research efforts. Research studies to determine which of the two aspects of forecasting quality (accuracy or stability) is most important should be relevant for
both academics and professionals. Future research should focus on further improvements of forecasting accuracy and stability and the possible trade-offs between these two quality dimensions.
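The two quality dimensions can be made tangible with two simple measures for a series of periodic forecasts: an accuracy measure (mean absolute percentage error against the real final duration) and a stability measure (mean absolute period-to-period change). Both formulas in the sketch below are common generic choices, not necessarily the ones used in the studies discussed here, and the forecast series is hypothetical.

```python
# Hedged sketch: two quality measures for a series of periodic duration
# forecasts f_1..f_T against the real final project duration RD.

def mape(forecasts, real_duration):
    """Accuracy: mean absolute percentage error over all review periods."""
    return sum(abs(f - real_duration) / real_duration for f in forecasts) / len(forecasts)

def instability(forecasts):
    """Stability (lower is better): mean absolute change between periods."""
    diffs = [abs(b - a) for a, b in zip(forecasts, forecasts[1:])]
    return sum(diffs) / len(diffs)

forecasts = [20.0, 26.0, 21.0, 24.0, 25.0]  # hypothetical periodic forecasts
print(mape(forecasts, 25.0))   # small value = accurate on average
print(instability(forecasts))  # large value = erratic, invites overreaction
```

A series can score well on one measure and poorly on the other, which is exactly the trade-off that makes the accuracy-versus-stability question worth studying.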
Stability studies in project management are not new. Studies on cost forecasting using EVM have been done by Christensen and Heise [146], and Christensen and Payne [147], among others. Time
forecasting stability studies using ES are relatively new and are done by Henderson and Zwikael [148].
A third possible extension and future research direction in a project control setting is the relevance and importance of the baseline schedule. Since all EVM/ES performance metrics and forecasts are
measured relative to the baseline schedule, the quality of the baseline schedule could be an important driver for forecasting accuracy/stability. The connection between the baseline schedule and the
EVM/ES methodology can be analysed using a relatively new concept known as schedule adherence, which could potentially play an important role in this future research direction. The concept
of schedule adherence was originally proposed by Lipke [149] as a simple extension of the earned schedule method, resulting in the so-called p-factor. The p-factor is defined as the portion of earned
value accrued in congruence with the baseline schedule, that is, the tasks which ought to be either completed or in progress. The rationale behind this new measure is that performing work not
according to the baseline schedule often indicates activity impediments or is likely a cause of rework. The basic assumption behind this new approach lies in the idea that whenever impediments occur
(activities that are performed relatively less efficiently compared to the project progress), resources are shifted from these constrained activities to other activities where they could gain earned
value. However, this results in a project execution which deviates from the original baseline schedule. Consequently, this might involve a certain degree of risk, since the latter activities are
performed without the necessary inputs and might result in a certain portion of rework. To date, the concept has not yet passed the test of logic, and future research will indicate whether it
has merit in a control setting. To the best of our knowledge, the concept has only been preliminarily analysed and investigated in a simulation study published in Vanhoucke [63, 150].
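Following the definition above, the p-factor can be read as the ratio of the earned value accrued in congruence with the baseline schedule to the planned value at the earned schedule time ES. The sketch below is one illustrative reading of that definition, not the original implementation, and the per-activity figures are hypothetical.

```python
# Illustrative sketch of Lipke's p-factor:
#   p = sum_i min(PV_i(ES), EV_i(t)) / sum_i PV_i(ES)
# where PV_i(ES) is activity i's planned value at the earned schedule time ES
# and EV_i(t) its earned value at the current time t. Data are hypothetical.

def p_factor(pv_at_es, ev_now):
    """p = 1 means progress fully adheres to the baseline schedule; p < 1
    means part of the earned value was accrued out of schedule congruence."""
    assert len(pv_at_es) == len(ev_now)
    congruent = sum(min(pv, ev) for pv, ev in zip(pv_at_es, ev_now))
    return congruent / sum(pv_at_es)

pv_at_es = [100.0, 80.0, 50.0]  # planned value per activity at time ES
ev_now   = [100.0, 60.0, 70.0]  # earned value per activity at time t
print(p_factor(pv_at_es, ev_now))
```

In this example the third activity is ahead of its planned value while the second lags behind, so part of the accrued earned value is not congruent with the baseline and p drops below one.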
4.3. Research Meets Practice
A final future research avenue lies in the translation of academic research results into practical guidelines and rules-of-thumb that are relevant for professionals [151]. The research studies
presented and written by the author of this paper have led to various outcomes that aim at bringing the academic world and the professional business world closer to each other. Some of the most
relevant results are briefly discussed along the following lines.
The project scheduling game (PSG, http://www.protrack.be/psg) is an IT-supported simulation game to train young project management professionals the basic concepts of baseline scheduling, risk
management, and project control. The business game is used in university and MBA trainings as well as in commercial trainings and allows the participants to get acquainted with the dynamic scheduling
principles in a learning-by-doing way. References can be found in Vanhoucke et al. [152] and Wauters and Vanhoucke [153].
EVM Europe (http://www.evm-europe.eu/) is the European organisation to bring practitioners and researchers together to share new ideas, to stimulate innovative research, and to advance the
state-of-the-art and best practices on project control. At EVM Europe, research meets practice at yearly conferences that showcase best practices and research results and aim to narrow the gap
between the two project management worlds.
PM Knowledge Center (PMKC, http://www.pmknowledgecenter.com/) is a free and online learning tool to stimulate interaction between researchers, students, and practitioners in the field of project
management. It contains papers and reference resources to inform and improve the practice of dynamic scheduling and integrated project control.
In the book “The Art of Project Management: A Story about Work and Passion” [154], an overview is given of the endeavours done in the past and the ideas planned for the future. It
tells about the products and ideas in project management and provides a brief overview of the most important people who inspired the author of the current paper for the work that has been done in the
past. It looks at the project management work not only from a research point of view, but also from a teaching and commercial point of view. It tells about work, and the passion that has led to
the results of the hard work. It is not a scientific book. It is not a managerial book either. It is just a story about work and passion.
5. Conclusions
In this paper, an overview of recent research results and future research avenues is given for a specific topic on project management and scheduling research using simulations. It is shown that the
simulation studies of this paper fit in the research domain of dynamic scheduling, which refers to a dynamic and integrated approach on baseline scheduling, risk analysis, and project control. The
focus of this paper lies on the integration between risk analysis and project control, and the baseline scheduling step is considered given.
A simple and easy-to-use project control simulation algorithm consisting of four steps is presented, including the construction of a project baseline schedule. A nonexhaustive literature overview on
baseline scheduling with different scheduling objectives is given in the paper. The definition of variation in the activity durations is the central starting point of this paper, and hence
all the discussed simulation studies only report results for time performance of projects in progress.
Three simulation studies have been discussed in the paper, based on numerous research projects done in the past and published throughout the literature in academic papers, popular magazines,
websites, and books. In a first schedule risk analysis study, four well-known metrics to measure the sensitivity of variation in activity durations are compared and validated, and their ability to
make a distinction between activities with a low and high expected impact on the project duration is analysed. A second forecasting accuracy study focuses on three predictive methods using earned
value management and earned schedule and compares the absolute and relative errors of the forecasting methods. A last project control study integrates the two previous studies in an action-oriented
project control framework and compares two alternative control methods, known as bottom-up and top-down control and measures their efficiency.
Finally, three directions for future research avenues are briefly discussed. First, the overwhelming amount of available data to control projects should lead to improved statistical control
techniques that ultimately should lead to automatic decision support systems to better steer the actions taken by project managers. Moreover, improving the accuracy of existing or new techniques and
extending the studies in stability and schedule adherence will probably contribute to a better understanding of the real drivers of project control and success. Finally, the necessity of bringing the
often separate worlds of research and practice closer to each other is a never-ending task and challenge for both researchers and professionals, in order to let the Project Management discipline move forward.
Conflict of Interests
The author declares that there is no conflict of interests regarding the publication of this paper.
Acknowledgments
The summary of many of the research papers discussed in this paper has been funded by various research funding organisations. Therefore, the author acknowledges the national support given by the
“Fonds voor Wetenschappelijk Onderzoek” (FWO) for the projects under Contract nos. G/0194/06, G/0095.10N, and 3G015711, as well as the support of the “Bijzonder Onderzoeksfonds” (BOF) for the
projects under Contract nos. 01110003 and BOF12GOA021. Furthermore, the support by the Research Collaboration Fund of PMI Belgium received in Brussels, in 2007, at the Belgian Chapter Meeting, the
support for the research project funding by the Flemish Government (2008), Belgium, and the additional funding of the National Bank of Belgium are also acknowledged. This research is part of the IPMA
Research Award 2008 project by Mario Vanhoucke who was awarded at the 22nd World Congress in Rome, Italy, for his study “Measuring Time—An Earned Value Simulation Study.” An overview of the research
results done at the “Operations Research and Scheduling” Group can be found in Vanhoucke [155].
References
1. E. Uyttewaal, Dynamic Scheduling with Microsoft Office Project 2003: The Book by and for Professionals, International Institute for Learning, 2005.
2. M. Vanhoucke, Project Management with Dynamic Scheduling: Baseline Scheduling, Risk Analysis and Project Control, vol. 18, Springer, 2012.
3. M. Vanhoucke, “Dynamic scheduling: integrating schedule risk analysis with earned value management,” The Measurable News, vol. 2, pp. 11–13, 2012.
4. M. Vanhoucke, “Measuring time using novel earned value management metrics,” in Proceedings of the 22nd IPMA World Congress, vol. 1, pp. 99–103, Rome, Italy, November 2008.
5. J. Kelley and M. Walker, Critical Path Planning and Scheduling: An Introduction, Mauchly Associates, Ambler, Pa, USA, 1959.
6. M. Walker and J. Sawyer, “Project planning and scheduling,” Tech. Rep. 6959, E. I. du Pont de Nemours and Company, Wilmington, Del, USA, 1959.
7. J. Kelley, “Critical path planning and scheduling: mathematical basis,” Operations Research, vol. 9, pp. 296–320, 1961.
8. O. Icmeli, S. Erengüç, and C. Zappe, “Project scheduling problems: a survey,” International Journal of Operations & Production Management, vol. 13, pp. 80–91, 1993.
9. S. Elmaghraby, “Activity nets: a guided tour through some recent developments,” European Journal of Operational Research, vol. 82, no. 3, pp. 383–408, 1995.
10. L. Özdamar and G. Ulusoy, “A survey on the resource-constrained project scheduling problem,” IIE Transactions, vol. 27, no. 5, pp. 574–586, 1995.
11. W. Herroelen, B. De Reyck, and E. Demeulemeester, “Resource-constrained project scheduling: a survey of recent developments,” Computers & Operations Research, vol. 25, no. 4, pp. 279–302, 1998.
12. P. Brucker, A. Drexl, R. Möhring, K. Neumann, and E. Pesch, “Resource-constrained project scheduling: notation, classification, models, and methods,” European Journal of Operational Research, vol. 112, no. 1, pp. 3–41, 1999.
13. E. Demeulemeester and W. Herroelen, Project Scheduling: A Research Handbook, Kluwer Academic, 2002.
14. W. Herroelen, E. Demeulemeester, and B. De Reyck, “A classification scheme for project scheduling problems,” in Project Scheduling—Recent Models, Algorithms and Applications, J. Weglarz, Ed., pp.
1–26, Kluwer Academic, Dortrecht, The Netherlands, 1999.
15. E. Demeulemeester, “Minimizing resource availability costs in time-limited project networks,” Management Science, vol. 41, pp. 1590–1598, 1995.
16. A. Drexl and A. Kimms, “Optimization guided lower and upper bounds for the resource investment problem,” Journal of the Operational Research Society, vol. 52, no. 3, pp. 340–351, 2001.
17. C.-C. Hsu and D. Kim, “A new heuristic for the multi-mode resource investment problem,” Journal of the Operational Research Society, vol. 56, no. 4, pp. 406–413, 2005.
18. S. Shadrokh and F. Kianfar, “A genetic algorithm for resource investment project scheduling problem, tardiness permitted with penalty,” European Journal of Operational Research, vol. 181, no. 1, pp. 86–101, 2007.
19. D. Yamashita, V. Armentano, and M. Laguna, “Scatter search for project scheduling with resource availability cost,” European Journal of Operational Research, vol. 169, no. 2, pp. 623–637, 2006.
20. T. Gather, J. Zimmermann, and J.-H. Bartels, “Tree-based methods for resource investment and resource leveling problems,” in Proceedings of the 11th International Workshop on Project Management
and Scheduling (PMS '08), pp. 94–98, Istanbul, Turkey, 2008.
21. M. Shahsavar, S. T. A. Niaki, and A. A. Najafi, “An efficient genetic algorithm to maximize net present value of project payments under inflation and bonus-penalty policy in resource investment problem,” Advances in Engineering Software, vol. 41, no. 7-8, pp. 1023–1030, 2010.
22. M. Bandelloni, M. Tucci, and R. Rinaldi, “Optimal resource leveling using non-serial dynamic programming,” European Journal of Operational Research, vol. 78, no. 2, pp. 162–177, 1994.
23. K. Neumann and J. Zimmermann, “Resource levelling for projects with schedule-dependent time windows,” European Journal of Operational Research, vol. 117, no. 3, pp. 591–605, 1999.
24. K. Neumann and J. Zimmermann, “Procedures for resource leveling and net present value problems in project scheduling with general temporal and resource constraints,” European Journal of Operational Research, vol. 127, no. 2, pp. 425–443, 2000.
25. E. Coughlan, M. Lübbecke, and J. Schulz, “A branch-and-price algorithm for multi-mode resource leveling,” in A Branch-and-Price Algorithm for Multi-mode Resource Leveling, P. Festa, Ed., vol.
6049 of Lecture Notes in Computer Science, pp. 226–238, Springer, Berlin, Germany, 2010.
26. T. Gather, J. Zimmermann, and J.-H. Bartels, “Exact methods for the resource leveling problem,” Journal of Scheduling, vol. 14, no. 6, pp. 557–569, 2011.
27. K. El-Rayes and O. Moselhi, “Resource-driven scheduling of repetitive activities,” Construction Management and Economics, vol. 16, no. 4, pp. 433–446, 1998.
28. M. Vanhoucke, “Work continuity constraints in project scheduling,” Journal of Construction Engineering and Management, vol. 132, no. 1, pp. 14–25, 2006.
29. M. Vanhoucke, “Work continuity optimization for the Westerscheldetunnel project in the Netherlands,” Tijdschrift voor Economie en Management, vol. 52, pp. 435–449, 2007.
30. H. Nübel, “The resource renting problem subject to temporal constraints,” OR Spektrum, vol. 23, no. 3, pp. 359–381, 2001.
31. F. Ballestín, “A genetic algorithm for the resource renting problem with minimum and maximum time lags,” in Evolutionary Computation in Combinatorial Optimization, vol. 4446 of Lecture Notes in
Computer Science, pp. 25–35, Springer, Berlin, Germany, 2007.
32. F. Ballestín, “Different codifications and metaheuristic algorithms for the resource renting problem with minimum and maximum time lags,” in Recent Advances in Evolutionary Computation for
Combinatorial Intelligence, C. Cotta and J. van Hemert, Eds., vol. 153 of Studies in Computational Intelligence, chapter 12, pp. 187–202, Springer, Berlin, Germany, 2008.
33. L. Vandenheede and M. Vanhoucke, “A scatter search algorithm for the resource renting problem,” Working Paper, Ghent University, 2013.
34. D. Smith-Daniels and N. Aquilano, “Using a late-start resource-constrained project schedule to improve project net present value,” Decision Sciences, vol. 18, pp. 617–630, 1987.
35. S. Elmaghraby and W. Herroelen, “The scheduling of activities to maximize the net present value of projects,” European Journal of Operational Research, vol. 49, no. 1, pp. 35–49, 1990.
36. K. Yang, F. Talbot, and J. Patterson, “Scheduling a project to maximize its net present value: an integer programming approach,” European Journal of Operational Research, vol. 64, no. 2, pp. 188–198, 1993.
37. C. Sepil, “Comment on Elmaghraby and Herroelen's ‘the scheduling of activities to maximize the net present value of projects’,” European Journal of Operational Research, vol. 73, no. 1, pp. 185–187, 1994.
38. K. Yang, L. Tay, and C. Sum, “A comparison of stochastic scheduling rules for maximizing project net present value,” European Journal of Operational Research, vol. 85, no. 2, pp. 327–339, 1995.
39. S. Baroum and J. Patterson, “The development of cash flow weight procedures for maximizing the net present value of a project,” Journal of Operations Management, vol. 14, no. 3, pp. 209–227, 1996.
40. O. Icmeli and S. S. Erengüç, “The resource constrained time/cost tradeoff project scheduling problem with discounted cash flows,” Journal of Operations Management, vol. 14, no. 3, pp. 255–275, 1996.
41. J. Pinder and A. Marucheck, “Using discounted cash flow heuristics to improve project net present value,” Journal of Operations Management, vol. 14, no. 3, pp. 229–240, 1996.
42. A. Shtub and R. Etgar, “A branch and bound algorithm for scheduling projects to maximize net present value: the case of time dependent, contingent cash flows,” International Journal of Production Research, vol. 35, no. 12, pp. 3367–3378, 1997.
43. L. Özdamar, G. Ulusoy, and M. Bayyigit, “A heuristic treatment of tardiness and net present value criteria in resource constrained project scheduling,” International Journal of Physical
Distribution & Logistics Management, vol. 28, pp. 805–824, 1998.
44. R. Etgar, A. Shtub, and L. Leblanc, “Scheduling projects to maximize net present value—the case of time-dependent, contingent cash flows,” European Journal of Operational Research, vol. 96, no. 1, pp. 90–96, 1997.
45. R. Etgar and A. Shtub, “Scheduling project activities to maximize the net present value—the case of linear time-dependent cash flows,” International Journal of Production Research, vol. 37, no. 2, pp. 329–339, 1999.
46. E. Goto, T. Joko, K. Fujisawa, N. Katoh, and S. Furusaka, “Maximizing net present value for generalized resource constrained project scheduling problem,” Working Paper, Nomura Research Institute,
Kyoto, Japan, 2000.
47. A. Kimms, “Maximizing the net present value of a project under resource constraints using a Lagrangian relaxation based heuristic with tight upper bounds,” Annals of Operations Research, vol. 102, no. 1–4, pp. 221–236, 2001.
48. C. Schwindt and J. Zimmermann, “A steepest ascent approach to maximizing the net present value of projects,” Mathematical Methods of Operations Research, vol. 53, no. 3, pp. 435–450, 2001.
49. M. Vanhoucke, E. Demeulemeester, and W. Herroelen, “Maximizing the net present value of a project with linear time-dependent cash flows,” International Journal of Production Research, vol. 39, no. 14, pp. 3159–3181, 2001.
50. M. Vanhoucke, E. Demeulemeester, and W. Herroelen, “On maximizing the net present value of a project under renewable resource constraints,” Management Science, vol. 47, no. 8, pp. 1113–1121, 2001.
51. T. Selle and J. Zimmermann, “A bidirectional heuristic for maximizing the net present value of large-scale projects subject to limited resources,” Naval Research Logistics, vol. 50, no. 2, pp. 130–148, 2003.
52. M. Vanhoucke, E. Demeulemeester, and W. Herroelen, “Progress payments in project scheduling problems,” European Journal of Operational Research, vol. 148, no. 3, pp. 604–620, 2003.
53. M. Vanhoucke, “A genetic algorithm for net present value maximization for resource constrained projects,” in Evolutionary Computation in Combinatorial Optimization, vol. 5482 of Lecture Notes in
Computer Science, pp. 13–24, Springer, Berlin, Germany, 2009.
54. M. Vanhoucke, “A scatter search heuristic for maximising the net present value of a resource-constrained project with fixed activity cash flows,” International Journal of Production Research, vol. 48, no. 7, pp. 1983–2001, 2010.
55. M. Mika, G. Waligóra, and J. Węglarz, “Simulated annealing and tabu search for multi-mode resource-constrained project scheduling with positive discounted cash flows and different payment models,” European Journal of Operational Research, vol. 164, no. 3, pp. 639–668, 2005.
56. M. Vanhoucke and E. Demeulemeester, “The application of project scheduling techniques in a real-life environment,” Project Management Journal, vol. 34, pp. 30–42, 2003.
57. C. Schwindt, “Minimizing earliness-tardiness costs of resource-constrained projects,” in Operations Research Proceedings 1999, Selected Papers of the Symposium on Operations Research (SOR '99), Magdeburg, Germany, September 1999, pp. 402–408, Springer, 2000.
58. M. Vanhoucke, E. Demeulemeester, and W. Herroelen, “An exact procedure for the resource-constrained weighted earliness-tardiness project scheduling problem,” Annals of Operations Research, vol. 102, no. 1–4, pp. 179–196, 2001.
59. M. Vanhoucke, “Scheduling an R&D project with quality-dependent time slots,” in Computational Science and Its Applications, vol. 3982 of Lecture Notes in Computer Science, pp. 621–630, Springer,
Berlin, Germany, 2006.
60. R. Kolisch and S. Hartmann, “Experimental investigation of heuristics for resource-constrained project scheduling: an update,” European Journal of Operational Research, vol. 174, no. 1, pp. 23–37, 2006.
61. C. Delisle and D. Olson, “Would the real project management language please stand up?” International Journal of Project Management, vol. 22, no. 4, pp. 327–337, 2004.
62. S. Hartmann and D. Briskorn, “A survey of variants and extensions of the resource-constrained project scheduling problem,” European Journal of Operational Research, vol. 207, no. 1, pp. 1–15, 2010.
63. M. Vanhoucke, Measuring Time—Improving Project Performance Using Earned Value Management, vol. 136 of International Series in Operations Research and Management Science, Springer, 2010.
64. M. Vanhoucke, “On the dynamic use of project performance and schedule risk information during project tracking,” Omega, vol. 39, no. 4, pp. 416–426, 2011.
65. A. Klingel, “Bias in PERT project completion time calculations for a real network,” Management Science, vol. 13, pp. B194–B201, 1966.
66. R. Schonberger, “Why projects are “always” late: a rationale based on manual simulation of a PERT/CPM network,” Interfaces, vol. 11, pp. 65–70, 1981.
67. G. Gutierrez and P. Kouvelis, “Parkinson's law and its implications for project management,” Management Science, vol. 37, no. 8, pp. 990–1001, 1991.
68. D. Malcolm, J. Roseboom, C. Clark, and W. Fazar, “Application of a technique for a research and development program evaluation,” Operations Research, vol. 7, pp. 646–669, 1959.
69. S. AbouRizk, D. Halpin, and J. Wilson, “Fitting beta distributions based on sample data,” Journal of Construction Engineering and Management, vol. 120, no. 2, pp. 288–305, 1994.
70. T. Williams, “Criticality in stochastic networks,” Journal of the Operational Research Society, vol. 43, no. 4, pp. 353–357, 1992.
71. D. Johnson, “The triangular distribution as a proxy for the beta distribution in risk analysis,” Journal of the Royal Statistical Society D, vol. 46, no. 3, pp. 387–398, 1997.
72. M. E. Kuhl, E. K. Lada, N. M. Steiger, M. A. Wagner, and J. R. Wilson, “Introduction to modeling and generating probabilistic input processes for simulation,” in Proceedings of the Winter
Simulation Conference, S. Henderson, B. Biller, M. Hsieh, J. Shortle, J. Tew, and R. Barton, Eds., pp. 63–76, Institute of Electrical and Electronics Engineers, New Jersey, NJ, USA, 2007.
73. M. Vanhoucke, “Using activity sensitivity and network topology information to monitor project time performance,” Omega, vol. 38, no. 5, pp. 359–370, 2010.
74. D. Trietsch, L. Mazmanyan, L. Gevorgyan, and K. R. Baker, “Modeling activity times by the Parkinson distribution with a lognormal core: theory and validation,” European Journal of Operational
Research, vol. 216, no. 2, pp. 386–396, 2012.
75. T. Williams, “Towards realism in network simulation,” Omega, vol. 27, no. 3, pp. 305–314, 1999.
76. S. Elmaghraby and W. Herroelen, “On the measurement of complexity in activity networks,” European Journal of Operational Research, vol. 5, pp. 223–234, 1980.
77. M. Vanhoucke, J. Coelho, D. Debels, B. Maenhout, and L. Tavares, “An evaluation of the adequacy of project network generators with systematically sampled networks,” European Journal of Operational Research, vol. 187, no. 2, pp. 511–524, 2008.
78. J. Patterson, “Project scheduling: the effects of problem structure on heuristic performance,” Naval Research Logistics, vol. 23, no. 1, pp. 95–123, 1976.
79. L. Tavares, J. Ferreira, and J. Coelho, “The risk of delay of a project in terms of the morphology of its network,” European Journal of Operational Research, vol. 119, no. 2, pp. 510–537, 1999.
80. L. Tavares, J. Ferreira, and J. Coelho, “A surrogate indicator of criticality for stochastic networks,” International Transactions in Operational Research, vol. 11, pp. 193–202, 2004.
81. E. Demeulemeester, B. Dodin, and W. Herroelen, “A random activity network generator,” Operations Research, vol. 41, no. 5, pp. 972–980, 1993. View at Scopus
82. R. Kolisch, A. Sprecher, and A. Drexl, “Characterization and generation of a general class of resource-constrained project scheduling problems,” Management Science, vol. 41, pp. 1693–1703, 1995.
83. C. Schwindt, “A new problem generator for different resource-constrained project scheduling problems with minimal and maximal time lags,” WIOR-Report 449, Institut für Wirtschaftstheorie und
Operations Research, University of Karlsruhe, 1995.
84. M. Agrawal, S. Elmaghraby, and W. S. Herroelen, “DAGEN: a generator of testsets for project activity nets,” European Journal of Operational Research, vol. 90, no. 2, pp. 376–382, 1996.
85. L. Tavares, Advanced Models for Project Management, Kluwer Academic, Dordrecht, The Netherlands, 1999.
86. A. Drexl, R. Nissen, J. H. Patterson, and F. Salewski, “ProGen/πχ—an instance generator for resource-constrained project scheduling problems with partially renewable resources and further extensions,” European Journal of Operational Research, vol. 125, no. 1, pp. 59–72, 2000.
87. E. Demeulemeester, M. Vanhoucke, and W. Herroelen, “Rangen: a random network generator for activity-on-the-node networks,” Journal of Scheduling, vol. 6, no. 1, pp. 17–38, 2003.
88. C. Akkan, A. Drexl, and A. Kimms, “Network decomposition-based benchmark results for the discrete time-cost tradeoff problem,” European Journal of Operational Research, vol. 165, no. 2, pp. 339–358, 2005.
89. T. Pascoe, “Allocation of resources—CPM,” Revue Française de Recherche Opérationnelle, vol. 38, pp. 31–38, 1966.
90. E. Davies, “An experimental investigation of resource allocation in multiactivity projects,” Operational Research Quarterly, vol. 24, no. 4, pp. 587–591, 1973.
91. R. Kaimann, “Coefficient of network complexity,” Management Science, vol. 21, pp. 172–177, 1974.
92. R. Kaimann, “Coefficient of network complexity: erratum,” Management Science, vol. 21, pp. 1211–1212, 1975.
93. E. Davis, “Project network summary measures constrained-resource scheduling,” AIIE Transactions, vol. 7, no. 2, pp. 132–142, 1975.
94. B. De Reyck and W. Herroelen, “On the use of the complexity index as a measure of complexity in activity networks,” European Journal of Operational Research, vol. 91, no. 2, pp. 347–366, 1996.
95. W. Herroelen and B. De Reyck, “Phase transitions in project scheduling,” Journal of the Operational Research Society, vol. 50, no. 2, pp. 148–156, 1999.
96. R. Alvarez-Valdes and J. Tamarit, “Heuristic algorithms for resource-constrained project scheduling: a review and empirical analysis,” in Advances in Project Scheduling, R. Slowinski and J.
Weglarz, Eds., Elsevier, Amsterdam, The Netherlands, 1989.
97. A. Mastor, “An experimental and comparative evaluation of production line balancing techniques,” Management Science, vol. 16, pp. 728–746, 1970.
98. E. Kao and M. Queyranne, “On dynamic programming methods for assembly line balancing,” Operations Research, vol. 30, pp. 375–390, 1982.
99. A. Thesen, “Measures of the restrictiveness of project networks,” Networks, vol. 7, no. 3, pp. 193–208, 1977.
100. E. Dar-El, “MALB—a heuristic technique for balancing large single model assembly lines,” AIIE Transactions, vol. 5, no. 4, pp. 343–356, 1973.
101. B. De Reyck, “On the use of the restrictiveness as a measure of complexity for resource-constrained project scheduling,” Research Report 9535, Katholieke Universiteit Leuven, Leuven, Belgium,
102. W. Bein, J. Kamburowski, and M. Stallmann, “Optimal reduction of two-terminal directed acyclic graphs,” SIAM Journal on Computing, vol. 21, pp. 1112–1129, 1992.
103. L. Ford and D. Fulkerson, Flows in Networks, Princeton University Press, Princeton, NJ, USA, 1962.
104. L. Tavares, J. Ferreira, and J. Coelho, “A comparative morphologic analysis of benchmark sets of project networks,” International Journal of Project Management, vol. 20, no. 6, pp. 475–485, 2002.
105. S. Elmaghraby, Activity Networks: Project Planning and Control by Network Models, John Wiley & Sons, New York, NY, USA, 1977.
106. D. Hulett, “Schedule risk analysis simplified,” Project Management Network, vol. 10, pp. 23–30, 1996.
107. D. Kuchta, “Use of fuzzy numbers in project risk (criticality) assessment,” International Journal of Project Management, vol. 19, no. 5, pp. 305–310, 2001.
108. S. Elmaghraby, “On criticality and sensitivity in activity networks,” European Journal of Operational Research, vol. 127, no. 2, pp. 220–238, 2000.
109. J. Cho and B. Yum, “An uncertainty importance measure of activities in PERT networks,” International Journal of Production Research, vol. 35, no. 10, pp. 2737–2757, 1997.
110. S. Elmaghraby, Y. Fathi, and M. Taner, “On the sensitivity of project variability to activity mean duration,” International Journal of Production Economics, vol. 62, no. 3, pp. 219–232, 1999.
111. G. Gutierrez and A. Paul, “Analysis of the effects of uncertainty, risk-pooling, and subcontracting mechanisms on project performance,” Operations Research, vol. 48, no. 6, pp. 927–938, 2000.
112. M.-J. Yao and W.-M. Chu, “A new approximation algorithm for obtaining the probability distribution function for project completion time,” Computers and Mathematics with Applications, vol. 54, no. 2, pp. 282–295, 2007.
113. T. Williams, “A classified bibliography of recent research relating to project risk management,” European Journal of Operational Research, vol. 85, no. 1, pp. 18–38, 1995.
114. PMBOK, A Guide to the Project Management Body of Knowledge, Project Management Institute, Newtown Square, Pa, USA, 3rd edition, 2004.
115. M. Vanhoucke and S. Vandevoorde, “A simulation and evaluation of earned value metrics to forecast the project duration,” Journal of the Operational Research Society, vol. 58, no. 10, pp.
1361–1374, 2007. View at Publisher · View at Google Scholar · View at Scopus
116. S. Vandevoorde and M. Vanhoucke, “A comparison of different project duration forecasting methods using earned value metrics,” International Journal of Project Management, vol. 24, no. 4, pp.
289–302, 2006. View at Publisher · View at Google Scholar · View at Scopus
117. M. Vanhoucke and S. Vandevoorde, “Measuring the accuracy of earned value/earned schedule forecasting predictors,” The Measurable News, pp. 26–30, 2007.
118. M. Vanhoucke, “Project tracking and control: can we measure the time?” Projects and Profits, pp. 35–40, 2008.
119. M. Vanhoucke and S. Vandevoorde, “Earned value forecast accuracy and activity criticality,” The Measurable News, no. 3, pp. 13–16, 2008.
120. M. Vanhoucke and S. Vandevoorde, “Forecasting a project’s duration under various topological structures,” The Measurable News, no. 1, pp. 26–30, 2009.
121. M. Vanhoucke, “Measuring time: an earned value performance management study,” The Measurable News, vol. 1, pp. 10–14, 2010.
122. M. Vanhoucke, “Measuring the efficiency of project control using fictitious and empirical project data,” International Journal of Project Management, vol. 30, no. 2, pp. 252–263, 2012. View at
Publisher · View at Google Scholar · View at Scopus
123. Q. Fleming and J. Koppelman, Earned Value Project Management, Project Management Institute, Newtown Square, Pa, USA, 3rd edition, 2005.
124. Q. Fleming and J. Koppelman, “What's your project's real price tag?” Harvard Business Review, vol. 81, no. 9, pp. 20–21, 2003. View at Scopus
125. W. Lipke, “Schedule is different,” The Measurable News, pp. 31–34, 2003.
126. F. Anbari, “Earned value project management method and extensions,” Project Management Journal, vol. 34, no. 4, pp. 12–23, 2003.
127. D. Jacob, “Forecasting project schedule completion with earned value metrics,” The Measurable News, vol. 1, pp. 7–9, 2003.
128. D. Jacob and M. Kane, “Forecasting schedule completion using earned value metrics? Revisited,” The Measurable News, vol. l, pp. 11–17, 2004.
129. S. Book, “Correction note: “earned schedule” and its possible unreliability as an indicator,” The Measurable News, pp. 22–24, 2006.
130. S. Book, “‘Earned schedule’ and its possible unreliability as an indicator,” The Measurable News, pp. 24–30, 2006.
131. D. Jacob, “Is “earned schedule” an unreliable indicator?” The Measurable News, pp. 15–21, 2006.
132. W. Lipke, “Applying earned schedule to critical path analysis and more,” The Measurable News, pp. 26–30, 2006.
133. W. Lipke, O. Zwikael, K. Henderson, and F. Anbari, “Prediction of project outcome. The application of statistical methods to earned value management and earned schedule performance indexes,”
International Journal of Project Management, vol. 27, no. 4, pp. 400–407, 2009. View at Publisher · View at Google Scholar · View at Scopus
134. R. Elshaer, “Impact of sensitivity information on the prediction of project’s duration using earned schedule method,” International Journal of Project Management, vol. 31, pp. 579–558, 2013.
135. W. Lipke and J. Vaughn, “Statistical process control meets earned value,” Cross Talk, vol. 13, pp. 28–29, 2000.
136. G. T. Bauch and C. A. Chung, “A statistical project control tool for engineering managers,” Project Management Journal, vol. 32, pp. 37–44, 2001.
137. Q. Wang, N. Jiang, L. Gou, M. Che, and R. Zhang, “Practical experiences of cost/schedule measure through earned value management and statistical process control,” in Software Process Change,
vol. 3966 of Lecture Notes in Computer Science, pp. 348–354, Springer, Berlin, Germany, 2006.
138. S. S. Leu and Y. C. Lin, “Project performance evaluation based on statistical process control techniques,” Journal of Construction Engineering and Management, vol. 134, no. 10, pp. 813–819,
2008. View at Publisher · View at Google Scholar · View at Scopus
139. National Research Council, Progress in Improving Project Management at the Department of Energy, National Academy Press, Washington, DC, USA, 2001.
140. J. Colin and M. Vanhoucke, “Setting tolerance limits for statistical project control using earned value management,” Working Paper, Ghent University, 2013, http://www.projectmanagement.ugent.be/
141. M. Vanhoucke, “Static and dynamic determinants of earned value based time forecast accuracy,” in Handbook of Research on Technology Project Management, Planning, and Operations, T. Kidd, Ed.,
pp. 361–374, Information Science Reference, 2009.
142. J. Colin and M. Vanhoucke, “A multivariate approach to statistical project control using earned value management,” Working Paper, Ghent University, 2013, http://www.projectmanagement.ugent.be/.
143. M. Vanhoucke, “Dynamic scheduling: if time is money, accuracy pays,” in Proceedings of the 12th International Conference on Project Management and Scheduling, vol. 1, pp. 45–48, 2010.
144. B. Kim and K. Reinschmidt, “Probabilistic forecasting of project duration using kalman filter and the earned value method,” Journal of Construction Engineering and Management, vol. 136, no. 8,
pp. 834–843, 2010. View at Publisher · View at Google Scholar · View at Scopus
145. B. Kim and K. Reinschmidt, “Probabilistic forecasting of project duration using bayesian inference and the beta distribution,” Journal of Construction Engineering and Management, vol. 135, no.
3, pp. 178–186, 2009. View at Publisher · View at Google Scholar · View at Scopus
146. D. Christensen and S. Heise, “Cost performance index stability,” National Contract Management Journal, vol. 25, pp. 7–15, 1993.
147. D. Christensen and K. Payne, “Cost performance index stability—fact or fiction?” Journal of Parametrics, vol. 10, pp. 27–40, 1992.
148. K. Henderson and O. Zwikael, “Does project performance stability exist? A re-examination of CPI and evaluation of SPI(t) stability,” CrossTalk, vol. 21, no. 4, pp. 7–13, 2008. View at Scopus
149. W. Lipke, “Connecting earned value to the schedule,” The Measurable News, vol. 1, pp. 6–16, 2004.
150. M. Vanhoucke, “The impact of project schedule adherence and rework on the duration forecast accuracy of earned value metrics,” in Project Management: Practices, Challenges and Developments, Nova
Science, 2013.
151. M. Vanhoucke, “Operations research and dynamic project scheduling: when research meets practice,” in Proceedings of the 4th International Conference on Applied Operational Research, P.
Luangpaiboon, M. Moz, and V. Dedoussis, Eds., Lecture Notes in Management Science, pp. 1–8, Bangkok, Thailand, 2012.
152. M. Vanhoucke, A. Vereecke, and P. Gemmel, “The project scheduling game (PSG): simulating time/cost trade-offs in projects,” Project Management Journal, vol. 51, pp. 51–59, 2005.
153. M. Wauters and M. Vanhoucke, “A study on complexity and uncertainty perception and solution strategies for the time/cost trade-off problem,” Working Paper, Ghent University, 2013, http://
154. M. Vanhoucke, “The Art of Project Management: A Story about Work and Passion,” 2013, http://www.or-as.be/.
155. M. Vanhoucke, “Welcome to OR&S! Where students, academics and professionals come together,” pp. 1–15, 2013, http://www.or-as.be/books/research. | {"url":"http://www.hindawi.com/journals/isrn.computational.mathematics/2013/513549/","timestamp":"2014-04-18T06:19:11Z","content_type":null,"content_length":"187342","record_id":"<urn:uuid:34a9ffc0-f127-48db-804e-3eb5821f567a>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00615-ip-10-147-4-33.ec2.internal.warc.gz"} |
Some cute examples of computability theory applied to specific problems.
1. We now know that, if an alien with enormous computational powers came to Earth, it could prove to us whether White or Black has the winning strategy in chess. To be convinced of the
proof, we would not have to trust the alien or its exotic technology, and we would not have to spend billions of years analyzing one move sequence after another. We’d simply have to
engage in a short conversation with the alien about the sums of certain polynomials over finite fields.
2. There’s a finite (and not unimaginably-large) set of boxes, such that if we knew how to pack those boxes into the trunk of your car, then we’d also know a proof of the Riemann Hypothesis.
Indeed, every formal proof of the Riemann Hypothesis with at most (say) a million symbols corresponds to some way of packing the boxes into your trunk, and vice versa. Furthermore, a list
of the boxes and their dimensions can be feasibly written down.
3. Supposing you do prove the Riemann Hypothesis, it’s possible to convince someone of that fact, without revealing anything other than the fact that you proved it. It’s also possible to
write the proof down in such a way that someone else could verify it, with very high confidence, having only seen 10 or 20 bits of the proof.
This article by Scott Aaronson is an interesting read! Here’s another example:
It would be great to prove that RSA is unbreakable by classical computers. But every known technique for proving that would, if it worked, simultaneously give an algorithm for breaking RSA! For example, if you proved that RSA with an n-bit key took n^5 steps to break, you would've discovered an algorithm for breaking it in 2^(n^(1/5)) steps. If you proved that RSA took 2^(n^(1/3)) steps to break, you would've discovered an algorithm for breaking it in n^((log n)^2) steps. As you show the problem to be harder, you simultaneously show it to be easier.
And an excerpt on the relevance of these examples:
So what are they then? Maybe it’s helpful to think of them as “quantitative epistemology”: discoveries about the capacities of finite beings like ourselves to learn mathematical truths. On this
view, the theoretical computer scientist is basically a mathematical logician on a safari to the physical world: someone who tries to understand the universe by asking what sorts of mathematical
questions can and can’t be answered within it. Not whether the universe is a computer, but what kind of computer it is! Naturally, this approach to understanding the world tends to appeal most to
people for whom math (and especially discrete math) is reasonably clear, whereas physics is extremely mysterious.
In my opinion, one of the biggest challenges for our time is to integrate the enormous body of knowledge in theoretical computer science (or quantitative epistemology, or whatever you want to
call it) with the rest of what we know about the universe. In the past, the logical safari mostly stayed comfortably within 19th-century physics; now it’s time to venture out into the early 20th
century. Indeed, that’s exactly why I chose to work on quantum computing: not because I want to build quantum computers (though I wouldn’t mind that), but because I want to know what a universe
that allows quantum computers is like. | {"url":"http://intothecontinuum.tumblr.com/tagged/Scott-Aaronson","timestamp":"2014-04-19T14:28:43Z","content_type":null,"content_length":"45617","record_id":"<urn:uuid:bccc8364-be71-4243-a7f0-02d7cc32a00a>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00538-ip-10-147-4-33.ec2.internal.warc.gz"} |
Computation of the regular continued fraction for Euler’s constant
We consider Ramanujan's contribution to formulas for Euler's constant γ. For example, in his second notebook Ramanujan states that (in modern notation) ∑_{k=1}^∞ ((−1)^(k−1)/(nk)) (x^k/k!)^n = ln x + γ + o(1) as x → ∞. This is known to be correct for the case n = 1, but incorrect for n ≥ 2. We consider the case n = 2. We also suggest a different, correct generalization of the case n = 1.
- Bull. Eur. Assoc. Theor. Comput. Sci. EATCS , 2004
Whether ’tis nobler in the mind to suffer The slings and arrows of outrageous fortune, Or to take arms against a sea of troubles And by opposing end them? Hamlet 3/1, by W. Shakespeare In this paper
we propose a new perspective on the evolution and history of the idea of mathematical proof. Proofs will be studied at three levels: syntactical, semantical and pragmatical. Computer-assisted proofs
will be given special attention. Finally, in a highly speculative part, we will anticipate the evolution of proofs under the assumption that the quantum computer will materialize. We will argue that
there is little ‘intrinsic ’ difference between traditional and ‘unconventional ’ types of proofs. 2 Mathematical Proofs: An Evolution in Eight Stages Theory is to practice as rigour is to vigour. D.
E. Knuth Reason and experiment are two ways to acquire knowledge. For a long time mathematical
, 2009
doi:10.4169/193009809X468689 The mathematical constant γ = lim_{n→∞} (∑_{k=1}^{n} 1/k − ln(n)) = 0.5772156…, known as Euler's constant, is not as well known as its cousins π, e, i, but is still important enough to warrant serious consideration in the circles of applied mathematics, calculus, and number theory. Some authors will occasionally refer to γ as the Euler-Mascheroni constant, so named after the Italian geometer Lorenzo Mascheroni (1750–1800), who actually introduced the symbol γ for the constant (although there is controversy about this claim) and also computed, though with error, the first 32 digits [16, 34]. Sometimes one will find in older texts the symbols C (this was Euler's constant of integration) and A (also from Mascheroni) to represent the constant, but these notations seem to have disappeared in the modern era [27]. Our aim in this article is to present a survey of γ that is both manageable by, and enlightening to, those who favor mathematics at the undergraduate level. To try and follow in the footsteps of the big boys π and e is quite a chore, but this brief historical description of γ and colorful portfolio of applications and surprising appearances in a multitude of settings is both impressive and mathematically educational. Defining and evaluating the constant: Calculus students can approximate the integral ∫_1^n (1/x) dx = ln(n) by inscribed and circumscribed rectangles, and hence obtain the inequalities (for any integer n > 1) 1/n < ∑_{k=1}^{n} 1/k − ln(n) < 1, so if the limit exists, 0 ≤ lim_{n→∞} (∑_{k=1}^{n} 1/k − ln(n)) ≤ 1.
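As a quick aside (an editorial sketch, not part of the article being abstracted), the defining limit above is easy to check numerically in Python:

```python
import math

def euler_gamma_approx(n):
    # H_n - ln(n), where H_n = sum_{k=1}^{n} 1/k; this difference
    # decreases monotonically toward Euler's constant gamma.
    h_n = sum(1.0 / k for k in range(1, n + 1))
    return h_n - math.log(n)

# The error of this crude estimate behaves like 1/(2n), so n = 10^6
# already agrees with gamma = 0.5772156649... to about six decimals.
approximation = euler_gamma_approx(10**6)
```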
, 2008
"... The object of mathematical rigour is to sanction and ..."
, 2013
Abstract. This paper has two parts. The first part surveys Euler's work on the constant γ = 0.57721… bearing his name, together with some of his related work on the gamma function, values of the
zeta function, and divergent series. The second part describes various mathematical developments involving Euler’s constant, as well as another constant, the Euler–Gompertz constant. These
developments include connections with arithmetic functions and the Riemann hypothesis, and with sieve methods, random permutations, and random matrix products. It also includes recent results on
Diophantine approximation and transcendence related to Euler’s constant. Contents
Why are conditionally convergent series interesting? While mathematicians might undoubtably give many answers to such a question, Riemann’s theorem on rearrangements of conditionally convergent
series would probably rank near the top of most responses. Conditionally convergent series are those series that converge as written, but do not converge when each of their terms is replaced by the
corresponding absolute value. The nineteenth-century mathematician Georg Friedrich Bernhard Riemann (1826-1866) proved that such series could be rearranged to converge to any prescribed sum. Almost
every calculus text contains a chapter on infinite series that distinguishes between absolutely and conditionally convergent series. Students see the usefulness of studying absolutely convergent
series since most convergence tests are for positive series, but to them conditionally convergent series seem to exist simply to provide good test questions for the instructor. This is unfortunate
since the proof of Riemann’s theorem is a model of clever simplicity that produces an exact algorithm. It is clear, however, that even with such a simple example as the alternating harmonic series
one cannot hope for a closed form solution to the problem of rearranging it to sum to an arbitrary real number. Nevertheless, it is an old result that for any real number of the form ln | {"url":"http://citeseerx.ist.psu.edu/showciting?cid=774896","timestamp":"2014-04-18T00:50:37Z","content_type":null,"content_length":"25226","record_id":"<urn:uuid:369f26f5-9a1e-4eb7-b15d-63925b003b0f>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00104-ip-10-147-4-33.ec2.internal.warc.gz"} |
Alt-Ergo @ OCamlPro: Two months later
As announced in a previous post, I joined OCamlPro at the beginning of September and I started working on Alt-Ergo. Here is a report presenting the tool and the work we have done during the last two months.
Alt-Ergo at a Glance
Alt-Ergo is an open source automatic theorem prover based on SMT technology. It is developed at Laboratoire de Recherche en Informatique, Inria Saclay Ile-de-France and CNRS since 2006. It is capable
of reasoning in a combination of several built-in theories such as uninterpreted equality, integer and rational arithmetic, arrays, records, enumerated data types and AC symbols. It also handles
quantified formulas and has a polymorphic first-order native input language. Alt-Ergo is written in OCaml. Its core has been formally proved in the Coq proof assistant.
Alt-Ergo has been involved in a qualification process (DO-178C) by Airbus Industrie. During this process, a qualification kit has been produced. It was composed of a technical document with tool
requirements (TR) that gives a precise description of each part of the prover, a companion document (~ 450 pages) of tests, and an instrumented version of the tool with a TR trace mechanism.
Alt-Ergo Spider Web
Alt-Ergo is mainly used to prove the validity of mathematical formulas generated by program verification platforms. It was originally designed and tuned to prove formulas generated by the Why tool.
Now, it is used by different tools and in various contexts, in particular via the Why3 platform. As shown by the diagram below, Alt-Ergo is used to prove formulas coming from a variety of verification tools.
Moreover, Alt-Ergo is used in the context of cryptographic protocols verification by EasyCrypt and in SMT-based model checking by Cubicle.
Some "Hello World" Examples
Below are some basic formulas written in the Why input syntax. Each example is proved valid by Alt-Ergo. The first formula is very simple and is proved with straightforward arithmetic reasoning.
"goal g2" requires reasoning in the combination of functional arrays and linear arithmetic, etc. The last example contains a quantified sub-formula with a polymorphic variable x. Generating four
ground instances of this axiom where x is replaced by 1, true, 1.4 and a respectively is necessary to prove "goal g5".
(*** Simple arithmetic formula ***)
goal g1 : 1 + 2 = 3
(*** Theories of functional arrays and linear integer arithemtic ***)
logic a : (int, int) farray
goal g2 : forall i:int. i = 6 -> a[i<-4][5] = a[i-1]
(*** Theories of records and linear integer arithmetic ***)
type my_record = { a : int ; b : int }
goal g3 : forall v,w : my_record. 2 * v.a = 10 -> { v with b = 5} = w -> w.a = 5
(*** theories of enumerated data types and uninterpreted equality ***)
type my_sum = A | B | C
logic P : 'a -> prop
goal g4 : forall x : my_sum. P(C) -> x<>A and x<>B -> P(x)
(*** formula with quantifiers and polymorphism ***)
axiom a: forall x : 'a. P(x)
goal g5 : P(1) and P(true) and P(1.4) and P(a)
$$ alt-ergo examples.why
File "examples.why", line 2, characters 1-21:Valid (0.0120) (0)
File "examples.why", line 6, characters 1-53:Valid (0.0000) (1)
File "examples.why", line 10, characters 1-81:Valid (0.0000) (3)
File "examples.why", line 15, characters 1-59:Valid (0.0000) (6)
File "examples.why", line 19, characters 1-47:Valid (0.0000) (10)
Alt-Ergo @ OCamlPro
On September 20, we officially announced the distribution and support of Alt-Ergo by OCamlPro and launched its new website. This site allows users to download public releases of the prover and to discover available support offerings. It'll be enriched with additional content progressively. Alt-Ergo's former web page, hosted by LRI, is now devoted to theoretical foundations and academic
aspects of the solver.
We have also published a new public release (version 0.95.2) of Alt-Ergo. The main changes in this minor release are: source code reorganization into sub-directories, simplification of quantifiers
instantiation heuristics, GUI improvement to reduce latency when opening large files, as well as various bug fixes.
In addition to the re-implementation and the simplification of some parts of the prover (e.g. internal literals representation, theories combination architecture, ...), the main novelties of the
current master branch of Alt-Ergo are the following:
• The user can now specify an external (plug-in) SAT solver instead of the default DFS-based engine. We experimentally provide a CDCL solver based on miniSAT that can be plugged in to perform satisfiability reasoning. This solver is more efficient when formulas contain a rich propositional structure.
• We started the development of a new tool, called Ctrl-Alt-Ergo, which embodies our expertise by implementing the most interesting strategies of Alt-Ergo. The experiments we ran with our
internal benchmarks are very promising, as shown below.
Experimental Evaluation
We compared the performances of latest public releases of Alt-Ergo with the current master branch of both Alt-Ergo and Ctrl-Alt-Ergo (commit ce0bba61a1fd234b85715ea2c96078121c913602) on our internal
test suite composed of 16209 formulas. Timeout was set to 60 seconds and memory was limited to 2GB per formula. Benchmarks descriptions and the results of our evaluation are given below.
Why3 Benchmark
This benchmark contains 2470 formulas generated from Why3's gallery of WhyML programs. Some of these formulas are out of scope of current SMT solvers. For instance, the proof of some of them requires
inductive reasoning.
SPARK Hi-lite Benchmark
This benchmark is composed of 3167 formulas generated from Ada programs used during Hi-lite project. It is known that some formulas are not valid.
BWare Benchmark
This test-suite contains 10572 formulas translated from proof obligations generated by Atelier-B. These proof obligations are issued from industrial B projects and are proved valid.
                   Alt-Ergo 0.95.1    Alt-Ergo 0.95.2    Alt-Ergo master*    Ctrl-Alt-Ergo master*
Release date       Mar. 05, 2013      Sep. 20, 2013      -                   -
Why3 benchmark     2270 (91.90 %)     2288 (92.63 %)     2308 (93.44 %)      2363 (95.67 %)
SPARK benchmark    2351 (74.23 %)     2360 (74.52 %)     2373 (74.93 %)      2404 (75.91 %)
BWare benchmark    5609 (53.05 %)     9437 (89.26 %)     10072 (95.27 %)     10373 (98.12 %)
(*) commit ce0bba61a1fd234b85715ea2c96078121c913602 | {"url":"http://www.ocamlpro.com/blog/2013/10/22/alt-ergo-evaluation-october-2013.html","timestamp":"2014-04-18T22:17:27Z","content_type":null,"content_length":"27728","record_id":"<urn:uuid:72a2caac-5cca-4efd-b912-6f7a04a7adc2>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00043-ip-10-147-4-33.ec2.internal.warc.gz"} |
Homework Help
Posted by Allison A on Monday, November 28, 2011 at 7:11pm.
find the vertex, the line of symmetry, and the minimum/maximum value of the quadratic function, and graph the function f(x) = -2x^2+2x+2.
• Math - Steve, Monday, November 28, 2011 at 7:25pm
A parabola has either a minimum or a maximum, but not both.
The axis of symmetry, which contains the vertex, is the line
x = -b/2a = -2/(-4) = 1/2
You should now be able to get the max/min and thus the vertex
• Math - MathMate, Monday, November 28, 2011 at 7:32pm
Express the function in canonical form by completing squares:
if f(x)=a(x-h)²+k,
a>0 => the parabola is concave up, hence a minimum exists
a<0 => concave down, hence a maximum.
The location of maximum/minimum is given by the point (h,k).
The line of symmetry is x=h.
To proceed with completing the squares, extract and factor out the coefficient of x²:
f(x) = -2(x²-x) + 2 = -2[(x-1/2)² - 1/4] + 2 = -2(x-1/2)² + 5/2
So the curve is concave down, h=1/2, k=5/2 and (h,k)=(1/2,5/2) is a maximum.
The line of symmetry is x=1/2.
• Math - Allison A, Monday, November 28, 2011 at 9:08pm
How do I find the y coordinate and the x coordinate?
Is the vertex (1/2, 5/2)?
• Math - MathMate, Monday, November 28, 2011 at 10:23pm
By completing squares, we get the two parameters h and k which represent the x- and y-coordinates of the vertex.
In this case, they are (1/2,5/2), as you can see from:
f(x) = -2[(x-1/2)²]+5/2
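For anyone who wants to double-check the thread numerically, here is a small editorial Python sketch (not part of the original answers) applying the vertex formula h = -b/(2a), k = f(h):

```python
def vertex(a, b, c):
    # Vertex of f(x) = a*x^2 + b*x + c: the axis of symmetry is
    # x = -b/(2a), and the extreme value is f evaluated there.
    h = -b / (2 * a)
    k = a * h * h + b * h + c
    return h, k

h, k = vertex(-2, 2, 2)
# For f(x) = -2x^2 + 2x + 2 this gives h = 0.5 and k = 2.5, i.e. the
# vertex (1/2, 5/2); since a = -2 < 0, the parabola opens downward
# and the vertex is a maximum.
```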
stupid, simple question on using accuracy / estimated error value
I would think that the estimated error value from, say, ntptime
describes the error range; i.e., that a given timestamp is...
+/- estimate/2 # +/- ((float)estimate)/2.0
(Where 'estimate' is the error estimate from, say, ntptime.)
....but I can't explicitly confirm this anywhere. Is there some
footnote in the FAQ or other doc I've missed that covers this? | {"url":"http://fixunix.com/ntp/67748-stupid-simple-question-using-accuracy-estimated-error-value.html","timestamp":"2014-04-17T12:56:06Z","content_type":null,"content_length":"23858","record_id":"<urn:uuid:92256754-0822-4226-914a-4b15ff49643f>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00359-ip-10-147-4-33.ec2.internal.warc.gz"} |
In a children's summer camp there are 150 children; 75 of them are not participating in activities, 50 of them have families, and 100 of them like to pass time. What is the largest possible number of children in the camp that are not participating, that do not have families, and that pass time? a) 25 b) 50 c) 75 d) 100
CARA Outreach - YSI 1998 Pictures - Parallax
The three members of the reporting group are pictured here laying out the base of the triangle used for the Parallax lab. The three members are Shaconda Dixon (standing front left), Tamika Robinson
(back center), and Micheal Duncan (kneeling, front right). This triangle will be used with geometry to determine the distance to a tree on the other side of the North Lawn of Yerkes Observatory.
After laying out the base of the triangle, the students then measure the length of the base. Tamika Robinson (kneeling front left) is measuring the distance while Micheal Duncan and Shaconda Dixon
hold down the far end.
The reporting group with Shaconda Dixon (left back), Micheal Duncan (kneeling front), Brad Holden (center) and Tamika Robinson (right).
The target of the parallax lab was the skinny tree in the center. The tree appears to shift as the photographer moved from one end of the base of the triangle (represented by the string laid out in
earlier photographs) to the other end of the base.
The students measured the two angles connecting the base of the triangle with the tree using the large protractor pictured here. This right angle was made using a square box and the students verified the angle with the protractor.
Johns Creek, GA Algebra Tutor
Find a Johns Creek, GA Algebra Tutor
...First, I earned my degree in Art at the State University of Santo Domingo, Dominican Republic (U.A.S.D.). I later earned a bilingual teaching degree from U.N.P.H.U., also in Santo Domingo.
After teaching for 6 years in the D.R., I moved to Pittsburgh where I taught for the following 10 years. While in Pittsburgh, I received the "Walmart Teacher of the Year" award.
21 Subjects: including algebra 1, reading, Spanish, ESL/ESOL
...I make sure that you understand what the problem is asking you to find and then help you find the information needed to find that answer. This is my 7th year teaching. I have 4 years experience
in high school math, 1 year in middle school all subjects, and two years teaching 5th grade.
21 Subjects: including algebra 1, algebra 2, calculus, ACT Math
...But I know that everyone doesn't love math the way that I do. It is my mission to help students understand math, to see how it fits together, and to become independent, successful learners. I
know that takes time and consistency, both of which I am more than willing to provide.
8 Subjects: including algebra 1, algebra 2, statistics, trigonometry
...I tutored my peers in a university sponsored study hall from Fall 2010 to Spring 2012. These subjects varied from calculus, chemistry, biology, general engineering and general education
courses. I am confident I can help you sharpen your study skills so you can truly understand the material.
15 Subjects: including algebra 1, algebra 2, chemistry, reading
EXPERIENCED Math tutor - Online Specialist - algebra, trigonometry, calculus, physics, ESOL, Portuguese, etc. ALL ages. Patient, creative, and knowledgeable; turns confusion into understanding quickly and easily.
32 Subjects: including algebra 2, algebra 1, physics, reading
Turing/Solution 1
var n : int %User input
var ctr : int := 0 %Counter variable
var gotit : boolean %Stores if the fractorial has been found
get n
ctr := 0 %ctr is NEVER actually used as zero, since that would cause a division by 0 error.
loop
    gotit := true %Initially assume that fractorial has been found, and change to false if found otherwise
    ctr += n %Increase counter by n
    for i : 1 .. n %Try ctr/1, ctr/2, ctr/3... until ctr/n. If all are whole numbers, then answer has been found
        if ctr / i not= ctr div i then
            gotit := false
        end if
    end for
    exit when gotit = true
end loop
put "Fractorial (", n, ") = ", ctr
While you might try ctr += 1 initially, it will quickly become evident that this is too slow. Through logical reasoning and some experimentation, you should be able to discover that the fractorial of
n is a multiple of n.
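Since the value being searched for is the smallest positive integer divisible by every integer from 1 to n, it is exactly lcm(1, 2, ..., n), so a (hypothetical) Python version can compute it directly instead of stepping through multiples of n:

```python
from functools import reduce
from math import gcd

def fractorial(n):
    """Smallest positive integer divisible by each of 1..n, i.e. lcm(1..n)."""
    return reduce(lambda a, b: a * b // gcd(a, b), range(1, n + 1), 1)

print(fractorial(5), fractorial(10))  # 60 2520
```

This matches the searching program's output for the same n, but needs only O(n) arithmetic operations.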
Last modified on 8 February 2010, at 14:41
[FOM] Analyticity of half-exponentials
joeshipman@aol.com joeshipman at aol.com
Mon Apr 16 19:37:14 EDT 2007
>> Is there a monotonic real analytic function defined on the
>> non-negative real numbers such that f(f(x)) = 2^x, or f(f(x))=e^x?
> "For the equation
>(*) f^2(x) = e^x,
>a real analytic solution has been found by H. Kneser.
>This solution, however, is not single-valued (Baker)
>and, as pointed out by G. Szekeres, there is no
>uniqueness attached to the solution. It seems reasonable
>to admit f(x)=F^(1/2)(x), where F^u is the regular
>iteration group of g(x)=e^x, as the "best" solution of
>the equation (*) (best behaved at infinity). However,
>we do not know whether this solution is analytic for
This is not very helpful; from this information I cannot tell
1) is Kneser's "real analytic solution" a function, or isn't it?
2) if it is a function, is it monotonic, or isn't it?
3) how is "F^(1/2)(x)" defined?
4) is it monotonic?
If "F^(1/2)(x)" is uniquely defined then it is very surprising that
whether it is analytic is open.
I'm trying to ask as precisely focused a question as possible, but I
cannot tell from your reply whether the answer to my original question
is "yes", "no", or "open".
If I look at a modification of the question and ask for a function f
such that f(f(x)) = (e^x - 1) instead of e^x, then I can actually build
a formal power series that uniquely solves the functional equation, but
which has some negative coefficients (and is therefore probably
non-monotonic) and whose radius of convergence I cannot calculate. This
implies that the answer to my question is negative if I replace e^x by
(e^x - 1) AND I strengthen the requirement "real analytic" to "real
analytic with an everywhere-convergent power series"; but it's possible
for a function to be real analytic and need different power series in
different parts of the domain, and this example says nothing about the
original question with f(f(x))=e^x or f(f(x))=2^x because the
construction of the formal power series solution depends on the
constant term being zero.
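That order-by-order construction is easy to mechanize. The sketch below (variable names and truncation order are mine) solves f(f(x)) = e^x - 1 with exact rational arithmetic, using the fact that at each order k the unknown coefficient a_k enters the composed series linearly with slope 2:

```python
from fractions import Fraction
from math import factorial

def compose(outer, inner, n):
    """Coefficients (index = power of x) of outer(inner(x)), truncated at degree n."""
    res = [Fraction(0)] * (n + 1)
    for c in reversed(outer):          # Horner's scheme: res = res * inner + c
        new = [Fraction(0)] * (n + 1)
        for i, a in enumerate(res):
            if a:
                for j, b in enumerate(inner):
                    if i + j <= n:
                        new[i + j] += a * b
        new[0] += c
        res = new
    return res

# f(x) = x + a2 x^2 + a3 x^3 + ...; solve f(f(x)) = e^x - 1 order by order.
f = [Fraction(0), Fraction(1)]
for k in range(2, 8):
    f.append(Fraction(0))
    low = compose(f, f, k)[k]                    # coeff of x^k with a_k set to 0
    f[k] = (Fraction(1, factorial(k)) - low) / 2  # a_k enters with slope 2
print(f[2:])  # first coefficients: 1/4, 1/48, 0, 1/3840, ...
```

Carrying the recursion to higher orders is one way to probe the sign pattern of the coefficients and the convergence question discussed above.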
-- Joe Shipman
Multiplicity distribution of electron-positron pairs created by strong external fields (1992)
Christoph Best Walter Greiner Gerhard Soff
We discuss the multiplicity distribution of electron-positron pairs created in the strong electromagnetic fields of ultrarelativistic heavy-ion transits. Based on nonperturbative expressions for
the N-pair creation amplitudes, the Poisson distribution is derived by neglecting interference terms. The source of unitarity violation is identified in the vacuum-to-vacuum amplitude, and a
perturbative expression for the mean number of pairs is given.
Direct formation of quasimolecular 1s sigma vacancies in uranium-uranium collisions (1976)
Wilfried Betz Gerhard Soff Berndt Müller Walter Greiner
The direct (Coulomb) formation of electron vacancies in the 1sσ state of superheavy quasimolecules is investigated for the first time. Its dependence on the impact parameter, projectile energy, and its contribution from excitations into the continuum and higher bound states are determined.
Collective excitations of the QED vacuum (1993)
D. C. Ionescu Walter Greiner Berndt Müller Gerhard Soff
Using relativistic Green’s-function techniques we examined single-electron excitations from the occupied Dirac sea in the presence of strong external fields. The energies of these excited states
are determined taking into account the electron-electron interaction. We also evaluate relativistic transition strengths incorporating retardation, which represents a direct measure of
correlation effects. The shifts in excitation energies are computed to be lower than 0.5%, while the correlated transition strengths never deviate by more than 10% from their bare values. A major
conclusion is that we found no evidence for collectivity in the electron-positron field around heavy and superheavy nuclei.
Internal pair formation following coulomb excitation of heavy nuclei (1976)
Volker Oberacker Gerhard Soff Walter Greiner
Internal conversion of γ rays from Coulomb-excited nuclear levels cannot be neglected compared with the spontaneous and induced positron production in overcritical electric fields. It is
shown that both processes are separable by their different distributions with respect to the ion angle and the positron energy.
Self-energy correction to the hyperfine structure splitting of hydrogenlike atoms (1996)
Hans Persson Stefan M. Schneider Walter Greiner Gerhard Soff Ingvar Lindgren
A first testing ground for QED in the combined presence of a strong Coulomb field and a strong magnetic field is provided by the precise measurement of the hyperfine structure splitting of
hydrogenlike 209Bi. We present a complete calculation of the one-loop self-energy correction to the first-order hyperfine interaction for various nuclear charges. In the low-Z regime we almost
perfectly agree with the Z alpha expansion, but for medium and high Z there is a substantial deviation.
Nuclear polarization in heavy atoms and superheavy quasiatoms (1991)
Günter Plunien Berndt Müller Walter Greiner Gerhard Soff
We consider the contribution of nuclear polarization to the Lamb shift of K- and L-shell electrons in heavy atoms and quasiatoms. Our formal approach is based on the concept of effective photon
propagators with nuclear-polarization insertions treating effects of nuclear polarization on the same footing as usual QED radiative corrections. We explicitly derive the modification of the
photon propagator for various collective nuclear excitations and calculate the corresponding effective self-energy shift perturbatively. The energy shift of the 1s1/2 state in 238U due to virtual excitation of nuclear rotational states is shown to be a considerable correction for atomic high-precision experiments. In contrast to this, nuclear-polarization effects are of minor importance for Lamb-shift studies in 208Pb.
Nuclear polarization contribution to the Lamb shift in heavy atoms (1989)
Günter Plunien Berndt Müller Walter Greiner Gerhard Soff
The energy shift of the 1s1/2 state in 238U due to virtual excitation of nuclear rotational modes is shown to be a considerable correction for atomic high-precision experiments. In contrast to this, nuclear polarization effects are of minor importance for Lamb-shift studies in 208Pb.
Ionization and pair creation in relativistic heavy-ion collisions (1993)
Klaus Rumrich Gerhard Soff Walter Greiner
Ionization, pair creation, and electron excitations in relativistic heavy-ion collisions are investigated in the framework of the coupled-channel formalism. Collisions between heavy projectiles
and Pb82+ are considered for various bombarding energies in the region E=500 up to 2000 MeV/u. Useful symmetry relations for the matrix elements are derived and the influence of gauge
transformations onto the coupled-channel equations is explored.
Delbrück scattering in a strong external field (1992)
Alexander Scherdin Andreas Schäfer Walter Greiner Gerhard Soff
We evaluate the Delbrück scattering amplitude to all orders of the interaction with the external field of a nucleus employing nonperturbative electron Green's functions. The results are given
analytically in form of a multipole expansion.
Monoenergetic positron conversion in heavy ion fragments (1986)
Paul Schlüter Gerhard Soff Walter Greiner
Conversion processes in light nuclei with transition energies above the e+, e- pair creation threshold are investigated within an analytical framework. In particular, we evaluate the ratio of
electron transition probabilities from the negative energy continuum into the atomic K shell and into the positive energy continuum, respectively. The possible role of monoenergetic positron
conversion with respect to the striking peak structures observed in e+ spectra from very heavy collision systems is examined.
Akihiko Inoue's Homepage
Higashi-Hiroshima 739-8526, Japan
E-mail: 'inoue100' followed by '@hiroshima-u.ac.jp'
RESEARCH INTERESTS
Prediction for stochastic processes, Stochastic processes with memory, Mathematical finance, Time series analysis, Tauberian theorems
1. K. Fukuda, A. Inoue and Y. Nakano, Optimal intertemporal risk allocation applied to insurance pricing, submitted. arXiv:0711.1143
2. A. Inoue and V. Anh, Prediction of fractional processes with long-range dependence, Hokkaido Mathematical Journal 41 (2012), 157-183. arXiv:0708.3631
3. N. H. Bingham, A. Inoue and Y. Kasahara, An explicit representation of Verblunsky coefficients, Statistics & Probability Letters 82 (2012), 403-410. arXiv:1109.4513
4. Y. Kasahara, M. Pourahmadi and A. Inoue, Duals of random vectors and processes with applications to prediction problems with missing values, Statistics & Probability Letters 79 (2009),
1637-1646. pdf
5. A. Inoue, Y. Kasahara and P. Phartyal, Baxter's inequality for fractional Brownian motion-type processes with Hurst index less than 1/2, Statistics & Probability Letters 78 (2008), 2889-2894.
6. A. Inoue, AR and MA representation of partial autocorrelation functions, with applications, Probability Theory and Related Fields 140 (2008), 523-551. pdf
7. A. Inoue and Y. Nakano, Remark on optimal investment in a market with memory, Theory of Stochastic Processes 13 (2007), 66-76. pdf
8. A. Inoue and V. Anh, Prediction of fractional Brownian motion-type proesses, Stochastic Analysis and Applications 25 (2007), 641-666. pdf
9. A. Inoue and Y. Nakano, Optimal long-term investment model with memory, Applied Mathematics and Optimization 55 (2007), 93-122. pdf
10. A. Inoue, Y. Nakano and V. Anh, Binary market models with memory, Statistics & Probability Letters 77 (2007), 256-264. pdf
11. M. Pourahmadi, A. Inoue and Y. Kasahara, A Prediction Problem in L^2(w), Proceedings of the American Mathematical Society 135 (2007), 1233-1239. pdf
12. A. Inoue and Y. Kasahara, Explicit representation of finite predictor coefficients and its applications, The Annals of Statistics 34 (2006), 973-993. pdf
13. A. Inoue, Y. Nakano and V. Anh, Linear filtering of systems with memory and application to finance, Journal of Applied Mathematics and Stochastic Analysis, (2006), Art. ID 53104, 26 pp. pdf
14. V. Anh and A. Inoue, Financial markets with memory I: Dynamic models, Stochastic Analysis and Applications 23 (2005), 275-300. pdf
15. V. Anh, A. Inoue and Y. Kasahara, Financial markets with memory II: Innovation processes and expected utility maximization, Stochastic Analysis and Applications 23 (2005), 301-328. pdf
16. V. Anh and A. Inoue, Prediction of fractional Brownian motion with Hurst index less than 1/2, Bulletin of the Australian Mathematical Society 70 (2004), 321-328.
17. A. Inoue and Y. Kasahara, Partial autocorrelation functions of the fractional ARIMA processes with negative degree of differencing, Journal of Multivariate Analysis 89 (2004), 135-147. pdf
18. A. Inoue, On the worst conditional expectation, Journal of Mathematical Analysis and Applications 286 (2003), 237-247. pdf
19. A. Inoue, Asymptotic behavior for partial autocorrelation functions of fractional ARIMA processes, The Annals of Applied Probability 12 (2002), 1471-1491. pdf
20. N. H. Bingham and A. Inoue, Extension of the Drasin-Shea-Jordan theorem, Journal of the Mathematical Society of Japan 52 (2000), 545-559. pdf
21. N. H. Bingham and A. Inoue, Tauberian and Mercerian theorems for systems of kernels, Journal of Mathematical Analysis and Applications 252 (2000), 177-197. pdf
22. N. H. Bingham and A. Inoue, Abelian, Tauberian, and Mercerian theorems for arithmetic sums, Journal of Mathematical Analysis and Applications 250 (2000), 465-493. pdf
23. A. Inoue and Y. Kasahara, Asymptotics for prediction errors of stationary processes with reflection positivity, Journal of Mathematical Analysis and Applications 250 (2000), 299-319. pdf
24. A. Inoue, Asymptotics for the partial autocorrelation function of a stationary process, Journal d'Analyse Mathematique 81 (2000), 65-109. pdf
25. A. Inoue and H. Kikuchi, Abel-Tauber theorems for Hankel and Fourier transforms and a problem of Boas, Hokkaido Mathematical Journal 28 (1999), 577-596. pdf
26. A. Inoue and Y. Kasahara, On the asymptotic behavior of the prediction error of a stationary process, in Trends in Probability and Related Analysis (Taipei, 1998), 207-218, World Sci.
Publishing, River Edghe, NJ, 1999. pdf
27. N. H. Bingham and A. Inoue, Ratio Mercerian theorems with applications to Hankel and Fourier transforms, Proceedings of the London Mathematical Society (3) 79 (1999), 626-648.
28. N. H. Bingham and A. Inoue, An Abel-Tauber theorem for Hankel transforms, in Trends in probability and related analysis (Taipei, 1996), 83-90, World Sci. Publishing, River Edge, NJ, 1997.
29. N. H. Bingham and A. Inoue, The Drasin-Shea-Jordan theorem for Fourier and Hankel transforms, The Quarterly Journal of Mathematics. Oxford Series (2) 48 (1997), 279-307.
30. A. Inoue, Regularly varying correlation functions and KMO-Langevin equations, Hokkaido Mathematical Journal 26 (1997), 457-482. pdf
31. A. Inoue, Abel-Tauber theorems for Fourier-Stieltjes coefficients, Journal of Mathematical Analysis and Applications 211 (1997), 460-480.
32. A. Inoue, An Abel-Tauber theorem for Fourier sine transforms, J. Math. Sci. Univ. Tokyo 2 (1995), 303-309.
33. A. Inoue, On Abel-Tauber theorems for Fourier cosine transforms, Journal of Mathematical Analysis and Applications 196 (1995), 764-776.
34. Y. Okabe and A. Inoue, The theory of KM2O-Langevin equations and applications to data analysis. II. Causal analysis (1), Nagoya Mathematical Journal 134 (1994), 1-28.
35. A. Inoue, On the equations of stationary processes with divergent diffusion coefficients, Journal of the Faculty of Science. University of Tokyo. Section IA. Mathematics 40 (1993), 307-336.
36. Y. Okabe and A. Inoue, On the exponential decay of the correlation functions for KMO-Langevin equations, Japanese Journal of Mathematics. New Series 18 (1992), 13-24.
37. A. Inoue, The Alder-Wainwright effect for stationary processes with reflection positivity. II. Osaka Journal of Mathematics 28 (1991), 537-561.
38. A. Inoue, The Alder-Wainwright effect for stationary processes with reflection positivity, Journal of the Mathematical Society of Japan 43 (1991), 515-526.
39. A. Inoue, Path integral for diffusion equations, Hokkaido Mathematical Journal 15 (1986), 71-99.
1. A. Inoue, Ratio Mercerian and Tauberian theorems. pdf ( English / Japanese)
2. A. Inoue, Partial autocorrelation functions (in Japanese), Rokko Lectures in Mathematics 16, 2005. pdf
Consider A Series RL Circuit With A Voltage Source. ... | Chegg.com
Consider a series RL circuit with a voltage source. Take the output as the voltage across the resistor with L = 2 henrys and R = 4 ohms. (a) Find the zero-input response as a function of time. The initial current is 2 A. (b) Determine the integrating factor for the differential equation relating the output voltage to the input voltage source. (c) Now let the initial current be zero and determine an expression for the output response when the voltage source is a unit step function. (d) Now let the initial current be zero and determine an expression for the output response when the voltage source is an impulse function in which the spike occurs at time zero. (e) Now let the initial current be zero and determine an expression for the output response when the voltage source is u(t) - u(t - 4).
Electrical Engineering
formalen
1. strong masculine singular genitive form of formal.
2. strong masculine singular accusative form of formal.
3. strong neuter singular genitive form of formal.
4. strong plural dative form of formal.
5. weak masculine singular genitive form of formal.
6. weak masculine singular dative form of formal.
7. weak masculine singular accusative form of formal.
8. weak feminine singular genitive form of formal.
9. weak feminine singular dative form of formal.
10. weak neuter singular genitive form of formal.
11. weak neuter singular dative form of formal.
12. weak plural nominative form of formal.
13. weak plural genitive form of formal.
14. weak plural dative form of formal.
15. weak plural accusative form of formal.
16. mixed masculine singular genitive form of formal.
17. mixed masculine singular dative form of formal.
18. mixed masculine singular accusative form of formal.
19. mixed feminine singular genitive form of formal.
20. mixed feminine singular dative form of formal.
21. mixed neuter singular genitive form of formal.
22. mixed neuter singular dative form of formal.
23. mixed plural nominative form of formal.
24. mixed plural genitive form of formal.
25. mixed plural dative form of formal.
26. mixed plural accusative form of formal.
Last modified on 16 August 2013, at 05:06
Lesson plan - Number and algebra: Pythagoras' theorem
Subject: Maths
Key stage: 4
Topic area: Number and algebra: Pythagoras' theorem
Programme of study links: 1.3b, 2.2, 3.2c
Objectives: To understand Pythagoras' theorem and how it can be used to calculate the hypotenuse
Lesson plan context: This lesson uses large-scale demonstrations and a dynamic real-life context to engage students and reinforce the relevance of the theory behind the theorem of Pythagoras. It begins with a review of the terminology of right-angled triangles and clear visual explanations of how Pythagoras' theorem can be expressed geometrically and algebraically. The formula is made relevant by its application to a real context and its use in calculating the length of the hypotenuse is demonstrated. The new concepts are reinforced with practice questions and reviewed using the 'Tick or trash' feature.
Teaching context: Class and group work. The lesson can be carried out using video clips and a projector/whiteboard. Two worksheets are provided to help consolidate the students' understanding of the key mathematical concepts covered in this lesson plan. These worksheets can be used alongside course materials on this topic.
Notes on timing: This lesson will cover 1-2 hours depending on the ability level of the students. Lower-ability students may need more time to develop their understanding of the principles and application of Pythagoras' theorem. If time is limited, the worksheet in the Main section could be carried out as an alternative homework activity and the plenary activity could be used as a starter in a follow-up lesson. For higher-ability students who take less time to grasp the key ideas, the lesson could be extended by incorporating the extension activities in the classroom.
Key question: How can Pythagoras' theorem be useful?
Start the lesson with the clip Pythagoras’ theorem (1): introduction, which provides an overview of right-angled triangles and two large-scale demonstrations of the theorem of Pythagoras.
The prompt questions can be used to focus the students’ attention and then stimulate discussion to establish what they already know about this topic.
Students could be presented with triangles in a variety of orientations and discuss how they can identify the hypotenuse. They could also try identifying right-angled triangles within other
common figures, such as diagonals in rectangles and heights in isosceles and other triangles.
The formula for Pythagoras’ theorem is given as c² = a² + b², which could be compared with versions of the formula given in classroom texts. Students could be challenged to express the
theorem algebraically for triangles labelled in a different order, or with a different set of letters, or using vertices to names the sides.
In preparation for the activities in the Main section, an extra question is given to get students thinking about how Pythagoras’ theorem could be useful.
Start the main section of the lesson with the clip Pythagoras’ theorem (2): find the hypotenuse in which Ben tackles an aerial ropeway in order to demonstrate how Pythagoras’ theorem can be
applied to the real-life context. He explains how to identify the right-angled triangle and how to use Pythagoras’ theorem to calculate the length of the hypotenuse.
It is suggested that the clip is watched in two sections to give an opportunity to review the first part of the calculation and to introduce the use of the square root. Students could find
the symbol on their calculators and establish how to use the function. Both sections are supported by accompanying questions, which will enable students to work through Ben’s calculation.
They could then generate more examples in the same context. For example, what if the tower was 12 metres high? What if the field was bigger or smaller?
The worksheet Pythagoras’ theorem: Practice questions gives students the opportunity to practise this type of calculation:
• Questions 1 and 2 revise calculating the squares and square roots.
• Questions 3-6 all involve using Pythagoras’ theorem to calculate the hypotenuse and follow an increasing level of difficulty.
• Question 3 is a multiple-choice problem based on simple right-angled triangles. These are similar triangles based on the Pythagorean triple 3, 4, 5, so the hypotenuse calculations could
be carried out without use of a calculator. Diagrams are provided.
• Question 4 involves identifying a diagonal in a rectangle as the hypotenuse of a right-angled triangle. Diagrams are provided.
• Question 5 is based on an aerial ropeway such as that featured in the clip Pythagoras’ theorem (2): find the hypotenuse. A diagram is provided.
• Question 6 is based on the route taken by a ship sailing due north and then east and recognising that the distance of the ship from its starting point is the hypotenuse of a
right-angled triangle. A diagram is not provided and so students have to interpret the information given in the question to draw their own right-angle triangle.
Students could work through these questions individually before discussing their progress in groups and providing peer-to-peer feedback. The workings and answers could then be reviewed as a
whole class.
Main worksheet - Pythagoras' theorem: Practice questions
Worksheet answers
• Question 1
a) 7² = 7 x 7 = 49; b) 25² = 25 x 25 = 625; c) 1.2² = 1.2 x 1.2 = 1.44
• Question 2
a) √16 = 4; b) √1296 = 36; c) √2.89 = 1.7
• Question 3
a) x = 5; b) x = 8 These two right-angled triangles are similar triangles.
• Question 4
The statement is true.
Length of hypotenuse of right-angled triangle
= √ (19² + 22²) = √ (361 + 484) = √ 845 = 29.1 to 1 d.p.
Length of diagonal of rectangle
= √ (18² + 23²) = √ (324 + 529) = √ 853 = 29.2 to 1 d.p.
• Question 5
Length of rope = √ (20² + 35²) = √ (400 + 1225) = √ 1625 = 40.3 m to 1 d.p.
• Question 6
Distance of ship from starting point
= √ (24² + 47²) = √ (576 + 2209) = √ 2785 = 53 km to the nearest kilometre
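The hypotenuse answers above can be double-checked with a few lines of Python (a sketch; the helper name is mine):

```python
import math

def hypotenuse(a, b):
    """Length of the hypotenuse of a right-angled triangle with legs a and b."""
    return math.sqrt(a ** 2 + b ** 2)

print(hypotenuse(3, 4))               # question 3a: 5.0
print(round(hypotenuse(20, 35), 1))   # question 5: 40.3
print(round(hypotenuse(24, 47)))      # question 6: 53
```

Students could use a similar check to verify their own answers, including the rounding asked for in each question.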
The plenary is based on the clip Tick or trash: Pythagoras' theorem in which the presenters both tackle a typical exam question involving the use of Pythagoras’ theorem to calculate the
length of the hypotenuse. The task is to look carefully at their working out and decide who has the correct answer.
Instructions are given for watching this clip in three stages which facilitates an excellent group activity: discuss or attempt the question; vote on whose working to tick and whose to
trash; and then discover the correct outcome. The peer assessment approach of this regular feature in Clipbank Maths allows students to review their own understanding and develop their
analytical skills.
The worksheet Pythagoras’ theorem: Tick or trash is provided as a homework activity. It contains three sets of further ‘Tick or trash’ questions and answers based on using Pythagoras’
theorem to calculate the hypotenuse. These problems highlight a number of typical exam errors and students could write their own revision tips based on their analysis of these questions.
Plenary worksheet - Pythagoras' theorem: Tick or trash
Worksheet answers
• Question 1
Tick: Ben
Trash: Katie – Instead of squaring 15 and 12, Katie multiplied them by 2 - the squared notation is often confused with multiplying by 2. Checking the answer to see if it was
appropriate, which is always good practice, she should have seen that her answer was too small to be the length of the hypotenuse, which is always the longest side of a right-angled
• Question 2
Tick: Katie
Trash: Ben – He correctly substituted the values into Pythagoras’ theorem but then forgot to square them – another common mistake with the squared notation. If it helps, students can
convert "a²" to "(a x a)" in their working out.
• Question 3
Tick: Ben
Trash: Katie – She incorrectly identified the sides of the right-angled triangle and so carried out the wrong calculation. The orientation of the triangle can contribute to confusion
about which side is the hypotenuse – it is always opposite the right angle and is always the longest side.
Three extension activities are suggested.
1. Students watch the clip Pythagoras' theorem (3): find a shorter side in which Ben explains another use of Pythagoras' theorem: in this case, to calculate the length of one of the shorter sides of a right-angled triangle. They have to use their calculators to work out the final answer of the problem covered and then repeat the calculation with a different set of input values.
2. Students can demonstrate their understanding of Pythagoras' theorem by designing a poster to illustrate and explain the demonstrations shown in the clip Pythagoras' theorem (1): introduction.
3. Find out more about Pythagoras' life and theorem from these two useful sites, which include many facts and anecdotes about one of the most famous ancient Greek mathematicians:
Notes on differentiation: The students’ responses to the worksheet tasks will be differentiated by outcome. Using these as group activities facilitates feedback from peers, which can be
used to support lower ability pupils.
Cross-curricular links / Functional skills: English; Maths: Representing, analysing and interpreting. | {"url":"http://preview.channel4learning.com/espresso/clipbank/html/tr_lp_maths_pythagoras_tn.html","timestamp":"2014-04-18T16:00:04Z","content_type":null,"content_length":"13220","record_id":"<urn:uuid:f84f3169-4ef2-4fc0-b462-563d5d35f1e5>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00294-ip-10-147-4-33.ec2.internal.warc.gz"}
Pressure Unit Conversion Problem
This is a worked example problem converting units of pressure.
Pressure Conversion Problem
The pressure reading from a barometer is 742 mm Hg. Express this reading in kilopascals, kPa.
This conversion may be performed using two conversion factors:
760 mm Hg = 1.013 x 10^5 Pa
1 kPa = 10^3 Pa
742 mm Hg x (1.013 x 10^5 Pa / 760 mm Hg) x (1 kPa / 10^3 Pa) = 98.9 kPa
98.9 kPa
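The same chain of conversion factors can be checked in a couple of lines of code (an illustrative sketch, not part of the original problem):

```python
# 1 atm = 760 mm Hg = 1.013 x 10^5 Pa, and 1 kPa = 10^3 Pa.
PA_PER_MMHG = 1.013e5 / 760
PA_PER_KPA = 1e3

p_mmhg = 742
p_kpa = p_mmhg * PA_PER_MMHG / PA_PER_KPA
print(round(p_kpa, 1))  # 98.9
```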
Always ask yourself whether your final answer seems reasonable. Atmospheric pressure is usually close to 100 kPa, so an answer of 98.9 kPa is reasonable. | {"url":"http://chemistry.about.com/od/workedchemistryproblems/a/pressure-unit-conversion.htm","timestamp":"2014-04-18T20:44:13Z","content_type":null,"content_length":"39364","record_id":"<urn:uuid:25e1924a-54d1-459b-aca3-315326c768d8>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00207-ip-10-147-4-33.ec2.internal.warc.gz"} |
Standard Deviation for Variance?
Date: 01/25/2001 at 06:00:15
From: Dr Peter Lobmayer
Subject: Standard deviation for variance - does it exist ?
I calculate income inequality from a survey sample. One of the
measures I use is the variance of the logarithm of individual income
in different geographical areas. Sample size varies from 100 to 1600
in different areas. I would like to calculate a measure of the
reliability of my data. The best would be the standard deviation of
variance, but I could not find such a term in my reference books.
Does such a measure exist? If so, how can it be calculated?
With best regards, Peter Lobmayer.
Date: 01/25/2001 at 06:35:47
From: Doctor Mitteldorf
Subject: Re: Standard deviation for variance - does it exist ?
Dear Peter,
Here's a somewhat personal view, but you might get a different one
from another statistician, and I encourage you to do so.
Don't think in terms of formulas and doing the one right thing with
your data. There are lots of formulas, but there are no hard-and-fast
rules telling you the right one to use in a given circumstance.
The art of the statistician is to create a mathematical model that
(1) applies to the question at hand, and (2) answers the exact
question to which you're seeking a solution.
Formulating that question precisely is the crux of your art. When you
find yourself asking for the "best" measure, or even asking "does this
measure exist?" you're straying from the notion of mathematical
modeling, and seeking to justify your work via some "higher
authority." But there is no higher authority. Every statistical
problem is unique, and you must stand on the cogency of your own
reasoning every time you present a statistical argument in a
scientific journal.
So much for the sermon. What's to be done in your situation? My bias
here leads me to the practical rather than the theoretical. I offer a
prescription that is transparently fair and relevant, but which is not
a textbook formula:
For each of your samples of size n, randomly delete sqrt(n) data
points. (I suggest sqrt(n) because any sample of size n is associated
with a statistical fluctuation on the scale sqrt(n)). Now recalculate
the variance of the log of incomes as you did before.
Repeat this entire process 10,000 times, each time ignoring a
different random subset of sqrt(n) data points for each of the areas
in your sample. Record all 10,000 answers, and calculate their mean
and standard deviation. The mean should be very close to your original
calculation; the standard deviation is a very fair measure of the
reliability of your final answer.
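This prescription translates almost directly into code. The sketch below is my own illustration of the procedure, run on simulated data since the original survey isn't available: delete sqrt(n) random points, recompute the variance of the log incomes, repeat many times, and report the mean and standard deviation of the estimates.

```python
import math
import random
import statistics

def variance_reliability(log_incomes, n_trials=10_000, seed=0):
    """Monte Carlo estimate of the spread of the variance statistic:
    repeatedly delete sqrt(n) random points and recompute the variance
    of the remaining data."""
    rng = random.Random(seed)
    n = len(log_incomes)
    keep = n - round(math.sqrt(n))       # points retained per trial
    estimates = [statistics.pvariance(rng.sample(log_incomes, keep))
                 for _ in range(n_trials)]
    return statistics.mean(estimates), statistics.stdev(estimates)

# Simulated stand-in for the log incomes of one geographical area.
rng = random.Random(42)
data = [rng.gauss(10, 0.8) for _ in range(300)]

mean_v, sd_v = variance_reliability(data, n_trials=2000)
print(mean_v, sd_v)  # mean sits close to the full-sample variance;
                     # sd_v is the reliability measure
```

The mean should be very close to the variance computed from the full sample, and the standard deviation of the estimates is the "standard deviation of the variance" the questioner was looking for.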
This kind of thinking is called "Monte Carlo simulation" and was
invented around the time of the first computer. It uses a lot of
computer power, but computer power is free for most of us these days.
It requires some programming, whereas many statistical software
packages don't. I like Monte Carlo simulation because it can apply
exactly and specifically to your data and your situation in a way that
a textbook statistical test rarely can.
- Doctor Mitteldorf, The Math Forum | {"url":"http://mathforum.org/library/drmath/view/52204.html","timestamp":"2014-04-19T05:27:47Z","content_type":null,"content_length":"8149","record_id":"<urn:uuid:3a322d7f-2d81-4a26-ad1b-524a98e1be71>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00038-ip-10-147-4-33.ec2.internal.warc.gz"} |
Continuous Beams | Design of RCC Structures | Civil Engineering Projects
Design of Continuous Beam | RCC Structures
A continuous beam is one that rests on more than two supports, i.e. a beam that runs over multiple columns. Continuous beams increase the structural rigidity of a structure and offer an alternate load path in case of failure of a section in the beam.
Continuous beams are commonly used in buildings in seismic risk zones.
Three moment equation or Clapeyron’s equation
Four methods for the analysis of a continuous beam are as follows:
1. Three moment equation
2. Moment distribution method
3. Kani’s method
4. Slope deflection method
Clapeyron’s Three Moment Equation
A = Area of the free bending moment diagram
X = distance of the CG (centroid) of the bending moment diagram from the support
E = Young’s modulus
I = moment of inertia of the beam section
S = sinking (settlement) of supports
Therefore, S1 = 0, S2 = 0 (if there is no sinking of supports)
Case 1
E1 = E2 = E (same material)
I1 = I2 = I (same depth of the beam)
E1I1 = E2I2 = EI
MA·L1 + 2MB[L1 + L2] + MC·L2 + 6[(A1X1/L1) + (A2X2/L2)] = 0
i.e. MA·L1 + 2MB[L1 + L2] + MC·L2 = -6[(A1X1/L1) + (A2X2/L2)]
We will further discuss “Continuous Beams” with the help of a numerical example.
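As a preview, here is a worked sketch of the three-moment equation in code (my own example, not from the article): two equal spans of length L carrying a uniformly distributed load w, with simply supported ends so that MA = MC = 0 and only MB is unknown.

```python
w = 10.0   # UDL in kN/m (example values)
L = 4.0    # each span length in m

# The free bending moment diagram of a UDL on a simple span is a parabola:
# area A = w*L**3/12, centroid at mid-span (X = L/2), so A*X/L = w*L**3/24.
AX_over_L = w * L**3 / 24

# Three-moment equation with MA = MC = 0:
#   2*MB*(L1 + L2) + 6*(A1X1/L1 + A2X2/L2) = 0
MB = -6 * (AX_over_L + AX_over_L) / (2 * (L + L))
print(round(MB, 3))  # -20.0 kN*m at the middle support
```

The result matches the standard textbook value of w*L^2/8 (hogging) at the middle support: 10 x 16 / 8 = 20 kN·m.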
You can
leave a response
, or
from your own site.
#1 by abuagla babiker on July 29, 2011 - 11:22 pm
thanks so much
#2 by sree on August 31, 2011 - 4:42 am
• #3 by BenzuJK on August 31, 2011 - 7:02 am
You are welcome Sree. I am glad you found the article useful. I am sure it will also be useful to your friends. Do suggest them to visit the site. It will help them under Design of RCC Structures
better. Just a suggestion. Do keep visiting.
#4 by renzil on October 13, 2011 - 1:42 pm
what would be the beam size for columns 24 feet apart? and the column size for a 2 story building?
• #5 by BenzuJK on November 12, 2011 - 9:57 am
Hello Renzil
Check this out. It might be useful.
#6 by Taiwo Kazeem on January 2, 2012 - 10:13 am
Pls i need a worked example of continous beam using Three moment equation
• #7 by BenzuJK on January 18, 2012 - 8:30 am
Hello Taiwo,
I will make it a point to put it up as soon as possible.
#8 by Aftab Ali on February 2, 2012 - 11:22 pm
I hope you will be well. I need your advise, if you give me prompt response i will be very glad and thankful to you. Actually i am going to construct an industrial hall size 80 ft X 40 ft.
Height of hall is 22 ft.
Column to Column distance = 20 X 20 ft.
Column size is = 18 inch X 12 Inch.
06 simple iron rod of 5/8 mm. in each Column.
02 X 20 ft beam for 40 feet . My question is how to keep the iron rod in beam ? same as column or 07 rod like 03 in bottom, 02 in middle and 02 in top. OR please guide me. ?
A Ali,
#9 by Kiran K S on April 26, 2012 - 6:52 am
Hi BenzuJK,
I am impressed by the blog you have created and is amamzingly helpful.
I would like some clarification on the roof beam structure.
Probabaly a ‘Do and Dont’s for tying reinforcement in beams for a framed structure.
#10 by Abhishek on May 14, 2012 - 9:29 am
Can you please tell me that what should be the diameter of iron rods used in the RCC and what should be there distance from each other if the coulms are on 10 mtr distance from each other.
#11 by ramachandran on May 15, 2012 - 11:36 am
mam pls tell me design for cantilever beam
iam now construct one building but before existing building back side. that bck side building is load bearing wall no column provide so front side building cantilever beam provide to connect that
back side building that back side building on the one floor is possible or impassible so doubt pls explain me
that bak side building may be 15′
#12 by Varsha on July 24, 2012 - 11:41 am
This is very good for study of rcc design…it give very knowlege of design
#13 by md.hashmathullah khan on October 8, 2012 - 7:42 am
hi.madam i want to what is vasthu.plz tell the details of vasthu. thanks u
#14 by ani peyang on November 16, 2012 - 3:29 am
what should be the minimun size of clumn and beam for g+1 floor rcc house | {"url":"http://www.civilprojectsonline.com/building-construction/continuous-beams-design-of-rcc-structures/","timestamp":"2014-04-21T07:56:26Z","content_type":null,"content_length":"54398","record_id":"<urn:uuid:b418146a-c2c9-463c-a8f0-b6702e5eee22>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00559-ip-10-147-4-33.ec2.internal.warc.gz"} |
Hot Rod Forum : Hotrodders Bulletin Board - rpm to low?
- -
rpm to low?
gearheadslife 01-30-2013 04:11 PM
rpm to low?
Wasn't sure if I should post this here or in the engine area.. you'll see why..
my c-10 if I install a 2004r and 29 tall tires..
with my 3:08 gears the rpm is =
3:73's would be
and 4:11 would be 2018@65mph
as you can see the Wallace Racing calc comes up with the same rpm for 3:08 & 3:73 so that can't be correct..
anyways is 1500rpm too low to cruise at 65mph..
the engine I'm looking at is the 96-02 vortec350 with adding the ramjet cam
thinking 1500 might be too low for a 350 to push a brick through the air and not lug the engine..
figure'n 375 ft lb at 3500rpm (a guess, as I don't have the power curves of the GM 350 Vortec, stock or with the Ramjet cam)
so at 1500 if it was 200 ft lb.. not sure if it'd be enough..
ideas/insight welcome
vinniekq2 01-30-2013 06:51 PM
1500 RPM is a little low but not unreasonable.
.7 X 3.08 and 29 tall tire = 60 mph @ 1500
.7X3.73 and 29 tall tire= 50 mph@1800
rick 427 01-31-2013 11:13 AM
IMO... if adjusted properly, it won't lug the engine. However, with the 3:08 gear, I would expect performance [or lack of it] similar to, let's say, a 1987 Caprice, which rolled out of the factory with a 350-700R4-2:56-27" tire. Not sure you would be happy with that. And that's assuming nothing heavy gets put into the truck bed. A 3:73 would greatly improve throttle response and overall performance, and would be a must if things get hauled in the truck bed.
I must be figuring wrong. I had the 200-4R as .67 overdrive, or is that .70 without overdrive locked up? Is there a chart somewhere that gives you the correct numbers based on RPMs, gear ratio, tire size? I'd like to print that out. Haven't corrected the speedometer on my project yet either.
vinniekq2 02-01-2013 10:58 AM
painted jester 02-01-2013 11:50 AM
I'll add this info to Vinnies post some of the readers may like to have these!~
= = equals
x = multiply by
/ = divided by
overall tire diameter = (mph x axle ratio x 336) / rpm
rpm = (mph x axle ratio x 336) / tire diameter
mph = (rpm x tire diameter) / (axle ratio x 336)
axle ratio = (rpm x tire diameter) / (mph x 336)
These track formulas are so easy. I've used them for years at the track; they are designed for calculating with a final trans ratio of 1 to 1!
If you're getting mph a little low on a pass, calculate the tire diameter you need to do the mph you're looking for! If you're running out of gear, calculate. If your pass is below rpm and you're not at the top of your final gear, calculate!
A lot of the tables on the web have rounded-off factors and errors. It's so easy these days with calculators to do math and algebra, so why look at the internet! Copy these and laminate them, keep 'em safe!
I'll bet all the old guys on here have these and more and calculate with them, and then someone looks on the web, gets a graph or quotes a table, and insults fly! Like dcr and scr tables!!!
At least use the formulas to check the internet tables to see if the table is correct, and then you know it's right when you quote it!!
Someone is going to ask "what about overdrive?" :confused: These are track formulas! Or road formulas for 1 to 1; overdrive is to save fuel, so your rpm will drop a little in final drive (expected)
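The rpm formula above is easy to wrap in a few lines of code (my own sketch; the 0.67:1 overdrive figure for the 200-4R comes from dwwl's post earlier in the thread, and vinniekq2 used 0.70):

```python
def engine_rpm(mph, axle_ratio, tire_diameter, od_ratio=1.0):
    """rpm = mph x axle ratio x 336 / tire diameter, scaled by the
    overdrive gear ratio (1.0 means a 1:1 final drive)."""
    return mph * axle_ratio * 336 / tire_diameter * od_ratio

# The original poster's case: 3.08 gears, 29" tall tires, 65 mph.
print(round(engine_rpm(65, 3.08, 29)))        # ~2320 rpm in 1:1
print(round(engine_rpm(65, 3.08, 29, 0.67)))  # ~1554 rpm in overdrive
```

For comparison, the same formula puts 3.73 gears with these tires near 1880 rpm in overdrive at 65 mph.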
thanks a lot. Got tons of information and now one more bit. Now I'll locate tire dimensions by tire size and I'll have it all.:mwink:
painted jester 02-01-2013 12:51 PM
Originally Posted by dwwl (Post 1641346)
thanks a lot. Got tons of information and now one more bit. Now I'll locate tire dimensions by tire size and I'll have it all.:mwink:
Measure tires with a tape! Don't go by the code on the tire; 5 different manufacturers can all be coded 2.75 x 15 and one can be 1/2, 3/4, 1, etc. inch taller than the other, so don't think all tires are the same size because they are tagged with the same code!!!!
Copyright Hotrodders.com 1999 - 2012. All Rights Reserved. | {"url":"http://www.hotrodders.com/forum/rpm-low-228991-print.html","timestamp":"2014-04-17T05:57:20Z","content_type":null,"content_length":"13577","record_id":"<urn:uuid:67f277e2-d3a7-4e59-a92c-522f4333aee3>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00251-ip-10-147-4-33.ec2.internal.warc.gz"} |
[SOLVED] Rearranging a differential equation
November 25th 2008, 07:21 AM #1
Nov 2008
[SOLVED] Rearranging a differential equation
Hi I am supposed to "just" rearrange the following equation and at one point substitute dh/dt = A/rho with the constant A:
With the constant C and rho(i).
The result looks like this:
I cannot rearrange the 1st equation in a way that I obtain the 2nd. I assume that one has to integrate at some point to get rid of the logarithm though.
Thanks in advance.
If $\rho=f(h)$ and $h=g(t)$ then $\frac{d\rho}{dt}=\frac{d\rho}{dh}\frac{dh}{dt}$ right? So if we calculate what $\frac{d\rho}{dh}$ is and multiply by the given $\frac{dh}{dt}=\frac{A}{\rho}$,
that would give us $\frac{d\rho}{dt}$. So what is:
You can figure that out. Then isolate the $\frac{d\rho}{dh}$ term and then just multiply by $\frac{A}{\rho}$
November 25th 2008, 10:46 AM #2
Super Member
Aug 2008 | {"url":"http://mathhelpforum.com/differential-equations/61587-solved-rearranging-differential-equation.html","timestamp":"2014-04-16T21:01:38Z","content_type":null,"content_length":"34683","record_id":"<urn:uuid:4fd9712b-c288-4e18-a00a-6d563d738e5f>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00329-ip-10-147-4-33.ec2.internal.warc.gz"} |
simple question
:eek: umm.. its an unsigned integer.... meaning... non-negative, positive only... so if you store a value less than 0 in an unsigned int it won't hold -(number here); it wraps around to a large positive value instead
>what is an unsigned integer? An integer that cannot hold negative values, so instead of spanning a certain length into the negative field and a certain length into the positive field, an unsigned
integer uses all of its bits for positive values. For example: On my system, the max and min values of a signed integer are 2147483647 and -2147483648, respectively. The max and min values of an
unsigned integer are 4294967295 and 0. In the unsigned version, all of the bits are used for a positive value and thus can have a higher value than a signed int. -Prelude | {"url":"http://cboard.cprogramming.com/cplusplus-programming/19811-simple-question-printable-thread.html","timestamp":"2014-04-18T10:00:41Z","content_type":null,"content_length":"7045","record_id":"<urn:uuid:118823f9-90e7-4e8f-8183-b3b5c619a40f>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00296-ip-10-147-4-33.ec2.internal.warc.gz"} |
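The ranges Prelude quotes, and the wraparound behaviour of unsigned arithmetic, can be demonstrated from Python with ctypes fixed-width types (an illustrative sketch; in C and C++ unsigned values likewise reduce modulo 2^32 rather than clamping to 0):

```python
import ctypes

# 32-bit ranges, matching the values Prelude quotes:
assert ctypes.c_int32(2147483647).value == 2147483647     # signed max
assert ctypes.c_int32(-2147483648).value == -2147483648   # signed min
assert ctypes.c_uint32(4294967295).value == 4294967295    # unsigned max

# Storing a negative value in an unsigned type wraps modulo 2**32:
wrapped = ctypes.c_uint32(-1).value
print(wrapped)  # 4294967295
```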
create tcl variables from current state
Write internal xspec data to a tcl variable. This facility allows the manipulation of xspec data by tcl scripts, so that one can, for example, extract data from xspec runs and store in output files,
format xspec output data as desired, use independent plotting software, etc.
? Show the valid options. Does not set $xspec_tclout.
areascal n <s|b> Writes a string of blank separated values giving the AREASCAL values for spectrum n. If no second argument is given or it is “s” then the values are from the source file,
if “b” from the background file.
arf n The auxiliary response filename(s) for spectrum n.
backgrnd n Background filename for spectrum n
backscal n <s|b> Same as areascal option but for BACKSCAL value.
chain best|last|proposal|stat The best option returns the parameter values corresponding to the smallest statistic value in the loaded chains. The last option returns the final set of parameter values in the loaded chains. The proposal option takes arguments distribution or matrix and returns the name or covariance matrix for the proposal distribution when using Metropolis-Hastings. The stat option returns the output of the last chain stat command.
chatter Current xspec chatter level.
compinfo [<mod>:]n [<groupn>] Name, 1st parameter number and number of parameters of model component n, belonging to model w/ optional name <mod> and optional datagroup <groupn>.
cosmo Writes a blank separated string containing the Hubble constant (H0), the deceleration parameter (q0), and the cosmological constant (Lambda0). Note that if Lambda0 is
non-zero the Universe is assumed to be flat and the value of q0 should be ignored.
covariance [m, n] Element (m,n) from the covariance matrix of the most recent fit. If no indices are specified, then entire covariance matrix is retrieved.
datagrp [n] Data group number for spectrum n. If no n is given, outputs the total number of data groups.
datasets Number of datasets.
dof Degrees of freedom in fit, and the number of channels.
energies [n] Writes a string of blank separated values giving the energies for spectrum n on which the model is calculated. If n is not specified or is 0, it will output the energies of
the default dummy response matrix.
eqwidth n [errsims] Last equivalent width calculated for spectrum n. If “errsims” keyword is supplied, this will instead return the complete sorted array of values generated for the most
recent eqwidth error simulation.
error [<mod>:]n Writes last confidence region calculated for parameter n of model with optional name <mod>, and a string listing any errors that occurred during the calculation. (For gain parameters use: rerror [<sourceNum>:]n.) The string comprises nine letters; each letter is T or F depending on whether or not an error occurred. The 9 possible errors are:
1. new minimum found
2. non-monotonicity detected
3. minimization may have run into problem
4. hit hard lower limit
5. hit hard upper limit
6. parameter was frozen
7. search failed in –ve direction
8. search failed in +ve direction
9. reduced chi-squared too high
So for example an error string of “FFFFFFFFT” indicates the calculation failed because the reduced chi-squared was too high.
expos n <s|b> Same as areascal option but for EXPOSURE value.
filename n Filename corresponding to spectrum n.
flux [n] [errsims] Last model flux or luminosity calculated for spectrum n. Writes a string of 6 values: val errLow errHigh (in ergs/cm^2) val errLow errHigh (in photons). Error values are
.0 if flux was not run with “err” option.
If the “errsims” keyword is supplied, this will instead return the completed sorted array of values generated during the most recent flux error calculation.
ftest The result of the last ftest command.
gain [<sourceNum>:]<specNum> slope | offset For gain fit parameters, value,delta,min,low,high,max for the slope or offset parameter belonging to the [<sourceNum>:]<specNum> response. For non-fit gain parameters, only the value is returned.
goodness [sims] The percentage of realizations from the last goodness command with statistic value less than the best-fit statistic using the data. If optional “sims” keyword is specified,
this will instead give the full array of simulation values from the last goodness command.
idline e d Possible line IDs within the range [e-d, e+d].
ignore [<n>] The range(s) of the ignored channels for spectrum <n>.
lumin [n] [errsims] Last model luminosity calculated for spectrum n. Same output format as flux option, in units of 1.0x10^44 erg/s.
margin probability | [<modName>:]<parNum> The probability option returns the probability column from the most recent margin command. Otherwise, the parameter column indicated by <parNum> is returned. Note that for a multi-dimensional margin the returned parameter column will contain duplicate values, in the same order as they originally appeared on the screen during the margin run.
model Description of current model(s).
modcomp [<mod>] Number of components in model (with optional model name).
modpar [<mod>] Number of model parameters (with optional model name).
modval [<specNum> [<mod>]] Write to Tcl the last calculated model values for the specified spectrum and optional model name. Writes a string of blank separated numbers. Note that the output is in
units of photons/cm^2/s/bin.
nchan [<n>] Total number of channels in spectrum n (including ignored channels).
noticed [<n>] Range (low,high) of noticed channels for spectrum n.
noticed energy [<n>] The noticed energies for spectrum n.
nullhyp When using chi-square for fits, this will retrieve the reported null hypothesis probability.
param [<mod>:]n (value,delta,min,low,high,max) for model parameter n.
peakrsid n [lo, hi] Energies and strengths of the peak residuals (+ve and –ve) for the spectrum n. Optional arguments lo, hi specify an energy range in which to search.
pinfo [<mod>:]n Parameter name and unit for parameter n of model with optional name.
plink [<mod>:]n Information on parameter linking for parameter n. This is in the form true/false (T or F) for linked/not linked, followed by the multiplicative factor and additive constants
if linked.
plot <option> <array> Write a string of blank separated values for the array. <option> is one of the valid arguments for the plot or iplot commands. <array> is one of x, xerr, y, yerr, or
[<plot group n>] model. xerr and yerr output the 1-sigma error bars generated for plots with errors. The model array is for the convolved model in data and ldata plots. For contour plots
this command just dumps the steppar results. The command does not work for genetic plot options.
plotgrp Number of plot groups.
query The setting of the query option.
rate <n | all> Count rate, uncertainty and the model rate for the specified spectrum n, or for the sum over all spectra.
rerror [<sourceNumber>:]n Writes last confidence region calculated for response parameter n of model with optional source number, and a string listing any errors that occurred during the calculation.
See the help above on the error option for a description of the string.
response n Response filename(s) for the spectrum n.
sigma [<modelName>:]n The sigma uncertainty value for parameter n. If n is not a variable parameter or fit was unable to calculate sigma, -1.0 is returned.
simpars Creates a list of parameter values by drawing from a multivariate Normal distribution based on the covariance matrix from the last fit. This is the same mechanism that is
used to get the errors on fluxes and luminosities, and to run the goodness command.
solab Solar abundance table values.
stat [test] Value of statistic. If optional ‘test’ argument is given, this outputs the test statistic rather than the fit statistic.
statmethod [test] The name of the fit stat method currently in use. If optional ‘test’ argument is given, this will give the name of the test stat method.
steppar statistic | delstat | [<modName>:]<parNum> The statistic and delstat options return the statistic or delta-statistic column respectively from the most recent steppar run. Otherwise, the parameter column indicated by <parNum> is returned. Note that for multi-dimensional steppars the returned parameter column will contain duplicate values, in the same order as they originally appeared on the screen during the steppar run.
varpar Number of variable fit parameters.
version The XSPEC version string.
weight Name of the current weighting function.
xflt n XFLT#### keywords for spectrum n. The first number written is the number of keywords and the rest are the keyword values. | {"url":"http://heasarc.gsfc.nasa.gov/lheasoft/xanadu/xspec/manual/XStclout.html","timestamp":"2014-04-18T16:36:22Z","content_type":null,"content_length":"44344","record_id":"<urn:uuid:70fd11d4-c753-4d57-bfc6-601137a5cf09>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00050-ip-10-147-4-33.ec2.internal.warc.gz"} |
Vector-valued function
A mathematical (Vector Calculus) term.
A vector-valued function (also called a vector function) is simply any function that has a vector as its output (range). That is (limiting ourselves to the Real Numbers), a function that maps R^n -> R^m, where n > 0, m > 1. This is opposed to your normal scalar function that outputs a single value (m = 1; for instance f(x) = x + 2).
There are two main notations for saying a function is vector-valued. First is to draw some form of an arrow on top of the function's name (think f with a -> on top of it), useful for doing work on
paper. Second is to write the name of the function in bold (f), useful for functions written using a keyboard.
An example
f(x) = (x + 1, x + 2)
f(2) = (3, 4)
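The example above, written as runnable (Python) code, together with a component-form function of three variables like the ones in the notational notes below (the components of F are made up purely for illustration):

```python
# f: R -> R^2, the example from the text.
def f(x):
    return (x + 1, x + 2)

print(f(2))  # (3, 4)

# F: R^3 -> R^3 in component form F = (f1, f2, f3).
def F(x, y, z):
    return (x * y, y + z, x - z)

print(F(1, 2, 3))  # (2, 5, -2)
```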
Some other notational notes:
Wntrmute says just a couple of other notations for a vector valued function : in my degree course we tend to denote a vector-valued function by simple underlining of the function symbol, as it's
easier than drawing an arrow! This may be a european trait, or just a UK thing (lecturer was austrian). Also, it's quite common to denote a vvf with a capital letter (usually F) then the scalar
components relative to the coordinate system in lower case with a subscript (e.g. F(x,y,z)= f[1](x,y,z)i + f[2](x,y,z)j + f[3](x,y,z)k in 3-space; thus dispensing with bold or underlining entirely if
you know the coordinate system.
(FYI the i, j, and k in 3d vectors represent three special unit vectors. Particularly i = (1, 0, 0), j = (0, 1, 0), k = (0, 0, 1). Also, capital letters are commonly used to denote vectors.)
Note on the arrow notation: At least in my calc classes we've shorthanded the arrow by making it look like this __\ (the line and top of the arrow). | {"url":"http://www.everything2.com/index.pl?node_id=1525585","timestamp":"2014-04-18T13:06:46Z","content_type":null,"content_length":"21589","record_id":"<urn:uuid:4bb1a827-6cb8-481a-85af-56eceacb647a>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00436-ip-10-147-4-33.ec2.internal.warc.gz"} |
Computational Complexity
A few notes on Dagstuhl, which Lance and I were at last week.
1. I could tell you about the talks, but the website does a better job: here
2. I value going to talks more than Lance does, which is odd since he probably gets more out of them. Nevertheless, they serve two functions:
1. Lets me know of new results or even areas I should look into (E.g. Chattopadhyay's talk on Linear Systems over Composite Moduli was news to me and is stuff I want to learn about.)
2. Makes me feel guilty for not having already looked into an area (E.g., Anna Gal's talk on Locally Decodable Codes reminded me that I really should learn or relearn some of that stuff and
update my PIR website, Swastik's talk on parity quantifiers and graphs reminds me that I should read a book I've had on my shelf for about 10 years: The Strange Logic of Random Graphs by Joel Spencer.)
3. I thought that I was the only person going to both Dagstuhl and Ramsey Theory in Logic and ... so that I could give the same talk in both places. But Eli Ben-Sasson will be at both. But he missed
my talk at Dagstuhl. He'll see a better version in Italy since I have made some corrections.
4. At Dagstuhl there is a book that people hand write their abstracts in. In this digital hardwired age do we need this? In case a Nuclear Bomb goes off and all information stored electronically is
erased, we can rebuild civilization based on those handwritten notebooks. The first thing people will want to know after they crawl out of their bomb shelters will be: is there an oracle
separating BQP from PH. And Scott Aaronson's handwritten abstract will tell them. (I hope his handwriting is legible.)
5. Peter Bro Miltersen gave an excellent talk. His open problems were very odd. He did NOT say I want to solve problem such-and-such. He instead said I want to find a barrier result to show that
problem such-and-such is hard. It seems to me that one should really try hard to solve a problem before proving it's hard. Then again, by trying to prove it's hard he may solve it. I hope that if
he solves it he won't be disappointed. Maybe his paper will be titled A proof that showing that problem such-and-such is hard is itself hard.
6. One odd benefit of being in Dagstuhl: I spend some time in the library reading articles that I really could have read at home, but somehow don't have time to. Here is one result I read about that
I will blog about later: let d be a distance. If you have n points in a circle of diameter 1, how many pairs of points are you guaranteed to have at distance ≤ d apart?
7. Next week I'll be in Italy and not posting, so you get Lance Fortnow all five days! Let's hope they are not all on P vs NP.
7 comments:
1. "Next week I'll be in Italy and not posting, so you get Lance Fortnow all five days! Lets hope they are not all on P vs NP."
Does this blog HAVE to have a post EVERY day? It didn't used to, did it?
2. Anon1: If its too much for you just don't look at it every day!
3. Anon2: You are a moron.
Anon1: I agree with your question.
4. Anonymous 1: no comment...
Anonymous 2: you're right...
Anonymous 3: I hope you are not real, maybe some robot...
5. Anon1: I agree with the sentiment.
Anon2: Thats also a fair point, but its no reason not to point out the low utility of contentless posts.
Anon3: That was a bit harsh, don't you think?
Anon4: What do you have against robots?
6. I sense an Anonymous fight??
7. I have a confession to make. I posted all of the comments so far. | {"url":"http://blog.computationalcomplexity.org/2009/10/notes-on-dagstuhl.html","timestamp":"2014-04-19T19:34:54Z","content_type":null,"content_length":"165248","record_id":"<urn:uuid:d9d6bf24-e30a-4fb0-966b-6791126a5fbd>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00061-ip-10-147-4-33.ec2.internal.warc.gz"} |
Confusion about calculation of aerodynamic moments
Can anybody explain how the aerodynamic moment is calculated for an airfoil?
I was reading the first chapter in "Fundamentals of Aerodynamics" by Anderson and the formula for the aerodynamic moment for the lower surface of the airfoil confused me. Is the aerodynamic moment calculated about some other point for the lower surface?
Thank you. | {"url":"http://www.physicsforums.com/showthread.php?t=579855","timestamp":"2014-04-18T13:43:31Z","content_type":null,"content_length":"24331","record_id":"<urn:uuid:d8c6694b-9bdd-48a8-88fd-5580872f18a3>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00333-ip-10-147-4-33.ec2.internal.warc.gz"} |
{"url":"http://openstudy.com/users/tri_edge_1836/answered","timestamp":"2014-04-19T07:12:09Z","content_type":null,"content_length":"68684","record_id":"<urn:uuid:f47f9549-2193-49c3-bf8f-491697c77801>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00369-ip-10-147-4-33.ec2.internal.warc.gz"}
Java Applets
A series of Applets designed for introductory level Calculus courses. Over 30 different topics covered, including limits, secant lines, applications of derivatives, Riemann sums, applications of
integrals, parametric equations, Taylor series and slope fields.
A collection of Applets that can be used to encrypt, decrypt and break messages using the Vigenere, Rectangular Transposition and Monoalphabetic Substitution cryptosystems.
A series of Applets used to introduce students to basic geometric constructions as well as several classic theorems. Covers Euclidean, hyperbolic and spherical geometries.
Graph Theory
A collection of Applets used to introduce the concept of planar graphs. Can you determine whether or not these graphs are planar?
Information Theory
Recreate Claude Shannon's classic experiment to calculate the entropy of the English language.
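To give a flavor of what such an experiment computes (an illustrative sketch of my own, not the applet's actual code): the first-order entropy of a text is H = −Σ p(c) log₂ p(c) over single-symbol frequencies, which upper-bounds the much lower per-letter entropy Shannon estimated for English using guessing games.

```python
import math
from collections import Counter

def first_order_entropy(text):
    """Entropy in bits/symbol from single-letter frequencies.
    Ignores context, so it only upper-bounds the true entropy of the source."""
    letters = [c for c in text.lower() if c.isalpha()]
    counts = Counter(letters)
    n = len(letters)
    return -sum((k / n) * math.log2(k / n) for k in counts.values())

# A two-symbol text with equal frequencies carries exactly 1 bit per symbol.
print(first_order_entropy("abab"))  # -> 1.0
```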
Parametric Equations
See parametric equations in action. Draw your favorite Spirograph, watch a cycloid be drawn, experiment with Bezier curves and surfaces or construct several different conic sections.
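For example, the cycloid mentioned above is traced by a marked point on a circle rolling along a line; a minimal parametric sketch (my own illustration, not the applet):

```python
import math

def cycloid(r, t):
    """Point traced by a marked point on a circle of radius r rolling on the x-axis:
    x = r(t - sin t), y = r(1 - cos t)."""
    return r * (t - math.sin(t)), r * (1 - math.cos(t))

# One arch spans t in [0, 2*pi]; the cusp is at t = 0 and the peak at t = pi,
# where the traced point sits at height 2r above the line.
x, y = cycloid(1.0, math.pi)
```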
Probability and Statistics
Includes an introduction to discrete and continuous probability functions via roulette wheels, binomial and normal distributions using a Galton/Plinko/Quincunx board and a study of the Monty Hall and
Gambler's Ruin problems.
Use this applet to help solve and create your own sudoku puzzles. | {"url":"http://www.personal.psu.edu/dpl14/applets.html","timestamp":"2014-04-21T02:42:00Z","content_type":null,"content_length":"4565","record_id":"<urn:uuid:8362ad65-645e-4e02-ba82-ec3270c74674>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00655-ip-10-147-4-33.ec2.internal.warc.gz"} |
Physics Teaching Technology Resource: Learning Cycles on Motion
Website Detail Page
written by Eugenia Etkina
supported by the National Science Foundation
This learning cycle features 19 videotaped experiments, organized sequentially for introducing fundamentals of motion in introductory physics courses. Each video includes learning goal,
prior information needed to understand the material, and elicitation questions. Topics include constant velocity, constant acceleration, falling objects, projectiles, and the physics of
juggling. The instructional method is based on cognitive apprenticeship, in which students focus on the process of science by observing, finding patterns, modeling, predicting, testing,
and revising. The materials were designed to mirror the activities of scientists when they construct and apply knowledge.
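To give a feel for the constant-acceleration content these cycles cover (a sketch of my own, not taken from the videos): position under constant acceleration follows x(t) = x₀ + v₀t + ½at².

```python
def position(x0, v0, a, t):
    """Constant-acceleration kinematics: x(t) = x0 + v0*t + (1/2)*a*t**2."""
    return x0 + v0 * t + 0.5 * a * t * t

# A ball dropped from rest (g ~ 9.8 m/s^2 downward) falls about 4.9 m
# in the first second.
drop = position(0.0, 0.0, -9.8, 1.0)  # -> -4.9
```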
See Related Materials for links to the full collection by the same authors and for free access to an article explaining the theoretical basis for this instructional method.
Please note that this resource requires Quicktime.
Subjects:
Classical Mechanics
- Motion in One Dimension
= Acceleration
= Gravitational Acceleration
= Velocity
- Motion in Two Dimensions
= Projectile Motion
- Newton's Second Law
= Force, Acceleration
Education Foundations
- Cognition
= Cognition Development
Levels:
- High School
- Lower Undergraduate
Resource Types:
- Instructional Material
= Activity
= Problem/Problem Set
= Unit of Instruction
- Audio/Visual
= Movie/Animation
Appropriate Courses:
- Conceptual Physics
- Algebra-based Physics
- AP Physics
Categories:
- Activity
- Laboratory
- New teachers
Intended Users:
Access Rights:
Free access
© 2004 Rutgers University
1D motion, ISLE, Investigative Science Learning Environment, freefall, gravitational acceleration, gravity, kinematics, physics videos, projectiles, video clips
Record Cloner:
Metadata instance created November 18, 2011 by Caroline Hall
Record Updated:
November 13, 2012 by Caroline Hall
Last Update when Cataloged: September 19, 2008
AAAS Benchmark Alignments (2008 Version)
4. The Physical Setting
4F. Motion
• 3-5: 4F/E1a. Changes in speed or direction of motion are caused by forces.
• 3-5: 4F/E1bc. The greater the force is, the greater the change in motion will be. The more massive an object is, the less effect a given force will have.
• 6-8: 4F/M3a. An unbalanced force acting on an object changes its speed or direction of motion, or both.
• 9-12: 4F/H1. The change in motion (direction or speed) of an object is proportional to the applied force and inversely proportional to the mass.
• 9-12: 4F/H7. In most familiar situations, frictional forces complicate the description of motion, although the basic principles still apply.
• 9-12: 4F/H8. Any object maintains a constant speed and direction of motion unless an unbalanced outside force acts on it.
4G. Forces of Nature
• 3-5: 4G/E1. The earth's gravity pulls any object on or near the earth toward it without touching it.
9. The Mathematical World
9B. Symbolic Relationships
• 9-12: 9B/H4. Tables, graphs, and symbols are alternative ways of representing data and relationships that can be translated from one to another.
9C. Shapes
• 9-12: 9C/H3c. A graph represents all the values that satisfy an equation, and if two equations have to be satisfied at the same time, the values that satisfy them both will be found
where the graphs intersect.
12. Habits of Mind
12C. Manipulation and Observation
• 6-8: 12C/M3. Make accurate measurements of length, volume, weight, elapsed time, rates, and temperature by using appropriate devices.
12D. Communication Skills
• 6-8: 12D/M6. Present a brief scientific explanation orally or in writing that includes a claim and the evidence and reasoning that supports the claim.
Common Core State Standards for Mathematics Alignments
Standards for Mathematical Practice (K-12)
MP.2 Reason abstractly and quantitatively.
High School — Functions (9-12)
Interpreting Functions (9-12)
• F-IF.7.a Graph linear and quadratic functions and show intercepts, maxima, and minima.
• F-IF.9 Compare properties of two functions each represented in a different way (algebraically, graphically, numerically in tables, or by verbal descriptions).
Building Functions (9-12)
• F-BF.1.a Determine an explicit expression, a recursive process, or steps for calculation from a context.
• F-BF.3 Identify the effect on the graph of replacing f(x) by f(x) + k, k f(x), f(kx), and f(x + k) for specific values of k (both positive and negative); find the value of k given the graphs. Experiment with cases and illustrate an explanation of the effects on the graph using technology. Include recognizing even and odd functions from their graphs and algebraic expressions for them.
Linear, Quadratic, and Exponential Models (9-12)
• F-LE.3 Observe using graphs and tables that a quantity increasing exponentially eventually exceeds a quantity increasing linearly, quadratically, or (more generally) as a polynomial function.
This resource is part of 2 Physics Front Topical Units.
Kinematics: The Physics of Motion
Unit Title:
Motion in More Than One Dimension
19 videotaped experiments are organized sequentially here for introducing fundamentals of motion in introductory physics classes. The instructional method is based on cognitive
apprenticeship: students focus on the process of science by observing, finding patterns, modeling, testing, and revising. The author is a highly-respected professor of physics, who has
done extensive work in physics education research.
Link to Unit:
Kinematics: The Physics of Motion
Unit Title:
Special Collections
Link to Unit:
ComPADRE is beta testing Citation Styles!
<a href="http://www.compadre.org/precollege/items/detail.cfm?ID=11549">Etkina, Eugenia. Physics Teaching Technology Resource: Learning Cycles on Motion. September 19, 2008.</a>
E. Etkina, (2004), WWW Document, (http://paer.rutgers.edu/pt3/cycleindex.php?topicid=2).
E. Etkina, Physics Teaching Technology Resource: Learning Cycles on Motion (2004), <http://paer.rutgers.edu/pt3/cycleindex.php?topicid=2>.
Etkina, E. (2008, September 19). Physics Teaching Technology Resource: Learning Cycles on Motion. Retrieved April 18, 2014, from http://paer.rutgers.edu/pt3/cycleindex.php?topicid=2
Etkina, Eugenia. Physics Teaching Technology Resource: Learning Cycles on Motion. September 19, 2008. http://paer.rutgers.edu/pt3/cycleindex.php?topicid=2 (accessed 18 April 2014).
Etkina, Eugenia. Physics Teaching Technology Resource: Learning Cycles on Motion. 2004. 19 Sep. 2008. National Science Foundation. 18 Apr. 2014 <http://paer.rutgers.edu/pt3/
@misc{ Author = "Eugenia Etkina", Title = {Physics Teaching Technology Resource: Learning Cycles on Motion}, Volume = {2014}, Number = {18 April 2014}, Month = {September 19, 2008}, Year
= {2004} }
%A Eugenia Etkina
%T Physics Teaching Technology Resource: Learning Cycles on Motion
%D September 19, 2008
%U http://paer.rutgers.edu/pt3/cycleindex.php?topicid=2
%O video/quicktime
%0 Electronic Source
%A Etkina, Eugenia
%D September 19, 2008
%T Physics Teaching Technology Resource: Learning Cycles on Motion
%V 2014
%N 18 April 2014
%8 September 19, 2008
%9 video/quicktime
%U http://paer.rutgers.edu/pt3/cycleindex.php?topicid=2
ComPADRE offers citation styles as a guide only. We cannot offer interpretations about citations as this is an automated procedure. Please refer to the style manuals in the Citation Source Information area for clarifications.
Citation Source Information
The AIP Style presented is based on information from the AIP Style Manual.
The APA Style presented is based on information from APA Style.org: Electronic References.
The Chicago Style presented is based on information from Examples of Chicago-Style Documentation.
The MLA Style presented is based on information from the MLA FAQ.
Physics Teaching Technology Resource: Learning Cycles on Motion:
Is Based On ISLE: Investigative Science Learning Environment
This is the website for ISLE (Investigative Science Learning Environment), the instructional approach upon which the Rutgers learning cycles for introductory physics are based.
relation by Caroline Hall
See details...
Know of another related resource? Login to relate this resource to it. | {"url":"http://www.compadre.org/precollege/items/detail.cfm?ID=11549","timestamp":"2014-04-18T13:13:09Z","content_type":null,"content_length":"56795","record_id":"<urn:uuid:d7b713fb-9b02-4b39-b92e-067a5ce5e58f>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00124-ip-10-147-4-33.ec2.internal.warc.gz"} |
The Big Bang versus the 'Big Bounce'
July 6th, 2012 in Physics / Quantum Physics
Credit: Thinkstock
Two fundamental concepts in physics, both of which explain the nature of the Universe in many ways, have been difficult to reconcile with each other. European researchers developed a mathematical
approach to do so that has the potential to explain what came before the Big Bang.
According to Einstein's (classical) theory of general relativity, space is a continuum. Regions of space can be subdivided into smaller and smaller volumes without end.
The fundamental idea of quantum mechanics is that physical quantities exist in discrete packets (quanta) rather than in a continuum. Further, these quanta and the physical phenomena related to them
exist on an extremely small scale (Planck scale).
So far, the theories of quantum mechanics have failed to quantise gravity. Loop quantum gravity (LQG) is an attempt to do so. It represents space as a net of quantised intersecting loops of excited
gravitational fields called spin networks. This network viewed over time is called spin foam.
Not only does LQG provide a precise mathematical picture of space and time, it enables mathematical solutions to long-standing problems related to black holes and the Big Bang. Amazingly, LQG
predicts that the Big Bang was actually a 'Big Bounce', not a singularity but a continuum, where the collapse of a previous universe spawned the creation of ours.
European researchers initiated the Effective field theory for loop quantum gravity (EFTFORLQG) project to further develop this exciting candidate theory reconciling classical and quantum
descriptions of the Universe.
Scientists focused on the background-independent structure of LQG, which requires that the mathematics defining the system of spacetime be independent of any coordinate system or reference frame.
They applied both semi-classical approximations (Wentzel-Kramers-Brillouin approximations, WKBs) and effective field theory (a kind of approximate gravitational field theory) techniques to analyze a
classical geometry of space, study the dynamics of semi-classical states of spin foam, and apply the mathematical formulations to astrophysical phenomena such as black holes.
Results produced by the EFTFORLQG project team exceeded expectations. Scientists truly contributed to establishing LQG as a major contender for describing the quantum picture of space and time
compatible with general relativity with exciting implications for unravelling some of the major mysteries of the Universe.
Provided by CORDIS
"The Big Bang versus the 'Big Bounce'." July 6th, 2012. http://phys.org/news/2012-07-big_1.html | {"url":"http://phys.org/print260784395.html","timestamp":"2014-04-17T10:46:28Z","content_type":null,"content_length":"7035","record_id":"<urn:uuid:17f9a24a-b386-47db-9190-070e8c410b30>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00104-ip-10-147-4-33.ec2.internal.warc.gz"} |
Gloucester City Calculus Tutors
...One quick note about my cancellation policy, as it's different than most tutors: Cancel one or all sessions at any time, and there is NO CHARGE. Thank you for considering my services, and the
best of luck in all your endeavors! Warm regards, Dr.
14 Subjects: including calculus, physics, geometry, ASVAB
...I am finding more and more as I get older that critical thinking is rarely taught and greatly needed. I feel that getting experience teaching students one on one is the best way for me to have
an immediate impact. This will especially help to personalize the teaching experience and is an effective way to create a trusting relationship.
16 Subjects: including calculus, Spanish, physics, algebra 1
...I am a world-renowned expert in the computer-algebra system and language Maple. I have tutored discrete math many times. I've nearly completed a PhD in math.
11 Subjects: including calculus, statistics, ACT Math, precalculus
No one has more experience. No one has more expertise. Over the last 15 years I've worked for several different test-prep companies.
23 Subjects: including calculus, English, geometry, statistics
...I have experience with the uses of linear algebra and matrices. I have experience dealing with row reduction, multiplication of matrices. I have a bachelor's degree in mathematics and took a
symbolic logic course in college passing with an A.
13 Subjects: including calculus, geometry, GRE, algebra 1 | {"url":"http://www.algebrahelp.com/Gloucester_City_calculus_tutors.jsp","timestamp":"2014-04-18T00:15:08Z","content_type":null,"content_length":"24974","record_id":"<urn:uuid:5823bdc1-d8fb-497b-b81a-8038598cf8df>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00328-ip-10-147-4-33.ec2.internal.warc.gz"} |
PHA*: Finding the Shortest Path with A* in An Unknown Physical Environment
A. Felner, R. Stern, A. Ben-Yair, S. Kraus, and N. Netanyahu
We address the problem of finding the shortest path between two points in an unknown real physical environment, where a traveling agent must move around in the environment to explore unknown
territory. We introduce the Physical-A* algorithm (PHA*) for solving this problem. PHA* expands all the mandatory nodes that A* would expand and returns the shortest path between the two points.
However, due to the physical nature of the problem, the complexity of the algorithm is measured by the traveling effort of the moving agent and not by the number of generated nodes, as in standard
A*. PHA* is presented as a two-level algorithm, such that its high level, A*, chooses the next node to be expanded and its low level directs the agent to that node in order to explore it. We present
a number of variations for both the high-level and low-level procedures and evaluate their performance theoretically and experimentally. We show that the travel cost of our best variation is fairly
close to the optimal travel cost, assuming that the mandatory nodes of A* are known in advance. We then generalize our algorithm to the multi-agent case, where a number of cooperative agents are
designed to solve the problem. Specifically, we provide an experimental implementation for such a system. It should be noted that the problem addressed here is not a navigation problem, but rather a
problem of finding the shortest path between two points for future usage.
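The high level of PHA* is standard A*; as a reminder of what that component does, here is a generic A* sketch (my own, not the authors' implementation — PHA* additionally charges the agent's physical travel to each node it expands):

```python
import heapq
import math

def a_star(neighbors, coord, start, goal):
    """Plain A* over an explicit graph. `neighbors[u]` maps to {v: edge_cost};
    `coord[u]` gives coordinates for the straight-line (admissible) heuristic."""
    def h(u):
        (x1, y1), (x2, y2) = coord[u], coord[goal]
        return math.hypot(x1 - x2, y1 - y2)
    g = {start: 0.0}
    parent = {start: None}
    open_heap = [(h(start), start)]
    closed = set()
    while open_heap:
        _, u = heapq.heappop(open_heap)
        if u in closed:
            continue                       # stale heap entry
        if u == goal:
            path = []
            while u is not None:           # reconstruct s -> t path
                path.append(u)
                u = parent[u]
            return path[::-1], g[goal]
        closed.add(u)
        for v, w in neighbors[u].items():
            if g[u] + w < g.get(v, math.inf):
                g[v] = g[u] + w
                parent[v] = u
                heapq.heappush(open_heap, (g[v] + h(v), v))
    return None, math.inf

# Tiny example: a unit square; edge costs equal the Euclidean distances,
# so the straight-line heuristic is admissible.
coord = {'s': (0, 0), 'a': (1, 0), 'b': (0, 1), 't': (1, 1)}
nbrs = {'s': {'a': 1, 'b': 1}, 'a': {'s': 1, 't': 1},
        'b': {'s': 1, 't': 1}, 't': {'a': 1, 'b': 1}}
path, cost = a_star(nbrs, coord, 's', 't')  # cost -> 2.0
```

In PHA* terms, each `heappop` above is an expansion request, and the low level would then physically route the agent through already-explored nodes to reach that node before its neighbors become known.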
This page is copyrighted by AAAI. All rights reserved. Your use of this site constitutes acceptance of all of AAAI's terms and conditions and privacy policy. | {"url":"http://www.aaai.org/Library/JAIR/Vol21/jair21-019.php","timestamp":"2014-04-19T12:12:31Z","content_type":null,"content_length":"3420","record_id":"<urn:uuid:95027a18-6167-40ee-8815-cc7d742f12cd>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00027-ip-10-147-4-33.ec2.internal.warc.gz"} |
Trigonometry in Newtonian Dynamics HARD
August 26th 2009, 05:38 AM
Trigonometry in Newtonian Dynamics HARD
A particle of mass $m_1$ is travelling along the x-axis with velocity $(v_1,0,0)$. It collides elastically with a stationary particle of mass $m_2$. After the collision, the particle of mass $m_1$ is travelling with speed $v_1'$ at angle $a$ to the x-axis, and the particle of mass $m_2$ is travelling with speed $v_2'$ at angle $b$ to the x-axis.
The conservation of momentum laws for this collision are:
(i) $m_1 v_1'\sin(a) + m_2 v_2'\sin(b) = 0$
(ii) $m_1 v_1'\cos(a) + m_2 v_2'\cos(b) = m_1 v_1$
The conservation of energy equation is:
(iii) $m_1 (v_1')^2 + m_2 (v_2')^2 = m_1 v_1^2$
I have to show that these 3 equations imply:
$\sin^2(a+b) = \sin^2(b) + k\sin^2(a)$, where $k = \frac{m_1}{m_2}$
August 26th 2009, 11:24 AM
A particle of mass $m_1$ is travelling along the x-axis with velocity $(v_1,0,0)$. It collides elastically with a stationary particle of mass $m_2$. After the collision, the particle of mass $m_1$ is travelling with speed $v_1'$ at angle $a$ to the x-axis, and the particle of mass $m_2$ is travelling with speed $v_2'$ at angle $b$ to the x-axis.
The conservation of momentum laws for this collision are:
(i) $m_1 v_1'\sin(a) + m_2 v_2'\sin(b) = 0$
(ii) $m_1 v_1'\cos(a) + m_2 v_2'\cos(b) = {\color{red}m_1}v_1$
The conservation of energy equation is:
(iii) $m_1 (v_1')^2 + m_2 (v_2')^2 = m_1 v_1^2$
I have to show that these 3 equations imply:
$\sin^2(a+b) = \sin^2(b) + k\sin^2(a)$, where $k = \frac{m_1}{m_2}$
First step is to solve equations (i) and (ii) for $v_1'$ and $v_2'$. Multiply (i) by $\cos a$, multiply (ii) by $\sin a$, and subtract. That gives $m_2 v_2'\sin(a-b) = m_1 v_1\sin a$ (using the trig formula $\sin a\cos b - \cos a\sin b = \sin(a-b)$). Therefore $v_2' = \frac{k v_1\sin a}{\sin(a-b)}$. A similar procedure (multiplying (i) by $\cos b$ and (ii) by $\sin b$) gives $v_1' = -\frac{v_1\sin b}{\sin(a-b)}$.
Now square both of those expressions, and substitute those formulas for $v_1'^{\,2}$ and $v_2'^{\,2}$ into (iii). You'll find that after a bit of simplification and cancellation, it reduces to $\sin^2 b + k\sin^2 a = \sin^2(a-b)$. Note: $\sin^2(a-b)$, not $\sin^2(a+b)$ as (wrongly) stated in the question. Note also the missing $m_1$ in equation (ii). | {"url":"http://mathhelpforum.com/trigonometry/99279-trigonometry-newtonian-dynamics-hard-print.html","timestamp":"2014-04-19T12:28:45Z","content_type":null,"content_length":"14137","record_id":"<urn:uuid:2b5574b8-4441-49ab-9751-183aca764b97>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00377-ip-10-147-4-33.ec2.internal.warc.gz"}
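A quick numerical sanity check of the responder's derivation (my own sketch, not from the thread): given the mass ratio $k$ and the deflection angle $a$, solve the derived identity $\sin^2(a-b) = \sin^2 b + k\sin^2 a$ for $b$ by bisection, recover $v_1'$ and $v_2'$ from the closed forms above, and confirm that all three conservation equations hold.

```python
import math

def solve_collision(k, a, v1=1.0):
    """Given mass ratio k = m1/m2 (< 1 here) and deflection angle a of particle 1,
    find b (taken negative, i.e. below the axis) from
    sin^2(a-b) = sin^2(b) + k*sin^2(a) by bisection, then recover the outgoing
    speeds from the formulas derived in the answer."""
    f = lambda b: math.sin(a - b) ** 2 - math.sin(b) ** 2 - k * math.sin(a) ** 2
    lo, hi = -math.pi / 2, 0.0          # f(lo) < 0 < f(hi) for 0 < a < pi/2, k < 1
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(mid) < 0 else (lo, mid)
    b = 0.5 * (lo + hi)
    v1p = -v1 * math.sin(b) / math.sin(a - b)
    v2p = k * v1 * math.sin(a) / math.sin(a - b)
    return b, v1p, v2p

m1, m2 = 0.5, 1.0                       # so k = 0.5
a = math.radians(30)
b, v1p, v2p = solve_collision(m1 / m2, a)
# All three conservation laws should now hold (to rounding):
px = m1 * v1p * math.cos(a) + m2 * v2p * math.cos(b)   # ~ m1 * v1
py = m1 * v1p * math.sin(a) + m2 * v2p * math.sin(b)   # ~ 0
E  = m1 * v1p ** 2 + m2 * v2p ** 2                     # ~ m1 * v1^2
```

The x- and y-momentum equations hold identically for the closed-form speeds; the energy equation holds exactly when $b$ satisfies the identity, which is what the bisection enforces.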
Wolfram Demonstrations Project
Directional Derivatives and the Gradient
This Demonstration visually explains the theorem stating that the directional derivative of the function f at the point (x0, y0) in the direction of the unit vector u is equal to the dot product of the gradient of f at that point with u. If we denote the partial derivatives of f at this point by f_x and f_y and the components of the unit vector by u_1 and u_2, we can state the theorem as follows:
D_u f(x0, y0) = f_x u_1 + f_y u_2.
In this Demonstration there are controls for the angle that determines the direction vector u, and for the values of the partial derivatives f_x and f_y. The partial derivative values determine the tilt of the tangent plane to the graph of f at the point (x0, y0); this is the plane shown in the graphic. When you view the "directional derivative triangle", observe that its horizontal leg has length 1 (since u is a unit vector), and so the signed length of its vertical leg represents the value of the directional derivative D_u f. When you view the "partial derivative triangles", this signed vertical distance is decomposed as the sum f_x u_1 + f_y u_2. The first summand is represented by the vertical leg of the blue triangle; the second is represented by the vertical leg of the green triangle. The visual representation is most clear when the two summands have the same sign. | {"url":"http://www.demonstrations.wolfram.com/DirectionalDerivativesAndTheGradient/","timestamp":"2014-04-20T00:58:12Z","content_type":null,"content_length":"46028","record_id":"<urn:uuid:be724f7f-b23e-4ebe-ad43-c4aa02096c67>","cc-path":"CC-MAIN-2014-15/segments/1398223206770.7/warc/CC-MAIN-20140423032006-00105-ip-10-147-4-33.ec2.internal.warc.gz"}
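To illustrate the theorem concretely (my own example, not part of the Demonstration): for f(x, y) = x² + 3y, the gradient formula D_u f = f_x u_1 + f_y u_2 agrees with a finite-difference estimate of the slope along u.

```python
import math

def f(x, y):
    return x * x + 3.0 * y

# Analytic partials of this particular f at (1, 2): f_x = 2x = 2, f_y = 3.
x0, y0 = 1.0, 2.0
theta = math.radians(40)
u1, u2 = math.cos(theta), math.sin(theta)   # unit direction vector

d_grad = 2.0 * x0 * u1 + 3.0 * u2           # f_x*u1 + f_y*u2

h = 1e-6                                     # central difference along u
d_fd = (f(x0 + h * u1, y0 + h * u2) - f(x0 - h * u1, y0 - h * u2)) / (2 * h)
# d_grad and d_fd agree to rounding error
```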
Browse Course Communities
The applet is written in HTML5, so it is available on tablets. It has a clean look and feel with nine examples. There are exploration questions and an idea for a project.
Interactive learning materials and instructional modules demonstrating hands-on use of probability distributi
This is an interactive graphical representation of the universe of probability distributions.
Hands-on probability distribution game – players are asked to match a set of common problems, case-stud | {"url":"http://www.maa.org/programs/faculty-and-departments/course-communities/browse?term_node_tid_depth=All&page=2&device=mobile","timestamp":"2014-04-20T03:37:08Z","content_type":null,"content_length":"45266","record_id":"<urn:uuid:685e4175-a910-4e55-b837-106a2bb4e736>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00436-ip-10-147-4-33.ec2.internal.warc.gz"} |
Lie algebra valued 1-forms and pointed maps to homogeneous spaces
Let $G$ be a Lie group with Lie algebra $\mathfrak{g}$, and let $(M,p_0)$ be a simply connected pointed smooth manifold. A $\mathfrak{g}$-valued 1-form $\omega$ on $M$ can be seen as a connection
form on the trivial principal $G$-bundle $G\times M\to M$. Assume that this connection is flat. Then, parallel transport along a path $\gamma$ in $M$ from $p_0$ to $p$ determines an element in $G$,
which actually depends only on the endpoint $p$ since we are assuming that the connection is flat and that $M$ is simply connected. Thus we get a pointed map $\Phi_\omega:(M,p_0)\to (G,e)$, where $e$
is the identity element of $G$. Now, the Lie group $G$ carries a natural flat $\mathfrak{g}$-connection on the trivial princiapl $G$-bundle $G\times G\to G$, namely the one given by the Maurer-Cartan
1-form $\xi_G$. Using $\Phi_\omega$ to pull-back $\xi_G$ on $M$ we get a $\mathfrak{g}$-valued 1-form $\omega$ on $M$ which, no surprise, is $\omega$ itself. So one has a natural bijection between $\
{\omega\in \Omega^1(M,\mathfrak{g}) | d\omega+\frac{1}{2}[\omega,\omega]=0\}$ and pointed maps from $(M,p_0)$ to $(G,e)$.
(all this is well known; I'm recalling it only for set up)
If now $\omega$ is not flat but has holonomy group at $p_0$ given by the Lie subgroup $H$ of $G$, then we can verbatim repeat the above construction to get a pointed map $\Phi_\omega:(M,p_0)\to (G/H,
[e])$. I therefore suspect by analogy that there should be a natural bijection
$ \{\omega\in \Omega^1(M,\mathfrak{g}) | \text{some condition}\} \leftrightarrow C^\infty((M,p_0),(G/H,[e])), $
but I've been so far unable to see whether this is actually true, nor to make explicit what "some condition" should be (it should be something related to the Ambrose-Singer holonomy theorem and to
Narasimhan-Ramanan results on universal connections, but I've not been able to see this neatly, yet). I think that despite my unability to locate a precise statement for the above, this should be
well known, so I hope you will be able to address me to a reference.
dg.differential-geometry reference-request
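As a small numerical companion to the flat/curved dichotomy in the question (my own sketch, not from the thread): for an abelian structure group such as SO(2), with generator J of so(2), the bracket term vanishes, so flatness reduces to closedness and the holonomy of the transport equation U' = −ω(γ')U around a loop reduces to exponentiating the ordinary line integral ∮ω. A closed (hence flat) ω then has trivial holonomy around the unit circle, while a non-closed ω does not:

```python
import math

def loop_integral(omega, n=20000):
    """Midpoint-rule line integral of a scalar 1-form omega(x, y, dx, dy) around
    the unit circle. For an abelian group the holonomy is exp(-integral * J),
    i.e. a rotation by minus this number."""
    h = 2 * math.pi / n
    total = 0.0
    for i in range(n):
        t = (i + 0.5) * h
        x, y = math.cos(t), math.sin(t)
        dx, dy = -math.sin(t), math.cos(t)   # velocity of the unit-speed circle
        total += omega(x, y, dx, dy) * h
    return total

# Flat example: omega = (x dx + y dy) J = d(r^2/2) J is exact, so its curvature
# vanishes and the holonomy around any loop is the identity (rotation by 0).
flat = loop_integral(lambda x, y, dx, dy: x * dx + y * dy)            # -> ~0.0

# Curved example: omega = a (x dy - y dx) J has curvature 2a dx^dy J; by Stokes
# the loop integral is 2a * (enclosed area) = 2*pi*a, a nontrivial rotation.
a = 0.3
curved = loop_integral(lambda x, y, dx, dy: a * (x * dy - y * dx))    # -> ~2*pi*a
```

In the non-abelian case the exponential is replaced by a path-ordered exponential, but the flat case still yields trivial holonomy around contractible loops, which is exactly what makes the map Φ_ω in the question well defined.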
Not something like curvature$\in \Omega^2(M,\mathfrak h)$? Good question! – Theo Johnson-Freyd Dec 24 '11 at 18:46
Hi Theo, that was my first guess and what I had hoped, but Ambrose-Singer seems to be a bit more refined than that. (but it could also be me misreading their result) – domenico fiorenza Dec 24 '11
at 21:42
@domenico: One thing I forgot to point out in my 'answer' below (which, I know, wasn't really an answer to your actual question) is that you should realize that the maps $f:(M,p_0)\to \bigl(G/H,[e]
\bigr)$ that you get by this process are, by definition, ones that lift to a map $F:(M,p_0)\to (G,e)$. In general, this is far from being all the maps from $(M,p_0)$ to $\bigl(G/H,[e]\bigr)$.
(Simple-connectedness of $M$ is a red herring, it doesn't help in general.) This should already tell you that there is something fishy about what you are trying to do. – Robert Bryant Dec 27 '11 at
1 Answer
The question you are asking is a very basic one in the theory of what Élie Cartan called "the method of the moving frame" (in the original French, "la méthode du repère mobile"), so you
should be looking that up. Cartan's basic goal was to understand maps of manifolds into homogeneous spaces, say, $f:M\to G/H$, by associating to each such $f$, in a canonical way, a
'lifting' $F:M\to G$ in such a way that the lifting of $\hat f = g\cdot f$ would be $\hat F = gF$ for all $g\in G$. If one could do such a thing, then one could tell whether two maps
$f_1,f_2:M\to G/H$ differed by an action of $G$ by checking whether $F_1^*(\gamma) = F_2^*(\gamma)$, where $\gamma$ is the canonical $\frak{g}$-valued left-invariant $1$-form on $G$.
It turns out that it is not always possible to do this in a uniform way for all smooth maps $f:M\to G/H$ (even in the pointed category, which modifies the problem a little bit, but not
by much). However, if one restricts attention to the maps satisfying some appropriate open, generic conditions, then there often is a canonical lifting $F$ for those $f$ belonging to
this set of mappings, and it can be characterized exactly by requiring that the $1$-form $\omega_F = F^*(\gamma)$ satisfy some conditions. Working out these conditions in specific cases
is what is known as the "method of the moving frame".
There's no point in trying to give an exposition of the theory here because it is covered in many texts and articles, but let me just give one specific example that should be very
familiar, the Frenet frame for Euclidean space curves.
Here the group $G$ is the group of Euclidean motions (translations and rotations) of $\mathbb{E}^3$ and $H$ is the subgroup that fixes the origin $0\in\mathbb{E}^3$. The elements of $G$
can be thought of as quadruples $(x,e_1,e_2,e_3)$ where $x\in\mathbb{E}^3$ and $e_1,e_2,e_3$ are an orthonormal basis of $\mathbb{E}^3$.
When $f:\mathbb{R}\to\mathbb{E}^3$ is nondegenerate, i.e., $f'(t)\wedge f''(t)$ is nonvanishing, there is a canonical lifting $F:\mathbb{R}\to G$ given by $$ F(t) = \bigl(f(t),e_1(t),e_2
(t),e_3(t)\bigr) $$ that is characterized by conditions on $\omega = F^*(\gamma)$ that are phrased as follows: First, $e_1\cdot df$ is a positive $1$-form while $e_2\cdot df = e_3\cdot
df = 0$, and, second, $e_2\cdot de_1$ is a positive $1$-form while $e_3\cdot de_1 = 0$.
These conditions take the more familiar form $$ df = e_1(t)\ v(t)dt,\qquad de_1 = e_2(t)\ \kappa(t)v(t)dt, $$ for some positive functions $v$ and $\kappa$ on $\mathbb{R}$, and they imply
$$ de_2 = -e_1(t)\ \kappa(t)v(t)dt + e_3(t)\ \tau(t)v(t)dt, \qquad de_3 = -e_2(t)\ \tau(t)v(t)dt, $$ for some third function $\tau$ on $\mathbb{R}$.
Conversely, any $F:\mathbb{R}\to G$ that satisfies the above conditions on $\omega = F^*(\gamma)$ is the canonical (Frenet) lift of a (unique) nondegenerate $f:\mathbb{R}\to\mathbb{E}^3$.
Without the nondegeneracy condition, the uniqueness fails. Just consider the case in which the image of $f$ is a straight line.
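To make the Frenet discussion concrete (an illustrative sketch of my own, not from the answer): for the helix f(t) = (r cos t, r sin t, c t), the standard formulas κ = |f′ × f″|/|f′|³ and τ = (f′ × f″)·f‴ / |f′ × f″|² give the constant values κ = r/(r² + c²) and τ = c/(r² + c²), which the code below recovers from the derivatives.

```python
import math

def cross(u, v):
    return (u[1]*v[2] - u[2]*v[1], u[2]*v[0] - u[0]*v[2], u[0]*v[1] - u[1]*v[0])

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def curvature_torsion(fp, fpp, fppp):
    """Curvature and torsion of a space curve from its first three derivatives:
    kappa = |f' x f''| / |f'|^3,  tau = (f' x f'') . f''' / |f' x f''|^2."""
    cp = cross(fp, fpp)
    kappa = math.sqrt(dot(cp, cp)) / math.sqrt(dot(fp, fp)) ** 3
    tau = dot(cp, fppp) / dot(cp, cp)
    return kappa, tau

# Helix with radius r and pitch parameter c; derivatives written out exactly.
r, c, t = 2.0, 1.0, 0.7
fp   = (-r * math.sin(t),  r * math.cos(t), c)
fpp  = (-r * math.cos(t), -r * math.sin(t), 0.0)
fppp = ( r * math.sin(t), -r * math.cos(t), 0.0)
kappa, tau = curvature_torsion(fp, fpp, fppp)
# kappa -> r/(r^2 + c^2) = 0.4,  tau -> c/(r^2 + c^2) = 0.2, independent of t
```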
There are similar, but, of course, more elaborate, examples for other homogeneous spaces and higher dimensional $M$, but you should go look at the literature if you are interested in this.
Added in response to request in the comment: There are several excellent sources for the method of the moving frame. I'll just list (alphabetically by author) some of my favorites, which
means the ones that I find most felicitous:
• Élie Cartan, "La théorie des groupes finis et continus et la géométrie différentielle traitées par la méthode du repère mobile", Paris: Gauthier-Villars, 1937. (His style takes some
getting used to, and so many say that Cartan is unreadable, but, once you get used to the way he writes, there's nothing like Cartan for clarity and concision. I certainly have
learned more from reading Cartan than from any other source.)
• Shiing-shen Chern, W. H. Chen, and K. S. Lam, "Lectures on Differential Geometry", Series on University Mathematics, World Scientific Publishing Company, 1999. (Chern learned from
Cartan himself, and was a master at calculation using the method.)
• Jeanne Clelland, 1999 MSRI lectures on Lie groups and the method of moving frames, available at http://math.colorado.edu/~jnc/MSRI.html. (A nice, short elementary introduction.)
• Mark Green, "The moving frame, differential invariants and rigidity theorems for curves in homogeneous spaces", Duke Math. J. Volume 45, Number 4 (1978), 735-779. (Points out some of
the subtleties in the 'method' and that it sometimes has to be supplemented with other techniques.)
• Phillip Griffiths, " "On Cartan’s method of Lie groups and moving frames as applied to uniqueness and existence questions in differential geometry", Duke Math. J. 41 (1974): 775–814.
(Lots of good applications and calculations.)
• Thomas Ivey and J. M. Landsberg, "Cartan for Beginners: Differential Geometry Via Moving Frames and Exterior Differential Systems", Graduate Studies in Mathematics, AMS, 2003. (Also
contains related material on how to solve the various PDE problems that show up in the applications of moving frames.)
I think that this is enough to go on. I won't try to go into some modern aspects, such as the work of Peter Olver and his coworkers and students, who have had some success in turning
Cartan's method into an algorithm under certain circumstances, or the more recent work of Boris Doubrov and his coworkers on applying Tanaka-type ideas to produce new approaches to the
moving frame in certain cases.
Hi Robert, thanks a lot for your answer, that's very informative and well written. Could you edit it adding a couple of your favourite references for a modern treatment of the method
of the moving frame at the end of it? That would be really helpful: the literature on the subject is so vast that having a good advice on what to read really makes the difference.
Thanks. – domenico fiorenza Dec 25 '11 at 18:16
I shouldn't play favorites, since I know everyone (including Robert) listed above, except for Cartan himself, but I really liked learning this stuff from Griffiths's paper. It
clarified and unified a lot of things I had learned elsewhere but couldn't quite put it all together. – Deane Yang Dec 27 '11 at 3:02
Not the answer you're looking for? Browse other questions tagged dg.differential-geometry reference-request or ask your own question. | {"url":"http://mathoverflow.net/questions/84218/lie-algebra-valued-1-forms-and-pointed-maps-to-homogeneous-spaces","timestamp":"2014-04-21T12:46:52Z","content_type":null,"content_length":"64210","record_id":"<urn:uuid:8027b193-f5a4-4492-b4b4-519d4e35b43e>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00604-ip-10-147-4-33.ec2.internal.warc.gz"} |
Process Algebra,'' Cambridge Univ
- Concur ’95: Concurrency Theory, volume 962 of Lecture Notes in Computer Science, 1995
"... In this paper we present a solution to the long-standing problem of characterising the coarsest liveness-preserving pre-congruence with respect to a full (TCSP-inspired) process algebra. In
fact, we present two distinct characterisations, which give rise to the same relation: an operational one base ..."
Cited by 58 (0 self)
In this paper we present a solution to the long-standing problem of characterising the coarsest liveness-preserving pre-congruence with respect to a full (TCSP-inspired) process algebra. In fact, we
present two distinct characterisations, which give rise to the same relation: an operational one based on a De Nicola-Hennessy-like testing modality which we call should-testing, and a denotational
one based on a refined notion of failures. One of the distinguishing characteristics of the should-testing pre-congruence is that it abstracts from divergences in the same way as Milner’s observation
congruence, and as a consequence is strictly coarser than observation congruence. In other words, should-testing has a built-in fairness assumption. This is in itself a property long sought-after; it
is in notable contrast to the well-known must-testing of De Nicola and Hennessy (denotationally characterised by a combination of failures and divergences), which treats divergence as catastrophic and hence is incompatible with observation congruence. Due to these characteristics, should-testing supports modular reasoning and allows one to use the proof techniques of observation congruence, but
also supports additional laws and techniques.
, 2000
"... Traditionally, in process calculi, relations over open terms, i.e., terms with free process variables, are defined as extensions of closed-term relations: two open terms are related if and only
if all their closed instantiations are related. Working in the context of bisimulation, in this paper we s ..."
Cited by 20 (0 self)
Traditionally, in process calculi, relations over open terms, i.e., terms with free process variables, are defined as extensions of closed-term relations: two open terms are related if and only if
all their closed instantiations are related. Working in the context of bisimulation, in this paper we study a different approach; we define semantic models for open terms, so-called conditional
transition systems, and define bisimulation directly on those models. It turns out that this can be done in at least two different ways, one giving rise to De Simone's formal hypothesis bisimilarity
and the other to a variation which we call hypothesis-preserving bisimilarity (denoted t fh and t hp, respectively). For open terms, we have (strict) inclusions t fh /t hp / t ci (the latter denoting
the standard "closed instance" extension); for closed terms, the three coincide. Each of these relations is a congruence in the usual sense. We also give an alternative characterisation of t hp in
terms of nonconditional transitions, as substitution-closed bisimilarity (denoted t sb). Finally, we study the issue of recursion congruence: we prove that each of the above relations is a congruence
with respect to the recursion operator; however, for t ci this result holds under more restrictive conditions than for t fh and t hp.
- Information and Computation , 2001
"... We investigate criteria to relate specifications and implementations belonging to conceptually different levels of abstraction. For this purpose, we introduce the generic concept of a vertical
implementation relation, which is a family of binary relations indexed by a refinement function that maps a ..."
Cited by 6 (0 self)
We investigate criteria to relate specifications and implementations belonging to conceptually different levels of abstraction. For this purpose, we introduce the generic concept of a vertical
implementation relation, which is a family of binary relations indexed by a refinement function that maps abstract actions onto concrete processes and thus determines the basic connection between the
abstraction levels. If the refinement function is the identity, the vertical implementation relation collapses to a standard (horizontal) implementation relation. As desiderata for vertical
implementation relations we formulate a number of congruence-like proof rules (notably a structural rule for recursion) that offer a powerful, compositional proof technique for vertical
implementation. As a candidate vertical implementation relation we propose vertical bisimulation. Vertical bisimulation is compatible with the standard interleaving semantics of process algebra; in
fact, the corresponding horizontal relation is rooted weak bisimulation. We prove that vertical bisimulation satisfies the proof rules for vertical implementation, thus establishing the consistency
of the rules. Moreover, we define a corresponding notion of abstraction that strengthens the intuition behind vertical bisimulation and also provides a decision algorithm for finite-state systems. Finally, we give a number of small examples to demonstrate the advantages of vertical implementation in general and vertical bisimulation in particular.
Finally, we give a number of small examples to demonstrate the advantages of vertical implementation in general and vertical bisimulation in particular. 1 | {"url":"http://citeseerx.ist.psu.edu/showciting?cid=2492056","timestamp":"2014-04-19T15:39:43Z","content_type":null,"content_length":"19559","record_id":"<urn:uuid:c03f7a5d-805a-40de-905e-960228d173d7>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00607-ip-10-147-4-33.ec2.internal.warc.gz"} |
Given a subset $S$ of a group $G$, its normalizer $N(S)=N_G(S)$ is the subgroup of $G$ consisting of all elements $g\in G$ such that $g S = S g$, i.e. for each $s\in S$ there is $s'\in S$ such that
$g s=s'g$.
If $S$ is itself a subgroup, then $S$ is a normal subgroup of $N_G(S)$; moreover $N_G(S)$ is the largest subgroup of $G$ such that $S$ is a normal subgroup of it. Of course, if $S$ is itself a normal
subgroup of $G$, then its normalizer coincides with the whole of $G$.
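The definition is easy to check computationally on a small example. The following sketch (my own illustration, not part of the entry) computes $N_G(S)$ by brute force in the symmetric group $S_3$, with permutations encoded as tuples:

```python
from itertools import permutations

# Elements of S_3 as tuples p with p[i] = image of i.
G = list(permutations(range(3)))

def compose(p, q):
    # (p ∘ q)(i) = p(q(i))
    return tuple(p[q[i]] for i in range(3))

def normalizer(G, S):
    # N_G(S) = {g in G : gS = Sg as subsets of G}
    S = set(S)
    return [g for g in G
            if {compose(g, s) for s in S} == {compose(s, g) for s in S}]

e, t = (0, 1, 2), (1, 0, 2)   # identity and the transposition (0 1)
S = [e, t]                    # the subgroup generated by (0 1)

print(normalizer(G, S))       # [(0, 1, 2), (1, 0, 2)]
print(len(normalizer(G, G)))  # 6
```

Here the two-element subgroup $S$ is self-normalizing (its normalizer is $S$ itself), consistent with $S$ not being normal in $S_3$, while the whole group, being normal in itself, has normalizer all of $S_3$.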
Each group $G$ embeds into the symmetric group $Sym(G)$ on the underlying set of $G$ by the left regular representation $g\mapsto l_g$ where $l_g(h) = g h$. The image is isomorphic to $G$ (that is,
the left regular representation of a discrete group is faithful). The normalizer of the image of $G$ in $Sym(G)$ is called the holomorph. This solves the elementary problem of embedding a group into
a bigger group $K$ in which every automorphism of $G$ is obtained by restricting (to $G$) an inner automorphism of $K$ that fixes $G$ as a subset of $K$.
Revised on November 1, 2013 06:40:25 by
Urs Schreiber | {"url":"http://www.ncatlab.org/nlab/show/normalizer","timestamp":"2014-04-18T18:56:30Z","content_type":null,"content_length":"27229","record_id":"<urn:uuid:0f93d861-317d-4ebc-bdce-a4fcd5f1bac3>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00083-ip-10-147-4-33.ec2.internal.warc.gz"} |
Is Being Rational The Same As Being Logical?
Here’s my write-up of last night’s event — please add comments where I’ve missed things, got things wrong etc. In particular, this is my recollection of what Wilfrid said and isn’t based on his
notes, so apologies if I’ve mangled anything.
The topic was introduced by Prof Wilfrid Hodges in what transpired to be his last engagement as a member of Queen Mary College staff. He opened with brief descriptions of Aristotle’s syllogistic
logic, and the tradition that followed him. In particular he pointed out that this tradition often claimed that logic — broadly, the formal rules for making arguments or deductions — was at the very
heart of all rationality.
Aristotle’s logical rules were good, but are they sufficient for rationality? It would seem that computer progammes, which are purely logical, would count as rational; the Curry-Howard Correspondence
is a technical result in computer science that exhibits a clear relationship between programmes and arguments, or at least the special case of those mathematical arguments known as “proofs”. But is
this satisfactory? Or is rationality something more?
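To make the Curry-Howard idea concrete, here is a toy illustration of my own (not from the talk): a program whose type has exactly the shape of a Barbara-style syllogism (from A implies B and B implies C, conclude A implies C), and whose body is the proof, namely plain function composition.

```python
from typing import Callable, TypeVar

A, B, C = TypeVar("A"), TypeVar("B"), TypeVar("C")

def syllogism(f: Callable[[A], B], g: Callable[[B], C]) -> Callable[[A], C]:
    # The type reads as the tautology "if A implies B and B implies C,
    # then A implies C"; the program inhabiting the type is its proof.
    return lambda a: g(f(a))

describe = lambda count: f"{count} sheep"   # int -> str
size = len                                  # str -> int
proof = syllogism(describe, size)           # int -> int, built by the rule
print(proof(3))                             # 7, i.e. len("3 sheep")
```

In this reading a well-typed program really is a (constructive) proof of its type, which is the sense in which purely logical programs look "rational"; as the discussion goes on to argue, that formal core is still some distance from everything we mean by rationality.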
The American philosopher C S Peirce put forward a “proof” that Aristotle’s rules for logic were sufficient for rationality. His argument proceeded by starting from an intuitive idea about how we
reason and showing that this simply corresponds to a syllogism. But the act of reducing a real-world argument to a formal scheme is very messy and not at all straightforward. What’s more, real-life
decisions are often not based on a specific set of well-defined hypotheses; we’d have to include a huge number of assumptions to write out the decision-making process in formal terms. That’s perhaps
why it’s possible for two rational people who start with the same information and reach different conclusions; their previous experiences will come to bear in all kinds of complex ways.
What’s more, all this would take a huge amount of time, and in real life we don’t have that kind of time. There are also questions of epistemology — the status of our knowledge — and hence
probability, too. We don’t need pure textbook logic to be rational but “bounded rationality“, which operates within human limits; perhaps this just is rationality.
In the ensuing discussion we touched on a wide range of topics. A few that I particularly remember were:
• The contention in some parts of Continental philosophy that “rationality” is culturally specific, and isn’t an absolute. I have some thoughts on this but as I didn’t get into them at the time
I’ll hold my fire for another post.
• On a related note, there was some discussion of whether reason, however defined, was the only valid way of thinking. I was reminded of William S Burroughs's injunction to "abandon all rational
thought” in the quest for creativity and new ideas. We didn’t explore that too much, or the relationship it has to the pragmatic process of forming our basic assumptions, which can’t be a purely
rational process.
• The idea of “getting rid of your assumptions” or minimising them, and the impossibility of really doing so, which I think were recognised on all sides. This is one reason why rationality can’t be
just logic. In order to infer very much using your logical system you have to have some assumptions to get you started.
• The existence of “deviant” logics different from Aristotle’s, including Buddhist logic. The Stanford Encyclopaedia gives a gloss on the Tetralemma which is probably the most relevant part.
Wilfrid mentioned studies of the Buddhist tradition by Western logicians indicating that the actual “laws of thought” being formulated were not as radically different from Aristotle’s as first
appeared; it was more of a superficial difference in presentation rather than a radical difference in rationality.
• On a similar tack I also mentioned Brouwer’s intuitionism, which is a logic that doesn’t assume the Law of the Excluded Middle. But as Wilfrid pointed out, this is a special logic designed to
capture not “truth” but “provability”, and just because something is true doesn’t mean we can prove it (if it did, we’d be omniscient).
• The Wason Selection Test, and the fact that even trained logicians may find logical rules hard to apply in quite simple cases; but also the fact that the way we choose to reason is fitted to
different situations (in this case, a psychology experiment), lending a sociological aspect to the problem.
Wilfrid summed up with the sentiment that, while people are not universally rational, and while we may not yet understand everything about rationality (or even everything about logic) we as a species
can be optimistic about our chances of putting our relationships on ever more rational footings in the future, although that outcome was by no means certain.
It was a great discussion and I know I missed a lot out — as I said above, please feel free to add your own recollections, or follow up on things not said, below. In particular there were some
interesting things said about Leibniz and Wittgenstein (not at the same time) that I don’t now recall well enough to try to write them down.
If you’d like to read something by Wilfrid and haven’t the necessary background for his mathematical work, this paper is relatively accessible and touches on many of the topics we talked about. If
you enjoyed the Wason Selection Test but found it a bit easy then this is a real challenge. | {"url":"http://bigi.org.uk/blog/2008/10/01/is-being-rational-the-same-as-being-logical/","timestamp":"2014-04-18T11:30:09Z","content_type":null,"content_length":"19950","record_id":"<urn:uuid:28c873a8-47ed-4643-ac96-0af9457e28d1>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00263-ip-10-147-4-33.ec2.internal.warc.gz"} |
Vipers amazing theroy on life in the universe!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
The universe is infinite so that means that there are an infinite number of planets.
Prove it. And infinite spacetime != infinite matter.
Some of these planets are known to have no life on them so there must be a finite number of planets with no life.
Divide infinite by finite and you get a number as close to zero as can possible be.
Incorrect mathematics.
So theres no life on any planet.
Is this a joke thread?[o)] | {"url":"http://www.physicsforums.com/showthread.php?p=32626","timestamp":"2014-04-19T02:12:54Z","content_type":null,"content_length":"34927","record_id":"<urn:uuid:6be37bde-c53f-45d3-9709-c3b09a67b13f>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00531-ip-10-147-4-33.ec2.internal.warc.gz"} |
A Normal Distribution Inequality
Let $n(x) = \frac{1}{\sqrt{2\pi}} e^{-\frac{x^2}{2}}$, and $N(x) = \int_{-\infty}^x n(t)dt$. I have plotted the curves of both sides of the following inequality. The graph shows that the following inequality may be true. $$f(x)\equiv (x^2+1)N + xn-(xN+n)^2 > N^2$$ where the dependencies of $n$ and $N$ on $x$ are absorbed into the function symbols. However, I have not succeeded in providing a full proof except for $x$ above some positive number, with the help of various Mill's Ratio $\frac{m}{n}$ bounds.
I am asking for help in proving the above inequality or providing an $x$ that violates it. Judging from the aforementioned plot I am pretty confident of the validity of the inequality.
The left hand side is actually the variance of a truncated normal distribution. I am trying to give it a lower bound. More explicitly, $$f(x)\equiv\int_0^\infty t^2n(t+x)dt-\Big(\int_0^\infty t\,n
(t+x)dt\Big)^2>\Big(\int_0^\infty n(t-x)dt\Big)^2.$$
The form of the inequality is probably more transparent if we set $m=1-N$ and the inequality is equivalent to $$g(x)\equiv m[(x^2+1)(1-m)+2xn]-n(x+n) > 0.$$
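Before looking for a proof, the conjecture is easy to probe numerically. The sketch below (my own, standard library only, and of course no substitute for a proof) evaluates the gap $f(x)-N(x)^2$ on a grid:

```python
import math

def n(x):
    # standard normal density
    return math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

def N(x):
    # standard normal CDF via the error function
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def gap(x):
    # f(x) - N(x)^2, which the conjecture asserts is positive
    f = (x * x + 1) * N(x) + x * n(x) - (x * N(x) + n(x)) ** 2
    return f - N(x) ** 2

xs = [i / 100 for i in range(-500, 501)]
print(min(gap(x) for x in xs) > 0)        # True on [-5, 5]
print(gap(0), 0.25 - 1 / (2 * math.pi))   # both ≈ 0.0908
```

The gap stays positive on the grid but tends to $0$ as $|x|\to\infty$, so such a check cannot rule out a failure by an exponentially small amount in the tails; beyond roughly $|x|>8$ the computed gap is dominated by floating-point rounding anyway.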
Incidentally, I have proved that $N$ is the upper bound of the left hand side of the first inequality, i.e. $$(x^2+1)N + xn-(xN+n)^2 < N$$ or $$h(x)\equiv x^2 m(1-m)-n[x(1-2m)+n]<0$$ as follows.
$h$ is an even function and $h(0)<0$, so we only need to consider $x>0$. From the integration by parts of $m(x)$ and dropping a negative term, we have $$xm<n, \forall x>0.$$ The first term of $h(x)$ is then bounded and \begin{eqnarray} h(x)&<&x(1-m)n-n[x(1-2m)+n] \\ &=& n(xm-n) \\ &<& 0, \end{eqnarray} where the last inequality is obtained by using $xm<n$ again.
The lower bound of $f(x)$ appears to be more difficult since it requires a tighter approximation of $m$ without singularity at $x=0$. I can prove the lower bound for $x$ greater than some positive number. I know I need to stitch the small and large regions of positive $x$ together, but I have not carried the detailed computation out yet. Does anyone have a more clever trick to accomplish this task?
Here is the proof for $g(x)>0, \forall x\ge\sqrt{\frac{4}{3}}$. \begin{eqnarray} \frac{dg}{dx} &=& 2n[xr(1-m)-2(0.5-m)] \\ &=& 2n^2[(xr-1)n^{-1}+(2-xr)r] \end{eqnarray} where $r:=\frac{m}{n}$. In what follows we will use the first expression. The second expression is an alternative which I keep just for maybe future reference. Since $$r<\frac{1}{x}\Big(1-\frac{1}{x^2+3}\Big), \forall x>0,$$ \begin{eqnarray} \frac{dg}{dx} &<& \frac{2n^2}{x^2+3}(-n^{-1}+(x^2+4)r) \\ &<& \frac{2n^2}{x^2+3}\Big(-n^{-1}+x\Big(1+\frac{4}{x^2}\Big)\Big), \end{eqnarray} where on the last line we apply the $r$ bound again. Choose $x\ge x_0:=\sqrt{\frac{4}{3}}$; then $$n^{-1}-x\Big(1+\frac{4}{x^2}\Big)>n^{-1}-4x.$$ It can be shown that $n^{-1}-4x$ is positive at $x=x_0$ and its derivative is always positive for $x\ge x_0$. We thus have $$\frac{dg}{dx}<0, \forall x\ge x_0.$$ It is easy to see that $g(x)>0$ for sufficiently large $x$. Therefore, $g(x)>0, \forall x\ge x_0$.
When you say "prove the following inequality" - do you mean that you already know the inequality is true? If so, what is your source? If not, then what evidence do you have for why the inequality
might be true? – Yemon Choi Jul 3 '13 at 4:10
@WillJagy et al: I have edited my original post to describe the problem with accuracy, provide reason for my speculation of its validity, and give more context. This is not an easy problem. There
is a subject called normal approximation with Stein's Method. Besides, browsing through the forum, I have seen several other more trivial looking but legitimate posts. I would like to ask for the
reason for deeming this question "off topic" and a review of the classification. – Hansen Jul 3 '13 at 13:22
I am inclined to believe this may not be easy, as you say, but if you have a write-up of the results you've obtained so far that you can link to, you might have more success in convincing others
that the problem is definitely non-trivial. (This is too far from my areas of research for me to weigh in with any authority, so I won't vote to reopen. I suspect however it might be MO-worthy.) –
Todd Trimble♦ Jul 3 '13 at 14:57
Posted this meta.mathoverflow.net/questions/223/requests-for-reopen-votes/… on meta – Yemon Choi Jul 3 '13 at 16:55
@Hans: Note that the argument for the upper bound, as given, isn't quite correct since $x^2 m (1-m) < n x (1-m)$ holds only for nonnegative $x$. You are saved by the fact that $h(x)$ happens to be
an even function, so it suffices to consider only nonnegative $x$. Also, $g$ is even. Could you please edit to specify precisely what truncation of a normal you are considering. Perhaps a somewhat
more indirect approach might yield something if we know a little more about the problem you are considering. Cheers. – cardinal Jul 5 '13 at 14:19
4 Answers
Yes, the conjectured lower bound is true and can be proved using fairly simple, if somewhat tedious, analysis of derivatives.
First define $$ b := f - N^2 = x(xN + n) - (xN + n)^2 + N(1-N)\>. $$ The plan is to show that $b$ is a decreasing function bounded below by zero.
Let $u := x N + n$, so that $b = (x-u)u + (1-N)N = (x-u)u + (1-u')u'$. Since $u(-x) = -(x-u(x))$ and $N(-x) = 1-N(x)$, $b$ is an even function and so we restrict ourselves to the case $x
\geq 0$.
Observe that $u' = N$, $u'' = n$, and $b(0) = (1/4) - (1/2\pi) > 0$.
By using the classical inequalities, valid for $x > 0$, $$ \frac{xn}{x^2+1} \leq 1-N \leq \frac{n}{x} \>, $$ on $(x-u)u$, it is straightforward to verify that $\lim_{x\to\infty} b(x) = 0$.
Now, using the fact $u = x u' + u''$, $$ b' = 2u(1-u') - 2 u' u'' = 2 u' u''\left(\frac{(1-u')u}{u'u''} - 1\right) \>. $$ So, if we can show that $\frac{(1-u')u}{u'u''} \leq 1$, we will
be done. Plugging in the definitions yields $\frac{(1-u')u}{u'u''} = \frac{1-N}{n}(x+n/N)$.
Lemma 1. For $x \geq 0$, $n/N \leq a e^{-a x}$ where $a = \sqrt{2/\pi}$.
Proof. Define $g := a^{-1} e^{ax} n - N$. Then $g(0) = 0$ and $$ g' = (1-x/a - e^{-ax})e^{ax} n < 0 \>. $$
In particular, we have, $x+n/N \leq x + a e^{-a x}$ for any $x \geq 0$.
Lemma 2. For $x \geq 0$, $(1-N)/n \leq (x+a e^{-ax})^{-1}$.
Proof. Set $g := (x+ae^{-ax})^{-1} n - (1-N)$. Then, $g(0) = 0$ and $$ g' = (a+ae^{-ax} + x - a^{-1} e^{ax}) \frac{a e^{-ax} n}{(x+a e^{-ax})^2}\>. $$ The fraction on the right is
positive, so we concentrate on the first term on the right. Let $z := a + a e^{-ax} + x - a^{-1} e^{ax}$. Then $z(0) = 2a - 1/a > 0$ and $\lim_{x\to\infty} z(x) = -\infty$. Furthermore,
$$ z' = - a^2 e^{-ax} + 1 - e^{ax} < 0 \>. $$ Hence, $g'$ is positive for small $x$ and negative for large $x$. Since $\lim_{x\to\infty} g(x) = 0$, we conclude that $g \geq 0$.
This allows us to complete the proof, since by applying Lemma 1 and then Lemma 2, we have $$ \frac{1-N}{n} (x + n/N) \leq \frac{1-N}{n} (x+a e^{-ax}) \leq 1 \>. $$
Hence, $b' < 0$, so $b > 0$ as desired.
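Both lemmas, and the combined bound $\frac{1-N}{n}(x+n/N)\le 1$ that they deliver, can be spot-checked numerically; the following grid check is my own addition (standard library only), not part of the original answer:

```python
import math

a = math.sqrt(2 / math.pi)

def n(x):
    return math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

def N(x):
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

xs = [i / 100 for i in range(0, 601)]   # grid on [0, 6]
tol = 1e-12                             # both lemmas hold with equality at x = 0

lemma1 = all(n(x) / N(x) <= a * math.exp(-a * x) + tol for x in xs)
lemma2 = all((1 - N(x)) / n(x) <= 1 / (x + a * math.exp(-a * x)) + tol for x in xs)
combined = all((1 - N(x)) / n(x) * (x + n(x) / N(x)) <= 1 + tol for x in xs)
print(lemma1, lemma2, combined)         # True True True
```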
Beautiful proof! I think the introduction of $u:=\int_{-\infty}^x n(t)dt$ is the key. Lemma 1 and Lemma 2 are useful result in their own rights. They connect the behavior of $N$ or
$1-N$ near $0$ and $\infty$ smoothly. I will wait a while for others to check the computation, before I will check it as THE accepted answer, even though it is too pretty to be wrong.
Meanwhile, could you please describe your motivation in coming up with the function $a e^{ax}$? – Hansen Jul 8 '13 at 2:36
It is unfortunate that there is only 1 point up vote allow for each person per answer. Otherwise, I would have put in ticked more. :-) – Hansen Jul 8 '13 at 2:40
Dear @Hans: Regarding motivation: Note that $n/N$ is decreasing and so I first tried the crudest thing possible, i.e, $x + n/N \leq x + n(0)/N(0) = x + a$. However, this doesn't work
since it turns out by a similar argument to Lemma 2 that $(1-N)/n \geq (x+a)^{-1}$. So, I needed a function that decreased but stayed above $n/N$, while also decreasing fast enough
that I'd still get an upper bound on $(1-N)/n$. Note that, actually, the same basic analysis as Lemma 2 will yield $(1-N)/n \leq (x+a e^{-bx})^{-1}$ where $b = \sqrt{\pi/2}-\sqrt{2/\
pi}$, which is a little sharper, but unneeded here. – cardinal Jul 8 '13 at 3:08
@Hans: (Also, just a minor typo in your first comment: $u := \int_{-\infty}^x N(u)\,\mathrm du$. Cheers.) – cardinal Jul 8 '13 at 3:09
I see your rationale, but can you describe what makes you think of the particular form of the exponential function $e^{-ax}$? Just a first lucky choice? And thanks for pointing out my
typo. – Hansen Jul 8 '13 at 4:46
We may see that the inequality is true for every $|x|<0.597$ in the following way:
For a given value of $x$ consider the values of $N$ and $n$. The inequality will be true for this $x$ if the quadratic polynomial in $y$ $$(y^2+1)N+y\, n-(y N+n)^2-N^2$$ is always positive.
In other words the inequality is true for this $x$ as soon as the discriminant $\Delta$ of this quadratic is negative (the coefficient of $y^2$ being positive).
The discriminant is $\Delta = n^2-4N^2(1-N)^2$. Since $n^2<1/(2\pi)$, the inequality will be true for every $x$ such that $4N^2(1-N)^2>1/(2\pi)$.
Thus the inequality is true for every $x$ such that $0.275214<N<0.724786$. This corresponds to the condition $|x|<0.597$.
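The two numerical thresholds quoted here are easy to reproduce (a quick check of my own; `statistics.NormalDist` requires Python 3.8+):

```python
import math
from statistics import NormalDist

# Delta < 0  iff  4 N^2 (1-N)^2 > 1/(2 pi)  iff  N(1-N) > 1/(2 sqrt(2 pi)).
c = 1 / (2 * math.sqrt(2 * math.pi))
root = math.sqrt(1 - 4 * c)
N_lo, N_hi = (1 - root) / 2, (1 + root) / 2
print(N_lo, N_hi)                    # ≈ 0.275214, 0.724786

x_star = NormalDist().inv_cdf(N_hi)  # |x| < x_star guarantees Delta < 0
print(x_star)                        # ≈ 0.597
```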
@ juan : You wrote "Since the derivative of your function is easily bounded". Could you explain that place in detail? The expression for $f'(x)$ (which can be downloaded from
rapidshare.com/files/3032281090/derivative.pdf ) is not so simple. – user64494 Jul 7 '13 at 5:11
@user64494 To apply the maximal slope principle you only need a rough bound of the derivative. For example substitute all exp(-x^2/2) by 1 and all N(x) by 1, all x by 0.597. All in
absolute value and this bound will suffice. – juan Jul 7 '13 at 8:11
@ juan: You don't answer my request concerning the estimate of the derivative. So called slope principle is the next step in your answer. – user64494 Jul 7 '13 at 8:17
@user64494 My answer proof completely the inequality for $x<-0.597$ or $x>0.597$. For this you have no need of a bound of the derivative. Now to show $f(x)>0$ (my $f$ is different from
yours) on the interval $|x|<0.597$ you may apply the maximal slope principle. This need a bound of the derivative on $|x|<0.597$ (a rough bound suffice). This is very easy to get. And
you finish without difficulty the proof with a little computation (see the paper cited in my answer). – juan Jul 7 '13 at 8:18
I am not sure to follow: the good regime $4N^2(1-N)^2\gt1/2\pi$ is when $|x|\lt x_*$ for some $x_*$, not the other way round (as an aside, note that the inequality one is interested in
holds at $x=0$ hence also in a neighborhood of $x=0$). – Did Jul 7 '13 at 11:52
In view of $$f(x):= (x^2+1)N(x)+xn(x)-(x+N(x))^2-N(x)^2=$$ $$\left( {x}^{2}+1 \right) \left( 1/2+1/2\, {{\rm erf}\left(1/2\,\sqrt {2}x\right)} \right) +1/2\,{\frac {{{\rm e} ^{-1/2\,{x}^
{2}}}\sqrt {2}x}{\sqrt {\pi }}}- $$ $$\left( x+1/2+1/2\, {{\rm erf}\left(1/2\,\sqrt {2}x\right)} \right) ^{2}- \left( 1/2+1/2\, {{\rm erf}\left(1/2\,\sqrt {2}x\right)} \right) ^{2} $$ and
its taylor expansion at $x=0$ $$f(x)=-x+ \left( -1/2\,{\pi }^{-1}+{\frac {1}{2}}- \left( 1+1/2\,{\frac { \sqrt {2}}{\sqrt {\pi }}} \right) ^{2} \right) {x}^{2}+O \left( {x}^{3 } \right) $$
the inequality under consideration seems to fail for small positive values of $x$.
I think the series at 0 is $$(\pi-2)/(4\pi) +(1/4-1/\pi) x^2+ O(x^3)$$ – juan Jul 7 '13 at 8:05
See the taylor expansion found with Maple in the worksheet exported as a pdf file rapidshare.com/files/4225333834/taylor.pdf – user64494 Jul 7 '13 at 8:15
@ juan: Substitute $x=0$ in your expression and in the inequality under consideration. – user64494 Jul 7 '13 at 8:27
but your function has a term $-(x+N)^2$ instead of $-(x N+n)^2$. – juan Jul 7 '13 at 8:32
@ juan: Thank you. You are right. I must be more careful. – user64494 Jul 7 '13 at 8:56
The Maple command $$asympt((x^2+1)*N(x)+x*n(x)-(x*N(x)+n(x))^2-N(x)^2, x, 8)$$ produces $$ \left( {\frac {\sqrt {2}}{\sqrt {\pi }{x}^{3}}}-6\,{\frac {\sqrt {2}} {\sqrt {\pi }{x}^{5}}}+O \left( {x}^{-7} \right) \right) {\frac {1}{ \sqrt {{{\rm e}^{{x}^{2}}}}}}. $$ Thus the inequality is true for big positive $x$. I leave the investigation of it on the finite interval on your own. The above asymptotics can be obtained by hand too.
You did not say which finite interval. – Did Jul 6 '13 at 8:33
@ Did: A positive result is obtained by me. What can you do? – user64494 Jul 6 '13 at 8:43
Sorry but I do not understand your comment. You might want to explain (or to delete the comment). – Did Jul 6 '13 at 8:48
@ Did: Indeed, I did not say it. What can you suggest to this end? – user64494 Jul 6 '13 at 9:17
@user64494: Did you read the last few sentences of my original post? "I can prove the lower bound for x greater than some positive number. I know I need to stitch the small and large
regions of positive x together, but I have not carried the detailed computation out yet. Does anyone have more clever trick to accomplish this task?" The lower bound can be easily
verified for very small $x$ too. The difficulty lies in specifying what you call "finite interval" show that the valid finite interval overlaps with the large $x$ interval. – Hansen Jul
6 '13 at 16:15
Not the answer you're looking for? Browse other questions tagged pr.probability real-analysis probability-distributions inequalities gaussian or ask your own question. | {"url":"http://mathoverflow.net/questions/135593/a-normal-distribution-inequality/135984","timestamp":"2014-04-16T07:43:15Z","content_type":null,"content_length":"104765","record_id":"<urn:uuid:b05dc06c-37fd-4bcb-8e53-d4d4bc916675>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00239-ip-10-147-4-33.ec2.internal.warc.gz"} |
Pennington, NJ Algebra Tutor
Find a Pennington, NJ Algebra Tutor
...Over 20 years teaching and tutoring in both public and private schools. Currently employed as a professional math tutor and summer school Algebra I teacher at the nearby and highly regarded
Lawrenceville School. 12 years working as a Middle/Upper School math teacher at the nearby Pennington School. Master's degree in Education and NJ Teacher Certification in Middle School Math.
6 Subjects: including algebra 1, algebra 2, geometry, prealgebra
...I have also had much success in making history a fun "story" as opposed to something to memorize. One of my students had just moved to the United States from mainland China. So, in addition to
trying to learn American History, he was scrambling with the language issues.
43 Subjects: including algebra 2, algebra 1, English, reading
...Let me know what concepts you're struggling with before our session, so I can streamline the session as much as possible! In my free time, I like to play with my pet chickens, play Minecraft,
code up websites, and write sci-fi creative stories. I participate in NaNoWriMo every year! ** NOTE: I can't travel farther than 10 miles to meet with you, due to an increase in tutees.
26 Subjects: including algebra 2, English, writing, algebra 1
...I have a bachelor's degree in French from the State University of New York and a MBA in International Business with a concentration in French from the Monterey Institute of International
Studies. I also have extensive experience (over 15 years) teaching test preparation skills for the Princeton ...
5 Subjects: including algebra 1, French, geometry, trigonometry
...As a college instructor, I have taught Discrete Math, which includes set theory, matrices, probability,logic, and linear programming. I feel very comfortable in teaching these topics. During
my extensive college math teaching experience, from 1973 through 2007, I have taught several math courses that included the topic of logic.
21 Subjects: including algebra 1, GED, algebra 2, statistics
Related Pennington, NJ Tutors
Pennington, NJ Accounting Tutors
Pennington, NJ ACT Tutors
Pennington, NJ Algebra Tutors
Pennington, NJ Algebra 2 Tutors
Pennington, NJ Calculus Tutors
Pennington, NJ Geometry Tutors
Pennington, NJ Math Tutors
Pennington, NJ Prealgebra Tutors
Pennington, NJ Precalculus Tutors
Pennington, NJ SAT Tutors
Pennington, NJ SAT Math Tutors
Pennington, NJ Science Tutors
Pennington, NJ Statistics Tutors
Pennington, NJ Trigonometry Tutors | {"url":"http://www.purplemath.com/Pennington_NJ_Algebra_tutors.php","timestamp":"2014-04-19T06:58:34Z","content_type":null,"content_length":"24265","record_id":"<urn:uuid:d3d18b69-f97b-4b7b-8bbd-84251c42c7c5>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00632-ip-10-147-4-33.ec2.internal.warc.gz"} |
the first resource for mathematics
Spectra of graphs. Theory and applications. 3rd rev. a. enl. ed.
(English) Zbl 0824.05046
Leipzig: J. A. Barth Verlag. 447 p. DM 168,00; öS 1.310,00; sFr 168,00 (1995).
This is the third, enlarged edition of the book in which the second edition is reproduced and extended by two appendices surveying the recent development in the theory of graph spectra and their
applications. The appendices fill an additional 58 pages, while the new references occupy 21 pages. The first edition of the book [Academic Press, New York, 1980, and Deutscher Verlag der
Wissenschaften, Berlin (1980; Zbl 0458.05042)] covered almost all results about the spectra of graphs up to 1979. Later discoveries of several important applications of graph eigenvalues in
combinatorics and graph theory made the book partially out of date. By surveying these new achievements in the appendices, the authors cover this gap and assure that the book will remain a valuable
reference for the researchers in the field.
However, those working in combinatorics, graph theory, or the design of algorithms where graph eigenvalues became a substantial tool, should also consult related recent books and surveys. To mention
only some of them, we refer to three excellent books by N. Biggs [Algebraic graph theory, Second edition, Cambridge University Press, Cambridge (1994; Zbl 0797.05032)], A. E. Brouwer, A. M. Cohen and
A. Neumaier [Distance-regular graphs, Springer-Verlag, Berlin (1989; Zbl 0747.05073)], C. D. Godsil [Algebraic combinatorics, Chapman & Hall, New York (1993; Zbl 0784.05001)], and to the
comprehensive survey by B. Mohar and S. Poljak in [Combinatorial and graph- theoretical problems in linear algebra, Ed. R. A. Brualdi et al., Springer-Verlag, 1993, IMA Vol. Math. Appl. 50, 107-151
(1993; Zbl 0806.90104)].
05C50 Graphs and linear algebra
05-02 Research monographs (combinatorics)
05C85 Graph algorithms (graph theory)
Bolzano-Weierstrass Theorem
September 24th 2009, 03:05 PM
Bolzano-Weierstrass Theorem
Prove the following two dimensional form of the Bolzano-Weierstrass Theorem:
If ${(x_{n},y_{n})}$ is a sequence of points in they $xy$ plane, all of which lie in a rectangle,
$R = [a,b] \times [c,d] = {(x,y): a \leq x \leq b, c \leq y \leq d},$
then there is a subsequence ${(x_{n_{i}},y_{n_{i}})}$ which converges (i.e., the $x's$ and $y's$ each form a convergent sequence)
September 25th 2009, 01:30 PM
The bounded sequence $x_n$ has a convergent subsequence $x_{n_i}$.
The bounded subsequence $y_{n_i}$ has a convergent subsubsequence $y_{n_{m_i}}$.
The (sub)subsequence $(x_{n_{m_i}},y_{n_{m_i}})$ does the trick.
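The nested-subsequence idea in the reply above can be illustrated on a computer. This is a finite sketch only: a program can inspect only finitely many terms, and the particular sequence, rectangle, and iteration counts below are illustrative choices. It mimics the halving proof of the theorem: repeatedly quarter the rectangle, keep a quadrant holding the most sample points (the finite analogue of "a quadrant containing infinitely many terms"), and pick one new index inside it.

```python
import math

# Finite stand-in for a sequence in the rectangle [0,1] x [0,1] with no
# overall limit: x_n alternates between 0 and 1, y_n equidistributes.
N = 500_000
pts = [(0.5 + 0.5 * (-1) ** n, (n * math.sqrt(2)) % 1.0) for n in range(N)]

lo, hi = [0.0, 0.0], [1.0, 1.0]
indices = range(N)
chosen = []                      # indices of the extracted subsequence

for _ in range(12):
    mx, my = (lo[0] + hi[0]) / 2, (lo[1] + hi[1]) / 2
    quads = {}
    for n in indices:
        x, y = pts[n]
        quads.setdefault((x > mx, y > my), []).append(n)
    # Keep the most heavily populated quadrant and shrink the rectangle to it.
    (qx, qy), best = max(quads.items(), key=lambda kv: len(kv[1]))
    lo[0], hi[0] = (mx, hi[0]) if qx else (lo[0], mx)
    lo[1], hi[1] = (my, hi[1]) if qy else (lo[1], my)
    # Record one new, strictly larger index from the surviving quadrant.
    last = chosen[-1] if chosen else -1
    nxt = next((n for n in best if n > last), None)
    if nxt is None:
        break
    chosen.append(nxt)
    indices = best

print(len(chosen), chosen[:4])
```

The indices in `chosen` are strictly increasing, and the corresponding x- and y-values settle into ever smaller rectangles - exactly the convergent-subsequence behaviour the theorem promises.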
September 29th 2009, 12:10 PM
Thanks for your very helpful reply.
I'm a bit slow with this theoretical business: how do we know $x_{n}$ is bounded and $y_{n}$ is not?
Thank you very much.
Roger Penrose: A Knight on the tiles
Roger Penrose: A Knight on the tiles
Issue 18
Jan 2002
Sir Roger Penrose is one of the world's most widely known mathematicians. His popular books describe his insights and speculations about the workings of the human mind and the relationships between
mathematics and physics. His interests range from astrophysics and quantum mechanics to mathematical puzzles and games. As a teenager, he invented the so-called "Penrose staircase", used by Escher in
some of his famous optical illusion drawings, such as the one below. Helen Joyce from the Plus team talked to Sir Roger about his ideas.
M.C. Escher's "Ascending and Descending".
© 2002 Cordon Art - Baarn - Holland (www.mcescher.com).
All rights reserved. Used by permission.
Quantum Consciousness
Perhaps Penrose is best known to the wider public for his view that there is an essentially non-algorithmic element to human thought and human consciousness. Roughly speaking, an algorithm is a
clearly defined set of steps for carrying out some procedure; once an algorithm for carrying out a procedure has been found, no more independent thought is required. All that is necessary is to
follow the steps. Anything a computer can do must be algorithmic, since computers can only do what they are programmed to do, and programs are algorithmic in nature. This non-algorithmic element to
human thought is, Penrose claims, due to quantum effects in the brain, which are the source of our feelings of self-awareness, our consciousness, and our capacity for leaps of inspiration. These
ideas are set out very fully in Penrose's two bestselling works of popular science, "The Emperor's New Mind" and "Shadows of the Mind", the latter reviewed in this issue of Plus.
At the moment, the central project of mathematical physics is the search for a so-called "Grand Unified Theory" or "Theory of Everything". This theory would reconcile quantum mechanics, which
operates at very small scales, and relativity, which operates at very large scales. These two beautiful theories both have much experimental evidence to support them, but unfortunately are
Roger Penrose giving a lecture at the conference [Photo and copyright Anna N. Zytkow]
Penrose is unusual in believing that quantum mechanics will have to change in order to fit into such a unified theory. He says that this is "what distinguishes my view from the others who believe
that quantum mechanics are important in mental phenomena, although even that's a minority view. These other people don't, on the whole, think that you have to go that further step, and modify quantum
mechanics. According to me, quantum mechanics, as we understand it today, has certain limitations and will be shown not to be correct. The level at which this will happen is around the limits of what
is measurable with present-day technology. This is interesting, quite apart from its implications for consciousness and AI.
"The more detailed ideas I have about consciousness do depend on specifics - I must be right, not only in that quantum mechanics has its limits, but also in the detailed way in which it must be
changed. So if the experiments I'm trying to get colleagues to perform show that I'm wrong then my model of consciousness will also be disproved."
Penrose's ideas on consciousness are, to say the least, controversial in the AI community. After all, what he is saying is that the goal of an intelligent, let alone conscious, computer is
unattainable right from the start, a view understandably unpopular with those who are dedicating their lives to creating such entities.
However, Penrose's opinions are given some force by the slow progress of work on AI. It cannot be denied that early proponents of AI grotesquely underestimated the difficulty of their task. Arthur C
Clarke placed HAL, his thinking, feeling, ultimately paranoid computer, in 2001, during which year real AI researchers were teaching small robots to find their way around rooms! As Penrose says, this
is "no indication of any sort of intelligence, let alone consciousness".
Penrose also disagrees with the majority view regarding how to decide whether a computer is conscious. For over fifty years, since Alan Turing wrote his hugely influential article proposing the
so-called "Turing test" as a way of deciding whether a machine is conscious, most workers in the field of AI have accepted, in effect, that if a machine can persuade a human observer that it is
conscious, then it must be.
To be more accurate, Turing proposed that an interrogator in a separate room should interact with a computer and a person, via a machine terminal. If, after some unspecified time, the interrogator
could not tell the difference between the computer and the person, the computer should be regarded as "intelligent".
Not only does Penrose feel that the Turing test bypasses the essential element - consciousness - but he also believes that a non-conscious machine couldn't "fake it". In his opinion, a really good
impersonation of consciousness by a machine would not be possible unless the machine actually were conscious. He says that "within a limited framework, one might get a performance that would fool
somebody for a while, but ultimately the machine would give itself away. Of course there's a question just how good your simulation might be, and, so far anyway, the people working in this field are
nowhere close."
Trouble with Tiling
A problem from recreational mathematics to which Penrose has made a significant contribution is the tiling problem. His work on this problem is described in some detail in "Quasicrystals and Kleenex", an article from Issue 16 of Plus. The tiling problem is this: given a collection of polygonal shapes, is it possible to cover the whole plane using just these shapes, with no overlaps? Such an arrangement of shapes is called a tiling. Tilings are said to be periodic if they are exactly repetitive in two different directions (a direction and its opposite are not counted as different!).
At first sight, the problem of whether one can find a tiling of the plane by shapes which will only tile non-periodically seems lighthearted. In fact, although it is "fun" mathematics, it has a philosophically deep aspect - it belongs to the part of mathematics concerned with non-recursive problems.
A class of mathematical problems is called recursive if there is an algorithm for finding the answer in each individual case. It is called non-recursive if there is is a yes-or-no answer to each
individual problem, but there is no algorithm for deciding whether the answer is yes or no in each individual case.
It has been known for about forty years that there is no algorithmic way of deciding whether a given collection of polygonal shapes will tile the plane, that is, the tiling problem is non-recursive.
Two Pairs of Penrose Tiles
The way in which this was originally proved was interesting. First it was shown that if, whenever a collection of polygonal shapes, or "tiles" could tile the plane in some way, the same collection
could tile the plane periodically, then the tiling problem had to be recursive. The next step was that the tiling problem was proved not to be recursive. This meant that there had to be a collection
of polygonal shapes that could tile the plane, but only non-periodically. The first such collection of tiles that was found was absolutely enormous - it contained 20,426 tiles. Various mathematicians found smaller collections, culminating in Penrose's discovery of a pair of tiles that tile the plane only in a non-periodic way. In fact, there is more than one such pair - two are shown below.
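A counting argument gives a feel for why tilings by these pieces cannot be periodic. Under the standard kite-and-dart subdivision ("deflation") rules - assumed here: with half-tiles paired up, each kite yields two kites and one dart, and each dart yields one kite and one dart - the ratio of kites to darts tends to the golden ratio, an irrational number; a periodic tiling would repeat a finite block of tiles and so force a rational ratio. A sketch:

```python
from fractions import Fraction

# One deflation step, applied to the tile counts rather than the geometry.
kites, darts = Fraction(1), Fraction(1)
ratios = []
for _ in range(30):
    kites, darts = 2 * kites + darts, kites + darts
    ratios.append(Fraction(kites, darts))

# The exact rational ratios creep up on the (irrational) golden ratio.
print(float(ratios[0]), float(ratios[-1]))   # 1.5, then approximately 1.6180339887
```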
It's interesting to think about the problem of inventing a tiling in the light of Penrose's thoughts on AI, consciousness and inspiration, because the problem seems exactly like the sort of thing a
computer couldn't do. Penrose says that people ask him where he got the inspiration, whether it came to him suddenly.
"It wasn't like that, but, you see, it's never like that. What it's like is that one has been thinking about a problem for a long time and getting very familiar with it. After a while, you may be
worrying about some particular aspect of the problem, and then it's quite possible, even when you're thinking about something else, that an idea may come to you, and you realise that things fit
together in some way, with the accompanying realisation that this must be right. Of course sometimes one may be mistaken in such things!
"You see, I was already aware of certain nonperiodic patterns of pentagons and other shapes which are just nice to look at. What you might call the inspiration was to realise that, by modifying the
shapes in different sorts of ways, by putting knobs on like jigsaw pieces, you could force the pattern by local matching rules. As a pattern, I was aware of it before, but the thing that required a
little bit more, something for me to guess or something to come from somewhere else, was that realisation."
As Penrose says, one of the interesting features of the tiling problem is that it is noncomputable. "It is possible to explain noncomputability in relatively simple terms, but people almost always
get the wrong end of the stick! They say, 'show me a problem you can't solve', but it's never like that! It's a class of problems which have no systematic solutions. With the tiling problem, I
sometimes say in talks, 'well it depends on the existence of nonperiodic tilings', and give an example of such a tiling. And then at the end of the talk people come up to me and say, 'how do you know
that that tiles when the problem is noncomputable?' but that's not the point. The point is that in a particular case you may have a way of seeing the solution, but there is no systematic procedure
that could be put on a machine, which requires no more thinking."
An analogy that springs to mind is the fact that there are uncountable sets, such as the real numbers. A set is said to be countable if its elements can in principle be listed. An example is the
natural numbers, 0,1,2,3,... We are sure that the list that we get by adding one to the last element to get the next element will give us all the natural numbers.
In fact, the rational numbers are also countable (rational numbers are fractions). But the real numbers - all the numbers on the number line - are not. When people first learn this fact, they are
prone to think that individual real numbers are uncountable, and to ask "what is the number you can't count?", or, "What's wrong with root 2?"! But that's not the point. We could add any individual
real number to the set of rational numbers and still have a countable collection, just as we may be able to prove individual problems from a non-algorithmic class of problems.
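The claim that the rationals can be listed can be made concrete with a short sketch - a standard diagonal enumeration of the positive rationals (this particular ordering is just one illustrative choice):

```python
from math import gcd

def positive_rationals():
    """Yield every positive rational p/q exactly once, diagonal by diagonal."""
    s = 2
    while True:
        for p in range(1, s):        # all pairs with p + q == s
            q = s - p
            if gcd(p, q) == 1:       # skip duplicates such as 2/4 == 1/2
                yield (p, q)
        s += 1

gen = positive_rationals()
first = [next(gen) for _ in range(10)]
print(first)
# -> [(1, 1), (1, 2), (2, 1), (1, 3), (3, 1), (1, 4), (2, 3), (3, 2), (4, 1), (1, 5)]
```

Every positive rational shows up at some finite position in this list, which is exactly what countability means.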
Discovery or Invention?
Penrose shows himself an unabashed realist, by proclaiming that acting conscious is not the same as being conscious. His realism is nowhere more evident than in his thinking on the source of
mathematical inspiration. Although he takes a nuanced view on the hoary old question of whether maths is "out there" or "in here", discovery or invention, he is sure that it is not entirely a
construct of the human mind.
"I've always thought that the distinction between invention and discovery is not that clear. Just the other day some people were talking about this sort of thing and Edison was mentioned, about him
trying all these different things to make his first light bulb. They were saying that this was clearly invention.
"But it seems to me that we can't say that, because if the right substances weren't out there for him to use he couldn't have made it. So it clearly had elements of discovery as well as elements of
"In mathematics it's often not too dissimilar to that. In a sense the big results are out there. I think this is the way to think about it, but in your access to these, if you need to prove some sort
of theorem which is a stepping stone to some result, there may be all sorts of different ways you could go. These will be very much of the invention type. So I think that invention plays a big role
in mathematical research, but there is nevertheless an element of discovery.
"This discovery element is most clearly visible in those areas of mathematics where you get out so much more than you put in. The biggest example of that that I know of is i, the square root of minus one.
"You introduce this number in order to solve just one equation, and then suddenly you find that you can solve all these other equations, which you had no conception of at the time.
"Nothing like that happens when Pythagoras' Theorem leads you to introduce the square root of 2. You have to go beyond the rational numbers to find the square root of 2, but if you just add the
square root of 2 to the rational numbers, you get nowhere, you don't even get the square root of 3!
"You have to have the real numbers there first, but then i gives you a whole new universe. A window has opened into a completely new world."
A view into the new world of complex numbers
Submitted by Anonymous on February 4, 2011.
Interesting article, I think he's got the honesty to state what the majority of Mathematicians will not say publically, but, believe, nevertheless, that mathematical ideas are not mere fictions or
creations of the human mind and/or imagination, but, have a real existence apart from us.
In essence, this creates many difficulties for the development of Artificial "intelligence" in the sense of even non-verbal/cerebral behaviour, such as basic automation and movement activity, with its resultant feed-back loop, in terms of how it learns about its environment.
If such activity and the programming that constitute it are based on a form of Mathematics that's not entirely accessible to the human mind, particularly at the quantum level, then we'll be stuck with
a form of programming that will never even allow the development of AI with even the most basic approximations of rudimentary organic life.
Anyway, much to think about.
Timing the BLAS
Next: Timing the Eigensystem Routines Up: Run the LAPACK Timing Previous: Timing the Linear Equations Contents
The linear equation timing program is also used to time the BLAS. Three input files are provided in each data type for timing the Level 2 and 3 BLAS. These input files time the BLAS using the matrix
shapes encountered in the LAPACK routines, and we will use the results to analyze the performance of the LAPACK routines. For the REAL version, the small data files are sblasa_small.in,
sblasb_small.in, and sblasc_small.in and the large data files are sblasa_large.in, sblasb_large.in, and sblasc_large.in. There are three sets of inputs because there are three parameters in the Level
3 BLAS, M, N, and K, and in most applications one of these parameters is small (on the order of the blocksize) while the other two are large (on the order of the matrix size). In sblasa_small.in, M
and N are large but K is small, while in sblasb_small.in the small parameter is M, and in sblasc_small.in the small parameter is N. The Level 2 BLAS are timed only in the first data set, where K is
also used as the bandwidth for the banded routines.
a) Go to LAPACK/TIMING and make any necessary modifications to the input files. You may need to set the minimum time a subroutine will be timed to a positive value. If you modified the values of N
or NB in Section 6.7.1, set M, N, and K accordingly. The large parameters among M, N, and K should be the same as the matrix sizes used in timing the linear equation routines, and the small
parameter should be the same as the blocksizes used in timing the linear equation routines. If necessary, the large data set can be simplified by using only one value of LDA.
b) Run the programs for each data type you are using. For the REAL version, the commands for the small data sets are
□ xlintims < sblasa_small.in > sblasa_small.out
□ xlintims < sblasb_small.in > sblasb_small.out
□ xlintims < sblasc_small.in > sblasc_small.out
or the commands for the large data sets are
□ xlintims < sblasa_large.in > sblasa_large.out
□ xlintims < sblasb_large.in > sblasb_large.out
□ xlintims < sblasc_large.in > sblasc_large.out
Similar commands should be used for the other data types.
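The same pattern extends to the other data types mechanically. A small sketch - the d, c, and z driver and file names below are assumed by analogy with the REAL (s) names given above, following the usual LAPACK precision-prefix convention:

```shell
# Build the large-data-set timing command for every precision (s, d, c, z)
# and every input set (a, b, c); printed here rather than executed.
cmds=""
for p in s d c z; do
  for inp in a b c; do
    cmds="${cmds}xlintim${p} < ${p}blas${inp}_large.in > ${p}blas${inp}_large.out
"
  done
done
printf '%s' "$cmds"
```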
Julie Langou 2007-02-26
Coppell Precalculus Tutor
...I have taught Trigonometry for several years in local high schools. It can be difficult at first but most students catch on quickly with some one on one tutoring. I have a Masters degree in
Mathematics and have tutored Linear Algebra in the past.
15 Subjects: including precalculus, chemistry, statistics, calculus
...I have since helped several students prepare for this college entrance exam. I do not consider myself to be a speed reader, however I do have terrific long-term comprehension. I was born and
raised in southern California.
48 Subjects: including precalculus, chemistry, physics, calculus
...What information does the problem give me? Can I assume anything else about the problem?" "What do I want? What does the problem want me to find?" "What equations or concepts relate what I know
to what I want?
5 Subjects: including precalculus, physics, calculus, algebra 2
...I am currently tutoring organic chemistry II to a Pre-Med student at University of North Texas, and I have tutored another Pre-Med student in organic chemistry last summer from U of TX at
Arlington. (Both students were from this website.) My past experiences include tutoring many students from 2...
22 Subjects: including precalculus, chemistry, calculus, physics
...I went on to play for the Naval Academy, which is Division I, and finished my soccer career at Trinity. I absolutely love teaching math to students of every level, but I prefer middle school
and high school. I have three years of teaching/tutoring experience in a one-on-one setting.
14 Subjects: including precalculus, chemistry, geometry, Microsoft Word
second order ODE with sin term
December 8th 2009, 08:46 AM #1
Nov 2009
second order ODE with sin term
i am trying to solve this ode but can't seem to get any traction:
$\frac {d^2 \theta(\tau)} {d \tau^2} + \sin(\theta(\tau) ) = 0$
$\theta(0) = A$
$\dot{\theta}(0) = 0$
How would i go about solving this?
Thank you very much!!
(This is a derivative of a nasty case where: $\frac {d^2 \theta(\tau)} {d \tau^2} + \sin(\theta_{s} + \theta(\tau) ) = 0$ which I still don't know how to solve with the same B.C.)
The solution involves elliptic integrals
Here is a link to a wiki with derivation
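Since the exact solution involves elliptic integrals, in practice the equation is often just integrated numerically. Below is a standard-library sketch using a classical fourth-order Runge-Kutta step; the conserved energy $E = \frac{1}{2}\dot\theta^2 - \cos\theta$ gives a built-in correctness check. (The amplitude, step size, and time span are illustrative choices.)

```python
import math

def rhs(theta, omega):
    # theta'' + sin(theta) = 0, written as a first-order system.
    return omega, -math.sin(theta)

def rk4_step(theta, omega, h):
    k1t, k1w = rhs(theta, omega)
    k2t, k2w = rhs(theta + h/2*k1t, omega + h/2*k1w)
    k3t, k3w = rhs(theta + h/2*k2t, omega + h/2*k2w)
    k4t, k4w = rhs(theta + h*k3t, omega + h*k3w)
    return (theta + h/6 * (k1t + 2*k2t + 2*k3t + k4t),
            omega + h/6 * (k1w + 2*k2w + 2*k3w + k4w))

A = 1.0                                  # theta(0) = A, theta'(0) = 0
theta, omega = A, 0.0
h, steps = 1e-3, 20_000                  # integrate to tau = 20
e0 = 0.5 * omega**2 - math.cos(theta)
for _ in range(steps):
    theta, omega = rk4_step(theta, omega, h)
drift = abs(0.5 * omega**2 - math.cos(theta) - e0)
print(drift)                             # very small if the integration is sound
```

Because energy is conserved, $\theta$ stays within $[-A, A]$, which is a quick sanity check on the result.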
Physics Forums - View Single Post - Marseille workshop on loops and spin foams
Mike2 on the other thread raised the question of whether the quantum gravity model assumed by AJL had been rigorously developed.
As far as I can tell, it's completely rigorous. And I'm a mathematician by training, so I'm more fussy about these things than most.
I know of rigorous (1,1)-dimensional theories and maybe some (1,2)-dimensional ones, but I don't know of any fully (1,3) relativistic quantized ones.
It's easier to make discrete models rigorous than models that assume spacetime is a continuum. That's the main reason I like discrete models.
In particular, all the 3+1-dimensional spin foam models of quantum gravity I've worked on - various versions of the Barrett-Crane model - are mathematically rigorous and background-free.
The problem is, we haven't gotten good evidence that these spin foam models "work" - namely, that they reduce to general relativity in the limit of distance scales that are large compared to the
Planck length.
See my Marseille talk for a taste of the problems:
Since we don't have any experimental evidence concerning quantum gravity, mathematical rigor is one way to make sure we're not playing tennis with the net down. I will be very happy when we get a rigorously well-defined background-free quantum theory of gravity that works in the sense defined above.
More precisely: I will be very happy if we get numerical evidence that it works, and even happier if we can mathematically prove that it works. But since such a model is likely to be nonperturbative, a mathematical proof of this sort might be very difficult. Nobody has even proved confinement in lattice QCD, even though numerical calculations have convinced everyone it's true.
Equivalence Classes
May 8th 2010, 10:31 AM #1
May 2010
Equivalence Classes
Hello, I'm having problems with equivalence classes.
The relation R is defined for complex numbers z=x+yi and w=a+bi. zRw if and only if x+b=y+a
I am asked to give the elements of the equivalence class containing [i]
I have written $[i]=\{x \in \mathbb{C} : x\mathcal{R}i\}=\{x \in \mathbb{C} : x-i\}$ to begin finding the equivalence classes, but am stuck trying to find the elements
Is this true $(0+1\cdot i)\mathcal{R}(1+0\cdot i)?$
No, because you can't relate the real numbers to the imaginary numbers?
The real numbers are a subset of the complex numbers. a+ib is a complex number, so imaginary numbers can be added to or subtracted from the real number to form a complex number.
I know i^2 is -1
So the equivalence class containing i will just be those elements congruent to i? So the elements would be the complex numbers. Or is this chasing the wrong argument?
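A quick sketch makes the structure concrete. Writing $z=x+yi$ and $w=a+bi$, the condition $x+b=y+a$ is the same as $x-y=a-b$, so $z\mathcal{R}w$ exactly when the two points share the invariant $x-y$. The class of $i$ (where $x-y=-1$) is therefore the line of points $a+(a+1)i$, not all of $\mathbb{C}$. The sample values below are illustrative only:

```python
import itertools
import random

def related(z, w):
    # zRw  <=>  Re(z) + Im(w) == Im(z) + Re(w)  <=>  Re(z)-Im(z) == Re(w)-Im(w)
    return z.real + w.imag == z.imag + w.real

random.seed(0)
sample = [complex(random.randint(-5, 5), random.randint(-5, 5)) for _ in range(40)]

# Equivalence-relation axioms, checked on the sample.
assert all(related(z, z) for z in sample)                          # reflexive
assert all(related(z, w) == related(w, z)
           for z, w in itertools.product(sample, repeat=2))        # symmetric
assert all(related(z, u)
           for z, w, u in itertools.product(sample, repeat=3)
           if related(z, w) and related(w, u))                     # transitive

# The class of i: every a + (a+1)i is related to i, with invariant -1.
cls = [complex(a, a + 1) for a in range(-3, 4)]
assert all(related(1j, w) for w in cls)
print(sorted({z.real - z.imag for z in cls}))   # -> [-1.0]
```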
complex rational expression solver
Author Message
Spekej Posted: Saturday 30th of Dec 11:27
Hey dudes, I have just completed one week of my college, and am getting a bit worried about my complex rational expression solver homework. I just don’t seem to
understand the topics. How can one expect me to do my homework then? Please guide me.
Registered: 28.07.2002
From: The Netherlands
ameich Posted: Monday 01st of Jan 09:59
I could help you if you can be more specific and give more details about complex rational expression solver. A right program would be best option rather than a
costly math tutor. After trying a number of program I found the Algebrator to be the best I have so far come across. It solves any algebra problem that you may
want solved. It also clarifies all the steps (of the solution). You can just write it down as your homework . However, you should use it to learn algebra, and
simply not use it to copy answers.
Registered: 21.03.2005
From: Prague, Czech Republic
Svizes Posted: Wednesday 03rd of Jan 08:15
I remember I faced similar problems with graphing circles, multiplying matrices and rational inequalities. This Algebrator is rightly a great piece of algebra
software program. This would simply give step by step solution to any algebra problem that I copied from workbook on clicking on Solve. I have been able to use the
program through several Basic Math, Basic Math and College Algebra. I seriously recommend the program.
Registered: 10.03.2003
From: Slovenia
vhidtj42 Posted: Friday 05th of Jan 07:34
Cool! I wish to know more about this product’s features and how much it costs. Where can I get the info?
Registered: 03.04.2004
From: Central, North Carolina, USA
Paubaume Posted: Friday 05th of Jan 13:31
Click on http://www.algebra-equation.com/linear-equations-1.html to get it. You will get a great tool at a reasonable price. And if you are not happy, they give
you back your money back so it’s absolutely great.
Registered: 18.04.2004
From: In the stars... where you
left me, and where I will wait for
you... always...
molbheus2matlih Posted: Saturday 06th of Jan 11:33
A truly great piece of math software is Algebrator. Even I faced similar problems while solving decimals, radical expressions and ratios. Just by typing in the problem
from homework and clicking on Solve – and step by step solution to my algebra homework would be ready. I have used it through several algebra classes - Pre
Algebra, College Algebra and Remedial Algebra. I highly recommend the program.
Registered: 10.04.2002
From: France
Continuity and differentiability
July 30th 2009, 10:27 AM #1
Jul 2009
Continuity and differentiability
Hi all,
I got some problems doing some exams exercises...
They ask me to study the continuity and the differentiability of some functions but I always get similar results...
$f(x,y)=x/(x-y^3)$, if $x \neq y^3$
$f(x,y)=0$, if $x=y^3$
The function is continuous when $x \neq y^3$ but it is not when $x=y^3$, because the limit for $(x,y)\to(y^3,y)$ comes out +∞ or -∞ depending on the approaching direction... Am I wrong? If yes, how do I have to solve that limit? Thanks to all!
Actually, f is not continuous. If we follow the path $x = m y^3$, $m \neq 1$, then
$\lim_{(x,y)\to(0,0)} \frac{m y^3}{m y^3-y^3} = \frac{m}{m-1},$
which clearly changes as we vary $m$. Since we get different limits, f is not continuous at $(0,0)$.
So it's not continuous at $(0,0)$, but what about the other points that satisfy $x=y^3$? Thanks
Still not. You have $f$ defined as 0 if $x = y^3$. Thus, in a neighborhood of this curve, if $f$ were continuous, then $f$ could be made arbitrarily close to $0$. However, along $x = my^3$, the function is $\frac{m}{m-1}$, which is large near $m = 1$, not zero.
Thanks, so I can conclude that the function is not differentiable for $x=y^3$ because it's not continuous, right?
So the following function too is not continuous for $y=x$?
$f(x,y)=y(1+x)/(x-y)$, if y!=x
$f(x,y)=0$, if y=x
Because if I follow the path $y=mx$, $m \ne 1$, the limit changes with $m$?
Yep - you got it!
Now consider the following
$<br /> \lim_{(x,y) \to (0,0)} \frac{x^2 y^2}{x^2+y^2}<br />$
Every path you follow you get 0. So, now what?
That one is continuous because using polar coordinates the limit comes out 0 no matter what, am I right?
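For completeness, the polar-coordinate computation can be written out (a standard step, not part of the original thread):

```latex
x = r\cos\theta,\quad y = r\sin\theta
\;\Longrightarrow\;
\frac{x^2 y^2}{x^2 + y^2}
  = \frac{r^4 \cos^2\theta\,\sin^2\theta}{r^2}
  = r^2 \cos^2\theta\,\sin^2\theta \le r^2 \to 0 \text{ as } r \to 0.
```

The bound $r^2$ is independent of $\theta$, so the limit is $0$ along every path, and the function (defined as $0$ at the origin) is continuous there.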
Mathematica Scripts
Script Files
A Mathematica script is simply a file containing Mathematica commands that you would normally evaluate sequentially in a Mathematica session. Writing a script is useful if the commands need to be
repeated many times. Collecting these commands together ensures that they are evaluated in a particular sequence with no command omitted. This is important if you run complex and long calculations. A
Mathematica script is typically written as a file with extension .m in the Mathematica source format.
When you use Mathematica interactively, the commands contained in the script file can be evaluated using Get. This function can also be used programmatically in your code or other .m files.
Get["file"]: read in a file and evaluate the commands in it
<<file: shorter form of Get
Reading commands from a script file.
There is no requirement concerning the structure of the script file. Any sequence of Mathematica commands given in the file will be read and evaluated sequentially. If your code is more complex than
a plain list of commands, you may want to consider writing a more structured package, as described in "Setting Up Mathematica Packages".
Mathematica script is more useful when there is no need for an interactive session; that is, when your script encapsulates a single calculation that needs to be performed—for example, if your
calculation involves heavy computational tasks, such as linear algebra, optimization, numerical integration, or solution of differential equations, and when you do not use typesetting, dynamic
interactivity, or notebooks.
Running the Script
The script file can be used when invoking the Mathematica kernel from the command line, assuming that the MathKernel or math executables are on your path and can be found.
$ MathKernel -script file.m
Running the script file on Mac OS X.
The -script command line option specifies that the Mathematica kernel is to be run in a special script, or batch, mode. In this mode, the kernel reads the specified file and sequentially evaluates
its commands. The kernel turns off the default linewrapping by setting the PageWidth option of the output functions to Infinity and prints no In[] and Out[] labels.
When run in this mode, the standard input and output channels (stdin, stdout, and stderr) are not redirected, and the output is done in InputForm. Thus the output of the script can be saved into a file and then subsequently read back into Mathematica. The output is also suitable for passing to other scripts, so that the MathKernel process evaluating script commands can be used in a pipe with other programs.
Running MathKernel with the -script option is equivalent to reading the file using the Get command, with a single difference: after the last command in the file is evaluated, the kernel terminates.
This behavior may have an effect on MathLink connections or external processes that were created by running the script.
Unix Script Executables
Unix-like operating systems allow writing scripts that can be made executable and run as regular executable programs. This is done by putting an "interpreter" line at the beginning of the file. The
same can be done with the script containing Mathematica commands.
The "interpreter" line consists of two characters, #!, which must be the first characters in the file, followed by the absolute path to the MathematicaScript interpreter, followed by other arguments.
The last argument on the interpreter line must be -script. The MathematicaScript interpreter is included in your copy of Mathematica.
#!/usr/local/bin/MathematicaScript -script
(* generate high-precision samples of a mixed distribution *)
Print /@ RandomVariate[MixtureDistribution[
    {1, 2},
    {NormalDistribution[1, 0.2], NormalDistribution[3, 0.1]}],
  10, WorkingPrecision -> 50]
The path to the interpreter must be an absolute path because the operating system mechanism used to launch the script does not use PATH or other means to find the file. Also, the path may not contain
spaces, so if you installed Mathematica in a location whose absolute path has spaces in it, you will need to make a symbolic link to MathematicaScript in an appropriate location. Giving the absolute
path of the symbolic link is acceptable in the interpreter line, and it will be properly resolved.
To make the script executable, you need to set executable permissions. After that, the script can be run simply by typing its name at a shell prompt.
$ chmod a+x script.m
$ script.m
Make the script executable and run it.
The MathematicaScript interpreter sets up the system environment and then launches the Mathematica kernel. Running the Mathematica script is completely equivalent to running MathKernel -script
The interpreter line may additionally contain other parameters placed between the interpreter path and the -script option. These parameters will be passed to the MathKernel executable. Possible
parameters are specified on the MathKernel page.
#!/usr/local/bin/MathematicaScript -pwfile "file" -script
Interpreter line using additional parameters.
The Mathematica script does not need to have the .m extension. An executable script is a full-featured program equivalent to any other program in a Unix operating system, so it can be used in other
scripts, in pipes, subject to job control, etc. Each Mathematica script launches its own copy of the MathKernel, which does not share variables or definitions. Note that running Mathematica scripts
concurrently may be affected by the licensing restriction on how many kernels you may run simultaneously.
Executable script files can be transparently read and evaluated in an interactive Mathematica session. The Get command will normally ignore the first line of the script if it starts with the #! characters.
Script Parameters
When running a Mathematica script, you may often want to modify the behavior of the script by specifying parameters on the command line. It is possible for the Mathematica code to access parameters
passed to the Mathematica script via $ScriptCommandLine.
#!/usr/local/bin/MathematicaScript -script
(* generate "num" samples of a mixed distribution *)
num = ToExpression[$ScriptCommandLine[[2]]];
Print /@ RandomVariate[
  MixtureDistribution[
    {1, 2},
    {NormalDistribution[1, 0.2],
     NormalDistribution[3, 0.1]}
  ], num, WorkingPrecision -> 50]
Example of a script file, file.m, using a command-line parameter.
Run the script and specify the number of samples.
When accessed in the script, the $ScriptCommandLine is a list containing the name of the script as the first element and the rest of the command line arguments. $ScriptCommandLine follows the
standard argv[] convention.
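As a rough parallel for readers coming from other languages, the same argv[] convention can be sketched in Python (a hypothetical analogue of the file.m example above, not Wolfram documentation):

```python
def parse_script_args(argv):
    """Split an argv-style list the way $ScriptCommandLine is used above:
    argv[0] is the script name, the remaining entries are parameters."""
    script_name, *params = argv
    return script_name, params

# Mirrors $ScriptCommandLine[[2]] fetching the sample count:
name, params = parse_script_args(["file.m", "5"])
num = int(params[0])
print(name, num)   # -> file.m 5
```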
Due to the way the Unix-like operating systems execute scripts, the $ScriptCommandLine is set to a non-empty list only if the Mathematica kernel is invoked via a MathematicaScript mechanism. If the
script is intended to be run both in a batch mode and as a standalone Unix script, or in both Unix and Windows environments, the $ScriptCommandLine can be used to determine how the script is run.
Then, both the $ScriptCommandLine and the $CommandLine should be used to access the command-line arguments.
Normandy Park, WA Prealgebra Tutor
Find a Normandy Park, WA Prealgebra Tutor
...I have over four years of experience as a tutor, working with students from the elementary through the college level. When I work with students, I am aiming for more than just good test scores
- I will build confidence so that my students know that they know the material. Math is my passion, not just what I majored in.
8 Subjects: including prealgebra, calculus, geometry, algebra 1
...During this time I also volunteered with the YMCA Special Olympics program and am very comfortable working with special needs children.I am qualified to tutor Study Skills due to my time spent
in earning my A.A. and B.S. in Biology degree. In total I have earned 173 semester hours, and have been...
25 Subjects: including prealgebra, chemistry, algebra 1, physics
...I have five years experience teaching this subject during the day, as well as during night school for credit recovery classes. I have an educational background in biology and chemistry with
degrees in both. During my biology requirements all my electives were in ecology class.
16 Subjects: including prealgebra, chemistry, physics, biology
...My name is Misa, a 25-year-old woman who grew up in Hawaii and moved to Washington to pursue a college degree. Crazy right? I mean, who moves from Hawaii to eastern Washington?
14 Subjects: including prealgebra, writing, geometry, algebra 1
...I have moved on to learn and use object oriented languages like C# and Java. I learned Python (an interpreted language) when helping my son program an Author recognition software. I have also
completed a course through Coursera (through Rice University) in Python.
16 Subjects: including prealgebra, geometry, algebra 1, algebra 2
Algebra (Volume, Ratios)
March 1st 2013, 01:16 PM
Algebra (Volume, Ratios)
The figure shows a solid consisting of three parts, a cone, a cylinder and a hemisphere, all of the same base radius.
The first part of the question asks us to find the volume of each part, in terms of w, s, t and π (pi):
Volume of cone = (1/3)πt^2w
Volume of cylinder = πt^2s
Volume of hemisphere = (1/2)*(4/3)πt^3
The next part of the question reads:
i: If the volume of each of the three parts is the same, find the ratio w : s : t.
ii: If also w + s + t = 11, find the total volume in terms of π.
Aaaaaaaand I have no idea, nor am I expected to have an idea -- I was never actually taught how to do this, it was some kind of maniacal challenge set by my teacher to see how I would go about
finding an answer.
Any help would be much appreciated. Thanks for your time.
March 1st 2013, 02:51 PM
Prove It
Re: Algebra (Volume, Ratios)
Well start by setting two of the volumes equal to each other.
\displaystyle \begin{align*} \frac{1}{3}\,\pi\, t^2 \, w &= \pi \, t^2 \, s \end{align*}
and find the relationship between w and s. You should be able to form the ratio between them from there.
March 2nd 2013, 01:35 PM
Re: Algebra (Volume, Ratios)
So I did (1/3)πt^2w = πt^2s and came out with s = (1/3)w and thus w = s/(1/3). I tried solving (1/3)πt^2w = (1/2)*(4/3)πt^3 for t but failed miserably.
Have I done this right? Also, what significance does this have in finding the ratio?
March 3rd 2013, 07:16 PM
Re: Algebra (Volume, Ratios)
Bump. Anyone?
March 3rd 2013, 07:46 PM
Re: Algebra (Volume, Ratios)
March 3rd 2013, 10:01 PM
Re: Algebra (Volume, Ratios)
So how did you go from having (1/3)w = s = (2/3)t to having the ratio (6:2:3)?
And also, where did "k" come from?
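For completeness, one way to fill in that step (a sketch added for clarity, not part of the original thread; $k$ is just a common scale factor):

```latex
\tfrac{1}{3}w = s = \tfrac{2}{3}t
\;\Longrightarrow\;
w = 2t,\quad s = \tfrac{2}{3}t
\;\Longrightarrow\;
w : s : t = 2t : \tfrac{2}{3}t : t = 6 : 2 : 3.
```

Writing $w = 6k$, $s = 2k$, $t = 3k$, the condition $w + s + t = 11$ gives $11k = 11$, so $k = 1$, i.e. $w = 6$, $s = 2$, $t = 3$. Since the three parts have equal volume, the total volume is $3\,\pi t^2 s = 3\pi(3)^2(2) = 54\pi$.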
March 4th 2013, 12:39 AM
Re: Algebra (Volume, Ratios)
March 4th 2013, 12:32 PM
Re: Algebra (Volume, Ratios)
Thanks heaps. One final question to clarify my understanding: do we divide by 2 to make each fraction (1/X), (1/Y), etc? (i.e to turn (2/3) into (1/3) we must divide it by 2 and thus all the
others by 2)
March 4th 2013, 08:35 PM
Re: Algebra (Volume, Ratios)
Yes we divide the fractions by a suitable number so that we have 1 in the numerator of each fraction, in this case we divide by 2.
March 4th 2013, 08:36 PM
Re: Algebra (Volume, Ratios)
Thanks heaps.
Evaluate log(base 5) 81.
Round your answer to four decimal places.
The value of `log_5(81)` has to be evaluated:
`log_5(81) = (4*log_e(3))/(log_e(5)) ≈ 2.7304` (using the rule for change of base of logarithms).
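The numeric value can be checked with a quick computation (an illustrative sketch):

```python
import math

value = 4 * math.log(3) / math.log(5)   # log_5(81) = 4 ln 3 / ln 5
print(round(value, 4))                  # -> 2.7304
```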
I JUST NEED A DIAGRAM: From the top of a tower, the angle of depression of an object on the horizontal ground is found to be 60°. On descending 20 m vertically downwards from the top of the tower, the angle of depression of the object is found to be 30°. Find the height of the tower.
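For completeness, a sketch of the computation the requested diagram supports (added for clarity; here $h$ is the tower height and $d$ the horizontal distance to the object):

```latex
\tan 60^\circ = \frac{h}{d} \;\Longrightarrow\; d = \frac{h}{\sqrt{3}},
\qquad
\tan 30^\circ = \frac{h-20}{d}
\;\Longrightarrow\;
h - 20 = \frac{d}{\sqrt{3}} = \frac{h}{3}
\;\Longrightarrow\; \frac{2h}{3} = 20 \;\Longrightarrow\; h = 30\text{ m}.
```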
Building Candidate Keys
Functional Dependencies
A functional dependency is a functional relationship between sets of attributes which belong to the same table and whose definition may or may not change over time.
Emp# ---> Salary
is an example of a functional dependency. For each employee (identified by an emp#) there is one and only one salary.
There are a number of different ways to describe a functional dependency; among them
A ---> B
A determines B
B is dependent upon A
if you know A then you know B
All these are equivalent to one another.
NOTATION: We will use the following notation in these notes. Capital letters A, B, C, etc near the beginning of the alphabet represent single attributes. Capital letters X, Y, Z, etc near the end
of the alphabet represent sets of attributes. Capital letters R, S, T, etc represent relations and these same letters written with an underscore, R, S, T, etc, represent corresponding relation
The following rules can be used in reasoning about functional dependencies. In applying these rules we always assume that all attributes and attribute sets belong to the same schema.
1. Composition/Decomposition of FDs: X --> A and X --> B is equivalent to X --> A,B.
2. Identity: X --> X
3. Transitivity: If A --> B and B --> C then A --> C.
4. Trivial: X --> is true
5. Augmentation: If X --> A then X, Y --> A
We say a functional dependency, X --> A is full if there does not exist a proper subset Y of X such that Y --> A.
We want to use the above rules to prove an additional rule, which we call Augmentation+ and cite below as Rule 6.
If X --> A and Y --> B then X, Y --> A, B
Given X --> A we can conclude X, Y --> A by Augmentation. (i)
Given Y --> B we can conclude X, Y --> B by Augmentation. (ii)
Hence X, Y --> A, B from (i) and (ii) and Composition
Candidate Keys
Recall that for a table R its schema R consists of all attributes of R. We say X C R is a key to R if X --> R .
The primary use of functional dependencies is in discovering the candidate keys of a given relation. A candidate key is a set of attributes, X of a schema R with the property that
X --> R \ X
and no proper subset of X satisfies the same property. A superkey to R is any subset Y of R ( Y C R ) that itself contains a candidate key.
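The closure computation used informally in the examples below can be mechanized: repeatedly apply the given FDs until no new attributes are determined (a minimal Python sketch, not part of the original notes):

```python
def closure(attrs, fds):
    """Compute the set of attributes determined by `attrs` under the
    functional dependencies `fds`, given as (lhs, rhs) string pairs."""
    result = set(attrs)
    changed = True
    while changed:
        changed = False
        for lhs, rhs in fds:
            # If the whole LHS is already determined, so is the RHS
            # (composition, augmentation, and transitivity combined).
            if set(lhs) <= result and not set(rhs) <= result:
                result |= set(rhs)
                changed = True
    return result

# Illustration with made-up FDs A -> B and B -> C: A determines everything.
print(sorted(closure("A", [("A", "B"), ("B", "C")])))  # -> ['A', 'B', 'C']
```

X is a superkey of R exactly when closure(X, fds) equals the whole schema R.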
EXAMPLE 1: Suppose we start with the relation R whose schema is R = {A, B, C, D, E} and whose functional dependencies are
1. A,C --> D
2. B --> E
3. D,A --> B
The idea is to build a functional dependency using all attributes from the schema of R, R. I always start with the functional dependency
A, B, C, D, E -->
Which says that "the empty set of attributes is known if you know anything else".
Now I can modify this functional dependency by moving E to the right-hand-side. My argument is that B --> E and adding A, C and D to the left-hand-side will still give me a functional dependency
by Rules 5 and 6.
B --> E (i) // FD 2
A,B,C,D --> E (ii) // Rule 5
A,B,C --> D,E (iii) // FD 1, FD 2 and Rule 6
A,C --> A, D (iv) // FD 1, Rule 2 and Rule 6
A,C --> B (v) // (iv), FD 3 and Rule 3
A,C --> E (vi) // (v), FD 2 and Rule 3
A,C --> B,D,E (vii) // FD1, (v), (vi) and Rule 1
Now an observation is in order. It is that if an attribute never appears on the right-hand-side of any given functional dependency then you will never be able to move it to the right-hand-side of
any functional dependency you create. Hence if you ever reach a functional dependency of the form
candidate key --> all other attributes
such an attribute must be on the left-hand-side and so be part of the candidate key. In other words, any attribute which does not appear on the right-hand-side of any given functional dependency
belongs to every candidate key.
Since neither A nor C appear on the right-hand-side of any of our three given functional dependencies both belong to every candidate key. This tells us that the functional dependency labeled
(vii) above can not be further modified by pushing another attribute from the left-hand-side to the right-hand-side. Hence we conclude that {A,C} is a candidate key of the above relation.
By my reasoning about A and C, any other candidate key will have to be of the form A,C,X since both A and C must belong to it. However, if X is non-empty then this key will be larger than a
candidate key we have already found and so it is not possible for it to be a candidate key. Hence X must be empty, which says that in this case the only candidate key for the table R is {A, C}.
EXAMPLE 2: Suppose we start with the relation R whose schema is R = {A, B, C, D, E} and whose functional dependencies are
1. A,B --> D
2. C --> E
3. E --> A
4. D --> B
Since C does not appear on the right-hand-side of any functional dependency we can assume C belongs to every candidate key. To find the first candidate key we reason as follows:
A,B,C --> D, E (i) // FD 1, FD 2 and Rule 6
Hence {A, B, C} forms a super-key. We know that C can not be removed to the right-hand-side so it remains to try to move A or B.
C --> A,E (ii) // FD 2, FD 3, Rule 3 and Rule 1
C,B --> A,B (iii) // (ii), Rule 1, Rule 2 and Rule 6
C,B --> D (iv) // (iii), FD 1 and Rule 3
C,B --> A,D,E (v) // (ii), (iv), Rule 5 and Rule 1
We have now reached the point where we know we can not remove C from the left-hand-side and if we remove B then it must be possible to prove C --> B. Since B only depends on D and D, in part,
depends on B we will not be able to put both of these attributes on the right-hand-side so B must remain where it is. Hence {B, C} determines all other attributes and no subset of this set can do
the same thing so we conclude that {B, C} is a candidate key.
To find another candidate key we can start over again and look to put new attributes on the right-hand-side or we can use the observation that if X is a candidate key and Y --> X then Y is a
super-key (and so contains a candidate key). We reason as follows:
X --> all other attributes // since X is a CK
X --> X // Rule 2
X --> all attributes // Rule 1
Y --> X // given
Y --> all attributes // Rule 3
Hence Y is a super-key.
Following this reasoning we try to solve
? --> B,C
This is easy, since C must appear in any candidate key we can automatically put C on the left-hand-side and the fourth functional dependency gives us something else.
D,C --> B,C // FD 4, Rule 2 and Rule 6
Hence {C, D} is a superkey. Using the same reasoning as above (C must belong to every candidate key, and removing D would mean replacing it with B, which yields the first candidate key we already found), we conclude that {C, D} is a candidate key. No further candidate keys will be found.
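Both worked examples can be verified mechanically by brute force: test attribute subsets in increasing size, keep those whose closure is the whole schema, and skip supersets of keys already found (an illustrative Python sketch, fine at this scale but exponential in general):

```python
from itertools import combinations

def closure(attrs, fds):
    """Attributes determined by `attrs` under FDs given as (lhs, rhs) pairs."""
    result, changed = set(attrs), True
    while changed:
        changed = False
        for lhs, rhs in fds:
            if set(lhs) <= result and not set(rhs) <= result:
                result |= set(rhs)
                changed = True
    return result

def candidate_keys(schema, fds):
    """Minimal attribute sets whose closure is the whole schema."""
    keys = []
    for size in range(1, len(schema) + 1):
        for combo in combinations(sorted(schema), size):
            if closure(combo, fds) == set(schema) and \
               not any(set(k) <= set(combo) for k in keys):
                keys.append(combo)
    return keys

# Example 1: A,C -> D;  B -> E;  D,A -> B
print(candidate_keys("ABCDE", [("AC", "D"), ("B", "E"), ("DA", "B")]))
# -> [('A', 'C')]

# Example 2: A,B -> D;  C -> E;  E -> A;  D -> B
print(candidate_keys("ABCDE",
                     [("AB", "D"), ("C", "E"), ("E", "A"), ("D", "B")]))
# -> [('B', 'C'), ('C', 'D')]
```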
Fulton, MD Precalculus Tutor
Find a Fulton, MD Precalculus Tutor
...Each student has a different way of learning a subject. In tutoring, I always make it a point to figure out the student's style of learning and I plan my tutoring sessions accordingly,
spending extra time to prepare for the session prior to meeting with the student. My broad background in math,...
16 Subjects: including precalculus, calculus, physics, statistics
...My current job requires use of algebra to manipulate equations for force calculation. As an engineer, I have used Excel for parts lists, loads calculation, mass properties management (weight
and CG, along with mass moment of inertia) for a nacelle, fastener CG calculation. I consider myself an ...
10 Subjects: including precalculus, physics, calculus, geometry
...I possess an MBA from Georgetown University. The curriculum included a strong marketing component with an international focus. My 25 year career as an engineer and technical consultant has
required me to identify and pursue new market opportunities to grow my employers' companies and, currently, my sole proprietorship.
31 Subjects: including precalculus, chemistry, calculus, physics
...I empathize with students who are frustrated with mathematical coursework and concepts, and incorporate a supportive manner in my teaching style. This applies especially to students with
learning disabilities, or otherwise unique learning styles that may be less compatible with a traditional cla...
16 Subjects: including precalculus, calculus, geometry, statistics
...I have studied several of the elements of Discrete Mathematics both as an undergraduate at MIT and as a PhD student at Cornell University, and I have actively used them in my work at the NASA/
Goddard Space Flight Center in the mathematical modeling of the Earth's Land/Ocean/Atmosphere System. I ...
39 Subjects: including precalculus, chemistry, physics, writing
Some studies on Dynkin diagrams associated with Kac-Moody algebra
Singh, Amit Kumar (2011) Some studies on Dynkin diagrams associated with Kac-Moody algebra. MSc thesis.
In the present project report, a sincere attempt has been made to construct and study the basic material related to simple Lie algebras, Kac-Moody algebras, and their corresponding Dynkin diagrams.
In Chapter 1, I have given the definition of a Lie algebra and of related notions, i.e., subalgebras, ideals, Abelian Lie algebras, solvability, nilpotency, etc. I have also given the classification of the classical Lie algebras.
In Chapter 2, I have addressed the basics of representation theory, i.e., structure constants, modules, reflections in a Euclidean space, and root systems (simple roots) with their corresponding root diagrams. I have then discussed the formation of the Dynkin diagrams and Cartan matrices associated with the roots of the simple Lie algebras.
In Chapter 3, I have given the necessary theory of Kac-Moody Lie algebras and their classification. Then the definitions of the extended Dynkin diagrams for the affinization of Kac-Moody algebras, and of the Dynkin diagrams associated with the affine Kac-Moody algebras, are provided.
Posts by The Development Team—Wolfram|Alpha Blog
Blog Posts from this author:
Based on the vast number of queries we have been receiving from users all around the world, we thought it would be very interesting to draw some inferences from it. We started with “Human Body
Measurements”, one of the many topic areas in Wolfram|Alpha. We thought it would be a safe assumption to make that in more cases than not, when users query for data based on weight or height values,
they are most likely looking for data about themselves (narcissism, thy name is Homo sapiens). Based on this assumption, we plotted all of the height and weight inputs and ended up with the following
We can see from this that the average Wolfram|Alpha user weighs about 154 pounds and is between 5′ 9″ and 5′ 11″ tall. This translates to a BMI between 21.5 and 22.7 for men or women. From these results, we see that the average user falls within the normal range.
Let us see how this hypothetical Wolfram|Alpha user compares with the average American male or female:
Similarly, we can compare user heights with the height distribution of the general population in America.
Since Wolfram|Alpha launched in 2009, we’ve had numerous requests to add data on climate. As part of our one-year anniversary release, we recently added a vast set of historical climate data, drawing
on studies from across the globe, which can be easily analyzed and correlated in Wolfram|Alpha.
You can now query for and compare the raw data from different climate model reconstructions and studies, as reported in peer-reviewed journals and by government agencies, many of them covering more
than a thousand years of history. The full set of reconstructions was chosen from as broad a collection of sources as possible, from well-known records such as ice cores and tree rings, to corals,
speleothems, and glacier lengths—and even some truly unusual ones, like grape harvest dates.
Or are you more interested in global greenhouse gas concentrations?
If you’re interested in exploring this vast area of climatology yourself, you can start by looking at a detailed summary of the most prominent models in literature: simply ask Wolfram|Alpha about “
global climate”, which will bring up a selection of data sets that have figured prominently in the news over the past few years.
Wolfram|Alpha can also compute a more local analysis of recorded temperature variations. For example, you can compare the temperature variations recorded in specific parts of the globe, like the
Northern Hemisphere. Or you can ask about studies conducted in specific countries, like the United Kingdom or Japan.
We’re in the midst of major enhancements to military data in Wolfram|Alpha, with newly added information on army, navy, and air force personnel for over 150 countries as well as statistics on many
armaments, including stockpiles of nuclear warheads.
Let’s start with the big numbers. Type “army size of all countries” and you’ll see China, India, and the Korean Peninsula topping the list. China’s army alone includes 1.4 million soldiers and dwarfs
the population of many smaller countries. The size of its combined army, navy, and air force is nearly equal to the entire population of Macedonia.
There’s an abundance of data on armaments, around the world as well, including estimates on nuclear stockpiles of the nine countries known to have detonated nuclear weapons; according to the latest
available estimates, Russia has the largest stockpile with 13,000 warheads. Also new in Wolfram|Alpha are figures on conventional weapons, including aircraft carriers, battle tanks, and fighter jets.
Try comparing countries’ armaments, such as “tanks USA vs Russia”, or asking about the number of submarines in the NATO alliance.
We recently added data on health indicators for more than 200 countries and territories. We now have World Health Organization data on health care workers, immunizations, water and sanitation,
preventive care, tobacco use, weight, and more.
Data is also now available on specific types of health care personnel, such as physicians, nurses, and dentists, and Wolfram|Alpha can also compute per capita figures for each type of health
professional. Check out the figures on midwives in South Africa or dentists in Iceland—or for a particularly interesting view, try asking about doctors per capita in all countries.
Other intriguing indicators include figures on hospital beds, drinking water and sanitation, tobacco use, weight and obesity, and reproduction and contraception.
Some data, such as for infant immunizations (including DTP, MCV, hepatitis B, and Hib), spans several years—which allows you to see dramatic increases in immunizations in many developing countries,
as well as surprising declines in some first-world nations. More »
When Wolfram|Alpha was introduced, Stephen Wolfram blogged about it being the first “killer app” that resulted from his work on A New Kind of Science (NKS). We can now use this application of NKS to
further our exploration and study within the NKS field. For example, one class of systems discussed in NKS is that of substitution systems. Now that a host of string substitution systems have been
integrated into Wolfram|Alpha, we can explore a variety of these systems—not just the ones that are well known.
A string substitution system is composed of two parts: a string and a set of rules. The string looks like a series of numbers, say “0” and “1”. The rules describe what happens to each number in the
string; for example, “1” -> “0” and “0” -> “10”. Under our rules, our example string, “1”, transforms to “0”. In true NKS fashion, repeated iteration of these simple rules can give interesting
behavior. Our example, which seems deceptively simple, can model the Fibonacci numbers. We simply document the length of the string each time we apply the rules to find that the series of lengths
obtained at the end of each substitution corresponds to the Fibonacci series: {1, 1, 2, 3, 5…}. We see this in the following result:
Similarly, there is a string substitution system that models the Cantor set. The rules that define this substitution system are 1->101 and 0->000: More »
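Such substitution systems are also easy to experiment with in a few lines of code. Here is a small Python sketch (my own illustration, not Wolfram|Alpha's implementation) that reproduces both examples above:

```python
def iterate(rules, start, steps):
    """Repeatedly apply a string substitution system, returning every
    intermediate string (including the starting one)."""
    strings = [start]
    for _ in range(steps):
        start = "".join(rules[c] for c in start)
        strings.append(start)
    return strings

# Fibonacci example: "1" -> "0", "0" -> "10"; string lengths follow 1, 1, 2, 3, 5, ...
fib = iterate({"1": "0", "0": "10"}, "1", 7)
print([len(s) for s in fib])   # [1, 1, 2, 3, 5, 8, 13, 21]

# Cantor set example: "1" -> "101", "0" -> "000"
print(iterate({"1": "101", "0": "000"}, "1", 2))   # ['1', '101', '101000101']
```

The Fibonacci behavior follows because each "1" becomes one "0" and each "0" becomes one "1" and one "0", so the counts obey the Fibonacci recurrence.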
Bitcoins have been heavily debated of late, but the currency's popularity makes it worth attention. Wolfram|Alpha gives values, conversions, and more.
Some of the more bizarre answers you can find in Wolfram|Alpha: movie runtimes for a trip to the bottom of the ocean, weight of national debt in pennies…
Usually I just answer questions. But maybe you'd like to get to know me a bit, too. So I thought I'd talk about myself, and start to tweet. Here goes!
Wolfram|Alpha's Pokémon data generates neat data of its own. Which countries view it most? Which are the most-viewed Pokémon?
Search large database of reactions, classes of chemical reactions – such as combustion or oxidation. See how to balance chemical reactions step-by-step. | {"url":"http://blog.wolframalpha.com/author/devteam/","timestamp":"2014-04-21T07:03:36Z","content_type":null,"content_length":"47309","record_id":"<urn:uuid:0fa09456-2ea8-4e8d-8198-7bc1333c0e93>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00082-ip-10-147-4-33.ec2.internal.warc.gz"} |
Program verification as probabilistic inference (2007)
by Sumit Gulwani
Venue: In Proc. POPL
Citations: 8 - 3 self
author = {Sumit Gulwani},
title = {Program verification as probabilistic inference},
booktitle = {In Proc. POPL},
year = {2007},
pages = {277--289},
publisher = {ACM}
In this paper, we propose a new algorithm for proving the validity or invalidity of a pre/postcondition pair for a program. The algorithm is motivated by the success of the algorithms for
probabilistic inference developed in the machine learning community for reasoning in graphical models. The validity or invalidity proof consists of providing an invariant at each program point that
can be locally verified. The algorithm works by iteratively randomly selecting a program point and updating the current abstract state representation to make it more locally consistent (with respect
to the abstractions at the neighboring points). We show that this simple algorithm has some interesting aspects: (a) It brings together the complementary powers of forward and backward analyses; (b)
The algorithm has the ability to recover itself from excessive under-approximation or over-approximation that it may make. (Because the algorithm does not distinguish between the forward and backward
information, the information could get both under-approximated and overapproximated at any step.) (c) The randomness in the algorithm ensures that the correct choice of updates is eventually made as
there is no single deterministic strategy that would provably work for any interesting class of programs. In our experiments we use this algorithm to produce the proof of correctness of a small (but
non-trivial) example. In addition, we empirically illustrate several important properties of the algorithm.
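To make the flavor of that iteration concrete, here is a deliberately tiny toy sketch (my own illustration, not code from the paper): the "program" is x := 0 followed by three x := x + 1 statements, the abstract states are intervals, and each step picks a random program point and joins in the image of its predecessor's interval. It only propagates information forward over an interval domain, so it omits the backward analysis and the over/under-approximation interplay that the paper actually combines:

```python
import random

def transfer(interval):
    """Abstract effect of the statement x := x + 1 on an interval."""
    lo, hi = interval
    return (lo + 1, hi + 1)

def join(a, b):
    """Least upper bound of two intervals (None means 'no information yet')."""
    if a is None:
        return b
    if b is None:
        return a
    return (min(a[0], b[0]), max(a[1], b[1]))

def infer(n_points=4, iters=1000, seed=0):
    rng = random.Random(seed)
    inv = [None] * n_points
    inv[0] = (0, 0)                          # precondition: x = 0 at entry
    for _ in range(iters):
        i = rng.randrange(1, n_points)       # pick a random program point ...
        if inv[i - 1] is not None:           # ... and make it locally consistent
            inv[i] = join(inv[i], transfer(inv[i - 1]))  # with its predecessor
    return inv

print(infer())   # [(0, 0), (1, 1), (2, 2), (3, 3)]
```

After enough random updates the invariants stabilize at a locally consistent (here, exact) fixpoint, which is the property the randomized update order relies on.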
1874 Abstract interpretation: A unified lattice model for static analysis of programs by construction or approximation of fixpoints - Cousot, Cousot - 1986
1164 Factor graphs and the sum-product algorithm - Kschischang, Frey, Loeliger - 2001
597 Construction of abstract state graphs with PVS - Graf, Saidi - 1990
570 Automatic Discovery of Linear Restraints Among Variables of a Program - Cousot, Halbwachs - 1978
564 Probabilistic Inference Using Markov Chain Monte Carlo Methods - Neal - 1993
448 Lazy Abstraction - Henzinger, Jhala, et al. - 2002
370 The SLAM project: debugging system software via static analysis - Ball, Rajamani - 2002
286 Abstract Interpretation and Application to Logic Programs - Cousot, Cousot - 1992
204 Modular verification of software components in C - Chaki, Clarke, et al.
55 A practical and complete approach to predicate refinement - Jhala, McMillan - 2006
49 A comparison of algorithms for inference and learning in probabilistic graphical models - Frey, Jojic
36 Counterexample driven refinement for abstract interpretation. Tools and Algorithms for the Construction and Analysis of Systems - Gulavani, Rajamani - 2006
34 Refining model checking by abstract interpretation - Cousot, Cousot - 1999
33 Discovering affine equalities using random interpretation - Gulwani, Necula - 2003
31 Verification of real-time systems by successive over and under approximation - Dill, Wong-Toi - 1995
25 Loop invariants on demand - Leino, Logozzo - 2005
21 Global value numbering using random interpretation - Gulwani, Necula - 2004
17 Precise interprocedural analysis using random interpretation - Gulwani, Necula
4 Three uses of the Herbrand-Gentzen theorem in relating model theory and proof theory - Craig - 1957
4 Probabilistic inference of programs from input/output examples - Jojic, Gulwani, et al. - 2006 | {"url":"http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.136.8778","timestamp":"2014-04-20T01:45:47Z","content_type":null,"content_length":"25611","record_id":"<urn:uuid:93cbdbff-a203-43c9-b4bf-4009448f11e3>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00361-ip-10-147-4-33.ec2.internal.warc.gz"} |
An electronics store sells an average of 60 entertainment systems per month at an average of $800 more than the cost price. For every $20 increase in the selling price, the store sells one fewer
system. What amount over the cost price will maximize revenue?
c = cost price; s = selling price; n = number of systems; sn = total revenue; sn - c = income/profit/loss; s = (c + 800); n = 60; c = ? If s goes up by 20, n = n - 1. Sorry, this is all I got
so far; hope it helps in any way.
Q = 60 + (800 - P)/20; R = QP = (100 - P/20)P. So find the max of R = -P^2/20 + 100P and find the price at that max (set the derivative equal to zero and solve for P).
First thing you want is the number sold in terms of the price sold at. Let's call the number sold n, and the price sold p: \[n = 60-\frac{p-800}{20}\] \[n = 60+\frac{800-p}{20}\] Revenue (r) = number
sold × price sold: \[r = p\left(60+\frac{800-p}{20}\right)\] \[r = 60p+40p-\frac{p^2}{20}\] \[r = 100p - \frac{p^2}{20}\] To find the max, we find the stationary point (we know, since this is a
quadratic and the coefficient of p^2 is negative, that the max is the stationary point): \[\frac{dr}{dp} = 100 -\frac{2p}{20}\] \[\frac{dr}{dp} = 100 -\frac{p}{10}\] \[0 = 100 - \frac{p}{10}\] \[0 = 1000 - p\] \[p = 1000\]
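A quick brute-force check of this answer (a sketch in Python, where p is the amount charged over cost price):

```python
def revenue(p):
    """Monthly revenue as a function of p, the markup over cost price."""
    n = 60 + (800 - p) / 20.0   # systems sold per month at a markup of p dollars
    return p * n

best = max(range(0, 2001), key=revenue)
print(best, revenue(best))      # 1000 50000.0
```

So revenue peaks at a markup of $1000, where 50 systems sell for $50,000 in total.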
Thank you very much! I finally understand it.
that's great :)
check an equation of curvature in 2D image?
November 22nd 2012, 04:31 AM #1
Dec 2011
Please, I have a question about an equation given in this article (Curvature and Bending Energy in Digitized 2D and 3D Images), paragraph 3: Isophote curvature in 2D.
He said:
Let f(x, y) be a grey-value image, and fx and fy the derivatives in the x- and y-directions respectively.
g = (fx, fy)    (5)
c = (-fy, fx)    (6)
Θ = arccos(-fy/||g||) = arcsin(fx/||g||) = arctan(-fx/fy)    (7)
where ||g|| = sqrt(fx^2 + fy^2).
To differentiate along the curve with respect to the arc length we use the operator
d/ds = cos Θ d/dx + sin Θ d/dy = (-fy/||g||) d/dx + (fx/||g||) d/dy    (8)
Applying eq. (8) to eq. (7) we get
dΘ/ds = -(fxx fy^2 - 2 fx fy fxy + fyy fx^2)/(fx^2 + fy^2)^(3/2)    (9)
I want to know how he got equation (9) from equations (8) and (7), step by step.
Thanks in advance.
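For reference, eq. (9) can be sanity-checked numerically with a short Python sketch (my own check on a test function f(x, y) = sin(x)cos(y), not code from the article). Note that with Θ and d/ds taken exactly as in (7) and (8), the mechanical computation comes out with the opposite overall sign to (9) as transcribed; the minus sign corresponds to the opposite orientation/curvature convention:

```python
import math

# Test image f(x, y) = sin(x)*cos(y), with all derivatives known in closed form.
def fx(x, y):  return math.cos(x) * math.cos(y)
def fy(x, y):  return -math.sin(x) * math.sin(y)
def fxx(x, y): return -math.sin(x) * math.cos(y)
def fyy(x, y): return -math.sin(x) * math.cos(y)
def fxy(x, y): return -math.cos(x) * math.sin(y)

def theta(x, y):
    # eq. (7): cos(theta) = -fy/||g||, sin(theta) = fx/||g||
    return math.atan2(fx(x, y), -fy(x, y))

x0, y0 = 0.7, 0.3
g = math.hypot(fx(x0, y0), fy(x0, y0))
tx, ty = -fy(x0, y0) / g, fx(x0, y0) / g    # unit tangent (cos theta, sin theta)

# Numerical d(theta)/ds along the isophote, by central difference (eq. (8)):
h = 1e-6
dtheta_ds = (theta(x0 + h * tx, y0 + h * ty)
             - theta(x0 - h * tx, y0 - h * ty)) / (2 * h)

# Bracketed expression of eq. (9); with the conventions of (7)-(8) as quoted,
# the mechanical computation gives it with an overall PLUS sign:
rhs = (fxx(x0, y0) * fy(x0, y0) ** 2
       - 2 * fx(x0, y0) * fy(x0, y0) * fxy(x0, y0)
       + fyy(x0, y0) * fx(x0, y0) ** 2) / g ** 3

print(abs(dtheta_ds - rhs) < 1e-6)   # True
```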
Pearland Statistics Tutor
Find a Pearland Statistics Tutor
I am currently a CRLA certified level 3. I have been tutoring for close to 5 years now on most math subjects from Pre-Algebra up through Calculus 3. I have done TA jobs where I hold sessions for
groups of students to give them extra practice on their course material and help to answer any question...
7 Subjects: including statistics, calculus, algebra 2, algebra 1
I have taught math and science as a tutor since 1989. I am a retired state-certified teacher in Texas in both composite high school science and mathematics. I offer a no-fail guarantee (contact me
via WyzAnt for details). I am available at any time of the day; I try to be as flexible as possible.
35 Subjects: including statistics, chemistry, physics, calculus
...We can also study linear, quadratic, exponential, and inverse functions. Methods for graphing, analyzing, and solving systems of equations and inequalities can also be covered. As needed, we
can reinforce pre-algebra concepts such as negative numbers, fractions, and calculator and computational skills.
30 Subjects: including statistics, calculus, physics, ADD/ADHD
...Most importantly I have learned that it is critical to engage the students in the subject matter. When the students are more enthusiastic, they learn better. Please let me know if there is
anything I could help you with.
42 Subjects: including statistics, Spanish, chemistry, calculus
...I have taught prealgebra and algebra as part of preparation for college level statistics and probability. Algebra is the "language" of much of mathematics. Once you learn the basics, you can
easily tackle more advanced areas.
20 Subjects: including statistics, writing, algebra 1, algebra 2 | {"url":"http://www.purplemath.com/Pearland_statistics_tutors.php","timestamp":"2014-04-18T21:48:00Z","content_type":null,"content_length":"23834","record_id":"<urn:uuid:b01c6195-f5e9-47f7-aced-21c2cb4f6bf9>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00517-ip-10-147-4-33.ec2.internal.warc.gz"} |
LGBARIDDLE: A frog flew by drinking RED-BULL. Then, how did a snake fly?
st: use of string matrices
From: "Feiveson, Alan H. (JSC-SK311)" <alan.h.feiveson@nasa.gov>
To: <statalist@hsphsun2.harvard.edu>
Subject: st: use of string matrices
Date: Tue, 15 May 2007 11:03:17 -0500
A completed crossword puzzle is a string matrix. So if you wanted a
convenient way of storing and printing crossword puzzles, it would be
nice to have a "smatrix list" command. Also, if there were a command to
replace the letters with blanks, (with a special character for the
"blackened" squares) you could print the puzzle and the solution.
Al Feiveson
-----Original Message-----
From: owner-statalist@hsphsun2.harvard.edu
[mailto:owner-statalist@hsphsun2.harvard.edu] On Behalf Of William
Gould, StataCorp LP
Sent: Tuesday, May 15, 2007 8:28 AM
To: statalist@hsphsun2.harvard.edu
Subject: Re: st: how can I make a string matrix?
Edgard Alfonso Polanco Aguilar <e-polanc@uniandes.edu.co> writes
> I'm working is Stata9 SE and I need to create a string matrix. Stata's
> matrix commands don't allow me to make such a thing and I've tried
> defining it on Mata and then call it from Stata but I haven't found a
> way to do it. Does anybody knows a command or a way to make such a
My first reaction is to agree with Maarten Buis
<maartenbuis@yahoo.co.uk> who wrote, "Why do you think you need a string
matrix? Stata is designed in such a way that it is extremely rare that
you need such a thing." Maarten's point is that there may be an easier
way to solve Edgard's problem.
Edgard didn't say what he wanted to do with the matrix in Stata,
so let's just assume Edgard will need the elements one a time. One way
from Stata to access the [1,3] element of global Mata matrix A, and
store the result in local macro -mymac-, is
. mata: st_local("mymac", A[1,3])
After that, in Stata, one would refer to `mymac' (in single quotes) to
obtain the contents, as in
. display "The 1,3 element is `mymac'"
To store the contents of local macro mymac in preexisting matrix B, say
element B[2,7], one could code
. mata: B[2,7] = "`mymac'"
I emphasize that the above -- use of Mata, global matrices, macros, and
element-by-element access from Stata -- is not the way Mata is usually
-- Bill
* For searches and help try:
* http://www.stata.com/support/faqs/res/findit.html
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/
Theoretical and Experimental Probability
Have you ever wanted to trade jobs with someone else? Take a look at this dilemma between two friends.
“I don’t want to work on the weekend,” Carey said to Telly at lunch one day.
“But that was part of the deal. We both have to work one day out of the weekend,” Telly said.
“Well, which day do you want?” Carey asked.
“I don’t know. I haven’t really thought about it,” Telly said. “But we could make it really random.”
“How?” Carey asked.
Telly took two pieces of paper and wrote Saturday on one and Sunday on the other.
“Now we can figure out the probability of you getting Saturday or Sunday,” she said.
We can stop there. This Concept is all about probability. Telly’s experiment is an example of experimental probability. Let’s talk more about this at the end of the Concept.
Experimental probability is probability based on doing actual experiments – flipping coins, spinning spinners, picking ping pong balls out of a jar, and so on. To compute the experimental probability
of the number cube landing on 3 you would need to conduct an experiment. Suppose you were to toss the number cube 60 times.
Favorable outcomes:
Total outcomes: 60 tosses
Experimental probability:
$P(3) = \frac{\text{favorable outcomes}}{\text{total outcomes}} = \frac{\text{number of 3's}}{\text{total number of tosses}}$
Write this comparison down in your notebooks.
Take a look at this situation.
What is the experimental probability of having the number cube land on 3?
trial                                          1    2    3    4    5    6    Total
raw data (tallies of 3's)                      |    |||  -    |    ||   ||
favorable outcomes (3's)                       1    3    0    1    2    2    9
total tosses (total outcomes)                  10   10   10   10   10   10   60
experimental probability (favorable : total)                                 9:60 = 3:20
The data from the experiment shows that 3 turned up on the number cube 9 out of 60 times. Simplified, this ratio becomes:
$\text{Favorable outcomes}:\text{total outcomes}= 3:20$
You can see that it is only possible to calculate the experimental probability when you are actually doing experiments and counting results.
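You can also run such an experiment as a short simulation. Here is a sketch in Python (using the language's pseudo-random number generator to play the role of the number cube):

```python
import random

def experimental_probability(face, tosses, seed=1):
    """Toss a fair six-sided number cube `tosses` times and return the count
    of favorable outcomes together with the total number of outcomes."""
    rng = random.Random(seed)
    favorable = sum(1 for _ in range(tosses) if rng.randint(1, 6) == face)
    return favorable, tosses

favorable, total = experimental_probability(3, 60)
print(favorable, "favorable outcomes out of", total)   # the exact count varies with the seed
```

With only 60 tosses the ratio can wander noticeably away from the theoretical 1:6; with many thousands of tosses it settles close to it.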
A number cube was tossed twenty times. The number 2 came up 3 times and the number 5 came up six times. Use this information to answer the following questions.
Example A
What is the probability that the number would be a 2?
Solution: $2:20$, which simplifies to $1:10$
Example B
What is the probability that the number would be a 5?
Solution: $6:20$, which simplifies to $3:10$
Example C
What is the probability of not rolling a 5?
Solution: $7:10$
Now let's go back to the dilemma from the beginning of the Concept.
Carey has a chance of working Saturday or Sunday. There are two possible outcomes. She has a one-out-of-two chance of working on Saturday and a one-out-of-two chance of working on Sunday: a 50% chance, or probability, for each outcome.
Probability
a mathematical way of calculating how likely an event is to occur.
Favorable Outcome
the outcome that you are looking for
Total Outcomes
all of the outcomes both favorable and unfavorable.
Experimental Probability
probability based on doing actual experiments.
Prediction
a reasonable guess based on probability
Guided Practice
Here is one for you to try on your own.
Use the table to compute the experimental probability of a number cube landing on 6.
trial                1     2     3     4     5     Total
raw data (tallies)   ||||  |     |     ||    |
number of 6's        4     1     1     2     1     9
total tosses         10    10    10    10    10    50
You can see from the experiment that the number cube was tossed 50 times.
The total number of sixes to appear during this experiment was 9.
The experimental probability of rolling a 6 is $9:50$
Directions: Find the probability for rolling less than 4 on the number cube.
1. List each favorable outcome.
2. Count the number of favorable outcomes.
3. Write the total number of outcomes.
4. Write the probability.
5. Find the probability for rolling 1 or 6 on the number cube.
6. List each favorable outcome.
7. Count the number of favorable outcomes.
8. Write the total number of outcomes.
9. Write the probability.
10. A box contains 12 slips of paper numbered 1 to 12. Find the probability for randomly choosing a slip with a number less than 4 on it.
11. List each favorable outcome.
12. Count the number of favorable outcomes.
13. Write the total number of outcomes.
14. Write the probability.
Directions: Use the table to answer the questions. Express all ratios in simplest form.
Use the table to compute the experimental probability of flipping a coin and having it land on heads.
trial                   1      2       3       4     5       6      Total
raw data (heads)        |||||  ||||||  ||||||  |||   ||||||  |||||
number of heads         5      6       6       3     6       5      31
total number of flips   10     10      10      10    10      10     60
13. How many favorable outcomes were there in the experiment?
14. How many total outcomes were there in the experiment?
15. What was the experimental probability of the coin landing on heads? | {"url":"http://www.ck12.org/probability/Theoretical-and-Experimental-Probability/lesson/Experimental-Probability/","timestamp":"2014-04-21T13:01:03Z","content_type":null,"content_length":"117864","record_id":"<urn:uuid:9c4b6227-0083-4666-8a05-1fc518cee512>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00154-ip-10-147-4-33.ec2.internal.warc.gz"} |
Physics Siphoning - Torricelli's Law
Siphons are tubes which draw fluid over the rim of a tank to a lower point. After an initial pressure change to initiate the flow, siphons operate continuously due to the pull of gravity.
Torricelli's law says that if a liquid flows from an opening in a container, its speed is the same as the speed the fluid would have if it fell freely from the liquid's surface down to that opening.
Torricelli's law derives directly from Bernoulli's principle.
Alright let's talk Siphons and Torricelli's law. Now first off what is a siphon? Well a siphon is a tube that I stick into a pool of water or some other fluid and then pull up out of the top and then
I have the other end of that tube lower down closer to the earth than the tube in the water. And so what happens is, if I can initiate a flow, if I can get that fluid flowing up through there, so
that it comes over the top and then falls down it's going to pull the rest of the fluid with it. And so people will use this to empty pools and sometimes to empty gas tanks but it is a very important
physical process that uses Bernoulli's Principle. Now what's Torricelli's law? Torricelli's law is named after the Italian physicist Evangelista Torricelli who derived it in 1643 and what he said was
that if you take a container of liquid and you've got a hole in the bottom of it, then the liquid will flow out of that hole at the same speed that it would move at if would just take the liquid from
the top and drop it.
Now the direction is going to be different, if I take liquid from the top and I drop it, it's going to be going down. But if I've got a tank and I poke a hole on the side of it the liquid is going to
be going horizontally is going to be going over, but the speed is the same and that's Torricelli's law. Alright so let's go ahead and look at a problem involving a siphon and we'll see where these
Torricelli stuff comes up and also where Bernoulli comes up it'll come up all the time. Alright so let's look at this siphon problem right here, I've got a pool of water, we'll just take it to be
water, it could be any other fluid though and I've got my hose I've put it one end of the hose at point c down at the bottom of the tank of water which is 5 meters below the surface, then I pull it
up and out. It's got to go over the edge of the container and that's a point b which is 2 meters above the surface and then I bring down the other end to point d which is 8 meters below the surface.
Now the reason that the siphon is going to work is because the fluid is going to come out here at a lower point in the gravitational field that the earth than here where it came in.
Alright and so that what's going to give me the energy that I need to get over this potential barrier. Now if I'm just going to leave that thing sitting here, it's not going to do anything. I got to
initiate the flow first, alright and the way that I do that is I have to put a pump over here and pump out some of the air at point d or I'll just use my mouth and I suck on it real quick to pull the
liquid up to point b so that it realizes that it can then fall down. And then once I've initiated that flow I just stand back and the whole container will empty by itself. Alright now how are we
going to determine the actual numbers associated with this problem? Well it turns out that Bernoulli's principle is exactly what we need. Bernoulli's principle let's, I've written it over here, now
I've written it in the case of constant density which is fine with water because water is almost incompressible. Very, very difficult to change the density of water you need a lot of pressure to do
that and that's something we're not going to have here. So we've got pressure plus density times acceleration due to gravity times height plus one half density speed squared is a constant.
Now we can interpret this in the exactly the same way as conservation of energy. Pressure represents a sort of potential energy associated with the fluid itself, rho gh represents gravitational
potential energy just like mgh except I divided by the volume which is essentially what we always do with fluids. One half rho d squared is just like one half mv squared that's kinetic energy. So
this is just conservation of energy, now as with essentially all conservation of energy problems the way we're going to approach this is by finding 2 different points. 1 point where this whole
expression is really easy and I can evaluate it without even having to try and I'll know all the numbers I can just write it down and that's what gives me the constant. And the other point which
contains information that I want to know, and then I just say okay they got to be the same. And then I'll solve the equation and get everything I want. Alright so the most complicated part of this
problem, as with essentially any problem, is setting it up: trying to decide which parts we want to use and what we know about those parts. So what I want to do is just kind of show you how to set it up
and I'll give you the answers, but I'm not going to go through a full detailed calculation.
Alright, so let's go ahead to part a, part a asks us for the gauge pressure to initiate the flow. Now remember gauge pressure is pressure minus atmospheric pressure. Now we said that in order to
initiate flow we're going to have to suck on the end of this hose at point d. So that means that the pressure up here at point b is going to have to be less than atmospheric, so the gauge pressure
will be negative. Alright now when we've got flow initiated the water has got to come up to this point, just right at that point. It won't be moving yet but it's going to get right up to that point.
And I want to know the pressure right here okay. So I need 2 different points in this fluid flow to apply Bernoulli's principle to. Okay so let's see obviously I got to use point b as one of them
because I want the pressure at point b and how else I'm I going to get that? What's the other point that I should use? Well it turns out that point a is the best point why? Well because whether or
not I've started flow point a is associated with water that is barely moving right there at 0 height. Notice all the heights are measured relative to the surface of the water. So that means the
surface of the water I can take at 0 height, then the height of point b will be 2 meters, the height of point c will be negative 5 and the height of point d will be negative 8.
Right so my 8 just going to be 0 at point a, my speed is going to be 0 at point a and what's my pressure going to be? Well the only thing that's pushing on the water a point a is the atmosphere. So
it's going to be atmospheric pressure, so that means that all 3 of these added together the Bernoulli sum will just be p atmosphere okay so that's at point a. At point b what do I have? Well jeez I
got the pressure now I've just wanted to initiate flow, the flow has not initiated yet. So my speed is 0 at point b but my height is 2 and so all I need to do is solve for minus p atmospheric and if
you go through the numbers, you'll find that p gauge which is p minus p atmospheric is negative 19,600 Pascal's notice it's negative and that means that the pressure is less than p atmospheric.
Alright let's go ahead to part b, so part b asks us for the speed of the flow. Once the flow has been started, so now it's flowing alright so this isn't like at part a where I took the speed to be 0.
Okay, so I need to know the speed of the flow, alright well again I want to use point a as one of my points because point a still is essentially not moving because the pool is so much bigger than the
hose. So I mean the flow can be flowing through the hose but that surface of the whole pool is going down real slow so I'm just going to ignore that. Alright so I still have p atmospheric at point a
but which point should I use to determine the speed of the flow? Alright this is a little tricky, alright now once you've seen a couple of times you get used to it, it makes perfect sense but it
might not be something that you thought of or that you would've thought of. We're going to use point d, the reason that we're going to use point d is because point d is open to the atmosphere and
that means that its pressure has to equal the atmospheric pressure. Notice that, that's not true at point b or at point c. I don't know the pressure there so I don't want to use those points. But at
point d I know the pressure is p atmospheric, I know the height it's negative 8 and I know, well I don't know the speed but that's what I want. One half rho v squared, now notice what happens here
and this actually is Torricelli's law right here notice it comes directly from Bernoulli's principle which is one of the nice things about Bernoulli's principle. Once you got that one, you can sweep
everybody else off the table because Bernoulli gives you everything.
Alright p atmospheric gone, and look what we got here, the density is going to cancel so this is going to give us rho g8 equals one half rho v squared. The densities cancel, and this is going to give
us v equals square root 2g8. Now that 8 is really the height, so this should be familiar from kinematics. Speed equals square root 2 gh and that's exactly what we get when we drop something, so
that's Torricelli's law. That's why it's true, it's because these 2 pressures cancel. Alright that's why it's true, it comes straight from Bernoulli very, very nice of course the answer if you just
plug in the numbers, it comes out to about 12.52 meters per second. Alright let's go ahead to part c, I want to know the pressure at point b during flow. Alright this is fairly straightforward, alright now
that we've kind of gotten through part b which I think is the trickiest one. Part c is pretty straightforward, I want to know the pressure at point b, so which point am I going to use? I'm going to
use point b, what other point should I use?
Right if I want to know the pressure at point b I'm going to use a and b because a is easy and b has the information I want. So I'll just plug in for a obviously again it'll just be p atmospheric and
then for b I'll have pressure which is what I'm looking for, I'll have height which is 2 and I'll have v which is the answer that I got in part b because the flow through the whole hose is at the
same speed, so we're good to go with that. Plug in all the numbers and we end up with a pressure for part c of 3,325 pascals. Alright one important point to make about this answer for part c is that
this pressure has to be positive. Notice that it doesn't ask for the gauge pressure, we're not solving for the gauge pressure here, we're solving for the total pressure. The total pressure must
always be positive, now this pressure here is very small I mean 3,325 isn't necessarily a small number but when you compare it to the atmospheric pressure 101,325 this is really, really small. So
it's so close to 0 why? Well because point b and point d here the highest part and the part where it's leaving the hose are 10 meters apart.
The highest column of water that the atmosphere on earth can support is 10.34 meters just a little bit bigger than this. So that's why this pressure is so small, if this difference in height will be
more than 10.34 meters then the siphoning process wouldn't work in the way that we've assumed. Essentially the speed of the flow through the hose wouldn't be constant all the way through; the water
would actually speed up as it flows down here and it wouldn't fill up the whole hose, so it would be a more complicated problem. And that's because it's the atmosphere that's having to drive this whole
thing and the atmosphere cannot support more than 10.34 meters. So that's just something to think about, not all problems that you set up this way will actually work. Alright, this
difference in heights needs to be small enough to be supported by the atmosphere. Alright let's go ahead to part d, part d asks us for the gauge pressure at point c during the flow. Again
fairly simple, I want to know some piece of information about point c, I'm going to use point c. Point a is easy like it always is, I'm going to use point a and there we go. So we'll have p
atmospheric at point a, at point c we'll have the pressure which we're looking for, we'll have the height negative 5 and we'll have that one half rho v squared. Where that v is the same thing that we
found in part b, we'll just plug those things in, solve for the gauge pressure which is pressure minus p atmospheric.
Remember that, that's what gauge pressure is, and the answer that we get for part d is negative 29,400 pascals. Notice it's negative, that means that the pressure is less than the atmospheric pressure.
So what that means is, that if I were to take my finger and stick it right there, the hose will try to suck that in. It's this negative right here which is telling me that the hose is sucking the
water in. Alright so that's the way siphons work and that's how Torricelli's law comes in.
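For anyone who wants to check the arithmetic, here is a short sketch (an editorial addition, not part of the lesson) that reproduces the three numeric answers. It assumes g = 9.8 m/s² and ρ = 1000 kg/m³ for water, which are the values the quoted answers imply:

```python
import math

g, rho, P_atm = 9.8, 1000.0, 101325.0   # SI units; values implied by the worked answers

# Part b -- Torricelli's law: the outlet (point d) is 8 m below the surface
v = math.sqrt(2 * g * 8)                        # ~12.52 m/s

# Part c -- Bernoulli from the surface (a) to the top of the hose (b), 2 m up
P_b = P_atm - rho * g * 2 - 0.5 * rho * v**2    # 3325 Pa: small but positive

# Part d -- gauge pressure at point c, 5 m below the surface
gauge_c = rho * g * 5 - 0.5 * rho * v**2        # -29400 Pa: the hose sucks inward

# The limit mentioned above: tallest water column the atmosphere can hold up
h_max = P_atm / (rho * g)                       # ~10.34 m

print(round(v, 2), round(P_b), round(gauge_c), round(h_max, 2))
```

The last line also confirms why the part-c pressure is barely positive: the 10 m drop from point b to point d is just under the 10.34 m the atmosphere can support.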
siphons siphoning torricelli law | {"url":"https://www.brightstorm.com/science/physics/solids-liquids-and-gases/physics-siphoning-torricellis-law/","timestamp":"2014-04-17T01:12:21Z","content_type":null,"content_length":"70815","record_id":"<urn:uuid:76eb4df2-4d1b-4215-84bd-dbe1f30c1dc1>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00528-ip-10-147-4-33.ec2.internal.warc.gz"} |
st: RE: Question about "predict" and "mean" when i ~= (not equal) j?
From "Martin Weiss" <martin.weiss1@gmx.de>
To <statalist@hsphsun2.harvard.edu>
Subject st: RE: Question about "predict" and "mean" when i ~= (not equal) j?
Date Tue, 24 Nov 2009 23:15:25 +0100
Look at
Your "gap" is also known as residual, and -predict- can deliver it directly
with the -residuals- option...
-----Original Message-----
From: owner-statalist@hsphsun2.harvard.edu
[mailto:owner-statalist@hsphsun2.harvard.edu] On Behalf Of Sun, Yan (IFPRI)
Sent: Dienstag, 24. November 2009 23:06
To: statalist@hsphsun2.harvard.edu
Subject: st: Question about "predict" and "mean" when i ~= (not equal) j?
Hi, all,
Theoretically, my question is easy; but I do not know how to program it
in STATA.
Y is my dependent variable, X1, X2, X3 are my independent variable,
i/j=1, 2, ..., n are my observations, m=1, 2, ...,20 is city, my
equation is Y==X1 X2 X3. What I want to calculate is GAP(i~=j,
m)=Predicted Y(j) for i-mean(j, m) (j is all the other observations). I
know how to do it for the equation GAP(i,m)=Predicted Y(i)-mean(i,m), in
this case, what I do is
Reg Y X1 X2 X3
Predict PY, xb
Gen gap=PY-Y
collapse gap, by(m)
But I do not know how to predict and calculate the value which does not
include the current observation (the predicted value and the mean value
which excludes the current value)?
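The leave-one-out mean the question asks for reduces to one identity: the mean of a city's values excluding observation i is (group sum − x_i)/(group count − 1). A quick pandas sketch of that algebra (toy data, not the poster's):

```python
import pandas as pd

# Toy data: predicted values PY in two cities m
df = pd.DataFrame({
    "m":  [1, 1, 1, 2, 2],
    "PY": [1.0, 2.0, 3.0, 10.0, 20.0],
})

grp = df.groupby("m")["PY"]
s = grp.transform("sum")     # city total, broadcast back to every row
n = grp.transform("count")   # city size, broadcast back to every row

# Mean over all OTHER observations in the same city (NaN for singleton cities)
df["loo_mean"] = (s - df["PY"]) / (n - 1)
df["gap"] = df["PY"] - df["loo_mean"]

print(df)
# e.g. first row of city 1: loo_mean = (6 - 1)/2 = 2.5, so gap = -1.5
```

The same identity carries over to Stata via -egen total()- and -egen count()- with by(m). Refitting the regression itself without the current observation is a separate, heavier job (that is what -jackknife- does).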
* For searches and help try:
* http://www.stata.com/help.cgi?search
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/
Judy Robison on 14 Mar 13
"Need a pentagonal pyramid that's six inches tall? Or a number line that goes from ‑18 to 32 by 5's? Or a set of pattern blocks where all shapes have one-inch sides? You can create all those
things and more with the Dynamic Paper tool. Place the images you want, then export it as a PDF activity sheet for your students or as a JPEG image for use in other applications or on the web."
"Practice. Learn. Succeed.
ScootPad is the ultimate way to master math and reading skills.
Self-paced and personalized practice keeps kids engaged & challenged. Common Core aligned, real-time progress tracking & concept proficiency insights, paperless homework....Best of all: FREE!"
provide teachers and students with mathematics relevant to our world today …
"Welcome to Math Chimp! We collect free online math games and organize them by the common core standards. We're glad you've come to play cool math games here... they're free and always will be!"
Based on the constructivist approach, Exploriments encourage the learners to actively participate in the learning process, instead of being passive recipients.
Selected Tags
Related tags | {"url":"https://groups.diigo.com/group/discovery-educator-network/content/tag/math","timestamp":"2014-04-20T08:59:55Z","content_type":null,"content_length":"125614","record_id":"<urn:uuid:5ebb76f5-6d86-4108-ad62-4cacfc1b9634>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00495-ip-10-147-4-33.ec2.internal.warc.gz"} |
Linear multiuser detectors for synchronous code-division multiple-access channels
Results 1 - 10 of 246
- IEEE TRANSACTIONS ON INFORMATION THEORY , 1998
"... In this paper we review the most peculiar and interesting information-theoretic and communications features of fading channels. We first describe the statistical models of fading channels which
are frequently used in the analysis and design of communication systems. Next, we focus on the information ..."
Cited by 289 (1 self)
In this paper we review the most peculiar and interesting information-theoretic and communications features of fading channels. We first describe the statistical models of fading channels which are
frequently used in the analysis and design of communication systems. Next, we focus on the information theory of fading channels, by emphasizing capacity as the most important performance measure.
Both single-user and multiuser transmission are examined. Further, we describe how the structure of fading channels impacts code design, and finally overview equalization of fading multipath
- IEEE Trans. Inform. Theory , 1999
"... Multiuser receivers improve the performance of spread-spectrum and antenna-array systems by exploiting the structure of the multiaccess interference when demodulating the signal of a user. Much
of the previous work on the performance analysis of multiuser receivers has focused on their ability to re ..."
Cited by 269 (11 self)
Multiuser receivers improve the performance of spread-spectrum and antenna-array systems by exploiting the structure of the multiaccess interference when demodulating the signal of a user. Much of
the previous work on the performance analysis of multiuser receivers has focused on their ability to reject worst case interference. Their performance in a power-controlled network and the resulting
user capacity are less well-understood. In this paper, we show that in a large system with each user using random spreading sequences, the limiting interference effects under several linear multiuser
receivers can be decoupled, such that each interferer can be ascribed a level of effective interference that it provides to the user to be demodulated. Applying these results to the uplink of a
single power-controlled cell, we derive an effective bandwidth characterization of the user capacity: the signal-to-interference requirements of all the users can be met if and only if the sum of the
effective bandwidths of the users is less than the total number of degrees of freedom in the system. The effective bandwidth of a user depends only on its own SIR requirement, and simple expressions
are derived for three linear receivers: the conventional matched filter, the decorrelator, and the MMSE receiver. The effective bandwidths under the three receivers serve as a basis for performance
- IEEE TRANS. INFORM. THEORY , 1995
"... The decorrelating detector and the linear minimum mean-square error (MMSE) detector are known to be effective strategies to counter the presence of multiuser interference in code-division
multiple-access channels; in particular, those multiuser detectors provide optimum near-far resistance. When tr ..."
Cited by 257 (16 self)
The decorrelating detector and the linear minimum mean-square error (MMSE) detector are known to be effective strategies to counter the presence of multiuser interference in code-division
multiple-access channels; in particular, those multiuser detectors provide optimum near-far resistance. When training data sequences are available, the MMSE multiuser detector can be implemented
adaptively without knowledge of signature waveforms or received amplitudes. This paper introduces an adaptive multiuser detector which converges (for any initialization) to the MMSE detector without
requiring training sequences. This blind multiuser detector requires no more knowledge than does the conventional single-user receiver: the desired user’s signature waveform and its timing. The
proposed blind multiuser detector is made robust with respect to imprecise knowledge of the received signature waveform of the user of interest.
- IEEE TRANS. INFORM. THEORY , 1999
"... The CDMA channel with randomly and independently chosen spreading sequences accurately models the situation where pseudonoise sequences span many symbol periods. Furthermore, its analysis
provides a comparison baseline for CDMA channels with deterministic signature waveforms spanning one symbol per ..."
Cited by 220 (24 self)
The CDMA channel with randomly and independently chosen spreading sequences accurately models the situation where pseudonoise sequences span many symbol periods. Furthermore, its analysis provides a
comparison baseline for CDMA channels with deterministic signature waveforms spanning one symbol period. We analyze the spectral efficiency (total capacity per chip) as a function of the number of
users, spreading gain, and signal-to-noise ratio, and we quantify the loss in efficiency relative to an optimally chosen set of signature sequences and relative to multiaccess with no spreading.
White Gaussian background noise and equal-power synchronous users are assumed. The following receivers are analyzed: a) optimal joint processing, b) single-user matched filtering, c) decorrelation,
and d) MMSE linear processing.
- IEEE J. Select. Areas Commun , 1999
"... Abstract — We investigate robust wireless communication in high-scattering propagation environments using multi-element antenna arrays (MEA’s) at both transmit and receive sites. A simplified,
but highly spectrally efficient space–time communication processing method is presented. The user’s bit str ..."
Cited by 165 (1 self)
Abstract — We investigate robust wireless communication in high-scattering propagation environments using multi-element antenna arrays (MEA’s) at both transmit and receive sites. A simplified, but
highly spectrally efficient space–time communication processing method is presented. The user’s bit stream is mapped to a vector of independently modulated equal bit-rate signal components that are
simultaneously transmitted in the same band. A detection algorithm similar to multiuser detection is employed to detect the signal components in white Gaussian noise (WGN). For a large number of
antennas, a more efficient architecture can offer no more than about 40 % more capacity than the simple architecture presented. A testbed that is now being completed operates at 1.9 GHz with up to 16
quadrature amplitude modulation (QAM) transmitters and 16 receive antennas. Under ideal operation at 18 dB signal-to-noise ratio (SNR), using 12 transmit antennas and 16 receive antennas (even with
uncoded communication), the theoretical spectral efficiency is 36 bit/s/Hz, whereas the Shannon capacity is 71.1 bit/s/Hz. The 36 bits per vector symbol, which corresponds to over 200 billion
constellation points, assumes a 5 % block error rate (BLER) for 100 vector symbol bursts. Index Terms — Antenna diversity, multi-element arrays (MEA’s), space–time processing, wireless
- IEEE Trans. Inform. Theory , 1997
"... Abstract—Performance analysis of the minimum-mean-square-error (MMSE) linear multiuser detector is considered in an environment of nonorthogonal signaling and additive white Gaussian noise. In
particular, the behavior of the multiple-access interference (MAI) at the output of the MMSE detector is exa ..."
Cited by 144 (14 self)
Abstract—Performance analysis of the minimum-mean-square-error (MMSE) linear multiuser detector is considered in an environment of nonorthogonal signaling and additive white Gaussian noise. In
particular, the behavior of the multiple-access interference (MAI) at the output of the MMSE detector is examined under various asymptotic conditions, including: large signal-tonoise ratio; large
near–far ratios; and large numbers of users. These results suggest that the MAI-plus-noise contending with the demodulation of a desired user is approximately Gaussian in many cases of interest. For
the particular case of two users, it is shown that the maximum divergence between the output MAIplus-noise and a Gaussian distribution having the same mean and variance is quite small in most cases
of interest. It is further proved in this two-user case that the probability of error of the MMSE detector is better than that of the decorrelating linear detector for all values of normalized
crosscorrelations not greater than I
- IEEE TRANS. INFORM. THEORY , 1999
"... There has been intense effort in the past decade to develop multiuser receiver structures which mitigate interference between users in spread-spectrum systems. While much of this research is
performed at the physical layer, the appropriate power control and choice of signature sequences in conjuncti ..."
Cited by 78 (5 self)
There has been intense effort in the past decade to develop multiuser receiver structures which mitigate interference between users in spread-spectrum systems. While much of this research is
performed at the physical layer, the appropriate power control and choice of signature sequences in conjunction with multiuser receivers and the resulting network user capacity is not well
understood. In this paper we will focus on a single cell and consider both the uplink and downlink scenarios and assume a synchronous CDMA (S-CDMA) system. We characterize the user capacity of a
single cell with the optimal linear receiver (MMSE receiver). The user capacity of the system is the maximum number of users per unit processing gain admissible in the system such that each user has
its quality-of-service (QoS) requirement (expressed in terms of its desired signal-to-interference ratio) met. Our characterization allows us to describe the user capacity through a simple effective
bandwidth characterization: Users are allowed in the system if and only if the sum of their effective bandwidths is less than the processing gain of the system. The effective bandwidth of each user
is a simple monotonic function of its QoS requirement. We identify the optimal signature sequences and power control strategies so that the users meet their QoS requirement. The optimality is in the
sense of minimizing the sum of allocated powers. It turns out that with this optimal allocation of signature sequences and powers, the linear MMSE receiver is just the corresponding matched filter
for each user. We also characterize the effect of transmit power constraints on the user capacity.
- IEEE Trans. Inform. Theory , 2000
"... A linear multiuser receiver for a particular user in a code-division multiple-access (CDMA) network gains potential benefits from knowledge of the channels of all users in the system. In fast
multipath fading environments we cannot assume that the channel estimates are perfect and the inevitable cha ..."
Cited by 68 (3 self)
A linear multiuser receiver for a particular user in a code-division multiple-access (CDMA) network gains potential benefits from knowledge of the channels of all users in the system. In fast
multipath fading environments we cannot assume that the channel estimates are perfect and the inevitable channel estimation errors will limit this potential gain. In this paper, we study the impact
of channel estimation errors on the performance of linear multiuser receivers, as well as the channel estimation problem itself. Of particular interest are the scalability properties of the channel
and data estimation algorithms: what happens to the performance as the system bandwidth and the number of users (and hence channels to estimate) grows? Our main results involve asymptotic expressions
for the signal-to-interference ratio of linear multiuser receivers in the limit of large processing gain, with the number of users divided by the processing gain held constant. We employ a random
model for the spreading sequences and the limiting signal-to-interference ratio expressions are independent of the actual signature sequences, depending only on the system loading and the channel
statistics: background noise power, energy profile of resolvable multipaths, and channel coherence time. The effect of channel uncertainty on the performance of multiuser receivers is succinctly
captured by the notion of effective interference.
- IEEE TRANS. COMMUN , 1994
"... Direct Sequence (DS) Code Division Multiple Access (CDMA) is a promising technology for wireless environments with multiple simultaneous transmissions because of several features: asynchronous
multiple access, robustness to frequency selective fading, and multipath combining. The capacity ..."
Cited by 65 (7 self)
Direct Sequence (DS) Code Division Multiple Access (CDMA) is a promising technology for wireless environments with multiple simultaneous transmissions because of several features: asynchronous
multiple access, robustness to frequency selective fading, and multipath combining. The capacity | {"url":"http://citeseerx.ist.psu.edu/showciting?doi=10.1.1.83.2174","timestamp":"2014-04-18T12:29:08Z","content_type":null,"content_length":"40872","record_id":"<urn:uuid:c23f6ec9-817b-4bf8-af6c-01abb0a744df>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00189-ip-10-147-4-33.ec2.internal.warc.gz"} |
Archives of the Caml mailing list > Message from David McClain
A way to restore sanity to IEEE FP math.
From: David McClain <dmcclain@a...>
Subject: A way to restore sanity to IEEE FP math.
All the recent difficulties I've had with IEEE FP math can be traced to the
bipolar nature of zero. Direct use of this means that although the two
compare as equal, they cause a situation where subtraction is no longer
anticommutative. They manifest this difficulty most in complex arithmetic
when the function atan2 is used to compute the phase angle of a complex
I recommend, instead, that complex arithmetic use the function phase,
defined as,
phase re im = atan2 (im +. 0.0) (re +. 0.0)
Doing so preserves the affine infinities, and preserves the use of bipolar
zero for those who need it, but restores the sanity of our number system in
the complex plane.
The phase angles of (-1, 0-) and (-1, 0+) are thus on the same Riemann sheet,
addition retains its commutative behavior, negation is still involutive, and
subtraction is restored to anticommutativity.
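The same-sheet claim is easy to verify numerically. A small Python sketch of the same idea (Python's math.atan2 follows the same C99/IEEE 754 signed-zero conventions as OCaml's atan2, and under the default round-to-nearest mode (-0.0) + 0.0 rounds to +0.0):

```python
import math

# atan2 sees the sign of zero: the two edges of the branch cut differ
assert math.atan2(0.0, -1.0) == math.pi       # upper edge of the cut
assert math.atan2(-0.0, -1.0) == -math.pi     # lower edge of the cut

# Adding +0.0 collapses -0.0 to +0.0 (IEEE 754, round-to-nearest)
assert math.copysign(1.0, -0.0 + 0.0) == 1.0

def phase(re, im):
    """Port of the proposed OCaml definition: phase re im = atan2 (im +. 0.0) (re +. 0.0)."""
    return math.atan2(im + 0.0, re + 0.0)

# Both zeros now land on the same Riemann sheet
assert phase(-1.0, 0.0) == phase(-1.0, -0.0) == math.pi
print("ok")
```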
The only way to detect which zero one might have is through the use of the
curious function atan2, which allows one to see ever so slightly beyond one
Riemann sheet by peeking under the principal sheet along the branch cut on
the negative real axis. But since this is a curious function, and since
complex arithmetic now uses phase to obtain its angles, we have a sensible
Furthermore, the old identities (a + 0) = a are restored, as well as (0 -
a) = -a.
Hence the kinds of quick simplifications one is tempted to do are okay and
will not violate the arithmetic. Our faith can be restored in FP math, and
branch cuts are all properly preserved regardless of the number and kind of
quick simplifications we might be tempted to perform during translation from
algebra to computer code.
Those needing the bipolar zeros for interval arithmetic and for detecting
the sense of arithmetic rounding are still able to achieve their goals, by
means of the atan2 function.
- DM | {"url":"http://caml.inria.fr/pub/ml-archives/caml-list/1999/09/b2ecef0d3c40eb433c89af24b219ef0c.en.html","timestamp":"2014-04-19T17:13:16Z","content_type":null,"content_length":"6774","record_id":"<urn:uuid:38393523-ea40-4726-9500-38a53c762214>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00445-ip-10-147-4-33.ec2.internal.warc.gz"} |
Quincy Center, MA Algebra 2 Tutor
Find a Quincy Center, MA Algebra 2 Tutor
...I am also the math facilitator for my building and have been teaching for 8 years. I teach a variety of levels of students from advanced to students with special needs. I show students a
variety of ways to answer a problem because my view is that as long as student can answer a question, understand how they got the answer and can explain how they do so, it doesn't matter the
5 Subjects: including algebra 2, algebra 1, precalculus, study skills
...In addition to undergraduate level linear algebra, I studied linear algebra extensively in the context of quantum mechanics in graduate school. I continue to use undergraduate level linear
algebra in my physics research. I use MATLAB routinely in my research.
16 Subjects: including algebra 2, calculus, physics, geometry
...I have a great vocabulary and had a great education. I also taught for many years. I have written a lot in my career, got 5s on all my AP exams and As in my English courses.
90 Subjects: including algebra 2, chemistry, English, reading
...In high school, I couldn't get enough of it. When the opportunity came to study at Harvard in the summer after my junior year, I independently prepared for the AP exam and went on to take
organic chemistry I and II. I majored in chemistry at Boston University, and worked in organic, inorganic, and materials labs.
15 Subjects: including algebra 2, chemistry, writing, geometry
...I hold both a master's and a bachelor's degree in economics. In addition to math, I have also taught economics both at the high school and community college level.I have taught Algebra I at
the honors and standard levels for almost 15 years. It is one of my favorite subjects to teach and tutor.
6 Subjects: including algebra 2, geometry, algebra 1, economics
Related Quincy Center, MA Tutors
Quincy Center, MA Accounting Tutors
Quincy Center, MA ACT Tutors
Quincy Center, MA Algebra Tutors
Quincy Center, MA Algebra 2 Tutors
Quincy Center, MA Calculus Tutors
Quincy Center, MA Geometry Tutors
Quincy Center, MA Math Tutors
Quincy Center, MA Prealgebra Tutors
Quincy Center, MA Precalculus Tutors
Quincy Center, MA SAT Tutors
Quincy Center, MA SAT Math Tutors
Quincy Center, MA Science Tutors
Quincy Center, MA Statistics Tutors
Quincy Center, MA Trigonometry Tutors
Nearby Cities With algebra 2 Tutor
Braintree Highlands, MA algebra 2 Tutors
Braintree Hld, MA algebra 2 Tutors
East Braintree, MA algebra 2 Tutors
East Milton, MA algebra 2 Tutors
Grove Hall, MA algebra 2 Tutors
Houghs Neck, MA algebra 2 Tutors
Marina Bay, MA algebra 2 Tutors
Norfolk Downs, MA algebra 2 Tutors
North Quincy, MA algebra 2 Tutors
Quincy, MA algebra 2 Tutors
South Quincy, MA algebra 2 Tutors
Squantum, MA algebra 2 Tutors
West Quincy, MA algebra 2 Tutors
Weymouth Lndg, MA algebra 2 Tutors
Wollaston, MA algebra 2 Tutors | {"url":"http://www.purplemath.com/Quincy_Center_MA_Algebra_2_tutors.php","timestamp":"2014-04-16T07:40:42Z","content_type":null,"content_length":"24504","record_id":"<urn:uuid:659f9415-bf5b-4b69-bcda-3e416487b7ba>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00380-ip-10-147-4-33.ec2.internal.warc.gz"} |
Perform load flow and initialize models containing three-phase machines and dynamic load blocks
LF = power_loadflow('-v2',sys)
LF = power_loadflow('-v2',sys,'solve')
LF = power_loadflow('-v2',sys,'noupdate')
LF = power_loadflow('-v2',sys,'solve','report')
LF = power_loadflow('-v2',sys,'solve','report',fname)
Mparam = power_loadflow(sys)
MI = power_loadflow(sys,Mparam)
The power_loadflow function contains two different tools for performing load flow and initializing three-phase models with machines.
Load Flow Tool
The Load Flow tool uses the Newton-Raphson method to provide a robust, fast-converging solution. It offers most of the functionality of other load flow software available in the power utility industry.
The -v2 option in power_loadflow function syntax allows you to access this tool from the command line.
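As a toy illustration of the Newton-Raphson iteration only (a sketch of the idea, not the tool's actual implementation): the smallest possible "load flow" is one generator bus tied to a slack bus through a lossless reactance X, where the voltage angle δ must satisfy P = (V1·V2/X)·sin δ. All values below are invented per-unit numbers.

```python
import math

V1, V2, X = 1.0, 1.0, 0.5   # per-unit voltages and line reactance (made up)
P_sched = 0.8               # scheduled active power, pu

def mismatch(delta):
    # Power balance residual: delivered power minus scheduled power
    return (V1 * V2 / X) * math.sin(delta) - P_sched

def jacobian(delta):
    # d(mismatch)/d(delta), the 1x1 "Jacobian" of this tiny system
    return (V1 * V2 / X) * math.cos(delta)

delta, tol = 0.0, 1e-9
for it in range(50):
    f = mismatch(delta)
    if abs(f) < tol:               # PQ-tolerance-style stopping test
        break
    delta -= f / jacobian(delta)   # Newton update

print(it, math.degrees(delta))     # converges in a handful of iterations, ~23.58 degrees
```

A full tool solves the same kind of mismatch equations simultaneously for every bus, with a matrix Jacobian, which is where the fast quadratic convergence of Newton-Raphson pays off.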
power_loadflow('-v2',sys) opens a graphical user interface to edit and perform load flow of sys using the Newton Raphson algorithm.
LF = power_loadflow('-v2',sys) returns the current load flow parameters of sys.
LF = power_loadflow('-v2',sys,'solve') computes the load flow of sys. The model is initialized with the load flow solution.
LF = power_loadflow('-v2',sys,'noupdate') computes the load flow but does not initialize the model with the load flow solution.
LF = power_loadflow('-v2',sys,'solve','report') computes the load flow and then opens the editor to save the load flow report.
LF = power_loadflow('-v2',sys,'solve','report',fname) computes the load flow and then saves detailed information in fname file.
Machine Initialization Tool
The Machine Initialization tool offers simplified load flow features to initialize the machine initial currents of your models.
power_loadflow(sys) opens the Machine Initialization tool dialog box to perform machine initialization.
Mparam = power_loadflow(sys) returns the machine initialization parameter values of sys. Use Mparam as a template variable to perform new machine initialization.
MI = power_loadflow(sys,Mparam) computes the machine initial conditions using the load flow parameters given in Mparam. Create an instance of Mparam using Mparam = power_loadflow(sys), and then edit
the initialization parameter values based on this template.
Load Flow Tool GUI
Open this dialog box by entering the power_loadflow('-v2', sys) command, or from the Powergui block dialog box by selecting Load Flow.
The frequency used by the Load Flow tool to compute the normalized Ybus network admittance matrix of the model and to perform the load flow calculations. This value corresponds to the Load flow
frequency parameter of the Powergui block.
The base power used by the Load Flow tool to compute the normalized Ybus network admittance matrix in p.u./Pbase and bus base voltages of the model. This value corresponds to the Base power Pbase
parameter of the Powergui block.
Defines the maximum number of iterations that the Load flow tool iterates until the P and Q powers mismatch at each bus is lower than the PQ tolerance parameter value (in pu/Pbase). This value
corresponds to the Max iterations parameter of the Powergui block.
The tolerance used by the Load Flow tool. This value corresponds to the PQ Tolerance parameter of the Powergui block.
Click to get the latest changes in the model. Any previous load flow solution is cleared from the table.
Click to solve the load flow. The solution is displayed in the V_LF, Vangle_LF, P_LF, and Q_LF columns of the Load Flow tool table.
Click to display the load flow report showing power flowing at each bus. Save the report in a file that is displayed in the MATLAB^® editor.
Click to apply the load flow solution to the model.
Close the Load Flow tool.
Load Flow Parameters
The Load flow parameters and solution are returned in a structure with the following fields.
model: The name of the model.
frequency: The load flow frequency, in hertz. This value corresponds to the Load flow frequency parameter of the Powergui block.
basePower: The base power used by the Load Flow tool. This value corresponds to the Base power Pbase parameter of the Powergui block.
tolerance: The tolerance used by the Load Flow tool. This value corresponds to the PQ Tolerance parameter of the Powergui block.
bus: [1 x Nbus] structure with fields defining the bus parameters. Nbus is the number of buses in the model.
sm: [1 x Nsm] structure with fields defining the load flow parameters of the Synchronous Machine blocks. Nsm is the number of Synchronous Machine blocks in the model.
asm: [1 x Nasm] structure with fields defining the load flow parameters of the Asynchronous Machine blocks. Nasm is the number of Asynchronous Machine blocks in the model.
vsrc: [1 x Nsrc] structure with fields defining the load flow parameters of the Three-Phase Source and Three-Phase Programmable Voltage Source blocks. Nsrc is the number of voltage source blocks in the model.
pqload: [1 x Npq] structure with fields defining the load flow parameters of the Three-Phase Dynamic Load blocks. Npq is the number of Three-Phase Dynamic Load blocks in the model.
rlcload: [1 x Nrlc] structure with fields defining the load flow parameters of the Three-Phase Parallel RLC Load and Three-Phase Series RLC Load blocks. Nrlc is the number of Three-Phase Parallel RLC Load and Three-Phase Series RLC Load blocks in the model.
Ybus1: [Nbus x Nbus] positive-sequence complex admittance matrix in p.u./Pbase.
Networks: Lists the bus numbers of each independent network.
status: Returns 1 when a solution is found, and -1 when no solution is found.
iterations: The number of iterations that the solver took to solve the load flow.
error: Displays an error message when no solution is found.
Machine Initialization Tool Dialog Box
You can open this dialog box by entering the power_loadflow(sys) command, or from the Powergui block dialog box by selecting Machine Initialization.
Displays the names of the Simplified Synchronous Machines, the Synchronous Machines, the Asynchronous Machine, and the Three-Phase Dynamic Load blocks of your model. Select a machine or a load in
the list box to set its parameters.
If Bus type is set to P&V Generator, you can set the terminal voltage and active power of the machine. If Bus type is set to PQ generator, you can set the active and reactive powers. If Bus type
is set to Swing Bus, you can set the terminal voltage, enter an active power guess, and specify the phase of the UAN terminal voltage of the machine.
If you select an Asynchronous Machine block, you have to enter only the mechanical power delivered by the machine. If you select a Three-Phase Dynamic Load block, you must specify the active and reactive powers consumed by the load.
Specify the terminal line-to-line voltage of the selected machine.
Specify the active power of the selected machine or load.
Specify an active power guess to start iterations when the specified machine bus type is Swing Bus.
Specify the reactive power of the selected machine or load.
This parameter is activated only when the bus type is Swing Bus.
Specify the phase of the phase-to-neutral voltage of phase A of the selected machine.
In motor mode, specify the mechanical power developed by the squirrel cage induction machine. In generator mode, specify the mechanical power absorbed by the machine as a negative number.
Specify the frequency to be used in the calculations (normally 60 Hz or 50 Hz).
Normally, you keep the default setting Auto to let the tool automatically adjust the initial conditions before starting iterations. If you select Start from previous solution, the tool starts
with initial conditions corresponding to the previous solution. Try this option if the load flow fails to converge after a change has been made to the power and voltage settings of the machines
or to the circuit parameters.
Update the list of machines, voltage and current phasors, as well as the powers, if you have made a change in your model while the Machine Initialization tool is open. The new voltages and powers
displayed are computed by using the machine currents obtained from the last computation (the three currents stored in the Initial conditions parameter of the machine blocks).
Executes the calculations for the given machine parameters.
Machine Initialization Parameters
The Machine Initialization parameters of a model are organized in a structure with the following fields.
name: Cell array of string values defining the names of the machine blocks of the model.
type: Cell array of string values ('Asynchronous Machine', 'Simplified Synchronous Machine', 'Synchronous Machine', or 'Three Phase Dynamic Load') defining the mask type of the machine and load blocks.
set: Structure with variable fields defining the parameters specific to each machine or dynamic load (Bus Type, Terminal Voltage, Active Power, Reactive Power, Mechanical Power).
LoadFlowFrequency: Parameter defining the frequency used by the tool, in hertz. The frequency is specified only in the first element of the Machine Initialization parameters structure.
InitialConditions: String value defining the initial condition type ('Auto', 'Start from previous solution'). The initial condition status is specified only in the first element of the Machine Initialization parameters structure.
DisplayWarnings: String value ('on', 'off') controlling the display of warning messages during the Machine Initialization computation.
For example, you obtain the initialization parameters for the power_machines example by doing:
Mparam = power_loadflow('power_machines');
Mparam(1)

ans =
    name: 'SM 3.125 MVA'
    type: 'Synchronous Machine'
    set: [1x1 struct]
    LoadFlowFrequency: 60
    InitialConditions: 'Auto'
    DisplayWarnings: 'on'

Mparam(2)

ans =
    name: 'ASM 2250HP'
    type: 'Asynchronous Machine'
    set: [1x1 struct]
    LoadFlowFrequency: []
    InitialConditions: []
    DisplayWarnings: []
If you use the Mparam = power_loadflow(sys) command to create the initialization parameters structure, you do not need to edit or modify the name and type fields of Mparam. The set field is where you specify new parameter values, Mparam(1).LoadFlowFrequency is where you define the frequency, and Mparam(1).InitialConditions is where you specify the initial conditions status.
If your model does not contain any Asynchronous, Simplified Synchronous, or Synchronous Machine blocks and no Three-Phase Dynamic Load block, Mparam returns an empty variable.
Initialization Parameters of Asynchronous Machine Block
For the Asynchronous Machine blocks you can specify only the mechanical power of the machine. The set field is a structure with the following field.
MechanicalPower: The mechanical power, in watts, of the machine.
For example, the mechanical power of the Asynchronous Machine in the power_machines example is:
Mparam = power_loadflow('power_machines');
Mparam(2).set

ans =
MechanicalPower: 1492000
Initialization Parameters of Synchronous Machine Blocks
For the Simplified Synchronous Machine and Synchronous Machine blocks, the set field is a structure with these fields.
BusType: String ('P & V generator', 'P & Q generator', or 'Swing bus') defining the bus type of the machine.
TerminalVoltage: Parameter defining the terminal voltage, in volts rms.
ActivePower: Parameter defining the active power, in watts.
ReactivePower: Parameter defining the reactive power, in vars.
PhaseUan: Parameter defining the phase, in degrees, of the Uan voltage.
For example, the initialization parameter values of the Synchronous Machine block in the power_machines example are:
Mparam = power_loadflow('power_machines');
Mparam(1).set

ans =
BusType: 'P & V generator'
TerminalVoltage: 2400
ActivePower: 0
ReactivePower: 0
PhaseUan: 0
Initialization Parameters of Three-Phase Dynamic Load Block
For the Three-Phase Dynamic Load block, the set field is a structure with the following fields.
ActivePower: Parameter defining the active power, in watts.
ReactivePower: Parameter defining the reactive power, in vars.
Machine Initialization Results
The results are organized in a structure with the following fields.
status: Returns 1 when a solution is found, and 0 when no solution is found. The status is given only in the first element of the Machine Initialization structure; it returns an empty value for the other elements.
Machine: The names of the machines or loads.
Nominal: The nominal parameters [nominal power, nominal voltage] of the machines or loads.
BusType: The bus type of the machines or loads.
UanPhase: The phase angles, in degrees, of the phase A-to-neutral voltage at machine or load terminals.
Uab, Ubc, Uca: The steady-state, phase-to-phase terminal voltages of the machines or loads. The voltages are returned in a 1-by-3 vector containing the voltage in volts, the voltage in p.u. based on the nominal power of the machine, and the phase in degrees.
Ia, Ib, Ic: The steady-state phase currents of the machines or loads. The currents are returned in a 1-by-3 vector containing the current in amperes, the current in p.u. based on the nominal power of the machine, and the phase in degrees.
P: The active power of the machine, returned in a 1-by-2 vector containing the power in watts and in p.u. based on the nominal power of the machine.
Q: The reactive power of the machine, returned in a 1-by-2 vector containing the power in vars and in p.u. based on the nominal power of the machine.
Pmec: The mechanical power of the machine, returned in a 1-by-2 vector containing the mechanical power in watts and in p.u. based on the nominal parameters of the machine.
Torque: The mechanical torque of the machine, returned in a 1-by-2 vector containing the torque in N.m and in p.u. based on the nominal power and speed of the machine.
Vf: The computed field voltage of Synchronous Machine blocks. This parameter is set to an empty value for the other types of machines and loads.
Slip: The computed slip of Asynchronous Machine blocks. This parameter is set to an empty value for the other types of machines and loads.
The power_machines example illustrates a rated 2250 HP, 2.4 kV asynchronous motor connected on a 25 kV distribution network. The motor develops a mechanical power of 2000 HP (1.492 MW). You change
its mechanical power to −1.0 MW (generator mode) and initialize the machine initial currents to start the simulation in steady state.
In the Command Window, type:
power_machines
The model opens.
Obtain the initialization parameters for the model:
Mparam = power_loadflow('power_machines');
Mparam(2).set

ans =
MechanicalPower: 1492000
Change the mechanical power of the asynchronous motor to switch it to generator mode:
Mparam(2).set.MechanicalPower = -1e6;
Initialize the machine initial currents to start the simulation in steady state:
MI = power_loadflow('power_machines',Mparam);
View the Synchronous Machine block parameters:
MI(1)

ans =
status: 1
Machine: 'SM 3.125 MVA'
Nominal: [3125000 2400]
BusType: 'P & V generator'
UanPhase: -30.3047
Uab: [2.4000e+003 1.0000 -0.3047]
Ubc: [2.4000e+003 1.0000 -120.3047]
Uca: [2.4000e+003 1.0000 119.6953]
Ia: [140.1689 0.1865 -120.3047]
Ib: [140.1689 0.1865 119.6953]
Ic: [140.1689 0.1865 -0.3047]
P: [-5.8208e-011 -1.8626e-017]
Q: [5.8267e+005 0.1865]
Pmec: [391.1104 1.2516e-004]
Torque: [2.0749 1.2516e-004]
Vf: 1.2909
Slip: []
The MI(1).status is set to 1, meaning that the function has found a solution for the given parameters.
View the Asynchronous Machine block parameters:
MI(2)

ans =
status: []
Machine: 'ASM 2250HP'
Nominal: [1678500 2400]
BusType: 'Asynchronous Machine'
UanPhase: -30.3047
Uab: [2.4000e+003 1.0000 -0.3047]
Ubc: [2.4000e+003 1.0000 -120.3047]
Uca: [2.4000e+003 1.0000 119.6953]
Ia: [268.7636 0.6656 177.3268]
Ib: [268.7636 0.6656 57.3268]
Ic: [268.7636 0.6656 -62.6732]
P: [-9.8981e+005 -0.5897]
Q: [5.1815e+005 0.3087]
Pmec: [-1000000 -0.5958]
Torque: [-5.2844e+003 -0.5934]
Vf: []
Slip: -0.0039
The mechanical torque reference input for the Asynchronous Machine block is now set to –5284.43 N.m and the slip is S = –0.0039, meaning that the machine now behaves as a generator instead of a
motor. Run the simulation and verify that it starts in steady state.
[Haskell-cafe] FGL custom node identification (Label -> Node lookup)
[Haskell-cafe] FGL custom node identification (Label -> Node lookup)
Thomas DuBuisson thomas.dubuisson at gmail.com
Thu Nov 24 19:13:20 CET 2011
My thinking on this was that something akin to NodeMap should be
_part_ of the graph structure. This would be more convenient and
allow the graph and nodemap operations to apply to a single data structure.
Instead of:
insMapNode_ :: (Ord a, DynGraph g) => NodeMap a -> a -> g a b -> g a b
You could have:
insMapNode_ :: (Ord a, DynGraph g) => a -> g a b -> g a b
The only thing stopping us from making a product data type like this
is the inflexibility of the type classes, right?  Were we able to
define (&) to update the nodemap too then we could keep these two
structures in sync automatically instead of expecting the programmer
to keep them paired correctly.
On Thu, Nov 24, 2011 at 1:42 AM, Ivan Lazar Miljenovic
<ivan.miljenovic at gmail.com> wrote:
> On 24 November 2011 20:33, Thomas DuBuisson <thomas.dubuisson at gmail.com> wrote:
>> All,
>> The containers library has a somewhat primitive but certainly useful
>> Data.Graph library. Building a graph with this library simultaneously
>> results in the lookup functions:
>> m1 :: Vertex -> (node, key, [key])
>> m2 :: key -> Maybe Vertex
>> (where 'key' is like FGL's 'label' but is assumed to be unique)
>> This is exactly what I wanted when building and analyzing a call graph
>> in FGL. To that end, I started making a graph type that tracked label
>> to Node mappings, wrapping Data.Graph.Inductive.Gr, and assuming the
>> labels are all unique.
>> The classes for such a graph actually aren't possible. The ability to
>> build a mapping from a node's 'label' to the 'Node' requires extra
>> context (ex: Hashable, Ord, or at least Eq), but such context can not
>> be provided due to the typeclass construction.
>> Is there any chance we can change the Graph and DynGraph classes to
>> expose the type variables 'a' and 'b'?
>> class Graph gr a b where ...
>> class (Graph gr) => DynGraph gr a b where ...
>> This would allow instances to provide the needed context:
>> instance (Hashable a, Hashable b) => Graph UniqueLabel a b where
>> ...
>> buildGraph = ... some use of containers libraries that
>> require context ...
>> ...
>> lookupNode :: Hashable a => UniqueLabel a b -> a -> Node
>> -- etc
>> Cheers,
>> Thomas
>> P.S. Please do educate me if I simply missed or misunderstood some
>> feature of FGL.
> Well, there *is* the NodeMap module, but I haven't really used it so
> I'm not sure if it does what you want.
> We did start upon a version of FGL which had these type variables in
> the class, but it got a little fiddly; the ability to have superclass
> constraints should solve this but I haven't touched FGL for a while,
> as I've been working on some other graph library code for planar
> graphs, with the plan to take my experience from writing this library
> into a "successor" to FGL.
> However, my experience with designing this planar graph library has
> led me to using abstract (i.e. non-exported constructor) ID types for
> nodes and edges and finding them rather useful, but then I'm more
> concerned about the _structure_ of the graph rather than the items
> stored within it. As such, I'd appreciate you explaining to me
> (off-list is OK) why you want/need such a label -> node mapping so
> that I can try and work out a way to incorporate such functionality.
> --
> Ivan Lazar Miljenovic
> Ivan.Miljenovic at gmail.com
> IvanMiljenovic.wordpress.com
More information about the Haskell-Cafe mailing list | {"url":"http://www.haskell.org/pipermail/haskell-cafe/2011-November/097120.html","timestamp":"2014-04-18T18:40:44Z","content_type":null,"content_length":"7658","record_id":"<urn:uuid:1e3fb237-9a64-4389-b975-b3fc2faf6af1>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00162-ip-10-147-4-33.ec2.internal.warc.gz"} |
From Wikibooks, open books for an open world
Localization involves one question: Where is the robot now? Or, robo-centrically, where am I, keeping in mind that "here" is relative to some landmark (usually the point of origin or the destination)
and that you are never lost if you don't care where you are.
Although a simple question, answering it isn't easy, as the answer depends on the characteristics of your robot. Localization techniques that work fine for one robot in one environment may not work well, or at all, in another. For example, methods which work well outdoors may be useless indoors.
All localization techniques generally provide two basic pieces of information:
• what is the current location of the robot in some environment?
• what is the robot's current orientation in that same environment?
The first could be in the form of Cartesian or Polar coordinates or geographic latitude and longitude. The latter could be a combination of roll, pitch and yaw or a compass heading.
Current Location
The current location of a robot can be determined in several very different ways:
Dead Reckoning
Dead reckoning uses odometry to measure how far the robot moves. Trigonometry and the equations of kinematics are all that is needed to calculate its new position.
This method has two drawbacks:
• It needs a way to determine its initial location.
• Its accuracy usually decreases over time, as every measurement has an error and errors accumulate.
Dead reckoning is often used with other methods to improve the overall accuracy.
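For a differential-drive robot, the trigonometry mentioned above amounts to a few lines of code. The following is a minimal sketch (the straight-segment model, function name, and wheel geometry parameters are illustrative assumptions, not a specific robot's API):

```python
import math

def dead_reckon(x, y, theta, d_left, d_right, wheel_base):
    """Update a 2-D pose (x, y, theta) from wheel odometry.

    d_left, d_right: distance each wheel travelled since the last update.
    wheel_base: distance between the two wheels.
    Uses a simple straight-segment model; measurement errors accumulate
    over time, which is why dead reckoning is usually combined with
    other localization methods.
    """
    d_center = (d_left + d_right) / 2.0         # distance moved by the robot's center
    d_theta = (d_right - d_left) / wheel_base   # change in heading, in radians
    # Trigonometry turns the travelled distance into an x/y displacement,
    # using the heading at the middle of the segment.
    x += d_center * math.cos(theta + d_theta / 2.0)
    y += d_center * math.sin(theta + d_theta / 2.0)
    theta = (theta + d_theta) % (2.0 * math.pi)
    return x, y, theta

# Both wheels travel 1 m: the robot moves 1 m straight ahead.
pose = dead_reckon(0.0, 0.0, 0.0, 1.0, 1.0, 0.3)   # -> (1.0, 0.0, 0.0)
```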
Least Mean Squares
There are numerous solutions to the localization problem in robotics. These range from simple dead reckoning methods to advanced algorithms with expensive radar or vision systems. The most
important factor in picking an algorithm to find the robot's location is the availability of accurate relative and global position data. For simple systems with basic relative position sensors
and some form of a global position sensor, the most practical and easiest to implement localization method is that of Least Mean Squares.
See Least Squares: http://en.wikipedia.org/wiki/Least_squares for more information on using the method of Least Squares to find solutions for over determined systems.
To look at the Least Mean Square (LMS) algorithm in a general sense first, it is important to look at the general principles that govern it. The goal of all localization methods is to minimize the
error between where the robot is and where the robot should be. The algorithm must be able to follow a preset track or find the least error in the robot location (Adaptive Algorithms and Structures,
p. 97). These goals for robot localization are applied to the LMS algorithm by adaptively adjusting weights to minimize the error between the actual function and the function generated by the LMS
algorithm. As a subset of the Steepest Descent Algorithm (http://en.wikipedia.org/wiki/Gradient_descent), this method was created to minimize the error by estimating the gradient (http://
en.wikipedia.org/wiki/Gradient) and is known as the simplest localization method with global position output.
Derivation of LMS Algorithm
The LMS algorithm can be applied to two cases:
• Parallel multiple inputs
• Series single input
Figure 1: Linear Combiner Configuration [1]
Figure 2: Curve Fit to Data to Minimize Error [3]
Figure 3: Transversal Combiner Configuration [1]
In both cases, the error at step k is calculated as

ε[k] = d[k] - X^T[k]W[k]

where d[k] is the desired output, X[k] is the input vector, and W[k] is the weight vector at step k. Next, the gradient estimate is calculated as

del[k] = -2ε[k]X[k]

Now that we have the gradient estimate, the weights can be updated by

W[k+1] = W[k] - μdel[k] = W[k] + 2με[k]X[k]

to minimize the error through iterative change. 'μ' is the gain constant for training the algorithm; it adjusts how fast the error is corrected for by the weight update
function [1]. If the training constant is too high, the system will oscillate and never converge to the minimal error value.
To illustrate the process in something other than mathematical notation, the LMS algorithm can also be expressed as pseudocode. The code below is for illustration only; it is intended as a step-by-step visualization of the algorithm and a starting point for an implementation.
Procedure LMS
initialize the weights
repeat
    choose training pair (X, d)
    for all k do
        Y[k] = W[k]X[k]
    end
    for all k do
        ε[k] = d[k] - Y[k]
    end
    for all k do
        W[k](j+1) = W[k](j) + με[k]X[k]
    end
until termination condition reached
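The pseudocode above translates almost directly into code. Here is a minimal sketch in plain Python (the training data, the gain value, and the fixed-epoch stopping rule are illustrative assumptions):

```python
def lms_fit(samples, n_weights, mu=0.1, epochs=200):
    """Train LMS weights on (input_vector, desired_output) pairs.

    mu is the gain constant: too large a value makes the updates
    oscillate instead of converging, as noted above.
    """
    w = [0.0] * n_weights                                # initialize the weights
    for _ in range(epochs):                              # repeat ...
        for x, d in samples:                             # choose training pair (X, d)
            y = sum(wk * xk for wk, xk in zip(w, x))     # output Y = W.X
            err = d - y                                  # error = d - Y
            w = [wk + mu * err * xk                      # W <- W + mu * err * X
                 for wk, xk in zip(w, x)]
    return w

# Learn d = 2*x + 1; each input vector carries a constant 1 as a bias term.
data = [([0.0, 1.0], 1.0), ([1.0, 1.0], 3.0), ([2.0, 1.0], 5.0)]
weights = lms_fit(data, n_weights=2)   # -> approximately [2.0, 1.0]
```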
Various Variations
Modified LMS Method
For problems with sequentially arriving data, the Modified LS method can be used to add or delete rows of data. In various time-series problems, a moving ‘window’ can be employed to evaluate data
over a predefined timeframe. This method is useful in LMS problems that arise in statistics, optimization, and signal processing [2].
Constrained LMS Method
For curve and surface fitting application problems, the Constrained LMS method is very useful. Such problems require satisfying linear equations systems with various unknowns. This method only
provides a useful solution if and only if each input can be scaled by some magnitude and still remains a constant [2].
Direct Methods for Sparse problems
A sparse LMS problem involves solving the LMS problem when the matrix has 'relatively few' nonzero elements. "J. H. Wilkinson defined a sparse matrix to be 'any matrix with enough zeros that it pays
to take advantage of them'" [2]. A more precise definition is difficult to provide. Such problems are usually large to enormous in size and involve numerous unknowns. The following are examples
of problems that can be classified as sparse: pattern matching, cluster analysis, and surface fitting [2].
Iterative Methods for LMS problems
A second method for solving sparse problems (in addition to the Direct Method mentioned above) is the Iterative Method. In general, the Iterative Method is used to analyze under-determined, sparse problems and compute a minimum-norm solution [2].
Nonlinear LMS Method
For problems with nonlinear systems, the Nonlinear LMS method is useful for iteratively calculating the relative LMS solution to the problem. This method can be used for unconstrained optimization
and various survey methods [2].
Additional Sources for Application and Theory
For further reference and additional reading, the following resources can be used for a more extensive overview of the subject.
• Adaptive Signal Processing [1]
• Numerical Methods for LS Problem book [2]
GPS Global Positioning System
On large terrains, GPS can provide more or less accurate coordinates; dead reckoning and data from the robot's other sensors can fill in the gaps. However, on small terrains or indoors, GPS inaccuracy becomes a problem and the dead reckoning and sensor data become the dominant factors in determining the location.
GPS-like Systems
Although GPS isn't very useful indoors, techniques similar to those used in GPS can be applied there. All a robot needs is to measure its distance to 3 fixed beacons. Each of these distances describes a circle with the beacon at its center. Three such circles generally intersect in only one point.
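Solving for that intersection point is straightforward: subtracting the circle equation of the first beacon from the other two cancels the quadratic terms and leaves two linear equations in x and y. A minimal sketch (the beacon positions and distances in the example are made up for illustration):

```python
def trilaterate(b1, d1, b2, d2, b3, d3):
    """Return (x, y) given distances d1..d3 to beacons b1..b3 (each an (x, y) pair).

    Subtracting beacon 1's circle equation from beacons 2's and 3's gives a
    2x2 linear system, solved here by Cramer's rule. The determinant is zero
    if the three beacons lie on one line, so avoid collinear beacon layouts.
    """
    (x1, y1), (x2, y2), (x3, y3) = b1, b2, b3
    a11, a12 = 2.0 * (x2 - x1), 2.0 * (y2 - y1)
    a21, a22 = 2.0 * (x3 - x1), 2.0 * (y3 - y1)
    c1 = d1**2 - d2**2 + x2**2 - x1**2 + y2**2 - y1**2
    c2 = d1**2 - d3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a11 * a22 - a12 * a21
    return (c1 * a22 - c2 * a12) / det, (a11 * c2 - a21 * c1) / det

# A robot 5, sqrt(65) and sqrt(45) units from beacons at (0,0), (10,0), (0,10):
pos = trilaterate((0, 0), 5.0, (10, 0), 65 ** 0.5, (0, 10), 45 ** 0.5)
# pos is approximately (3.0, 4.0)
```

With noisy real-world distances the three circles do not meet in exactly one point; a least-squares fit over more than three beacons (as in the LMS section above) is the usual remedy.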
Line following
Probably the easiest way: Draw a line on the ground and use IR reflection sensors to follow it. Useful as a track or to mark the edges of the robot's work area.
This can also be done by putting an electric cable in the ground and sending a modulated signal through it. The robot can detect this cable with magnetic sensors (Hall-sensors or coils).
More complex versions of line following use a vision sensor (a camera), which can detect lines of various colours, reduces the overall cost of sensors and implementation, and allows a greater scope for autonomy in the system.
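With two IR reflection sensors straddling the line, the control logic can be as simple as steering back toward whichever sensor still sees the line. A minimal bang-bang sketch (the speed values and sensor arrangement are illustrative assumptions):

```python
def follow_line(left_on_line, right_on_line, base_speed=0.5, turn=0.3):
    """Return (left_wheel_speed, right_wheel_speed) from two IR sensors.

    Each sensor reports True while it detects the line. If only one sensor
    sees the line, the robot has drifted toward the other side, so slow the
    wheel on the line's side to steer back over it.
    """
    if left_on_line and not right_on_line:    # line is to the left: turn left
        return base_speed - turn, base_speed + turn
    if right_on_line and not left_on_line:    # line is to the right: turn right
        return base_speed + turn, base_speed - turn
    return base_speed, base_speed             # centred on the line (or line lost)
```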
Placing landmarks is another way to help your robot navigate through its environment. Landmarks can be active beacons (IR or sound) or passive (reflectors). Using bar code scanners is another option.
A special trick, if the environment is known beforehand, is to use collision detection (i.e. sensor bumpers or similar) with known objects. Together with dead reckoning the precision can be
extraordinary even when using cheap equipment.
In new environments landmarks often need to be determined by the robot itself. Through the use of sensor data collected by laser, sonar or camera, specific landmark types (e.g. walls, doors,
corridors) can be recognised and used for localization.
Current Heading
Determining a robot's heading can be done with a compass sensor. These sensors usually consist of 2 magnetic field sensors placed at a 90° angle.
These sensors measure the earth's magnetic field and can be influenced by other magnetic fields. Speakers, transformers, electric cables and refrigerator magnets can reduce the accuracy of the
compass. Compass sensor modules can have built-in filtering for the magnetic fields of AC. | {"url":"http://en.wikibooks.org/wiki/Robotics/Navigation/Localization","timestamp":"2014-04-18T16:27:51Z","content_type":null,"content_length":"41685","record_id":"<urn:uuid:08933eca-88c5-47a0-9e98-31851cbea18f>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00408-ip-10-147-4-33.ec2.internal.warc.gz"} |
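Given the two orthogonal field readings, the heading follows from their arctangent. A minimal sketch (this assumes a level sensor and ignores tilt compensation and magnetic declination; which direction counts as zero depends on how the module's axes are mounted):

```python
import math

def compass_heading(bx, by):
    """Heading in degrees in [0, 360) from two orthogonal magnetic-field
    readings taken at 90 degrees to each other (sensor assumed level)."""
    return math.degrees(math.atan2(by, bx)) % 360.0

# Field entirely along the x sensor: heading 0 by this convention.
h = compass_heading(1.0, 0.0)   # -> 0.0
```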
The 13th European Conference on Evolutionary Computation in Combinatorial Optimisation
EvoCOP is a multidisciplinary conference that brings together researchers working on metaheuristics that are used to solve difficult combinatorial optimization problems appearing in various
industrial, economic, and scientific domains. Prominent examples of metaheuristics include: evolutionary algorithms, simulated annealing, tabu search, scatter search and path relinking, memetic
algorithms, ant colony and bee colony optimization, particle swarm optimization, variable neighbourhood search, iterated local search, greedy randomized adaptive search procedures, estimation of
distribution algorithms, and hyperheuristics. Successfully solved problems include scheduling, timetabling, network design, transportation and distribution problems, vehicle routing, travelling
salesman, graph problems, satisfiability, energy optimization problems, packing problems, planning problems, and general mixed integer programming.
The EvoCOP 2013 conference will be held in Vienna, Austria. The conference will be held in conjunction with EuroGP (the 16th European Conference on Genetic Programming), EvoBIO (the 11th European
Conference on Evolutionary Computation, Machine Learning and Data Mining in Computational Biology), EvoMUSART (11th European conference on evolutionary and biologically inspired music, sound, art and
design) and EvoAPPS (specialist events on a range of evolutionary computation topics and applications), in a joint event collectively known as EvoStar.
For more information see the EvoCOP 2013 webpage http://www.evostar.org/
Areas of Interest and Contributions
Topics of interest include, but are not limited to:
• Applications of metaheuristics to combinatorial optimization problems
• Novel application domains for metaheuristic optimisation methods
• Representation techniques
• Neighborhoods and efficient algorithms for searching them
• Variation operators for stochastic search methods
• Constraint-handling techniques
• Hybrid methods and hybridization techniques
• Parallelization
• Theoretical developments
• Search space and landscape analyses
• Comparisons between different (also exact) techniques
• Metaheuristics and machine learning
• Ant colony optimisation
• Artificial immune systems
• Bee colony optimization
• Genetic programming and Genetic algorithms
• (Meta-)heuristics
• Scatter search
• Particle swarm optimisation
• Tabu search, iterated local search and variable neighborhood search
• Memetic algorithms and hyperheuristics
• Estimation of distribution algorithms
• String processing
• Scheduling and timetabling
• Network design
• Vehicle routing
• Graph problems
• Satisfiability
• Packing and cutting problems
• Energy optimization problems
• Practical solving of NP-hard problems
• Mixed integer programming
• Multi-objective optimisation
• Grid computing
• Combinatorial optimisation
• Nature and Bio-inspired methods
• Quantum computing and quantum annealing
• Optimization in Cloud computing
Publication Details
All accepted papers will be presented orally at the conference and printed in the proceedings published by Springer in the LNCS series (see LNCS volumes 2037, 2279, 2611, 3004, 3448, 3906, 4446,
4972, 5482, 6022, 6622, and 7245 for the previous proceedings).
Submission Detail
Submissions must be original and not published elsewhere. The submissions will be peer reviewed by at least three members of the program committee. The authors of accepted papers will have to improve
their paper on the basis of the reviewers’ comments and will be asked to send a camera ready version of their manuscripts. At least one author of each accepted work has to register for the
conference, attend the conference and present the work.
The reviewing process will be double-blind, please omit information about the authors in the submitted paper. Submit your manuscript in Springer LNCS format.
Submission link: http://myreview.csregistry.org/evocop13/
Page limit: 12 pages
Important Dates
Submission deadline: 1 November 2012, extended to 11 November 2012
EVO* event: 3-5 April 2013
EvoCOP programme chairs
• Martin Middendorf
University of Leipzig, Germany
• Christian Blum
IKERBASQUE, Basque Foundation for Science, Spain
University of the Basque Country, Spain
EvoCOP Accepted Paper Abstracts
A Hyper-heuristic with a Round Robin Neighbourhood Selection
Ahmed Kheiri, Ender Ozcan
An iterative selection hyper-heuristic passes a solution through a heuristic selection process to decide on a heuristic to apply from a fixed set of low level heuristics and then a move acceptance
process to accept or reject the newly created solution at each step. In this study, we introduce Robinhood hyper-heuristic whose heuristic selection component allocates equal share from the overall
execution time for each low level heuristic, while the move acceptance component enables partial restarts when the search process stagnates. The proposed hyper-heuristic is implemented as an
extension to a public software used for benchmarking of hyper-heuristics, namely HyFlex. The empirical results indicate that Robinhood hyper-heuristic is a simple, yet powerful and general multistage
algorithm performing better than most of the previously proposed selection hyper-heuristics across six different Hyflex problem domains.
A Multiobjective Approach Based on the Law of Gravity and Mass Interactions for Optimizing Networks
Alvaro Rubio-Largo, Miguel A. Vega-Rodriguez
In this work, we tackle a real-world telecommunication problem by using Evolutionary Computation and Multiobjective Optimization jointly. This problem is known in the literature as the Traffic
Grooming problem and consists on multiplexing or grooming a set of low-speed traffic requests (Mbps) onto high-speed channels (Gbps) over an optical network with wavelength division multiplexing
facility. We propose a multiobjective version of an algorithm based on the laws of motions and mass interactions (Gravitational Search Algorithm, GSA) for solving this NP-hard optimization problem.
After carrying out several comparisons with other approaches published in the literature for this optical problem, we can conclude that the multiobjective GSA (MO-GSA) is able to obtain very
promising results.
A Multi-Objective Feature Selection Approach Based on Binary PSO and Rough Set Theory
Liam Cervante, Bing Xue, Lin Shang, Mengjie Zhang
Feature selection has two main objectives of maximising the classification performance and minimising the number of features. However, most existing feature selection algorithms are single objective
wrapper approaches. In this work, we propose a multi-objective filter feature selection algorithm based on binary particle swarm optimisation (PSO) and probabilistic rough set theory. The proposed
algorithm is compared with other five feature selection methods, including three PSO based single objective methods and two traditional methods. Three classification algorithms naive bayes, decision
trees and k-nearest neighbours) are used to test the generality of the proposed filter algorithm. Experiments have been conducted on six datasets of varying difficulty. Experimental results show that
the proposed algorithm can automatically evolve a set of non-dominated feature subsets. In almost all cases, the proposed algorithm outperforms the other five algorithms in terms of both the number
of features and the classification performance (evaluated by all the three classification algorithms). This paper presents the first study on using PSO and rough set theory for multi-objective
feature selection.
A New Crossover for Solving Constraint Satisfaction Problems
Reza Abbasian, Malek Mouhoub
In this paper we investigate the applicability of Genetic Algorithms (GAs) for solving Constraint Satisfaction Problems (CSPs). Despite some success of GAs when tackling CSPs, they generally suffer
from poor crossover operators. In order to overcome this limitation in practice, we propose a novel crossover specifically designed for solving CSPs. Together with a variable ordering heuristic and
an integration into a parallel architecture, this proposed crossover enables the solving of large and hard problem instances as demonstrated by the experimental tests conducted on randomly generated
CSPs based on the model RB. We will indeed demonstrate, through these tests, that our proposed method is superior to the known GA based techniques for CSPs. In addition, we will show that we are able
to compete with the efficient MAC-based Abscon 109 solver for random problem instances.
A Population-based Strategic Oscillation Algorithm for Linear Ordering Problem with Cumulative Costs
Wei Xiao, Wenqing Chu, Zhipeng Lu, Tao Ye, Guang Liu, Shanshan Cui
This paper presents a Population-based Strategic Oscillation (denoted by PBSO) algorithm for solving the linear ordering problem with cumulative costs (denoted by LOPCC). The proposed algorithm
integrates several distinguished features, such as an adaptive strategic oscillation local search procedure and an effective population updating strategy. The proposed PBSO algorithm is compared with
several state-of-the-art algorithms on a set of public instances up to 100 vertices, showing its efficacy in terms of both solution quality and efficiency. Moreover, several important ingredients of
the PBSO algorithm are analyzed.
A study of adaptive perturbation strategy for iterated local search
Una Benlic, Jin-Kao Hao
We investigate the contribution of a recently proposed adaptive diversification strategy (ADS) to the performance of an iterated local search (ILS) algorithm. ADS is used as a diversification
mechanism by breakout local search (BLS), which is a new variant of the ILS metaheuristic. The proposed perturbation strategy adaptively selects between two types of perturbations (directed or random
moves) of different intensities, depending on the current state of search. We experimentally evaluate the performance of ADS on the quadratic assignment problem (QAP) and the maximum clique problem
(MAX-CLQ). Computational results accentuate the benefit of combining adaptively multiple perturbation types of different intensities. Moreover, we provide some guidance on when to introduce a weaker
and when to introduce a stronger diversification into the search.
Adaptive MOEA/D for QoS-based web service composition
Mihai Suciu, Denis Pallez, Marcel Cremene, Dumitru Dumitrescu
QoS aware service composition is one of the main research problems related to Service Oriented Computing (SOC). A certain functionality may be offered by several services having different
Quality of Service (QoS) attributes. Although the QoS optimization problem is multiobjective by its nature, most approaches are based on single-objective optimization. Compared to single-objective
algorithms, multiobjective evolutionary algorithms have the main advantage that the user has the possibility to select a posteriori one of the Pareto optimal solutions. A major challenge that arises
is the dynamic nature of the problem of composing web services. The algorithm's performance is highly influenced by the parameter settings. Manual tuning of these parameters is not feasible. An
evolutionary multiobjective algorithm based on decomposition for solving this problem is proposed. To address the dynamic nature of this problem we consider the hybridization between an adaptive
heuristics and the multiobjective algorithm. The proposed approach outperforms state of the art algorithms.
An Analysis of Local Search for the Bi-objective Bidimensional Knapsack Problem
Leonardo C. T. Bezerra, Manuel Lopez-Ibanez, Thomas Stuetzle
Local search techniques are increasingly often used in multi-objective combinatorial optimization due to their ability to improve the performance of metaheuristics. The efficiency of multi-objective
local search techniques heavily depends on factors such as (i) neighborhood operators, (ii) pivoting rules and (iii) bias towards good regions of the objective space. In this work, we conduct an
extensive experimental campaign to analyze such factors in a Pareto local search (PLS) algorithm for the bi-objective bidimensional knapsack problem (bBKP). In the first set of experiments, we
investigate PLS as a stand-alone algorithm, starting from random and greedy solutions. In the second set, we analyze PLS as a post-optimization procedure.
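The Pareto dominance relation at the core of PLS is easy to state concretely. The following is my own minimal sketch (not the authors' code) of a bi-objective non-dominated filter for maximization problems such as the bBKP; the profit pairs are hypothetical:

```python
def dominates(a, b):
    """True if point a dominates b (maximization): a is at least as good
    in both objectives and strictly better in at least one."""
    return a[0] >= b[0] and a[1] >= b[1] and a != b

def non_dominated(points):
    """Keep only the points no other point dominates (the Pareto archive)."""
    return [p for p in points if not any(dominates(q, p) for q in points)]

# Hypothetical (profit1, profit2) pairs for candidate knapsack solutions.
archive = non_dominated([(3, 5), (4, 4), (2, 6), (3, 4)])
# archive keeps (3, 5), (4, 4) and (2, 6); (3, 4) is dominated by (3, 5).
```

PLS maintains such an archive and repeatedly explores the neighborhoods of archived solutions until no accepted neighbor remains.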
An Artificial Immune System based approach for solving the Nurse Re-Rostering Problem
Broos Maenhout, Mario Vanhoucke
Personnel resources can introduce uncertainty into operational processes. Constructed personnel rosters can be disrupted, rendering them infeasible. Feasibility has to be restored by adapting
the originally announced personnel rosters. In this paper, an Artificial Immune System for the nurse re-rostering problem is presented. The proposed algorithm uses problem-specific and even
roster-specific mechanisms which are inspired by the vertebrate immune system. We observe the performance of the different algorithmic components and compare the proposed procedure with the existing
Automatic Algorithm Selection for the Quadratic Assignment Problem using Fitness Landscape Analysis
Erik Pitzer, Andreas Beham, Michael Affenzeller
In the last few years, fitness landscape analysis has seen an increase in interest due to the availability of large problem collections and research groups focusing on the development of a wide array
of different optimization algorithms for diverse tasks. Instead of being able to rely on a single trusted method that is tuned and tweaked to the application, more and more new problems are
investigated, where little or no experience has been collected. In an attempt to provide a more general criterion for algorithm and parameter selection other than ``it works better than something
else we tried'', sophisticated problem analysis and classification schemes are employed. In this work, we combine several of these analysis methods and evaluate the suitability of fitness landscape
analysis for the task of algorithm selection.
Balancing Bicycle Sharing Systems: A Variable Neighborhood Search Approach
Marian Rainer-Harbach, Petrina Papazek, Bin Hu, Guenther R. Raidl
We consider the necessary redistribution of bicycles in public bicycle sharing systems in order to prevent rental stations from running empty or becoming entirely full. For this purpose we propose a general Variable
Neighborhood Search (VNS) with an embedded Variable Neighborhood Descent (VND) that exploits a series of neighborhood structures. While this metaheuristic generates candidate routes for vehicles to
visit unbalanced rental stations, the numbers of bikes to be loaded or unloaded at each stop are efficiently derived by one of three alternative methods based on a greedy heuristic, a maximum flow
calculation, and linear programming, respectively. Tests are performed on instances derived from real-world data and indicate that the VNS based on a greedy heuristic represents the best compromise
for practice. In general the VNS yields good solutions and scales much better to larger instances than two mixed integer programming approaches.
Combinatorial Neighborhood Topology Particle Swarm Optimization Algorithm for the Vehicle Routing Problem
Yannis Marinakis, Magdalene Marinaki
One of the main problems in the application of a Particle Swarm Optimization in combinatorial optimization problems, especially in routing type problems like the Traveling Salesman Problem, the
Vehicle Routing Problem, etc., is the fact that the basic equation of the Particle Swarm Optimization algorithm is suitable for continuous optimization problems and the transformation of this
equation in the discrete space may cause loss of information and may simultaneously need a large number of iterations and the addition of a powerful local search algorithm in order to find an
optimum solution. In this paper, we propose a different way to calculate the position of each particle which will not lead to any loss of information and will speed up the whole procedure. This was
achieved by replacing the equation of positions with a novel procedure that includes a Path Relinking Strategy and a different correspondence of the velocities with the path that will follow each
particle. The algorithm is used for the solution of the Capacitated Vehicle Routing Problem and is tested on two classic sets of benchmark instances from the literature with very good results.
Dynamic Evolutionary Membrane Algorithm in Dynamic Environments
Chuang Liu, Min Han
Several problems that we face in the real world are dynamic in nature. For solving these problems, a novel dynamic evolutionary algorithm based on membrane computing is proposed. In this paper, the
partitioning strategy is employed to divide the search space to improve the search efficiency of the algorithm. Furthermore, the four kinds of evolutionary rules are introduced to maintain the
diversity of solutions found by the proposed algorithm. The performance of the proposed algorithm has been evaluated over the standard moving peaks benchmark. The simulation results indicate that the
proposed algorithm is feasible and effective for solving dynamic optimization problems.
From Sequential to Parallel Local Search for SAT
Alejandro Arbelaez, Philippe Codognet
In the domain of propositional Satisfiability Problem (SAT), parallel portfolio-based algorithms have become a standard methodology for both complete and incomplete solvers. In this methodology
several algorithms explore the search space in parallel, either independently or cooperatively with some communication between the solvers. We conducted a study of the scalability of several SAT
solvers in different application domains (crafted, verification, quasigroups and random instances) when drastically increasing the number of cores in the portfolio, up to 512 cores. Our experiments
show that on different problem families the behaviors of different solvers vary greatly. We present an empirical study that suggests that the best sequential solver is not necessarily the one with the
overall best parallel speedup.
Generalizing Hyper-heuristics via Apprenticeship Learning
Shahriar Asta, Ender Ozcan, Andrew J. Parkes, A. Sima Etaner-Uyar
An apprenticeship-learning-based technique is used as a hyper-heuristic to generate heuristics for an online combinatorial problem. It observes and learns from the actions of a known-expert heuristic
on small instances, but has the advantage of producing a general heuristic that works well on other larger instances. Specifically, we generate heuristic policies for online bin packing problem by
using expert near-optimal policies produced by a hyper-heuristic on small instances, where learning is fast. The "expert" is a policy matrix that defines an index policy, and the apprenticeship
learning is based on observation of the action of the expert policy together with a range of features of the bin being considered, and then applying a k-means classification. We show that the
generated policy often performs better than the standard best-fit heuristic even when applied to instances much larger than the training set.
High-Order Sequence Entropies for Measuring Population Diversity in the Traveling Salesman Problem
Yuichi Nagata, Isao Ono
We propose two entropy-based diversity measures for evaluating population diversity in a genetic algorithm (GA) applied to the traveling salesman problem (TSP). In contrast to a commonly used
entropy-based diversity measure, the proposed ones take into account high-order dependencies between the elements of individuals in the population. More precisely, the proposed ones capture
dependencies in the sequences of up to $m+1$ vertices included in the population (tours), whereas the commonly used one is the special case of the proposed ones with m=1. We demonstrate that the
proposed entropy-based diversity measures with appropriate values of $m$ evaluate population diversity more appropriately than does the commonly used one.
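For intuition, the commonly used m = 1 measure amounts to a Shannon entropy over the frequencies of tour edges in the population. A sketch of that baseline (my own illustration, not the authors' higher-order implementation):

```python
import math
from collections import Counter

def edge_entropy(population, n):
    """Shannon entropy of the undirected-edge distribution over all tours.
    population: list of tours, each a permutation of range(n)."""
    counts = Counter()
    for tour in population:
        for i in range(n):
            counts[frozenset((tour[i], tour[(i + 1) % n]))] += 1
    total = sum(counts.values())
    return -sum(c / total * math.log(c / total) for c in counts.values())

# A population of identical tours has minimal entropy for its edge set;
# mixing in a structurally different tour raises it.
```

The proposed high-order measures generalize this idea by counting sequences of up to m + 1 vertices instead of single edges.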
Investigating Monte-Carlo Methods on the Weak Schur Problem
Shalom Eliahou, Cyril Fonlupt, Jean Fromentin, Virginie Marion-Poty, Denis Robilliard, Fabien Teytaud
Nested Monte-Carlo Search (NMC) and Nested Rollout Policy Adaptation (NRPA) are Monte-Carlo tree search algorithms that have proved their efficiency at solving one-player game problems, such as
morpion solitaire or sudoku 16x16, showing that these heuristics could potentially be applied to constraint problems. In the field of Ramsey theory, the weak Schur number WS(k) is the largest integer
n for which there exists a partition into k subsets of the integers [1,n] such that there is no x < y < z all in the same subset with x + y = z. Several studies have tackled the search for better
lower bounds for the Weak Schur numbers WS(k), k <= 4. In this paper we investigate this problem using NMC and NRPA, and obtain a new lower bound for WS(6), namely WS(6) >= 582.
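The property that defines such a partition is cheap to verify. A minimal checker (my own sketch, not the paper's code); since x + y = z with x < y forces z > y, testing x < y suffices:

```python
def is_weakly_sum_free(partition):
    """True if no subset contains x < y < z with x + y = z.
    partition: iterable of sets that cover the integers [1, n]."""
    for part in partition:
        s = set(part)
        for x in s:
            for y in s:
                if x < y and x + y in s:
                    return False
    return True

# The classic witness showing WS(2) >= 8:
ok = is_weakly_sum_free([{1, 2, 4, 8}, {3, 5, 6, 7}])
```

Extending either subset with 9 breaks the property (1 + 8 = 9), which is why search methods such as NMC and NRPA are needed to push the bounds for larger k.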
Multi-Objective AI Planning: Comparing Aggregation and Pareto Approaches
Mostepha R. Khouadjia, Marc Schoenauer, Vincent Vidal, Johann Dreo, Pierre Saveant
Most real-world Planning problems are multi-objective, trying to minimize both the makespan of the solution plan, and some cost of the actions involved in the plan. But most, if not all existing
approaches are based on single-objective planners, and use an aggregation of the objectives to remain in the single-objective context. Divide-and-Evolve is an evolutionary planner that won the
temporal deterministic satisficing track at the last International Planning Competitions (IPC). Like all Evolutionary Algorithms (EA), it can easily be turned into a Pareto-based Multi-Objective EA.
It is however important to validate the resulting algorithm by comparing it with the aggregation approach: this is the goal of this paper. The comparative experiments on a recently proposed
benchmark set that are reported here demonstrate the usefulness of going Pareto-based in AI Planning.
Predicting Genetic Algorithm Performance on the Vehicle Routing Problem Using Information Theoretic Landscape Measures
Mario Ventresca, Beatrice Ombuki-Berman, Andrew Runka
In this paper we examine the predictability of genetic algorithm (GA) performance using information-theoretic fitness landscape measures. The outcome of a GA is largely based on the choice of search
operator, problem representation and tunable parameters (crossover and mutation rates, etc). In particular, given a problem representation the choice of search operator will determine, along with the
fitness function, the structure of the landscape that the GA will search upon. Statistical and information theoretic measures have been proposed that aim to quantify properties (ruggedness,
smoothness, etc) of this landscape. In this paper we concentrate on the utility of information theoretic measures to predict algorithm output for various instances of the capacitated and
time-windowed vehicle routing problem. Using a clustering-based approach we identify similar landscape structures within these problems and propose to compare GA results to these clusters using
performance profiles. These results highlight the potential for predicting GA performance, and providing insight into self-configurable search operator design.
Single Line Train Scheduling with ACO
Marc Reimann, Jose Eugenio Leal
In this paper we study a train scheduling problem on a single line that may be traversed in both directions by trains with different priorities travelling with different speeds. We propose an ACO
approach to provide decision support for tackling this problem. Our results show the strong performance of ACO when compared to optimal solutions provided by CPLEX for small instances as well as to
other heuristics on larger instances.
Solving Clique Covering in Very Large Sparse Random Graphs by a Technique Based on k-Fixed Coloring Tabu Search
David Chalupa
We propose a technique for solving the k-fixed variant of the clique covering problem (k-CCP), where the aim is to determine, whether a graph can be divided into at most k non-overlapping cliques.
The approach is based on labeling of the vertices with k available labels and minimizing the number of non-adjacent pairs of vertices with the same label. This is an inverse strategy to k-fixed graph
coloring, similar to a tabu search algorithm TabuCol. Thus, we call our method TabuCol-CCP. The technique allowed us to improve the best known results of specialized heuristics for CCP on very large
sparse random graphs. Experiments also show promise in scalability, since a large dense graph does not have to be stored. In addition, we show that the Gamma function, which is used to evaluate a
solution from the neighborhood in graph coloring in O(1) time, can be used without modification to do the same in k-CCP. For sparse graphs, direct use of Gamma allows a significant decrease in the space
complexity of TabuCol-CCP to O(|E|), with recalculation of fitness possible with small overhead in O(log deg(v)) time, where deg(v) is the degree of the vertex being relabeled.
Solving the Virtual Network Mapping Problem with Construction Heuristics, Local Search and Variable Neighborhood Descent
Johannes Infuehr, Guenther R. Raidl
The Virtual Network Mapping Problem arises in the context of Future Internet research. Multiple virtual networks with different characteristics are defined to suit specific applications. These
virtual networks, with all of the resources they require, need to be realized in one physical network in a most cost effective way. Two properties make this problem challenging: Already finding any
valid mapping of all virtual networks into the physical network without exceeding the resource capacities is NP-hard, and the problem consists of two strongly dependent stages as the implementation
of a virtual network's connections can only be decided once the locations of the virtual nodes in the physical network are fixed. In this work we introduce various construction heuristics, Local
Search and Variable Neighborhood Descent approaches and perform an extensive computational study to evaluate the strengths and weaknesses of each proposed solution method.
The Generate-and-Solve Framework Revisited: Generating by Simulated Annealing
Rommel D. Saraiva, Napoleao V. Nepomuceno, Placido R. Pinheiro
The Generate-and-Solve is a hybrid framework to cope with hard combinatorial optimization problems by artificially reducing the search space of solutions. In this framework, a metaheuristic engine
works as a generator of reduced instances of the problem. These instances, in turn, can be more easily handled by an exact solver to provide a feasible (optimal) solution to the original problem.
This approach has commonly employed genetic algorithms and it has been particularly effective in dealing with cutting and packing problems. In this paper, we present an instantiation of the framework
for tackling the constrained two-dimensional non-guillotine cutting problem and the container loading problem using a simulated annealing generator. We conducted computational experiments on a set of
difficult benchmark instances. Results show that the simulated annealing implementation outperforms previous versions of the Generate-and-Solve framework. In addition, the framework is shown to be
competitive with current state-of-the-art approaches to solve the problems studied here. | {"url":"http://www.kevinsim.co.uk/evostar2013/cfpEvoCOP.html","timestamp":"2014-04-17T01:47:48Z","content_type":null,"content_length":"42378","record_id":"<urn:uuid:62fb603b-0aa0-4777-ae57-5eb6ec9ed464>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00076-ip-10-147-4-33.ec2.internal.warc.gz"} |
qu-quadratic eqn.
January 17th 2010, 09:35 AM #1
Find the value of p so that the equation x^2+10x+21=0 and x^2+9x+p=0 may have a common root. Find also the equation formed by the other root.
Observe that $f(x) = x^2+10x+21 = (x+3)(x+7)$. Hence roots of $f(x) = 0$ are $x=-3$ and $x=-7$.
Observe that roots of $g(x) = x^2+9x+p =0$ are given by $x_0 = \frac{-9\pm \sqrt{81-4p}}{2}$.
Thus if you find the values of $p$ where $x_0 = -3$ or $x_0 = -7$ you're done.
Sorry. This can be done easier ;p.
Since you must find p such that $(x-a)(x+3) =x^2+(3-a)x-3a = x^2+9x+p$. Observe the only possibility is $a = -\frac{p}{3}$.
Thus now we must find p such that: $x^2+(3+\frac{p}{3})x+ p= x^2+9x+p$. This gives $p = 18$
Now you can do the same trick for $f(x) = (x-a)(x+7)$ to find another value of p
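To close out the original question, including the "equation formed by the other roots", here is a quick numeric check (my own sketch). It uses the fact that the roots of a monic quadratic x^2 + bx + c sum to -b and multiply to c:

```python
# Roots of x^2 + 10x + 21 = (x + 3)(x + 7) are -3 and -7.
results = {}
for r in (-3, -7):                   # candidate common root
    p = -(r * r + 9 * r)             # force g(r) = r^2 + 9r + p = 0
    other_g = -9 - r                 # roots of x^2 + 9x + p sum to -9
    other_f = -10 - r                # roots of x^2 + 10x + 21 sum to -10
    b, c = -(other_f + other_g), other_f * other_g
    results[p] = (b, c)              # x^2 + b*x + c = 0 has the two other roots

# p = 18 (common root -3) gives x^2 + 13x + 42 = 0;
# p = 14 (common root -7) gives x^2 + 5x + 6 = 0.
```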
Marietta, GA Algebra 2 Tutor
Find a Marietta, GA Algebra 2 Tutor
I have six years of college-level teaching experience in mathematics. Experience includes teaching elementary probability and statistics, college algebra, pre-calculus, and business calculus. I
assist students in solving problems and clarifying concepts, thus dramatically increasing their grades and performance in the subjects tutored.
16 Subjects: including algebra 2, calculus, geometry, statistics
My goal is to provide students with confidence and competence in mathematics and the SAT. To me, confidence is just as important as knowing the material. My most recent positions as a full time
Math Teacher in Gwinnett County, Math Instructor with Sylvan Learning Group and SAT Instructor with Pro Tutoring allow for a hands-on approach to student enrichment.
12 Subjects: including algebra 2, geometry, algebra 1, SAT math
...I know the situations that can help or derail a person of any age in attaining their goals. The ISEE is very similar to the SSAT, except that the ISEE is for entrance into
private schools whereas the SSAT is given in public schools, usually for magnet programs. The ISEE has different tests for different grade levels.
40 Subjects: including algebra 2, English, reading, calculus
...In addition I have worked in a prosthetic laboratory for two years designing new tools for persons with limb loss. Math and science have opened many doors for me and they can do the same for
you! Differential Equations is an intimidating and potentially frustrating course. The course is usually taken by engineering students and taught by mathematics professors.
15 Subjects: including algebra 2, physics, calculus, trigonometry
...I feel that every student is different in what builds their confidence in the material, so I try to figure out what that is as we work together. I also ask for some information on exactly what
topics the student is working on prior to the first meeting and occasionally for future sessions to ensur...
9 Subjects: including algebra 2, chemistry, calculus, geometry
Geometry, Topology and Physics, Second Edition (Graduate Student Series in Physics)
Geometry, Topology and Physics offers an introduction to the ideas and techniques of differential geometry and topology. The book starts with a brief survey of quantum field theory, gauge theory,
general relativity, vector spaces, and topology, and then develops more elaborate concepts of topology and geometry, including fiber bundles, characteristic classes, and index theorems.
Textbook-Integrated Guide to Educational Resources
Journal Articles: 65 results
A Laboratory Experiment Using Molecular Models for an Introductory Chemistry Class Shahrokh Ghaffari
Presents a new approach to using molecular models in teaching general chemistry concepts. Students construct molecular models and use them to balance simple chemical equations, demonstrate the law
of conservation of mass, and discover the relationship between the mole and molecules and atoms.
Ghaffari, Shahrokh. J. Chem. Educ. 2006, 83, 1182.
Molecular Modeling | Stoichiometry | Student-Centered Learning
Interactive Demonstrations for Mole Ratios and Limiting Reagents Crystal Wood and Bryan Breyfogle
The objective of this study was to develop interactive lecture demonstrations based on conceptual-change learning theory. Experimental instruction was designed for an introductory chemistry course
for nonmajors to address misconceptions related to mole ratios and limiting reagents.
Wood, Crystal; Breyfogle, Bryan. J. Chem. Educ. 2006, 83, 741.
Learning Theories | Reactions | Stoichiometry | Student-Centered Learning
Procedure for Decomposing a Redox Reaction into Half-Reactions Ilie Fishtik and Ladislav H. Berka
The principle of stoichiometric uniqueness provides a simple algorithm to check whether a simple redox reaction may be uniquely decomposed into half-reactions in a single way. For complex redox
reactions the approach permits a complete enumeration of a finite and unique number of ways a redox reaction may be decomposed into half-reactions. Several examples are given.
Fishtik, Ilie; Berka, Ladislav H. J. Chem. Educ. 2005, 82, 553.
Stoichiometry | Equilibrium | Electrochemistry | Oxidation / Reduction | Reactions
Evaluating Students' Conceptual Understanding of Balanced Equations and Stoichiometric Ratios Using a Particulate Drawing Michael J. Sanger
A total of 156 students were asked to provide free-response balanced chemical equations for a classic multiple-choice particulate-drawing question first used by Nurrenbern and Pickering. The
balanced equations and the number of students providing each equation are reported in this study. The most common student errors included a confusion between the concepts of subscripts and
coefficients and including unreacted chemical species in the equation.
Sanger, Michael J. J. Chem. Educ. 2005, 82, 131.
Stoichiometry | Kinetic-Molecular Theory
Using Knowledge Space Theory To Assess Student Understanding of Stoichiometry Ramesh D. Arasasingham, Mare Taagepera, Frank Potter, and Stacy Lonjers
Using the concept of stoichiometry we examined the ability of beginning college chemistry students to make connections among the molecular, symbolic, and graphical representations of chemical
phenomena, as well as to conceptualize, visualize, and solve numerical problems. Students took a test designed to follow conceptual development; we then analyzed student responses and the
connectivities of their responses, or the cognitive organization of the material or thinking patterns, applying knowledge space theory (KST). The results reveal that the students' logical frameworks
of conceptual understanding were very weak and lacked an integrated understanding of some of the fundamental aspects of chemical reactivity.
Arasasingham, Ramesh D.; Taagepera, Mare; Potter, Frank; Lonjers, Stacy. J. Chem. Educ. 2004, 81, 1517.
Learning Theories | Stoichiometry
The Decomposition of Zinc Carbonate: Using Stoichiometry To Choose between Chemical Formulas Stephen DeMeo
To determine which formula corresponds to a bottle labeled "zinc carbonate", students perform qualitative tests on three of zinc carbonate's decomposition products: zinc oxide, carbon dioxide, and
water. Next students make quantitative measurements to find molar ratios and compare them with the coefficients of the balanced chemical equations. This allows the correct formula of zinc carbonate
to be deduced.
DeMeo, Stephen. J. Chem. Educ. 2004, 81, 119.
Gases | Stoichiometry | Quantitative Analysis
Alka-Seltzer Fizzing—Determination of Percent by Mass of NaHCO3 in Alka-Seltzer Tablets. An Undergraduate General Chemistry Experiment Yueh-Huey Chen and Jing-Fun Yaung
Lab activity that introduces the concept of a limiting reactant by incrementally increasing the amount of vinegar added to an Alka Seltzer tablet.
Chen, Yueh-Huey; Yaung, Jing-Fun. J. Chem. Educ. 2002, 79, 848.
Acids / Bases | Quantitative Analysis
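The limiting-reactant calculation behind the experiment is a one-line mole comparison, since NaHCO3 reacts 1:1 with the acetic acid in vinegar. The masses and rounded molar masses below are illustrative assumptions, not values from the article:

```python
M_NAHCO3 = 84.0  # g/mol, rounded
M_ACETIC = 60.0  # g/mol, rounded

def limiting_reagent(mass_nahco3_g, mass_acetic_g):
    """NaHCO3 + CH3COOH -> CH3COONa + H2O + CO2 (1:1 stoichiometry).
    Return the limiting reagent's name and its mole amount."""
    n1 = mass_nahco3_g / M_NAHCO3
    n2 = mass_acetic_g / M_ACETIC
    return ("NaHCO3", n1) if n1 < n2 else ("CH3COOH", n2)

# With excess vinegar, the tablet's bicarbonate limits the CO2 produced;
# incrementally adding vinegar, as in the lab, shifts which reagent limits.
```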
Analysis of an Oxygen Bleach: A Redox Titration Lab Christine L. Copper and Edward Koubek
Students balance the reaction of H2O2 and MnO4 in two different ways (one assuming that H2O2 is the oxygen source and a second assuming that MnO4 is the oxygen source), determine which of these
balanced equations has the correct stoichiometry by titrating a standard H2O2 solution with KMnO4, and use the correct balanced equation to determine the mass percent of H2O2 in a commercially
available bleach solution.
Copper, Christine L.; Koubek, Edward. J. Chem. Educ. 2001, 78, 652.
Quantitative Analysis | Oxidation / Reduction | Stoichiometry | Titration / Volumetric Analysis | Consumer Chemistry
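Once the correct stoichiometry is established (2 mol MnO4- oxidizes 5 mol H2O2 in acidic solution), the mass-percent calculation is direct. A sketch with hypothetical titration numbers, not data from the article:

```python
M_H2O2 = 34.0  # g/mol, rounded

def mass_percent_h2o2(v_kmno4_l, c_kmno4_m, sample_mass_g):
    """Mass percent of H2O2 in a sample from a KMnO4 titration.
    Uses 5 mol H2O2 per 2 mol MnO4- (acidic solution)."""
    n_mno4 = v_kmno4_l * c_kmno4_m       # moles of permanganate delivered
    n_h2o2 = n_mno4 * 5 / 2              # moles of peroxide consumed
    return 100 * n_h2o2 * M_H2O2 / sample_mass_g

# e.g. 20.0 mL of 0.100 M KMnO4 for a 5.00 g sample -> 3.4 % H2O2
```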
Using Games To Teach Chemistry. 2. CHeMoVEr Board Game Jeanne V. Russell
A board game similar to Sorry or Parcheesi was developed. Students must answer chemistry questions correctly to move their game piece around the board. Card decks contain questions on balancing
equations, identifying the types of equations, and predicting products from given reactants.
Russell, Jeanne V. J. Chem. Educ. 1999, 76, 487.
Stoichiometry | Nomenclature / Units / Symbols
Let's Dot Our I's and Cross Our T's Leenson, Ilya A.
Leenson, Ilya A. J. Chem. Educ. 1998, 75, 1088.
Stoichiometry | Oxidation / Reduction
A Challenging Balance P Glaister
A difficult-to-balance equation and how its solution might be approached.
Glaister, P. J. Chem. Educ. 1997, 74, 1368.
Redox Balancing without Puzzling Marten J. ten Hoor
Once it has been established by experiment that the given reactants can indeed be converted into the given products, chemistry has done its job. Balancing the equation of the reaction is a matter of
mathematics only.
ten Hoor, Marten J. J. Chem. Educ. 1997, 74, 1367.
Stoichiometry | Oxidation / Reduction
A New and General Method for Balancing Chemical Equations by Inspections Chunshi Guo
Any chemical equation, no matter how complicated, can be balanced by inspection. In fact, inspection is often the quickest and easiest way to balance a complex equation. The method described here
involves the use of "linked sets". It does not require the use of oxidation numbers or the splitting of equations into "half reactions". It can be used to balance all kinds of chemical equations,
including ionic equations.
Guo, Chunshi. J. Chem. Educ. 1997, 74, 1365.
Balancing Chemical Equations by Inspection Zoltán Tóth
The paper shows that balancing chemical equations by inspection is not a trial-and-error process, because a systematic procedure can be suggested for balancing simple and more complicated chemical
equations without oxidation numbers or systems of equations with several unknowns. The proposed method is suitable for balancing all chemical equations, including ionic equations, which have a
single unique solution.
Toth, Zoltan. J. Chem. Educ. 1997, 74, 1363.
On Balancing Chemical Equations: Past and Present William C. Herndon
The main purposes of this paper are to give a listing of selected papers on balancing chemical equations that may be useful to chemistry teachers and potential authors as background material, and to
provide some comparisons of methods. The selection criteria for the references were deliberately broad, in order to include a wide variety of topics and seminal historical citations, and the
references are annotated to increase their usefulness.
Herndon, William C. J. Chem. Educ. 1997, 74, 1359.
Letter to the Editor about Letter to the Editor "Redox Challenges" from David M. Hart and Response from Roland Stout (J. Chem. Educ. 1996, 73, A226-7) Andrzej Sobkowiak
Examples of a variety of redox equations.
Sobkowiak, Andrzej. J. Chem. Educ. 1997, 74, 1256.
Stoichiometry |
Reactions |
Oxidation / Reduction
Letter to the Editor about "Redox Challenges" by Roland Stout (J. Chem. Educ. 1995, 72, 1125) Rodger S. Nelson
Solution for balancing a difficult equation using the conservation of mass.
Nelson, Rodger S. J. Chem. Educ. 1997, 74, 1256.
An Analysis of the Algebraic Method for Balancing Chemical Reactions John A. Olson
A new aspect of this treatment is the mathematical formulation of a third condition involving a balance between oxidation and reduction. The treatment begins with the three general conditions that a
balanced chemical reaction must satisfy. These conditions are then expressed in mathematical form that enables the stoichiometric coefficients to be determined.
Olson, John A. . J. Chem. Educ. 1997, 74, 538.
Oxidation / Reduction |
Redox Challenges (the author replies) Stout, Roland
Algebraic solution to balancing a redox equation.
Stout, Roland J. Chem. Educ. 1996, 73, A227.
Stoichiometry |
Oxidation / Reduction |
Oxidation State
Redox Challenges (2) Zaugg, Noel S.
Algebraic solution to balancing a redox equation.
Zaugg, Noel S. J. Chem. Educ. 1996, 73, A226.
Stoichiometry |
Oxidation / Reduction |
Oxidation State
Redox Challenges (1) Hart, David M.
Algebraic solution to balancing a redox equation.
Hart, David M. J. Chem. Educ. 1996, 73, A226.
Stoichiometry |
Oxidation / Reduction |
Oxidation State
How Do I Balance Thee? ... Let Me Count the Ways! Lawrence A. Ferguson
The author suggests that this would be a good equation for students to try to balance by trial and error because it has two different sets of coefficients that are not multiples of each other.
Ferguson, Lawrence A. J. Chem. Educ. 1996, 73, 1129.
Redox Challenges: Good Times for Puzzle Fanatics Roland Stout
Three difficult to balance redox equations.
Stout, Roland. J. Chem. Educ. 1995, 72, 1125.
Reactions |
Stoichiometry |
Oxidation / Reduction |
Enrichment / Review Materials
The Relationship between the Number of Elements and the Number of Independent Equations of Elemental Balance in Inorganic Chemical Equations R. Subramanian, N.K. Goh, and L. S. Chia
The criterion for determining whether a chemical equation can be balanced fully by the algebraic technique and its application.
Subramaniam, R.; Goh, N. K.; Chia, L. S. J. Chem. Educ. 1995, 72, 894.
Stoichiometry |
REACT: Exploring Practical Thermodynamic and Equilibrium Calculations Ramette, Richard W.
Description of REACT software to balance complicated equations; determine thermodynamic data for all reactants and products; calculate changes in free energy, enthalpy, and entropy for a reaction;
and find equilibrium conditions for a reaction.
Ramette, Richard W. J. Chem. Educ. 1995, 72, 240.
Stoichiometry |
Equilibrium |
Thermodynamics |
Balancing a chemical equation: What does it mean? Filgueiras, Carlos A
Students were puzzled by the idea that one chemical equation could be balanced in several different ways. This led to a fruitful discussion on how exact a science chemistry really is.
Filgueiras, Carlos A J. Chem. Educ. 1992, 69, 276.
Stoichiometry |
Oxidation / Reduction
Chemical equations are actually matrix equations Alberty, Robert A.
Chemists tend to think that chemical equations are unique to chemistry and they are not used to thinking of chemical equations as the mathematical equations they in fact are. The objective of this
paper is to illustrate the mathematical significance of chemical equations.
Alberty, Robert A. J. Chem. Educ. 1991, 68, 984.
Stoichiometry |
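Alberty's observation can be made concrete: the balancing coefficients of a reaction form the nullspace of its element-by-species composition matrix. The sketch below is a generic illustration using SymPy, not code from the article; the example reaction (propane combustion) is chosen only for illustration.

```python
import math
from sympy import Matrix

# Balancing C3H8 + O2 -> CO2 + H2O as a matrix (nullspace) problem.
# Rows are elements (C, H, O); columns are species, products negated.
A = Matrix([
    [3, 0, -1,  0],   # C
    [8, 0,  0, -2],   # H
    [0, 2, -2, -1],   # O
])

v = A.nullspace()[0]                            # 1-D nullspace: unique balancing
coeffs = v * math.lcm(*[int(t.q) for t in v])   # clear denominators
# coeffs reads: 1 C3H8 + 5 O2 -> 3 CO2 + 4 H2O
```

A one-dimensional nullspace means the balancing is unique up to scale; a higher-dimensional nullspace corresponds to equations that can be balanced in several independent ways, as discussed in other entries in this list.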
Nitric oxide leftovers Hornack, Fred M.
This example shows that a stoichiometric problem can be solved in a number of different but equally valid ways.
Hornack, Fred M. J. Chem. Educ. 1990, 67, 496.
Stoichiometry |
Applications of Chemistry
Problem solving and requisite knowledge of chemistry Lythcott, Jean
It is possible for students to produce right answers to chemistry problems without really understanding much of the chemistry involved.
Lythcott, Jean J. Chem. Educ. 1990, 67, 248.
Stoichiometry |
Learning Theories
A question of basic chemical literacy? Missen, Ronald W.; Smith, William R.
The ability to read and write clearly in chemical-equation terms is not as well developed as it should be. The purpose of this "Provocative Opinion" is to draw attention to this problem, and to
suggest specific remedies for its solution.
Missen, Ronald W.; Smith, William R. J. Chem. Educ. 1989, 66, 217.
Chemistry according to ROF (Fee, Richard) Radcliffe, George; Mackenzie, Norma N.
Two reviews on a software package that consists of 68 programs on 17 disks plus an administrative disk geared toward acquainting students with fundamental chemistry content. For instance, acids and
bases, significant figures, electron configuration, chemical structures, bonding, phases, and more.
Radcliffe, George; Mackenzie, Norma N. J. Chem. Educ. 1988, 65, A239.
Chemometrics |
Atomic Properties / Structure |
Equilibrium |
Periodicity / Periodic Table |
Stoichiometry |
Physical Properties |
Acids / Bases |
Covalent Bonding
Solving limiting reagent problems (the author replies) Kalantar, A. H.
Thanks for clarification and suggestion.
Kalantar, A. H. J. Chem. Educ. 1987, 64, 472.
Stoichiometry |
Solving limiting reagent problems Skovlin, Dean O.
Uncertainty in the meaning of the term X.
Skovlin, Dean O. J. Chem. Educ. 1987, 64, 472.
Stoichiometry |
A new method to balance chemical equations Garcia, Arcesio
A simple method, applicable to any kind of reaction, that does not require the knowledge of oxidation numbers.
Garcia, Arcesio J. Chem. Educ. 1987, 64, 247.
Stoichiometry |
Oxidation State |
The chemistry tutor (Rinehart, F.P.) Watkins, Stanley R.; Krugh, William D.
Two reviews of a two-disk package that is designed to help students master the essential skills of equation balancing and of stoichiometric and limiting-reagent calculations.
Watkins, Stanley R.; Krugh, William D. J. Chem. Educ. 1986, 63, A206.
Chemistry: Stoichiometry and Chemistry: Acids and Bases ( Frazin, J. and partners) Bendall, Victor I.; Roe, Robert, Jr.
Two reviews of a software package that contains drill and practice programs that are suitable for beginning students of chemistry.
Bendall, Victor I.; Roe, Robert, Jr. J. Chem. Educ. 1986, 63, A204.
Stoichiometry |
Acids / Bases
A simpler method of chemical reaction balancing Harjadi, W.
The author adds to some other approaches that appeared in this Journal that dealt with balancing a rather large chemical equation.
Harjadi, W. J. Chem. Educ. 1986, 63, 978.
What can we do about Sue: A case study of competence Herron, J. Dudley; Greenbowe, Thomas J.
A case study of a "successful" student who is representative of other successful students that are not prepared to solve novel problems.
Herron, J. Dudley; Greenbowe, Thomas J. J. Chem. Educ. 1986, 63, 528.
Stoichiometry |
Learning Theories
On writing equations Campbell, J.A.
The author presents a very direct approach to writing complicated equations without using a matrix approach.
Campbell, J.A. J. Chem. Educ. 1986, 63, 63.
Stoichiometry |
How should equation balancing be taught? Porter, Spencer K.
Suggestions for balancing chemical equations.
Porter, Spencer K. J. Chem. Educ. 1985, 62, 507.
Note to: method for balancing redox reactions containing hydroxyl ions Stark, Franz M.
A much simpler way of balancing the equations presented.
Stark, Franz M. J. Chem. Educ. 1984, 61, 476.
Stoichiometry |
Oxidation / Reduction
Balancing complex chemical equations using a hand-held calculator Alberty, Robert A.
37. Bits and pieces, 14. This article is primarily concerned with the question: If certain specified chemical species are involved in a reaction, what are the stoichiometric coefficients?
Alberty, Robert A. J. Chem. Educ. 1983, 60, 102.
Chemical equation balancing: A general method which is quick, simple, and has unexpected applications Blakley, G. R.
Using matrices to solve mathematical equations and balance chemical equations. From "The Goals of General Chemistry - A Symposium."
Blakley, G. R. J. Chem. Educ. 1982, 59, 728.
Stoichiometry |
Balancing chemical equations with a calculator Kennedy, John H.
A straightforward mechanical approach that leads quickly to a properly balanced equation.
Kennedy, John H. J. Chem. Educ. 1982, 59, 523.
Balancing complex redox equations by inspection Kolb, Doris
A step-by-step walk through of the inspection process for balancing equations.
Kolb, Doris J. Chem. Educ. 1981, 58, 642.
Stoichiometry |
Pressure and the exploding beverage container Perkins, Robert R.
The question in this article is an extension of exploding pop bottles to illustrate the balancing of a chemical equation, enthalpy, stoichiometry, and vapor pressure calculations, and the use of the
Ideal Gas Equation. The question is aimed at the first-year level student.
Perkins, Robert R. J. Chem. Educ. 1981, 58, 363.
Stoichiometry |
Gases |
Thermodynamics |
More on balancing redox equations Kolb, Doris
Balancing atypical redox equations.
Kolb, Doris J. Chem. Educ. 1979, 56, 181.
Stoichiometry |
Oxidation / Reduction
Participatory lecture demonstrations Battino, Rubin
Examples of participatory lecture demonstrations in chromatography, chemical kinetics, balancing equations, the gas laws, the kinetic-molecular theory, Henry's law, electronic energy levels in
atoms, translational, vibrational, and rotational energies of molecules, and organic chemistry.
Battino, Rubin J. Chem. Educ. 1979, 56, 39.
Chromatography |
Kinetic-Molecular Theory |
Kinetics |
Stoichiometry |
Gases |
Atomic Properties / Structure |
Molecular Properties / Structure
Balancing an atypical redox equation Carrano, S. A.
The author presents a particularly tricky redox problem and walks readers through a solution.
Carrano, S. A. J. Chem. Educ. 1978, 55, 382.
Chemometrics |
Oxidation / Reduction |
The chemical equation. Part I: Simple reactions Kolb, Doris
A chemical equation is often misunderstood by students as an "equation" that is used in chemistry. However, a more accurate description is that it is a concise statement describing a chemical
reaction expressed in chemical symbolism.
Kolb, Doris J. Chem. Educ. 1978, 55, 184.
Stoichiometry |
Chemometrics |
Nomenclature / Units / Symbols |
A computer program designed to balance inorganic chemical equations Rosen, Allen I.
A BASIC program designed to check the correct balancing of chemical equations.
Rosen, Allen I. J. Chem. Educ. 1977, 54, 704.
A balancing act Schug, Kenneth
A method for teaching introductory students how to balance chemical equations.
Schug, Kenneth J. Chem. Educ. 1977, 54, 370.
The paper clip mole - An undergraduate experiment Cassen, T.
Paper clips are used to represent atoms and demonstrate the concept of atomic weight.
Cassen, T. J. Chem. Educ. 1975, 52, 386.
Mole concept and limiting reagent in the laboratory Maio, Frances A.
The author provides a stepwise approach to problems in limiting reagents and the mole concepts.
Maio, Frances A. J. Chem. Educ. 1971, 48, 155.
Grading the copper sulfide experiment Novick, Seymour
The author recommends a more liberal analysis in grading the copper sulfide experiment.
Novick, Seymour J. Chem. Educ. 1970, 47, 785.
Stoichiometry |
Balancing equations (the author responds) Young, Jay A.
Recognizes the referenced letter.
Young, Jay A. J. Chem. Educ. 1970, 47, 785.
Balancing equations Missen, R. W.
The author provides an alternative answer to the question in the referenced article.
Missen, R. W. J. Chem. Educ. 1970, 47, 785.
Application of diophantine equations to problems in chemistry Crocker, Roger
The mathematical method of diophantine equations is shown to apply to two problems in chemistry: the balancing of chemical equations, and determining the molecular formula of a compound.
Crocker, Roger J. Chem. Educ. 1968, 45, 731.
Mathematics / Symbolic Mathematics |
Balancing ionic equations by the method of undetermined coefficients Haas, Rudy; Gayer, Karl H.
Describes a mathematical method for balancing chemical equations.
Haas, Rudy; Gayer, Karl H. J. Chem. Educ. 1962, 39, 537.
Stoichiometry |
Redox revisited Lockwood, Karl L.
Examines issues regarding instruction in oxidation-reduction chemistry.
Lockwood, Karl L. J. Chem. Educ. 1961, 38, 326.
Oxidation / Reduction |
Oxidation State |
Letters to the editor Perkins, Alfred J.
A discussion of balancing redox equations in response to an earlier article in the Journal.
Perkins, Alfred J. J. Chem. Educ. 1959, 36, 474.
Stoichiometry |
Oxidation / Reduction
Writing oxidation-reduction equations: A review of textbook materials Yalman, Richard G.
The purpose of this paper is to review those parts of a number of textbooks containing aids or suggestions to help students balance oxidation-reduction reactions.
Yalman, Richard G. J. Chem. Educ. 1959, 36, 215.
Stoichiometry |
Oxidation / Reduction |
Oxidation State
Balancing organic redox equations Burrell, Harold P. C.
This paper presents a method for balancing organic redox equations based on the study of structural formulas and an artificial device - the use of hypothetical free radicals.
Burrell, Harold P. C. J. Chem. Educ. 1959, 36, 77.
Stoichiometry |
Oxidation / Reduction |
Free Radicals
Material balances and redox equations Bennett, George W.
It is the purpose of this paper to remind teachers of a third method of balancing redox equations that does not depend on rule-of-thumb empiricism but relies on the conservation of matter.
Bennett, George W. J. Chem. Educ. 1954, 31, 324.
Stoichiometry |
Oxidation / Reduction |
Oxidation State
Otis Coe Johnson and redox equations Bennett, George W.
It is the purpose of this paper to point out what is basic verity and what is empiricism in Johnson's method for balancing oxidation-reduction equations.
Bennett, George W. J. Chem. Educ. 1954, 31, 157.
Oxidation / Reduction |
Oxidation State |
Vacuum Tube Spice Models - Page 3 - diyAudio
Oh No! Another Pentode Model
In another thread I posted the results of a new pentode model for the GU50 tube, and said that I would start a new topic to discuss this model in detail. I'll warn everyone right up front that this
new model has a lot of parameters that need to be calculated. However, the model has been designed in such a way that there is minimum interaction between the parameters, so that they can be
calculated in small independent sets.
Over the past year I've been working on a spreadsheet to do loadline analysis of output tubes, and have been using the Koren model which works very well for triodes, but I've had trouble getting an
accurate fit especially near the knee of the plate current curves. Recently, I've been tinkering with an alternative pentode model. When I started on this, I wasn't aware of Ayumi's pentode model. I
had a look at it (using Google to translate from Japanese) and, while I didn't read the entire article in detail, I was impressed with methodology for determining the parameters. However, I was still
a bit disappointed with the way the model fit around the knee of the plate current curve. So, I decided to continue with my project.
Although the Koren model is empirical, it appears that it's theoretically based with some empirical additions. This is generally a good way to do it, because we can be reasonably confident that if we
try to extrapolate the model beyond the original fitting data, the curves won't likely veer out of control.
Unfortunately, there sometimes comes a point where the theoretical model is just too approximate, and piling empirical additions onto it makes it more and more unwieldy. I decided to start with a
clean slate and use a completely empirical model based on very simple math functions. Examining the plate current curves for a pentode, we can approximate the curve with two straight lines: the first, to
the left of the knee with a steep slope (see Line A in Figure 1), and the second, to the right of the knee, with an almost horizontal slope (see Line B in Figure 1). In the transition region, a
function must be provided to blend smoothly from one to the other. For the time being, I'll ignore the effects of changing screen and control grid voltages. I'll assume the values of Ec and Es to be
fixed at the maximum voltage for the published data. That means that we only have to fit a single variable function.
I'll use the data for the GU50 as an example, and assume Ec=0 and Es=250V. I'll refer to these reference voltages as EcRef and EsRef respectively.
For the part of the curve below the knee voltage, a simple straight line will suffice (Line A, Figure 1), and has the form:

Ip = g0*Ep + b0

where g0 is the slope and b0 is the Ip axis intercept. Since Ip must be zero when Ep is zero, b0 must be zero, and can be eliminated from the formula. So the plate current function for the left side
of the curve is simply:

Ip = g0*Ep
g0 is simply the transconductance below the knee, and can be found directly from the plate current curves by picking a point on the curve just before the knee and dividing the current by the
corresponding voltage.
For the GU50 example, the curve starts to bend after Ip goes above 0.160 A (for Ec=0). The plate voltage at this point is about 24 volts, so the slope is 0.16/24. Hence:

g0 = 0.16/24 ≈ 0.00667
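In code the below-knee branch is just the quoted current over the quoted voltage (both values read off the published curve, as in the text above):

```python
# Line A: Ip = g0 * Ep, with g0 read directly off the published curve
# at the point just before the knee (Ec = 0).
ip_knee, ep_knee = 0.160, 24.0
g0 = ip_knee / ep_knee          # transconductance below the knee, in A/V

def left_branch(ep):
    """Plate current in the steep region to the left of the knee."""
    return g0 * ep
```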
For the right side of the curve (Line B, Figure 1), I looked at several different functions, in order of increasing complexity:
- A simple straight line (2 parameters)
- A parabola (3 parameters)
- A square root function (4 parameters)
- A hyperbola (4 parameters)
The choice of which function to use is a compromise between ease of calculation, and how well it fits. A simple straight line may work well in many cases where the plate curves are very straight, and
it is very easy to calculate with nothing more than a ruler to measure slope and intercept, but in the general case there's often too much curvature to the Ip/Ep characteristic. So, I looked at the
next simplest function, a parabola. This requires only one more parameter than the straight line, and there are many online curve fitting calculators that will do the work of determining the 3
coefficients. This gives a better fit than the straight line in most cases. However, it isn't very good when attempting to use the model beyond the range of the fitted data, because the parabolic
curve naturally starts to drop off at high values. The hyperbola and square root function don't suffer from this problem, making them fairly safe for use beyond the range of fitting data. I
eventually chose a hyperbolic function with an additional linear term of this form:
This involves a bit more work to calculate the parameters. Currently, I'm using a solver macro in my spreadsheet to do the fitting work, but I hope to develop a method for calculating them directly.
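A least-squares fit can stand in for the spreadsheet solver. The post's exact hyperbola-plus-line expression appeared as an image, so the four-parameter form below (a*Ep + b + c/(Ep + d)) and the sample data are illustrative assumptions only, not the author's actual function:

```python
import numpy as np
from scipy.optimize import curve_fit

def upper_branch(ep, a, b, c, d):
    # Assumed stand-in: a linear term plus a hyperbola in Ep.
    return a * ep + b + c / (ep + d)

# Synthetic samples of an "upper branch" for illustration only.
ep = np.linspace(100.0, 700.0, 25)
true_params = (5e-5, 0.30, -8.0, 20.0)
ip = upper_branch(ep, *true_params)

# Fit the four parameters from the sampled points.
popt, _ = curve_fit(upper_branch, ep, ip, p0=(1e-4, 0.2, -1.0, 10.0))
```

With real data read off the plate curves, the same curve_fit call replaces the solver macro; only the assumed functional form would need to match the post's.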
For the GU50 example, the solver came up with these values:
In the knee region a method is needed to transition from the nearly vertical line on the left to the nearly horizontal curve on the right. This can be handled with a math function that exhibits an
S-shaped curve. There are numerous examples. The arctan function is one of them, and it is the first one that I tried. However, it wasn't satisfactory. So, I moved on to an exponential form:
where parameter kt1 determines the position of the centre of the transition region, and kt2 determines the width of the transition region.
For the GU50 example, the centre of the transition region (kt1) will be somewhere above 24 volts, so I chose a value of 30 for kt1 and then tried different values of kt2 to see the effect of widening
or narrowing the transition region. (The smaller the value of kt2, the wider the transition region.) The best value turned out to be less than one. I let the solver fine tune these values, and it
came up with:
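The exact exponential expression was posted as an image. One common S-shaped function consistent with the stated roles of kt1 (centre) and kt2 (width) is the logistic curve; this is an assumed stand-in, not necessarily the author's formula:

```python
import math

def transition(ep, kt1, kt2):
    """S-shaped blend factor: ~0 well left of the knee, ~1 well right of it.
    kt1 sets the centre; smaller kt2 gives a wider transition region."""
    return 1.0 / (1.0 + math.exp(-kt2 * (ep - kt1)))
```

A blend such as Ip ≈ (1 - T)*g0*Ep + T*(upper branch) is then one plausible way to combine the two branches smoothly.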
Combining the three functions into one, we get:
Using the above calculated parameters, we get the plate current curve show in Figure 2. The blue data points are as measured off of the manufacturers data sheet, and the red curve is the fitting
function. It's pretty good, but not perfect near the knee.
At this point I decided to let the solver fine tune all of the values simultaneously, and it came up with the following optimized values:
Surprisingly, the solver changed some of the values quite radically, but this gave a very good fit that wouldn't have been possible by manually manipulating the numbers.
Now, you're probably thinking it's a lot of parameters (seven) for a function that still doesn't account for screen voltage and control grid voltage. True enough, but that's the cost of getting a
good fit. The following chart shows how the fitted curve (red line) compares with the published data (blue points). The fit is almost exact. (See Figure 3)
Next we have to account for control and screen grids.
We can see that when the control grid is made more negative, the Ip curve moves closer to the Ep axis, but stays the same shape. It seems reasonable then that the plate current function can simply be
multiplied by a scale factor which is a simple function of Ec. I found a parabolic fit works very well for this. It's simply a matter of picking a fixed value for Ep (say 600 volts for the GU50
example), and then measuring the curves to find Ip for different values of Ec. These current values are divided by the value of Ip when Ec=EcRef. This normalizes the values so that the scale factor
function will return a value of 1 when Ec=EcRef. The resulting data points are then plugged into a curve fitter to get the coefficients for what I will call the control grid reduction function:
Frc = kc1*Ec^2+kc2*Ec+kc3
We're now up to ten parameters, but the good news is that these last three parameters will have no interactions with the first seven parameters, making their calculation reasonably painless.
The curve fitting calculator gave the following values:
The complete set of curves for Es=250V is shown in Figure 4.
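As a sanity check on the normalization, Frc evaluated with the coefficients from the parameter summary at the end of the post returns (almost exactly) 1 at the reference grid voltage, and a small value near cutoff:

```python
# Control-grid reduction function, with the final GU50 coefficients
# quoted in the post's parameter summary.
kc1, kc2, kc3 = 3.3616e-4, 3.6311e-2, 1.0061

def frc(ec):
    """Scale factor applied to the Ec = EcRef (= 0 V) reference curve."""
    return kc1 * ec**2 + kc2 * ec + kc3
```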
To account for screen voltage, we do the same thing we did for the control grid. This time, we hold Ep constant (700V) and Ec constant (0V), and then measure the different values of Ip as we vary Es.
Again this is normalized by dividing the Ip values by the value of Ip at Es=EsRef, and then is fitted to a parabolic curve using a curve fitter. This give the screen reduction function:
Frs = ks1*Es^2+ks2*Es+ks3
We're now up to 13 parameters, but again, these last three will have no interaction with the first 10 parameters so they can be calculated independently. The curve fitter gave the following values:
In this particular example the best fit happens to be a straight line, and so ks1 is zero. However, we can't assume that a straight line fit will work in every case.
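With the summary's coefficients (ks1 = 0, so the fit here is the straight line noted above), Frs evaluates to 1 at the 250 V screen reference, as the normalization requires:

```python
# Screen reduction function with the GU50 coefficients from the
# post's parameter summary.
ks1, ks2, ks3 = 0.0, 4.6688e-3, -0.16667

def frs(es):
    """Scale factor applied to the Es = EsRef (= 250 V) reference curve."""
    return ks1 * es**2 + ks2 * es + ks3
```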
We're almost done, but not quite. When both the control and screen grids are made more negative, the resulting plate current decreases more than the combined Frc and Frs functions will account for.
This is shown on the screen curves of Figure 5. Notice how the fitted curves (red lines) sit too high in the mid region.
Therefore one last function must be included to account for the interaction between screen and control grid. I tried a few functions experimentally and found the following to work reasonably well:
Fsc = 1/(1-(kcs*(Ec-EcRef)*(1-Es/EsRef)))
where EcRef and EsRef are the reference values of control and screen grid voltages mentioned earlier, and kcs is a value between 0 and 1 that is manually adjusted for best fit.
For the GU50 example I manually adjusted kcs for the best fit. The optimum value is kcs = 0.09.
The improvement is shown in Figure 6.
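The interaction factor collapses to 1 whenever either grid sits at its reference value, and shrinks the current when both move away from reference (kcs = 0.09 and the reference voltages are taken from the post):

```python
# Control/screen interaction term for the GU50 example.
kcs, ec_ref, es_ref = 0.09, 0.0, 250.0

def fsc(ec, es):
    """Equals 1 when Ec = EcRef or Es = EsRef; < 1 when both grids
    move away from their reference values."""
    return 1.0 / (1.0 - kcs * (ec - ec_ref) * (1.0 - es / es_ref))
```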
The model is now:
If this model is to be used in a situation where plate voltage can go negative, then it should be enclosed inside of a Max() function to force Ip to zero for negative Ep values. Hence the final model
And there it is. It requires the calculation of a total of 14 parameters, though the calculations are very simple for the most part. I don't include EcRef or EsRef in the total because they are
simply reference points on the published data and it's trivial to pick them.
Summarizing the parameters for the GU50, we have:
EcRef= 0
EsRef= 250
g0= 0.006963
kp1= -7.0253
kp2= 14.76
kp3= 4.8470e-5
kp4= 3.1618e-1
kt1= 24.05
kt2= 4.35
kc1= 3.3616e-4
kc2= 3.6311e-2
kc3= 1.0061
ks1= 0
ks2= 4.6688e-3
ks3= -0.16667
kcs= 0.09
At this point the plate current model does not account for non-zero suppressor voltage. To account for this would require info not normally available in the published data. But if there is any
interest in this, I could take a look at it.
I have not developed any model for control grid current or screen grid current. I'll probably take a crack at them eventually. In the meantime, an existing model (Koren or Ayumi) will have to be used
for these.
Since I've been using this plate current function only on a spreadsheet for doing loadlines, I don't have it coded into a Spice model yet. However, it should be straightforward to take the above
formulas and create the Spice model.
Hopefully I haven't made any errors in transcribing the above formulas.
Last edited by Robert Weaver; 15th October 2013 at 09:26 PM. Reason: Fixed typos in Frs, Frc Ip formulas
Left and right representations
In this question, representation means representation of a finite group $G$ on a finite-dimensional $\mathbb{C}$-vector space $V$.
When I studied this subject, I only learned about "left representations", i.e., when the action of $G$ on $V$ was a left action. What happens when we have a "right representation" (i.e., a group
homomorphism $G^{op} \rightarrow GL(V)$)?
I know this is a bit vague but I find it confusing whenever I have to deal with these "right representations"...
One specific question would be: when people talk about a right representation, say $\rho: G^{op} \rightarrow GL(V)$, and then they mention the character $\chi_{\rho}$, do they mean $\chi_{\rho}(g) =
trace(\rho(g))$ or $\chi_{\rho}(g) = trace(\rho(g^{-1}))$? What is the standard terminology when dealing with right representations?
Original question: I was reading a paper that mentioned a natural right representation of a given group $G$ and went on to say "irreducible representation of $G$". Should I assume he means a "right
irreducible representation"? Or does he mean a standard "left irreducible representation"?
$G\cong G^{op}$ via $g\mapsto g^{-1}$ – user2035 Aug 3 '11 at 11:48
@a-fortiori: I added a specific question. – expmat Aug 3 '11 at 11:53
I am pretty sure the character is $\chi_{\rho}\left(g\right) = \mathrm{trace}\left(\rho\left(g\right)\right)$, because when one speaks of right representations, one usually wants to think of them
as being genuine right representations, rather than left representations in disguise. But I'll leave the definitive answer to experts. – darij grinberg Aug 3 '11 at 12:36
Sorry for the vagueness and confusion. I just added the question that originally motivated this entry. Should I reformulate this whole entry? – expmat Aug 3 '11 at 12:57
2 Answers
This left-right business is largely an accident of notation/language/writing, which makes it all the more difficult to distinguish genuine mathematical features from accidents of writing
left-to-right horizontally, etc.
Presumably a primary way in which $G^{op}$ arises is acting on the dual $V^\star$ of a given repn $V$ of $G$, by choosing to have $G$ act "on the right" on the dual, by $(\lambda g)(v)=\lambda(gv)$, that is, just moving the parentheses. The "problem", indeed, is the apparent left-right switch, if we write something like $\pi^\star(g)(\lambda)=\lambda\circ g$, namely, $\pi^\star(gh)=\pi^\star(h)\circ\pi^\star(g)$.

It is a significant problem that $G^{op}$ cannot be isomorphic to $G$ by $g\rightarrow g$ for non-abelian $G$. Yes, it can be by $g\rightarrow g^{-1}$, but/and this amounts to converting everything back to left $G$-modules. My point would be that it is going the long way round to introduce the opposite group if we want to recover "left" $G$-modules in any case, and when the left-right business is syntactic, rather than semantic.
It is possibly better to have $G$ act "on the left" on the dual by $(g\lambda)(v)=\lambda(g^{-1}v)$.
Similarly, modules over rings can be "left" or "right", but, in fact, "right" modules are left modules over the opposite ring. Thus, a left $S$-module and right $R$-module is, as well, a left
$S\otimes R^{op}$-module, where the left actions of $S$ and $R^{op}$ commute.
Some special classes of rings do have natural isomorphisms to their opposite rings, allowing identifications...
EDIT: yes, incorporating @Snark's comment, one common manner in which the left-right seeming-confusion arises is with a group $G$ acting on a set $X$, and then "inducing" a natural action of
$G$ on functions $f$ on $X$. Much as taking dual reverses arrows, the action on functions on a set reverses arrows... thus the following. For $G$ acting on $X$ on the left, $G$ acts on
functions "on the left" by $(gf)(x)=f(g^{-1}x)$. For $G$ acting on the right on $X$, $G$ acts on functions "on the left" by $(gf)(x)=f(xg)$. (Note that this avoids having to pretend that $G^{op}$ is a group, etc.) Once or twice in one's life, one should verify that the placement of inverse (or lack thereof) makes the "left" action of $G$ on functions "associative", in the sense
$(gh)f=g(hf)$ for $g,h\in G$.
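Carrying out that verification for the left action $(gf)(x)=f(g^{-1}x)$:

$$((gh)f)(x) = f\big((gh)^{-1}x\big) = f\big(h^{-1}(g^{-1}x)\big) = (hf)(g^{-1}x) = \big(g(hf)\big)(x),$$

so $(gh)f = g(hf)$ as required. Dropping the inverse and using $(gf)(x)=f(gx)$ instead gives $(gh)f = h(gf)$, i.e., a right action, which is precisely the placement-of-inverse point.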
Edit: ... and, in reference to the added "original question", operationally there is no "left/right" sense to a repn. One may choose different ways of writing, but the underlying things do
not change. Many years ago, I. Herstein advocated writing functions on the right of their arguments, for the reason that then the order we'd read them would be the order of their application, but it mostly did not "catch on".
More edits! (Sorry for the delay... busy...) Certainly if one finds left-right distinctions/notations useful at least as a mnemonic, then they're good. For myself, somehow the variety of
contexts relevant to my stuff makes me feel that left-right is insufficient. For one thing, different contexts would seem to impose conflicting requirements. For groups, many people do not
want the contragredient to be a repn of the "opposite group", because "contragredient" is supposed to be a functor from the category of $G$-repns to itself. Similarly for
special-but-important rings ("co-algebras", generally, I believe), such as group rings, universal enveloping algebras. "Oppositely", for rings without (relevant) involution, the bi-module,
etc., notion is sometimes very useful. Nevertheless, it can get out of hand: for $R,S$-bimodule $M$ and $T,U$-bimodule $N$, $Hom_{\mathbb Z}(M,N)$ has a fairly obvious structure... until/
unless we feel compelled to establish a once-and-for-all, "perfect" notation that allegedly suits all contexts.
I know that left-right is often used to suggest co/contra-variance, but, in fact, I claim that there are some technical advantages in not identifying the two things, one being innate, the
other arguably less so.
Again, I'll claim that left-right should never be the pivotal issue, if the context is decently portrayed. Sure, something can be _tied_to_ notational left-right issues, but there's surely
something underlying that is not so fragile as notation. This is not meant to "dis" anyone who finds left-right a marvelous device to remember distinctions!
Yet-another-edit: as with many things, the path by which one comes to a situation strongly affects one's attitude. Here, if one comes to "group repns" as a special and specially-equipped case
of categories of modules over rings, yes, arguably, it makes complete sense to talk about the way in which categories over special types of rings have bonus properties. On the other hand, if
one has followed the "opposite" route, starting with group repns as useful in various applications, and considered "imbedding" that story in the larger story of categories of modules over
rings with-or-without various structures, one could just-as-easily take the viewpoint that the "general case" is a kind of "failure". That is, if categories of group repns are the "usual
case" or "normal case", one might find no need to distinguish it from the "failing cases", and, likewise, not choose language that emphasizes how lucky we are to have "all these special
features"... because they're not special?
Certainly some people prefer to proceed from the particular to the general, and others vice-versa, and such predilections color language and notation, obviously, and, often in manners nearly
incomprehensible to people with "opposite" viewpoints.
I'll just add to this explanation: both the left and right representations correspond to left actions. The thing is that going through a function transforms a left action to a right
action (and vice versa), which explains the need to either write $f(xg)$ or $f(g^{-1}x)$ for a left action of $g$ on a function $f$. – Julien Puydt Aug 3 '11 at 12:39
$G^{op}$ is a group, for every group $G$. – user2035 Aug 3 '11 at 12:58
I disagree that "this left-right business is largely an accident of notation." We are talking about a special case of the distinction between covariant and contravariant functors, and this
distinction is real, important, and far from an "accident." – Qiaochu Yuan Aug 3 '11 at 14:47
The issue is that group theorists like to fix which side they act on. Thus whenever one performs a contravariant operation, they immediately use inverses to remove the contravariance. Thus what many group theorists refer to as the right regular representation is a certain left module. Semigroup theorists and ring theorists don't have this issue because dualizing a left module gives a right module, full stop! – Benjamin Steinberg Aug 3 '11 at 14:56
It is quite a genuine mathematical feature of group rings that their categories of left and right representations are equivalent! It can be pinned down precisely in several ways, but it is
quite a feature :) – Mariano Suárez-Alvarez♦ Aug 4 '11 at 5:40
As has been pointed out in comments, any particular representation of a group can be realized as a representation with the "opposite handedness," but of another group, namely the opposite
group. As has also been mentioned, every group is canonically isomorphic to its opposite group via the inversion map. One might be inclined to combine these two comments to conclude that
the distinction between left and right representations is moot.
In my estimation: sometimes it is, and sometimes it isn't.
For example:
(1) When one defines the induction of a representation from a subgroup, one starts with a certain space of functions that transforms in a specified fashion on one side (depending on the
original representation). The canonical representation of the original group (of the original handedness) on this space is on the other side, or equivalently, is a representation of the
opposite group. However, we usually just trade it off for the same handedness representation of the same group through the inversion identification.
(2) On the other hand, sometimes one naturally is led to deal with both kinds of actions at once - like when studying functions (better: forms of some sort) on double coset spaces of a pair
of subgroups in a group, and it can be useful to keep the right and left actions as such. Natural notation such as $H\setminus G/ K$ and the bi-chiral transformation formulas play more
nicely together this way, for example.
I think that, in both cases, the choice comes down to a convenience of one sort or another.
For that matter, I suppose I feel like this convenience extends to the covariant/contravariant commentary here. One can trade off one for the other by trading off a category for its
opposite. This can certainly make a number of statements/constructions more awkward, but again this seems a matter of convenience.
All this aside, there is a lot to be said for convenience...
To provide a paper on designing and building a high performance, independent, secure, wireless network using easily obtainable hardware. The final design should be easy to construct, yet maintain
all the goals at low cost. Network access will also be open to the public, or at least those in close proximity to our location. To also cover all technical aspects of wireless networking,
including documenting little known facts, and illustrating the actual network setup and installation, or at least that's my idea.
Although some of the items in this how-to are Proxim hardware specific, the overall designs and ideas should help in the design of any and all types of wireless networks.
Also covered will be instructions for using MMDS antennas in 2.4 GHz wireless networks and several modifications for Proxim Symphony wireless network cards. Adaptation to wireless network cards
other than the Symphony should be (is) trivial.
Some terms that will be used and their definitions :
dB
Decibel, a logarithmic unit of intensity used to indicate power lost or gained between two signals. Named after Alexander Graham Bell.
dBd
Gain in decibels referenced to a standard half-wave dipole antenna. This is a more realistic reference to antenna gain.
dBi
Gain in decibels referenced to an isotropic radiator. An isotropic radiator is a theoretical antenna with equal gain to all points on a sphere. 2.15 dBi = 0 dBd
Meaningless random large numbers generated by advertising departments.
dBm
Decibel referenced to 1 milliwatt into a 50 Ohm impedance (usually). 0 dBm = 1 mW
kBps
Kilobytes per second, unit of data rate measurement, 1,000 bytes per second or 8,000 bits per second. Example: 30 kBps
kHz
Kilohertz, unit of frequency measurement, 1,000 periods per second. Example: 455 kHz
kbps
Kilobits per second, unit of data rate measurement, 1,000 bits per second. Example: 128 kbps
mW
Milliwatt, one thousandth (1/1000) of a Watt, used to indicate received or transmitted power.
DSSS
Direct Sequence Spread Spectrum, an RF carrier and pseudo-random pulse train are mixed to make a noise-like wide-band signal.
EIRP
Effective Isotropic Radiated Power, actual power transmitted in the main lobe after taking into account all cable losses and antenna gain. Based on an isotropic antenna.
FHSS
Frequency Hopping Spread Spectrum, transmitting on one frequency for a certain time, then randomly jumping to another, and transmitting again.
FSK
Frequency Shift Keying, modulating a transmitter by using data signals to shift the carrier frequency. Commonly used for digital communications.
GHz
Gigahertz, unit of frequency measurement, 1,000,000,000 periods per second. Example: 2.4 GHz
Hz
Hertz, the basic unit of frequency measurement, one cycle per second. Example: 60 Hz
MHz
Megahertz, unit of frequency measurement, 1,000,000 periods per second. Example: 147.075 MHz
MMDS
Multichannel Multipoint Distribution Service, wireless method of remotely sending conventional cable TV service to rural areas or over long distances using microwave radio frequencies.
Mbps
Megabits per second, unit of data rate measurement, 1,000,000 bits per second. Example: 1.544 Mbps
RF
Radio Frequency, electromagnetic radiation between 10 kHz and 300 GHz.
Bandwidth
The width of a signal on the radio spectrum. The greater the signal's bandwidth, the more frequency space it occupies, and the stronger the signal needs to be to overcome noise.
Coaxial Cable
A type of wire in which a center conductor is surrounded by a concentric outer conductor. Also called "coax".
Fade Margin
Fade margin is the difference, in dB, between the magnitude of the received signal at the receiver input and the minimum signal level required for reliable operation. The higher the fade margin, the more reliable the link will be. The exact amount of fade margin required depends on the desired reliability of the link, but a good rule-of-thumb is 20 to 30 dB. Fade margin is often referred to as "thermal" or "system operating margin".
Front-To-Rear (Back) Ratio
Antenna measurement that is determined from the peak power difference, in decibels, between the main radiation lobe at 0° (front of an antenna) and the strongest rearward lobe (back of the antenna). The higher the ratio, the more directional the antenna is.
Free Space Loss
Attenuation, in dB, of a RF signal's power as it propagates through open space.
Fresnel Zone
The Fresnel (fre'-nel) zone is an elliptical region surrounding the line-of-sight path between the transmitting and receiving antennas. It must be obstruction free for a microwave radio link to work properly.
Impedance
The complex combination of resistance and reactance, measured in Ohms (50 typically). Impedance must be matched for maximum power transfer.
Isotropic Radiator
Hypothetical, lossless electromagnetic wave radiating equally in all directions from a point source in free space. Used as a reference for antenna gain.
Line-of-Sight
When the transmit and receive antennas can physically see each other.
Multipath
When the RF signal arrives at the receiving antenna after bouncing through several paths. Significantly degrades the received signal power.
Path Loss
Free space loss of RF power plus any power loss due to link path obstructions, poor antenna height, and link distance.
Polarization
The polarity of a radio signal's electric field. Transmit and receive antennas must have the same polarity for maximum receive power.
Radiation Fields
There are three traditional radiation fields in free space as a result of an antenna radiating power.
□ Near-field, also called the reactive near-field region, is the region that is closest to the transmitting antenna and for which the reactive field dominates over the radiative fields.
□ Fresnel zone, also called the radiating near-field, is that region between the reactive near-field and the far-field regions and is the region in which the radiation fields dominate and where
the angular field distribution depends on distance from the transmitting antenna.
□ Far-field, or Rayleigh distance, is the region where the radiation pattern is independent of distance from the transmitting antenna.
Receiver Sensitivity
The minimum RF signal power required at the receiver to meet a certain performance level. This level is usually referred to as the Bit Error Rate (BER). Example: -80 dBm or 0.000000010 mW
When your bunghole friend won't answer the radio, send the cops to his work to tell him to turn on the HT so in all your pinkness, you don't buy the wrong computer hardware, again. Also people
who transmit in mono.
Yagi Antenna
A directional antenna made up of one driven element and one or more parasitic elements.
Data Conversion Examples
1.0 Watt = 1000.0 mW = +30.0 dBm
10 Watts transmitted - 5 Watts lost in cable = 3 dB lost
-85 dBm for 1x10^-6 BER = 1 bit-in-a-million error rate at a -85 dBm received power level
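The conversions above can be sanity-checked with a few lines of code (Python is used here just for a quick check; the document's own formulas follow in Perl):

```python
import math

def mw_to_dbm(mw):
    """Convert power in milliwatts to dBm."""
    return 10 * math.log10(mw)

def dbm_to_mw(dbm):
    """Convert power in dBm to milliwatts."""
    return 10 ** (dbm / 10)

print(mw_to_dbm(1000))            # 1 W = 1000 mW -> 30.0 dBm
print(dbm_to_mw(0))               # 0 dBm -> 1.0 mW
print(10 * math.log10(5 / 10))    # 10 W in, 5 W out -> about -3 dB
```

The two functions are exact inverses, which is a handy property to test against.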
Useful Perl Equations
# These snippets assume: use POSIX qw(log10); use Math::Trig qw(deg2rad);
# Milliwatts to dBm
$dBm = 10 * (log10 $mW);
# dBm to milliwatts
$mW = 10 ** ($dBm / 10);
# Attenuation/gain in dB given RF power input and output
$dB = 10 * (log10 ($power_output / $power_input));
# dB to power ratio
$power_ratio = 10 ** ($db / 10);
# Free space loss, frequency in MHz and distance in kilometers (isotropic)
$free_space_loss = 32.4 + (20 * (log10 $freq)) + (20 * (log10 $distance));
# Calculating quarter wavelength, in centimeters, given frequency, in MHz
$quarter_wavelength = 7132.32 / $freq;
# Fresnel zone boundary
# You must satisfy 60% clearance of the first Fresnel zone ($nth = 0.6)
# $fn Nth Fresnel zone boundary in meters
# $nth The Nth numerical value (0.6, 1, 2, etc)
# $d1 Distance from one end of path to reflection/obstruction in meters
# $dis Total path distance in meters
# $freq Frequency in MHz
$fn = 17.31 * sqrt (($nth * $d1 * ($dis - $d1)) / ($freq * $dis));
# Attenuation, in dB, for 30.5 meters (100 ft) of LMR-400 (frequency in MHz)
$loss = (0.12229 * sqrt $freq) + (0.00026 * $freq);
# Received power level and EIRP
# $pwr Output power of transmitter (in dBm)
# $tx_cab Total transmitter cable attenuation (in dB)
# $rx_cab Total receiver cable attenuation (in dB)
# $tx_ant Total transmitting antenna gain (in dBi)
# $rx_ant Total receiving antenna gain (in dBi)
# $rx_pwr Received power level at receiver (in dBm)
# $EIRP Effective Isotropic Radiated Power (in dBm)
$rx_pwr = $pwr - $tx_cab + $tx_ant - $free_space_loss + $rx_ant - $rx_cab;
$EIRP = $pwr - $tx_cab + $tx_ant;
# Parabolic dish antenna gain and beamwidth, in dB
# $dia Reflector diameter in centimeters
# $wave Wavelength of operating frequency in centimeters
# $illum Illumination factor. typically 0.6
# $freq Frequency in GHz
# $theta 3 dB beamwidth in degrees
$gain = (20 * (log10 ($dia / $wave))) + ((10 * (log10 $illum)) + 9.938);
$theta = 70 / (($dia / 30.48) * $freq);
# Distance and antenna height calculations
# All antenna heights are measured above ground level (AGL), in meters
# $radio_horizon Distance to radio horizon, in km
# $antenna_height Antenna height above ground level, in meters
# $distance_max Maximum space wave communications distance, in km
# $rx_height Receiver antenna height
# $tx_height Transmitter antenna height
$radio_horizon = 4 * (sqrt $antenna_height);
$antenna_height = ($radio_horizon / 4) ** 2;
$distance_max = (4 * sqrt $tx_height) + (4 * sqrt $rx_height);
$rx_height = (($distance_max - (4 * sqrt $tx_height)) / 4) ** 2;
$tx_height = (($distance_max - (4 * sqrt $rx_height)) / 4) ** 2;
# Great circle distance calculator using decimal degrees
# $lat1 Latitude of site A
# $lon1 Longitude of site A
# $lat2 Latitude of site B
# $lon2 Longitude of site B
# $radius Radius of the earth in km (6367)
# $distance Distance between the two points in km
$dlon = $lon2 - $lon1;
$dlat = $lat2 - $lat1;
$a = ((sin(deg2rad $dlat / 2)) ** 2) + ((cos(deg2rad $lat1) * cos(deg2rad $lat2)) * ((sin(deg2rad $dlon / 2)) ** 2));
$distance = (2 * atan2(sqrt $a, sqrt(1 - $a))) * $radius; | {"url":"http://www.qsl.net/n9zia/wireless/page01.html","timestamp":"2014-04-23T07:54:20Z","content_type":null,"content_length":"14370","record_id":"<urn:uuid:70d21525-a9b5-4af9-96c6-c22a2705d320>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00590-ip-10-147-4-33.ec2.internal.warc.gz"} |
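As a worked example tying the pieces together, here is the same link-budget arithmetic in Python. All numbers are hypothetical, chosen only for illustration: a 2.4 GHz link, a 100 mW transmitter, 24 dBi dishes, 3 dB of cable loss per end, and a 5 km path.

```python
import math

def free_space_loss_db(freq_mhz, dist_km):
    # Isotropic free-space loss, same formula as the Perl section above
    return 32.4 + 20 * math.log10(freq_mhz) + 20 * math.log10(dist_km)

# Hypothetical link parameters
tx_power_dbm = 20.0   # 100 mW transmitter
tx_cable_db = 3.0     # transmitter-side cable loss
rx_cable_db = 3.0     # receiver-side cable loss
tx_ant_dbi = 24.0     # transmit antenna gain
rx_ant_dbi = 24.0     # receive antenna gain
freq_mhz = 2437.0     # 802.11 channel 6
dist_km = 5.0

fsl = free_space_loss_db(freq_mhz, dist_km)
eirp = tx_power_dbm - tx_cable_db + tx_ant_dbi
rx_power = eirp - fsl + rx_ant_dbi - rx_cable_db

# Fade margin against a hypothetical -80 dBm receiver sensitivity
fade_margin = rx_power - (-80.0)
print(f"FSL = {fsl:.1f} dB, EIRP = {eirp:.1f} dBm, "
      f"Rx = {rx_power:.1f} dBm, margin = {fade_margin:.1f} dB")
```

With these made-up numbers the received level comes out near -52 dBm and the fade margin near 28 dB, comfortably inside the 20 to 30 dB rule of thumb from the glossary.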
1. Poisson law for some nonuniformly hyperbolic dynamical systems with polynomial rate of mixing (.pdf)
F. Pène, B. Saussol, preprint.
Abstract: We consider some nonuniformly hyperbolic invertible dynamical systems which are modeled by a Gibbs-Markov-Young tower. We assume a polynomial tail for the inducing time and a polynomial
control of hyperbolicity, as introduced by Alves, Pinheiro and Azevedo. These systems admit a physical measure with polynomial rate of mixing. In this paper we prove that the distribution of the
number of visits to a ball B(x,r) converges to a Poisson distribution as the radius r -> 0 and after suitable normalization.
2. Return- and Hitting-time limits for rare events of null-recurrent Markov maps, (.pdf)
F. Pène, B.Saussol, R. Zweimüller, preprint.
Abstract: We determine limit distributions for return- and hitting-time functions of certain asymptotically rare events for conservative ergodic infinite measure preserving transformations with
regularly varying asymptotic type. Our abstract result applies, in particular, to shrinking cylinders around typical points of null-recurrent renewal shifts and infinite measure preserving
interval maps with neutral fixed points.
3. Exponential law for random subshifts of finite type (.pdf)
Jérôme Rousseau, Benoît Saussol, Paulo Varandas, preprint.
Abstract: In this paper we study the distribution of hitting times for a class of random dynamical systems. We prove that for invariant measures with super-polynomial decay of correlations
hitting times to dynamically defined cylinders satisfy an exponential distribution. Similar results are obtained for random expanding maps. We emphasize that what we establish is a quenched
exponential law for hitting times.
4. Skew products, quantitative recurrence, shrinking targets and decay of correlations, (.ps, .pdf)
Stefano Galatolo, Jérôme Rousseau, Benoît Saussol, To appear in Ergodic Theory and Dynamical Systems
Abstract: We consider toral extensions of hyperbolic dynamical systems. We prove that their quantitative recurrence (also with respect to given observables) and hitting-time scale behavior depend on the arithmetical properties of the extension. By this we show that those systems have a polynomial decay of correlations with respect to C^r observables, and give estimates for its exponent, which depend on r and on the arithmetical properties of the system. We also show examples of systems of this kind which do not have the shrinking target property, and which have a trivial limit distribution of return time statistics.
5. Recurrence rates and hitting-time distributions for random walks on the line, (.ps / .pdf)
F. Pène, B.Saussol, R. Zweimüller, accepted for publication in Annals of Probability.
Abstract: We consider random walks on the line given by a sequence of independent identically distributed jumps belonging to the strict domain of attraction of a stable distribution, and first
determine the almost sure exponential divergence rate, as r goes to zero, of the return time to (-r,r). We then refine this result by establishing a limit theorem for the hitting-time
distributions of (x-r,x+r) with arbitrary real x.
6. Central limit theorem for dimension of Gibbs measures in hyperbolic dynamics, (.ps / .pdf)
R. Leplaideur, B.Saussol, Stochastic and Dynamics 12-2 (2012).
Abstract: We consider a class of non-conformal expanding maps on the d-dimensional torus. For an equilibrium measure of a Hölder potential, we prove an analogue of the Central Limit Theorem for
the fluctuations of the logarithm of the measure of balls as the radius goes to zero. An unexpected consequence is that when the measure is not absolutely continuous, then half of the balls of
radius ε have a measure smaller than ε^δ and half of them have a measure larger than ε^δ, where δ is the Hausdorff dimension of the measure. We first show that the problem is equivalent to the
study of the fluctuations of some Birkhoff sums. Then we use general results from probability theory as the weak invariance principle and random change of time to get our main theorem. Our method
also applies to conformal repellers and Axiom A surface diffeomorphisms and possibly to a class of one-dimensional non uniformly expanding maps. These generalizations are presented at the end of
the paper.
7. Hitting and returning into rare events for all alpha-mixing processes, (.ps / .pdf)
M. Abadi, B.Saussol, Stochastic Processes and their Applications 121-2 (2011) 314-323.
Abstract: We prove that for any α-mixing process the hitting time of any n-string A_n converges, when suitably normalized, to an exponential law. We identify the normalization constant λ(A_n).
A similar statement holds also for the return time.
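Schematically (the notation here is illustrative; the paper states it precisely), the hitting-time statement for an $n$-string $A_n$ takes the familiar form

```latex
\[
\mathbb{P}\!\left( \tau_{A_n} > \frac{t}{\lambda(A_n)\,\mathbb{P}(A_n)} \right)
\;\longrightarrow\; e^{-t}, \qquad t \ge 0,
\]
```

with the constants $\lambda(A_n)$ identified explicitly, and an analogous statement for return times.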
8. An introduction to quantitative Poincaré recurrence in dynamical systems, (.ps / .pdf)
B.Saussol, Reviews in Mathematical Physics 21-8 (2009) 949-979.
Abstract: We present some recurrence results in the context of ergodic theory and dynamical systems. The main focus will be on smooth dynamical systems, in particular those with some chaotic/
hyperbolic behavior. The aim is to compute recurrence rates, limiting distributions of return times, and short returns. We choose to give the full proofs of the results directly related to
recurrence, avoiding as much as possible hiding the ideas behind technical details. This drove us to consider as our basic dynamical system a one-dimensional expanding map of the interval. We note
however that most of the arguments still apply to higher dimensional or less uniform situations, so that most of the statements continue to hold. Some basic notions from the thermodynamic
formalism and the dimension theory of dynamical systems will be recalled.
9. Back to balls in billiards, (.ps / .pdf)
F. Pène, B. Saussol, Communications in Mathematical Physics 293-3 (2010) 837-866; general-audience presentation.
Abstract: We consider a billiard in the plane with periodic configuration of convex scatterers. This system is recurrent, in the sense that almost every orbit comes back arbitrarily close to the
initial point. In this paper we study the time needed to get back in an r-ball about the initial point, in the phase space and also for the position, in the limit when r->0. We establish the
existence of an almost sure convergence rate, and prove a convergence in distribution for the rescaled return times.
10. Poincaré recurrence for observations, (.ps/ .pdf)
J. Rousseau, B.Saussol, Transactions A.M.S. 362-11 (2010) 5845-5859
Abstract: A high dimensional dynamical system is often studied by experimentalists through the measurement of a relatively low number of different quantities, called an observation. Following
this idea and in the continuity of Boshernitzan's work, for a measure preserving system, we study Poincaré recurrence for the observation. The link between the return time for the observation and
the Hausdorff dimension of the image of the invariant measure is considered. We prove that when the decay of correlations is super polynomial, the recurrence rates for the observations and the
pointwise dimensions relatively to the push-forward are equal.
11. Quantitative recurrence in two dimensional extended processes, (.ps / .ps.gz / .pdf / .dvi )
F. Pène, B.Saussol, Ann. Inst. H. Poincaré proba-stat 45-4 (2009) 1065-1084.
Abstract: Under some mild condition, a random walk in the plane is recurrent. In particular each trajectory is dense, and a natural question is how much time one needs to approach a given small
neighborhood of the origin. We address this question in the case of some extended dynamical systems similar to planar random walks, including Z^2-extension of hyperbolic dynamics. We define a
pointwise recurrence rate and relate it to the dimension of the process, and establish a convergence in distribution of the rescaled return times near the origin.
12. Large deviations for return times in non-rectangle sets for Axiom A diffeomorphisms, (.ps / .ps.gz / .pdf / .dvi )
R. Leplaideur, B.Saussol, Discrete and Continuous Dynamical Systems A 22 (2008) 327-344
Abstract: For Axiom A diffeomorphisms and equilibrium states, we prove a Large deviations result for the sequence of successive return times into a fixed Borel set, under some assumption on the
boundary. Our result relies on and extends the work by Chazottes and Leplaideur who considered cylinder sets of a Markov partition.
13. Recurrence rate in rapidly mixing dynamical systems, (.ps / .ps.gz / .pdf / .dvi )
B.Saussol, Discrete and Continuous Dynamical Systems A 15 (2006) 259-267
Abstract: For measure preserving dynamical systems on metric spaces we study the time needed by a typical orbit to return close to its starting point. We prove that when the decay of correlations is super-polynomial the recurrence rates and the pointwise dimensions are equal. This gives a broad class of systems for which the recurrence rate equals the Hausdorff dimension of the invariant measure.
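For orientation, the quantities compared in this abstract are (in the usual notation; $T$ the map, $\mu$ the invariant measure, $B(x,r)$ the ball of radius $r$):

```latex
\[
\tau_r(x) = \inf\{\, k \ge 1 : d(T^k x, x) < r \,\},
\qquad
\underline{R}(x) = \liminf_{r\to 0} \frac{\log \tau_r(x)}{-\log r},
\qquad
\underline{d}_\mu(x) = \liminf_{r\to 0} \frac{\log \mu(B(x,r))}{\log r},
\]
```

and the theorem asserts $\underline{R}=\underline{d}_\mu$ (and similarly for the $\limsup$ versions) $\mu$-almost everywhere when correlations decay super-polynomially.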
14. Products of non-stationary random matrices and Multiperiodic equations of several scaling factors, (.ps / .ps.gz / .pdf / .dvi )
A.H.Fan, B.Saussol, J.Schmeling, Pacific Journal of Mathematics 214 (2004) 31-54
Abstract: Let b>1 be a real number and M: R -> GL(C^d) be a uniformly almost periodic matrix-valued function. We study the asymptotic behavior of the product P_n(x) = M(b^{n-1}x) ... M(bx) M(x).
Under some condition we prove a theorem of Furstenberg-Kesten type for such products of non-stationary random matrices. Theorems of Kingman and Oseledec type are also proved. The obtained results
are applied to multiplicative functions defined by commensurable scaling factors. We get a positive answer to a Strichartz conjecture on the asymptotic behavior of such multiperiodic functions.
The case where b is a Pisot-Vijayaraghavan number is well studied.
15. Recurrence spectrum in smooth dynamical system, (.ps / .ps.gz / .pdf / .dvi )
B.Saussol, J.Wu, Nonlinearity 16 (2003) 1991-2001
Abstract: We prove that for conformal expanding maps the return time does have constant multifractal spectrum. This is the counterpart of the result by Feng and Wu in the symbolic setting.
16. Recurrence and Lyapunov exponents for diffeomorphisms, (.ps / .ps.gz / .pdf / .dvi )
B.Saussol, S.Troubetzkoy, S.Vaienti, Moscow Mathematical Journal 3 (2003) 189-203
Abstract: We prove two inequalities between the Lyapunov exponents of a diffeomorphism and its local recurrence properties. We give examples showing that each of the inequalities is optimal.
17. Distribution of frequencies of digits via multifractal analysis, (.ps / .ps.gz / .pdf / .dvi )
L.Barreira, B.Saussol, J.Schmeling, Journal of Number Theory 97/2 (2002) 413-442
Abstract: We study the Hausdorff dimension of a large class of sets in the real line defined in terms of the distribution of frequencies of digits for the representation in some integer base. In
particular, our results unify and extend classical work of Borel, Besicovitch, Eggleston, and Billingsley in several directions. Our methods are based on recent results concerning the
multifractal analysis of dynamical systems and often allow us to obtain explicit expressions for the Hausdorff dimension. This work is still another illustration of the role that the theory of
dynamical systems can play in number theory.
18. On the uniform hyperbolicity of certain hyperbolic systems, (.ps / .ps.gz / .pdf / .dvi )
J.F.Alves, V.Araújo, B.Saussol, Proc. Amer. Math. Soc. 131 (2003) 1303-1309
Abstract: We give sufficient conditions for the uniform hyperbolicity of certain nonuniformly hyperbolic dynamical systems. In particular, we show that local diffeomorphisms that are nonuniformly
expanding on sets of total probability are necessarily uniformly expanding. We also present a version of this result for diffeomorphisms with nonuniformly hyperbolic sets.
19. Recurrence, dimensions and Lyapunov exponents, (.ps / .ps.gz / .pdf / .dvi )
B.Saussol, S.Troubetzkoy, S.Vaienti, Journal of Statistical Physics 106 (2002) 623-634
Abstract: We show that the Poincaré return time of a typical cylinder is at least its length. For one dimensional maps we express the Lyapunov exponent and dimension via return times.
20. On pointwise dimensions and spectra of measures, (.ps / .ps.gz / .pdf / .dvi )
J.-R. Chazottes, B.Saussol, C. R. Acad. Sci. Paris Sér I Math. 333 (2001) 719-723
Abstract: We give a new definition of the lower pointwise dimension associated with a Borel probability measure with respect to a general Caratheodory-Pesin structure. Then we show that the
spectrum of the measure coincides with the essential supremum of the lower pointwise dimension. We provide an example coming from dynamical systems.
21. Variational principles for hyperbolic flows, (.ps / .ps.gz / .pdf / .dvi )
L.Barreira, B.Saussol, Fields Institute Communications 31 (2002) 43-63
Abstract: We establish a conditional variational principle for hyperbolic flows. In particular we provide an explicit expression for the topological entropy of the level sets of Birkhoff
averages, and obtain a very simple new proof of the corresponding multifractal analysis. One application is that for a geodesic flow F_t on a compact Riemannian manifold of negative sectional curvature, if there exists a geodesic F_t x with "average" scalar curvature K, then there exist uncountably many geodesics with the same "average" scalar curvature K. The variational
principle can also be used to establish the analyticity of several new classes of multifractal spectra for hyperbolic flows.
22. Pointwise dimensions for Poincaré recurrence associated with maps and special flows, (.ps / .ps.gz / .pdf / .dvi )
V.Afraimovich, J.-R.Chazottes, B.Saussol, Discrete and Continuous Dynamical Systems A 9 (2003) 263-280
Abstract: We introduce pointwise dimensions and spectra associated with Poincaré recurrences. These quantities are then calculated for any ergodic measure of positive entropy on a weakly
specified subshift. We show that they satisfy a relation comparable to Young's formula for the Hausdorff dimension of measures invariant under surface diffeomorphisms. A key result in
establishing these formula is to prove that the Poincaré recurrence for a "typical" cylinder is asymptotically its length. Examples are provided which show that this is not true for some systems
with zero entropy. Similar results are obtained for special flows and we get a formula relating spectra for measures of the base to the ones of the flow.
23. Statistics of return time via inducing, (.ps / .ps.gz / .pdf / .dvi )
H.Bruin, B.Saussol, S.Troubetzkoy, S.Vaienti, Ergodic Theory and Dynamical Systems 23 (2003) 991-1013
Abstract: We prove that return time statistics of a dynamical system do not change if one passes to an induced (i.e. first return) map. We apply this to show exponential return time statistics in
i) smooth interval maps with nowhere-dense critical orbits and ii) certain interval maps with neutral fixed points. The method also applies to iii) certain quadratic maps of the complex plane.
24. Product structure of Poincaré recurrence. (.ps / .ps.gz / .pdf / .dvi )
L.Barreira, B.Saussol, Ergodic Theory and Dynamical Systems 22 (2002) 33-61
Abstract: We provide new non-trivial quantitative information on the behavior of Poincaré recurrence. In particular we establish the almost everywhere coincidence of the recurrence rate and of
the pointwise dimension for a large class of repellers, including repellers without finite Markov partitions.
Using this information, we are able to show that for locally maximal hyperbolic sets the recurrence rate possesses a certain local product structure, which closely imitates the product structure
provided by the families of local stable and unstable manifolds, as well as the almost product structure of hyperbolic measures.
25. Higher dimensional multifractal analysis. (.ps / .ps.gz / .pdf / .dvi )
L.Barreira, J.Schmeling, B.Saussol, Journal de Mathématiques pures et appliquées 81 (2002) 67-91
Abstract: We establish a higher-dimensional version of multifractal analysis for several classes of hyperbolic dynamical systems. This means that we consider multifractal decompositions which are
associated to multi-dimensional parameters. In particular, we obtain a conditional variational principle, which shows that the topological entropy of the level sets of pointwise dimensions, local
entropies, and Lyapunov exponents can be simultaneously approximated by the entropy of ergodic measures. A similar result holds for the Hausdorff dimension.
This study allows us to exhibit new nontrivial phenomena absent in the one-dimensional multifractal analysis. In particular, while the domain of definition of a one-dimensional spectrum is always
an interval, we show that for higher-dimensional spectra the domain need not be convex and may even have empty interior, while still containing an uncountable number of points. Furthermore, the
interior of the domain of a higher-dimensional spectrum has in general more than one connected component.
26. Local dimension for Poincaré recurrence.
V.Afraimovich, J.-R. Chazottes, B.Saussol. Electron. Res. Announc. Amer. Math. Soc. 6 (2000), 64-74
Abstract: Pointwise dimensions and spectra for measures associated with Poincaré recurrences are calculated for arbitrary weakly specified subshifts with positive entropy. We show that they
satisfy a relation comparable to Young's formula for the Hausdorff dimension of measures invariant under surface diffeomorphisms. It is also proved that the Poincaré recurrence for a "typical"
cylinder is asymptotically its length. Examples are provided which show that this is not true for some systems with zero entropy.
27. Hausdorff dimension of measures via Poincaré recurrence. (.ps / .ps.gz / .pdf / .dvi )
L.Barreira, B.Saussol. Communication in mathematical physics 219 (2001) 443-463
Abstract: We study the quantitative behavior of Poincaré recurrence. In particular, for an equilibrium measure on a locally maximal hyperbolic set of a diffeomorphism f with Hoelder derivative,
we show that the recurrence rate to each point coincides almost everywhere with the Hausdorff dimension d of the measure; that is, inf{k > 0 : dist(f^k x, x) < r} ~ r^{-d}. This result is a
non-trivial generalization of work of Boshernitzan concerning the quantitative behavior of recurrence, and is a dimensional version of work of Ornstein and Weiss for the entropy. We stress that
our approach uses different techniques. Furthermore, our results motivate the introduction of a new method to compute the Hausdorff dimension of measures.
28. On fluctuations and the exponential statistics of return times. ( .ps / .ps.gz / .pdf / .dvi )
B.Saussol. Nonlinearity 14 (2001) 179-191
Abstract: This paper presents some facts related to the exponential statistics of return times. First, we show that this behavior is valid for a large class of dynamical systems.
Second, we investigate the question of computing the speed of convergence to this limiting law. We show that this speed carries information about the system under consideration, and via a
local analysis we can relate it to combinatorial properties of certain orbits.
Finally, we prove that for an arbitrary dynamical system, the existence of an exponential statistic for the return time implies the equivalence between the fluctuations of empirical entropies
and repetition times.
29. Variational principles and mixed multifractal spectra. ( .ps / .ps.gz / .pdf / .dvi )
L.Barreira, B.Saussol. Transactions AMS 353 (2001) 3919-3944
Abstract: We establish a "conditional" variational principle, which unifies and extends many results in the multifractal analysis of dynamical systems. Namely, instead of considering several
quantities of local nature and studying separately their multifractal spectra we develop a unified approach which allows us to obtain all spectra from a new multifractal spectrum. Using the
variational principle we are able to study the regularity of the spectra and the full dimensionality of their irregular sets for several classes of dynamical systems, including the class of maps
with upper semi-continuous metric entropy.
Another application of the variational principle is the following. The multifractal analysis of dynamical systems studies multifractal spectra such as the dimension spectrum for pointwise
dimensions and the entropy spectrum for local entropies. It has been a standing open problem to effect a similar study for the "mixed" multifractal spectra, such as the dimension spectrum for
local entropies and the entropy spectrum for pointwise dimensions. We show that they are analytic for several classes of hyperbolic maps. We also show that these spectra are not necessarily
convex, in strong contrast with the "non-mixed" multifractal spectra.
30. Multifractal analysis of hyperbolic flows. (.ps / .ps.gz / .pdf / .dvi )
L.Barreira, B.Saussol. Communication in Mathematical Physics 214 (2000) 339-371
Abstract: We establish the multifractal analysis of hyperbolic flows and of suspension flows over subshifts of finite type. A non-trivial consequence of our results is that for every Hoelder
continuous function non-cohomologous to a constant, the set of points without Birkhoff average has full topological entropy.
31. Dimensions for recurrence times: topological and dynamical properties. ( .ps / .ps.gz / .pdf / .dvi )
V.Penné, B. Saussol, S. Vaienti. Discrete and Continuous Dynamical Systems 5 (1999) 783-798
Abstract: In this paper we give new properties of the dimension introduced by Afraimovich to characterize Poincaré recurrence and which we proposed to call Afraimovich-Pesin's (AP's) dimension.
We will show in particular that AP's dimension is a topological invariant and that it often coincides with the asymptotic distribution of periodic points: deviations from this behavior could
suggest that the AP's dimension is sensitive to some "non-typical" points.
32. Statistics of return times: a general framework and new applications. ( .ps / .ps.gz / .pdf / .dvi )
M.Hirata, B.Saussol, S.Vaienti. Communication in Mathematical Physics 206 (1999) 33-55
Abstract: In this paper we provide general estimates for the errors between the distribution of the first, and more generally, the Kth return time (suitably rescaled) and the Poisson law for
measurable dynamical systems. In the case that the system exhibits strong mixing properties, these bounds are explicitly expressed in terms of the speed of mixing. Using these approximations, the
Poisson law is finally proved to hold for a large class of non hyperbolic systems on the interval.
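As a quick numerical illustration of this kind of result (our sketch, not code from the paper): for an i.i.d. binary shift, the return times to a fixed cylinder, rescaled by the cylinder's measure, should have mean close to 1, as Kac's lemma and the Poisson approximation predict.

```python
import random

# Illustration (not from the paper): return times to the cylinder
# [1,0,1,1,0] under an i.i.d. fair-coin shift, rescaled by the
# cylinder's measure 2**-5, should have mean close to 1.
random.seed(0)
seq = [random.randint(0, 1) for _ in range(200_000)]
word = (1, 0, 1, 1, 0)
hits = [i for i in range(len(seq) - len(word) + 1)
        if tuple(seq[i:i + len(word)]) == word]
gaps = [b - a for a, b in zip(hits, hits[1:])]
mean = sum(gaps) / len(gaps) * 2 ** -len(word)   # rescale by the measure
print(round(mean, 2))  # close to 1
```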
33. Fractal and statistical characteristics of recurrence times. ( .ps / .ps.gz / .pdf / .dvi )
V. Penné, B. Saussol, S. Vaienti. Journal de Physique (Paris), proceedings of the conference Disorders and Chaos (Rome) in honor of Giovanni Paladin
Abstract: In this paper we introduce and discuss two properties related to recurrences in dynamical systems. The first gives the asymptotic law for the return time in a neighborhood, while the
second gives a topological index of fractal type to characterize the system or some regions of the system.
34. Absolutely continuous invariant measures for multidimensional expanding maps. ( .ps / .ps.gz / .pdf / .dvi )
B. Saussol. Israel Journal of Mathematics 116 (2000) 223-248
Abstract: We investigate the existence and statistical properties of absolutely continuous invariant measures for multidimensional expanding maps with singularities. The key point is the
establishment of a spectral gap in the spectrum of the transfer operator. Our assumptions appear quite naturally for maps with singularities. We allow maps that are discontinuous on some
extremely wild sets, the shape of the discontinuities being completely ignored with our approach.
35. A probabilistic approach to intermittency. ( .ps / .ps.gz / .dvi )
C. Liverani, B. Saussol, S. Vaienti. Ergodic Theory and Dynamical Systems 19 (1999) 671-685
Abstract: We present an original approach which allows us to investigate the statistical properties of a non-uniformly hyperbolic map of the interval. Based on a stochastic approximation of the
deterministic map, this method gives essentially the optimal polynomial bound for the decay of correlations, the degree depending on the order of the tangency at the neutral fixed point.
36. Conformal measure and decay of correlations for covering weighted systems. ( .ps / .ps.gz / .pdf / .dvi )
C. Liverani, B. Saussol, S. Vaienti. Ergodic Theory and Dynamical Systems 18 (1998) 1399-1420
Abstract: We show that for a large class of piecewise monotonic transformations on a totally ordered, compact set one can construct conformal measures and obtain exponential mixing rate for the
associated equilibrium state. The method is based on the study of the Perron-Frobenius operator. The conformal measure, the density of the invariant measure and the rate of mixing are deduced by
using an appropriate Hilbert metric, without any compactness arguments, even in the case of a countable-to-one transformation.
Numerically solving NLS equation
Hi all.
I am using the split-step Fourier method to solve the NLS equation and study the interaction of two solitons.
I have done the animation of the collision of two solitons for exact solution.
But when I numerically solve it and watch the animation, the profiles during the interaction are not quite the same, and I don't really know what's wrong. I have tried again with a finer step
size, but it doesn't help.
Should the numerical solution closely resemble the exact solution during the interaction?
Can you kindly refer me to some animations of NLS soliton collisions computed from numerical solutions? I am using the NLS equation in the water wave context in its simplest form, namely
i u_t + u_xx + |u|^2 u = 0.
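For what it's worth, a minimal split-step (Strang) scheme for i u_t + u_xx + |u|^2 u = 0 can be written in a few lines; the grid parameters and the sech initial condition below are illustrative choices, not taken from the setup above:

```python
import numpy as np

# Split-step Fourier sketch for i u_t + u_xx + |u|^2 u = 0 (Strang
# splitting: half a nonlinear step, a full linear step in Fourier
# space, then another half nonlinear step).
N, L, dt = 256, 40.0, 1e-3
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)       # spectral wavenumbers

u = np.sqrt(2) / np.cosh(x)                      # exact one-soliton profile
mass0 = np.sum(np.abs(u) ** 2)                   # conserved L2 "mass"

def step(u, dt):
    u = u * np.exp(0.5j * dt * np.abs(u) ** 2)   # nonlinear half step
    u = np.fft.ifft(np.exp(-1j * dt * k ** 2) * np.fft.fft(u))  # linear step
    u = u * np.exp(0.5j * dt * np.abs(u) ** 2)   # nonlinear half step
    return u

for _ in range(1000):                            # evolve to t = 1
    u = step(u, dt)
```

With this normalization sqrt(2) sech(x) is an exact soliton, so after the run max|u| should still be sqrt(2) and the L2 mass should be conserved to round-off; if collision profiles drift, checking these two invariants against the single-soliton case is a good first diagnostic.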
At sea level (3978 mi from the center of the earth) an astronaut weighs 117 lb. Find the astronaut's weight when the spacecraft is 284 mi above the surface of the earth and not in motion.
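Assuming weight falls off with the inverse square of the distance from the Earth's center, the computation is one line (the variable names here are ours):

```python
# Inverse-square law: w2 = w1 * (r1 / r2)**2.
r1 = 3978.0          # mi, distance from Earth's center at sea level
w1 = 117.0           # lb, weight at sea level
r2 = r1 + 284.0      # mi, 284 mi above the surface
w2 = w1 * (r1 / r2) ** 2
print(round(w2, 1))  # about 101.9 lb
```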
Unfolding and Reconstructing Polyhedra
This thesis covers work on two topics: unfolding polyhedra into the plane and reconstructing polyhedra from partial information. For each topic, we describe previous work in the area and present an array of new research and results.

Our work on unfolding is motivated by the problem of characterizing precisely when overlaps will occur when a polyhedron is cut along edges and unfolded. In contrast to previous work, we begin by classifying overlaps according to a notion of locality. This classification enables us to focus upon particular types of overlaps, and use the results to construct examples of polyhedra with interesting unfolding properties.

The research on unfolding is split into convex and non-convex cases. In the non-convex case, we construct a polyhedron for which every edge unfolding has an overlap, with fewer faces than all previously known examples. We also construct a non-convex polyhedron for which every edge unfolding has a particularly trivial type of overlap. In the convex case, we construct a series of example polyhedra for which every unfolding of various types has an overlap. These examples disprove some existing conjectures regarding algorithms to unfold convex polyhedra without overlaps.

The work on reconstruction is centered on analyzing the computational complexity of a number of reconstruction questions. We consider two classes of reconstruction problems. The first problem is as follows: given a collection of edges in space, determine whether they can be rearranged by translation only to form a polygon or polyhedron. We consider variants of this problem by introducing restrictions like convexity, orthogonality, and non-degeneracy. All of these problems are NP-complete, though some are proved to be only weakly NP-complete. We then consider a second, more classical problem: given a collection of edges in space, determine whether they can be rearranged by translation and/or rotation to form a polygon or polyhedron. This problem is NP-complete for orthogonal polygons, but polynomial algorithms exist for non-orthogonal polygons. For polyhedra, it is shown that if degeneracies are allowed then the problem is NP-hard, but the complexity is still unknown for non-degenerate polyhedra.
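As a toy illustration of the translation-only problem (not code from the thesis): each edge contributes a fixed vector, and a necessary condition for the edges to chain into a closed polygon is that some choice of orientations makes the signed vectors sum to zero. The brute-force check below already takes exponential time in the number of edges, consistent with the hardness results described above:

```python
from itertools import product

def can_close(edges):
    """True if the 2D edge vectors admit orientations summing to zero --
    a necessary condition for assembling a closed polygon by translation."""
    for signs in product((1, -1), repeat=len(edges)):
        if all(sum(s * e[i] for s, e in zip(signs, edges)) == 0
               for i in range(2)):
            return True
    return False

print(can_close([(3, 0), (0, 4), (3, 0), (0, 4)]))  # True
print(can_close([(3, 0), (0, 4), (5, 0)]))          # False
```

A zero signed sum is not by itself sufficient for a simple polygon; handling such degeneracies is precisely where the complexity questions above become delicate.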
Key Strategies Brief
Five “Key Strategies” for Effective Formative Assessment
In order to build a comprehensive framework for formative assessment, Wiliam and Thompson (2007) proposed that three processes were central:
1. Establishing where learners are in their learning
2. Establishing where they are going
3. Establishing how to get there
By considering separately the roles of the teacher and the students themselves, they proposed that formative assessment could be built up from five “key strategies.”
1. Clarifying, sharing, and understanding goals for learning and criteria for success with learners
There are a number of ways teachers can begin the process of clarifying and sharing learning goals and success criteria. Many teachers specify the learning goals for the lesson at the beginning of
the lesson, but in doing so, many teachers fail to distinguish between the learning goals and the activities that will lead to the required learning. When teachers start from what it is they want
students to know and design their instruction backward from that goal, then instruction is far more likely to be effective (Wiggins and McTighe 2000).
Wiggins and McTighe also advocate a two-stage process of first clarifying the learning goals themselves (what is worthy and requiring understanding?), which is then followed by establishing success
criteria (what would count as evidence of understanding?). Only then should the teacher move on to exploring activities that will lead to the required understanding.
However, it is important that students also come to understand these goals and success criteria, as Royce Sadler (1989, p. 121) notes:
The indispensable conditions for improvement are that the student comes to hold a concept of quality roughly similar to that held by the teacher, is continuously able to monitor the quality of what
is being produced during the act of production itself, and has a repertoire of alternative moves or strategies from which to draw at any given point.
Indeed, there is evidence that discrepancies in beliefs about what it is that counts as learning in mathematics classrooms may be a significant factor in the achievement gaps observed in mathematics
classrooms. In a study of 72 students between the ages of seven and thirteen, Gray and Tall (1994) found that the reasoning of the higher-achieving students was qualitatively different from that of
the lower-achieving students. In particular, the higher-achieving students were able to work successfully despite unresolved ambiguities about whether mathematical entities were concepts or
procedures. Lower-achieving students were unable to accept such ambiguities and could not work past them. By refusing to accept the ambiguities inherent in mathematics, the lower-achieving students
were, in fact, attempting a far more difficult form of mathematics, with a far greater cognitive demand.
A simple example may be illustrative here. When we write 6 1/2, the mathematical operation between the 6 and the 1/2 is actually addition, but when we write 6x, the implied operation between the 6
and the x is multiplication, and the relationship between the 6 and the 1 in 61 is different again. And yet, very few people who are successful in mathematics are aware of these inconsistencies or
differences in mathematical notation. In a very real sense, being successful in mathematics requires knowing what to worry about and what not to worry about. Students who do not understand what is
important and what is not important will be at a very real disadvantage.
In a study of twelve seventh-grade science classrooms, White and Frederiksen (1998) found that giving students time to talk about what would count as quality work, and how their work was likely to be
evaluated, cut the achievement gap between the highest- and lowest-achieving students in half and increased the average performance of the classes to such an extent that the weakest students in
the experimental group were outperforming all but the very strongest students in the control group.
This is why using a variety of examples of students’ work from other classes can be extremely powerful in helping students come to understand what counts as quality work. Many teachers have found
that students are better at spotting errors in the work of other students than they are at seeing them in their own work. By giving students examples of work at different standards, students can
begin to explore the differences between superior and inferior work, and these emergent understandings can be discussed with the whole class. As a result of such processes, students will develop a
“nose for quality” (Claxton 1995) that they will then be able to use in monitoring the quality of their own work.
2. Engineering effective classroom discussions, questions, activities, and tasks that elicit evidence of students’ learning
Once we know what it is that we want our students to learn, then it is important to collect the right sort of evidence about the extent of students’ progress toward these goals, but few teachers plan
the kinds of tasks, activities, and questions that they use with their students specifically to elicit the right kind of evidence of students’ learning. As an example, consider the question shown in
figure 1 below.
Figure 1: Diagnostic item on elementary fractions
Diagram A is the obvious answer, but B is also correct. However, some students do not believe that one-quarter of B is shaded because of a belief that the shaded parts have to be adjoining. Students
who believe that one-quarter of C is shaded have not understood that one region shaded out of four is not necessarily a quarter. Diagram D is perhaps the most interesting here. One-quarter of this
diagram is shaded, although the pieces are not all equal; students who rely too literally on the “equal areas” definition of fractions will say that D is not a correct response. By crafting questions
that explicitly build in the undergeneralizations and overgeneralizations that students are known to make (Bransford, Brown, and Cocking 2000), we can get far more useful information about what to do
next. Furthermore, by equipping each student in the class with a set of four cards bearing the letters A, B, C, and D and by requiring all students to respond simultaneously with their answers, the
teacher can generate a very solid evidence base for deciding whether the class is ready to move on (Leahy et al. 2005). If every student responds with A, B, and D, then the teacher can move on with
confidence that the students have understood. If everyone simply responds with A, then the teacher may choose to reteach some part of the topic. The most likely response, however, is for some
students to respond correctly and for others to respond incorrectly, or incompletely. This provides the teacher with an opportunity to conduct a classroom discussion in which students with different
views can be asked to justify their selections.
Of course planning such questions takes time, but by investing the time before the lesson, the teacher is able to address students’ confusion during the lesson, with the students still in front of
him or her. Teachers who do not plan such questions are forced to put children’s thinking back on track through grading, thus dealing with the students one at a time, after they have gone away.
3. Providing feedback that moves learning forward
The research on feedback shows that much of the feedback that students receive has, at best, no impact on learning and can actually be counterproductive. Kluger and DeNisi (1996) reviewed more than
three thousand research reports on the effects of feedback in schools, colleges, and workplaces and found that only 131 studies were scientifically rigorous. In 50 of these studies, feedback actually
made people’s performance worse than it would have been without feedback. The principal feature of these studies was that feedback was, in the psychological jargon, “ego-involving.” In other words,
the feedback focused attention on the person rather than on the quality of the work, for example by giving scores, grades, or other forms of report that encouraged comparison with others. The
studies where feedback was most effective were those in which the feedback told participants not just what to do to improve but also how to go about it.
Given the emphasis on grading in U.S. schools, teachers may be tempted to offer comments alongside scores or grades. However, a number of studies (e.g., Butler 1987, 1988) have shown that when
comments are accompanied by grades or scores, students focus first on their own grade or score and then on those of their neighbors, so that grades with comments are no more effective than grades
alone, and much less effective than comments alone. The crucial requirement of feedback is that it should force the student to engage cognitively in the work.
Such feedback could be given orally, as in this example from Saphier (2005, p. 92):
Teacher: What part don’t you understand?
Student: I just don’t get it.
Teacher: Well, the first part is just like the last problem you did. Then we add one more variable. See if you can find out what it is, and I’ll come back in a few minutes.
Written feedback can support students in finding errors for themselves:
• There are 5 answers here that are incorrect. Find them and fix them.
• The answer to this question is … Can you find a way to work it out?
It can also identify where students might use and extend their existing knowledge:
• You’ve used substitution to solve all these simultaneous equations. Can you use elimination?
Other approaches (Hodgen and Wiliam 2006) include encouraging pupils to reflect:
• You used two different methods to solve these problems. What are the advantages and disadvantages of each?
• You have understood … well. Can you make up your own more difficult problems?
Another suggestion is to have students discuss their ideas with others:
• You seem to be confusing sine and cosine. Talk to Katie about how to work out the difference.
• Compare your work with Ali and write some advice to another student tackling this topic for the first time.
The important point in all this is that as well as “putting the ball back in the students’ court,” the teacher also needs to set aside time for students to read, respond to, and act on feedback.
4. Activating students as owners of their own learning
When teachers are told they are responsible for making sure that their students do well, the quality of their teaching deteriorates, as does their students’ learning (Deci et al. 1982). In contrast,
when students take an active part in monitoring and regulating their learning, then the rate of their learning is dramatically increased. Indeed, it is common to find studies in which the rate of
students’ learning is doubled, so that students learn in six months what students in control groups take a year to learn (Fontana and Fernandes 1994; Mevarech and Kramarski 1997).
In an attempt to integrate research on motivation, metacognition, self-esteem, self-efficacy, and attribution theory, Monique Boekaerts has proposed a dual-processing theory of student motivation and
engagement (Boekaerts 2006). When presented with a task, the student evaluates the task according to its interest, difficulty, cost of engagement, and so on. If the evaluation is positive, the
student is likely to seek to increase competence by engaging in the task. If the evaluation is negative, a range of outcomes is possible. The student may engage in the task but focus on
getting a good grade from the teacher instead of mastering the relevant material (e.g., by cheating) or the student may disengage from the task on the grounds that “it is better to be thought lazy
than dumb.” The important point for teachers is that to maximize learning, the focus needs to be on personal growth rather than on a comparison with others.
Practical techniques for getting students started include “traffic lights,” where students flash green, yellow, or red cards to indicate their level of understanding of a concept. Many teachers have
reported that initially, students who are focusing on well-being, rather than growth, display green, indicating full understanding, even though they know they are confused. However, when the teacher
asks students who have shown green cards to explain concepts to those who have shown yellow or red, students have a strong incentive to be honest!
5. Activating students as learning resources for one another
Slavin, Hurley, and Chamberlain (2003) have shown that activating students as learning resources for one another produces some of the largest gains seen in any educational interventions, provided two
conditions are met. The first is that the learning environment must provide for group goals, so that students are working as a group instead of just working in a group. The second condition is
individual accountability, so that each student is responsible for his or her contribution to the group, so there can be no “passengers.”
With regard to assessment, then, a crucial feature is that the assessment encourages collaboration among students while they are learning. To achieve this collaboration, the learning goals and
success criteria must be accessible to the students (see above), and the teacher must support the students as they learn how to help one another improve their work. One particularly successful format
for doing this has been the idea of “two stars and a wish.” The idea is that when students are commenting on the work of one another, they do not give evaluative feedback but instead have to identify
two positive features of the work (two “stars”) and one feature that they believe merits further attention (the “wish”). Teachers who have used this technique with students as young as five years old
have been astonished to see how appropriate the comments are, and because the feedback comes from a peer rather than someone in authority over them, the recipient of the feedback appears to be more
able to accept the feedback (in other words, they focus on growth rather than on preserving their well-being). In fact, teachers have told us that the feedback that students give to one another,
although accurate, is far more hard-hitting and direct than they themselves would have given. Furthermore, the research shows that the person providing the feedback benefits just as much as the
recipient because she or he is forced to internalize the learning intentions and success criteria in the context of someone else’s work, which is less emotionally charged than doing it in the context
of one’s own work.
The available research evidence suggests that considerable enhancements in student achievement are possible when teachers use assessment, minute-by-minute and day-by-day, to adjust their instruction
to meet their students’ learning needs. However, it is also clear that making such changes is much more than just adding a few routines to one’s normal practice. It involves a change of focus from
what the teacher is putting into the process and to what the learner is getting out of it, and the radical nature of the changes means that the support of colleagues is essential. Nevertheless, our
experiences to date suggest that the investment of effort in these changes is amply rewarded. Students are more engaged in class, achieve higher standards, and teachers find their work more
professionally fulfilling. As one teacher said, “I’m not babysitting any more.”
By Dylan Wiliam
Series Editor: Judith Reed
Boekaerts, Monique. “Self-Regulation and Effort Investment.” In Handbook of Child Psychology, Vol. 4: Child Psychology in Practice, 6th ed., edited by K. Ann Renninger and Irving E. Sigel, pp.
345–77. Hoboken, N.J.: John Wiley & Sons, 2006.
Bransford, John D., Ann L. Brown, and Rodney R. Cocking. How People Learn: Brain, Mind, Experience, and School. Washington, D.C.: National Academies Press, 2000.
Butler, Ruth. “Task-Involving and Ego-Involving Properties of Evaluation: Effects of Different Feedback Conditions on Motivational Perceptions, Interest and Performance.” Journal of Educational
Psychology 79, no. 4 (1987): 474–82.
———. “Enhancing and Undermining Intrinsic Motivation: The Effects of Task-Involving and Ego-Involving Evaluation on Interest and Performance.” British Journal of Educational Psychology 58 (1988):
Claxton, G. L. “What Kind of Learning Does Self-Assessment Drive? Developing a ‘Nose’ for Quality: Comments on Klenowski.” Assessment in Education: Principles, Policy and Practice 2, no. 3 (1995):
Deci, Edward L., N. H. Speigel, R. M. Ryan, R. Koestner, and M. Kauffman. “The Effects of Performance Standards on Teaching Styles: The Behavior of Controlling Teachers.” Journal of Educational
Psychology 74 (1982): 852–59.
Fontana, David., and M. Fernandes. “Improvements in Mathematics Performance as a Consequence of Self-Assessment in Portuguese Primary School Pupils.” British Journal of Educational Psychology 64, no.
4 (1994): 407–17.
Gray, Eddie M., and David O. Tall. “Duality, Ambiguity, and Flexibility: A ‘Proceptual’ View of Simple Arithmetic.” Journal for Research in Mathematics Education 25 (March 1994): 116–40.
Hodgen, Jeremy, and Dylan Wiliam. Mathematics inside the Black Box: Assessment for Learning in the Mathematics Classroom. London: NFER-Nelson, 2006.
Kluger, Avraham N., and Angelo DeNisi. “The Effects of Feedback Interventions on Performance: A Historical Review, a Meta-analysis, and a Preliminary Feedback Intervention Theory.” Psychological
Bulletin 119, no. 2 (1996): 254–84.
Leahy, Siobhan, Christine Lyon, Marnie Thompson, and Dylan Wiliam. (2005). “Classroom Assessment: Minute-by-Minute and Day-by-Day.” Educational Leadership 63, no. 3 (2005): 18–24.
Mevarech, Zemira R.., and Bracha Kramarski. “IMPROVE: A Multidimensional Method for Teaching Mathematics in Heterogeneous Classrooms.” American Educational Research Journal 34, no. 2 (1997): 365–94.
Sadler, D. Royce. “Formative Assessment and the Design of Instructional Systems.” Instructional Science 18, no. 2 (1989): 119–44.
Saphier, Jonathon. “Masters of Motivation.” In On Common Ground: The Power of Professional Learning Communities, edited by Richard DuFour, Robert Eaker, and Rebecca DuFour, pp. 85–113. Bloomington,
Ill.: National Education Service, 2005.
Slavin, Robert E., Eric A. Hurley, and Anne M. Chamberlain. “Cooperative Learning and Achievement.” In Handbook of Psychology, Vol. 7: Educational Psychology, edited by W. M. Reynolds and G. J.
Miller, pp. 177–98. Hoboken, N.J.: John Wiley & Sons, 2003.
White, Barbara Y., and John R. Frederiksen. “Inquiry, Modeling, and Metacognition: Making Science Accessible to All Students.” Cognition and Instruction 16, no. 1 (1998): 3–118.
Wiggins, Grant, and Jay McTighe. Understanding by Design. New York: Prentice Hall, 2000.
Wiliam, Dylan., and Marnie Thompson. “Integrating Assessment with Instruction: What Will It Take to Make It Work?” In The Future of Assessment: Shaping Teaching and Learning, edited by C. A. Dwyer.
Mahwah, N.J.: Lawrence Erlbaum Associates, 2007. | {"url":"http://www.nctm.org/news/content.aspx?id=11474","timestamp":"2014-04-19T15:34:53Z","content_type":null,"content_length":"46417","record_id":"<urn:uuid:e38f8c1c-6fd1-4842-bec5-007c2c698945>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00246-ip-10-147-4-33.ec2.internal.warc.gz"} |
Key-value apply
From HaskellWiki
I just wrote this function:
apply :: (Ord k) => k -> v -> (v -> v) -> [(k,v)] -> [(k,v)]
apply k v f ds =
    let (p1, px) = span ((k >) . fst) ds
        (p2, p3) = case px of
            []       -> ((k, v), [])
            (x : xs) -> if fst x == k
                            then ((k, f $ snd x), xs)
                            else ((k, v), x : xs)
    in p1 ++ (p2 : p3)
As you can see (?!), this takes a list of key/value pairs and processes it as follows:
• The function is given a key to look for.
• If the key is found, a function is applied to the associated value.
• If the key is not found, it is inserted (at the correct place) with a specified 'default value'.
Notice that if you start with a completely empty list, you can call apply several times and you will end up with a sorted list. (Note that apply uses the fact that the list is sorted to cut the
search short in the 'I can't find it' case - hence the Ord k context.) Does a function like this already exist somewhere? (Hoogle seems to indicate not.) Is this a special case of something more
general? Is there a better implementation? (The code isn't very readable as it is.) Can you think of a better name than just 'apply'? Have you ever had call to use such a function yourself?
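For anyone who wants to try it, here is a self-contained sketch of apply in use. The definition is repeated so the example stands alone; the counting use case and the keys are just illustrative:

```haskell
-- Definition repeated from above so this example runs on its own.
apply :: Ord k => k -> v -> (v -> v) -> [(k, v)] -> [(k, v)]
apply k v f ds =
    let (p1, px) = span ((k >) . fst) ds
        (p2, p3) = case px of
            []       -> ((k, v), [])
            (x : xs) -> if fst x == k
                            then ((k, f $ snd x), xs)
                            else ((k, v), x : xs)
    in p1 ++ (p2 : p3)

-- Count occurrences: insert a key with default 1, or bump an existing count.
main :: IO ()
main = do
    let step k = apply k 1 (+ 1)
    print (step "b" (step "a" (step "c" (step "b" []))))
    -- prints [("a",1),("b",2),("c",1)] -- keys come out sorted
```

Starting from the empty list, each call either inserts (k, 1) at the right place or applies (+ 1) to the existing value, so the result stays sorted throughout.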
When you are making excessive use of (key,value) pairs it is usually time to switch to Data.Map. Your apply is almost the same as Data.Map.insertWith, only that function has the type:
insertWith :: Ord k => (a -> a -> a) -> k -> a -> Map k a -> Map k a
Here the update function receives the new value as well. --Twanvl
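To illustrate the correspondence (assuming the standard containers package): because insertWith's update function also receives the newly supplied value, apply k v f maps roughly onto Map.insertWith (\_new old -> f old) k v. A sketch of the same counting pattern:

```haskell
import qualified Data.Map as Map

main :: IO ()
main = do
    -- apply k 1 (+ 1) becomes: insert 1 if the key is absent, else
    -- ignore the new value and increment the old one.
    let step k = Map.insertWith (\_new old -> old + 1) k 1
    print (Map.toList (step "b" (step "a" (step "c" (step "b" Map.empty)))))
    -- prints [("a",1),("b",2),("c",1)]
```

Unlike the list version, Map gives logarithmic rather than linear lookup and insertion, which matters once the association list grows.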