Engineering, Technology, and DIY
Pressure Drops In Pipes: Part 2, Series and Parallel
Posted by on January 13, 2010
This is Part 2 in a series on pressure drops in pipes. Part 1 covered the basic concepts and equations (1-4b). It is recommended to read that before proceeding, or to have it open alongside this
article for reference.
A single pipe is not usually sufficient to solve a real-life problem. Instead, systems of pipes, often quite complex, are used to achieve all desired results. The math, as a result, becomes more
difficult, but where would the fun be if it were easy?
Pipes in series are actually easy. Each length of pipe and each fitting needs to be accounted for, so a sum of all major and minor losses is taken to find the head loss (See Part 1, Eqn. 2), and from
that, pressure drop. Parallel pipes, like those depicted in Figure 1, need a bit more consideration.
Fluid flows in along a single pipe before branching off into any number of parallel paths. These paths may be the same, but more often than not, each will be a bit different. Eventually, they rejoin
and the fluid exits as one stream again. Finding the overall pressure drop between the entrance and exit isn’t quite as simple as adding everything up.
Consider Figure 1 once more, as the logic will be more easily understood with an example. There is some volumetric flow Q entering the system from the left. It branches, with a flow of Q1 in the
upper Path 1 and Q2 in the lower Path 2. The head loss for each particular branch can be found with the unknowns in terms of friction factors (f1, f2) and volumetric flow rates (Q1, Q2), provided the
pipe geometry and K factors for fittings are known.
At the point of branching, there is only one pressure. The same applies to the exit, as it would create a contradiction to have two different values at the same place. Therefore, the pressures entering
each branch and exiting each branch are the same. Similarly, the change in height, Δh, for each will be the same. These provide the key to solving the problem: if the pressure drops and Δh in each
branch are equal, then the head loss in each pipe is equal.
In this two-branch example, that means h1 = h2. Assuming the fluid is incompressible and that mass is conserved, the Q entering is equal to Q1 + Q2, which is also equal to the Q exiting. Therefore,
one can substitute to get rid of Q2 (or Q1). Rearranging for Q1 gives something like Equation 5. For simplicity, due to its pseudo-symmetric nature, L1 = L2, which cancel. Minor losses were also
assumed to cancel.
Now, actually solving for Q1 may be easy or hard depending on the type of flow. If the flow is laminar, Equation 4a can be used for friction factor, where it is an explicit function of Reynolds
Number, and therefore volumetric flow. The rest becomes a trivial plug and chug to find Q1, which could then be used to find h1 and pressure drop.
If the flow is turbulent, Equation 4b, the Colebrook-White Equation, must be used. This defines friction factor implicitly, so the solution is not so trivial; an iterative method must be used. The
steps are as follows:
• Select initial guesses for f1 and f2. Literature can help here to find likely values, or at least magnitudes.
• Solve for Q1 using the estimated values for friction factor.
• Find Q2 using Q2 = Q − Q1.
• Calculate Reynolds numbers Re1, Re2 from Q1, Q2.
• Calculate new f1 and f2 from the calculated Reynolds numbers. A similar iterative method could be used to solve the implicit Colebrook-White equation, but tools like WolframAlpha can be used.
• Check the results. If the magnitude of the difference between the initial guess and the calculated friction factor is below the required threshold, one can stop here and use these values for the final
calculation of head loss and pressure drop with either pipe. Otherwise, use the calculated friction factors as the input for Step 2 and repeat.
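To make the steps concrete, here is a minimal Python sketch of the iteration. Everything numeric below is a hypothetical placeholder (the actual geometry of Figure 1 isn't reproduced here), and it assumes the standard Darcy-Weisbach form of the major loss, h = f (L/D) v^2 / (2g), with equal diameters so that equal head losses give Q1/Q2 = sqrt(f2/f1):

```python
import math

g = 9.81                 # m/s^2
rho, mu = 998.0, 1.0e-3  # water at room temperature (assumed)
D = 0.05                 # both pipe diameters [m] (hypothetical)
eps = 4.5e-5             # absolute roughness [m] (hypothetical)
Q = 0.004                # total volumetric flow [m^3/s] (hypothetical)

def reynolds(Q_branch):
    v = Q_branch / (math.pi * D**2 / 4)
    return rho * v * D / mu

def colebrook(Re, f0=0.02, iters=50):
    # Fixed-point iteration on the implicit Colebrook-White equation
    # (this is the "similar iterative method" mentioned in Step 5).
    f = f0
    for _ in range(iters):
        f = (-2 * math.log10(eps / (3.7 * D) + 2.51 / (Re * math.sqrt(f)))) ** -2
    return f

f1 = f2 = 0.02                       # Step 1: initial guesses
for _ in range(20):
    # Equal head loss with L1 = L2 and cancelling minor losses implies
    # f1*Q1^2 = f2*Q2^2, i.e. Q1 = Q*sqrt(f2) / (sqrt(f1) + sqrt(f2)).
    Q1 = Q * math.sqrt(f2) / (math.sqrt(f1) + math.sqrt(f2))   # Step 2
    Q2 = Q - Q1                                                # Step 3
    f1_new = colebrook(reynolds(Q1))                           # Steps 4-5
    f2_new = colebrook(reynolds(Q2))
    if abs(f1_new - f1) < 1e-8 and abs(f2_new - f2) < 1e-8:    # Step 6
        break
    f1, f2 = f1_new, f2_new

print(Q1, Q2, f1, f2)
```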
As an exercise, try to solve the system shown in Figure 2. It is no longer symmetrical, so lengths and minor losses won’t cancel.
5 responses to “Pressure Drops In Pipes: Part 2, Series and Parallel”
2. Khan
April 4, 2011 at 9:09 am
Thanks for sharing this information about parallel lines. It is difficult to find sufficient resources indicating directly about this subject even in centrifugal pump and/or piping network books.
Stanford, CA Trigonometry Tutor
Find a Stanford, CA Trigonometry Tutor
...In addition, I've prepared many students for the SAT 1, SAT 2, and ACT standardized tests including some who achieved perfect scores. If my experience as an educator has taught me anything, it
has taught me that every student is different: different personalities, different motivations, differen...
14 Subjects: including trigonometry, chemistry, calculus, physics
...By nature of my coursework and extracurricular research, I also have extensive experience in lab work and spent a fair amount of my tutoring time assisting with lab coursework. I look forward
to tutoring for you!I took two semesters of organic chemistry and one semester of physical organic chemi...
24 Subjects: including trigonometry, reading, chemistry, calculus
...Part of my job on the West Valley College campus is to re-teach critical study skills. Most of my time is spent teaching adults to the value of organizing their lives so they can more easily
complete their assignments. A big part of my tutoring style works on the importance of reviewing, that a...
17 Subjects: including trigonometry, reading, English, geometry
...I worked as the Children's Programming Coordinator at the Center for the Homeless by Notre Dame for three years, where, among my many duties, I tutored students twice a week. I have also run
an after school program for grade school students, taught a current events class for inner city high scho...
27 Subjects: including trigonometry, chemistry, English, reading
...I am comfortable with and have ample experience tutoring students of all ages. I help students to thoroughly understand math so they can do A+ work. I believe that anybody who puts in the
effort can succeed in math.
5 Subjects: including trigonometry, geometry, algebra 2, prealgebra
Roller coasters
Grades: 4 - 8
Using the Giant Wall, students build a roller coaster and explore the concepts of gravity and energy. After a brief discussion with a facilitator about the forces involved in making roller coasters
work, students experiment with changing variables to investigate the effect of mass on speed, distance traveled, and energy.
General Outline of Program:
- Welcome to the Big Lab and an introduction to observation
- Construct a working model roller coaster on the Giant Wall
- Compare the class roller coasters and discuss how gravity and energy work in a roller coaster
- As a class, model an experiment to investigate whether a heavier or lighter scooter travels down a ramp faster
- Conduct experiments with ramps and roller coasters
- Discuss results and further explorations
California State Science Standards related to program:
4th Grade
Investigation and Experimentation: 6.a, 6.b, 6.c, 6.d
5th Grade
Investigation and Experimentation: 6.b, 6.c, 6.d, 6.e, 6.f, 6.g, 6.h
6th Grade
Investigation and Experimentation: 7.a, 7.b, 7.d, 7.e
7th Grade
Investigation and Experimentation: 7.a, 7.e
8th Grade
Motion: 1.a
Forces: 2.b, 2.c, 2.e
Investigation and Experimentation: 7.a, 7.b, 7.d, 7.e, 7.h
Mathematics: Alum Success
Katey has been accepted at Ohio State University where she will pursue her PhD in Health Communication.
Katey Price ('07)
[Photo caption: Gerald M. Samson Department of Mathematics Scholarship awardee Jamie Groos poses with Professor Tom Boger.]
Franklin F. and Wanda L. Otis Award
The Franklin F. Otis Scholarship was established in memory of one of the charter faculty of the Sault Branch of Michigan College of Mining and Technology. Franklin spent 30 years as a mathematics professor at Lake State, including 23 years as chair of the Mathematics Department. He was the University's first faculty athletic representative, serving 12 years during the early development of the Laker athletic program. He was instrumental in the development of the Great Lakes Intercollegiate Athletic Conference (GLIAC) and the Central Collegiate Hockey Association (CCHA) leagues in which Lake State is a member today. In 1999, Franklin Otis was posthumously inducted into the LSSU Athletic Hall of Fame for his invaluable contributions to the Laker athletic program. His named scholarship recognizes a hard-working sophomore, junior or senior enrolled in mathematics or computer and mathematical science programs. The recipient must submit a letter of application to the faculty of the mathematics department and have a minimum 2.5 overall GPA and a minimum 3.0 grade point average in mathematics or computer science courses.
Gerald M. Samson Department of Mathematics Scholarship
Named in honor of LSSU faculty member Gerald Samson, this scholarship is awarded to a deserving computer and mathematical science major.
The recipient is nominated and chosen by the mathematics faculty.
School of Mathematics and Computer Science Scholarship
This scholarship was established in 2001 by Galen Harrison to provide full-tuition to math and computer science majors. Professor Harrison
was a long-time LSSU faculty member who taught at Lake State from 1963 until his retirement in 1996.
Find the Power and Energy of a Capacitor
Capacitors store energy for later use. The instantaneous power of a capacitor is the product of its instantaneous voltage and instantaneous current. To find the instantaneous power of the capacitor, you need the following power definition, which applies to any device:
p(t) = v(t)i(t)
The subscript C denotes a capacitance device (surprise!). Substituting the capacitor's current, i_C(t) = C dv_C(t)/dt, into this equation gives you the following:
p_C(t) = C v_C(t) dv_C(t)/dt
The power is the rate at which energy w_C(t) is stored, so integrating that equation (assuming zero initial voltage) gives you the energy stored in a capacitor:
w_C(t) = (1/2)C[v_C(t)]^2
The energy equation implies that the energy stored in a capacitor is always positive. The capacitor absorbs power from a circuit when storing energy. The capacitor releases the stored energy when
delivering energy to the circuit.
For a numerical example, look at the top-left diagram shown here, which shows how the voltage changes across a 0.5-μF capacitor. Try calculating the capacitor’s energy and power.
The slope of the voltage change (time derivative) is the amount of current flowing through the capacitor. Because the slope is constant, the current through the capacitor is constant for the given
slopes. For this example, you calculate the slope for each time interval in the graph as follows:
Multiply the slopes by the capacitance (in farads) to get the capacitor current during each interval. The capacitance is 0.5 μF, or 0.5 × 10^–6 F, so here are the currents:
You see the graph of the calculated currents in the top-right diagram shown here.
You find the power by multiplying the current and voltage, resulting in the bottom-left graph shown here. Finally, you can find the energy by calculating (1/2)C[v_C(t)]^2. When you do this, you get the bottom-right graph shown here. Here, the capacitor's energy increases when it's absorbing power and decreases when it's delivering power.
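To tie the steps together, here is a small Python sketch of the same chain of calculations: current from the voltage slope, power from v times i, and energy from (1/2)Cv^2. The voltage breakpoints below are hypothetical stand-ins, since the article's figure isn't reproduced here:

```python
C = 0.5e-6  # capacitance in farads

# Hypothetical piecewise-linear voltage across the capacitor:
# (time [s], volts) breakpoints.
breakpoints = [(0e-3, 0.0), (1e-3, 4.0), (3e-3, 4.0), (4e-3, 0.0)]

for (t0, v0), (t1, v1) in zip(breakpoints, breakpoints[1:]):
    slope = (v1 - v0) / (t1 - t0)     # dv/dt, constant on each interval
    i = C * slope                     # i_C(t) = C dv_C/dt
    p0, p1 = v0 * i, v1 * i           # instantaneous power at the endpoints
    w0, w1 = 0.5 * C * v0**2, 0.5 * C * v1**2  # stored energy
    print(f"{t0*1e3:.0f}-{t1*1e3:.0f} ms: i = {i*1e3:.2f} mA, "
          f"p: {p0*1e3:.2f} -> {p1*1e3:.2f} mW, "
          f"w: {w0*1e6:.2f} -> {w1*1e6:.2f} uJ")
```

On the flat segment the current, and hence the power, is zero while the stored energy stays constant; on the falling segment the power is negative, i.e. the capacitor is delivering energy back to the circuit.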
Archives of the Caml mailing list > Message from Jacques Le Normand
From: Jacques Le Normand <rathereasy@g...>
Subject: GADT constructor syntax
Dear caml-list,
I would like to start a constructive discussion on the syntax of GADT
constructors of the ocaml gadt branch, which can be found at:
There are two separate issues:
1) general constructor form
option a)
type _ t =
TrueLit : bool t
| IntLit of int : int t
option b)
type _ t =
TrueLit : bool t
| IntLit : int -> int t
I'm open to other options. The branch has used option b) from the
start, but I've just switched to option a) to see what it's like.
Personal opinion:
I slightly prefer option b), because it makes it clear that it's a
gadt constructor right from the start. This is useful because the type
variables in gadt constructors are independent of the type parameters
of the type, consider:
type 'a t = Foo of 'a : 'b t
this, counter intuitively, creates a constructor Foo of type forall 'd
'e. 'd t -> 'e t.
2) explicit quantification of existential variables
option a)
leave existential variables implicitly quantified. For example:
type _ u = Bar of 'a t : u
type _ u = Bar : 'a t -> u
option b)
specifically quantify existential variables. For example:
type _ u = Bar of 'a. 'a t : u
type _ u = Bar : 'a. 'a t -> u
Currently, the branch uses option a).
Personal opinion: I prefer option a). This is for four reasons:
I) the scope of the explicitly quantified variable is not clear. For
example, how do you interpret:
type _ t = Bar of 'a. 'a : 'a t
type _ t = Bar : 'a. 'a -> 'a t
In one interpretation Bar has type forall 'a 'b. 'a -> 'b t and in
another interpretation it has type forall 'a. 'a -> 'a t. My
inclination would be to flag it as an error.
II) In the example of option b), the 'a variable is quantified as a
universal variable but, in patterns, it is used as an existential
variable. This is something I found very confusing in Haskell where
they actually use the 'forall' keyword.
III) option a) is the current Haskell GADT syntax and I've never heard
anyone complain about it
IV) I don't see how option b) improves either readability or bug prevention.
I look forward to hearing your opinions.
--Jacques Le Normand
Most Stringent Test for Location Parameter of a Random Number from Cauchy Density
Atiq-ur-Rehman, Atiq-ur-Rehman and Zaman, Asad (2008): Most Stringent Test for Location Parameter of a Random Number from Cauchy Density.
Download (316Kb) | Preview
We study the test for the location parameter of a random number from a Cauchy density, focusing on point optimal tests. We develop an analytical technique to compute critical values and the power curve of a point optimal test, and we study the power properties of various point optimal tests. The problem turns out to be unusual in nature, in that the critical value of a test determines its power properties. We find that, for a given size and any point m in the alternative space, if the critical value of the test optimal for m is 1, then that test is the most stringent test.
Item Type: MPRA Paper
Original Title: Most Stringent Test for Location Parameter of a Random Number from Cauchy Density
Language: English
Keywords: Cauchy density, Power Envelope, Location Parameter, Stringent Test
Subjects: A - General Economics and Teaching > A2 - Economics Education and Teaching of Economics > A23 - Graduate
Item ID: 13492
Depositing User: Atiq-ur-Rehman
Date Deposited: 20. Feb 2009 08:41
Last Modified: 11. Feb 2013 23:38
URI: http://mpra.ub.uni-muenchen.de/id/eprint/13492
How can I compute E[max Xi | X1 < X2 < X3]?
March 30th 2009, 06:52 AM #1
How can I compute E[max Xi | X1 < X2 < X3]?
If Xi, i = 1, 2, 3, are independent exponential random variables with rates lambda_i, i = 1, 2, 3, find E[max Xi | X1 < X2 < X3].
How can I prove that the answer is 1/(lambda1 + lambda2 + lambda3) + 1/(lambda2 + lambda3) + 1/lambda3?
March 30th 2009, 10:47 PM #2
From what you've written I think you want the expected value of the largest order statistic from a sample of size three from an exponential distribution. If that's true, you should find the density of that order statistic, then integrate.
March 31st 2009, 09:10 AM #3
Nope, I was wrong. Your random variables have different parameters. But you can still derive the density of the largest of these three.
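For anyone curious, here is a quick Monte Carlo check of the claimed answer (just a sketch; the rates are arbitrary):

```python
import random

lam = [1.0, 2.0, 3.0]            # lambda_1, lambda_2, lambda_3 (arbitrary)
total, count = 0.0, 0
for _ in range(2_000_000):
    x = [random.expovariate(l) for l in lam]
    if x[0] < x[1] < x[2]:       # condition on the event X1 < X2 < X3
        total += x[2]            # on this event, max Xi = X3
        count += 1

estimate = total / count
exact = 1/sum(lam) + 1/(lam[1] + lam[2]) + 1/lam[2]
print(estimate, exact)           # both ~0.7 for these rates
```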
Probability, random variables, and stochastic processes
Results 1 - 10 of 620
- NEURAL COMPUTATION , 1995
, 1998
Cited by 662 (2 self)
Bilateral filtering smooths images while preserving edges, by means of a nonlinear combination of nearby image values. The method is noniterative, local, and simple. It combines gray levels or colors based on both their geometric closeness and their photometric similarity, and prefers near values to distant values in both domain and range. In contrast with filters that operate on the three bands of a color image separately, a bilateral filter can enforce the perceptual metric underlying the CIE-Lab color space, and smooth colors and preserve edges in a way that is tuned to human perception. Also, in contrast with standard filtering, bilateral filtering produces no phantom colors along edges in color images, and reduces phantom colors where they appear in the original image.
- NEURAL NETWORKS , 2000
- IEEE Transactions on Information Theory , 1992
Cited by 381 (9 self)
Most of a signal's information is often found in irregular structures and transient phenomena. We review the mathematical characterization of singularities with Lipschitz exponents. The main theorems that estimate local Lipschitz exponents of functions, from the evolution across scales of their wavelet transform, are explained. We then prove that the local maxima of a wavelet transform detect the location of irregular structures and provide numerical procedures to compute their Lipschitz exponents. The wavelet transform of singularities with fast oscillations has a different behavior that we study separately. We show that the size of the oscillations can be measured from the wavelet transform local maxima. It has been shown that one- and two-dimensional signals can be reconstructed from the local maxima of their wavelet transform [14]. As an application, we develop an algorithm that removes white noise by discriminating the noise and the signal singularities through an analysis of their ...
, 2003
Cited by 256 (7 self)
The random waypoint model is a commonly used mobility model in the simulation of ad hoc networks. It is known that the spatial distribution of network nodes moving according to this model is, in
general, nonuniform. However, a closed-form expression of this distribution and an in-depth investigation is still missing. This fact impairs the accuracy of the current simulation methodology of ad
hoc networks and makes it impossible to relate simulation-based performance results to corresponding analytical results. To overcome these problems, we present a detailed analytical study of the
spatial node distribution generated by random waypoint mobility. More specifically, we consider a generalization of the model in which the pause time of the mobile nodes is chosen arbitrarily in each
waypoint and a fraction of nodes may remain static for the entire simulation time. We show that the structure of the resulting distribution is the weighted sum of three independent components: the
static, pause, and mobility component. This division enables us to understand how the model s parameters influence the distribution. We derive an exact equation of the asymptotically stationary
distribution for movement on a line segment and an accurate approximation for a square area. The good quality of this approximation is validated through simulations using various settings of the
mobility parameters. In summary, this article gives a fundamental understanding of the behavior of the random waypoint model.
- IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE , 1989
Cited by 201 (13 self)
We describe a novel approach to curve inference based on curvature information. The inference procedure is divided into two stages: a trace inference stage, to which this paper is devoted, and a curve synthesis stage, which will be treated in a separate paper. It is shown that recovery of the trace of a curve requires estimating local models for the curve at the same time, and that tangent and curvature information are sufficient. These make it possible to specify powerful constraints between estimated tangents to a curve, in terms of a neighborhood relationship called cocircularity, and between curvature estimates, in terms of a curvature consistency relation. Because all curve information is quantized, special care must be taken to obtain accurate estimates of trace points, tangents and curvatures. This issue is addressed specifically by the introduction of a smoothness constraint and a maximum curvature constraint. The procedure is applied to two types of images: artificial images designed to evaluate curvature and noise sensitivity, and natural images.
- Image and Vision Computing , 1997
Cited by 196 (6 self)
Almost all problems in computer vision are related in one form or another to the problem of estimating parameters from noisy data. In this tutorial, we present what are probably the most commonly used techniques for parameter estimation. These include linear least-squares (pseudo-inverse and eigen analysis); orthogonal least-squares; gradient-weighted least-squares; bias-corrected renormalization; Kalman filtering; and robust techniques (clustering, regression diagnostics, M-estimators, least median of squares). Particular attention has been devoted to discussions about the choice of appropriate minimization criteria and the robustness of the different techniques. Their application to conic fitting is described. Keywords: Parameter estimation, Least-squares, Bias correction, Kalman filtering, Robust regression
- IEEE TRANS. INFORM. THEORY , 1999
Cited by 171 (30 self)
In this paper, we develop a new multiscale modeling framework for characterizing positive-valued data with long-range-dependent correlations (1/f noise). Using the Haar wavelet transform and a special
multiplicative structure on the wavelet and scaling coefficients to ensure positive results, the model provides a rapid O(N) cascade algorithm for synthesizing N-point data sets. We study both the
second-order and multifractal properties of the model, the latter after a tutorial overview of multifractal analysis. We derive a scheme for matching the model to real data observations and, to
demonstrate its effectiveness, apply the model to network traffic synthesis. The flexibility and accuracy of the model and fitting procedure result in a close fit to the real data statistics
(variance-time plots and moment scaling) and queuing behavior. Although for illustrative purposes we focus on applications in network traffic modeling, the multifractal wavelet model could be useful
in a number of other areas involving positive data, including image processing, finance, and geophysics.
, 1998
Cited by 160 (37 self)
This article illustrates the potential learning capabilities of purely local learning and offers an interesting and powerful approach to learning with receptive fields.
OpenCL Binary | Effective Memory Usage
06-10-2012, 10:10 PM #1
OpenCL Binary | Effective Memory Usage
Hi Community!
Right now I am trying to handle a problem, and my Google research did not work out. I do some calculations of Voronoi cells (a nearest-neighbour problem). For this, I write into my result array, which is quite big (number of points > 250,000).
I only want to store true/false to keep the information about closest neighbours (i.e. point 12 is a neighbour of point 13 => 12/13 = 1).
I compressed this information to a uint8 linear array (values 0-255) and I do bit manipulations to store 8 neighbour flags per byte.
Furthermore the array grows with the Gaussian sum:
Matrix looks like:
nachbar = zeros(array_size, dtype=uint8)
nachbar_buf = cl.Buffer(ctx, mf.READ_WRITE | mf.USE_HOST_PTR, hostbuf=nachbar)
int index = (cur_compare_pnt-1)*cur_compare_pnt/2 + cur_pnt;
int main_index = index / 8;
int sub_index = index % 8;
nachbar_buf[main_index] |= 1<<sub_index;
I am not sure if this was understandable...
My question is: Is there a way to use a linear bool array in OpenCL? Or is there a better way at all?
Re: OpenCL Binary | Effective Memory Usage
Is it even worth compressing? 250k is nothing on modern hardware. Of course, if this grouping makes the next stage of processing that much more efficient that is another matter.
But you have some options:
- do a full byte's worth in each thread. Less parallelism, but with a big enough problem it should still be OK
- use atomic_or to merge results from each group of N threads (a rough sketch of this option follows below)
- use a reduction algorithm to perform the bit merging without atomics
For atomics you probably want to use local memory as a staging area, as with the reduction algorithm. The reduction algorithm might be better as atomics can be slow, although most hardware has specialised units which make certain operations fast.
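For illustration, here is a minimal PyOpenCL sketch of the atomic_or option. It is untested and everything specific (kernel name, sizes, the random indices) is made up; it also packs into 32-bit words because OpenCL 1.1 core atomics work on int/uint, not uchar:

```python
import numpy as np
import pyopencl as cl

ctx = cl.create_some_context()
queue = cl.CommandQueue(ctx)

kernel_src = """
__kernel void mark_neighbour(__global volatile uint *flags,
                             __global const uint *indices)
{
    uint index = indices[get_global_id(0)];      // linear bit index to set
    atomic_or(&flags[index / 32], 1u << (index % 32));
}
"""
prg = cl.Program(ctx, kernel_src).build()

n_bits = 1 << 20                                 # hypothetical problem size
flags = np.zeros(n_bits // 32, dtype=np.uint32)
indices = np.random.randint(0, n_bits, 4096).astype(np.uint32)

mf = cl.mem_flags
flags_buf = cl.Buffer(ctx, mf.READ_WRITE | mf.COPY_HOST_PTR, hostbuf=flags)
idx_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=indices)

prg.mark_neighbour(queue, (indices.size,), None, flags_buf, idx_buf)
cl.enqueue_copy(queue, flags, flags_buf)
```

Staging the ORs in local memory per work-group first (then one atomic per word to global memory) would cut the global atomic traffic, as suggested above.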
Re: OpenCL Binary | Effective Memory Usage
Hi notzed,
thanks for your answer. The problem is not the 250k points; the problem is that I have to store the nearest-neighbour information. The amount of data is given by
size = 1/2 * N * (N - 1). For 250k points this is ca. 31G connections (so ca. 31 GB).
Right now I am trying to work with semaphores and atomics in general, but somehow I am too unskilled to be successful.
Does somebody have a nice tutorial for this kind of problem?
Re: OpenCL Binary | Effective Memory Usage
Hi notzed,
"do a full byte's worth in each thread. Less parallelism, but with a big enough problem it should still be OK"
I was able to apply this idea to some other code, which creates large matrices in parallel.
__kernel void create_big_ones( __global unsigned char *big_ones)
{
    int main_index = get_global_id(0);
    for( int sub_index = 0; sub_index < 8; sub_index++)
    {
        int index = sub_index + main_index * 8;
        float id1_temp = sqrt(2 * index + 0.25f) + 0.5f;
        int id1 = (int) id1_temp;
        int id0 = index - (id1-1)*id1/2;
        //printf("%d %d %d\n", index, main_index, sub_index);
        //printf("Points to check: %d %d %d\n", id0, id1, index);
        // note: index / 8 == main_index here, so no recomputation is needed
        if(id0 == 1)
            big_ones[main_index] |= 1<<sub_index;
    }
}
This code works perfectly in parallel. But this idea is not applicable to my nearest-neighbour problem, since I cannot check a specific connection (i.e., are point 12 and point 14 neighbours?). I start the code with a specific point and get a list of neighbours. So, I do not know which byte will be modified in advance.
Ciao and thanks
East Rutherford SAT Math Tutor
Find an East Rutherford SAT Math Tutor
...I can help you learn the basics of Word or more advanced tools like MailMerge. I have prepared and given hundreds of presentations with PowerPoint. I can help you understand the basics or give
pizzazz to your presentation.
36 Subjects: including SAT math, English, reading, writing
...For the last year, I have tutored college students in Calculus I and Calculus 2. I feel very confident tutoring this subject. I have been tutoring students grades K-5 for the last 5 years, in
addition to middle and high school students.
19 Subjects: including SAT math, calculus, geometry, biology
...I have helped many students as a private tutor in both mathematics and Spanish. My areas of knowledge range from Spanish grammar, phonetics, syntax, Spanish conversation and Business Spanish.
As a math tutor my areas of knowledge range from Algebra, trigonometry, geometry, Differential Calculus...
9 Subjects: including SAT math, Spanish, calculus, geometry
...I attend synagogue at least once a week and pray in Hebrew, and I understand what I am saying. On occasion I read from the Torah scroll, also in Hebrew, and I understand what I am reading.
Since the age of six I have been studying the Bible (Tanakh), Mishnah and other sacred texts in the original Hebrew.
25 Subjects: including SAT math, chemistry, writing, biology
...Ask me for more specifics, if you are interested! My Bio...I grew up in St. Louis, MO, before moving to NYC for college at Columbia University.
27 Subjects: including SAT math, reading, English, grammar
Math 102
Math 102 CITYSCAPE: Analyzing Urban Data
The target audience is those students whose weakest proficiency areas are numerical and statistical reasoning. Students apply basic concepts of probability and statistics to actual data. Probability
concepts include theoretical and empirical definitions, as well as the addition and multiplication principles and expected value. Statistical concepts include the usual measures of central tendency
and dispersion of data and various graphical displays of these relationships, such as box plots, stem plots, and frequency histograms. Examples of graphs from the media are also examined with respect
to honesty and clarity of the relationships shown. [1/2 course credit if C- or better]
how to solve smaz time machine
Just to say how I learned: I had trouble when I first started learning how to solve the 3x3 cube with algorithms and understanding U, Ui, R, Ri, etc. I had to put it in a way that was easier for me to understand, like U = T with an arrow to the left, Ui = T with an arrow to the right, R = R with an arrow up, Ri = R with an arrow down, etc. I am slowly learning the right way, but it's still hard. I can solve a lot of puzzles, but I just had to learn in a different way. I am better at hands-on, watching someone, than reading how.
Any help on this would be great. Thank you.
edit: yes, i understand
It can happen that everything is solved except two adjacent pieces reversed...
Any ideas, guys? I don't think this is solvable with this algorithm.
A safe way to solve the last 2 clocks:
In 4x4-like commutator/conjugate notation: [[U2:[Rw:U]],Bw]
Written out in full: [U2 Rw U Rw' U2' Bw U2 Rw U' Rw' U2' Bw']
Here is the definition of my notation since my notation is perhaps ambiguous:
U2 = turn the top clockface 2 "clicks" clockwise (60 degrees rotation)
Rw = turn the right side of the 2x2, including the right clock-face 90 degrees clockwise
I *think* it should 3-cycle 2 adjacent pieces on the top clockface and 1 piece from the right clockface. Let me know if this doesn't work and/or if my notation is undecipherable.
how to solve the last "clock" guys?
sear70 wrote:
...Any help on this would be great, Thank You.
I have no video equipment but I made a sequence of pictures for you.
Please, read from left to right and from top to bottom.
You can click onto the picture to enlarge it.
(Notation follows closely the usual letter notation of WCA: U stands for the upper clock; D for the down clock; x2 is the usual notation for turning the whole cube around the (x) axis going through
the faces at your left and right.
Rw stand for "Right wide" meaning two layers at once. In my move sequence only one kind of Rw turns is used where the whole right half of the puzzles is turned by 180 degrees, written as Rw2)
The result is a 3-cycle of numbers on the U clock:
1 is now at the Position of the former 11.
11 went to the position of the former 12.
12 went to the position of the former 1.
I hope my still photos work equally well as a video.
Good luck!
You can vary this move sequence by replacing the number 1 by 2 or 3.
Rw2 U3 Rw2 D3' Rw2 U3 Rw2 U3' Rw2 U3' D3 Rw2 U3' can be used to deal with an odd permutation (like just two numbers are swapped).
My collection at: http://sites.google.com/site/twistykon/home
Thank you, the pictures helped a lot. I really need to sit down and learn the right way to read algorithms, but for me it is easier to take the info and reduce it to a way that I can understand; I have always had that trouble. I learn better when I can see what's going on. Thank you again for the help.
Hey Konrad, this is a great find. I too have a 12-move ([5,1] commutator) but it doesn't have the same nice property of all three cycled pieces being next to each other. That's quite useful.
Prior to using my real name I posted under the account named bmenrigh.
Thank you Konrad for sharing this and for the high quality of your pictures.
Acceleration of acceleration?
Hi everyone.
I was wondering about this.
If the position of an object changes in time, the object has velocity.
If the velocity of an object changes in time, the object is accelerating (decelerating)
If the acceleration of an object changes in time, could we hypothetically have acceleration of acceleration?
I have this scenario in my head:
An asteroid is at rest, far away from earth. The earth starts dragging the asteroid towards it. The asteroid will accelerate towards earth ever so slightly (since gravity depends on distance). The closer it gets to earth, not only will it go faster, but it will also accelerate faster.
So we will have the change of acceleration in time:
[itex]a\prime = \lim_{\Delta t\to\ 0}\frac{\Delta a}{\Delta t}[/itex]
So acceleration of acceleration would be in [itex]\frac{m}{s^3}[/itex] or rather [itex]\frac{\frac{m}{s^2}}{s}[/itex] if it appeals more.
Would this be useful? We could calculate the exact acceleration in any given moment as opposed to having the average acceleration.
But that's beside the point since we could always calculate the exact acceleration if we know how far it is from a planet.
But if we needed to know the exact speed of the asteroid after let's say 20 hours, we would get an incorrect answer if we treated the acceleration as if it were constant.
So [itex]a_1 = a_0 \pm a\prime t[/itex]
so if [itex]a_0 = 0[/itex] then [itex]a_1 = a\prime t[/itex]
and if [itex]v_1 = v_0 \pm at[/itex]
and if [itex]v_0 = 0[/itex] then [itex]v_1 = at[/itex]
=> [itex]v_1 = a\prime t^2[/itex] if the object is starting to move from rest
Does any of this make sense?
Waiting for someone to point out a flaw in this.
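For what it's worth, here is a quick numerical sanity check of these formulas in Python (just a sketch; it assumes a constant a′, and the value of a′ is arbitrary):

```python
# Numerically integrate a constant a' (the "acceleration of acceleration")
# and compare the result with the closed-form expressions.
j = 2.0                  # a' in m/s^3
dt, t_end = 1e-4, 20.0
a = v = t = 0.0
while t < t_end:
    a += j * dt          # acceleration grows as a' * t
    v += a * dt          # velocity accumulates the time-varying acceleration
    t += dt
print(a, j * t_end)                # both ~40: a = a' t checks out
print(v, 0.5 * j * t_end ** 2)     # both ~400: integration gives v = (1/2) a' t^2
```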
The standard baseball measures 2 11/16" in diameter. What is its volume (to the nearest hundredth)? _____cu. in.
what's the volume equation for a sphere?
that's better :) So find the radius...
we gonna divide
2 11/16
that's the diameter, so you need to divide 2 11/16 in half to get the radius.. .careful with the fractions :)
diameter = 2 11/16 = 2.6875 radius = diameter / 2 = 1.34375 Volume = (4/3) pi (radius^3) = (4/3) pi (1.34375^3) = (4/3) pi (2.4264) = 3.2352 pi = 10.1633 = 10.16 cu. in. <<-- rounded to nearest hundredths That's what I got... not sure where you got 43? But I could have made an error... might want to check it.
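And a quick check in Python, for what it's worth (a sketch):

```python
import math

d = 2 + 11/16                          # diameter in inches
print(round(math.pi * d**3 / 6, 2))    # sphere volume = pi d^3 / 6 -> 10.16
```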
Union of Two countably infinite sets
March 22nd 2011, 06:15 PM #1
Union of Two countably infinite sets
If A and B are both countably infinite then prove that $A\bigcup B$ is countably infinite.
I was working on this problem with a friend and we know that this means that there is a bijection between each set and the natural numbers, but where do we go from there?
Since A and B are both countable we know that there exist bijections from the natural numbers to the sets.
Let $A = \{a_1,a_2,a_3, \cdots\}$ and
Let $B = \{b_1,b_2,b_3, \cdots\}$
let $f:A \to \mathbb{N} \quad f(a_i)=i$ and
let $g:B \to \mathbb{N} \quad g(b_i)=i$
Now let $h:A \cup B \to \mathbb{N}$
$h(x)=\begin{cases}2i-1, & \text{if } x = a_i \in A \\ 2i, & \text{if } x = b_i \in B \end{cases}$
Now show that this is both 1-1 and onto
March 22nd 2011, 06:25 PM #2 | {"url":"http://mathhelpforum.com/discrete-math/175438-union-two-countably-infinite-sets.html","timestamp":"2014-04-18T19:09:15Z","content_type":null,"content_length":"34958","record_id":"<urn:uuid:787fd2f8-d215-46ca-803e-0d0bb4467057>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00049-ip-10-147-4-33.ec2.internal.warc.gz"} |
University of California
Math honors-level courses
Course guidance
Honors-level courses in mathematics must be at the mathematical analysis (precalculus) level or above. These courses should have three years of college-preparatory mathematics as prerequisite work.
Mathematical analysis that includes the development of the trigonometric and exponential functions can be certified for UC honors credit. If mathematical analysis is certified at the UC honors level,
there must be a section of the regular college-preparatory course offered as well. The honors-level course should be demonstrably more challenging than the regular college preparatory sections.
Calculus, with four years of college-preparatory mathematics as prerequisite, qualifies as an honors-level course if it is substantially equivalent to an AP Calculus course. Statistics, with a
three-year mathematics prerequisite, may also be approved for honors credit if it is substantially equivalent to an AP Statistics course. These two courses do not require a separate section in the
regular college-preparatory curriculum. Each honors-level course in mathematics must include a comprehensive final examination.
UC-approved honors math courses must also meet the general “a-g” honors-level course criteria.
Sample courses
Samples of honors-level courses approved in the “c” subject area are available for reference as you prepare your own course for UC approval.
Math Forum Discussions
Topic: Mathematica vs. Maple
Replies: 1 Last Post: Jun 14, 1996 2:43 AM
Re: Mathematica vs. Maple
Posted: Jun 14, 1996 2:43 AM
Stu Schaffner (sch@mitre.org) wrote:
: [snip]...
: I use Mathematica, am pleased with it, and have never had a chance to
: use Maple. One of the more interesting aspects of Mathematica is its
: underlying functional programming language. I never hear anyone
: compare the base languages of these systems. In Maple, can you manipulate
: patterns, rules, and lambda expressions, or are you stuck with loops
: and if-then-else?
I think this a very good point. I have used Maple a little, and
dabbled with Reduce. In Reduce, I found it very nice to program in a
mixture of procedures and LET rules. My impression is that in Maple
you are limited to a procedural programming style, whereas Mathematica
appears to have procedural, rule-based, and functional programming
styles as options. I would really appreciate some knowledgable
comment on this point, as it would seem to be a big limitation in
Maple, which otherwise I really like, esp. for its open library which
lets you view and modify the source of much of the stuff you use.
John Kot (jkot@rp.csiro.au) ,-_|\ CSIRO Div. of Radiophysics
tel: +61 2 372 4343 / \ PO Box 76
fax: +61 2 372 4446 \_,-._/ Epping, NSW 2121
http://www.rp.csiro.au/staff/jkot.html v Australia
Why does finitely presented imply quasi-separated ?
By the EGA definition, a morphism of schemes of finite presentation is required to be quasi-separated. As far as I can see, removing this requirement does not prevent from proving the basic
properties such as stability of the notion under composition, products, etc. So my question is :
where exactly in proving important theorems involving morphisms of finite presentation is the quasi-separated assumption crucial ?
Note that a morphism of finite type is not required to be quasi-separated.
All kinds of examples and counter-examples will be appreciated.
1 Answer
One of the main interests in finitely presented morphisms comes from the various theorems in EGA IV,8. They show that for many questions about morphisms of schemes and sheaves on them, the
condition of finite presentation allows one to reduce to a noetherian situation. For these theorems the assumption of quasi-separatedness is crucial.
Let me quickly try to explain why. The heart of the reduction to the noetherian case are theorems like the following: Let X over Spec A be a finitely presented scheme. Then there is a
subring $A_0$ of $A$ which is a finitely generated $\mathbb{Z}$-algebra (and in particular noetherian) and an $A_0$-scheme $X_0$ of finite presentation such that $X$ arises from $X_0$ via
the base-change $A_0\to A$. If $X$ is affine, this is pretty clear, as $X$ is defined by finitely many equations in an affine space over $A$. In order to pass from the affine case to the general case, it does NOT suffice to know that we can cover $X$ by finitely many affine pieces (which would be the assumption of quasi-compactness), but we also need that the glueing data for the affine pieces are somehow described by a finite number of equations. This is ensured by the assumption that the intersection of two affine pieces is quasi-compact, which corresponds precisely to the assumption that $X$ is quasi-separated over $A$.
I guess that these theorems were the reason for Grothendieck to include this condition in the definition of finitely presented.
Great ! That's exactly the kind of answer I was expecting. Thanks. – Matthieu Romagny Aug 26 '10 at 12:01
Note also the trivial "converse" that since every f. type map $f_0:X_0 \rightarrow S_0$ to a noetherian $S_0$ is q-septd, any $f:X \rightarrow S$ arising from such $f_0$ by base change must be q-septd. So q-sep'tdness is not only sufficient for a f. type map to admit descent to a "noetherian" situation, but also necessary. For example, if $A$ is a ring and $U$ is a non-qc open in Spec($A$) then the f.type $A$-scheme gluing $X$ of Spec($A$) to itself along $U$ cannot descend to a f.type scheme over a noetherian ring; stronger than saying the EGA method of proof doesn't apply. – BCnrd Aug 26 '10 at 12:27
small clarification: when I wrote about "sufficient for a f.type map.." I should have said "sufficient for a quasi-compact locally finitely presented map". – BCnrd Aug 31 '10 at 5:25
| {"url":"http://mathoverflow.net/questions/36737/why-does-finitely-presented-imply-quasi-separated?answertab=oldest","timestamp":"2014-04-16T16:18:23Z","content_type":null,"content_length":"54612","record_id":"<urn:uuid:42df6462-27b9-4048-b511-4d154ace07cf>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00652-ip-10-147-4-33.ec2.internal.warc.gz"}
surface of revolution
July 23rd 2008, 10:40 AM #1
surface of revolution
Given the function f(x) = e^-x, with x greater than or equal to 0: this finite area of an infinite region should produce a finite volume of revolution, but then an infinite surface of revolution,
a la Torricelli's Trumpet. Is this assumption of mine correct? And if so, what would the proof be like?
Also, I understand that the surface area of revolution would be I = 2*Pi*INT[e^-x * sqrt(1+e^-2x)]dx, with bounds 0 and infinity. However, I am not sure how to perform such an integration.
Lastly, this is not really a "homework help", it is just an interest that developed while doing a problem involving Torricelli's Trumpet, but I want to add that part in the write-up that must be
handed in. Sorry if this is jumbled/empty rambling.
Last edited by beahipcat; July 23rd 2008 at 10:56 AM. Reason: clarity
The volume from 0 to infinity is Pi/2.
The surface integral can be evaluated by doing a u sub.
Let u=e^(-x) and get:
2Pi*INT[sqrt(1+u^2)]du, u=0..1
You should get Pi(sqrt(2)+ln(1+sqrt(2)))
It is finite as well.
If you want a function that gives finite volume and infinite surface area, try
revolving y=1/x about the x-axis from 1 to infinity.
This is known as Gabriel's horn.
Sorry, LaTex is down.
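For reference, here are the steps of that substitution worked out (my addition, in LaTeX, since it was down in the thread): with u = e^(-x) we have du = -e^(-x) dx, and the limits x = 0..infinity become u = 1..0, which flips the sign:

$$S = 2\pi\int_0^\infty e^{-x}\sqrt{1+e^{-2x}}\,dx = 2\pi\int_0^1 \sqrt{1+u^2}\,du = \pi\left[u\sqrt{1+u^2} + \ln\left(u+\sqrt{1+u^2}\right)\right]_0^1 = \pi\left(\sqrt{2}+\ln(1+\sqrt{2})\right)$$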
Thank you very much; Torricelli's trumpet is the same as Gabriel's horn, which is how this question came about. I very much appreciate your help and time. However, shouldn't it be -2 pi followed by
the INT, given that du is -e^-x dx?
Last edited by beahipcat; July 23rd 2008 at 11:48 AM.
When you switch the limits of integration it becomes positive
| {"url":"http://mathhelpforum.com/calculus/44262-surface-revolution.html","timestamp":"2014-04-19T10:33:09Z","content_type":null,"content_length":"38722","record_id":"<urn:uuid:85173ad3-93db-4874-adf9-716d6987a1e1>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00433-ip-10-147-4-33.ec2.internal.warc.gz"}
Statistics Jokes and Humor
Gallery of Statistics Jokes
(A very extensive collection.)
Gary Ramseyer
A researcher asked an experienced statistician what procedure should be used to obtain the correlation between two normally distributed variables that were artificially dichotomized. Why did the
researcher suddenly rush from the statistician's office and run straight to the pharmacy to buy a bottle of carbon tet cleaning fluid?
The statistician told him a TETRACHORIC SOLUTION was appropriate for his problem!!!
Ben Shabad's Statistics Cartoon Page
Standard Deviation Man (Video)
Profession Jokes: Statistics
by David Shay
Example: In God we trust. All others must bring data.
Mathematical Jokes: Statistics
Example: If I had only one day left to live, I would live it in my statistics class:
it would seem so much longer.
Statistical Jokes from Fathom Resource Center
Example: The latest survey shows that 3 out of 4 people make up 75% of the world's population.
Statz Rappers: This is a cute video made by some statistics students.
Statistical One Liners
Example: The only time a pie chart is appropriate is at a baker's convention.
Using Humor to Teach Statistics
Using Humor in the Introductory Statistics Course
Hershey H. Friedman, Linda W. Friedman, and Taiwo Amoo
Using Humor to Teach Statistics: Must They Be Orthogonal?
by Richard G. Lomax and Seyed A. Moosavi | {"url":"http://davidmlane.com/hyperstat/humor.html","timestamp":"2014-04-20T15:51:12Z","content_type":null,"content_length":"5978","record_id":"<urn:uuid:b0ce58c9-b3ab-4274-afa4-aabc7cec81c9>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00163-ip-10-147-4-33.ec2.internal.warc.gz"} |
More integral help. What would be the anti derivative?
| {"url":"http://openstudy.com/updates/50494341e4b063f50cff495b","timestamp":"2014-04-16T13:43:39Z","content_type":null,"content_length":"65457","record_id":"<urn:uuid:13f712e8-14b6-47b0-8c94-9b0c3317ca17>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00629-ip-10-147-4-33.ec2.internal.warc.gz"}
On probability, statistics and journalism
This should pose a good teaser for any working reporter and news editor who uses percentages of chance on a day-to-day basis.
If a woman is given a positive screening result after a mammogram, which is bad, what is the probability that she does not have cancer?
Answer: 91 percent.
Why is this? Cambridge University professor David Spiegelhalter explains in the September UK edition of Wired (emphasis is mine):
Mammography correctly classifies around 90 percent of women who go for breast-cancer screening. So when a middle-aged woman is told she has a positive test result, what’s the probability she
doesn’t have cancer? The answer, which is surprising to most people, is around 91 per cent. The crucial missing piece of information is the size of her background risk.
So suppose she is from a population in which around one in 100 have breast cancer. Then, out of 100 such women tested, one would have breast cancer and will most likely test positive. But of the
99 who do not have breast cancer, we would still expect around ten to test positive — as the test is only 90 percent accurate. That makes 11 positive tests, only one of which involves cancer,
which gives a 10/11 = 91 percent probability that someone who tests positive does not have breast cancer.
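To make the counting concrete, here is a quick sketch of that arithmetic in Python (my own illustration; the 1-in-100 background rate and the 90%-accurate test are the figures quoted above):

# Mirrors the counting in the quote: out of 100 women screened,
# 1 has cancer (and is assumed to test positive), while 10% of
# the 99 healthy women also test positive.
women = 100
sick = 1
healthy = women - sick                 # 99
false_positives = healthy * 0.10       # roughly 10 healthy women test positive
true_positives = sick                  # the one real case is caught

p_no_cancer_given_positive = false_positives / (false_positives + true_positives)
print(f"{p_no_cancer_given_positive:.0%}")   # 91%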
Spiegelhalter writes that this is difficult to understand because it is difficult to understand: probability doesn't make intuitive sense, nor does it follow the rules of logic we think govern our
lives and the outcomes of decisions.
But he also correctly identifies a flaw in news reporting: the numerator (the number of events) is enthusiastically reported without mentioning the denominator (the number of times the event could
have happened).
So the health scare stories (mentioning no names) that are perpetually reported – invariably from unpublished, unreviewed studies and promoted by PR officers – are lacking in context and
thus utterly misleading. Ben Goldacre has been making this point for years.
Health scare stories may be right to mention that the relative risk of, for example, getting cancer from drinking/not drinking red wine/eating peanuts/reading the Daily Mail is increased or
decreased. But the absolute risk may be statistically unchanged when a person is considered as part of a wider population, not just the 1,000 or so who took part in the study.
News naturally focuses on the unlikely and the shocking – it would be boring otherwise. But that doesn’t necessarily mean it has to be misleading.
This was written by Patrick Smith. Posted on Friday, September 23, 2011, at 10:18 am. Filed under Journalism. Tagged Journalism. | {"url":"http://psmithjournalist.com/2011/09/on-probability-statistics-and-journalism/","timestamp":"2014-04-20T10:47:47Z","content_type":null,"content_length":"22766","record_id":"<urn:uuid:5730494e-0aa8-44f6-882f-8896db92858f>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00471-ip-10-147-4-33.ec2.internal.warc.gz"}
Vector representation VS Angle representation
I'm talking about skinning models. As in rigging them for animation.
(I reckon the whole industry is stupid, you should call it “rigging” not “skinning”)
When you're working with vectors, you get advantages and disadvantages.
When you're working with angles, you get advantages and disadvantages.
Like, how do I limit the head rotation so it can't spin all the way around facing backwards?
If I do it with a vector stick man, then how am I supposed to restrict the bone?
I do know that if I used a rotational angle tree I would be able to limit it between two scalars… it's just that I wanted to work with vectors.
Note my stickmen are just vertices and lines connecting them; then I bind the model with orientation matrices so it follows the skeleton.
This gives me ragdoll physics, it's just that now I'm stuck in vector representation and I don't know what to do…
I'd perhaps want mouselook in a first person situation, but I don't want you to be able to mouselook all the way around, and what if the body changes angles? I'd want to keep the head so it couldn't
spin back.
I realize if I did it with an angle hierarchy this would work, but then I possibly couldn't direct the head in the look vector.
[EDIT] turned out I could [/EDIT]
So what's a solution?
Don't tell me it'll take measuring angles from vectors, holy crap.
Oh who cares, I know how to fix it now… I just have to measure the x of the vector and the y of the vector, and convert it to rotations.
Then I could go from there.
Oh hang on I figured it out.
I just link the mouse vector to the actual body direction, then I can limit it to angles, yes. Then when the body moves right I just move the look left.
Then I can restrict it to the shoulders.
So I guess the moral of this story is maybe you should use a bit of both.
I'm talking about skinning models. As in rigging them for animation.
(I reckon the whole industry is stupid, you should call it "rigging" not "skinning")
Rigging is the process of defining a skeleton and assigning vertices to them. Skinning is the process of transforming the verts using the bone matrices. I have never seen those words used interchangeably.
Anyway, you should read up on joint constraints. Angles aren't that useful either, as they only let you constrain the rotations on the X, Y and Z axis, and not on arbitrary axes. What you could do
is define a constraint as an axis and a minimum and maximum value. You dot-product the axis with the bone direction, and the result should be between min and max.
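A minimal sketch of that idea (my own illustration; the function and parameter names are assumptions, not from any particular engine):

import numpy as np

def satisfies_constraint(bone_dir, axis, min_dot, max_dot):
    # bone_dir and axis are unit vectors, so their dot product is the
    # cosine of the angle between them; e.g. min_dot = 0.5 keeps the
    # bone within 60 degrees of the constraint axis.
    d = float(np.dot(bone_dir, axis))
    return min_dot <= d <= max_dot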
What's it got to do with what's on the skin anyway? I thought "skinning" was more to do with sub surface scattering.
Guess what this does: it sets instance.si[1], which says whether the object should spin left or right. The only problem is, if you measure the angle of a vector, the nearest "left" could actually be
quicker to reach by going right the other way around the circle… so this bit of logic compares the two ways around the circle, picks the smaller difference, and outputs "right" or "left."
float angle1,angle2;
if(angle2-angle1<(6.28f-angle2)+angle1) instance[j].si[1]=1; else instance[j].si[1]=2;
if(angle1-angle2<(6.28f-angle1)+angle2) instance[j].si[1]=2; else instance[j].si[1]=1;
Shit, what you said made me think though - so there's better ways than just YawPitchRoll matrices?
ehm excuse me
if(angle1-angle2<(6.28f-angle1)+angle2) instance[j].si[1]=1; else instance[j].si[1]=2;
if(angle2-angle1<(6.28f-angle2)+angle1) instance[j].si[1]=2; else instance[j].si[1]=1;
modification. :)
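A cleaned-up sketch of the same shortest-turn test (my own version; assumes angles in radians in [0, 2*pi), with the modulo handling wrap-around explicitly, and the 1/2 output convention following the snippet above):

import math

TWO_PI = 2.0 * math.pi

def turn_direction(angle1, angle2):
    # Return 1 or 2 depending on which way around the circle is the
    # shorter arc from angle1 to angle2 (matching si[1] above).
    diff = (angle2 - angle1) % TWO_PI    # forward arc length, in [0, 2*pi)
    return 1 if diff < TWO_PI - diff else 2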
What's it got to do with what's on the skin anyway?
You pull the skin (the mesh) over the bones. Much like skinning an animal, although the other way around.
I thought “skinning” was more to do with sub surface scattering.
No it has nothing to do with that. That’s just skin rendering.
Shit, what you said made me think though - so there's better ways than just YawPitchRoll matrices?
Why yaw-pitch-roll matrices if you could simply use general matrices? And some use quaternions, which are faster to multiply than matrices, which is especially handy given the hierarchical nature of a skeleton. | {"url":"http://devmaster.net/posts/18603/vector-representation-vs-angle-representation","timestamp":"2014-04-16T04:18:00Z","content_type":null,"content_length":"25842","record_id":"<urn:uuid:d93c5fcb-cccb-4d00-b685-353771e3d103>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00086-ip-10-147-4-33.ec2.internal.warc.gz"}
Composition of transformations
• Order matters! (rotation * translation ≠ translation * rotation)
• Composition of transformations = matrix multiplication: if T is a rotation and S is a scaling, then applying scaling first and rotation second is the same as applying the transformation given by
the matrix TS (note the order)
• Reversing the order does not work in most cases
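A quick numerical check of the order dependence (my own illustration, not part of the original slide):

import numpy as np

theta = np.pi / 4
R = np.array([[np.cos(theta), -np.sin(theta)],    # rotation by 45 degrees
              [np.sin(theta),  np.cos(theta)]])
S = np.diag([2.0, 1.0])                           # non-uniform scaling

print(np.allclose(R @ S, S @ R))                  # False: TS and ST differ

| {"url":"http://www.mrl.nyu.edu/~dzorin/rendering/lectures/lecture3/tsld023.htm","timestamp":"2014-04-19T17:13:37Z","content_type":null,"content_length":"1248","record_id":"<urn:uuid:c7fb5423-bf46-47d5-9679-d3153c778baa>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00323-ip-10-147-4-33.ec2.internal.warc.gz"}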
Generating a CRS-based ILU(k) Incomplete Factorization
Next: Parallelism Up: Sparse Incomplete Factorizations Previous: CRS-based Factorization Transpose
Incomplete factorizations with several levels of fill allowed are more accurate than the simpler factorizations described in the preceding sections (see for instance [80]).
As a preliminary, we need an algorithm for adding two compressed sparse vectors. Let x hold the nonzero values of the first vector, let lx be the number of nonzero components in x, and let xind be
an integer array such that xind(i) gives the index of the i-th nonzero component. Similarly, let ly, y, and yind describe the second vector.
We now add y to x by first copying y into a full-length work vector w, then adding w to x. The total number of operations will be O(lx + ly):
% copy y into w
for i=1,ly
    w( yind(i) ) = y(i)
% add w to x wherever x is already nonzero
for i=1,lx
    if w( xind(i) ) <> 0
        x(i) = x(i) + w( xind(i) )
        w( xind(i) ) = 0
% add w to x by creating new components
% wherever x is still zero
for i=1,ly
    if w( yind(i) ) <> 0 then
        lx = lx+1
        xind(lx) = yind(i)
        x(lx) = w( yind(i) )
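A direct transcription into Python (my own sketch; 0-based indexing, with x and xind grown as lists). One detail the pseudocode leaves implicit is resetting w in the last loop, so the work vector is all zero again and can be reused:

def sparse_add(x, xind, y, yind, w):
    # x <- x + y for sparse vectors stored as (values, indices);
    # w is a full-length scratch vector assumed all zero on entry.
    for i in range(len(yind)):         # copy y into w
        w[yind[i]] = y[i]
    for i in range(len(xind)):         # add w where x is already nonzero
        if w[xind[i]] != 0:
            x[i] += w[xind[i]]
            w[xind[i]] = 0
    for i in range(len(yind)):         # create new components of x
        if w[yind[i]] != 0:
            xind.append(yind[i])
            x.append(w[yind[i]])
            w[yind[i]] = 0             # restore w for reuse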
In order to add a sequence of vectors x ← x + Σ_k y^(k), all of the y vectors can be accumulated into w before the two loops that update x are executed.
For a slight refinement of the above algorithm, we will add levels to the nonzero components: we assume integer vectors xlev and ylev of length lx and ly respectively, and a full length level vector
wlev corresponding to w. The addition algorithm then becomes:
% copy y into w
for i=1,ly
    w( yind(i) ) = y(i)
    wlev( yind(i) ) = ylev(i)
% add w to x wherever x is already nonzero;
% don't change the levels
for i=1,lx
    if w( xind(i) ) <> 0
        x(i) = x(i) + w( xind(i) )
        w( xind(i) ) = 0
% add w to x by creating new components
% wherever x is still zero;
% carry over levels
for i=1,ly
    if w( yind(i) ) <> 0 then
        lx = lx+1
        x(lx) = w( yind(i) )
        xind(lx) = yind(i)
        xlev(lx) = wlev( yind(i) )
We can now describe the ILU(k) factorization. The algorithm starts out with the matrix A, and gradually builds up a factorization M of the form M = (D + L)(I + D^{-1}U), with L stored in the strict
lower triangle of the array for M, the pivots D (inverted, in the sparse code below) on its diagonal, and D^{-1}U in its strict upper triangle. The particular form of the factorization is chosen to
minimize the number of times that the full vector w is copied back to sparse form.
Specifically, we use a sparse form of the following factorization scheme:
for k=1,n
    for j=1,k-1
        for i=j+1,n
            a(k,i) = a(k,i) - a(k,j)*a(j,i)
    for j=k+1,n
        a(k,j) = a(k,j)/a(k,k)
This is a row-oriented version of the traditional `left-looking' factorization algorithm.
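As a sanity check (my own illustration, not part of the original text): run to completion on a small dense matrix, the scheme above leaves D on the diagonal, L strictly below it and D^{-1}U strictly above it, and (D + L)(I + D^{-1}U) reproduces A exactly:

import numpy as np

def row_factor(a):
    a = a.astype(float).copy()
    n = a.shape[0]
    for k in range(n):
        for j in range(k):                # eliminate using earlier rows
            for i in range(j + 1, n):
                a[k, i] -= a[k, j] * a[j, i]
        for j in range(k + 1, n):         # normalize the row of U
            a[k, j] /= a[k, k]
    return a

A = np.array([[4., 2., 1.], [6., 5., 3.], [2., 1., 4.]])
F = row_factor(A)
M = np.tril(F) @ (np.eye(3) + np.triu(F, 1))      # (D+L)(I + D^-1 U)
assert np.allclose(M, A)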
We will describe an incomplete factorization that controls fill-in through levels of fill. Alternatively we could use a drop tolerance, but this is less attractive from an implementation point of
view. With fill levels we can perform the factorization symbolically at first, determining storage demands and reusing this information through a number of linear systems of the same sparsity
structure. Such preprocessing and reuse of information is not possible with fill controlled by a drop tolerance criterion.
The matrix arrays A and M are assumed to be in compressed row storage, with no particular ordering of the elements inside each row, but arrays adiag and mdiag point to the locations of the diagonal elements in each row.
for row=1,n
    % go through elements A(row,col) with col<row
    COPY ROW row OF A() INTO DENSE VECTOR w
    for col=aptr(row),aptr(row+1)-1
        if aind(col) < row then
            acol = aind(col)
            MULTIPLY ROW acol OF M() BY A(col)
            SUBTRACT THE RESULT FROM w
            ALLOWING FILL-IN UP TO LEVEL k
    INSERT w IN ROW row OF M()
    % invert the pivot
    M(mdiag(row)) = 1/M(mdiag(row))
    % normalize the row of U
    for col=mptr(row),mptr(row+1)-1
        if mind(col) > row
            M(col) = M(col) * M(mdiag(row))
The structure of a particular sparse matrix is likely to apply to a sequence of problems, for instance on different time-steps, or during a Newton iteration. Thus it may pay off to perform the above
incomplete factorization first symbolically to determine the amount and location of fill-in and use this structure for the numerically different but structurally identical matrices. In this case, the
array for the numerical values can be used to store the levels during the symbolic factorization phase.
Next: Parallelism Up: Sparse Incomplete Factorizations Previous: CRS-based Factorization Transpose
Jack Dongarra
Mon Nov 20 08:52:54 EST 1995 | {"url":"http://netlib.org/linalg/html_templates/node104.html","timestamp":"2014-04-20T13:18:15Z","content_type":null,"content_length":"9321","record_id":"<urn:uuid:8f0f7dd7-93c7-45a0-b8ff-caf4e24c66db>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00396-ip-10-147-4-33.ec2.internal.warc.gz"} |
Math::GSL::Linalg - Functions for solving linear systems
use Math::GSL::Linalg qw/:all/;
Here is a list of all the functions included in this module:
gsl_linalg_householder_hm($tau, $v, $A) - This function applies the Householder matrix P defined by the scalar $tau and the vector $v to the left-hand side of the matrix $A. On output the result P A
is stored in $A. The function returns 0 if it succeeded, 1 otherwise.
gsl_linalg_householder_mh($tau, $v, $A) - This function applies the Householder matrix P defined by the scalar $tau and the vector $v to the right-hand side of the matrix $A. On output the result A P
is stored in $A.
gsl_linalg_householder_hv($tau, $v, $w) - This function applies the Householder transformation P defined by the scalar $tau and the vector $v to the vector $w. On output the result P w is stored in $w.
gsl_linalg_complex_householder_hm($tau, $v, $A) - Does the same operation as gsl_linalg_householder_hm but with the complex matrix $A, the complex value $tau and the complex vector $v.
gsl_linalg_complex_householder_mh($tau, $v, $A) - Does the same operation as gsl_linalg_householder_mh but with the complex matrix $A, the complex value $tau and the complex vector $v.
gsl_linalg_complex_householder_hv($tau, $v, $w) - Does the same operation as gsl_linalg_householder_hv but with the complex value $tau and the complex vectors $v and $w.
gsl_linalg_hessenberg_decomp($A, $tau) - This function computes the Hessenberg decomposition of the matrix $A by applying the similarity transformation H = U^T A U. On output, H is stored in the
upper portion of $A. The information required to construct the matrix U is stored in the lower triangular portion of $A. U is a product of N - 2 Householder matrices. The Householder vectors are
stored in the lower portion of $A (below the subdiagonal) and the Householder coefficients are stored in the vector $tau. tau must be of length N. The function returns 0 if it succeeded, 1 otherwise.
gsl_linalg_hessenberg_unpack($H, $tau, $U) - This function constructs the orthogonal matrix $U from the information stored in the Hessenberg matrix $H along with the vector $tau. $H and $tau are
outputs from gsl_linalg_hessenberg_decomp.
gsl_linalg_hessenberg_unpack_accum($H, $tau, $V) - This function is similar to gsl_linalg_hessenberg_unpack, except it accumulates the matrix U into $V, so that V' = VU. The matrix $V must be
initialized prior to calling this function. Setting $V to the identity matrix provides the same result as gsl_linalg_hessenberg_unpack. If $H is order N, then $V must have N columns but may have
any number of rows.
gsl_linalg_hessenberg_set_zero($H) - This function sets the lower triangular portion of $H, below the subdiagonal, to zero. It is useful for clearing out the Householder vectors after calling
gsl_linalg_hesstri_decomp($A, $B, $U, $V, $work) - This function computes the Hessenberg-Triangular decomposition of the matrix pair ($A, $B). On output, H is stored in $A, and R is stored in $B. If
$U and $V are provided (they may be null), the similarity transformations are stored in them. Additional workspace of length N is needed in the vector $work.
gsl_linalg_SV_decomp($A, $V, $S, $work) - This function factorizes the M-by-N matrix $A into the singular value decomposition A = U S V^T for M >= N. On output the matrix $A is replaced by U. The
diagonal elements of the singular value matrix S are stored in the vector $S. The singular values are non-negative and form a non-increasing sequence from S_1 to S_N. The matrix $V contains the
elements of V in untransposed form. To form the product U S V^T it is necessary to take the transpose of V. A workspace of length N is required in vector $work. This routine uses the
Golub-Reinsch SVD algorithm.
gsl_linalg_SV_decomp_mod($A, $X, $V, $S, $work) - This function computes the SVD using the modified Golub-Reinsch algorithm, which is faster for M>>N. It requires the vector $work of length N and the
N-by-N matrix $X as additional working space. $A and $V are matrices while $S is a vector.
gsl_linalg_SV_decomp_jacobi($A, $V, $S) - This function computes the SVD of the M-by-N matrix $A using one-sided Jacobi orthogonalization for M >= N. The Jacobi method can compute singular values to
higher relative accuracy than Golub-Reinsch algorithms. $V is a matrix while $S is a vector.
gsl_linalg_SV_solve($U, $V, $S, $b, $x) - This function solves the system A x = b using the singular value decomposition ($U, $S, $V) of A given by gsl_linalg_SV_decomp. Only non-zero singular values
are used in computing the solution. The parts of the solution corresponding to singular values of zero are ignored. Other singular values can be edited out by setting them to zero before calling
this function. In the over-determined case where A has more rows than columns the system is solved in the least squares sense, returning the solution x which minimizes ||A x - b||_2.
gsl_linalg_LU_decomp($a, $p) - factorize the matrix $a into the LU decomposition PA = LU. On output the diagonal and upper triangular part of the input matrix A contain the matrix U. The lower
triangular part of the input matrix (excluding the diagonal) contains L. The diagonal elements of L are unity, and are not stored. The function returns two values: the first is 0 if the operation
succeeded, 1 otherwise, and the second is the sign of the permutation.
gsl_linalg_LU_solve($LU, $p, $b, $x) - This function solves the square system A x = b using the LU decomposition of the matrix A into (LU, p) given by gsl_linalg_LU_decomp. $LU is a matrix, $p a
permutation and $b and $x are vectors. The function returns 0 if the operation succeeded, 1 otherwise.
gsl_linalg_LU_svx($LU, $p, $x) - This function solves the square system A x = b in-place using the LU decomposition of A into (LU,p). On input $x should contain the right-hand side b, which is
replaced by the solution on output. $LU is a matrix, $p a permutation and $x is a vector. The function returns 0 if the operation succeeded, 1 otherwise.
gsl_linalg_LU_refine($A, $LU, $p, $b, $x, $residual) - This function applies an iterative improvement to $x, the solution of $A $x = $b, using the LU decomposition of $A into ($LU,$p). The initial
residual $r = $A $x - $b (where $x and $b are vectors) is also computed and stored in the vector $residual.
gsl_linalg_LU_invert($LU, $p, $inverse) - This function computes the inverse of a matrix A from its LU decomposition stored in the matrix $LU and the permutation $p, storing the result in the matrix $inverse.
gsl_linalg_LU_det($LU, $signum) - This function returns the determinant of a matrix A from its LU decomposition stored in the $LU matrix. It needs the integer $signum which is the sign of the
permutation returned by gsl_linalg_LU_decomp.
gsl_linalg_LU_lndet($LU) - This function returns the logarithm of the absolute value of the determinant of a matrix A, from its LU decomposition stored in the $LU matrix.
gsl_linalg_LU_sgndet($LU, $signum) - This function computes the sign or phase factor of the determinant of a matrix A, det(A)/|det(A)|, from its LU decomposition, $LU.
gsl_linalg_complex_LU_decomp($A, $p) - Does the same operation as gsl_linalg_LU_decomp but on the complex matrix $A.
gsl_linalg_complex_LU_solve($LU, $p, $b, $x) - This function solves the square system A x = b using the LU decomposition of A into ($LU, $p) given by gsl_linalg_complex_LU_decomp.
gsl_linalg_complex_LU_svx($LU, $p, $x) - Does the same operation as gsl_linalg_LU_svx but on the complex matrix $LU and the complex vector $x.
gsl_linalg_complex_LU_refine($A, $LU, $p, $b, $x, $residual) - Does the same operation as gsl_linalg_LU_refine but on the complex matrices $A and $LU and with the complex vectors $b, $x and $residual.
gsl_linalg_complex_LU_invert($LU, $p, $inverse) - Does the same operation as gsl_linalg_LU_invert but on the complex matrices $LU and $inverse.
gsl_linalg_complex_LU_det($LU, $signum) - Does the same operation as gsl_linalg_LU_det but on the complex matrix $LU.
gsl_linalg_complex_LU_lndet($LU) - Does the same operation as gsl_linalg_LU_lndet but on the complex matrix $LU.
gsl_linalg_complex_LU_sgndet($LU, $signum) - Does the same operation as gsl_linalg_LU_sgndet but on the complex matrix $LU.
gsl_linalg_QR_decomp($a, $tau) - factorize the M-by-N matrix A into the QR decomposition A = Q R. On output the diagonal and upper triangular part of the input matrix $a contain the matrix R. The
vector $tau and the columns of the lower triangular part of the matrix $a contain the Householder coefficients and Householder vectors which encode the orthogonal matrix Q. The vector tau must be
of length k= min(M,N).
gsl_linalg_QR_solve($QR, $tau, $b, $x) - This function solves the square system A x = b using the QR decomposition of A into (QR, tau) given by gsl_linalg_QR_decomp. $QR is matrix, and $tau, $b and
$x are vectors.
gsl_linalg_QR_svx($QR, $tau, $x) - This function solves the square system A x = b in-place using the QR decomposition of A into the matrix $QR and the vector $tau given by gsl_linalg_QR_decomp. On
input, the vector $x should contain the right-hand side b, which is replaced by the solution on output.
gsl_linalg_QR_lssolve($QR, $tau, $b, $x, $residual) - This function finds the least squares solution to the overdetermined system $A $x = $b where the matrix $A has more rows than columns. The least
squares solution minimizes the Euclidean norm of the residual, ||Ax - b||.The routine uses the $QR decomposition of $A into ($QR, $tau) given by gsl_linalg_QR_decomp. The solution is returned in
$x. The residual is computed as a by-product and stored in residual. The function returns 0 if it succeeded, 1 otherwise.
gsl_linalg_QR_QRsolve($Q, $R, $b, $x) - This function solves the system $R $x = $Q**T $b for $x. It can be used when the $QR decomposition of a matrix is available in unpacked form as ($Q, $R). The
function returns 0 if it succeeded, 1 otherwise.
gsl_linalg_QR_Rsolve($QR, $b, $x) - This function solves the triangular system R $x = $b for $x. It may be useful if the product b' = Q^T b has already been computed using gsl_linalg_QR_QTvec.
gsl_linalg_QR_Rsvx($QR, $x) - This function solves the triangular system R $x = b for $x in-place. On input $x should contain the right-hand side b and is replaced by the solution on output. This
function may be useful if the product b' = Q^T b has already been computed using gsl_linalg_QR_QTvec. The function returns 0 if it succeeded, 1 otherwise.
gsl_linalg_QR_update($Q, $R, $b, $x) - This function performs a rank-1 update $w $v**T of the QR decomposition ($Q, $R). The update is given by Q'R' = Q R + w v^T where the output matrices Q' and R'
are also orthogonal and right triangular. Note that w is destroyed by the update. The function returns 0 if it succeeded, 1 otherwise.
gsl_linalg_QR_QTvec($QR, $tau, $v) - This function applies the matrix Q^T encoded in the decomposition ($QR,$tau) to the vector $v, storing the result Q^T v in $v. The matrix multiplication is
carried out directly using the encoding of the Householder vectors without needing to form the full matrix Q^T. The function returns 0 if it succeeded, 1 otherwise.
gsl_linalg_QR_Qvec($QR, $tau, $v) - This function applies the matrix Q encoded in the decomposition ($QR,$tau) to the vector $v, storing the result Q v in $v. The matrix multiplication is carried out
directly using the encoding of the Householder vectors without needing to form the full matrix Q. The function returns 0 if it succeeded, 1 otherwise.
gsl_linalg_QR_QTmat($QR, $tau, $A) - This function applies the matrix Q^T encoded in the decomposition ($QR,$tau) to the matrix $A, storing the result Q^T A in $A. The matrix multiplication is
carried out directly using the encoding of the Householder vectors without needing to form the full matrix Q^T. The function returns 0 if it succeeded, 1 otherwise.
gsl_linalg_QR_unpack($QR, $tau, $Q, $R) - This function unpacks the encoded QR decomposition ($QR,$tau) into the matrices $Q and $R, where $Q is M-by-M and $R is M-by-N. The function returns 0 if it
succeeded, 1 otherwise.
gsl_linalg_R_solve($R, $b, $x) - This function solves the triangular system $R $x = $b for the N-by-N matrix $R. The function returns 0 if it succeeded, 1 otherwise.
gsl_linalg_R_svx($R, $x) - This function solves the triangular system $R $x = b in-place. On input $x should contain the right-hand side b, which is replaced by the solution on output. The function
returns 0 if it succeeded, 1 otherwise.
gsl_linalg_QRPT_decomp($A, $tau, $p, $norm) - This function factorizes the M-by-N matrix $A into the QRP^T decomposition A = Q R P^T. On output the diagonal and upper triangular part of the input
matrix contain the matrix R. The permutation matrix P is stored in the permutation $p. Two values are returned by this function: the first is 0 if the operation succeeded, 1 otherwise. The
second is sign of the permutation. It has the value (-1)^n, where n is the number of interchanges in the permutation. The vector $tau and the columns of the lower triangular part of the matrix $A
contain the Householder coefficients and vectors which encode the orthogonal matrix Q. The vector tau must be of length k=\min(M,N). The matrix Q is related to these components by, Q = Q_k ...
Q_2 Q_1 where Q_i = I - \tau_i v_i v_i^T and v_i is the Householder vector v_i = (0,...,1,A(i+1,i),A(i+2,i),...,A(m,i)). This is the same storage scheme as used by lapack. The vector norm is a
workspace of length N used for column pivoting. The algorithm used to perform the decomposition is Householder QR with column pivoting (Golub & Van Loan, Matrix Computations, Algorithm 5.4.1).
gsl_linalg_QRPT_decomp2($A, $q, $r, $tau, $p, $norm) - This function factorizes the matrix $A into the decomposition A = Q R P^T without modifying $A itself and storing the output in the separate
matrices $q and $r. For the rest, it operates exactly like gsl_linalg_QRPT_decomp
gsl_linalg_cholesky_decomp($A) - Factorize the symmetric, positive-definite square matrix $A into the Cholesky decomposition A = L L^T and store it into the matrix $A. The function returns 0 if the
operation succeeded, 1 otherwise.
gsl_linalg_cholesky_solve($cholesky, $b, $x) - This function solves the system A x = b using the Cholesky decomposition of A into the matrix $cholesky given by gsl_linalg_cholesky_decomp. $b and $x
are vectors. The function returns 0 if the operation succeeded, 1 otherwise.
gsl_linalg_cholesky_svx($cholesky, $x) - This function solves the system A x = b in-place using the Cholesky decomposition of A into the matrix $cholesky given by gsl_linalg_cholesky_decomp. On input
the vector $x should contain the right-hand side b, which is replaced by the solution on output. The function returns 0 if the operation succeeded, 1 otherwise.
gsl_linalg_complex_cholesky_decomp($A) - Factorize the symmetric, positive-definite square matrix $A which contains complex numbers into the Cholesky decomposition A = L L^T and stores it into the
matrix $A. The function returns 0 if the operation succeeded, 1 otherwise.
gsl_linalg_complex_cholesky_solve($cholesky, $b, $x) - This function solves the system A x = b using the Cholesky decomposition of A into the matrix $cholesky given by
gsl_linalg_complex_cholesky_decomp. $b and $x are vectors. The function returns 0 if the operation succeeded, 1 otherwise.
gsl_linalg_complex_cholesky_svx($cholesky, $x) - This function solves the system A x = b in-place using the Cholesky decomposition of A into the matrix $cholesky given by
gsl_linalg_complex_cholesky_decomp. On input the vector $x should contain the right-hand side b, which is replaced by the solution on output. The function returns 0 if the operation succeeded, 1 otherwise.
gsl_linalg_symmtd_decomp($A, $tau) - This function factorizes the symmetric square matrix $A into the symmetric tridiagonal decomposition Q T Q^T. On output the diagonal and subdiagonal part of the
input matrix $A contain the tridiagonal matrix T. The remaining lower triangular part of the input matrix contains the Householder vectors which, together with the Householder coefficients $tau,
encode the orthogonal matrix Q. This storage scheme is the same as used by lapack. The upper triangular part of $A is not referenced. $tau is a vector.
gsl_linalg_symmtd_unpack($A, $tau, $Q, $diag, $subdiag) - This function unpacks the encoded symmetric tridiagonal decomposition ($A, $tau) obtained from gsl_linalg_symmtd_decomp into the orthogonal
matrix $Q, the vector of diagonal elements $diag and the vector of subdiagonal elements $subdiag.
gsl_linalg_symmtd_unpack_T($A, $diag, $subdiag) - This function unpacks the diagonal and subdiagonal of the encoded symmetric tridiagonal decomposition ($A, $tau) obtained from
gsl_linalg_symmtd_decomp into the vectors $diag and $subdiag.
gsl_linalg_hermtd_decomp($A, $tau) - This function factorizes the hermitian matrix $A into the symmetric tridiagonal decomposition U T U^T. On output the real parts of the diagonal and subdiagonal
part of the input matrix $A contain the tridiagonal matrix T. The remaining lower triangular part of the input matrix contains the Householder vectors which, together with the Householder
coefficients $tau, encode the orthogonal matrix Q. This storage scheme is the same as used by lapack. The upper triangular part of $A and imaginary parts of the diagonal are not referenced. $A is
a complex matrix and $tau a complex vector.
gsl_linalg_hermtd_unpack($A, $tau, $U, $diag, $subdiag) - This function unpacks the encoded tridiagonal decomposition ($A, $tau) obtained from gsl_linalg_hermtd_decomp into the unitary complex matrix
$U, the real vector of diagonal elements $diag and the real vector of subdiagonal elements $subdiag.
gsl_linalg_hermtd_unpack_T($A, $diag, $subdiag) - This function unpacks the diagonal and subdiagonal of the encoded tridiagonal decomposition (A, tau) obtained from the gsl_linalg_hermtd_decomp into
the real vectors $diag and $subdiag.
gsl_linalg_HH_solve($a, $b, $x) - This function solves the system $A $x = $b directly using Householder transformations where $A is a matrix, $b and $x vectors. On output the solution is stored in $x
and $b is not modified. $A is destroyed by the Householder transformations.
gsl_linalg_HH_svx($A, $x) - This function solves the system $A $x = b in-place using Householder transformations where $A is a matrix, $b is a vector. On input $x should contain the right-hand side
b, which is replaced by the solution on output. The matrix $A is destroyed by the Householder transformations.
You have to add the functions you want to use inside the qw/put_function_here/ with spaces between each function. You can also write use Math::GSL::Linalg qw/:all/ to use all available functions of the module.
For more information on these functions, we refer you to the official GSL documentation: http://www.gnu.org/software/gsl/manual/html_node/
This example shows how to compute the determinant of a matrix with the LU decomposition:
use Math::GSL::Matrix qw/:all/;
use Math::GSL::Permutation qw/:all/;
use Math::GSL::Linalg qw/:all/;
my $Matrix = gsl_matrix_alloc(4,4);
map { gsl_matrix_set($Matrix, 0, $_, $_+1) } (0..3);
gsl_matrix_set($Matrix,1, 0, 2);
gsl_matrix_set($Matrix, 1, 1, 3);
gsl_matrix_set($Matrix, 1, 2, 4);
gsl_matrix_set($Matrix, 1, 3, 1);
gsl_matrix_set($Matrix, 2, 0, 3);
gsl_matrix_set($Matrix, 2, 1, 4);
gsl_matrix_set($Matrix, 2, 2, 1);
gsl_matrix_set($Matrix, 2, 3, 2);
gsl_matrix_set($Matrix, 3, 0, 4);
gsl_matrix_set($Matrix, 3, 1, 1);
gsl_matrix_set($Matrix, 3, 2, 2);
gsl_matrix_set($Matrix, 3, 3, 3);
my $permutation = gsl_permutation_alloc(4);
my ($result, $signum) = gsl_linalg_LU_decomp($Matrix, $permutation);
my $det = gsl_linalg_LU_det($Matrix, $signum);
print "The value of the determinant of the matrix is $det \n";
Jonathan "Duke" Leto <jonathan@leto.net> and Thierry Moisan <thierry.moisan@gmail.com>
Copyright (C) 2008-2011 Jonathan "Duke" Leto and Thierry Moisan
This program is free software; you can redistribute it and/or modify it under the same terms as Perl itself.
| {"url":"http://search.cpan.org/~leto/Math-GSL-0.28/lib/Math/GSL/Linalg.pm","timestamp":"2014-04-16T21:54:51Z","content_type":null,"content_length":"37884","record_id":"<urn:uuid:1e3de1f9-901c-4142-af5f-b37c986910ba>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00492-ip-10-147-4-33.ec2.internal.warc.gz"}
On Einstein's explanation of the invariance of c
This may sound unbelievable, but I think Motor Daddy's universe is actually more complicated than relativity! He's got light doing all sorts of odd things because of the "absolute velocities"
of different reference frames. I mean, the earth is a ROTATING frame, for gosh sakes. The speed of light would vary by time of day, as well as seasonally, etc. Trying to get any physics done
would be a nightmare. It would be chaos! lol
Like I said before, admit mine is correct and I'll admit Einstein's is incorrect but easier.
I thought it was widely understood that the Earth is a rotating frame.
Motor Daddy
I'm not a mathematician
You aren't a physicist either, or much of a logician.
But, hey, since you can add and subtract, Einstein must have got it wrong . . . everyone knows how useless he was with math.
Always ready to get a cheap shot in, eh? Instead of talking trash why don't you look at post #912 and tell me where I go wrong?
You go wrong when you assume that understanding how to add and subtract means you can understand special relativity.
Post #912, without even looking at it, is guaranteed to be a repeat of every other post you've made in this thread. I rest my case.
I'll take that to mean you just think it's wrong, but you can't tell me why. You do a lot of that.
I can see that it's wrong, and I can explain why. So can more than a few other people.
You can't answer questions or accept that your logic is almost totally flawed--you still don't seem to understand things that you've said you do understand, for instance what a stationary
frame is, what an observer is, and so on. Even what a clock is and how to synchronise one. You do a lot of that.
The most obvious mistake you keep repeating is this: you ignore any questions, and just go over the same old ground as if everybody who has read it 900 times already needs to see it again.
I could trawl back over the thread and quote some of the questions you have ignored, or not attempted--this is the telling part, it's more than a possibility that you simply don't understand
them, or you just don't know the answer or if there is an answer.
But that would be a big waste of time, because you'll just ignore them all over again. It was fun, even a bit of a laugh, but now it's more like finding out someone dropped their ciggy butt
into your can of beer.
I don't understand what you mean by drawing a right triangle and having light travel faster than c. Can you show me in a pic what you are talking about? I'm not a mathematician, so if there
is something I've done wrong in my formula by all means, let me know how to fix it so that it represents my theory correctly.
I'll try to draw something, but in the meantime, here is a verbal explanation. Lay down a tape measure on the "floor" of the absolute rest frame. Now lay another tape measure down at a right
angle to the first one. Call the measurements on one of the tape measures "x" and call the measurements on the other tape measure "y". When the train moves, we will use "x" to measure how far
the train moves. We will use "y" to measure how far the skate moves, because the skate travels a path that is 90 degrees to the path of the train.
Let's start with the skate located at position (0,0) where the first zero is the "x" value, and the second zero is the "y" value. For this exercise, let's say that only the train moves, and
the skate just sits still inside the train. In this case, let's say that the skate moved from (0,0) to (12,0) because of the movement of the train. Notice that the skate has moved a distance
of 12. I am just making these numbers up for this simple exercise.
Now let's try things a little differently. This time, let's say that only the skate moves, and the train stays motionless. In this case, let's say that the skate moved from (0,0) to (0,8)
because of the movement of the skate. Notice that the skate has moved a distance of 8 for this case. Again, I'm just making these numbers up.
And finally, let's combine both motions together. We will let the skate move at the same time as the train moves. In this case, the skate moves from (0,0) to (12,8) because of the movement of
the skate and the train combined. As you can see, the skate has moved in both the "x" and "y" directions. If there had been an elevator on the train, the skate could also have moved
along the "z" direction, but let's keep things simple for now.
Anyway, do you know how to calculate how far the skate moved? If not, here is the answer:
d = sqrt(12^2 + 8^2)
d = sqrt(144 + 64)
d = sqrt(208)
d = 14.422
In case you don't know those symbols, the "sqrt" means square root, and the "^2" means raised to the second power. Raising to the second power is also called squaring, which is just another
way of saying that the number is multiplied by itself.
If we want to talk about the "absolute velocity" of the skate, then we must consider that it actually traveled a distance of 14.422, because it went from point (0,0) to point (12,8). The line
that connects those points is the vector that represents the direction of the absolute velocity.
Now let's get to some light signals. If the skate has a length of 1, and a beam of light travels the length of the skate, how far does the light travel according to the absolute rest frame?
Here is the answer:
d = sqrt(12^2 + 9^2)
d = sqrt(144 + 81)
d = sqrt(225)
d = 15
The reason I used 9 instead of 8 is because I added the length of the skate to the "y" dimension. The vector that represents the direction of the velocity of the light is the line that goes
from point (0,0) to (12,9). This is the information that I have been using to show that your calculations are not working with a light speed of "c" in the absolute rest frame. Does this make
sense yet?
Post #912 demolishes Einstein's concept of the relativity of simultaneity. You think it's wrong? SHOW ME!!! Enough of the small talk, SHOW ME!!!
Well I would, but Oprah is on, so there are more better things to do . . .
Post #912 demolishes Einstein's concept of the relativity of simultaneity.
You really do have a tiny little brain, don't you?
I understand what you are telling me. I was measuring the distance the skate traveled in line with the train's motion, not at 90 degrees to the train's motion.
There are two different velocities we are talking about here, one of the train and one of the skate. The two velocities are separate. The train in the direction of the tracks, and the skate in
the direction 90 degrees to the tracks. The velocity of each light is in the direction of the object's motion.
So, there is an x velocity and a y velocity. They are separate velocities. You can't combine them and say the skate arrived at (12,9) like it traveled in a straight line from (0,0). There are
two separate velocities, one of the x and one of the y.
Last edited by Motor Daddy; 01-05-11 at 11:39 PM.
Are you disputing that light traveled from point (0,0) to point (12,9)?
The x has a velocity and the y has a velocity.
If light is emitted from a light sphere, how does light get to 12,9 in a straight line in each direction from the center of the sphere? Impossible for one light to get to 12,9 at the same time.
There is only one light ray. It travels down the skate. If the speed of the skate and the speed of the train are the same, then the light coordinate would have the form (12,12) or (9,9). But
in this case, I chose for the train and the skate to have different speeds.
Think of a light source in space, like a small sun. Each ray of light travels away from the source in a straight line. There is one distance the light travels. It is impossible for light to
leave a source and travel to 12,9. It either travels 12 away, or 9 away, depending on the time.
You don't think I can shine a laser pointer from (0,0) to point (12,8) in the absolute rest frame? You don't think a sphere of light set off at point (0,0) will eventually cross the point (12,8)?
If the train had not moved, and the skate had not moved, the light would have traveled from (0,0) to (0,1). Do you dispute this simple concept?
What point 12,8 in what absolute rest frame? Do you think there is some kind of grid system in space?
Light defines space in straight lines.
| {"url":"http://www.sciforums.com/showthread.php?105408-On-Einstein-s-explanation-of-the-invariance-of-c/page47","timestamp":"2014-04-17T09:35:06Z","content_type":null,"content_length":"129549","record_id":"<urn:uuid:1f8439ab-0770-4186-9130-f38073d9370f>","cc-path":"CC-MAIN-2014-15/segments/1398223204388.12/warc/CC-MAIN-20140423032004-00031-ip-10-147-4-33.ec2.internal.warc.gz"}
slenderness ratio and axial load of steel question..pls help
The only thing I am wondering about is the dimensions of the column. You give 152x152x37. Is the 37 a wall thickness, since you give the height as 6.5m? I am assuming so.
For some reason I read that problem about 20 times and it didn't click. Now it does. Perhaps I needed more coffee.
So are you wondering where to start? Basically, you are looking at a change in restraints for two scenarios. Since the cross section doesn't change, the radius of gyration will not change. So, you
have 2 boundary conditions. That means you should calculate the maximum safe load for 2 situations (4 if the BS spec also specifies a different material than the grade 43). Since I don't have access
to the BS spec you referenced, I would assume that it spells out some minimums in the safe load. Take your 2 results and then compare them to the spec.
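For what it's worth, the textbook Euler buckling relation that sits behind those safe-load calculations (offered as general background, not something quoted from the BS spec):

$$P_{cr} = \frac{\pi^2 E I}{(KL)^2} = \frac{\pi^2 E A}{(KL/r)^2}$$

where E is the elastic modulus, I the second moment of area, A the cross-sectional area, K the effective-length factor set by the end restraints, and KL/r the effective slenderness ratio.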
It would be a lot easier to help if you had some specific questions. | {"url":"http://www.physicsforums.com/showthread.php?t=125208","timestamp":"2014-04-18T00:34:05Z","content_type":null,"content_length":"47206","record_id":"<urn:uuid:873882c8-a19a-4cf1-9499-54c9603a5e94>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00090-ip-10-147-4-33.ec2.internal.warc.gz"} |
vectors problems
March 6th 2007, 06:51 PM #1
vectors problems
I don't quite get this problem. I know what the component vector form is but I don't know how this problem is done.
Do you have to find the magnitude within it, like sqrt(x^2+y^2)?
Yeah... I'm kind of lost on this one. Thanks for the help if anyone can.
The "x" is the cos 210 = -sqrt(3)/2
The "y" is the sin 210 = -1/2
That means a unit vector is,
-sqrt(3)/2 i - 1/2 j
Multiply this by 5 to get your answer.
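Spelling out that last step (the magnitude-5 vector at 210 degrees):

$$\vec{v} = 5\,\langle \cos 210^\circ,\ \sin 210^\circ \rangle = \left\langle -\tfrac{5\sqrt{3}}{2},\ -\tfrac{5}{2} \right\rangle \approx \langle -4.33,\ -2.5 \rangle$$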
| {"url":"http://mathhelpforum.com/pre-calculus/12253-vectors-problems.html","timestamp":"2014-04-19T21:46:11Z","content_type":null,"content_length":"33684","record_id":"<urn:uuid:c04c2857-4f6c-4bcf-83a8-16172ede3158>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00124-ip-10-147-4-33.ec2.internal.warc.gz"}
Geometric Speed-Up Techniques for Finding Shortest Paths in Large Sparse Graphs
Results 1 - 10 of 41
- IN WORKSHOP ON ALGORITHM ENGINEERING & EXPERIMENTS, 2006
"... We study the point-to-point shortest path problem in a setting where preprocessing is allowed. We improve the reach-based approach of Gutman [16] in several ways. In particular, we introduce a
bidirectional version of the algorithm that uses implicit lower bounds and we add shortcut arcs which reduc ..."
Cited by 60 (5 self)
We study the point-to-point shortest path problem in a setting where preprocessing is allowed. We improve the reach-based approach of Gutman [16] in several ways. In particular, we introduce a
bidirectional version of the algorithm that uses implicit lower bounds and we add shortcut arcs which reduce vertex reaches. Our modifications greatly reduce both preprocessing and query times. The
resulting algorithm is as fast as the best previous method, due to Sanders and Schultes [27]. However, our algorithm is simpler and combines in a natural way with A∗ search, which yields
significantly better query times.
, 2008
"... An algorithm is presented for finding the k nearest neighbors in a spatial network in a best-first manner using network distance. The algorithm is based on precomputing the shortest paths
between all possible vertices in the network and then making use of an encoding that takes advantage of the fact ..."
Cited by 46 (8 self)
An algorithm is presented for finding the k nearest neighbors in a spatial network in a best-first manner using network distance. The algorithm is based on precomputing the shortest paths between all
possible vertices in the network and then making use of an encoding that takes advantage of the fact that the shortest paths from vertex u to all of the remaining vertices can be decomposed into
subsets based on the first edges on the shortest paths to them from u. Thus, in the worst case, the amount of work depends on the number of objects that are examined and the number of links on the
shortest paths to them from q, rather than depending on the number of vertices in the network. The amount of storage required to keep track of the subsets is reduced by taking advantage of their
spatial coherence which is captured by the aid of a shortest path quadtree. In particular, experiments on a number of large road networks as
- IN: PROCEEDINGS OF THE EIGHTH WORKSHOP ON ALGORITHM ENGINEERING AND EXPERIMENTS (ALENEX06), SIAM, 2006
"... An overlay graph of a given graph G =(V,E) on a subset S ⊆ V is a graph with vertex set S that preserves some property of G. In particular, we consider variations of the multi-level overlay
graph used in [21] to speed up shortest-path computations. In this work, we follow up and present general verte ..."
Cited by 24 (8 self)
An overlay graph of a given graph G =(V,E) on a subset S ⊆ V is a graph with vertex set S that preserves some property of G. In particular, we consider variations of the multi-level overlay graph
used in [21] to speed up shortestpath computations. In this work, we follow up and present general vertex selection criteria and strategies of applying these criteria to determine a subset S inducing
an overlay graph. The main contribution is a systematic experimental study where we investigate the impact of selection criteria and strategies on multi-level overlay graphs and the resulting
speed-up achieved for shortest-path queries. Depending on selection strategy and graph type, a centrality index criterion, a criterion based on planar separators, and vertex degree turned out to be
good selection criteria.
- In Proceedings of the 13th ACM International Symposium on Advances in Geographic Information Systems , 2005
"... A framework for determining the shortest path and the distance between every pair of vertices on a spatial network is presented. The framework, termed SILC, uses path coherence between the
shortest path and the spatial positions of vertices on the spatial network, thereby, resulting in an encoding t ..."
Cited by 23 (12 self)
A framework for determining the shortest path and the distance between every pair of vertices on a spatial network is presented. The framework, termed SILC, uses path coherence between the shortest
path and the spatial positions of vertices on the spatial network, thereby, resulting in an encoding that is compact in representation and fast in path and distance retrievals. Using this framework,
a wide variety of spatial queries such as incremental nearest neighbor searches and spatial distance joins can be shown to work on datasets of locations residing on a spatial network of sufficiently
large size. The suggested framework is suitable for both main memory and disk-resident datasets. Categories and Subject Descriptors
- ALGORITHMICS OF LARGE AND COMPLEX NETWORKS. LECTURE NOTES IN COMPUTER SCIENCE , 2009
"... Algorithms for route planning in transportation networks have recently undergone a rapid development, leading to methods that are up to three million times faster than Dijkstra’s algorithm. We
give an overview of the techniques enabling this development and point out frontiers of ongoing research on ..."
Cited by 23 (14 self)
Algorithms for route planning in transportation networks have recently undergone a rapid development, leading to methods that are up to three million times faster than Dijkstra’s algorithm. We give
an overview of the techniques enabling this development and point out frontiers of ongoing research on more challenging variants of the problem that include dynamically changing networks,
time-dependent routing, and flexible objective functions.
- 9TH DIMACS IMPLEMENTATION CHALLENGE , 2006
"... We study two speedup techniques for route planning in road networks: highway hierarchies (HH) and goal directed search using landmarks (ALT). It turns out that there are several interesting
synergies. Highway hierarchies yield a way to implement landmark selection more efficiently and to store landm ..."
Cited by 23 (10 self)
We study two speedup techniques for route planning in road networks: highway hierarchies (HH) and goal directed search using landmarks (ALT). It turns out that there are several interesting
synergies. Highway hierarchies yield a way to implement landmark selection more efficiently and to store landmark information more space efficiently than before. ALT gives queries in highway
hierarchies an excellent sense of direction and allows some pruning of the search space. For computing shortest distances and approximately shortest travel times, this combination yields a
significant speedup over HH alone. We also explain how to compute actual shortest paths very efficiently.
- In Proc. 3rd Workshop on Experimental and Efficient Algorithms. LNCS , 2004
"... Computing a shortest path from one node to another in a directed graph is a very common task in practice. This problem is classically solved by Dijkstra's algorithm. Many techniques are known to
speed up this algorithm heuristically, while optimality of the solution can still be guaranteed. In m ..."
Cited by 20 (6 self)
Computing a shortest path from one node to another in a directed graph is a very common task in practice. This problem is classically solved by Dijkstra's algorithm. Many techniques are known to
speed up this algorithm heuristically, while optimality of the solution can still be guaranteed. In most studies, such techniques are considered individually.
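Since several of the entries above build on Dijkstra's algorithm, a minimal baseline sketch may be useful for orientation (plain Python with a binary heap; this is the textbook algorithm, not the code of any cited paper):

```python
import heapq

def dijkstra(graph, source):
    """graph: dict mapping node -> list of (neighbor, weight) pairs.
    Returns a dict of shortest distances from source."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist
```

The speed-up techniques surveyed here (reaches, landmarks, hierarchies, geometric containers) all preprocess the graph so that queries can skip most of the work this baseline performs.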
, 2007
"... We present a fast algorithm for computing all shortest paths between source nodes s ∈ S and target nodes t ∈ T. This problem is important as an initial step for many operations research problems
(e.g., the vehicle routing problem), which require the distances between S and T as input. Our approach i ..."
Cited by 16 (5 self)
We present a fast algorithm for computing all shortest paths between source nodes s ∈ S and target nodes t ∈ T. This problem is important as an initial step for many operations research problems
(e.g., the vehicle routing problem), which require the distances between S and T as input. Our approach is based on highway hierarchies, which are also used for the currently fastest speedup
techniques for shortest path queries in road networks. We show how to use highway hierarchies so that for example, a 10 000 × 10 000 distance table in the European road network can be computed in
about one minute. These results are based on a simple basic idea, several refinements, and careful engineering of the approach. We also explain how the approach can be parallelized and how the
computation can be restricted to computing only the k closest connections.
- IN: 9TH DIMACS IMPLEMENTATION CHALLENGE , 2006
"... Shortest-path computation is a frequent task in practice. Owing to ever-growing real-world graphs, there is a constant need for faster algorithms. In the course of time, a large number of
techniques to heuristically speed up Dijkstra’s shortest-path algorithm have been devised. This work reviews the ..."
Cited by 15 (4 self)
Shortest-path computation is a frequent task in practice. Owing to ever-growing real-world graphs, there is a constant need for faster algorithms. In the course of time, a large number of techniques
to heuristically speed up Dijkstra’s shortest-path algorithm have been devised. This work reviews the multi-level technique to answer shortest-path queries exactly [SWZ02, HSW06], which makes use of
a hierarchical decomposition of the input graph and precomputation of supplementary information. We develop this preprocessing to the maximum and introduce several ideas to enhance this approach
considerably, by reorganizing the precomputed data in partial graphs and optimizing them individually. To answer a given query, certain partial graphs are combined to a search graph, which can be
explored by a simple and fast procedure. Experiments confirm query times of less than 200 µs for a road graph with over 15 million vertices.
- ACM Journal of Experimental Algorithmics , 2005
"... A fundamental approach in finding efficiently best routes or optimal itineraries in traffic information systems is to reduce the search space (part of graph visited) of the most commonly used
shortest path routine (Dijkstra’s algorithm) on a suitably defined graph. We investigate reduction of the se ..."
Cited by 15 (7 self)
A fundamental approach in finding efficiently best routes or optimal itineraries in traffic information systems is to reduce the search space (part of graph visited) of the most commonly used
shortest path routine (Dijkstra’s algorithm) on a suitably defined graph. We investigate reduction of the search space while simultaneously retaining data structures, created during a preprocessing
phase, of size linear (i.e., optimal) to the size of the graph. We show that the search space of Dijkstra’s algorithm can be significantly reduced by extracting geometric information from a given
layout of the graph and by encapsulating precomputed shortest-path information in resulted geometric objects (containers). We present an extensive experimental study comparing the impact of different
types of geometric containers using test data from real-world traffic networks. We also present new algorithms as well as an empirical study for the dynamic case of this problem, where edge weights
are subject to change and the geometric containers have to be updated and show that our new methods are two to three times faster than recomputing everything from scratch. Finally, in an appendix, we
discuss the software framework that we developed to realize the implementations of all of our variants of Dijkstra’s algorithm. Such a framework is not trivial to achieve as our goal was to maintain
a common code base that is, at the same time, small, efficient, and flexible, | {"url":"http://citeseerx.ist.psu.edu/showciting?doi=10.1.1.13.7034","timestamp":"2014-04-19T21:18:56Z","content_type":null,"content_length":"39582","record_id":"<urn:uuid:1399b571-4f0e-4704-b166-18da780bfa9d>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00556-ip-10-147-4-33.ec2.internal.warc.gz"} |
Object-object intersection comparison for photorealistic graphics rendering
3.4 Triangle-Triangle Intersection
As detailed in the previous chapter, triangle-triangle intersections are used to make sure no clipping occurs in a 3D model. We introduced the separating axis test (SAT) and robust interval overlap algorithms and hypothesized that, although the early-stage rejections of both algorithms are similar, the former needs to invest less time in these checks and as such is expected to run faster in those cases.
However, in all other cases the interval overlap method will most definitely outperform the SAT, due to the immense difference in computationally heavy code in the remaining phases of the algorithms.
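To make the comparison concrete, here is a hedged sketch of the cheap core of the interval overlap method, the 1D overlap check applied after both triangles have been projected onto the line of intersection of their planes (the projection and robustness details are omitted):

```python
def intervals_overlap(a_min, a_max, b_min, b_max):
    # 1D overlap test for the two triangles' projected intervals
    return a_min <= b_max and b_min <= a_max
```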
3.4.1 Test Setup
The test setup will be based on each algorithm's hypothesized behaviour and its individual rejection and acceptance moments. As such, the following tests will be simulated for the (a) separating axis test and (b) robust interval overlap algorithms:
No triangle/plane overlap, to compare the algorithms' early rejections
Plane/triangle overlap but no triangle/triangle overlap, which allows (a) to exit relatively early in the latter part of its algorithm
Triangle/triangle overlap, to compare full running times
Fully randomized average case scenario, to compare real-life performances
3.4.2 Test Results
Test #    (a) Running Time (ms)    (b) Running Time (ms)
1         568                      575
2         1985                     1346
3         3629                     1581
4         1118                     971
Table 4: Triangle-triangle intersection results
3.4.3 Comparison Results & Conclusions
Although our expectation that the SAT yields a faster early rejection seems to be correct, the difference is too small to reliably draw any useful conclusions. However, for the remaining cases the interval overlap method definitely shows its predicted superiority over the SAT.
Although the extreme differences in running time occur mostly at the edge cases, the real-life randomized scenario also shows a slight benefit in favor of the interval overlap method. Hence we can conclude that this method is the logical choice for triangle-triangle intersection tests.
3.5 Triangle-AABB Intersection
As detailed in the previous chapter, triangle-AABB intersections are used when assigning triangle primitives to box-based spatial data structures. We introduced the triangle-box intersection and separating axis test algorithms and hypothesized that the dual perspective of the triangle-box intersection approach will outperform the rejection-only approach of the separating axis test on positive triangle-AABB intersections.
In contrast to this, the separating axis test seems to be superior in all other cases, due to the heavy calculations involved in the triangle-box intersection algorithm. Note further that the early trivial rejection of the triangle-box intersection algorithm can be used for the separating axis test as well.
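That early trivial rejection can be sketched as follows (an illustrative pre-test only, assuming nothing about the full algorithms benchmarked here: it compares the triangle's own bounding box against the AABB):

```python
def trivially_rejected(tri, box_min, box_max):
    """tri: three (x, y, z) vertices; box_min/box_max: AABB corners."""
    for axis in range(3):
        lo = min(v[axis] for v in tri)
        hi = max(v[axis] for v in tri)
        if hi < box_min[axis] or lo > box_max[axis]:
            return True   # disjoint bounding boxes: no intersection possible
    return False          # inconclusive: run the full test
```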
3.5.1 Test Setup
The test setup will be based on each algorithm's hypothesized behaviour and its individual rejection and acceptance moments. As such, the following tests will be simulated for the (a) triangle-box intersection and (b) separating axis test algorithms:
Trivial internal vertex acceptances, favoring (a)'s early acceptance
No triangle-AABB intersection, exercising (a)'s box half-plane exclusion test
No triangle-AABB intersection, exercising (a)'s dodecahedron half-plane exclusion test
No triangle-AABB intersection, exercising (a)'s octahedron half-plane exclusion test
Interior triangle protrusion, exercising (a)'s early ray-triangle acceptance test
Interior box protrusion, comparing the full running times of the algorithms
Fully randomized average case scenario, to compare real-life performances
Fully randomized average case scenario with an early trivial acceptance case for (b), to compare real-life performances with the attempted improvement of (b)
3.5.2 Test Results
Test #    (a) Running Time (ms)    (b) Running Time (ms)
1         190                      1727
2         172                      337
3         409                      763
4         426                      774
5         1317                     1830
6         1600                     1777
7         258                      505
8         250                      471
Table 5: Triangle-AABB intersection results
3.5.3 Comparison Results & Conclusions
Comparing both algorithms for early acceptance, the enormous difference in running times backs up our statement that the triangle-box intersection test outperforms the separating axis test in these cases.
However, further testing of the other scenarios shows our hypothesis concerning them to be incorrect. Although the running time of the triangle-box intersection test rapidly increases as predicted, the growth for the separating axis test is actually higher. As such, it can be concluded that the triangle-box intersection algorithm is superior in all cases.
3.6 Sphere-AABB Intersection
As detailed in the previous chapter, sphere-AABB intersections are used when spheres are supported as geometric primitives and these need to be sorted into box-based spatial data structures. Additionally, these kinds of intersection routines are also needed in some spatially approximating global illumination algorithms, such as photon mapping. We introduced the distance-based and quick-rejection intertwined algorithms and hypothesized that their performance is nigh-equal, except for the early termination in the quick-rejection intertwined test.
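For reference, the distance-based test can be written in a few lines; this sketch follows Arvo's classic formulation, and the exact variant benchmarked in this chapter may differ in details:

```python
def sphere_aabb_overlap(center, radius, box_min, box_max):
    # Accumulate the squared distance from the sphere centre to the box,
    # clamping coordinate by coordinate.
    d2 = 0.0
    for c, lo, hi in zip(center, box_min, box_max):
        if c < lo:
            d2 += (lo - c) ** 2
        elif c > hi:
            d2 += (c - hi) ** 2
    return d2 <= radius * radius
```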
3.6.1 Test Setup
The test setup will be based on each algorithm's hypothesized behaviour and its individual rejection and acceptance moments. As such, the following tests will be simulated for the (a) distance-based and (b) quick-rejection intertwined algorithms:
Sphere-AABB overlap, expected to slightly benefit (a)
No sphere-AABB overlap, expected to slightly benefit (b)
Fully randomized average case scenario, to compare real-life performances
3.6.2 Test Results
Test #    (a) Running Time (ms)    (b) Running Time (ms)
1         112                      90
2         137                      85
3         126                      101
Table 6: Sphere-AABB intersection results
3.6.3 Comparison Results & Conclusions
Although test 1 runs slightly faster for (a) than test 2 does (and vice versa for (b)), these expected benefits do not confirm our expectation of separate niche superiority, since in both cases the quick-rejection intertwined algorithm is better. This is illustrated in the average case scenario as well, leading to the conclusion that this algorithm consistently outperforms the distance-based approach.
4Summary & Future Work
Seeing as how we’ve already discussed the conclusions for each specific object-object intersection type using the test results in the previous chapter, we shall limit ourselves here to a small
summary of these conclusions.
Ray-triangle intersection:
In case of intersection behind origin, plane intersection/inclusion is faster.
In all other cases, including randomized, Möller-Trumbore is faster.
Ray-sphere intersection:
In case of intersection behind origin, geometry optimized test is faster.
In case of misses, quadratic solution is slightly faster.
In all other cases, including randomized, they perform equally.
Ray-AABB intersection:
In case no intersection location is needed, ray-slope algorithm is faster.
When t is reported and multiple rays are handled, slab method is faster.
In randomized scenario, ray-slope algorithm is faster.
Triangle-triangle intersection:
In case of early rejection, SAT seems slightly faster.
In all other cases, including randomized, interval overlap method is faster.
Triangle-AABB intersection:
In case of early acceptance, triangle-box intersection is significantly faster.
In all other cases our hypothesis was proven wrong; the separating axis test was expected to be faster due to its less computationally heavy code fragments, but instead the triangle-box intersection is faster.
Sphere-AABB intersection:
Although we expected slightly faster running times for each algorithm’s specialty niche, the quick-rejection intertwined algorithm actually outperforms the distance-based approach slightly in
all cases.
Concerning future work, it would be interesting to research the adaptability of these intersection algorithms to parallel architectures, with an emphasis per algorithm on the effect of early reject/
accept exits on code divergences, the memory dependency of the algorithms and their associated memory latency hiding capacity. | {"url":"http://lib.znate.ru/docs/index-40240.html?page=6","timestamp":"2014-04-21T14:39:35Z","content_type":null,"content_length":"26996","record_id":"<urn:uuid:8c4fe116-e911-45c0-9018-9b37b929e800>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00275-ip-10-147-4-33.ec2.internal.warc.gz"} |
Carrollton, GA Algebra 2 Tutor
Find a Carrollton, GA Algebra 2 Tutor
I have been a teacher for seven years and currently teach 9th grade Coordinate Algebra and 10th grade Analytic Geometry. I am up to date with all of the requirements in preparation for the EOCT.
I am currently finishing up my masters degree from KSU.
4 Subjects: including algebra 2, geometry, algebra 1, prealgebra
...I emphasize effective study and test-taking skills, as well as prepare students to have a successful college experience. I always focus my students' attention on striving for academic
excellence. I use my own academic achievements (ranked 4th in my high school graduating class, being the recipi...
57 Subjects: including algebra 2, reading, chemistry, geometry
...I'm currently taking Accounting II. The highest score I made on SAT math is a 780. I have also taken the SAT four times for my experience.
17 Subjects: including algebra 2, chemistry, calculus, geometry
I have 33 years of Mathematics teaching experience. During my career, I tutored any students in the school who wanted or needed help with their math class. I usually tutored before and after
school, but I've even tutored during my lunch break and planning times when transportation was an issue.
13 Subjects: including algebra 2, calculus, algebra 1, SAT math
...When I brush up on the math a little more, I will be glad to help with differential equations, calculus 2 and some of the other math. Thank you, Lee H. I have passed all of my Algebra classes when I was in college at Chattahoochee Technical College and Southern Polytechnic State University. I have 19 years of working experience with algebra.
6 Subjects: including algebra 2, geometry, algebra 1, elementary math
Related Carrollton, GA Tutors
Carrollton, GA Accounting Tutors
Carrollton, GA ACT Tutors
Carrollton, GA Algebra Tutors
Carrollton, GA Algebra 2 Tutors
Carrollton, GA Calculus Tutors
Carrollton, GA Geometry Tutors
Carrollton, GA Math Tutors
Carrollton, GA Prealgebra Tutors
Carrollton, GA Precalculus Tutors
Carrollton, GA SAT Tutors
Carrollton, GA SAT Math Tutors
Carrollton, GA Science Tutors
Carrollton, GA Statistics Tutors
Carrollton, GA Trigonometry Tutors | {"url":"http://www.purplemath.com/carrollton_ga_algebra_2_tutors.php","timestamp":"2014-04-20T06:51:36Z","content_type":null,"content_length":"24145","record_id":"<urn:uuid:4e3e20b6-243d-47b8-91d7-9bcb53a8ce38>","cc-path":"CC-MAIN-2014-15/segments/1398223206770.7/warc/CC-MAIN-20140423032006-00019-ip-10-147-4-33.ec2.internal.warc.gz"} |
the first resource for mathematics
The Mylar balloon: new viewpoints and generalizations.
(English) Zbl 1123.53006
Mladenov, Ivaïlo (ed.) et al., Proceedings of the 8th international conference on geometry, integrability and quantization, Sts. Constantine and Elena, Bulgaria, June 9–14, 2006. Sofia: Bulgarian
Academy of Sciences (ISBN 978-954-8495-37-0/pbk). 246-263 (2007).
The Mylar balloon is physically determined: sew together two disks of Mylar and inflate the result, for instance with air; the resulting object is called a Mylar balloon. It has rotational symmetry (but is different from a sphere). Investigations are due to W. Paulsen [Am. Math. Mon. 101, No. 10, 953–958 (1994; Zbl 0847.49030)], F. Baginski [SIAM J. Appl. Math. 65, No. 3, 838–857 (2005; Zbl 1072.74046)], G. Gibbons [DAMTP Preprint, Cambr. Univ. (2006)] and the authors [Am. Math. Mon. 110, No. 9, 761–784 (2003; Zbl 1044.33008)].
In the present paper the Mylar balloon is first modelled as a linear Weingarten surface of revolution. A parametrization of such a surface is given as $x(u,v) = (u\cos v,\, u\sin v,\, z(u))$, where $z(u)$ is expressed by hypergeometric functions using MAPLE.
For the second approach the authors use the parametrization $x(s,v) = (r(s)\cos v,\, r(s)\sin v,\, z(s))$ (with $s$ the arclength on the meridian curve $(r(s), z(s))$) and figure out the equilibrium conditions. Special solutions of these nonlinear equations are presented. Let $H = \frac{1}{2}(k_\mu + k_\pi)$, with $k_\mu$ and $k_\pi$ the principal curvatures related to the meridian and to the parallel curves, respectively. Then one class of examples is the Delaunay surfaces ($H = \text{const.}$). The second example is the Mylar balloon ($k_\mu = 2 k_\pi$). The paper also contains visualizations using MAPLE.
53A05 Surfaces in Euclidean space
33E05 Elliptic functions and integrals
49Q10 Optimization of shapes other than minimal surfaces
49Q20 Variational problems in a geometric measure-theoretic setting | {"url":"http://zbmath.org/?q=an:1123.53006&format=complete","timestamp":"2014-04-19T07:10:34Z","content_type":null,"content_length":"23873","record_id":"<urn:uuid:1a8c0286-7b48-4528-9431-d82081da62e2>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00550-ip-10-147-4-33.ec2.internal.warc.gz"} |
The dark side of bioinformatics data mining
I spend much of my day analysing yeast high-throughput data, recently produced in the laboratory. On the one hand, I'm very lucky to have access to fresh data at many cellular levels. On the other hand, with all this information, I'm easily swept away by the number of variables I have access to - the dark side of data mining.
Dark side
Principal components analysis. Singular value decomposition. Hierarchical clustering. Constrained ordination. Unconstrained ordination. Bayesian clustering. I can throw every technique I can think of at the data, but it won't give me the answer I'm looking for - because I don't have a question. Instead I have component weights for variables and parameters, identified clusters, and interesting but ultimately
useless graphs. I know I've slipped into the dark side of data mining because the time is half past go home, my motivation is in my shoes, and my face is on my desk.
Light side
The power of the light side comes from reading and understanding the literature. Plenty of reading gives you the question to answer. I spend a portion of my day reading, then thinking about how this relates to the question I'm trying to answer. Work in small portions; find one small question. Then set out to answer it. Review your answer in light of what you have understood from the literature. Is this what you expected? How does it relate to previous findings? Review the literature again until you have another small question. Repeat as necessary, publish. Bask in the admiration of your peers.
of your peers.
Lure of the dark side
Why is it so easy to end up on the dark side of data mining? Or of bioinformatics in general? I believe this is because of how little effort it takes to carry out computational analysis. If you worked in a wet laboratory, it would likely take more than a week to perform a small, defined piece of analytical work. However, as a bioinformatician sat at your computer, working away, no one is going to check on what you're doing. Only when you show your results to someone and the obvious flaw is highlighted do you wish you had first considered the problem a bit more. Taking a few moments to sketch out what you're thinking before you perform an analysis can really help with this.
The lesson here is that you can use the most advanced Bayesian multivariate analysis technique available, but if you don't know what you're looking for, it's going to be very hard to find an answer.
However if you have a clearly stated question, you’ll find that it can often be answered with something as simple as a t-test or correlation test.
Interested in bioinformatics and data analysis? Follow me on twitter. | {"url":"http://www.bioinformaticszen.com/post/the-dark-side-of-bioinformatics-data-mining/","timestamp":"2014-04-16T07:13:23Z","content_type":null,"content_length":"9183","record_id":"<urn:uuid:91117221-0a54-4427-9d61-d13d7d35e494>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00194-ip-10-147-4-33.ec2.internal.warc.gz"} |
Math Help
please helppppppppp!
1. If dy/dx = (x^3 + 1)/y when x = 1, then, when x = 2, y =
a) sqrt(27/2)  b) sqrt(27/8)  c) ±sqrt(27/8)  d) ±3/2  e) ±sqrt(27/2)
2. A population p grows according to the equation dp/dt = kp, where k is a constant and t is measured in years. If the population doubles every 12 years, then the value of k is approximately
a) 3.583  b) 1.792  c) 0.693  d) 0.279  e) 0.058
I don't know this one:
Consider the differential equation dy/dx = (e^x - 1)/(2y). If y = 4 when x = 0, what is the value of y when x = 1?
a. sqrt(e + 14)
b. sqrt(e + 15)
c. sqrt(e^2 + 11)
d. sqrt(e^2 + 15)
e. sqrt(e^2 + 12)
Please show me the steps, because I don't know how to do these... thank you.
There are three things that are useful to remember with differential equations:
1. Separate (e.g. y dy = (x^3 + 1) dx for #1);
2. Integrate both sides;
3. And don't forget the constant, C.
Then you plug in the values of x and y to find C, and then you can find the other x or y value.
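A quick numeric check of problem 2 (a sketch in Python, not part of the original thread): doubling every 12 years means 2 = e^(12k), so k = ln(2)/12.

```python
import math

k = math.log(2) / 12
print(round(k, 3))  # 0.058, i.e. option (e)
```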
| {"url":"http://mathhelpforum.com/calculus/74362-please-helppppppppp.html","timestamp":"2014-04-20T04:23:28Z","content_type":null,"content_length":"28244","record_id":"<urn:uuid:614ef226-c419-49e7-b20a-59adffeacb9d>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00448-ip-10-147-4-33.ec2.internal.warc.gz"}
[FOM] First-order arithmetical truth
A.P. Hazen a.hazen at philosophy.unimelb.edu.au
Sat Oct 14 02:46:41 EDT 2006
Stephen Pollard was perhaps too brief in writing:
> > The first-order number theoretic truths are exactly the first-order
>> sentences in the language of arithmetic that follow from the axioms
>> of Peano Arithmetic supplemented by the following version of the
>> least number principle: "Among any numbers there is always a
>> least." (This principle is not firstorderizable; but that doesn't
>> make it unintelligible.)
---He left Francis Davey puzzled about what he intended:
> >
>I'm not sure that really answers my problem (though I would be
>interested to know if I am right about that). The LNP sounds like
>something you would need to formalise using a concept of "set", which is
>even harder to understand than that of "natural number".
Pollard (I know this from his published papers(*), not just
from his use of the English plural noun phrase "any numbers" in his
formulation of the least number principle) is a believer in "plural
quantification". This is the idea --
it can be traced back with considerable plausibility
to Russell's talk of "classes as many" in his
"Principles of Mathematics," but its recent popularity
can be attributed to George Boolos's "To be is to be
the value of a variable (or to be some values of some
variables" ("Journal of Philosophy" 1984; repr. with
some related articles in Boolos's posthumous collection
"Logic, Logic and Logic") --
that we can quantify plurally over objects in a conceptually
primitive way: that "Among any numbers there is a least" states a
generalization about arbitrary (cough! splutter!) collections of
numbers WITHOUT being a generalization about sets or classes or ...
or any kind of entities called "collections".
If this is so, it gives us the expressive power of full (monadic)
Second-Order Logic, so the mathematical content of Pollard's claim
is simply Dedekind's theorem about the categoricity of Second-Order
Peano Arithmetic. Philosophers fond of the idea, however, think it
gives us this expressive power without ontological commitment to sets
(and so without philosophical worries about our understanding of the
concept of set).
(*) See his discussion of the Axiom of Choice in "Philosophical
Studies" v. 54 (1988) and v. 66 (1992).
Allen Hazen
Philosophy Department
University of Melbourne
More information about the FOM mailing list | {"url":"http://www.cs.nyu.edu/pipermail/fom/2006-October/010954.html","timestamp":"2014-04-18T10:38:46Z","content_type":null,"content_length":"4963","record_id":"<urn:uuid:6f8e4067-50fb-4353-87a8-e4cab25ff386>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00010-ip-10-147-4-33.ec2.internal.warc.gz"} |
induction by absurd
Here is a multipurpose pedagogic "riddle":
Prove that (P(1) and (P(n) => P(n+1) for n >= 1)) => (P(n) for every natural number n),
using this axiomatic property of N: every non-empty subset of N admits a smallest element (an element that belongs to the subset and is less than or equal to every element of the subset).
This is very trivial, I know, but this property can be used in many demonstrations (for example, the proof by infinite descent that x^3 + y^3 = z^3 has no positive integer solutions).
Won't give the answer!
Suppose $P(1)$ is true, and that $P(n) \Rightarrow P(n+1)$.
Let $S \subset \mathbb{N}$ be the subset of $\mathbb{N}$ for which $P(n)$ is false,
and suppose $S$ is non-empty.
Then by the axiom there is a smallest element $s$ of $S$,
and $s>1$ because $P(1)$ is true.
Then $P(s-1)$ is true and $P(s)$ is false, which is
a contradiction of the assumption that $P(n) \Rightarrow P(n+1)$.
Hence $S$ has no smallest element, so $S$ must be empty, and so $P(n)$ is true for every $n \in \mathbb{N}$.
(Note that here we are taking $\mathbb{N}=\{1,2,\dots\}$; if we want to use $\mathbb{N}=\{0,1,2,\dots\}$, we would have to start with $P(0)$ being true rather than $P(1)$ in our base case.)
| {"url":"http://mathhelpforum.com/math-challenge-problems/1668-induction-absurd.html","timestamp":"2014-04-16T17:40:09Z","content_type":null,"content_length":"37124","record_id":"<urn:uuid:77e685b8-ff57-43da-b923-4f4a6d849f3f>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00331-ip-10-147-4-33.ec2.internal.warc.gz"}
A combinations with repetition
(a) Determine the number of integer solutions of
where a1, a2, a3 > 0 and 0 < a4 <= 25.
(b) Mary has two dozen each of n different colored beads.
If she can select 20 beads (with repetition of colors allowed)
in 230,230 ways, what is the value of n?
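Part (b) can be checked with a direct search (a sketch in Python; it assumes the standard stars-and-bars count C(n + 19, 20) for selecting 20 beads from n colors with repetition, valid here since two dozen of each color is more than 20):

```python
from math import comb

n = next(n for n in range(1, 50) if comb(n + 19, 20) == 230230)
print(n)  # 7
```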
| {"url":"http://www.mathisfunforum.com/viewtopic.php?pid=39664","timestamp":"2014-04-19T19:46:47Z","content_type":null,"content_length":"22686","record_id":"<urn:uuid:2f875829-f319-486c-9d9d-ec3c1129624f>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00217-ip-10-147-4-33.ec2.internal.warc.gz"}
connected rates of change
An inverted conical vessel contains water to a height of 12 cm initially. A hole is opened at the vertex and water leaks away in such a manner that after t minutes, the rate of decrease of the height, h cm, of the water in the vessel is given by dh/dt = -2t/3 cm/min.
a) Calculate the time taken for the vessel to empty.
b) Given that the volume of water in the vessel after t minutes is $\frac{\pi}{6} h^3$ cm³, calculate the rate of change of the volume of water in the vessel when t = 3.
I am already stumped by the first part. As dh/dt is constantly changing, and the vessel is not uniform in base area as the water level decreases, I can't just divide the figures. Can anyone point me in the right direction for this question?
Re: connected rates of change
The ratio of the height to the top radius of the water is constant. You can use this to your advantage. Think "Similar Triangles".
Re: connected rates of change
h = h(r) -- How can you express that?
Since we have $Volume(h,r) = \frac{1}{3}\pi r^{2}h$, that volume hint should be able to provide the required relationship.
Re: connected rates of change
What information in the question do I need to use for this?
Am I supposed to express h as a function of r? If so, I don't really know how to do that with only one equation for volume, and with V in the equation :/
Re: connected rates of change
a) Integrate dh/dt = -2t/3:
you get $h = -\frac{t^2}{3} + c$. Since h = 12 when t = 0, c = 12, so $h = 12 - \frac{t^2}{3}$.
The vessel is empty when h = 0, which gives t = 6 minutes.
b) $V = \frac{1}{3}\pi r^2 h$
Setting $\frac{1}{3}\pi r^2 h$ equal to the given $\frac{\pi}{6}h^3$ and solving gives $r^2 = \frac{1}{2}h^2$.
Therefore, $V = \frac{\pi}{6} h^3$.
From (a), at t = 3 we have h = 9 and dh/dt = -2.
$\frac{dV}{dh} = \frac{\pi}{2} h^2$
Use the rate of change: $\frac{dV}{dt} = \frac{dV}{dh} \times \frac{dh}{dt}$
$= \frac{\pi}{2}(9)^2 \times (-2) = -81\pi$ cm³/min
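A symbolic check of the steps above (a sketch using sympy; it is not part of the original post):

```python
import sympy as sp

t = sp.symbols("t", positive=True)
h = 12 + sp.integrate(-sp.Rational(2, 3) * t, (t, 0, t))  # h = 12 - t**2/3
print(sp.solve(sp.Eq(h, 0), t))                           # [6] -> part (a)
V = sp.pi / 6 * h**3
print(sp.diff(V, t).subs(t, 3))                           # -81*pi -> part (b)
```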
I am a teacher and joined this forum recently. btw, can someone help me with how to type the codes for symbols?
Re: connected rates of change
Well, I found the tutorial on how to use the math tag.
You get: $h = \frac{-t^2}{3} + c$
I tried it, but it doesn't seem to work.
Re: connected rates of change
You have to use:
[ tex ] ... [ / tex ]
if you want to work with LaTeX.
| {"url":"http://mathhelpforum.com/calculus/185336-connected-rates-change.html","timestamp":"2014-04-19T13:37:15Z","content_type":null,"content_length":"53926","record_id":"<urn:uuid:50ff5595-a16c-4f34-91cb-a53f5e86ca56>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00630-ip-10-147-4-33.ec2.internal.warc.gz"}
linear recurrence equation with non constant coefficients
I would like to solve the following recurrence equation:
$a_{n+1} = a_n + \frac{1}{2(c+n-1)}\, a_{n-1}$
where c > 0 is a fixed constant.
It is linear (and that's good), but there is a non-constant coefficient... Is there a way to find an explicit solution?
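While waiting for a closed form, the terms are easy to generate numerically. Here is a small sketch (the initial values a0 and a1 are placeholders, since the post does not specify them):

```python
def terms(c, a0, a1, count):
    """a_{n+1} = a_n + a_{n-1} / (2*(c + n - 1)) for n >= 1."""
    a = [a0, a1]
    for n in range(1, count - 1):
        a.append(a[n] + a[n - 1] / (2 * (c + n - 1)))
    return a

print(terms(c=1.0, a0=1.0, a1=1.0, count=8))
```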
| {"url":"http://mathhelpforum.com/discrete-math/197050-linear-recurrence-equation-non-constant-coefficients.html","timestamp":"2014-04-18T01:28:16Z","content_type":null,"content_length":"29742","record_id":"<urn:uuid:842863d3-f8d5-4fd6-86dc-268f2deca783>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00653-ip-10-147-4-33.ec2.internal.warc.gz"}
Legendre transformation
The Legendre transformation
The Legendre transformation is an operation on convex functions from a real normed vector space to the real line; it is one of the cornerstones of convex analysis. The space of arguments changes under the transformation: the transform of a function on a vector space $V$ is a function on the dual space $V^*$.
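Concretely, in its Legendre-Fenchel form the transform of a convex function $f$ on $V$ is given by

$$ f^*(p) \;=\; \sup_{x \in V} \big( \langle p, x \rangle - f(x) \big) \,, \qquad p \in V^* \,. $$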
In classical mechanics – Hamiltonians and Lagrangians
The main application, and the historical root, of the notion of Legendre transform (in differential geometry) is in classical physics and its formalization by symplectic geometry. In classical
mechanics, the Hamiltonian function $H$ is a Legendre transform of the Lagrangean $L$ and vice versa.
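In the simplest one-dimensional mechanical setting this correspondence reads

$$ H(q,p) \;=\; p\,\dot q - L(q, \dot q) \,, \qquad p = \frac{\partial L}{\partial \dot q} \,, $$

with $\dot q$ eliminated in favour of $p$ by solving the second equation.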
When one formalizes classical mechanics as the local prequantum field theory given by prequantized Lagrangian correspondences, then the Legendre transform is exhibited by the lift from a Lagrangian
correspondence to a prequantized Lagrangian correspondence. For more on this see at The classical action, the Legendre transform and Prequantized Lagrangian correspondences.
In many dimensions, hybrid versions are possible. When the physics of the system is given by the variational principle, then the Legendre transform of an extremal quantity is a conserved quantity. In
thermodynamics, we can have some quantities set to be fixed (some candidates: entropy $S$, temperature $T$, pressure $P$, volume $V$, magnetization $M$); this dictates the choice of variables and
quantity which is extremized as well as which one takes the role of conserved energy. Some of the standard choices are enthalpy $H$, Helmholtz free energy $F$, Gibbs free energy $G$, internal energy
$U$, etc.
See also wikipedia:Legendre transformation and wikipedia:Legendre-Fenchel transformation; the two Wikipedia articles have much detail on certain specific approaches and cases but, to be balanced, also miss some of the basic ones.
Via prequantized Lagrangian correspondences
See at prequantized Lagrangian correspondence.
In multisymplectic geometry
See at multisymplectic geometry – de Donder-Weyl-hamilton equations of motion.
The concept is named after Adrien-Marie Legendre.
Reviews include
Discussion of Legendre transformation in the context of Lie algebroids is in:
• Paulette Liberman, Lie algebroids and mechanics (ps)
• Juan Carlos Marrero et al, A survey of Lagrangian mechanics and control on Lie algebroids and Lie groupoids (pdf)
• Juan Carlos Marrero, Nonholonomic mechanics: a Lie algebroid perspective (pdf talk notes) | {"url":"http://www.ncatlab.org/nlab/show/Legendre+transformation","timestamp":"2014-04-17T21:26:40Z","content_type":null,"content_length":"46246","record_id":"<urn:uuid:c894f173-6eea-4f2a-8b47-93888e4395a3>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00286-ip-10-147-4-33.ec2.internal.warc.gz"} |
God Plays Dice
08 February 2009
Read today's Foxtrot
You should read today's Foxtrot comic strip. (I won't tell you why, because if I told you why you wouldn't follow the link.)
5 comments:
That was so freakin' funny! :D Thanks for sharing it.
You guys need to get out more.
I don't see what the problem is in the last panel, given that they were obviously counting the number of 5s among all partitions of n.
I like it when jokes require a certain amount of math knowledge. I feel like it justifies all my years of math homework. | {"url":"http://godplaysdice.blogspot.com/2009/02/read-todays-foxtrot.html","timestamp":"2014-04-17T15:32:08Z","content_type":null,"content_length":"57706","record_id":"<urn:uuid:b7b40f1c-7398-448f-967d-3d663233ca07>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00173-ip-10-147-4-33.ec2.internal.warc.gz"} |
Inverse Matrix - Eigenvalues & Eigenvectors (Proof Needed)
One of the questions I was presented with in a tutorial the other day really stumped me, and I am unsure how to prove it.
The question is:
"Suppose that A is an invertible matrix and that x is an eigenvector for A with eigenvalue $\lambda \neq 0$. Show that x is an eigenvector for the inverse matrix of A with eigenvalue $\frac{1}{\lambda}$."
I have been shown a similar question, where you had to prove that the matrix $A^2$ has an eigenvalue of $\lambda^2$ through manipulation of the equation $Ax = \lambda x$, as below:
$Ax = \lambda x$
$A(Ax) = A(\lambda x)$
$A^2 x = \lambda Ax$
$A^2 x = \lambda(\lambda x)$
$A^2 x = \lambda^2 x$
Is the solution to this question reached through the same steps, or is there a step I need to do differently?
Any help would be greatly appreciated
Re: Inverse Matrix - Eigenvalues & Eigenvectors (Proof Needed)
Perhaps you should attempt this question before asking for help. Don't be afraid to try.
$Ax=\lambda x \iff A^{-1}Ax=A^{-1}\lambda x \iff x=\lambda A^{-1}x$
Re: Inverse Matrix - Eigenvalues & Eigenvectors (Proof Needed)
Let $A$ be a matrix with eigenvalue $\lambda$ and eigenvector $x$.
Then $Ax = \lambda x$
Since $A$ is invertible, we can multiply both sides by $A^{-1}$.
$A^{-1} Ax = A^{-1}\lambda x$
$x = A^{-1} \lambda x$
Solve for $A^{-1} x$.
Thus, $A^{-1} x = \frac{1}{\lambda} x$
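A quick numerical sanity check of this result (a sketch using numpy; the matrix below is an arbitrary invertible example, not from the original question):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])              # any invertible matrix will do
eig_A = np.linalg.eigvals(A)
eig_Ainv = np.linalg.eigvals(np.linalg.inv(A))
print(np.sort(eig_Ainv))                # matches the reciprocals below
print(np.sort(1 / eig_A))
```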
| {"url":"http://mathhelpforum.com/advanced-algebra/218329-inverse-matrix-eigenvalues-eigenvectors-proof-needed.html","timestamp":"2014-04-16T06:30:34Z","content_type":null,"content_length":"37720","record_id":"<urn:uuid:6cc05c0b-57da-4e92-ae08-d13b11188ebb>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00073-ip-10-147-4-33.ec2.internal.warc.gz"}
Calculating average of every nth value [Formula tips]
Posted on September 5th, 2013 in
Excel Howtos
- 15 comments
Lets say you have a large list of numbers, and you want to calculate the average of every nth value. Not the average of all numbers, but just every nth number.
That is what we will learn in next few minutes.
Few assumptions
Before we jump in to any formulas, first lets assume that all your data is in a table, conveniently named as tbl. Lets say this table has below structure.
Also, the value of n is a named cell N.
Average of every nth value
Approach 1: Using helper columns
We just add an extra column to our tbl , called as helper.
In the helper column, write this formula.
=MOD([@ID], N)=0
This will fill the helper column with TRUE & FALSE values - TRUE for all nth values, FALSE for everything else.
Once we have the helper column, calculating the average of every nth value is as easy as eating every slice of a cake.
We use AVERAGEIFS to do this:
=AVERAGEIFS(tbl[Value], tbl[Helper], TRUE)
Approach 2: Not using helper columns
Now things get interesting. Let's say you want to calculate the average, but not use any helper columns.
First the formula:
=AVERAGE(IF(MOD(tbl[ID], N)=0,tbl[Value]))
Array entered.
Let's understand how it works:
We want the average of every nth item of the tbl[Value] column.
In other words, we want the average of every item of the tbl[Value] column whose corresponding tbl[ID] value is perfectly divisible by n.
How do we know when a value is perfectly divisible by another?
Don’t worry. You don’t have to do the long division on paper now. Instead we use Excel’s MOD function.
When a value is perfectly divisible by another, the remainder is zero.
So, MOD(value1, value2) = 0 means, value2 divides value1 perfectly.
That means…
We want the average of tbl[Value] when MOD(tbl[ID], N) = 0
Let's write that in Excel formula lingo.
=AVERAGE( IF(MOD(tbl[ID], N) = 0, tbl[Value]) )
This formula results in a bunch of values and FALSEs. Assuming N=3, this is what we get (for sample data):
=AVERAGE({FALSE;FALSE;15;FALSE;FALSE;18;FALSE;FALSE;18;FALSE;FALSE;15;FALSE;FALSE;14; …})
Since the AVERAGE formula ignores any logical values, it will calculate the average of {15, 18, 18, 15, 14 ...} and return the answer you are expecting.
As this formula is processing arrays instead of single values, you need to array enter it (CTRL+SHIFT+Enter after typing the formula).
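For readers working outside Excel, the same logic is a one-liner in Python/pandas (a sketch with made-up sample data):

```python
import pandas as pd

tbl = pd.DataFrame({"ID": range(1, 25), "Value": range(10, 34)})
n = 3
avg = tbl.loc[tbl["ID"] % n == 0, "Value"].mean()  # average of every nth value
print(avg)
```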
Bonus scenario: Average of FEBRUARY values only!
Here is a bonus scenario. Let's say you want to calculate the average sales of FEB alone... Then you can use AVERAGEIF (or AVERAGEIFS, if you want to have multiple conditions).
=AVERAGEIF(tbl[month], "FEB", tbl[value])
Download example workbook:
Click here to download the example workbook. It contains all the techniques explained in this post. Play with the data & formulas to understand better.
Time for some challenges…
If you think averaging every nth value is not mean enough, try below challenges. Post your answers using comments.
1. Write a formula to calculate average of every nth value, starting at row number ‘t’.
2. Write a formula to calculate average of every nth value, assuming your table has only value column (no ID column).
Go ahead. Show off your formula skills. Post your answers in comments section.
Improving your Excel batting average
Calculating averages predates slice bread. Folklore says that when first neanderthal figured out how to express numbers and carved 2 of them on a cave wall, his manager walked by and asked “What is
the average of these two? Eh?” and thumped her chest.
Although caves & wall carvings are replaced by cubicles & spreadsheets, we are still calculating averages, almost 2.9 million of them per hour.
So it pays to learn a few tricks about Excel Average formulas. Check out below to improve your average:
If your boss is the kind who thumps her chest and mocks you for your poor Excel skills, don’t cave in. Fight back. Enroll in Excel School and show that you can evolve.
Written by Chandoo
Tags: array formulas, average(), averageifs, downloads, Learn Excel, Microsoft Excel Formulas, MOD()
15 Responses to “Calculating average of every nth value [Formula tips]”
1. I think the helper column can be replaced by a ROW function:
(array entered)
and including every nth value after row t (specified in cell H5):
=AVERAGE(IF((MOD(ROW(tbl[Value])-ROW(tbl[[#Headers],[Value]]),$G$5)=0) * (ROW(tbl[Value])>=H5),tbl[Value]))
(array entered)
Don’t know if there’s a shorter version I’m missing here
2. =AVERAGE(IF(MOD((ROW(tbl)-ROW(tbl_start)),N)=0,tbl))
Array Enter
assume tbl has a header, and “tbl_start” refers to the cell of the header
3. “If you think averaging every nth value is not ‘mean’ enough”
4. Here’s a non array formula version that can be used with any range of values and any value for N:
I’m sure there’s a way to shorten this, but just my first attempt!
5. I have combined both parts (starting at row number T and not using a helper column) [G2 has the value of T]
=IFERROR(SUMPRODUCT(INDEX(tbl[Value],G4):INDEX(tbl[Value],COUNT(tbl[Value]))*(MOD(ROW(INDIRECT("1:"&COUNT(tbl[Value])-G4+1)),G5)=0))/INT((COUNT(tbl[Value])-G4)/20), "Out of list Range")
6. Hi Chandoo, I want to know why the formula "averageifs" can't be replaced by "averageif". It seems that there is just one criterion required for the calculation, so there is no need to use the formula
□ They both work Legei. I have used AVERAGEIFS as it will let you add more conditions if needed.
☆ Hi Chandoo,
Strangely Approach 1: AVERAGEIF(tbl[Value],tbl[Helper],TRUE) is not working for me as well.
Whereas if you use AVERAGEIFS, the function is working fine. Can you please double check if it (AVERAGEIF) works for you?
○ Use AVERAGEIF(tbl[Helper],TRUE,tbl[Value])
7. […] Calculating average of every nth value […]
8. How might one perform this function with the following restrictions:
1) the table has no ID column, rather just a single column of values, and
2) the table does not start conveniently at cell A1, but rather in a random place on the worksheet?
9. I don’t understand the [@id] notation, can you point me to something that clarifies?
10. When I recalculate the Excel file for =AVERAGE(IF(MOD(tbl[ID],$G$5)=0,tbl[Value])), the value becomes zero, and the evaluation of the formula becomes AVERAGE(IF(MOD(1,20)=0,tbl[value])).
And further evaluation gives AVERAGE(IF(FALSE,tbl[value])).
P.S I am using EXCEL 2010
11. +AVERAGE(MOD(ROW(INDIRECT("1:"&COUNT(a_list))),3)=0,a_list)
with ctrl + shift + enter
I think this works ?
12. +AVERAGE(MOD(ROW(INDIRECT("1:"&COUNT(a_list))),n)=0,a_list)
i had checked my formula with 3 and hence put 3 in my previous post
| {"url":"http://chandoo.org/wp/2013/09/05/calculating-average-of-every-nth-value-formula-tips/","timestamp":"2014-04-19T23:00:44Z","content_type":null,"content_length":"58472","record_id":"<urn:uuid:786efacf-6ce3-40a4-b78f-98971738391d>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00392-ip-10-147-4-33.ec2.internal.warc.gz"}
Anaheim SAT Math Tutor
Find an Anaheim SAT Math Tutor
...I have 2+ years of tutoring experience: In high school, I taught pre-Algebra, Algebra, Trigonometry, Pre-Calculus, and Calculus to both groups and individual students. I have an SAT score of
1980. I am knowledgeable in all high-school and undergraduate math courses and physics.
12 Subjects: including SAT math, chemistry, physics, geometry
...I also teach SAT and ACT in those subjects. I was also a National Merit Finalist in high school. I have been teaching ACT English through Elite Educational Institute, helping many students
improve their understanding of the structure and content of the test.
22 Subjects: including SAT math, English, ACT Reading, ACT Math
...I found that persistence and hard work do pay off. Providing tutoring to students who are struggling with a subject is so rewarding for me. The best gift I could ever receive is seeing a
transformation in my students when they gain the confidence they need after getting tutored and realize that...
22 Subjects: including SAT math, English, reading, writing
...I earned a Professional Clear Special Education Credential for California in 1993 and worked as a Special Education teacher for some time. Study skills training was always a part of my
classroom work. My teaching activities were primarily with special needs students in public elementary and middle schools and later to a general population in adult vocational schools.
24 Subjects: including SAT math, reading, English, writing
...This year I have 4 precalculus students and 4 calculus students, one in honors AP and one in the IB program. There are many students in algebra 2 and under. All return students will be charged
the new rate.
9 Subjects: including SAT math, calculus, geometry, Chinese
Nearby Cities With SAT math Tutor
Anaheim Hills, CA SAT math Tutors
Buena Park SAT math Tutors
Cypress, CA SAT math Tutors
Federal, CA SAT math Tutors
Fullerton, CA SAT math Tutors
Garden Grove, CA SAT math Tutors
La Mirada SAT math Tutors
Lamirada, CA SAT math Tutors
Mirada, CA SAT math Tutors
Orange, CA SAT math Tutors
Placentia SAT math Tutors
Santa Ana SAT math Tutors
Tustin, CA SAT math Tutors
Westminster, CA SAT math Tutors
Yorba Linda SAT math Tutors | {"url":"http://www.purplemath.com/anaheim_sat_math_tutors.php","timestamp":"2014-04-19T05:23:51Z","content_type":null,"content_length":"23832","record_id":"<urn:uuid:54b762f0-d750-49ed-a618-c0fa23622216>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00523-ip-10-147-4-33.ec2.internal.warc.gz"} |
ind the Diameter
Because of this relationship, you can solve for one unknown (the actual size, or diameter, for example) if you know the other two (the angular size and the distance, in this case). In fact, that is
exactly what you are going to do to find the size of the central star in HT Cas and answer the challenge question. Once you find or measure HT Cas's angular size and its distance from Earth, you can
plug the values into the equation above and solve for its actual size.
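In these units the relation is the small-angle formula (a reconstruction consistent with the conversion factor described below, with angular size in arcseconds and distance in parsecs):

actual size (km) = angular size (arcsec) x distance (pc) x 1.5 x 10^8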
The 1.5 x 10^8 is a conversion factor to account for the units used in this problem, which need to be consistent. Parsecs are based on parallax as measured from the Earth's orbit, which has a
radius of 1.5 x 10^8 km. | {"url":"http://imagine.gsfc.nasa.gov/YBA/HTCas-size/optical.html","timestamp":"2014-04-16T13:19:07Z","content_type":null,"content_length":"14729","record_id":"<urn:uuid:df624695-6221-4093-94f3-d77143a81d97>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00522-ip-10-147-4-33.ec2.internal.warc.gz"}
Solid Mensuration: Volume of a dam with the given dimensions.
A concrete dam of height 128 ft was built in a gorge. One side AB of the gorge slopes at an angle of 60 degrees, and the other side CD at 45 degrees. The bases of the dam are horizontal and rectangular in shape. The lower base is 1215 ft by 152 ft and the upper base is 32 ft wide. How many cubic yards of concrete were required?
Answer in cubic yards.
Image [Note: The other angle in the image is 60 not 70]
~Answer: 561230 cubic yards.
Please show the solutions | {"url":"http://mathhelpforum.com/geometry/213666-solid-mensuration-volume-dam-given-dimensions.html","timestamp":"2014-04-16T04:28:23Z","content_type":null,"content_length":"30296","record_id":"<urn:uuid:9a00c8e8-9c74-4d80-a7ac-ed12e6cdbb3d>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00419-ip-10-147-4-33.ec2.internal.warc.gz"} |
Noncommutative Geometry
This page lists freely downloadable books.
Very Basic Noncommutative Geometry
by Masoud Khalkhali - University of Western Ontario, 2004
Contents: Introduction; Some examples of geometry-algebra correspondence; Noncommutative quotients; Cyclic cohomology; Chern-Connes character; Banach and C*-algebras; Idempotents and finite projective modules; Equivalence of categories.
Homological Methods in Noncommutative Geometry
by D. Kaledin, 2008
The first seven lectures deal with the homological part of the story (cyclic homology, its various definitions, various additional structures it possesses). Then there are four lectures centered around Hochschild cohomology and the formality theorem.
Surveys in Noncommutative Geometry
by Nigel Higson, John Roe - American Mathematical Society, 2006
These lectures are intended to introduce key topics in noncommutative geometry to mathematicians unfamiliar with the subject. Topics: applications of noncommutative geometry to problems in ordinary geometry and topology, residue index theorem, etc.
An Introduction to Noncommutative Spaces and their Geometry
by Giovanni Landi - arXiv, 1997
These lecture notes are an introduction for physicists to several ideas and applications of noncommutative geometry. The necessary mathematical tools are presented in a way which we feel should be accessible to physicists.
An informal introduction to the ideas and concepts of noncommutative geometry
by Thierry Masson - arXiv, 2006
This is an extended version of a three-hour lecture given at the 6th Peyresq meeting 'Integrable systems and quantum field theory'. We make an overview of some of the mathematical results which motivated the development of noncommutative geometry.
Noncommutative Geometry, Quantum Fields and Motives
by Alain Connes, Matilde Marcolli - American Mathematical Society, 2007
The unifying theme of this book is the interplay among noncommutative geometry, physics, and number theory. The two main objects of investigation are spaces where both the noncommutative and the motivic aspects come to play a role.
Noncommutative Geometry
by Alain Connes - Academic Press, 1994
The definitive treatment of the revolutionary approach to measure theory, geometry, and mathematical physics. Ideal for anyone who wants to know what noncommutative geometry is, what it can do, or how it can be used in various areas of mathematics.
Geometric Models for Noncommutative Algebra
by Ana Cannas da Silva, Alan Weinstein - University of California at Berkeley, 1998
Noncommutative geometry is the study of noncommutative algebras as if they were algebras of functions on spaces, like the commutative algebras associated to affine algebraic varieties, differentiable manifolds, topological spaces, and measure spaces. | {"url":"http://www.e-booksdirectory.com/listing.php?category=505","timestamp":"2014-04-16T08:53:20Z","content_type":null,"content_length":"12471","record_id":"<urn:uuid:4ec2ac4f-7905-4d88-830f-658ccab8ba7b>","cc-path":"CC-MAIN-2014-15/segments/1398223206770.7/warc/CC-MAIN-20140423032006-00181-ip-10-147-4-33.ec2.internal.warc.gz"}
An Object Is Placed 19.2 Cm From A First Converging ... | Chegg.com
An object is placed 19.2 cm from a first converging lens of focal length 12.2 cm. A second converging lens with focal length 5.00 cm is placed 10.0 cm to the right of the first converging lens.
(a) Find the position q1 of the image formed by the first converging lens.
=__ cm
(b) How far from the second lens is the image of the first lens?
=______ cm beyond the second lens
(c) What is the value of p2, the object position for the second lens?
=__________ cm
(d) Find the position q2 of the image formed by the second lens.
=_________ cm
(e) Calculate the magnification of the first lens.
(f) Calculate the magnification of the second lens.
(g) What is the total magnification for the system?
(h) Is the final image real or virtual (compared to the original object for the lens system)?
Is it upright or inverted (compared to the original object for the lens system)? | {"url":"http://www.chegg.com/homework-help/questions-and-answers/object-placed-192-cm-first-converging-lens-focal-length-122-cm-second-converging-lens-foca-q2367963","timestamp":"2014-04-25T07:13:07Z","content_type":null,"content_length":"22905","record_id":"<urn:uuid:a4d1fd13-1df6-4794-aef6-8c37806d0669>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00102-ip-10-147-4-33.ec2.internal.warc.gz"} |
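For what it's worth, here is a minimal Haskell sketch of the standard two-lens bookkeeping (the function names are my own; the physics assumed is just the thin-lens equation 1/f = 1/p + 1/q, with the first lens's image serving as the second lens's object):

-- Thin-lens equation 1/f = 1/p + 1/q, solved for the image distance q.
imageDist :: Double -> Double -> Double
imageDist f p = 1 / (1 / f - 1 / p)

-- Lateral magnification of a single thin lens.
magnif :: Double -> Double -> Double
magnif p q = negate q / p

main :: IO ()
main = do
  let q1 = imageDist 12.2 19.2   -- (a) image of lens 1
      p2 = 10.0 - q1             -- (c) object distance for lens 2 (negative: virtual object)
      q2 = imageDist 5.0 p2      -- (d) image of lens 2
      m1 = magnif 19.2 q1        -- (e)
      m2 = magnif p2 q2          -- (f)
  print (q1, p2, q2, m1, m2, m1 * m2)

This gives q1 of about 33.5 cm, so the first image lies about 23.5 cm beyond the second lens (part b), q2 of about 4.1 cm, and a total magnification m1*m2 of about -0.31: the final image is real and inverted.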
Chaos Game
Building Sierpinski’s triangle with the chaos game.
The chaos game is a simple experiment which creates a skewed version of Sierpinski’s triangle—also known as the Sierpinski gasket—between three chosen points. The Macromedia Flash animation shown
here is a real-time calculation and rendering of this game between three user-selected points. The rules of the chaos game are relatively simple:
1. Start with a blank canvas;
2. Choose three random “base” points on this canvas, labelling each of them one, two and three respectively;
3. Pick an arbitrary point on the canvas, calling it the “game point”;
4. Choose a random number between one and three (using a die or any other random number generation technique);
5. Draw the new game point halfway in between the previous game point and the base point corresponding to the chosen random number;
6. Repeat from step 4. (A small Haskell sketch of these rules follows this list.)
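(The original program is ActionScript, which is not reproduced on this page; the following is a small Haskell sketch of the same rules, with names of my own choosing:)

import System.Random (mkStdGen, randomRs)

type Pt = (Double, Double)

-- Rule 5: the new game point is halfway between the previous game
-- point and the randomly chosen base point.
halfway :: Pt -> Pt -> Pt
halfway (x1, y1) (x2, y2) = ((x1 + x2) / 2, (y1 + y2) / 2)

-- Rules 4-6: fold an infinite stream of die rolls (0, 1, or 2) into
-- a stream of successive game points.
chaosGame :: [Pt] -> Pt -> [Int] -> [Pt]
chaosGame basePoints = scanl step
  where step gamePt roll = halfway gamePt (basePoints !! roll)

main :: IO ()
main = do
  let bases = [(0, 0), (1, 0), (0.5, 1)]    -- three arbitrary base points
      rolls = randomRs (0, 2) (mkStdGen 1)  -- fixed seed, so the run is reproducible
      pts   = take 10000 (chaosGame bases (0.3, 0.3) rolls)
  mapM_ print (take 5 pts)  -- plot all of pts to see the gasket emerge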
The number at the top of the Macromedia Flash animation is the total number of game points that have been drawn. Though the underlying ActionScript is relatively simple, the results obtained are striking.
Plaintext source of this program, released under the GPL. | {"url":"http://www.stilldreamer.com/mathematics/chaos_game/","timestamp":"2014-04-18T23:41:06Z","content_type":null,"content_length":"4945","record_id":"<urn:uuid:45235943-b4de-48a3-8059-bf27ff5f02a8>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00361-ip-10-147-4-33.ec2.internal.warc.gz"} |
Prove that four vectors are coplanar
I am given four vectors, A, B, C, and D, and need to show that they are coplanar.
I don't really know what to do. If it were three vectors, I know that I should scalar triple product them showing that the parallelepiped formed has zero volume, but when they add a fourth I'm
not sure what to do.
Re: Prove that four vectors are coplanar
Would it work to show that A, B, and C are linearly dependent (and thus coplanar) and then do the same for B, C, and D?
Re: Prove that four vectors are coplanar
Are the vectors in $\mathbb{R}^3$? If so, you just need to show that the cross product of any two of those four vectors always results in a vector that is parallel (to any other choice of 2
vectors). That is, the set of all possible cross products should be vectors parallel to each other.
Your own idea above should work too, unless there's something counter-intuitive I'm not seeing.
Re: Prove that four vectors are coplanar
General approach that works for any number of vectors in any dimension of vector space:
Any set of vectors are coplanar if and only if the dimension of the subspace they span is less than or equal to 2.
For any list of vectors, the typical computational procedure for finding their linear span is to put them in a matrix - each vector becomes a row - and then row reduce. (That's because the actions of row reducing a matrix are the same as the action of replacing a vector by some linear combination of it and other vectors in the linear span.) Once you've row reduced as far as possible, the non-zero rows remaining are necessarily linearly independent, and hence, when interpreted as vectors, are a basis of the linear span of the initial set of vectors. The number of basis vectors of the linear span is the dimension of the linear span. If you're asking about coplanar, you're asking if the linear span has two or fewer basis vectors.
Thus, put your vectors into a matrix, and then row reduce as far as possible. If the number of non-zero rows is two or less, then your original vectors are coplanar; otherwise, they aren't.
Approach when your vectors are in $\mathbb{R}^{3}$:
Take the cross product of two of your vectors that produces a non-zero result (if that's impossible, then they're all coplanar - in fact, collinear). Call it N. Then all your vectors are coplanar if and only if all of them are in the plane perpendicular to the vector N (you already know the first two are in that plane). A vector is perpendicular to N if and only if its dot product with N is zero.
So in your case, with four vectors A, B, C, D in $\mathbb{R}^{3}$ they're all coplanar if and only if (assuming A and B are chosen so that AxB isn't 0):
<(A x B), C> = <(A x B), D> = 0. (Here, <*,*> is the vector dot product, x is the vector cross product, and the N I referred to is AxB).
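For concreteness, here is that second approach as a small Haskell sketch (my own illustration, not from the thread; it uses a numerical tolerance that exact rational arithmetic would make unnecessary):

type V3 = (Double, Double, Double)

cross :: V3 -> V3 -> V3
cross (a1, a2, a3) (b1, b2, b3) =
  (a2 * b3 - a3 * b2, a3 * b1 - a1 * b3, a1 * b2 - a2 * b1)

dot :: V3 -> V3 -> Double
dot (a1, a2, a3) (b1, b2, b3) = a1 * b1 + a2 * b2 + a3 * b3

-- The vectors are coplanar iff they all lie in the plane perpendicular
-- to N = A x B (assuming A x B is non-zero; otherwise try another pair).
coplanar :: [V3] -> Bool
coplanar (a : b : rest) = all (\v -> abs (dot n v) < 1e-9) rest
  where n = cross a b
coplanar _ = True

-- Example: coplanar [(1,0,0), (0,1,0), (1,1,0), (2,-3,0)] gives True.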
Chapter 8: Games, Bargains, Bluffs and Other Really Hard Stuff
"There are two kinds of people in the world: Johnny Von Neumann and the rest of us."
Attributed to Eugene Wigner, a Nobel Prize-winning physicist
Economics assumes that individuals rationally pursue their own objectives. There are two quite different contexts in which they may do so, one of which turns out to be much easier to analyze than the
other. The easy context is the one where, in deciding what to do, I can safely treat the rest of the world as things rather than people. The hard context is the one where I have to take full account
of the fact that other people out there are seeking their objectives, and that they know that I am seeking my objectives and take account of it in their action, and they know that I know ... and I
know that they know that I know ... and .... .
A simple example of the easy kind of problem is figuring out the shortest route home from my office. The relevant factors—roads, bridges, paths, gates—can be trusted to behave in a predictable
fashion, unaffected by what I do. My problem is to figure out, given what they are, what I should do.
It is still the easy kind of problem if I am looking for the shortest distance in time rather than in space and must include in my analysis the other automobiles on the road. As it happens, those
automobiles have people driving them, and for some purposes that fact is important. But I don't have to take much account of the rational behavior of those people, given that I know its
consequence—lots of cars at 4:30 P.M., many fewer at 2 P.M.—by direct observation. I can do my analysis as if the cars were robots running on a familiar program.
For a simple example of the harder kind of problem, assume I am in a car headed for an intersection with no stop light or stop sign and someone else is in a car on the cross-street, about the same
distance from the intersection. If he is going to slow down and let me cross first, I should speed up, thus decreasing the chance of a collision; if he is going to try to make it through the
intersection first, I should slow down. He faces the same problem, with roles reversed. We may end up both going fast and running into each other, or both going slower and slower until we come to a
stop at the intersection, each politely waiting for the other.
To make the problem more interesting and the behavior more strategic, assume that both I and the other driver are male teenagers. Each of us puts what others might regard as an unreasonably low value
on his own life and an unreasonably high value on proving that he is courageous, resolute, and unwilling to be pushed around. We are playing a variant of the ancient game of "Chicken," a game popular
with adolescent males and great statesmen. Whoever backs down, slows enough so that the other can get through the intersection, loses.
If I am sure he is not going to slow down, it is in my interest to do so, since even an adolescent male would rather lose one game of Chicken than wreck his car and possibly lose his life. If he
knows I am going to slow down, it is in his interest to speed up. Precisely the same analysis applies to him: If he expects me to go fast, he should slow down, and if he is going to slow down, I
should speed up.
This is strategic behavior, behavior in which each person's actions are conditioned on what he expects the other person's actions to be. The branch of mathematics that deals with such problems,
invented by John von Neumann almost sixty years ago, is called game theory. His objective was a mathematical theory that would describe what choices a rational player would make and what the outcome
would be, given the rules of any particular game. His purpose was to better understand not merely games in the conventional sense but economics, diplomacy, political science—every form of human
interaction that involves strategic behavior.
It turned out that solving the general problem was extraordinarily difficult, so difficult that we are still working on it. Von Neumann produced a solution for the special case of two-person
fixed-sum games, games such as chess, where anything that benefits one party hurts the other. But for games such as Chicken, in which some outcomes (a collision) hurt both parties, or games like
democratic voting, in which one group of players can combine to benefit themselves at the expense of other players, he was less successful. He came up with a solution of a sort, but not a very useful
one, since a single game might have anything from zero to an infinite number of solutions, and a single solution might incorporate up to an infinite number of outcomes. Later game theorists have
carried the analysis a little further, but it is still unclear exactly what it means to solve such games and difficult or impossible to use the theory to provide unambiguous predictions of the
outcome of most real-world strategic situations.
Economics, in particular price theory, deals with this problem through prudent cowardice. Wherever possible, problems are set up, the world is modeled, in ways that make strategic behavior
unimportant. The model of perfect competition, for example, assumes an infinite number of buyers and sellers, producers and consumers. From the standpoint of any one of them, his actions have no
significant effect on the others, so what they do is unaffected by what he does, so strategic problems vanish.
This approach does not work very well for the economic analysis of law; however tightly we may close our eyes to strategic behavior, we find ourselves stumbling over it every few steps. Consider our
experience so far. In chapter 2, John was buying an apple that was worth a dollar to him from Mary, to whom it was worth fifty cents. What price must he pay for it? The answer was that it might sell
at any price between fifty cents and a dollar, depending on how good a bargainer each was. A serious analysis of the bargaining—which, at that point, I deliberately omitted—would have led us to
something very much like the game of Chicken, although with lower stakes. Mary insists she won't sell for less than ninety cents, John insists he won't pay more than sixty, and if neither gives in,
the apple remains with Mary, and the potential gain from the trade disappears.
We encountered strategic behavior again in chapters 4 and 5, this time concealed under the name of transaction costs. When one farmer refuses to permit the railroad to throw sparks in the hope of
selling his consent for a large fraction of what the railroad will save by not putting on a spark arrester, he is engaging in strategic behavior, generating what I called a holdout problem. So is the
free rider who, under a different legal rule, prevents farmers from raising enough money to pay the railroad to put on the spark arrester. So is the railroad when it keeps throwing sparks and paying
fines even though a spark arrester would be cheaper, in order to pressure the farmers to switch to clover.
One reason strategic behavior is so important in the economic analysis of law is that it deals with a lot of two-party interactions: litigation, bargaining over permission to breach a contract, and
the like. When I want to buy corn I have my choice of thousands of sellers, but when I want to buy permission to be released from a contract the only possible seller is the person I signed the
contract with. A second reason is that our underlying theory is largely built on the ideas of Coase, transaction costs are central to Coase's analysis, and transaction costs often involve strategic behavior.
Faced with this situation, there are two alternative approaches. One is to bite the bullet and introduce game theory wholesale into our work. That is an approach that some people doing economic
analysis of law have taken. I am not one of them. In my experience, if a game is simple enough so that game theory provides a reasonably unambiguous answer, there are probably other ways of getting that answer.
In most real-world applications of game theory, the answer is ambiguous until you assume away large parts of the problem in the details of how you set it up. You can get mathematical rigor only at
the cost of making real-world problems insoluble. I expect that will remain true until there are substantial breakthroughs in game theory. When I am picking problems to work on, ones that stumped
John von Neumann go at the bottom of the stack.
The alternative approach, and the one I prefer, is to accept the fact that arguments involving strategic behavior are going to be well short of rigorous and try to do the best one can despite that. A
first step in this approach is to think through the logic of games we are likely to encounter in order to learn as much as we can about possible outcomes and how they depend on the details of the
game. Formal game theory is helpful in doing so, although I will not be employing much of it here.
In the next part of the chapter I work through the logic of two games: bilateral monopoly and prisoner's dilemma. Those two, along with closely related variants, describe a large part of the
strategic behavior you will encounter, in this book and in life.
Bilateral Monopoly
Mary has the world's only apple, worth fifty cents to her. John is the world's only customer for the apple, worth a dollar to him. Mary has a monopoly on selling apples, John has a monopoly
(technically, a monopsony, a buying monopoly) on buying apples. Economists describe such a situation as bilateral monopoly. What happens?
Mary announces that her price is ninety cents, and if John will not pay it, she will eat the apple herself. If John believes her, he pays. Ninety cents for an apple he values at a dollar is not much
of a deal—but better than no apple. If, however, John announces that his maximum price is sixty cents and Mary believes him, the same logic holds. Mary accepts his price, and he gets most of the
benefit from the trade.
This is not a fixed-sum game. If John buys the apple from Mary, the sum of their gains is fifty cents, with the division determined by the price. If they fail to reach an agreement, the summed gain
is zero. Each is using the threat of the zero outcome to try to force a fifty cent outcome as favorable to himself as possible. How successful each is depends in part on how convincingly he can
commit himself, how well he can persuade the other that if he doesn't get his way the deal will fall through.
Every parent is familiar with a different example of the same game. A small child wants to get her way and will throw a tantrum if she doesn't. The tantrum itself does her no good, since if she
throws it you will refuse to do what she wants and send her to bed without dessert. But since the tantrum imposes substantial costs on you as well as on her, especially if it happens in the middle of
your dinner party, it may be a sufficiently effective threat to get her at least part of what she wants.
Prospective parents resolve never to give in to such threats and think they will succeed. They are wrong. You may have thought out the logic of bilateral monopoly better than your child, but she has
hundreds of millions of years of evolution on her side, during which offspring who succeeded in making parents do what they want, and thus getting a larger share of parental resources devoted to
them, were more likely to survive to pass on their genes to the next generation of offspring. Her commitment strategy is hardwired into her; if you call her bluff, you will frequently find that it is
not a bluff. If you win more than half the games and only rarely end up with a bargaining breakdown and a tantrum, consider yourself lucky.
Herman Kahn, a writer who specialized in thinking and writing about unfashionable topics such as thermonuclear war, came up with yet another variant of the game: the Doomsday Machine. The idea was
for the United States to bury lots of very dirty thermonuclear weapons under the Rocky Mountains, enough so that if they went off, their fallout would kill everyone on earth. The bombs would be
attached to a fancy Geiger counter rigged to set them off if it sensed the fallout from a Russian nuclear attack. Once the Russians know we have a Doomsday Machine we are safe from attack and can
safely scrap the rest of our nuclear arsenal.
The idea provided the central plot device for the movie Doctor Strangelove. The Russians build a Doomsday Machine but imprudently postpone the announcement—they are waiting for the premier's
birthday—until just after an American Air Force officer has launched a unilateral nuclear attack on his own initiative. The mad scientist villain was presumably intended as a parody of Kahn.
Kahn described a Doomsday Machine not because he thought we should build one but because he thought we already had. So had the Russians. Our nuclear arsenal and theirs were Doomsday Machines with
human triggers. Once the Russians have attacked, retaliating does us no good—just as, once you have finally told your daughter that she is going to bed, throwing a tantrum does her no good. But our
military, knowing that the enemy has just killed most of their friends and relations, will retaliate anyway, and the knowledge that they will retaliate is a good reason for the Russians not to
attack, just as the knowledge that your daughter will throw a tantrum is a good reason to let her stay up until the party is over. Fortunately, the real-world Doomsday Machines worked, with the
result that neither was ever used.
For a final example, consider someone who is big, strong, and likes to get his own way. He adopts a policy of beating up anyone who does things he doesn't like, such as paying attention to a girl he
is dating or expressing insufficient deference to his views on baseball. He commits himself to that policy by persuading himself that only sissies let themselves get pushed around—and that not doing
what he wants counts as pushing him around. Beating someone up is costly; he might get hurt and he might end up in jail. But as long as everyone knows he is committed to that strategy, other people
don't cross him and he doesn't have to beat them up.
Think of the bully as a Doomsday Machine on an individual level. His strategy works as long as only one person is playing it. One day he sits down at a bar and starts discussing baseball with a
stranger—also big, strong, and committed to the same strategy. The stranger fails to show adequate deference to his opinions. When it is over, one of the two is lying dead on the floor, and the other
is standing there with a broken beer bottle in his hand and a dazed expression on his face, wondering what happens next. The Doomsday Machine just went off.
With only one bully the strategy is profitable: Other people do what you want and you never have to carry through on your commitment. With lots of bullies it is unprofitable: You frequently get into
fights and soon end up either dead or in jail. As long as the number of bullies is low enough so that the gain of usually getting what you want is larger than the cost of occasionally having to pay
for it, the strategy is profitable and the number of people adopting it increases. Equilibrium is reached when gain and loss just balance, making each of the alternative strategies, bully or
pushover, equally attractive. The analysis becomes more complicated if we add additional strategies, but the logic of the situation remains the same.
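To make the balance point explicit, here is the standard textbook hawk/dove calculation (my gloss, with the conventional notation rather than Friedman's): let V be the value of getting your way and C > V the cost of a fight. A hawk meeting a hawk expects (V - C)/2, a hawk meeting a dove gets V, a dove meeting a hawk gets 0, and two doves split the prize for V/2 each. If a fraction p of the population plays hawk, the two strategies do equally well when

p (V - C)/2 + (1 - p) V = p * 0 + (1 - p) V/2,

which solves to p = V/C. Raising the cost C of a brawl, for instance through a criminal penalty, lowers the equilibrium fraction of bullies, which is exactly the deterrence point taken up next.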
This particular example of bilateral monopoly is relevant to one of the central disputes over criminal law in general and the death penalty in particular: Do penalties deter? One reason to think they
might not is that the sort of crime I have just described, a barroom brawl ending in a killing—more generally, a crime of passion—seems to be an irrational act, one the perpetrator regrets as soon as
it happens. How then can it be deterred by punishment?
The economist's answer is that the brawl was not chosen rationally but the strategy that led to it was. The higher the penalty for such acts, the less profitable the bully strategy. The result will
be fewer bullies, fewer barroom brawls, and fewer "irrational" killings. How much deterrence that implies is an empirical question, but thinking through the logic of bilateral monopoly shows us why
crimes of passion are not necessarily undeterrable.
The Prisoner's Dilemma
Two men are arrested for a burglary. The District Attorney puts them in separate cells. He goes first to Joe. He tells him that if he confesses and Mike does not, the DA will drop the burglary charge
and let Joe off with a slap on the wrist—three months for trespass. If Mike also confesses, the DA cannot drop the charge but will ask the judge for leniency; Mike and Joe will get two years each.
If Joe refuses to confess, the DA will not feel so friendly. If Mike confesses, Joe will be convicted, and the DA will ask for the maximum possible sentence. If neither confesses, the DA cannot
convict them of the burglary, but he will press for a six-month sentence for trespass, resisting arrest, and vagrancy.
After explaining all of this to Joe, the DA goes to Mike's cell and gives the same speech, with names reversed. Table 8-1 shows the matrix of outcomes facing Joe and Mike.
Joe reasons as follows:
If Mike confesses and I don't, I get five years; if I confess too, I get two years. If Mike is going to confess, I had better confess too.
If neither of us confesses, I go to jail for six months. If Mike stays silent and I confess, I only get three months. So if Mike is going to stay silent, I am better off confessing. In fact, whatever
Mike does I am better off confessing.
Table 8-1
The payoff matrix for prisoner's dilemma: Each cell of the table shows the result of choices by the two prisoners; Joe's sentence is first, Mike's second

                     Mike confesses        Mike stays silent
Joe confesses        2 years, 2 years      3 months, 5 years
Joe stays silent     5 years, 3 months     6 months, 6 months
Joe calls for the guard and asks to speak to the DA. It takes a while; Mike has made the same calculation, reached the same conclusion, and is in the middle of dictating his confession.
Both Joe and Mike have acted rationally, and both are, as a result, worse off. By confessing they each get two years; if they had kept their mouths shut, they each would have gotten six months. That
seems an odd consequence for rational behavior.
The explanation is that Joe is choosing only his strategy, not Mike's. If Joe could choose between the lower right-hand cell of the matrix and the upper left-hand cell, he would choose the former; so
would Mike. But those are not the choices they are offered. Mike is choosing a column, and the left-hand column dominates the right-hand column; it is better whichever row Joe chooses. Joe is
choosing a row, and the top row dominates the bottom.
Mike and Joe expect to continue their criminal career and may find themselves in the same situation again. If Mike double-crosses Joe this time, Joe can pay him back next. Intuitively, it seems that
prisoner's dilemma many times repeated, with the same players each time, should produce a more attractive outcome for the players than a single play.
Perhaps—but there is an elegant if counterintuitive argument against it. Suppose Joe and Mike both know that they are going to play the game exactly twenty times. Each therefore knows that on the
twentieth play future retaliation will no longer be an option. So the final play is an ordinary game of prisoner's dilemma, with the ordinary result: Both prisoners confess. Since they are both going
to confess on the twentieth round, neither has a threat available to punish betrayal on the nineteenth round, so that too is an ordinary game and leads to mutual confession. Since they are going to
confess on the nineteenth .... . The whole string of games comes unraveled, and we are back with the old result. Joe and Mike confess every time.
Many people find that result deeply counter-intuitive, in part because they live in a world where people have rationally chosen to avoid that particular game whenever possible. People engaged in
repeat relationships requiring trust take care not to determine the last play in advance, or find ways of committing themselves to retaliate, if necessary, even on the last play. Criminals go to
considerable effort to raise the cost to their co-workers of squealing and lower the cost of going to jail for refusing to squeal. None of that refutes the logic of prisoner's dilemma; it simply
means that real prisoners, and other people, are sometimes playing other games. When the net payoffs to squealing have the structure shown in table 8-1, the logic of the game is compelling. Prisoners confess.
For a real prisoner's dilemma involving a controversial feature of our legal system, consider plea bargaining:
The prosecutor calls up the defense lawyer and offers a deal. If the client will plead guilty to second-degree murder, the District Attorney will drop the charge of first-degree murder. The accused
will lose his chance of acquittal, but he will also lose the risk of going to the chair.
Such bargains are widely criticized as a way of letting criminals off lightly. Their actual effect may well be the opposite—to make punishment more, not less, severe. How can this be? A rational
criminal will only accept a plea bargain if doing so makes him better off, produces, on average, a less severe punishment than going to trial. Does it not follow that the existence of plea bargaining
must make punishment less severe?
To see why that is not true, consider the situation of a hypothetical District Attorney and the defendants he prosecutes:
There are a hundred cases per year; the DA has a budget of a hundred thousand dollars. With only a thousand dollars to spend investigating and prosecuting each case, half the defendants will be
acquitted. But if he can get ninety defendants to cop pleas, the DA can concentrate his resources on the ten who refuse, spend ten thousand dollars on each case, and get a conviction rate of 90 percent.
A defendant faces a 90 percent chance of conviction if he goes to trial and makes his decision accordingly. He will reject any proposed deal that is worse for him than a 90 percent chance of
conviction but may well accept one that is less attractive than a 50 percent chance of conviction, leaving him worse off than he would be in a world without plea bargaining. All defendants would be
better off if none of them accepted the DA's offer, but each is better off accepting. They are caught in a many-player version of the prisoner's dilemma, alias the public good problem.
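A back-of-the-envelope sketch of the defendant's choice, written as a few lines of Haskell with a hypothetical ten-year trial sentence (the chapter specifies only the conviction probabilities):

-- Expected sentence in years: probability of conviction times the
-- sentence if convicted; acquittal counts as zero.
expectedSentence :: Double -> Double -> Double
expectedSentence p sentence = p * sentence

main :: IO ()
main = do
  let s = 10                                  -- hypothetical sentence at trial
      ifAllRefuse   = expectedSentence 0.5 s  -- DA spreads his budget: 5 years
      ifOthersPlead = expectedSentence 0.9 s  -- DA concentrates on you: 9 years
      pleaOffer     = 7                       -- any offer under 9 beats going to trial
  print (ifAllRefuse, ifOthersPlead, pleaOffer)

Each defendant rationally takes the seven-year plea, since it beats an expected nine years at trial; yet if every defendant had refused, each would have faced an expected five years. That is the many-player prisoner's dilemma in numbers.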
Prisoner's dilemma provides a simple demonstration of a problem that runs through the economic analysis of law: Individual rationality does not always lead to group rationality. Consider air
pollution, not by a few factories but by lots of automobiles. We would all be better off if each of us installed a catalytic converter. But if I install a converter in my car, I pay all of the cost
and receive only a small fraction of the benefit, so it is not worth doing. In much the same fashion everybody may be better off if nobody steals, since we are all potential victims, but my decision
to steal from you has very little effect on the probability that someone else will steal from me, so it may be in my interest to do it.
Constructing efficient legal rules is largely an attempt to get out of prisoner's dilemmas: criminal penalties to change the incentives of potential thieves, pollution laws to change the incentives
of potential polluters. We may not be able to succeed completely, but we can at least try, whenever possible, to choose rules under which individual rationality leads to group rationality instead of
rules that produce the opposite result.
I started this chapter with a very simple example of strategic behavior: two motorists approaching the same intersection at right angles. As it happens, there is a legal rule to solve that problem,
one that originated to solve the analogous problem of two ships on converging courses. The rule is "Starboard right of way." The ship, or the car, on the right has the right of way, meaning that the
other is legally obliged to slow down and let him go through the intersection first.
So far our discussion of games has yielded only two clear conclusions. One involves a version of bilateral monopoly in which each player precommits to his demands, pairs of players are selected at
random, and the outcome depends on what strategies that particular pair has precommitted to. That is the game of bullies and barroom brawls, the game sociobiologists have christened "hawk/dove." Our
conclusion was that increasing the cost of bargaining breakdown, making a fight between two bullies or two hawks more costly, decreases the fraction of players who commit to the bully strategy. That
is why we expect punishment for crimes of passion to deter. Our other clear conclusion was that rational players of a game with the payoff structure of prisoner's dilemma will betray each other.
Even these results become less clear when we try to apply them to real-world situations. Real-world games do not come with payoff matrices printed on the box. Prisoner's dilemma leads to mutual
betrayal, but that is a reason for people to modify the game, using commitment, reputation, altruism, and a variety of other devices to make it in each party's interest to cooperate instead of
betraying. So applying the theoretical analysis to the real world is still a hard problem.
We can draw some other and less rigorous conclusions from our discussion, however. It seems clear that in bilateral monopoly commitment is an important tactic, so we can expect players to look for
ways of committing themselves to stick to their demands. A small child says, "I won't pay more than sixty cents for your apple, cross my heart and hope to die." The CEO of a firm engaged in takeover
negotiations gives a speech arguing that if he offers more than ten dollars a share for the target, he will be overpaying, and his stockholders should fire him.
Individuals spend real resources on bargaining: time, lawyers' fees, costs of commitment, and risk of bargaining breakdown. The amount they will be willing to spend should depend on the amount at
stake—the same problem we encountered in our earlier discussion of rent seeking. So legal rules that lead to bilateral monopoly games with high stakes should be avoided where possible.
Consider, for a simple example, the question of what legal rules should apply to a breach of contract. Suppose I have agreed to sell you ten thousand customized cams, with delivery on March 30, for a
price of a hundred thousand dollars. Late in February my factory burns down. I can still, by extraordinary efforts and expensive subcontracting, fulfill the contract, but the cost of doing so has
risen from ninety thousand dollars to a million dollars.
One possible legal rule is specific performance: I signed the contract, I must deliver the cams. Doing so is inefficient; the cams are worth only $110,000 to you. The obvious solution is for us to
bargain; I pay you to permit me to cancel the contract.
Agreement provides you a net gain so long as I pay you more than the $10,000 you expected to make by buying the cams. It provides me a net gain so long as I pay you less than the $900,000 I will lose
if I have to sell you the cams. That leaves us with a very large bargaining range to fight over, which is likely to lead to large bargaining costs, including some risk of a very expensive breakdown:
We cannot agree on a price, you make me deliver the cams, and between us we are $890,000 poorer than if you had let me out of the contract. That suggests one reason why courts are reluctant to
enforce specific performance of contracts, usually preferring to permit breach and award damages, calculated by the court or agreed on in advance by the parties.
For another example, suppose a court finds that my polluting oil refinery is imposing costs on my downwind neighbor. One possibility is to permit the neighbor to enjoin me, to forbid me from
operating the refinery unless I can do it without releasing noxious vapors. An alternative is to refuse an injunction but permit the neighbor to sue for damages.
If the damage to the neighbor from my pollution is comparable to the cost to me of preventing it, the court is likely to grant an injunction, leaving me with the alternative of buying permission to
pollute from my neighbor or ending my pollution. If the cost of stopping pollution is much greater than the damage the pollution does, the court may refuse to grant an injunction, leaving my neighbor
free to sue for damages.
If the court granted an injunction in such a situation, the result would be a bilateral monopoly bargaining game with a very large bargaining range. I would be willing, if necessary, to pay anything
up to the (very large) cost to me of controlling my pollution; you would be willing to accept, if necessary, anything more than the (small) damage the pollution does to you. Where between those
points we ended up would depend on how well each of us bargained, and each of us would have an incentive to spend substantial resources trying to push the final agreement toward his preferred end of
the range.
Further Reading
Readers interested in a somewhat more extensive treatment of game theory will find it in chapter 11 of my Price Theory and Hidden Order. Readers interested in a much more extensive treatment will
find it in Game Theory and the Law by Douglas G. Baird, Robert H. Gertner and Randal C. Picker, Cambridge, Mass: Harvard University Press 1994. | {"url":"http://www.daviddfriedman.com/Laws_Order_draft/laws_order_ch_8.htm","timestamp":"2014-04-19T20:21:04Z","content_type":null,"content_length":"33765","record_id":"<urn:uuid:546df94c-a3eb-4be0-9f2f-9b419de4f3e9>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00545-ip-10-147-4-33.ec2.internal.warc.gz"}
Planar Diagram
The PD (Planar Diagram) form for a virtual knot diagram is obtained by numbering the edges of the diagram in ascending order, beginning at some point and travelling along the diagram. Virtual
crossings are ignored, and after each non-virtual crossing is encountered, the new edge is numbered. Then, at each crossing, list the edges, beginning with the edge incoming on the under strand, and
proceeding counter-clockwise. Crossings are indicated like X[i,j,k,l], where i, j, k, and l are the four edges at that crossing.
For example, the PD of virtual knot 2.1, seen below, is X[2,4,3,1], X[3,1,4,2].
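(As an illustration only, not a fragment of any actual knot-theory package, the notation translates directly into a small Haskell data type:)

-- One crossing X[i,j,k,l]: four edge labels, starting with the edge
-- incoming on the under strand and proceeding counter-clockwise.
data Crossing = X Int Int Int Int deriving (Show, Eq)

type PlanarDiagram = [Crossing]

-- The PD form of virtual knot 2.1 from the example above.
knot2_1 :: PlanarDiagram
knot2_1 = [X 2 4 3 1, X 3 1 4 2]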
See also: Planar Diagrams at Dror Bar-Natan's Knot Atlas. | {"url":"http://www.math.toronto.edu/drorbn/Students/GreenJ/pd.html","timestamp":"2014-04-16T10:23:12Z","content_type":null,"content_length":"1623","record_id":"<urn:uuid:928d2ba5-7e10-42eb-a2ee-7e8b5b6a36e7>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00462-ip-10-147-4-33.ec2.internal.warc.gz"} |
Which of the following is the equation of the axis of symmetry of the quadratic equation y = 2(x - 4)^2 + 7? Answers: A. x=7 B. x=-4 C. x=4 D. x=-7
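For the record: the equation is already in vertex form y = a(x - h)^2 + k with h = 4 and k = 7, and the axis of symmetry of a parabola in vertex form is the vertical line x = h. So the answer is C, x = 4.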
| {"url":"http://openstudy.com/updates/500c7ec8e4b0549a893064e4","timestamp":"2014-04-20T11:14:34Z","content_type":null,"content_length":"46577","record_id":"<urn:uuid:c09e7f69-058b-4ba5-b286-40b49db8f887>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00051-ip-10-147-4-33.ec2.internal.warc.gz"}
Little Falls, NJ ACT Tutor
Find a Little Falls, NJ ACT Tutor
Hello, my name is Brigid. I am currently a student at NJCU and I am a biology major. I'm on scholarship and my strengths are in math and science.
8 Subjects: including ACT Math, biology, algebra 2, elementary math
...I am a pre-med student in my second year of college and I have 3 years of tutoring experience. I currently work at the Math Center in South Orange, NJ. I have also tutored elementary school
kids in math and writing.
29 Subjects: including ACT Math, chemistry, English, reading
Hello, my goal in tutoring is to develop your skills and provide tools to achieve your goals. My teaching experience includes varied levels of students (high school, undergraduate and graduate students). For students whose goal is to achieve high scores on standardized tests, I focus mostly on tips a...
students).For students whose goal is to achieve high scores on standardized tests, I focus mostly on tips a...
15 Subjects: including ACT Math, chemistry, calculus, statistics
...I work to the best of my ability to help all the students access knowledge in the class. I work as a substitute teacher in three different districts. I can teach students math and help them
become better at the subject.
3 Subjects: including ACT Math, geometry, algebra 1
...I have continued playing after grad school, to this day. I took drawing classes at Wesleyan University. I took four years of advanced drawing and painting at Harvard-Westlake.
63 Subjects: including ACT Math, reading, English, geometry
Related Little Falls, NJ Tutors
Little Falls, NJ Accounting Tutors
Little Falls, NJ ACT Tutors
Little Falls, NJ Algebra Tutors
Little Falls, NJ Algebra 2 Tutors
Little Falls, NJ Calculus Tutors
Little Falls, NJ Geometry Tutors
Little Falls, NJ Math Tutors
Little Falls, NJ Prealgebra Tutors
Little Falls, NJ Precalculus Tutors
Little Falls, NJ SAT Tutors
Little Falls, NJ SAT Math Tutors
Little Falls, NJ Science Tutors
Little Falls, NJ Statistics Tutors
Little Falls, NJ Trigonometry Tutors
Nearby Cities With ACT Tutor
Cedar Grove, NJ ACT Tutors
Fair Lawn ACT Tutors
Fairfield, NJ ACT Tutors
Fairlawn, NJ ACT Tutors
Hawthorne, NJ ACT Tutors
Lincoln Park, NJ ACT Tutors
Lyndhurst, NJ ACT Tutors
North Caldwell, NJ ACT Tutors
Nutley ACT Tutors
Paterson, NJ ACT Tutors
Singac, NJ ACT Tutors
Totowa ACT Tutors
Verona, NJ ACT Tutors
Wayne, NJ ACT Tutors
Woodland Park, NJ ACT Tutors | {"url":"http://www.purplemath.com/Little_Falls_NJ_ACT_tutors.php","timestamp":"2014-04-20T19:57:24Z","content_type":null,"content_length":"23527","record_id":"<urn:uuid:180a132c-93ef-40b7-bc10-bd533ad91fba>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00103-ip-10-147-4-33.ec2.internal.warc.gz"} |
Is it possible to take "limits up to homotopy"?
Before I begin, an apology: it's been a while since I've done much analysis, so I might be misusing (or just missing) terminology.
I have a chain complex $(V_\bullet,\partial)$ of topological vector spaces (in particular, the differential is continuous). My vector spaces are Cauchy-complete in the following sense: If $x_n \in
V_i$ is a sequence of vectors of fixed degree such that $\lim_{\min(m,n) \to \infty} (x_n - x_m) = 0$, then $\lim_{n\to \infty} x_n$ converges.
Actually, I have a particular sequence $x_n \in V_0$ that I care about. Unfortunately, $\lim_{n\to \infty} x_n$ does not converge. But it is "a Cauchy sequence up to homotopy." Specifically, for any
two $m,n$, I have a specific vector $y_{m,n} \in V_1$ such that: $$(*) \quad\quad\quad \lim_{\min(m,n) \to \infty} (x_n - x_m - \partial(y_{m,n})) = 0 $$ Also, I should mention that each individual
$x_n$ is not closed, so I can't talk about their classes in homology. But I do know that $\lim_{n\to \infty}(\partial x_n) = 0$. (Going in the other direction, none of the natural limits of the $y_
{m,n}$ converge, but I can prove variations of equation $(*)$ for them too, and so on ad infinitum.)
My question is whether $\lim_{n\to \infty} x_n$ exists in some homotopical sense, and how to define it. Of course, I don't expect that there is a specific (closed) element $x_\infty = \lim x_n$. But
I would expect that there is some natural set of such elements $x_\alpha$, for $\alpha$ ranging over some indexing set, along with specific homotopies $y_{\alpha,\beta}$ satisfying $\partial(y_{\
alpha,\beta}) = x_\beta - x_\alpha$.
If not, are there additional reasonable conditions that would assure such a limit? I have pretty good control over my particular example. For instance, much stronger than the Cauchy-completeness, I
know that if $\lim_{n\to \infty} (x_{n+1} - x_n) = 0$, then $\lim_{n\to \infty} x_n$ converges. The reason I know things like that is because the specific vector spaces $V_\bullet$ that I care about
are essentially power-series algebras over $\mathbb Q$. So if there's some natural condition that's necessary for convergence, I can easily check it, and it's probably satisfied.
homological-algebra topological-vector-spaces chain-complexes
Sorry if this is silly, but: let $B_* \subset V_*$ be the subspace given by the image of $\partial_{*+1}$. Then, consider the quotient $B_*^\perp = V_*/B_*$ and let $\rho_*:V_* \to B_*^\perp$ be
the obvious quotient map. What exactly is the difference between $x_n \subset V_\ast$ "converging up to chain-homotopy" and standard convergence of $\pi_*(x_n)$ to zero in $B_*^\perp$? – Vidit
Nanda Jul 25 '13 at 4:26
@ViditNanda: Um, good question. For some reason, I'm very trained to think that non-closed things aren't "real" in some sense. Anyway, the only difference between my data and a sequence that
converges in $V/B$ is that I have a little extra data, e.g. the $y$s. – Theo Johnson-Freyd Jul 25 '13 at 19:18
One thing I didn't explicitly ask for, but that one should always hope for, is good behavior under homotopy equivalences of complexes. – Theo Johnson-Freyd Jul 25 '13 at 19:20
This is a fascinating question, but could you clean it up a bit? Particularly, (1) the use of $x_n$ as both a sequence and the $n$-th element of that sequence makes it hard to parse, and (2) it'd
help to know what variations on equation $*$ hold for the $y$'s. The importance of (2) is that we have no control whatsoever on the $y$s as stated, so for instance things can get really hideous
and Cantor-set like. Finally, what is your index-set on the chain complex? Wouldn't *everything* be closed in $V_0$ if you use $\mathbb{N}$? – Vidit Nanda Jul 27 '13 at 15:24
Determinant systems
Thank you, guys. But when it comes to math in English I'm not good; actually, I'm not understanding this very much. I need someone to solve it in a mathematical way.
Do you know how to form a matrix equation from a system of equations?
3x+y+2z=4 (1)
2x+3y-z=4 (2)
2x-y-z=0 (3)
If you take equation 2 and subtract that from equation 3, what equation are you left with?
Similarly, if you take equation 1 and add that to 2*equation 3, what equation are you left with? | {"url":"http://www.physicsforums.com/showthread.php?p=4209065","timestamp":"2014-04-21T04:44:44Z","content_type":null,"content_length":"33994","record_id":"<urn:uuid:bc12b220-7ef7-4779-9797-98f40216ea74>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00280-ip-10-147-4-33.ec2.internal.warc.gz"} |
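Since the thread title mentions determinants, here is a quick Haskell sketch that solves the same system by Cramer's rule (my own check of the arithmetic, not the elimination route the hints point toward):

-- 3x3 determinant by cofactor expansion along the first row.
det3 :: [[Double]] -> Double
det3 [[a, b, c], [d, e, f], [g, h, i]] =
  a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)
det3 _ = error "det3 expects a 3x3 matrix"

-- Replace column j of a matrix with the right-hand-side vector.
replaceCol :: Int -> [Double] -> [[Double]] -> [[Double]]
replaceCol j = zipWith (\r row -> take j row ++ [r] ++ drop (j + 1) row)

main :: IO ()
main = do
  let a   = [[3, 1, 2], [2, 3, -1], [2, -1, -1]]
      rhs = [4, 4, 0]
      d   = det3 a   -- -28
  print [ det3 (replaceCol j rhs a) / d | j <- [0, 1, 2] ]
  -- prints approximately [0.714, 1.0, 0.429], i.e. x = 5/7, y = 1, z = 3/7

(And for the hints themselves: subtracting equation (3) from equation (2) leaves 4y = 4, and adding twice equation (3) to equation (1) leaves 7x - y = 4.)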
Integrate k^(n-1) / prod_{i=1..n} (k^2 + x_i^2) dk between 0 and infinity, with x_i constants and n >= 1?
[some formatting tweaked, and the question copied from the title to the main body, by YC]
I've been struggling a lot to calculate this integral.
$$ \int_0^\infty \frac{k^{n-1}}{\prod_{i=1}^n (k^2+ x_i^2)}\; dk $$ where $x_i$ are constants and $n\geq 1$.
I did the calculation for n=1,2,3,4, with the hope of identifying some pattern and then finding the result by induction. Here is what I got:
• n=1: $I = \dfrac{\pi}{2\,|x_1|}$
• n=2: $I = \dfrac{1}{2}\cdot\dfrac{1}{x_2^2 - x_1^2}\log\dfrac{x_2^2}{x_1^2}$
• n=3: $I = \dfrac{\pi}{2}\cdot\dfrac{|x_1|\,(x_2^2-x_3^2) + |x_2|\,(x_3^2-x_1^2) + |x_3|\,(x_1^2-x_2^2)}{(x_2^2-x_3^2)\,(x_3^2-x_1^2)\,(x_1^2-x_2^2)}$
• n=4: $I = \dfrac{1}{2}\left[A_1\log(x_1^2) + A_2\log(x_2^2) + A_3\log(x_3^2) + A_4\log(x_4^2)\right]$, where $A_i = x_i^2\Big/\prod_{j\neq i}(x_j^2-x_i^2)$
This makes me think that the result depends on whether n is even or odd; that is, we would have a form in $\log(\cdot)$ for n even, and something in $\pi/2$ for n odd?
Could you please help me here? What is the correct result and how to get it?
Your help is so much appreciated, many many thanks in advance! Elise
ca.analysis-and-odes integration
closed as off-topic by Ricardo Andrade, Chris Godsil, Ryan Budney, Andrey Rekalo, Dmitri Pavlov Jan 20 at 16:19
This question appears to be off-topic. The users who voted to close gave this specific reason:
• "This question does not appear to be about research level mathematics within the scope defined in the help center." – Ricardo Andrade, Chris Godsil, Andrey Rekalo, Dmitri Pavlov
If this question can be reworded to fit the rules in the help center, please edit the question.
1 Answer
You are correct. Use the partial fraction decomposition: http://en.wikipedia.org/wiki/Partial_fraction For example, if $n=4$, the decomposition is (over the rationals):
$$\begin{array}{l} \dfrac{ck}{(k^2+c)(-c+a)(-c+b)(-d+c)} \;-\\ \dfrac{dk}{(k^2+d)(-d+a)(-d+b)(-d+c)} \;-\\ \dfrac{bk}{(k^2+b)(-b+a)(-c+b)(-d+b)} \;+\\ \dfrac{ak}{(k^2+a)(-b+a)(-c+a)(-d+a)} \end{array}$$
where $a=x_1^2, b=x_2^2, \ldots$. I guess you can get the pattern. For odd $n$ there is no $k$ in the numerator. This is the cause of the difference you noticed.
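A quick numerical sanity check of the n = 2 closed form, as a throwaway Haskell sketch (midpoint rule on a truncated domain; the integrand decays like $k^{-3}$, so the tail beyond $k = 200$ is negligible):

main :: IO ()
main = do
  let f k = k / ((k ^ 2 + 1) * (k ^ 2 + 4))    -- x1 = 1, x2 = 2
      h = 0.001
      n = 200000 :: Int                        -- integrate out to k = 200
      numeric = sum [ f (h * (fromIntegral i + 0.5)) * h | i <- [0 .. n - 1] ]
      closedForm = log (4 / 1) / (2 * (4 - 1)) -- the n = 2 formula above
  print (numeric, closedForm)                  -- both come out near 0.23105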
Thank you so much for your insights, much appreciated :-) Best, Elise – Payze Apr 2 '11 at 11:47 | {"url":"http://mathoverflow.net/questions/60334/integrate-kn-1-prod-i-1-n-k2x-i2-dk-between-0-and-infinity-wi?sort=newest","timestamp":"2014-04-18T21:11:25Z","content_type":null,"content_length":"47461","record_id":"<urn:uuid:c0fe4d93-1602-4c81-9e80-cd7a47c9f4f2>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00424-ip-10-147-4-33.ec2.internal.warc.gz"}
Praise, Curse, and Recurse
Let's remember the words of Princess Leia, in the Star Wars Holiday Special:
We celebrate a day of peace. A day of harmony. A day of joy we can all share together joyously. A day that takes us through the darkness. A day that leads us into might. A day that makes us want
to celebrate the light. A day that brings the promise that one day, we'll be free to live, to laugh, to dream, to grow, to trust, to love, to be.
Happy Life Day, everyone! Or whatever tradition you celebrate.
I am not planning to work on any more entries until early January, so I just wanted say that I am extremely grateful for all the comments and suggestions I've received on my blog entries. With two
babies at home I don't get much time to study textbooks or tutorials, although I try, and I don't get much dedicated coding time. But the suggestions and concrete examples have helped me make good
use of what free time I do have. Thanks to all of you for your support learning this exciting language!
Let's get back to our CA generator. Literate Haskell follows:
Last time we defined a function to generate the next state of a given
cell in our cellular universe, given a rule number and a tuple consisting
of the current state of the cell to the left, the cell itself, and the
cell to the right.
>import Data.Bits
>genNextBit :: Int -> ( Bool, Bool, Bool ) -> Bool
>genNextBit rulenum ( left, center, right ) = rulenum `testBit` idx
> where idx = ( if left then (4::Int) else (0::Int) ) .|.
> ( if center then (2::Int) else (0::Int) ) .|.
> ( if right then (1::Int) else (0::Int) )
Recall that we can use automatic currying to make a rule-applying
function like so:
>rule_30 = genNextBit 30
We can ask GHCi for the type:
:type rule_30
rule_30 :: (Bool, Bool, Bool) -> Bool
I've put it off while I work on the rules, but it is time to figure out
how to actually represent our CA universe. Let's start by using a list.
I know that I'm going to write a number of inefficient functions, and
do evil things like take the length of lists a lot, but let's suspend all
concerns about efficiency over to a future discussion and consider this
purely a proof-of-concept.
Our inital universe at time zero has one cell set to True:
>initial_universe = [True]
But that isn't quite the right representation for the universe, because
it implies that our whole universe is one cell in size. We can't even
apply our rule once because there is no left cell and right cell!
Really, we want to pretend that we have an _infinite_ universe; at
time zero, all the cells to the left and right hold False. Remember,
Haskell is so powerful that it can traverse an infinite list in only
0.0003 seconds! Well, if you don't evaluate the whole thing, that is.
Taking advantage of lazy evaluation, you can define all kinds of
infinite structures. This construct will give us an infinite list of
False values:
>allFalse :: [Bool]
>allFalse = False : allFalse
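(An aside: the standard Prelude already provides exactly this infinite list as repeat False, and take n allFalse is just the library function replicate n False. The equivalent definition, shown without the literate markers since it duplicates the one above:)

allFalse' :: [Bool]
allFalse' = repeat False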
We don't want to evaluate allFalse, but we can partially evaluate it
using a function like take. So can we represent our universe like this?
>genInfiniteUniverse :: [Bool] -> [Bool]
>genInfiniteUniverse known_universe = allFalse ++ known_universe ++ allFalse
Let's try it:
take 10 ( genInfiniteUniverse initial_universe )
Nope! Since the left-hand side of the universe is infinite, we will
never reach the middle element, no matter how far we travel from the
start of the list!
That's no good. However, we can do it another way. We'll allow our
universe to be expanded on demand on the left and right sides:
>expandUniverse :: Int -> [Bool] -> [Bool]
>expandUniverse expand_by known_universe =
> ( take expand_by allFalse ) ++ known_universe ++ ( take expand_by allFalse )
expandUniverse 3 initial_universe
[False,False,False,True,False,False,False]
We can use the expandUniverse function to expand our initial universe
out to a standardized width before we start applying the rules.
First, here's a routine to stringify a universe for display:
>boolToChar :: Bool -> Char
>boolToChar True = '#'
>boolToChar False = ' '
>stringifyUniverse :: [Bool] -> String
>stringifyUniverse ls = map boolToChar ls
Now our infrastructure is in place, so let's figure out how to apply
our generator rule. We know that we want to start with our initial
universe. Let's expand it to a fixed size. This will give us enough
elements to start making left/center/right tuples out of each consecutive
set of three cells. Each tuple is then used to look up the next state
of the cell at the center; this will become an element in our next
universe. Then we move to the next cell (not three cells down). This
means that the tuples overlap.
Let's make the tuples. We have to do some thinking here and consider
all the cases; the behavior isn't immediately obvious. The following
almost works:
universeToTuples :: [Bool] -> [(Bool, Bool, Bool)]
universeToTuples universe | length universe >= 3 =
( universe !! 0, universe !! 1, universe !! 2 ) :
universeToTuples ( tail universe )
universeToTuples universe = []
universeToTuples [False, True, True, True, False]
[(False,True,True),(True,True,True),(True,True,False)]
But it isn't quite right; it leaves off the end cases; when we apply
our rules, the intermediate representation of the universe as a list
of tuples to look up cell mappings will shrink. We actually want the
following tuples:
where the first element of the list is considered as if it was just
to the right of an implied False, and the last element is considered
as if it was just to the left of another implied False. This sounds
like another place we can use our universe expander:
>universeToTuples :: [Bool] -> [(Bool, Bool, Bool)]
>universeToTuples [] = []
>universeToTuples universe = tupleize $ expandUniverse 1 universe
> where
> tupleize xs =
> if length xs > 3 then tuple3 xs : tupleize ( tail xs )
> else [ tuple3 xs ]
> tuple3 xs = ( xs !! 0, xs !! 1, xs !! 2 )
Why did I write it that way? Well, I tried to write tupleize using
guards, special-casing length xs > 3 followed by an unguarded case for
all other possibilities, but GHC didn't like it -- it told me I had non-exhaustive patterns. There is probably a smarter way to write this,
but note that we definitely don't want this version:
universeToTuples universe = ( xs !! 0, xs !! 1, xs !! 2 )
: universeToTuples ( tail xs )
where xs = expandUniverse 1 universe
In that version, the universe keeps expanding as we walk down the list,
and we never get to the end!
OK, now that we have our tuples, we want to turn them into our new
universe, given a cellular rule number:
>tuplesToUniverse :: Int -> [(Bool, Bool, Bool)] -> [Bool]
>tuplesToUniverse rule [] = []
>tuplesToUniverse rule (tup:tups) = genNextBit rule tup : tuplesToUniverse rule tups
Note that we don't have to explicitly take the tail since we provide a
name for it in the pattern. We're ready to define our genUniverses function
that applies a given CA rule. We can express a given generation like this:
>nextUniverse :: Int -> [Bool] -> [Bool]
>nextUniverse rule universe = tuplesToUniverse rule $ universeToTuples universe
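We can sanity-check a single step in GHCi; starting from a single live
cell, rule 222 should light up a neighbor on each side:
stringifyUniverse $ nextUniverse 222 ( expandUniverse 2 [True] )
" ### "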
Now, let's generalize it:
>genUniverses :: Int -> Int -> Int -> [[Bool]]
>genUniverses rule width count = take count
> ( iterate ( nextUniverse rule ) ( expandUniverse ( width `div` 2 ) initial_universe ) )
(You could also use a fold, and I'm sure there are lots of other ways to
do it, but iterate seems to work fine).
And now, witness the unfolding of a universe! Note that the parameters
that go to genUniverses are the rule number, the width for display, and the number of generations:
putStr $ unlines $ map stringifyUniverse $ genUniverses 222 19 10
In general, a width of twice the number of generations - 1 will show
all the transitions we are interested in; you could consider the diagonal
area above to be the "light cone" of events causally connected to that
single point (although rules that map an all-False neighborhood to True
will generate True cells outside of that "light cone"). Let's make a
helper function to choose a width for us:
>showRule rule gens =
> putStr $ unlines $ map stringifyUniverse $
> genUniverses rule ( gens * 2 - 1 ) gens
Let's try a few of the other rules:
showRule 252 15
              #
              ##
              ###
              ####
              #####
              ######
              #######
              ########
              #########
              ##########
              ###########
              ############
              #############
              ##############
              ###############
showRule 78 15
              #
             ##
            ###
           ## #
          ### #
         ## # #
        ### # #
       ## # # #
      ### # # #
     ## # # # #
    ### # # # #
   ## # # # # #
  ### # # # # #
 ## # # # # # #
### # # # # # #
And finally, my all-time favorite, which simulates a Sierpinski gasket:
showRule 82 32
# #
# #
# # # #
# #
# # # #
# # # #
# # # # # # # #
# #
# # # #
# # # #
# # # # # # # #
# # # #
# # # # # # # #
# # # # # # # #
# # # # # # # # # # # # # # # #
# #
# # # #
# # # #
# # # # # # # #
# # # #
# # # # # # # #
# # # # # # # #
# # # # # # # # # # # # # # # #
# # # #
# # # # # # # #
# # # # # # # #
# # # # # # # # # # # # # # # #
# # # # # # # #
# # # # # # # # # # # # # # # #
# # # # # # # # # # # # # # # #
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
Followup: I had mentioned that my code had a bug, because some pictures, such as this one:
showRule 29 11
          #
######### ###########
#         #
######### ###########
#         #
######### ###########
#         #
######### ###########
#         #
######### ###########
#         #
look different from the way Wolfram's book and web site show them, which is like this:
######### ###########
######### ###########
######### ###########
######### ###########
######### ###########
After a little investigation this seems to be because Wolfram's implementation wraps around; the left neighbor of the leftmost cell of a given universe is taken from the rightmost cell, and
vice-versa, while my implementation pretends that there is always more empty space available to the left and right.
Whether you consider this a bug or not is up to you. The wraparound behavior is probably considered more "canonical." You can compare the results from my program to the pictures at Wolfram's
MathWorld site here. If you replace my universeToTuples function with this one:
universeToTuples :: [Bool] -> [(Bool, Bool, Bool)]
universeToTuples [] = []
universeToTuples universe = tupleize $ wrapUniverse universe
wrapUniverse xs = ( last xs ) : ( xs ++ [ head xs ] )
tupleize xs =
if length xs > 3 then tuple3 xs : tupleize ( tail xs )
else [ tuple3 xs ]
tuple3 xs = ( xs !! 0, xs !! 1, xs !! 2 )
you will get the wraparound behavior.
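For example, with the wraparound version a two-cell universe yields one
tuple per cell, with the missing neighbors taken from the opposite end:
universeToTuples [True, False]
[(False,True,False),(True,False,True)]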
Thanks for reading and as always, I appreciate your comments.
The little Haskell toy I wrote to simulate a dot-matrix printhead was useful in one sense: it helped me learn a little more Haskell! I couldn't help noticing that the results look like the
transformations you get when applying simple cellular automata rules. So my next experiment is to implement those rules, and see what else I can learn from the results.
What follows is Literate Haskell, but there is no main program to run; it is for you to experiment with interactively, if you like.
Some Haskell code to implement simple cellular automata (CA) rules:
part one, round one.
The simplest type of one-dimensional CA behaves as follows:
- a cell may be on or off (one or zero).
- time ticks by in discrete intervals.
- at each tick, the state of each cell is updated using a function that
maps the current state of the cell to the left, the cell itself, and
the cell to the right to the new cell value.
Because this value-generating function depends on 3 bits of input, a
given CA rule actually consists of 2^3 = 8 possible mappings. The
mappings can be represented by the binary values 0b000 to 0b111
(decimal 0..7). By Wolfram's numbering convention for these simple
CA, we actually count down from 7 to 0. So, let's say we have a rule
that maps each possible set of left, center, and right values:
0b111, 0b110, 0b101, 0b100, 0b011, 0b010, 0b001, 0b000
to the new cell value:
0b0, 0b0, 0b0, 0b1, 0b1, 0b1, 0b1, 0b0
We can treat the resulting number, in this case 0b00011110, as the
rule number. This is rule 30 in Wolfram's schema; there are 2^8,
or 256, possible rules, numbered from 0 to 255.
First, let's make a function that, given a rule number and three
values (left, center, and right), returns the new cell state. We
turn the three values into an index to look up the appropriate bit
in the rule number.
>import Data.Bits
>genNextBit :: Int -> ( Bool, Bool, Bool ) -> Bool
>genNextBit rulenum ( left, center, right ) = rulenum `testBit` idx
> where idx = ( if left then (4::Int) else (0::Int) ) .|.
> ( if center then (2::Int) else (0::Int) ) .|.
> ( if right then (1::Int) else (0::Int) )
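For instance, under rule 30 a live cell with only a live right neighbor
stays on (the index is 0b011 = 3, and bit 3 of 30 is set), while a fully
live neighborhood switches off (bit 7 of 30 is clear):
genNextBit 30 (False, True, True)
True
genNextBit 30 (True, True, True)
False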
Hmmm... it is lacking a certain elegance, and I don't quite like
the way the indentation rules work in this case, but let's test it
with a function that generates the 8 rule indices in the form of tuples:
>genRuleIndices :: [( Bool, Bool, Bool )]
>genRuleIndices = [ ( (val `testBit` 2), (val `testBit` 1),
> ( val `testBit` 0) ) | val <- [(7::Int),
> (6::Int)..(0::Int)] ]
Now if we write:
genRuleIndices
we get:
[(True,True,True),(True,True,False),(True,False,True),(True,False,False),
(False,True,True),(False,True,False),(False,False,True),(False,False,False)]
and if we write:
map (genNextBit 30) genRuleIndices
this expression curries us a function (mmm... curry!) which takes
a starting state and applies rule 30, then feeds it the eight possible
starting states. The result is:
[False,False,False,True,True,True,True,False]
That looks like the bit pattern for rule 30.
Just for fun, let's confirm by making a function that will
translate a list of output bit values back into a rule number.
The signature should look like this:
sumBitVals :: [Bool] -> Int
And we want the list
>test_bit_vals = [False,False,False,True,True,True,True,False]
to map back to 30. Take a shot at it yourself; you might find
that the result is instructive. I'll wait.
(Musical interlude).
Did you try it? Let's look at some possible implementation strategies.
We could make a list of the powers of two and then do some list
manipulation to get the powers of two that correspond to our one-bits
summed up:
>powers_of_two_8_bits = reverse ( take 8 (iterate (2 *) 1) )
(An aside: since this is generated by a function that takes no
parameters at runtime, it would seem like a sufficiently smart
compiler could generate this list and stick it in the resulting
program so that no run-time calculation is required to generate
it. Does GHC do this? I don't know.)
Anyway, our implementation. We can tuple up our powers of two with
our bit values:
>tups = zip powers_of_two_8_bits test_bit_vals
Then we can turn this back into a list of only the powers of two that are
"on," and sum the results:
sum (map (\ tup -> if snd tup then fst tup else 0) tups )
It seems like this should be simpler still. I looked in the standard
library for a function to filter one list using a list of boolean
values as a comb. Let's write one:
>combListWithList :: [a] -> [Bool] -> [a]
>combListWithList [] [] = []
>combListWithList ls comb =
> if (head comb) then (head ls) : combListWithList (tail ls) (tail comb)
> else combListWithList (tail ls) (tail comb)
combListWithList powers_of_two_8_bits test_bit_vals
That seems good, although it doesn't express the idea that the lists
have to be the same length. It still seems amazing to me that I can
reel off functions like that in Haskell and have them work right the
first time! Now we can produce our final function:
>sumBitVals :: [Bool] -> Int
>sumBitVals ls = sum $ combListWithList powers_of_two_8_bits ls
sumBitVals test_bit_vals
There is probably an elementary way to implement the above function
using zipWith instead of my combList, but I'll leave that to you;
leave a comment if you figure it out!
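For what it's worth, here is one way the zipWith version might look (a
sketch; sumBitVals' is my name for it, not part of the original program):
sumBitVals' :: [Bool] -> Int
sumBitVals' ls = sum ( zipWith (\ p flag -> if flag then p else 0) powers_of_two_8_bits ls )
sumBitVals' test_bit_vals indeed gives back 30.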
As another aside, here is another function I came up with to compute the
value from a list of bits; it doesn't rely on a specific length.
>foldBitVals :: [Bool] -> Int
>foldBitVals ls = snd (foldr
> (\ flag tup -> if flag then ((fst tup) * 2, (snd tup) + (fst tup))
> else ((fst tup) * 2, snd tup) )
> (1, 0) ls )
foldBitVals test_bit_vals
Take a moment to understand the above function; it is a little strange.
I perform a fold using a tuple. The first element is the power of two,
so it keeps track of our place in the list, and the second is an
accumulator as we add up our values. This is a trick for stashing
a state that is more complex than a single value into a fold.
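Since the bits arrive most-significant-first, a plain left fold with a
doubling accumulator also works and needs no tuple (a further sketch of
my own; ruleFromBits is a hypothetical name):
ruleFromBits :: [Bool] -> Int
ruleFromBits = foldl (\ acc flag -> acc * 2 + (if flag then 1 else 0)) 0
ruleFromBits test_bit_vals also yields 30.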
Anyway, that was a long diversion, so let's end for now. Next time we'll look at ways to represent our CA universe as it evolves by applying our rules. As always, your comments are welcome!
It looks like this blog has been unceremoniously removed from Planet Scheme. I guess that's only fair, since I haven't written anything specifically about Scheme in a while. Best of intentions, and
all that. This blog probably belongs on "Planet Programming Language Dilettante."
Anyway, I received some helpful feedback on the short program I wrote to print bitmaps of binary numbers. Jason Creighton in particular took a shot at a rewrite, and the result was highly
enlightening. I also received two specific pieces of advice: use short function names, and don't write a comprehension like [a | a <- [a..b]] when you can just write [a..b]. Oops. Check, and thanks!
(Semi) or (Ill) Literate Haskell follows:
Haskell Binary Printhead Toy, Second Round
>import Data.Bits
Our functions to generate lists of signed and unsigned integers of
numbits bits, revised with names as supplied by Jason and error-
catching as supplied by yours truly:
>unsignedInts :: Int -> [Int]
>unsignedInts bits | (1 <= bits) && (bits <= 16) =
> [(0::Int)..(bound::Int)]
> where bound = 2 ^ bits - 1
>unsignedInts bits = error "expected a value from 1 to 16"
>signedInts :: Int -> [Int]
>signedInts bits | (1 <= bits) && (bits <= 16) =
> [(- (bound::Int)) - 1..(bound::Int)]
> where bound = 2 ^ ( bits - 1 ) - 1
>signedInts bits = error "expected a value from 1 to 16"
Jason came up with a considerably simpler way to map the bits of
the binary numbers to the characters for printing:
>boolToChar :: Bool -> Char
>boolToChar True = '#'
>boolToChar False = ' '
I'm adopting that suggestion enthusiastically, since I don't think
code can get much simpler than that!
Jason then proceeded to turn my dog food into a much more flavorful
wafer-thin after-dinner mint, leaving me struggling to understand
just what he did:
>bitPicture :: (Int -> [Int]) -> Int -> [Char]
>bitPicture intGen bits = unlines $ map stringAtIndex (reverse [0..(bits-1)])
> where
> stringAtIndex = map boolToChar . bitsAtIndex
> bitsAtIndex idx = [ testBit n idx | n <- intGen bits ]
Wow. First off, Jason has supplied a type signature which expresses
what the function actually _does_. The appropriate integer generator
(the signed or unsigned version) comes in as a parameter; that's what
(Int -> [Int]) means. We next get an integer argument (the number of
bits), and finally, we spit out a list of chars.
Jason proceeds to use a couple of where clauses to set up some
simplifying expressions. Let's look at the first one:
stringAtIndex = map boolToChar . bitsAtIndex
OK, this is slightly mind-twisting right off the bat. In the world
of C and Java you can't write things like that. The dot is function
composition expressed naturally; instead of something like
stringOfBoolsFromBitsAtIndex idx = map boolToChar (bitsAtIndex idx)
we can just get Haskell to chain the functions together for us.
No fuss, no muss. But this function makes a forward reference to
bitsAtIndex, which we haven't seen yet (completely illegal in C
and Java). Let's make sure we understand what bitsAtIndex does:
bitsAtIndex idx = [ testBit n idx | n <- intGen bits ]
That's definitely more dense than the way I wrote it (and that's a
good thing). To break down the list comprehension, the expression on
the right hand side of the vertical bar says that we feed the supplied
integer generator function the number of bits to use. That gives us a
list. From that list we will draw our "n"s and test bit idx on them.
By doing it this way, Creighton has pulled an act of Haskell sleight-
of-hand: instead of generating the bitmaps for each integer value,
like I did, and then using "transpose" to rearrange them, this version
accesses the "columns" of the table directly! With those two
expressions, we've got our string representing all the bits at a
given index _for the whole supplied set of integers_.
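If you want to convince yourself that the column-wise access matches the
row-wise table, a quick check along these lines should evaluate to True
(rowWise and colWise are hypothetical names of mine, and transpose comes
from Data.List):
rowWise bits = [ [ testBit n idx | idx <- reverse [0..(bits-1)] ] | n <- unsignedInts bits ]
colWise bits = [ [ testBit n idx | n <- unsignedInts bits ] | idx <- reverse [0..(bits-1)] ]
colWise 6 == transpose (rowWise 6)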
Wow. I feel like an audience member watching a magician smash up my
watch and then give it back to me rebuilt as a marvelous whirring machine.
There's more, but this part is relatively simple:
bitPicture intGen bits = unlines $ map stringAtIndex (reverse [0..(bits-1)])
We already know what unlines does; it turns our list of strings into a
properly line-broken string. We've just described how stringAtIndex works,
and map stringAtIndex (reverse [0..(bits-1)]) just feeds stringAtIndex a
set of indices mapped from high to low, so that our most significant bit
comes out leftmost. But what does "$" do?
My reference at zvon.org says this is the "right-associating infix
application operator (f $ x = f x), useful in continuation-passing style."
I'm not sure I fully understand, and I'm not going to go into the meaning
of "continuation-passing style" right now, but I guess it saves us from
having to write:
unlines (map stringAtIndex (reverse [0..(bits-1)]))
to get the right order of operations, which is nice. So I'll try to
incorporate it into my own Haskell.
Anyway, let's try it. We need to drive the function:
>main :: IO ()
>main = do
> print "Unsigned values (6 bits)"
> putStr $ bitPicture unsignedInts 6
> print "Signed values (6 bits)"
> putStr $ bitPicture signedInts 6
After using GHCi's :cd and :load commands to read this file, just type main. The output:
"Unsigned values (6 bits)"
                                ################################
                ################                ################
        ########        ########        ########        ########
    ####    ####    ####    ####    ####    ####    ####    ####
  ##  ##  ##  ##  ##  ##  ##  ##  ##  ##  ##  ##  ##  ##  ##  ##
 # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
"Signed values (6 bits)"
################################
                ################                ################
        ########        ########        ########        ########
    ####    ####    ####    ####    ####    ####    ####    ####
  ##  ##  ##  ##  ##  ##  ##  ##  ##  ##  ##  ##  ##  ##  ##  ##
 # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
Excellent. I laughed; I cried; I learned something today. I hope you did, too. Thanks, Jason! And thanks to the other people who commented!
Followup: for those following along the discussion in the comments, here is an alternate implementation of the bitPicture function:
bitPicture intGen bits =
unlines $ map stringAtIndex $ [bits - 1, bits - 2..0]
where stringAtIndex = map boolToChar . bitsAtIndex
bitsAtIndex idx = map (flip testBit idx) (intGen bits)
Note that using [bits - 1, bits - 2..0] in the first line allows us to avoid the list reversal, so that seems like a clear win. The method of expressing bitsAtIndex above is one of several that were
suggested. The idea behind the flip is that each value drawn from intGen should become the first parameter to testBit, not the second. You can also express this by forcing testBit to be
treated as an infix operator:
bitsAtIndex idx = map (`testBit` idx) (intGen bits)
Both of these arrange for the second parameter to be filled in instead of the first, leaving the first as the sole parameter to the resulting curried
function; in Common Lisp, you might do this explicitly using rcurry. And don't forget the original, using a list comprehension:
bitsAtIndex idx = [ testBit n idx | n <- intGen bits ]
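All three spellings should agree; for example,
map (flip testBit 1) [(0::Int)..3] == map (`testBit` 1) [(0::Int)..3]
evaluates to True, with both sides yielding [False,False,True,True].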
Assuming there is not a specific concern about efficiency, it seems like the choice between these versions might be a matter of style. Thanks again to everyone who made suggestions.
The body of this article is a literate Haskell program. You can extract the text, then save it as a file with the extension .lhs, then execute it using GHCi.
This is a Haskell toy to visualize twos-complement binary numbers.
In my previous articles on the subject of integer division, I alluded
to underflow and overflow and the "weird number" -- well, now you
can see them!
Almost 30 years ago I wrote code on my Radio Shack TRS-80 to make my
dot-matrix printer spit out bitmaps of binary numbers, which made
pretty recursive patterns. This program commemorates that long-ago
geekery from my early teenage hacking days.
First, we need the bits library:
>import Data.Bits
We'll also need the list library later:
>import List
Let's simulate n-bit binary numbers in both a signed and unsigned
interpretation. Given a number of bits, we want a function to make
a list of the possible values. For example, in unsigned form 8 bits
will give us a list of the values 0..255; in signed form, it will be
an asymmetrical list from -128..127. Note that this function generates
the whole list, so if you give it a numbits value higher than, say,
16, you might regret it.
Our calculation fails for values of numbits <= 0, so we put in a
guard case for that. We also want to discourage creating very large
lists, so we put in a guard case for numbits > 16.
>gen_n_bit_nums_unsigned numbits | numbits <= 0 = []
>gen_n_bit_nums_unsigned numbits | numbits > 16 = error "too many bits!"
>gen_n_bit_nums_unsigned numbits =
> [ n | n <- [(0::Int)..((2::Int)^numbits - 1)] ]
>gen_n_bit_nums_signed numbits | numbits <= 0 = []
>gen_n_bit_nums_signed numbits | numbits > 16 = error "too many bits!"
>gen_n_bit_nums_signed numbits =
> [ n | n <- [-(2::Int)^(numbits - 1)..((2::Int)^(numbits - 1) - 1)] ]
To test this we can execute:
gen_n_bit_nums_unsigned 4
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15]
gen_n_bit_nums_signed 4
[-8, -7, -6, -5, -4, -3, -2, -1, 0, 1, 2, 3, 4, 5, 6, 7]
That looks about right.
Now we want a function to access the bits of those numbers from most
significant to least significant and build a list of corresponding
boolean values. This would be expressed more naturally if enumFromTo
generated descending sequences. I'm sure there's a better way, but for
now I will just reverse the resulting list:
>rightmost_bits num_bits val | num_bits > 16 = error "too many bits"
>rightmost_bits num_bits val | num_bits <= 0 = error "too few bits"
>rightmost_bits num_bits val =
> reverse [ testBit (val::Int) bit_idx | bit_idx <- [0..(num_bits - 1)] ]
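For example, asking for 4 bits of the value 5 gives us the bits, most
significant first:
rightmost_bits 4 5
[False,True,False,True]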
Once we have a function which will do it with one number, doing it with a
whole list of numbers is very easy. We will make a curried function and
use map. Currying in Haskell is very easy, which makes sense given that
the operation, "to curry" and the language, Haskell, are both named after
Haskell Curry!
Our rightmost_bits function takes two arguments, but we want to make a
new function which we can hand off as an argument to map, along with a
list. This new function will only take one parameter. We can ask GHCi
what it thinks the type of the rightmost_bits function is:
:type rightmost_bits
It says:
rightmost_bits :: Int -> Int -> [Bool]
We can make a new function out of it by doing partial application:
instead of supplying all its arguments, we just supply the first one.
gen_rightmost_bits 4
If you do this in GHCi, you'll get an error message because the
result is not printable (GHCi can't apply show to the result).
But you can bind it:
let bit_generator_4 = gen_rightmost_bits 4
And then ask GHCi what its type is:
:type bit_generator_4
bit_generator_4 :: [Int] -> [[Bool]]
Take a moment to make sure you understand what we've done here:
Haskell has made us a curried function which takes a list of values
instead of a single value, and generates a list of lists. This
seems pretty close to magic! Let's make another function that
takes the number of bits and a list, generates the curried
function, then maps the list to the curried function:
>gen_rightmost_bits num_bits ls =
> map (rightmost_bits num_bits) ls
As you can see, working with curried functions is completely
effortless in Haskell. Here's a general function to generate
a list of all the bit values for unsigned numbers of num_bits bits:
>gen_all_bits_unsigned num_bits =
> gen_rightmost_bits num_bits (gen_n_bit_nums_unsigned num_bits)
>gen_all_bits_signed num_bits =
> gen_rightmost_bits num_bits (gen_n_bit_nums_signed num_bits)
If we execute:
gen_all_bits_unsigned 4
We get back the following (I have reformatted the output for clarity):
[[False, False, False, False],
[False, False, False, True],
[False, False, True, False],
[False, False, True, True],
[False, True, False, False],
[False, True, False, True],
[False, True, True, False],
[False, True, True, True],
[True, False, False, False],
[True, False, False, True],
[True, False, True, False],
[True, False, True, True],
[True, True, False, False],
[True, True, False, True],
[True, True, True, False],
[True, True, True, True]]
This is recognizably a table of the unsigned binary values 0..15.
This approach doesn't seem very "Haskell-ish" yet -- instead of
generating the combinations as needed, I'm producing them all
ahead of time. It also feels to me like I'm pushing uphill on
the syntax, and having to use parentheses to force the order
of evaluation. We can think more about that later, but let's
get our pretty-printing working first.
How can I turn all those boolean values into a string I can print?
Well, we can take advantage of the fact that in Haskell, strings can
be treated as lists, and the same methods apply. This is not efficient,
but it allows us to use some of the standard functional techniques.
One of these is "fold."
Fold is a technique for boiling down a list into some kind of summary
form. Just what form this summary takes is up to you, since you can
supply your own function and a starting value. For example, to sum
the elements in a list, I could use foldl like this:
foldl (\ x y -> x + y) 0 [0, 1, 2]
The first parameter to foldl is a function. In this case I'll use
a lambda expression, which gives me a function I won't bother to
name. The second parameter is the starting value. The starting value
and the next item in the list are repeatedly passed to the function,
which accumulates a value. In this case, the final result is 3.
I can give foldl an empty string as the starting accumulator value
and my function can concatenate strings onto it:
>stringify_vals =
> foldl (\ x y -> if y then x ++ "#" else x ++ " ") ""
Now you might want to try to apply our list of lists to this function:
stringify_vals (gen_all_bits_unsigned num_bits)
If you do that, though, you'll get a type error like this:
"Couldn't match expected type `Bool' against inferred type `[Bool]'"
The reason can be seen if we examine the types:
:type (gen_all_bits_unsigned 4)
(gen_all_bits_unsigned 4) :: [[Bool]]
:type stringify_vals
stringify_vals :: [Bool] -> [Char]
We have a list of lists of bools, but we want to feed our
stringification function a list of bools. To apply each
element of (gen_all_bits_unsigned 4) to our stringification
function, we can use map again:
>stringify_all_unsigned_vals num_bits =
> map stringify_vals (gen_all_bits_unsigned num_bits)
>stringify_all_signed_vals num_bits =
> map stringify_vals (gen_all_bits_signed num_bits)
This will give us a list of strings, one for each list of boolean
values. For example, stringify_all_unsigned_vals 4 gives us the
following (I've reformatted the output):
["    ",
 "   #",
 "  # ",
 "  ##",
 " #  ",
 " # #",
 " ## ",
 " ###",
 "#   ",
 "#  #",
 "# # ",
 "# ##",
 "##  ",
 "## #",
 "### ",
 "####"]
That's nice: we can start to see our recursive structure! But
we really want that value to be turned into a single printable
string. To achieve that, we can use the unlines function.
>prettyprint_unsigned num_bits =
> unlines (stringify_all_unsigned_vals num_bits)
>prettyprint_signed num_bits =
> unlines (stringify_all_signed_vals num_bits)
When we interpret
prettyprint_unsigned 3
we see a single string with newline characters escaped, like so:
" \n #\n # \n ##\n# \n# #\n## \n###\n"
This will give the results we want when we pass it to putStr, like so:
putStr (prettyprint_unsigned 8)
(256 rows; the first few look like this:)
        
       #
      # 
      ##
     #  
     # #
...
Hmmm. That's nice, but it would be really cool if the results
were rotated, so that we could see what it would look like if
a printhead moving horizontally was printing out these values.
To do this, we can actually start with our intermediate list of
lists. If our four-bit numbers give us
[[False, False, False, False],
[False, False, False, True],
[False, False, True, False], and so on,
we can see the values the other way using transpose. Let's write
rotated versions of gen_all_bits_unsigned and gen_all_bits_signed:
>gen_all_bits_unsigned_rot num_bits =
> transpose
> (gen_rightmost_bits num_bits (gen_n_bit_nums_unsigned num_bits))
>gen_all_bits_signed_rot num_bits =
> transpose
> (gen_rightmost_bits num_bits (gen_n_bit_nums_signed num_bits))
And make a new stringify function to play with:
>stringify_all_unsigned_vals_rot num_bits =
> map stringify_vals (gen_all_bits_unsigned_rot num_bits)
>prettyprint_unsigned_rot num_bits =
> unlines (stringify_all_unsigned_vals_rot num_bits)
putStr (prettyprint_unsigned_rot 6)
will give us:
                                ################################
                ################                ################
        ########        ########        ########        ########
    ####    ####    ####    ####    ####    ####    ####    ####
  ##  ##  ##  ##  ##  ##  ##  ##  ##  ##  ##  ##  ##  ##  ##  ##
 # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
Now that's what I'm talking about!
We can use the signed version to see the interesting transition
point between negative and positive values:
>stringify_all_signed_vals_rot num_bits =
> map stringify_vals (gen_all_bits_signed_rot num_bits)
>prettyprint_signed_rot num_bits =
> unlines (stringify_all_signed_vals_rot num_bits)
putStr (prettyprint_signed_rot 6)
will give us:
################################
                ################                ################
        ########        ########        ########        ########
    ####    ####    ####    ####    ####    ####    ####    ####
  ##  ##  ##  ##  ##  ##  ##  ##  ##  ##  ##  ##  ##  ##  ##  ##
 # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
Now let's give our program a main function using the IO monad and
have it print out several of our results:
>main :: IO ()
>main = do
> print "Unsigned values (8 bits)"
> putStr (prettyprint_unsigned 8)
> print "Signed values (8 bits)"
> putStr (prettyprint_signed 8)
> print "Unsigned values (6 bits) rotated"
> putStr (prettyprint_unsigned_rot 6)
> print "Signed values (6 bits) rotated"
> putStr (prettyprint_signed_rot 6)
After using GHCi's :cd and :load commands to read this file, just type main to print all of the results.
I would invite any readers to play around with the code, and if you can make it more concise and/or more "Haskellish," please leave a comment.
Now if only we could get Haskell to generate the appropriate sound! My dot-matrix printer was loud. But that's a task for another day.
Update: there is a followup article to this one which develops the code further; see http://praisecurseandrecurse.blogspot.com/2006/12/revised-dot-matrix-printhead.html Also, make sure to take a look
at the comments on both this posting and the followup.
Antti-Juhani pointed out to me that Haskell provides the quot and rem pair of operations in addition to div and mod. The difference is that while both pairs satisfy the identity involving division,
multiplication, and the remainder, div and mod round down (towards -infinity), while quot and rem round towards zero (truncating the fractional part).
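A quick GHCi comparison (my example, not Antti-Juhani's) makes the
difference concrete:
(-7) `div` 2
-4
(-7) `mod` 2
1
(-7) `quot` 2
-3
(-7) `rem` 2
-1
Both pairs satisfy the identity: (-4) * 2 + 1 and (-3) * 2 + (-1) both
give back -7.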
That's great! (Besides the fact that I displayed my ignorance in public, that is). It means I really can use GHCi to model my functions for testing! And I was getting worked up to write a C version
of Plauger's div which rounded away from zero for negative quotients so I could have some guaranteed-consistent behavior. Instead I can use the C standard function div and Haskell's quot and rem.
So, this sort of begs the question: is there a good reason to prefer one method of integer division rounding to another? Yes. It comes down to this question: do you want your algorithms to behave
smoothly across the origin, or to behave symmetrically around the origin?
Let's briefly consider some Haskell code for mapping a range of fractions onto a range of integers again.
let gen_nths denom =
[ (numer, denom::Int) | numer <- [ (-denom::Int)..(denom::Int) ] ]
let div_map m1 m2 = map (\ tup -> fst tup *
( ( (m2::Int) - (m1::Int) ) `div` 2 ) `div` snd tup )
div_map (-10) 10 ( gen_nths 3 )
[-10, -7, -4, 0, 3, 6, 10]
Notice something about those values: the distances between them follow a consistent pattern. Reading the list left to right, we have distances of 3, 3, 4, 3, 3, 4. This pattern repeats. It is smooth
across the origin.
Now let's change our div_map function to use quot and rem:
let quot_map m1 m2 = map ( \tup -> fst tup *
( ( (m2::Int) - (m1::Int) ) `quot` 2) `quot` snd tup )
quot_map (-10) 10 ( gen_nths 3 )
[-10, -6, -3, 0, 3, 6, 10]
Looking at the differences between values, from left to right, we find that they are 4, 3, 3, 3, 3, 4. The function is symmetric around the origin.
Does this matter in practice? Of course! Let's say that we have a mapping using a non-origin-crossing version of our quot_map function:
let gen_non_negative_nths denom =
[ (numer, denom::Int) | numer <- [ (0::Int)..(denom::Int) ] ]
let quot_map_2 m1 m2 = map ( \tup -> fst tup *
( (m2::Int) - (m1::Int) ) `quot` snd tup )
quot_map_2 0 20 ( gen_non_negative_nths 6 )
[0, 3, 6, 10, 13, 16, 20]
Shiny. The intervals go 3, 3, 4, 3, 3, 4. But if we shift the values:
let gen_non_positive_nths denom =
[ (numer, denom::Int) | numer <- [ (-denom::Int)..(0::Int) ] ]
quot_map_2 0 20 ( gen_non_positive_nths 6 )
[-20, -16, -13, -10, -6, -3, 0]
The intervals go 4, 3, 3, 4, 3, 3 -- they are backwards. In other words, the effect of integer rounding on the values of the codomain changes if you shift the domain of the denominator across the
origin: the ordering of the pattern is reversed.
If we do the same thing using div:
let div_map_2 m1 m2 =
map (\ tup -> fst tup * ( (m2::Int) - (m1::Int) ) `div` snd tup )
div_map_2 0 20 ( gen_non_negative_nths 6 )
[0, 3, 6, 10, 13, 16, 20]
div_map_2 0 20 ( gen_non_positive_nths 6 )
[-20, -17, -14, -10, -7, -4, 0]
we get a pattern of intervals (the effect of rounding) which remains the same: 3, 3, 4, 3, 3, 4.
Now, smoothness across the origin is what I want, at least for the kinds of functions I am working on now. But your problem domain may lend itself to writing functions which involve rotations, or
mapping values across the origin: in that case, you're going to want the symmetry. The important thing is to know what strategy you are using and apply it consistently. I'm really impressed that
Haskell gives me so many interesting options.
Well, this was initially going to be a three-part article, but things have become more interesting. The situation is not unlike picking up a log in the forest and finding all kinds of nifty crawlies
living under it. "Interesting" to certain geeky people like myself, at least! It's about to get geekier. If you are allergic to ANSI or ISO standards, you'd better stop reading now. But I claim the
payoff is worth it, if you can make it through to the end. At least, for some value of "worth it."
In the last installment we were exploring the behavior of integer division in Haskell. I want to get back to C now. We learned that the behavior of integer division involving negative results is not
strictly defined for C. The behavior of % (mod) is also not very strictly defined.
Let's start with Kernighan and Ritchie's The C Programming Language, Second Edition (I'm not even going to try to figure out rules for pre-ANSI/ISO C). K&R tell us on p. 10 that
...integer division truncates; any fractional part is discarded.
That isn't very detailed, but on p. 41 they tell us
the direction of truncation for / and the sign of the result for % are machine-dependent for negative operands, as is the action taken on overflow or underflow.
OK, but that doesn't describe exactly what the options are. Things get a little bit more detailed on p. 205, where we read that no guarantees are made about the sign of the remainder!
...if the second operand is zero, the result is undefined. Otherwise, it is always true that (a/b)*b + a%b is equal to a. If both operands are non-negative, then the remainder is non-negative and
smaller than the divisor; if not, it is guaranteed only that the absolute value of the remainder is smaller than the absolute value of the divisor.
This isn't actually internally consistent, because if the sign of the remainder is not correct, then the identity for division doesn't work for negative values unless the sign of the quotient is
allowed to be incorrect! I've never seen it suggested anywhere else that division could be that loosely defined. But K&R isn't the formal standard, so let's move on.
The reason for the "looseness" that exists in the C standard, of course, is that the standard was originally written to, as much as possible, codify existing practice, and C has a strong bias towards
efficiency. Since (apparently) different hardware division algorithms had different behavior, the standards committee did not feel that it was appropriate to require existing implementations to
change behavior. Doing so might have required existing architectures to have to replace hardware division with software division, which could have inflicted an enormous efficiency cost.
According to ISO/IEC 9899:1990 (known as C90):
When integers are divided and the division is inexact, if both operands are positive the result of the / operator is the largest integer less than the algebraic quotient and the result of the %
operator is positive. If either operand is negative, whether the result of the / operator is the largest integer less than or equal to the algebraic quotient or the smallest integer greater than
or equal to the algebraic quotient is implementation-defined, as is the sign of the result of the % operator. If the quotient a/b is representable, the expression (a/b)*b + a%b shall equal a.
So we know that there are two options for rounding. In order to maintain the identity involving integer division and mod, though, the value of mod has to be consistent with the rounding strategy,
even if the sign does not have to be consistent.
There is a back door that lets us get to defined behavior. Apparently the div function does have strictly defined rounding. In section 7.10.6.2 of the standard, we read:
If the division is inexact, the resulting quotient is the integer of lesser magnitude that is the nearest to the algebraic quotient.
That's interesting -- this is rounding towards zero. The div function returns a structure of type div_t that contains both the integer quotient and integer remainder. It is provided in the <stdlib.h>
header, available in C++ as <cstdlib>. In his book The Standard C Library, P. J. Plauger provides an implementation for div. It looks like this:
div_t (div)(int numer, int denom)
{   /* compute int quotient and remainder */
    div_t val;
    val.quot = numer / denom;
    val.rem = numer - denom * val.quot;
    if (val.quot < 0 && 0 < val.rem)
    {   /* fix remainder with wrong sign */
        val.quot += 1;
        val.rem -= denom;
    }
    return (val);
}
We can see what he is doing: when the quotient is negative and the division was inexact (leaving a positive remainder), he shifts the quotient up towards zero and adjusts the remainder to match,
flipping its sign. Note that he does not use the built-in % facility, presumably since its behavior does not have very strong guarantees placed on it.
C: A Reference Manual (5th Edition) by Harbison and Steele seems to indicate that the semantics are a little more rigorously defined. We read there that
For integral operands, if the mathematical quotient of the operands is not an exact integer, then the fractional part is discarded (truncation toward zero). Prior to C99, C implementations could
choose to truncate toward or away from zero if either of the operands were negative. The div and ldiv library functions were always well defined for negative operands.
That seems pretty unambiguous, but then on page 419, when describing the div and ldiv function, the authors write
The returned quotient quot is the same as n/d, and the remainder rem is the same as n%d.
But that ignores the possibility of a pre-C99 implementation where / rounds away from zero for negative values, as does GHCi in my tests. So again we have a lack of rigorous internal consistency. It
is worth noting here that K&R don't make any extra statements about the required rounding behavior of div and ldiv, and since K&R is still considered the "bible" by many C programmers, the guarantees
on the div and ldiv function may not be very well-known -- or properly implemented.
Does C99 clarify this at all? ISO/IEC 9899:1999(E), Second Edition 1999-12-01, tells us that "the result of the / operator is the algebraic quotient with any fractional part discarded," with a
footnote indicating that this is often called "truncation towards zero."
How about C++? ISO/IEC 14882, Second Edition (2003-10-15) does not actually say how rounding is performed, although it says that "If both operands are nonnegative then the remainder is nonnegative;
if not, the sign of the remainder is implementation-defined." There is a footnote indicating that "According to work underway toward the revision of ISO C, the preferred algorithm for integer
division follows the rules defined in the ISO Fortran standard" (rounding towards zero). When we try to look up information about div and ldiv, we find that the C++ standard just refers us to the
Standard C library.
OK, all the standards language is giving me a headache; let's take a look at some code. To begin with, let's confirm the way my C implementation does rounding of integer quotients. We took a look at
6/5, -6/5, 5/6, and -5/6 in Haskell; let's look at the same quotients in C:
int val_1 = 6;
int val_2 = 5;
int val_3 = -6;
int val_4 = -5;
int result_1 = val_1 / val_2;
int result_2 = val_3 / val_2;
int result_3 = val_2 / val_1;
int result_4 = val_4 / val_1;
printf("result_1 = %d, result_2 = %d, result_3 = %d, result_4 = %d\n",
result_1, result_2, result_3, result_4);
This gives me:
result_1 = 1, result_2 = -1, result_3 = 0, result_4 = 0
Yep, I'm getting rounding towards zero for both positive and negative values. Next, some Haskell. Let's bring back our algorithm from part 2:
let gen_nths denom =
[(numer, denom::Int) | numer <- [(-denom::Int)..(denom::Int)] ]
let div_map m1 m2 = map (\tup -> (fst tup + snd tup) *
(((m2::Int) - (m1::Int)) `div` 2) `div` snd tup + m1)
div_map (-10) 10 (gen_nths 3)
And now let's write up my C algorithm again. This time we'll use int instead of long since we don't need big numbers (although on my target, ints are actually 32 bits long).
unsigned int denom = 3;
int numer;
for ( numer = -3; numer <= 3; numer++ )
{
    int quot = numer * 10 / denom;
    printf("quotient %d\n", quot);
}
Go ahead and try it on your platform.
Whoah! What happened? You might have noticed something very odd about the first few quotients. They are monstrously large! On my target I get:
quotient 1431655755
quotient 1431655758
quotient 1431655762
quotient 0
quotient 3
quotient 6
quotient 10
What happened is that I injected a bug to point out yet another possible way your C code can fail catastrophically. This behavior can surprise even experienced programmers. Note that since my
denominator is always a positive value, I declared it as an unsigned int instead of an int. So what went wrong?
Let's look at the first example: -3/3 yielding 1431655755. The behavior you are looking at is actually mandated by the C standard. When mixing signed and unsigned types, the compiler is required to
promote (that is, widen) the actual type of the calculation to an unsigned type. So, internally, if we are performing the calculation -30 (signed) / 3 (unsigned), the compiler is required to notice
that both a signed and an unsigned int value are presented to the integer division operator. The 32-bit twos-complement representation of -3 is 0xFFFFFFFD (the conversion to an unsigned intermediate
value changes the meaning, not the representation). I multiply this value by 10, yielding 0xFFFFFFE2, and then divide it by 3, yielding 0x5555554B, or a decimal value of 1431655755. The compiler
sticks this unsigned value right into the signed result variable. The signed or unsigned status of the destination variable has absolutely no effect; it does not dissuade the compiler from deciding
to treat the values as unsigned. (It is a common misconception to think that the destination variable influences the type of the calculation in C.)
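You can even simulate C's unsigned promotion in Haskell with a
fixed-width unsigned type (a sketch of mine; Word32 lives in Data.Word):
import Data.Word
( fromIntegral ((-30)::Int) :: Word32 ) `div` 3
1431655755
which reproduces the first mystery quotient exactly.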
If you have a target with 16-bit ints, your rolled-over values will be different (0xFFFD, 0xFFE2, and 0x554B, or decimal 21835). So the mixing of signed and unsigned values has not only given us the
wrong answer, but made the code that generates the wrong answer produce inconsistent results depending on the size of int, even when the values we start with are not actually too big to fit into 16
bits.
If that wasn't perfectly clear, and even if it was, if you are really trying to write portable numeric code in C that has to be correct, I urge you to consider purchasing the MISRA (Motor Industry
Software Reliability Association) book MISRA-C:2004, Guidelines for the Use of the C language in Critical Systems. It has the most detailed explanation of C's arithmetic type conversions, and more
importantly, the implications of these type conversions in real code. The crux of the biscuit is that C is hard to get right, even for good programmers.
OK, so let's change my denominator back to a signed int and try it again. This time the results look more reasonable: -10, -6, -3, 0, 3, 6, and 10.
Are things any different if we use div?
int denom = 3;
int numer;
for ( numer = -3; numer <= 3; numer++ )
{
    div_t val = div(numer * 10, denom);
    printf("quotient %d\n", val.quot);
}
In my case, no, since on my compiler and target, / already seems to round towards zero (as div requires), while Haskell's div rounds downwards (floor behavior).
By the way, while we're at it, let's see what C has to say about the "weird" number divided by -1:
long most_negative_long = -2147483648;
long minus_one = -1;
long result = most_negative_long / minus_one;
ldiv_t ldiv_result = ldiv(most_negative_long, minus_one);
printf("results: %lX, %lX\n", result, ldiv_result.quot);
What do you get? I get zero as the result of both the / and ldiv.
I also get GCC telling me "warning: this decimal constant is unsigned only in ISO C90." What does that mean? The compiler is apparently warning us that in C99 we won't be getting an implicitly
unsigned value out of the constant -2147483648, in case we might have wanted to use this constant in an expression involving signed types to force the calculation to be promoted to an unsigned type.
But why would we get an unsigned value in the first place? Apparently in C90, the number's magnitude is interpreted first, and the value is negated. 2147483648 is too large (by one) to fit into a
signed long, so the compiler promotes the type to an unsigned long, then negates it.
I have been trying to come up with an example of an expression that behaves differently when I use -2147483648 as a constant, as opposed to 0x80000000, but so far I haven't been able to come up with
one. This may be because my compiler is already operating under C99 rules and so I never get the implicit promotion to an unsigned value.
Anyway, be that as it may, some parting advice and conclusions:
1. GHCi (on the PC and Mac) and C (Visual C++ 6.0 and GCC on the PC and Mac) yield distinctly different rounding behavior for integer division. C90 allows this behavior. Does Haskell? Would it make
sense for Haskell to define integer division more strictly, the way that C99 does?
2. Division by zero is never OK; neither is taking % zero. Both are a fatal error in GHCi. The results are undefined in C (but a fatal error is generated by most implementations).
3. Overflow (which, in the case of integer division, occurs only when the "weird number" is divided by -1) produces undefined results, and your code should avoid it, both in Haskell and in C.
4. Operations on unsigned values are guaranteed to produce results that are consistent from platform to platform, assuming the integer size is the same. Operations on signed values don't have this
guarantee.
5. The rounding (truncation) of inexact quotients may be either towards zero in all cases, or towards zero for positive quotients and away from zero when the results are negative. If your
implementation follows the newer C99 standard rounding should always be towards zero.
6. Mixing signed and unsigned values in C expressions is very dangerous. If you do so, your implementation may have errors far more severe than differences in rounding behavior.
7. Consider avoiding signed division of negative values in C altogether, if your algorithm allows it. Either implement the algorithm using entirely unsigned integral types, or shift the domain of
your function so that you are operating on non-negative signed values, and shift it back (using subtraction, which has well-defined semantics) when your division is done.
8. Algorithms which feed a range of values which cross the origin to integer division may vary between GHCi and C implementations. Because GHCi does not provide guaranteed rounding towards zero as
C99 and the div and ldiv functions require, it is difficult to prototype in GHCi and expect to get the same results in C.
And, finally,
9. Make sure your implementation actually works the way it is supposed to. Only testing can truly accomplish this.
Interesting, isn't it, how studying one language can enrich your understanding of another! At least, it works for me!
In the last installment, I pointed out a discrepancy between the way that Haskell does integer division and the way that some C code does it. Let's look at just how integer division behaves in
Haskell a little more closely. You can type these expressions into GHCi:
6 / 5
5 / 6
(6::Int) `div` (5::Int)
(5::Int) `div` (6::Int)
Integer division in the positive numbers always rounds down: 1.2 rounds down to 1, and 0.8333... rounds down to 0. In fact, it looks like the fractional part is just truncated. But let's check some
negative values to be sure:
-6 / 5
(-6::Int) `div` (5::Int)
-5 / 6
(-5::Int) `div` (6::Int)
That's interesting; clearly it is not strictly truncation (rounding towards zero) that is happening, or else -5/6 would give us zero, not -1. And we clearly aren't getting "rounding" in the usual
sense of rounding to the nearest integer (although there is no universal rounding algorithm to round to the nearest integer; there are lots of different variations in practice).
It looks like we have floor behavior, which in the case of negative values means rounding away from zero. To state it slightly more rigidly: if the exact quotient is a whole number, div gives us
that whole number; otherwise it gives us the largest integer below the exact answer (rounding towards negative infinity).
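As a quick check of the floor interpretation (my aside), this comparison
against the floating-point floor should evaluate to True:
fromIntegral ((-5::Int) `div` 6) == floor ((-5) / 6 :: Double)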
If you learned integer division the old-fashioned way, on paper, you know what remainders are. The remainder is the leftover part of the process, and represents the numerator of the fractional part of
the quotient. In Haskell, div gives you the whole number result of integer division, and mod will give you the remainder. So, for example,
(2::Int) `div` (3::Int)
(2::Int) `mod` (3::Int)
In fact, what this points to is that there is a relationship between the result of div and the result of mod. It should hold true in all cases, actually; in C, it is part of the ISO standard. The
relationship is this:
For any quotient o = m `div` n where n is not zero, and the remainder p = m `mod` n, o * n + p = m.
In other words, you can reverse the integer division by multiplication as long as you add the remainder back in.
This relationship should be obviously true for any non-negative n and m. But I'll go further and say that while the exact meaning of mod for negative numbers varies from language to language, and
perhaps from implementation to implementation, this relationship should still hold true, with the possible exception of cases involving overflow (which I'll discuss further below).
We can represent the relationship in Haskell like this:
10 `div` 3
10 `mod` 3
let undivide n o p = o * n + p
undivide 3 ( 10 `div` 3 ) ( 10 `mod` 3 )
In fact, let's test it for a few values of n and m:
let check_divide m n = ( undivide n ( m `div` n ) ( m `mod` n ) ) == m
[ check_divide m n | m <- [(-5::Int)..(5::Int)], n <- [(1::Int)..(5::Int)] ]
[ check_divide m n | m <- [(-5::Int)..(5::Int)], n <- [(-5::Int)..(-1::Int)] ]
Try out the results for yourself; again, not a rigorous proof, but suggestive.
But what about the overflow cases? Let's examine them. If Haskell's Int type is a 32-bit signed int, then we should be able to represent the values −2,147,483,648 (corresponding to
10000000000000000000000000000000 in binary or 0x80000000 in hexadecimal), to +2,147,483,647 (corresponding to 01111111111111111111111111111111 in binary or 0x7FFFFFFF in hexadecimal). Let's ask
Haskell what happens when we try to exceed our largest positive number:
(2147483647::Int) + (1::Int)
Yep, we overflow; we get the largest negative number. Conversely, we can underflow:
(-2147483648::Int) - (1::Int)
We don't get run-time error checking (and probably don't want it, for the sake of efficiency, when using machine types). Of course, division by zero is still considered a run-time error:
(2147483647::Int) `div` (0::Int)
*** Exception: divide by zero
What about overflow or underflow in the results of division? With multiplication, it is very easy to overflow; multiplying two large numbers can easily exceed the width of a 32-bit signed value. You
can come pretty close with:
(46341::Int) * (46340::Int)
because 46340 is the next-lowest integer value to the square root of 2,147,483,647. Generate a result just slightly higher and you will roll over to a large negative:
(46341::Int) * (46341::Int)
With multiplication there are lots of combinations of values that will overflow, but with division it is a little bit harder to trigger overflow or underflow. There is one way, though. While
2,147,483,647 divided by any lesser value will be a lesser value, and divided by itself will equal one, recall that twos-complement representation gives us a slightly asymmetric set of values: the
largest representable negative number is one larger, in absolute magnitude, than the largest representable positive value. Thus:
(2147483647::Int) `div` (1::Int)
(2147483647::Int) `div` (-1::Int)
(-2147483648::Int) `div` (1::Int)
In 32 bits of twos-complement, -2,147,483,648 is the "weird number." As a general rule, if you take a twos-complement number, flip all the bits, and add one, you get the negative of that number. The
only exception is the weird number. Let's see what happens when we try to divide the weird number by -1:
(-2147483648::Int) `div` (-1::Int)
Now, I would like to tell you what GHCi says when I ask it to divide the weird number by -1. I'd also like to see what it says about the mod. But I can't. GHCi crashes when I type in those
expressions.
So, I can't tell you what Haskell says. But I can tell you what the C standard says. Here's a hint: not only is it not defined, but in fact the result of any overflow that occurs with arithmetic
operations on signed integral types is not defined by the language standard. Only the results of arithmetic operations on unsigned integers have defined semantics for overflow, making them portable
-- assuming that the size of the type in question is the same. More on that next time.
OK. In our last installment I started talking about using Haskell one-liners to calculate some functions. The function in question was designed to map a set of fractions in the range -7/7 to 7/7 to
the machine integers 0..65536. The function itself, along with a wrapper to test, it fits neatly into a Haskell one-liner (well, it might be more than one line, depending on your terminal width). You
can use GHCi to evaluate it:
map (\tup -> (fst tup + snd tup) * 32768 `div` snd tup) [(numer, (7::Int)) | numer <- [(-7::Int)..(7::Int)] ]
I showed in the last installment that this function gave me the results I wanted:
[0, 4681, 9362, 14043, 18724, 23405, 28086, 32768, 37449, 42130, 46811, 51492, 56173, 60854, 65536]
Let's play with that function a little more, and then I want to talk about division (hence the title of the entry). In particular, integer division.
First, let's simplify our expressions by breaking them down and establishing some bindings. We can also get rid of the presumption that our output range starts at zero:
let gen_nths denom = [ (numer, denom::Int) | numer <- [ (-denom::Int)..(denom::Int) ] ]
let div_map m1 m2 = map (\tup -> (fst tup + snd tup) * (((m2::Int) - (m1::Int)) `div` 2) `div` snd tup + m1)
And now we can test it. Note that we need to add a few parentheses:
div_map (-32768) 32768 (gen_nths 11)
[-32768, -29790, -26811, -23832, -20853, -17874, -14895, -11916, -8937, -5958, -2979, 0, 2978, 5957, 8936, 11915, 14894, 17873, 20852, 23831, 26810, 29789, 32768]
OK. Now what is it we've done exactly? Well, we've generated some test values, and an algorithm, for distributing a fractional range onto a machine integer range. For our purposes, think of it as a
little machine for testing division.
To see just why I chose this algorithm to play with Haskell, we have to go back and consider some C code. It's quite a headache-inducing paradigm shift to go back to C/C++, but bear with me. Let's
just start with the simplest thing: we'll create the same mapping we just generated, hard-coding the values:
long denom = 11;
long numer;
for ( numer = -11; numer <= 11; numer++ )
{
    long val = numer * 32768 / denom;
    printf("%ld\n", val);
}
And the results:
-32768
-29789
-26810
-23831
-20852
-17873
-14894
-11915
-8936
-5957
-2978
0
2978
5957
8936
11915
14894
17873
20852
23831
26810
29789
32768
Hmmm... wait a minute, let's compare the results:
Haskell C
-32768 same
-29790 -29789
-26811 -26810
-23832 -23831
-20853 -20852
-17874 -17873
-14895 -14894
-11916 -11915
-8937 -8936
-5958 -5957
-2979 -2978
0 same
2978 same
5957 same
8936 same
11915 same
14894 same
17873 same
20852 same
23831 same
26810 same
29789 same
32768 same
It looks like every single one of the negative values is off by one! In our third and final installment I'll dive a little deeper into the semantics of integer division.
With apologies to both Pink Floyd and John Donne.
I've been toying with Haskell. I wish I had enough free time for a systematic study of Richard Bird's Introduction to Functional Programming Using Haskell, Second Edition, but for now I have to
content myself with dipping into it in small scraps of free time when both babies are asleep and the dishes are done. In addition, I'm using Haskell to model the behavior of some functions that I'm
actually writing in C++.
One of the reasons I like Haskell is that with a tool like GHCi, which gives me a REPL (a Read, Evaluate, Print loop for interactive development), I can write toy one-liners that do something useful,
and since Haskell supports typing, I can test how machine types behave. With an important exception, which I will discuss.
Let's say I want to try out a function that maps the fractional values -7/7 through 7/7 to unsigned integers in the range 0..65536 (the range of values representable by 16 bits, plus one). In Haskell
I can try this out using a one-liner (typed into GHCi). Let's build it up. First, the range of input values. Ignore for now the possibility of using or introducing a true rational number type;
instead we'll generate a set of tuples. In Haskell you can express this very concisely using a list comprehension:
[(numer, 7) | numer <- [-7..7] ]
[(-7,7),(-6,7), (-5,7), (-4,7), (-3,7), (-2,7), (-1,7), (0,7), (1,7), (2,7), (3,7), (4,7), (5,7), (6,7), (7,7)]
The list comprehension can be (roughly) read as "Make a list of tuples out of numer, 7 where numer takes on the values from the list -7 to 7." Think for a moment about how you could do this in your
favorite language.
Now let's do our math on the resulting list. We can use map to apply a function to the list. We can use fst and snd to get to the values in the tuple; for example, to look at the numerators, do this:
map fst [(numer, 7) | numer <- [-7..7] ]
[-7, -6, -5, -4, -3, -2, -1, 0, 1, 2, 3, 4, 5, 6, 7]
Let's go ahead and do our scaling. We want to describe a single function to pass each tuple to; in fact we want a lambda abstraction, a function that we don't bother to name:
map (\tup -> fst tup / snd tup * 65536 ) [(numer, 7) | numer <- [-7..7] ]
[-65536.0, -56173.71428571428, -46811.42857142857, -37449.142857142855, -28086.85714285714, -18724.571428571428, -9362.285714285714, 0.0, 9362.285714285714, 18724.571428571428, 28086.85714285714, 37449.142857142855, 46811.42857142857, 56173.71428571428, 65536.0]
Whoops, that's not quite right. We want to map to only the positive values. We have to shift the domain (the incoming numerators) so they are non-negative:
map (\tup -> (fst tup + snd tup) / snd tup * 65536 ) [(numer, 7) | numer <- [-7..7] ]
[0.0, 9362.285714285714, 18724.571428571428, 28086.85714285714, 37449.142857142855, 46811.42857142857, 56173.71428571428, 65536.0, 74898.28571428571, 84260.57142857143, 93622.85714285714, 102985.14285714286, 112347.42857142857, 121709.71428571429, 131072.0]
But now the codomain is too big; we have to cut it in half:
map (\tup -> (fst tup + snd tup) / snd tup * 65536 / 2) [(numer, 7) | numer <- [-7..7]]
[0.0, 4681.142857142857, 9362.285714285714, 14043.42857142857, 18724.571428571428, 23405.714285714286, 28086.85714285714, 32768.0, 37449.142857142855, 42130.28571428572, 46811.42857142857, 51492.57142857143, 56173.71428571428, 60854.857142857145, 65536.0]
That looks about right. Now let's apply a type. Because I'm eventually going to be implementing this without floating-point, I want the math to be done on machine integers. There are some exceptions,
but most target architectures these days give you 32 bits for int, so Haskell's Int, a fixed-size machine integer that is 32 bits wide on my platform (the standard only guarantees at least 30), should yield approximately what I want. To introduce typing, all I have to do is
add a type to the list values. Type inferencing will do the rest. Oh, and since division by slash is not defined on Int, I have to change it to `div` (infix division):
map (\tup -> (fst tup + snd tup) `div` snd tup * 65536 `div` 2) [(numer, (7::Int)) | numer <- [(-7::Int)..(7::Int)] ]
[0, 0, 0, 0, 0, 0, 0, 32768, 32768, 32768, 32768, 32768, 32768, 32768, 65536]
Whoah. Something has gone disastrously wrong. Take a moment to figure out what it is. I'll wait.
Did you figure it out?
The problem is that when I use integer division in an expression, the result is truncated (the fractional part is lost). If the division occurs before any of the other operations, most of the
significant bits of the math are lost! I can fix this by making sure to do the division last:
map (\tup -> (fst tup + snd tup) * 65536 `div` snd tup `div` 2) [(numer, (7::Int)) | numer <- [(-7::Int)..(7::Int)] ]
[0, 4681, 9362, 14043, 18724, 23405, 28086, 32768, 37449, 42130, 46811, 51492, 56173, 60854, 65536]
That's better. While I'm at it, I can simplify out that factor of 2 by using half of 65536:
map (\tup -> (fst tup + snd tup) * 32768 `div` snd tup) [(numer, (7::Int)) | numer <- [(-7::Int)..(7::Int)] ]
[0, 4681, 9362, 14043, 18724, 23405, 28086, 32768, 37449, 42130, 46811, 51492, 56173, 60854, 65536]
In fact, I can ask GHCi to verify that the results are the same:
map (\tup -> (fst tup + snd tup) * 65536 `div` snd tup `div` 2) [(numer, (7::Int)) | numer <- [(-7::Int)..(7::Int)] ] == map (\tup -> (fst tup + snd tup) * 32768 `div` snd tup) [(numer, (7::Int)) | numer <- [(-7::Int)..(7::Int)] ]
True
Very cool! Of course, this is not a rigorous proof, but I'm satisfied that my function is correct for the given inputs, and sometimes that's enough.
Phidgets 1135 - Precision Voltage Sensor - Arduino Forum
I've interfaced a Phidgets 1135 Precision Voltage Sensor to A0, via a GVS Shield on my Arduino Uno.
I'm getting a signal, though not appropriate to my voltage, as measured with DMM.
I've tried to apply the formulas in the manual above, but to no avail (though I'm no mathematician).
Any thoughts would be helpful, thanks in advance.
Computer Science Colloquium
IBM Research/NYU/Columbia Theory Day
Various Speakers,
November 12, 2010 9:30AM
Warren Weaver Hall, 109
251 Mercer Street
New York, NY 10012
Fall 2010 Colloquia Calendar
New York University
9:30 - 10:00 Coffee and bagels
10:00 - 10:55 Prof. Boaz Barak
Subexponential Algorithms for Unique Games and
Related Problems
10:55 - 11:05 Short break
11:05 - 12:00 Dr. Matthew Andrews
Edge-Disjoint Paths via Raecke Decompositions
12:00 - 2:00 Lunch break
2:00 - 2:55 Prof. Ryan O'Donnell
Optimal Lower Bounds for Locality Sensitive Hashing
(except when q is tiny)
2:55 - 3:15 Coffee break
3:15 - 4:10 Prof. Toniann Pitassi
Pan Privacy and Differentially Private Communication
For directions, please see http://www.cims.nyu.edu/direct.html and
http://cs.nyu.edu/csweb/Location/directions.html (building 46)
To subscribe to our mailing list, follow instructions at
Yevgeniy Dodis dodis@cs.nyu.edu
Tal Malkin tal@cs.columbia.edu
Tal Rabin talr@watson.ibm.com
Baruch Schieber sbar@watson.ibm.com
Prof. Boaz Barak
(Microsoft Research and Princeton University)
Subexponential Algorithms for Unique Games and
Related Problems
We give subexponential time approximation algorithms for the unique
games and the small set expansion problems. Specifically, for some
absolute constant c, we give:
1. An exp(kn^epsilon)-time algorithm that, given as input a k-alphabet
unique game on n variables that has an assignment satisfying
1-epsilon^c fraction of its constraints, outputs an assignment
satisfying 1-epsilon fraction of the constraints.
2. An exp(n^epsilon/delta)-time algorithm that, given as input an
n-vertex regular graph that has a set S of delta n vertices with
edge expansion at most epsilon^c outputs a set S' of at most delta
n vertices with edge expansion at most epsilon.
We also obtain a subexponential algorithm with improved approximation
for the Multi-Cut problem, as well as subexponential algorithms with
improved approximations to Max-Cut, Sparsest-Cut and Vertex-Cover on
some interesting subclasses of instances.
Khot's Unique Games Conjecture (UGC) states that it is NP-hard to
achieve approximation guarantees such as ours for unique games. While
our results stop short of refuting the UGC, they do suggest that
Unique Games is significantly easier than NP-hard problems such as
3SAT, 3LIN, Label Cover and more, that are believed not to have a
subexponential algorithm achieving a non-trivial approximation ratio.
The main component in our algorithms is a new result on graph
decomposition that may have other applications. Namely we show that
for every delta>0 and a regular n-vertex graph G, by changing at most
delta fraction of G's edges, one can break G into disjoint parts so
that the induced graph on each part has at most n^epsilon eigenvalues
larger than 1-eta (where epsilon,eta depend polynomially on
delta). Our results are based on combining this decomposition with
previous algorithms for unique games on graphs with few large
eigenvalues (Kolla and Tulsiani 2007, Kolla 2010).
Joint work with Sanjeev Arora and David Steurer.
Dr. Matthew Andrews
(Bell Laboratories)
Edge-Disjoint Paths via Raecke Decompositions
The Edge-Disjoint Paths (EDP) problem is one of the original NP-hard
problems. We are given a set of source-destination pairs in a graph
and we wish to connect as many of them as possible using edge-disjoint
The Edge-Disjoint Paths with Congestion (EDPwC) problem is a
relaxation in which we want to route approximately as many demands as
in the optimum solution to EDP but we allow ourselves a small amount
of congestion on each edge.
In this talk we shall give a brief history of these problems and
describe a new algorithm for EDPwC based on the notion of a Raecke
decomposition. This algorithm gives a polylog(n) approximation for
the number of routed demands as long as we allow poly(log log n)
congestion on each edge.
Prof. Ryan O'Donnell
(Carnegie Mellon University and IAS)
Optimal Lower Bounds for Locality Sensitive Hashing
(except when q is tiny)
Locality Sensitive Hashing (LSH) is a widely used algorithmic tool,
combining hashing with geometry, with applications to nearest-neighbor
search. For points in {0,1}^d, one wants a family H of functions such
that if dist(x,y) <= r then Pr[h(x) = h(y)] >= p (when h is chosen randomly
from H), whereas if dist(x,y) >= cr then Pr[h(x) = h(y)] <= q.
For a fixed c, the quality of the LSH family is determined by the
smallness of its ``rho parameter'', namely rho = ln(1/p)/ln(1/q). In
their seminal 1998 work, Indyk and Motwani gave a simple LSH family
with rho <= 1/c. The only known lower bound, [Motwani-Naor-Panigrahy'07],
was that rho must be at least roughly 0.5/c.
In this work, we give a simple proof of the optimal lower bound,
rho >= 1/c. In one sentence, it follows from the fact that the
noise-stability of a boolean function at ``time'' t is a log-convex
function of t.
We will also discuss the fact that all results mentioned here rely
on the underpublicized assumption that q is not too small.
This is joint work with Yi Wu of IBM Research and Yuan Zhou of
Carnegie Mellon.
Prof. Toniann Pitassi
(University of Toronto)
Pan Privacy and Differentially Private
Communication Complexity
We study differential privacy in a distributed setting where two or
more parties would like to perform analysis of their joint data while
preserving privacy for all datasets. Our results imply almost tight
lower bounds on the accuracy of such data analyses, both for specific
natural functions (such as Hamming Distance) and in general. Our
bounds expose a sharp contrast between the two-party setting and the
simpler client-server setting (where privacy guarantees are
one-sided). In addition, our bounds demonstrate a dramatic gap between
the accuracy that can be obtained by differentially private data
analysis versus the accuracy attainable when privacy is relaxed to a
computational variant of differential privacy.
Our proof techniques expose connections between the ability to
approximate a function by a low-error differentially-private protocol
and the ability to approximate the function by a low-communication
protocol. Finally, our lower bounds have applications to pan private
algorithms. Loosely speaking, pan-private algorithms are streaming
algorithms where all stored data must be differentially private.
The work on pan-privacy is joint with: Cynthia Dwork, Moni Naor,
Guy Rothblum and Sergey Yekhanin. The work on differentially private
communication complexity is joint with: Andrew McGregor, Ilya Mironov,
Omer Reingold, Kunal Talwar and Salil Vadhan.
Finding Horizontal Asymptotes
A horizontal asymptote is a y-value on a graph which a function approaches but does not actually reach. Here is a simple graphical example where the graphed function approaches, but never quite
reaches, y=0. In fact, no matter how far you zoom out on this graph, it still won't reach zero. However, I should point out that horizontal asymptotes may only appear in one direction, and may be
crossed at small values of x. They will show up for large values and show the trend of a function as x goes towards positive or negative infinity.
To find horizontal asymptotes, we may write the function in the form of "y=". You can expect to find horizontal asymptotes when you are plotting a rational function, such as: [tex]y=\frac{x^3+2x^2+9}
{2x^3-8x+3}[/tex]. Horizontal asymptotes occur when the graph of the function grows closer and closer to a particular value without ever actually reaching that value as x gets very positive or very negative.
To Find Horizontal Asymptotes:
1) Put equation or function in y= form.
2) Multiply out (expand) any factored polynomials in the numerator or denominator.
3) Remove everything except the terms with the biggest exponents of x found in the numerator and denominator. These are the "dominant" terms.
Sample A: Find the horizontal asymptotes of:
Remember that horizontal asymptotes appear as x extends to positive or negative infinity, so we need to figure out what this fraction approaches as x gets huge. To do that, we'll pick the "dominant"
terms in the numerator and denominator. Dominant terms are those with the largest exponents. As x goes to infinity, the other terms are essentially meaningless.
The largest exponents in this case are the same in the numerator and denominator (3). The dominant terms in each have an exponent of 3. Get rid of the other terms and then simplify by crossing-out
the [tex]x^3[/tex] in the top and bottom:
In this case, 2/3 is the horizontal asymptote of the above function. You should actually express it as y=2/3. This value is the asymptote because when we approach x=infinity, the "dominant" terms
will dwarf the rest and the function will always get closer and closer to y=2/3. Here's a graph of that function as a final illustration that this is correct:
(Notice that there's also a vertical asymptote present in this function.)
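To make this concrete: any function whose dominant terms give that same ratio behaves identically. For example (the coefficients here are just one possibility consistent with the work above), [tex]y=\frac{2x^3+5x-1}{3x^3+x^2+4}[/tex] also levels off at y=2/3 as x heads toward positive or negative infinity.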
If the exponent in the denominator of the function is larger than the exponent in the numerator, the horizontal asymptote will be y=0, which is the x-axis. As x approaches positive or negative
infinity, that denominator will be much, much larger than the numerator (infinitely larger, in fact) and will make the overall fraction equal zero.
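For example, [tex]y=\frac{x+1}{x^2-4}[/tex] has the horizontal asymptote y=0, because the [tex]x^2[/tex] below grows infinitely faster than the [tex]x[/tex] above.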
If there is a bigger exponent in the numerator of a given function, then there is NO horizontal asymptote. For example:
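(One function of this type, shown here purely for illustration, is [tex]y=\frac{x^3+1}{x^2-9}[/tex].)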
There will be NO horizontal asymptote(s) because there is a BIGGER exponent in the numerator, which is 3. See it? This will make the function increase forever instead of closely approaching an
asymptote. The plot of this function is below. Note that again there are also vertical asymptotes present on the graph.
Sample B: Find the horizontal asymptotes of:
In this sample, the function is in factored form. However, we must convert the function to standard form as indicated in the above steps before Sample A. That means we have to multiply it out, so
that we can observe the dominant terms.
Sample B, in standard form, looks like this:
Next: Follow the steps from before. We drop everything except the biggest exponents of x found in the numerator and denominator. After doing so, the above function becomes:
Cancel [tex]x^2[/tex] in the numerator and denominator and we are left with 2. Our horizontal asymptote for Sample B is the horizontal line y=2.
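For instance, a factored function that works out the same way (the coefficients are illustrative) is [tex]y=\frac{2x(x+3)}{(x+1)(x-2)}[/tex]; multiplied out it becomes [tex]\frac{2x^2+6x}{x^2-x-2}[/tex], its dominant terms are [tex]\frac{2x^2}{x^2}[/tex], and the horizontal asymptote is again y=2.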
Sensors 2013, 13(1), 875–897; doi:10.3390/s130100875

Article

Deciphering the Crowd: Modeling and Identification of Pedestrian Group Motion

Zeynep Yücel *, Francesco Zanlungo, Tetsushi Ikeda, Takahiro Miyashita and Norihiro Hagita

Intelligent Robotics and Communication Laboratories, Advanced Telecommunications Research Institute International, Kyoto 619-0288, Japan; E-Mails: zanlungo@atr.jp (F.Z.); ikeda@atr.jp (T.I.); miyasita@atr.jp (T.M.); hagita@atr.jp (N.H.)

* Author to whom correspondence should be addressed; E-Mail: zeynep@atr.jp; Tel.: +81-774-95-1405.

Received: 14 December 2012; in revised form: 20 December 2012; Accepted: 4 January 2013; Published: 14 January 2013

© 2013 by the authors; licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution license (http://creativecommons.org/licenses/by/3.0/).

Abstract:
Associating attributes to pedestrians in a crowd is relevant for various areas like surveillance, customer profiling and service providing. The attributes of interest greatly depend on the
application domain and might involve such social relations as friends or family as well as the hierarchy of the group including the leader or subordinates. Nevertheless, the complex social setting
inherently complicates this task. We attack this problem by exploiting the small group structures in the crowd. The relations among individuals and their peers within a social group are reliable
indicators of social attributes. To that end, this paper identifies social groups based on explicit motion models integrated through a hypothesis testing scheme. We develop two models relating
positional and directional relations. A pair of pedestrians is identified as belonging to the same group or not by utilizing the two models in parallel, which defines a compound hypothesis testing
scheme. By testing the proposed approach on three datasets with different environmental properties and group characteristics, it is demonstrated that we achieve an identification accuracy of 87% to
99%. The contribution of this study lies in its definition of positional and directional relation models, its description of compound evaluations, and the resolution of ambiguities with our proposed
uncertainty measure based on the local and global indicators of group relation.
Keywords: motion model; tracking; recognition
1. Introduction and Motivation

The observation of human behavior in public environments such as shopping malls, sport venues or stations is a common practice. To increase our understanding of these data and utilize them more
efficiently, we must associate attributes to individual pedestrians. The attributes of interest depend considerably on applications. For instance, resolving the social relation between customers such
as mother-son, friends or couple is relevant in customer profiling [1]. Similarly, in intelligent environments, service quality can be improved by providing different services to clients by inferring
their relation to their partners. Besides, in public environments, such as prisons or stadiums, recognizing the leader or the subordinates of groups is helpful for investigating aggressive or
criminal activities [2,3].
However, the association of such attributes is considerably difficult due to the inherent contextual asperities and complex social relations. We propose treating this problem primarily by decomposing
the entire crowd into smaller structures. In other words, we propose handling the crowd as a combination of social groups and single individuals. Once we obtain such a categorization, assigning
social attributes is easier. We base our definition of social groups on the work of McPhail and Wohlstein [4], who regard a group as people engaged in a social relation to one or more pedestrians and
move together toward a common goal.
The detection of pedestrian groups is challenging from several perspectives. Figure 1 illustrates a scene, where the detection of group relations is not straightforward. This figure illustrates a
scene from a public space, where friends and families are walking. Here, gender, clothing and age of the pedestrians are important cues indicating a social relation such as a couple or friends. Human
cognition has evolved in such a way that these personal properties are identified easily in an unconscious manner. However, estimation of such cues from surveillance footage is not possible in most
cases since traditional image based methods do not perform well for such recordings.
Therefore, we propose taking a closer look at the trajectories, namely the distribution of the displacements and scalar product of the velocity vectors. Based on these, we develop two explicit
schemes for modeling the interaction among group members, in addition to two other schemes for modeling the interaction between groups and single pedestrians. The models are calibrated for different
sorts of environments, group structures, and densities. With our proposed hypothesis testing scheme, we show that our method can resolve group relation to a considerable degree for various
The outline of the paper is as follows. Section 2 presents prominent works in this field, and Section 3 elaborates on the properties of the datasets employed in modeling and evaluation. Sections 4
and 5 discuss the motion models and the integration of individual indicators with the help of uncertainty measures. Finally, Section 6 presents our experimental results indicating stability,
performance, sensitivity, and generalization issues in addition to a comparison with an earlier work in literature and an alternative decision scheme.
2. Background and Related Work

As smart environments spread, a vast amount of data is gathered, particularly from public spaces. The analysis of the crowd behavior in this sort of data is of great interest to numerous research
fields such as crowd modeling and simulation, public space design, visual surveillance, and event interpretation [5]. In this section, we focus on previous works that interpret ambient information
from a social relation perspective.
Human activity analysis bears numerous challenging traits [6]. For the solution of this problem, a social signaling standpoint is adopted by Cristani et al. [7], utilizing primarily the nonverbal
cues of human behavior. Gatica-Perez gives a detailed overview of the nonverbal cues of small group relation, such as internal states, personality, and social relations [8]. Additionally, Costa
demonstrates that group behavior presents distinctions in interpersonal distances depending on dominance, attraction, age similarity, and gender of the group members [9]. In the rest of this section,
we refer to such complex features as the high-level cues of group relation. Such cues are specific to individuals. On the contrary, low-level cues involve features like spatial position, velocity or
motion direction, which are not specific to individuals. We categorize low-level cues into two classes, linear and circular variables. Linear variables involve spatial position, trajectory shape, and
the configuration of group members, while circular variables are composed of motion direction and the correlation of velocities.
Recently, the utilization of high-level cues has become a popular approach in the association of attributes to individuals, particularly in social network research. Several works address
investigation of social relations based on such universally valid implicit cues as the age difference between parents and children or the opposite genders of heterosexual couples [10,11]. Some
studies investigate kin relationships using photo albums that span a long time window of several years or even decades [12,13]. On the other hand, the proximity relation of faces on an image [14],
clothing, or facial expressions [15] are used to estimate social relations.
For several contextual and practical reasons, these studies apply only to image domain and not to surveillance footage. First of all, in images from family albums or social network it is evident that
the individuals appearing in the same image are related to each other. Then the question becomes resolving the type of relationship. However, the relation among pedestrians in a crowd is not obvious.
Moreover, in video surveillance high-level cues are not available at all times.
To account for these challenging conditions, several studies propose integrating low-level and high-level cues. For instance, Ding et al. employ low-level cues in concept detection and define a
Gaussian process based affinity learning for spotting social networks in theatrical movies and Youtube videos [16]. However, the appearance matrix relating the actors in a movie is derived from the
script by searching for the names of the characters, which is not applicable in surveillance footage. By identifying the group structure, such behaviors as aggression or agitation are analyzed in
[2]. Yu et al. assume that the 3D tracks of individuals and corresponding high-resolution face images are provided to investigate social groups and their organizations [3], which cannot be
generalized to most other problems.
Compared with high-level cues, low-level ones are easier to derive. However, the analysis of group level activity based on low-level cues is profoundly integrated with stable multi-object tracking
[1,17]. In other words, the occlusion arising from the group motion, which stands as a significant challenge at the first glance, can potentially be exploited for the enhancement of data association
[18,19]. Namely, the search area is restricted based on the estimated future location of the objects from their past trajectories and motion models. Therefore, the dynamic models accounting for the
collective locomotion behavior of pedestrians are proposed to improve tracking performance particularly against occlusions in [20–22].
By exploiting the low-level linear cues, several studies propose employing the contextual information provided by the configuration of groups to detect collective unusual behavior in public spaces.
However, note that the problem of the resolution of group relations cannot be reduced to determining the similarity of trajectories [23]. The methods, which investigate similarity between individual
trajectories, are mainly used in semantic scene modeling. They do not establish a relationship between simultaneously observed trajectories, which is the core of our problem [24,25]. Instead of
finding the similarities between trajectories, Habe et al. propose finding interactions between trajectories to solve for mutual relationship between pedestrians. The influence that pedestrians exert
on each other in the transition of motion states is investigated [26]. Floor control constitutes another commonly used low-level linear cue of collective human activities [27,28]. However, French et
al. propose employing only the circular low-level cue of velocity correlation in a Bayesian framework and ignore the interpersonal distances [29]. In their framework, close proximity is not regarded
as an indicator of group motion since it is claimed to be misleading in complex settings. Similarly, Calderara et al. omit the spatial relationships of trajectory points and focus on trajectory
shapes [30]. Namely, they handle the problem from a circular statistics standpoint and cluster trajectories into similarity classes.
Yücel et al. suggest combining the linear and circular attributes [31–33]. In their framework, group relation is characterized by the distance between the moving parties and the alignment of their
velocity vectors. Similarly, Ge et al. propose an algorithm to detect pedestrian groups through a bottom-up hierarchical clustering scheme based on locomotion similarities derived from an aggregated
measure of velocity difference vectors and spatial proximity [34]. Similar to [34], Sandιkcι et al. propose to integrate the positional and directional cues in the resolution of group relations by
defining similarity metrics for position, velocity, and direction, all of which in turn are expressed in a joint similarity matrix, followed by an agglomerative clustering approach [35]. Nonetheless,
their motion models assume a very simple structure, which might not suffice to capture the distinctive attributes of group behavior. Bahlmann integrates linear and circular variables in a fairly
different problem: online handwriting recognition [36]. Integration is achieved through an approximated wrapped Gaussian distribution, which only holds for data with low deviation, i.e., σ < 1.
Besides, this approach assumes that the probability density function of the linear variable is Gaussian. These two assumptions enable integration into multivariate semicircular wrapped distribution.
However, neither holds for pedestrian trajectory data.
In addition to multi-object tracking and activity recognition, group models play an important role in such other fields as traffic analysis, evacuation dynamics, and the social sciences. Numerous
works in pedestrians simulations are inspired by the social force model [37,38]. Lerner et al. describe a pedestrian simulation method, where a real world recording is employed to reflect behavioral
complexity on individual level and group levels [39].
In light of these observations, we introduce a fundamental insight into collective pedestrian motion models by focusing on a short time interval and deriving low-level cues to infer the social relation. We relax the conditions defining group motion and provide a flexible means of identification for group relations. Since the final decision regarding group relations is based on the combination of positional and directional indicators, this problem is regarded as compound hypothesis testing. Various
experiments prove that our proposed method effectively grasps the characterizing features of group relations and can recognize group activity with significantly high performance rates under varying
environmental conditions and group configurations. Our paper makes the following contributions:
Positional modeling accounting for dyadic as well as multi-partner groups;
Directional modeling in both uniform and non-uniform environments;
Integration of positional and directional indicators through compound hypothesis testing;
Definition of local and global indicators and an uncertainty measure.
3. Datasets

Three publicly available datasets are employed in the development and testing of the motion models, namely Caviar, BIWI Walking Pedestrians dataset, and APT Pedestrian Behavior Analysis dataset
[20,40,41]. These are picked so as to effectively demonstrate the generalization capabilities of our proposed approach against varying environmental conditions and distinctions in group structure.
In Caviar dataset, five videos which are recorded from an oblique view over the entrance hall of a building involve group motion. The pedestrians present meeting and splitting behavior as well as
uninterrupted group motion [40]. Although its size is quite moderate, Caviar dataset is considered in this study mainly due to the publicly available ground truth concerning groups, which provides a
fair comparison with other methods. The BIWI Walking Pedestrians dataset contains two sequences, BIWI-ETH and BIWI-Hotel, recorded from a bird's-eye view with a total of 650 tracks over 20 minutes [20]. The
experiment scenes are the entrance of a building and a sidewalk. Due to the characteristics of these scenes, there is a dominant direction in the pedestrian flux (see Figure 2(b)). APT Pedestrian
Behavior Analysis dataset is recorded in the entrance hall of a shopping center [41] (see Figure 2(c)). Unlike BIWI, such a prominent flow does not exist in any direction but a tendency to walk along
a certain direction is noticed. Due to the homogeneous distribution of the flow, APT dataset is regarded as coming from a uniform environment.
Table 1 shows the total number of observed pedestrians and group sizes. The Caviar dataset involves a fairly small number of pedestrians. BIWI-ETH contains various multi-partner groups, whereas
BIWI-Hotel and APT are composed of mainly dichotomous groups, who are often walking abreast. As the group size gets larger the possibility of abreast configuration decreases particularly in high
pedestrian densities, i.e., the groups may be bent forward or backward as well as arranged in a single file [42]. Among these sets, BIWI-ETH has the highest density followed by BIWI-Hotel, APT and
Caviar, consecutively.
From Figure 2 and Table 1, the main differences between these sets are concluded to be (i) the presence of a preferred direction in BIWI-ETH and BIWI-Hotel against a more homogeneous distribution in Caviar and APT, and (ii) the frequent observation of multi-partner groups in BIWI-ETH against the dominance of dichotomous groups in BIWI-Hotel and APT. These variations are taken into consideration in the
development of motion models.
Since this study proposes an identification method for groups of pedestrians rather than a tracking algorithm, we consider well-tracked trajectories and carry out our analysis to identify the
pedestrian groups from these trajectories. For the BIWI-ETH, BIWI-Hotel and APT datasets, the trajectories, which are obtained by state-of-the-art tracking algorithms, are publicly available
[20,41,43,44]. For Caviar dataset, we performed manual annotation and estimated the homography matrix to map the annotated pixel coordinates to ground plane. The sampling period of trajectory points
is 160 ms concerning BIWI-ETH and BIWI-Hotel sets and 100 ms concerning APT set. For Caviar dataset, the sampling rate is 200 ms. The group relations for all datasets are provided as ground truth
[41,44,45]. Using these trajectories and ground truth values, a convenient formulation is offered in accordance with the characteristics of the environment and the group structure.
4. Modeling Indicators of Group Motion

The question addressed in this study is which parameters characterize group motion, how we can model them and determine whether two pedestrians belong to the same group or not. In what follows, we
introduce the terminology used in the rest of this study and then describe our proposed models of the indicators of group motion.
We term any two pedestrians who are observed simultaneously as a pair. Suppose that the pairs who are engaged in a group relation such as {p[i],p[j]} of Figure 3 constitute the set G, whereas the
pairs who are not engaged in a group relation such as {p[i],p[h]} comprise the complementary set Ḡ [4].
Based on the findings of [46], group motion is mainly characterized by positional indicators and directional indicators. We quantify positional indicators in terms of interpersonal distance, whereas
directional indicators are defined based on motion directions. In explicit terms, the positional indicator of group motion is represented by Δ and is composed of a set of linear variables {δ}, where
δ stands for the instantaneous distance between pedestrians (see Figure 3). On the other hand, the directional indicator, which is represented by Θ, is a set of circular variables, i.e., angles
between simultaneously observed velocity vectors {θ} (see Figure 3).
Obviously, in order to define a meaningful value for θ, the pedestrians should be moving with a velocity larger than a reasonable threshold. We picked this value by examining the distribution of velocity for all people in the environment (see Figure 4). In the BIWI-ETH dataset, the people who wait at the tram station have low velocities distributed more or less uniformly over 0 to 0.5 m/s. On the other hand, there are basically two peaks in the velocity distribution of the APT dataset. The first peak is centered around 0.1 m/s and relates to the people who are watching the shelves, whereas the second peak is centered around 1.2 m/s and relates to the people who walk steadily. Nevertheless, the number of the former is quite low compared with the steadily walking pedestrians. Thus, we picked 0.375 m/s as the velocity threshold.
Since the velocity threshold is picked around the local minima of the velocity distribution separating the moving and stationary pedestrians, shifting the velocity threshold slightly would not affect
a large number of pedestrians and thus would not change the performance of the proposed method drastically. Moreover, the local minima observed in the BIWI-ETH and APT datasets do not arise due to the specific
characteristics of these environments. According to Helbing et al., at normal density the velocity of pedestrians is given by a normal distribution with an average of 1.34 m/s and a standard
deviation of 0.26 m/s [47]. These values may change slightly according to the environment but putting the velocity threshold around 0.3 ∼ 0.5 m/s we will be sure to locate it at least 2σ from the
peak [48].
Based on these definitions, each pair of pedestrians is represented by a set, which is composed of these two indicators {Δ,Θ}. Moreover, each of G and Ḡ is described by two models characterizing the
positional and directional relations, i.e., Δ[G] and Θ[G] or Δ[Ḡ] and Θ[Ḡ]. The identification problem is addressed with two applications of the same approach in parallel, i.e., investigating whether Δ ∼ Δ[G] or Δ ∼ Δ[Ḡ] and whether Θ ∼ Θ[G] or Θ ∼ Θ[Ḡ]. The final decision is rendered based on the outcomes of these two, where the outcome implying a lower uncertainty is preferred
in case of ambiguities.
In our previous study we followed a similar strategy and proposed a simplistic method to identify group motion [31]. Ideally, the pedestrians involved in group motion are proposed to be in close
proximity and have perfectly aligned velocity vectors. Since these ideal conditions are met seldom, certain thresholds are applied to account for the non-ideal nature of the behavior. In this manner,
satisfactory performance rates are achieved. Nevertheless, explicit models are necessary to improve the performance and to make the method flexible in order to effectively adapt to different
settings. To that end, the proximity and motion direction of pedestrians involved in a group relationship are investigated closely and a mathematical model is proposed for each of the relating
probability density functions (pdf) in what follows.
4.1. Modeling Positional Indicators

4.1.1. Modeling Positional Indicators Regarding G

The positional indicators are modeled based on the following assumptions. First, an arbitrary reference frame is assigned to the observation environment. In addition, the probability of visiting each point in the environment is assumed to be equal,

$$P(p_m) = P(p_n), \quad \forall\, p_m, p_n \in A \qquad (1)$$

where P(p_m) denotes the probability of visiting point p_m and A stands for the observation environment.

Any displacement vector δ⃗ can be decomposed into two components, δ_x and δ_y, where $\delta = \sqrt{\delta_x^2 + \delta_y^2}$. Namely, δ_x = δ cos(α) and δ_y = δ sin(α), where α stands for the argument of δ⃗ in the chosen reference frame (see Figure 5). Since group members prefer to keep a comfortable distance of ν between each other, δ_x and δ_y are statistically independent normally distributed random variables,

$$\delta_x \sim N(\nu \cos(\alpha), \sigma^2), \qquad \delta_y \sim N(\nu \sin(\alpha), \sigma^2) \qquad (2)$$

Equation (2) implies that δ follows a Rice distribution,

$$p(\delta \mid \nu, \sigma) = \frac{\delta}{\sigma^2} \exp\left(-\frac{\delta^2 + \nu^2}{2\sigma^2}\right) I_0\!\left(\frac{\delta \nu}{\sigma^2}\right) \qquad (3)$$

where I_0 stands for the modified Bessel function of the first kind with order 0 [49].
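To make the model concrete, the following minimal Python sketch evaluates Equation (3) numerically; the function name and parameter values are ours and purely illustrative, not the calibrated values of Section 6:

import numpy as np

def rice_pdf(delta, nu, sigma):
    """Equation (3): Rice density of the distance between first neighbors in G."""
    delta = np.asarray(delta, dtype=float)
    return (delta / sigma**2) * np.exp(-(delta**2 + nu**2) / (2 * sigma**2)) \
        * np.i0(delta * nu / sigma**2)  # np.i0: modified Bessel function, order 0

# Illustrative evaluation: nu ~ preferred interpersonal distance, sigma ~ its spread.
deltas = np.linspace(0.0, 3.0, 301)
p = rice_pdf(deltas, nu=0.75, sigma=0.25)
print(deltas[p.argmax()])  # the mode sits close to nu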
This distribution is independent of the choice of reference frame. Of course, in the presence of a strong pedestrian flow along a certain direction, the distributions of δ[x] and δ[y] have different
representations according to different choices of reference frame. This is due to the fact that α is determined by the major flow direction in such environments. In the presence of a major flow
direction α is distributed in a non-uniform manner, which affects δ[x] and δ[y]. However, the distribution of δ given by Equation (3) is invariant to the orientation α. Thus, the distribution of δ is
still given by Equation (3). This result obviously holds in the absence of any prominent direction such that α is a uniformly distributed circular random variable.
The unimodal formulation defined by Equation (3) provides a reasonable interpretation for the distance among members of a dichotomous group. However, multi-partner groups, which are composed of three or more pedestrians, present more complex proxemics warranting a multimodal approach.
In order to have a better insight into the structure of multi-partner groups, we define the degree of neighborhood based on the configuration of the group members. Namely, the group structure is
expressed in terms of a minimum spanning tree (MST). The degree of neighborhood concerning any two pedestrians is defined by the number of edges along the shortest path of the MST connecting them.
According to this definition, {p[i],p[j]} of Figure 3 has a degree of neighborhood that equals 1. In other words, they are first neighbors, whereas {p[i],p[k]} of Figure 3 are second neighbors.
In this framework, within multi-partner groups, the distance between first neighbors is modeled using the unimodal formulation of Equation (3). Assuming that the relative position of all first
neighbors is given by the same function, i.e., the distribution function for the position of first neighbors is the same within the group, the distance between n^th order neighbors, n > 1, is modeled
by the convolution of the unimodal model to the n^th power. A multimodal framework, which is the linear combination of these N models is suggested to embrace the relation among members of a
multi-partner group composed of N + 1 people. Namely,

$$\Delta_G(\delta \mid \nu, \sigma) = \sum_{n=1}^{N} K_n \, \Delta_{G_n}(\delta \mid \nu, \sigma) \qquad (4)$$

where K_n is the observation frequency of the n-th neighborhood. The function Δ_{G_n} denotes the distribution between the n-th neighbors and is equivalent to the convolution of Equation (3) to the n-th power. It is suggested to restrict N ∈ {1, 2, 3}, because large groups (of 5 or more people) tend to be arranged in complex configurations instead of abreast formation [42]. This limits the degree of neighborhood and eliminates the need to extend N over 3.
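A discrete sketch of Equation (4) follows; it builds the n-th neighbor terms as convolution powers of the unimodal model on a uniform grid. The weights and parameters are illustrative, and rice_pdf is redefined here (as in the previous sketch) so the snippet is self-contained:

import numpy as np

def rice_pdf(delta, nu, sigma):
    return (delta / sigma**2) * np.exp(-(delta**2 + nu**2) / (2 * sigma**2)) \
        * np.i0(delta * nu / sigma**2)

def multimodal_model(deltas, nu, sigma, weights):
    """Equation (4): weighted sum of the n-fold convolution powers of the
    unimodal first-neighbor model, evaluated on a uniform grid."""
    d = deltas[1] - deltas[0]                 # grid spacing
    base = rice_pdf(deltas, nu, sigma)        # density for first neighbors
    comp = base.copy()
    total = np.zeros_like(base)
    for K_n in weights:                       # K_n: frequency of n-th neighborhood
        total += K_n * comp
        comp = np.convolve(comp, base)[:len(base)] * d  # next convolution power
    return total

deltas = np.linspace(0.0, 5.0, 501)
p = multimodal_model(deltas, nu=0.75, sigma=0.25, weights=[0.6, 0.3, 0.1])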
4.1.2. Modeling Positional Indicators Regarding Ḡ

If any two simultaneously observed pedestrians are not engaged in a group relation, their relative locations at a particular instant are independent. This assumption, together with Equation (1), makes the problem equivalent to randomly selecting two points from a uniform distribution in the observation environment and measuring the distance between them. Suppose that the dimensions of the observation environment along the x- and y-axes are D. Then,

$$p(\delta_x) = \frac{2}{D}\left(1 - \frac{\delta_x}{D}\right) \qquad (5)$$

while the pdf concerning δ_y is computed in the same manner. Assuming that δ_x and δ_y are independent, the relating joint pdf is resolved [50] as,

$$p(\delta) = \begin{cases} \dfrac{2\delta}{D^2}\left(\dfrac{\delta^2}{D^2} - \dfrac{4\delta}{D} + \pi\right), & 0 \le \delta \le D \\[2ex] \dfrac{2\delta}{D^2}\left[4\sqrt{\dfrac{\delta^2}{D^2}-1} - \left(\dfrac{\delta^2}{D^2} + 2 - \pi\right) - 4\tan^{-1}\!\left(\sqrt{\dfrac{\delta^2}{D^2}-1}\right)\right], & D < \delta \le D\sqrt{2} \end{cases} \qquad (6)$$

This distribution describes δ regarding Ḡ in a large environment, D ≫ c, where c ≈ 400 mm stands for the width of the human body. However, it does not account for the constraint imposed by the physical dimensions of the pedestrians, which impose a minimum distance (cutoff) below which δ cannot assume values. To account for this cutoff, δ is substituted with δ′ = δ − c and p(δ) is renormalized by replacing D with D′ = D − c/√2. Note that this distribution does not need to be calibrated since it only depends on the geometry of the observation area.
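Equation (6) with the cutoff can be transcribed directly; in the sketch below (the function name is ours), D is the side length of the observation area and c ≈ 0.4 m is the body width mentioned above:

import numpy as np

def pdf_distance_not_group(delta, D, c=0.4):
    """Equation (6) with the body-size cutoff c: distance between two points
    drawn uniformly from a D x D area, shifted so that delta >= c."""
    Dp = D - c / np.sqrt(2)                       # renormalized side length D'
    s = (np.asarray(delta, dtype=float) - c) / Dp # shifted, normalized distance
    p = np.zeros_like(s)
    m1 = (s >= 0) & (s <= 1)
    p[m1] = 2 * s[m1] * (s[m1]**2 - 4 * s[m1] + np.pi)
    m2 = (s > 1) & (s <= np.sqrt(2))
    r = np.sqrt(s[m2]**2 - 1)
    p[m2] = 2 * s[m2] * (4 * r - (s[m2]**2 + 2 - np.pi) - 4 * np.arctan(r))
    return p / Dp                                 # back to the delta scale

# Sanity check: the density should integrate to ~1.
delta = np.linspace(0.0, 20 * np.sqrt(2), 20001)
print(np.trapz(pdf_distance_not_group(delta, D=20.0), delta))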
4.2. Modeling Directional Indicators

The directional indicator of group motion regarding any two pedestrians p_i and p_j is derived from their velocities. The normalized scalar product of the velocity vectors υ⃗_i and υ⃗_j defines the angle θ_ij between them,

$$\frac{\vec{\upsilon}_i \cdot \vec{\upsilon}_j}{|\vec{\upsilon}_i|\,|\vec{\upsilon}_j|} = \cos(\theta_{ij}) \qquad (7)$$

where θ denotes the angle between these vectors (see Figure 3). The directional indicators of group motion are represented in terms of this angle θ.
The pairs in G, excluding those exhibiting behaviors like meeting, splitting or standing, are expected to have the direction of the velocity vectors aligned to a considerable degree, whereas the
pairs in Ḡ do not present any correlation of direction. This suggests that the expected value of θ is 0 for both G and Ḡ. If θ were a linear random variable over (−∞, ∞), such a behavior could be
approximated with a normal distribution of mean 0 and standard deviation σ[θ]. However, θ is a circular random variable defined over [−π, π] and, thus, it cannot be modeled in terms of a standard
normal distribution.
Hence, the principles of directional statistics are invoked and the behavior of θ is modeled as a von Mises distribution [51], which is the circular analogue of the Gaussian distribution. The explicit form of the von Mises distribution is

$$p(\theta \mid \mu, \kappa) = \frac{\exp(\kappa \cos(\theta - \mu))}{2\pi I_0(\kappa)} \qquad (8)$$

where μ denotes the mean value and κ is the analogue of 1/σ² of the normal distribution.
Note that the θ distribution relating G and Ḡ is described using the same function given by Equation (8), where the parameter κ enables modeling of different behaviors. In other words, for the
pedestrian pairs in G, the distribution of θ is very localized around μ = 0 and κ ≫ 1. On the other hand, for the pedestrian pairs in Ḡ, the distribution is uniform if there is no prominent flow and
κ → 0. Furthermore, in the presence of a major flow, θ has two peaks, i.e., one for pedestrians moving in the same direction and another for pedestrians moving in opposite directions. In that case, the distribution of θ regarding Ḡ is modeled as a linear combination of two von Mises distributions, one with μ = 0 and the other with μ = π. Even in this case, the spread around a particular peak is expected to be larger than that of pairs in G.
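Both behaviors are easy to reproduce numerically; the sketch below uses SciPy's von Mises density for Equation (8) with illustrative κ values, and assumes (for illustration only) equal mixture weights for the two flow directions:

import numpy as np
from scipy.stats import vonmises

theta = np.linspace(-np.pi, np.pi, 721)

# Pairs in G: headings tightly aligned around 0 (kappa >> 1).
p_G = vonmises.pdf(theta, kappa=20.0, loc=0.0)

# Pairs in G-bar under a bidirectional flow: broader mixture with peaks
# at 0 (same direction) and pi (opposite directions).
p_Gbar = 0.5 * vonmises.pdf(theta, kappa=2.0, loc=0.0) \
       + 0.5 * vonmises.pdf(theta, kappa=2.0, loc=np.pi)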
5. Hypothesis Testing

The decision whether a pair belongs to G or Ḡ is carried out using a compound hypothesis testing scheme, as shown in Algorithm 1. Since G and Ḡ are mutually exclusive and complementary events, a
decision can confidently be made as long as the individual indicators point to the same sort of group relation. In case of conflicts, a measure of uncertainty needs to be defined to resolve the final
decision. In what follows we describe how the individual decisions are carried out and we define the uncertainty measures for resolving the final decision in case of contradictions.
Algorithm 1: Compound hypothesis testing.

Input: Trajectory of pedestrian p_i and of the simultaneously observed pedestrians {p_j}, 1 ≤ j ≤ J.
Output: The nature of the group relation of p_i with each p_j.

for j = 1 to J do
    Δ = { |δ⃗_ij| }
    Θ = { ∠(υ⃗_i, υ⃗_j) }
    compute L^δ and L^θ                          /* Equation (9) */
    if (L^δ > 0) ∧ (L^θ > 0) then                /* Equation (10) */
        {p_i, p_j} ∈ G
    else if (L^δ < 0) ∧ (L^θ < 0) then           /* Equation (10) */
        {p_i, p_j} ∈ Ḡ
    else
        compute ρ^δ and ρ^θ                      /* Equation (13) */
        if [(Δ ∼ Δ_G) ∧ (Θ ∼ Θ_Ḡ) ∧ (ρ^δ < 1/ρ^θ)] ∨ [(Δ ∼ Δ_Ḡ) ∧ (Θ ∼ Θ_G) ∧ (ρ^θ < 1/ρ^δ)] then
            {p_i, p_j} ∈ G
        else
            {p_i, p_j} ∈ Ḡ
In binary decisions, a likelihood ratio test is one way of determining the underlying model. Concerning Δ, the log-likelihood ratio of being in a group relation over not being in a group relation, L^δ, is defined as

$$L^\delta = \log\left(\frac{\prod_{\delta \in \Delta} \Delta_G(\delta \mid \nu, \sigma)}{\prod_{\delta \in \Delta} \Delta_{\bar{G}}(\delta)}\right) \qquad (9)$$

The decision based on δ is then

$$\begin{cases} \Delta \sim \Delta_G, & L^\delta > 0 \\ \Delta \sim \Delta_{\bar{G}}, & L^\delta < 0 \end{cases} \qquad (10)$$

The decision based on θ is carried out in a similar manner through the log-likelihood ratio concerning Θ, L^θ, computed in an analogous way to Equation (9).
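In practice, Equation (9) is better computed as a sum of log densities; a minimal sketch (the function name is ours) is:

import numpy as np

def log_likelihood_ratio(samples, pdf_G, pdf_Gbar, eps=1e-300):
    """Equation (9) as a sum of logs for numerical stability.
    Positive values favor the group hypothesis, per Equation (10)."""
    samples = np.asarray(samples, dtype=float)
    return np.sum(np.log(pdf_G(samples) + eps) - np.log(pdf_Gbar(samples) + eps))

# Usage with the earlier sketches (names ours):
# L_delta = log_likelihood_ratio(distances,
#                                lambda d: rice_pdf(d, nu, sigma),
#                                lambda d: pdf_distance_not_group(d, D))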
As long as L^δ and L^θ have the same sign, a confident decision is made regarding the group relation (the first two branches of Algorithm 1). However, contradictions might arise. For example, when pedestrians
cross next to each other, move along a flow, or go through passages, their relative position might become close or their velocity vectors might be aligned, independent of their social relation. One
may argue that an intuitive way of resolving such cases is to pick the decision that implies a larger absolute value. However, we demonstrate in Section 6 that this straightforward approach is not
capable of compensating for the effect of these misleading cues. Therefore, we devise an uncertainty measure.
Inspired by the Kullback-Leibler divergence, a reliability estimate is employed to quantify the uncertainty of the individual decisions rendered through Equation (10) [52]. The Kullback-Leibler divergence of two distributions P and Q is defined as

$$D_{KL}(P \,\|\, Q) = \sum_i p(i) \log\left(\frac{p(i)}{q(i)}\right) \qquad (11)$$

Note that this measure is not symmetric, i.e., D_KL(P‖Q) ≠ D_KL(Q‖P). Thereby, mathematically speaking, it is not a distance measure, but it quantifies the difference between two probability distributions. To have a common reference point, the divergence terms are computed with respect to the observed distributions. Hence, the divergences relating δ with respect to G and Ḡ are defined as D_G^δ = D_KL(Δ‖Δ_G) and D_Ḡ^δ = D_KL(Δ‖Δ_Ḡ). Since these terms embrace all {δ} through the summation term in Equation (11), we call them global indicators of group motion.
However, θ relating G does not present a behavior as regular as δ of G. Thus, it is proposed to focus on its local characteristics so as to avoid the misleading temporal imperfections that might lead to a false similarity to Ḡ. Namely, the divergence term relating θ with respect to G is defined as

$$D_G^\theta(\Theta \,\|\, \Theta_G) = \max_\theta \left\{ \Theta(\theta) \log\left(\frac{\Theta(\theta)}{\Theta_G(\theta \mid \kappa)}\right) \right\} \qquad (12)$$

where the divergence of θ with respect to Ḡ is computed in a similar manner. This equation implies that only the divergence value indicating the maximum dissimilarity is accounted for. Thereby, it defines a local indicator of group motion.
A direct comparison of the divergence terms defined above is not possible since they are not defined in terms of comparable measures. To enable a comparison, two uncertainty measures are defined regarding each individual decision as the ratio of the concerning divergence values,

$$\rho^\delta = D_G^\delta / D_{\bar{G}}^\delta, \qquad \rho^\theta = D_G^\theta / D_{\bar{G}}^\theta \qquad (13)$$

The final resolution is determined by picking the decision with the lower uncertainty (the last branch of Algorithm 1).
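The sketch below assembles Equations (11)-(13) and the conflict branch of Algorithm 1; the observed and model distributions are assumed to be histograms over the same bins, and all names are ours:

import numpy as np

def divergence_global(obs, model, eps=1e-12):
    """Equation (11): KL divergence of the observed histogram from a model."""
    p = (obs + eps) / (obs + eps).sum()
    q = (model + eps) / (model + eps).sum()
    return np.sum(p * np.log(p / q))

def divergence_local(obs, model, eps=1e-12):
    """Equation (12): only the bin of maximum dissimilarity is kept."""
    p = (obs + eps) / (obs + eps).sum()
    q = (model + eps) / (model + eps).sum()
    return np.max(p * np.log(p / q))

def is_group(L_delta, L_theta, rho_delta, rho_theta):
    """Final decision of Algorithm 1; returns True for membership in G."""
    if L_delta > 0 and L_theta > 0:
        return True
    if L_delta < 0 and L_theta < 0:
        return False
    if L_delta > 0:                       # Delta says G, Theta says G-bar
        return rho_delta < 1.0 / rho_theta
    return rho_theta < 1.0 / rho_delta    # Theta says G, Delta says G-bar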
6. Experimental Results

This section discusses the performance of the estimated distributions in terms of a qualitative comparison, the stability of the model parameters with respect to varying training sets, the identification performance for groups, sensitivity, and generalization, as well as the improvement that compound hypothesis testing brings over the individual models, over the method of [31], and over a maximum absolute log-likelihood ratio decision rule.
6.1. Model Calibration

The models defined in Section 4 bear a number of parameters, which need to be tuned for different environments and group behaviors. For instance, the positional relation model regarding G, Δ[G],
given in Equation (4) requires the determination of ν and σ. Similarly, the directional relation models, Θ[G] and Θ[Ḡ], given in Equation (8) require calibration of κ.
For solving these model parameters, we propose shuffling the dataset and randomly selecting 10% of the pairs in G and 10% of the pairs in Ḡ. The squared error between the distributions of the
positional and directional indicators concerning these randomly selected sets and the proposed models is minimized using a golden section search. Subsequently, the remaining 90% of the data is
employed to evaluate the proposed models. Section 6.2 presents the performance of this estimation scheme.
In our investigation of the stability of the model parameters, and the sensitivity of the model against varying training sets, this procedure is repeated by shuffling the dataset 50 times. Sections
6.3 and 6.4 report the performance metrics following such a validation scheme.
6.2. Estimated Distributions

Figure 6 demonstrates the modeled and observed distributions of the positional indicators for a particular run of the calibration scheme described in Section 6.1. The observed distribution is
expressed in terms of the histograms that relate the samples constituting the 90% of all observations. The model concerning Δ[G] of BIWI-ETH is modeled with both unimodal and multimodal approaches.
For this case, the multimodal approach in Equation (4) considers N to be 3. Since BIWI-ETH contains various multi-partner groups (see Table 1), the improvement of the multimodal approach over the
unimodal approach can easily be observed in Figure 6(a). On the other hand, due to the dominance of the dichotomous groups in APT, the unimodal scheme provides satisfactory performance in modeling Δ
[G] concerning APT. For Δ[Ḡ], fairly good results are obtained for both sets. The smoother shape of the observed distribution of APT is due to the larger number of observations compared with BIWI-ETH.
Figures 7(a,b) illustrate the modeled and observed distributions of the directional indicators relating G. As expected, both models peak around 0, where the spread concerning APT is slightly larger
than that of BIWI-ETH. This difference reflects the more regular motion pattern of the pedestrians due to fewer distractions in comparison with APT's shopping center environment. On the other hand,
the models concerning Ḡ present a clear distinction arising from the different flow characteristics. Due to the lack of prominent flow direction, θ is distributed more evenly for APT and is
concentrated around 0 and π for BIWI-ETH.
6.3. Stability of Parameters

Repeating the calibration method described in Section 6.1 50 times using a set of randomly selected samples that constitutes 10% of all the data, we obtain the statistics shown in Table 2.
The Δ[G] models relating different datasets lead to similar values for ν, ranging between 0.67 m and 0.81 m with a fairly small variation within 0.06 m. Hall defines close phase personal distance
to be between 46 cm and 75 cm and far phase personal distance to be between 76 cm and 120 cm [53]. Our findings are consistent with these values.
Regarding the θ models, the κ values relating G are always larger than those of Ḡ. As explained in Section 4.2, this indicates that the θ pattern concerning G is more structured than that of Ḡ.
Nonetheless, the distinction becomes most clear in APT due to the lack of prominent flow direction. Moreover, the deviations of κ have quite insignificant values, provided that the sample set is
large, as in BIWI-ETH and APT, whereas in BIWI-Hotel the deviation of κ regarding both G and Ḡ is higher relative to κ due to the reduced number of samples.
6.4. Performance and Sensitivity

Table 3 illustrates the performance of detecting the individual group relations over 50 runs of the proposed method together with the sensitivity of the identification rates. The overall success rates
are all above roughly 85%, where the rates of G and Ḡ do not present any significant distinction between the different runs of the proposed method with respect to different datasets.
Since the group structure of multi-partner groups gets more complex, particularly in high pedestrian densities, it is not possible to provide stable statistics for the performance rates with respect
to the degree of neighborhood [42]. This fact supports our unifying approach in modeling δ of G, where the different degrees of neighborhood are blended in Equation (4).
Moreover, in multi-partner groups a cross-check can link pedestrians who are found to be in a group relation with the same partners, independent of their degree of neighborhood. The detection rates regarding G increase to 100% by applying this cross-check. Figure 8 illustrates several examples of challenging cases from the BIWI-ETH and BIWI-Hotel sets.
6.5. Comparison and Generalization

This section presents the performance rates based on the decisions of each individual indicator and ascertains that compound hypothesis testing improves the identification of group relations. Moreover, the alternative to hypothesis testing described in Section 5, where a decision is made in favor of the maximum absolute log-likelihood ratio, is applied, and the superiority of our proposed method is verified. In addition, the detection performance of the method of [31] is reported, and it is ascertained that our proposed method outperforms it.
The improvement introduced by integrating the two observations through compound hypothesis testing, as described in Section 5, is presented in Table 4. The gain from using both indicators (Δ + Θ) over a single indicator (Δ or Θ) is reported as the difference between the performance rates of the individual decisions and the rates after integration. The numbers are often positive, indicating that compound hypothesis testing improves on the individual models in almost every case.
The detection of G in Caviar is the only exception. Using the positional indicator Δ alone, a detection rate of 93.18% is achieved; integrating the positional and directional indicators, the detection rate decreases to 86.68%. This is because the pedestrians in Caviar follow scenarios such as meeting and splitting, which cannot be determined using the directional indicator, as explained in Section 4.2. The ground truth is annotated from the video sequence, where visual cues are available, whereas group relation is resolved using indicators derived only from trajectory data. Certain cues, such as gaze direction or body posture, are therefore not reflected, while cues like position are still present. It is thus not surprising that for behaviors like meeting and splitting, using only the positional indicator Δ performs better than Δ + Θ.
Table 5 shows the performance rates of the model of [31] for pairs in G, pairs in Ḡ, and all pairs. In BIWI-ETH, which involves a non-uniform environment with high pedestrian density, Reference [31] has a positive bias toward Ḡ, which misleadingly raises the overall detection rate to 95.05%; G, which is observed less often than Ḡ, is detected with only a 65.52% success rate. In BIWI-Hotel, which involves a dominant-flow-direction environment with low pedestrian density, the identification rates of [31] and the proposed method are comparable. In APT, Reference [31] detects both G and Ḡ at rates roughly 9% lower than our method.
In Section 5, it is mentioned that a straightforward way of dealing with conflicting decisions is to pick the decision that implies the larger absolute value. The identification rates achieved by selecting the decision with the higher absolute log-likelihood ratio, instead of applying compound hypothesis testing, are also presented in Table 5. Although the overall performance rates appear close to those of the proposed method, the detection rates of G are considerably lower than those of Ḡ; in other words, this approach has a positive bias toward Ḡ. Our proposed method, in contrast, shows no bias in favor of a particular class, which implies a fair distinction of group relations.
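The two decision rules compared above can be sketched as follows (hedged: Section 5 defines the actual test; for simplicity the compound rule is illustrated here as a sum of the two log-likelihood ratios, and all names, including the Models container, are hypothetical):

import math
from collections import namedtuple

# Hypothetical container for the four calibrated densities.
Models = namedtuple("Models", "delta_g delta_gbar theta_g theta_gbar")

def llr(obs, pdf_g, pdf_gbar):
    # Log-likelihood ratio of one indicator: > 0 favors 'group' (G).
    return math.log(pdf_g(obs)) - math.log(pdf_gbar(obs))

def decide_compound(delta, theta, m):
    # Compound evaluation: integrate both indicators, then decide.
    score = llr(delta, m.delta_g, m.delta_gbar) \
          + llr(theta, m.theta_g, m.theta_gbar)
    return score > 0                     # True -> declare group relation

def decide_max_abs(delta, theta, m):
    # Alternative: trust whichever single indicator is more 'confident'
    # (larger |LLR|) -- shown above to be biased toward the Ḡ class.
    ld = llr(delta, m.delta_g, m.delta_gbar)
    lt = llr(theta, m.theta_g, m.theta_gbar)
    return (ld if abs(ld) >= abs(lt) else lt) > 0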
Positional and directional models, together with a compound evaluation scheme, have been proposed for the identification of pedestrian groups in crowded environments. Different environmental characteristics are accounted for, in addition to varying group structures. Our results indicate that the proposed models capture the characterizing features of different environmental settings and varying patterns of group relations. Moreover, the model parameters can be derived stably from a small set of data, and group relations are identified at satisfactorily high rates. The efficacy of compound evaluation is verified by comparison with individual decisions as well as with another method from the literature. Our contributions are: improvements to the positional and directional models to adapt to different environments and group structures; the formulation and comparison of compound evaluations; and the resolution of ambiguities with our proposed uncertainty measure based on local and global indicators of group relations.
This research is supported by the Ministry of Internal Affairs and Communications of Japan through the project of Ubiquitous Network Robots for Elderly and Challenged.
[1] Haritaoglu, I.; Flickner, M. Detection and Tracking of Shopping Groups in Stores. Proceedings of the 2001 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Kauai, HI, USA, 8–14 December 2001; pp. I-431–I-438.
[2] Chang, M.C.; Krahnstoever, N.; Ge, W. Probabilistic Group-Level Motion Analysis and Scenario Recognition. Proceedings of the 2011 IEEE International Conference on Computer Vision, Barcelona, Spain, 6–13 November 2011; pp. 747–754.
[3] Yu, T.; Lim, S.N.; Patwardhan, K.A.; Krahnstoever, N. Monitoring, Recognizing and Discovering Social Networks. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Miami, FL, USA, 20–25 June 2009; pp. 1462–1469.
[4] McPhail, C.; Wohlstein, R. Using film to analyze pedestrian behavior. 1982, 10, 347. doi:10.1177/0049124182010003007.
[5] Zhan, B.; Monekosso, D.N.; Remagnino, P.; Velastin, S.A.; Xu, L.Q. Crowd analysis: A survey. 2008, 19, 345–357. doi:10.1007/s00138-008-0132-4.
[6] Aggarwal, J.K.; Ryoo, M.S. Human activity analysis: A review. 2011. doi:10.1145/1922649.1922653.
[7] Cristani, M.; Raghavendra, R.; Del Bue, A.; Murino, V. Human behavior analysis in video surveillance: A social signal processing perspective. 2013, 100, 86–97. doi:10.1016/j.neucom.2011.12.038.
[8] Gatica-Perez, D. Automatic nonverbal analysis of social interaction in small groups: A review. 2009, 27, 1775–1787. doi:10.1016/j.imavis.2009.01.004.
[9] Costa, M. Interpersonal distances in group walking. 2010, 34, 15–26. doi:10.1007/s10919-009-0077-y.
[10] Chen, Y.Y.; Hsu, W.H.; Liao, H.Y.M. Discovering Informative Social Subgraphs and Predicting Pairwise Relationships from Group Photos. Proceedings of the 20th ACM International Conference on Multimedia, Nara, Japan, 29 October–2 November 2012.
[11] Singla, P.; Kautz, H.; Luo, J.; Gallagher, A. Discovery of Social Relationships in Consumer Photo Collections Using Markov Logic. Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops, Anchorage, AK, USA, 23–28 June 2008; pp. 1–7.
[12] Xia, S.; Shao, M.; Luo, J.; Fu, Y. Understanding kin relationships in a photo. 2012, 14, 1046–1056. doi:10.1109/TMM.2012.2187436.
[13] Wang, G.; Gallagher, A.C.; Luo, J.; Forsyth, D.A. Seeing people in social context: Recognizing people and social relationships. 2010, 6315, 169–182.
[14] Gallagher, A.C.; Chen, T. Understanding Images of Groups of People. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Miami, FL, USA, 20–25 June 2009; pp. 256–263.
[15] Murillo, A.; Kwak, I.; Bourdev, L.; Kriegman, D.; Belongie, S. Urban Tribes: Analyzing Group Photos from a Social Perspective. Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops, Providence, RI, USA, 16–21 June 2012; pp. 28–35.
[16] Ding, L.; Yilmaz, A. Learning relations among movie characters: A social network perspective. 2010, 6314, 410–423.
[17] Wang, X.; Tieu, K.; Grimson, E. Correspondence-free activity analysis and scene modeling in multiple camera views. 2010, 32, 56–71. doi:10.1109/TPAMI.2008.241.
[18] Wu, B.; Nevatia, R. Detection and segmentation of multiple, partially occluded objects by grouping, merging, assigning part detection responses. 2009, 82, 185–204. doi:10.1007/s11263-008-0194-9.
[19] Jorge, P.; Abrantes, A.; Marques, J. On-Line Tracking Groups of Pedestrians with Bayesian Networks. Proceedings of the 6th International Workshop on Performance Evaluation for Tracking and Surveillance, Prague, Czech Republic, 10 May 2004.
[20] Pellegrini, S.; Ess, A.; Schindler, K.; Van Gool, L.J. You'll Never Walk Alone: Modeling Social Behavior for Multi-Target Tracking. Proceedings of the IEEE 12th International Conference on Computer Vision, 29 September–2 October 2009; pp. 261–268.
[21] Pellegrini, S.; Ess, A.; Van Gool, L.J. Improving data association by joint modeling of pedestrian trajectories and groupings. 2010, 6311, 452–465.
[22] Bose, B.; Wang, X.; Grimson, E. Multi-Class Object Tracking Algorithm that Handles Fragmentation and Grouping. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Minneapolis, MN, USA, 17–22 June 2007; pp. 1–8.
[23] Wertheimer, M. Laws of Organization in Perceptual Forms. Routledge and Kegan Paul: London, UK, 1938.
[24] Wang, X.; Ma, K.T.; Ng, G.W.; Grimson, W.E.L. Trajectory analysis and semantic region modeling using nonparametric hierarchical Bayesian models. 2011, 95, 287–312. doi:10.1007/s11263-011-0459-6.
[25] Yan, W.; Forsyth, D.A. Learning the Behavior of Users in a Public Space through Video Tracking. Proceedings of the 7th IEEE Workshops on Application of Computer Vision, Breckenridge, CO, USA, 5–7 January 2005; pp. 370–377.
[26] Habe, H.; Honda, K.; Kidode, M. Human Interaction Analysis Based on Walking Pattern Transitions. Proceedings of the 3rd ACM/IEEE International Conference on Distributed Smart Cameras, Como, Italy, 30 August–2 September 2009; pp. 1–8.
[27] Zen, G.; Lepri, B.; Ricci, E.; Lanz, O. Space Speaks: Towards Socially and Personality Aware Visual Surveillance. Proceedings of the 1st ACM International Workshop on Multimodal Pervasive Video Analysis, Florence, Italy, 25–29 October 2010; pp. 37–42.
[28] Choi, W.; Shahid, K.; Savarese, S. Learning Context for Collective Activity Recognition. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Providence, RI, USA, 20–25 June 2011; pp. 3273–3280.
[29] French, A.P.; Naeem, A.; Dryden, I.L.; Pridmore, T.P. Using Social Effects to Guide Tracking in Complex Scenes. Proceedings of the IEEE Conference on Advanced Video and Signal Based Surveillance, London, UK, 5–7 September 2007; pp. 212–217.
[30] Calderara, S.; Prati, A.; Cucchiara, R. Mixtures of von Mises distributions for people trajectory shape analysis. 2011, 21, 457–471. doi:10.1109/TCSVT.2011.2125550.
[31] Yücel, Z.; Ikeda, T.; Miyashita, T.; Hagita, N. Identification of Mobile Entities Based on Trajectory and Shape Information. Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, San Francisco, CA, USA, 25–30 September 2011; pp. 3589–3594.
[32] Yücel, Z.; Zanlungo, F.; Ikeda, T.; Miyashita, T.; Hagita, N. Modeling Indicators of Coherent Motion. Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, Vilamoura-Algarve, Portugal, 7–12 October 2012; pp. 1–8.
[33] Yücel, Z.; Miyashita, T.; Hagita, N. Modeling and Identification of Group Motion via Compound Evaluation of Positional and Directional Cues. Proceedings of the 21st International Conference on Pattern Recognition, Tsukuba, Japan, 11–15 November 2012; pp. 1–8.
[34] Ge, W.; Collins, R.T.; Ruback, B. Vision-based analysis of small groups in pedestrian crowds. 2012, 34, 1003–1016. doi:10.1109/TPAMI.2011.176.
[35] Sandikci, S.; Zinger, S.; de With, P.H.N. Detection of human groups in videos. 2011, 6915, 507–518.
[36] Bahlmann, C. Directional features in online handwriting recognition. 2006, 39, 115–125. doi:10.1016/j.patcog.2005.05.012.
[37] Helbing, D.; Molnar, P. Social force model for pedestrian dynamics. 1995, 51, 4282–4286. doi:10.1103/PhysRevE.51.4282.
[38] Mehran, R.; Oyama, A.; Shah, M. Abnormal Crowd Behavior Detection Using Social Force Model. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Miami, FL, USA, 20–25 June 2009; pp. 935–942.
[39] Lerner, A.; Chrysanthou, Y.; Lischinski, D. Crowds by example. 2007, 26, 655–664. doi:10.1111/j.1467-8659.2007.01089.x.
[40] Fisher, R. The PETS04 Surveillance Ground-Truth Data Sets. Proceedings of the 6th IEEE International Workshop on Performance Evaluation of Tracking and Surveillance, Prague, Czech Republic, 10 May 2004.
[41] Zeynep Yücel. Available online: http://www.irc.atr.jp/zeynep/research (accessed on 9 January 2013).
[42] Moussaïd, M.; Perozo, N.; Garnier, S.; Helbing, D.; Theraulaz, G. The walking behaviour of pedestrian social groups and its impact on crowd dynamics. 2010. doi:10.1371/journal.pone.0010047.
[43] Glas, D.; Miyashita, T.; Ishiguro, H.; Hagita, N. Laser Tracking of Human Body Motion Using Adaptive Shape Modeling. Proceedings of the International Conference on Intelligent Robots and Systems, San Diego, CA, USA, 29 October–2 November 2007; pp. 602–608.
[44] ETH Zurich—Department of Information Technology and Electrical Engineering, Computer Vision Laboratory. Available online: http://www.vision.ee.ethz.ch/datasets/index.en.html (accessed on 9 January 2013).
[45] Benchmark Data for PETS-ECCV 2004. Available online: http://www-prima.inrialpes.fr/PETS04/caviar_data.html (accessed on 9 January 2013).
[46] Moussaïd, M.; Helbing, D.; Garnier, S.; Johansson, A.; Combe, M.; Theraulaz, G. Experimental study of the behavioural mechanisms underlying self-organization in human crowds. 2009. doi:10.1098/rspb.2009.0405.
[47] Helbing, D.; Farkas, I.; Molnar, P.; Vicsek, T. Simulation of Pedestrian Crowds in Normal and Evacuation Situations. Springer: Berlin, Germany, 2002.
[48] Daamen, W.; Hoogendoorn, S. Free Speed Distributions for Pedestrian Traffic. Proceedings of the 85th Annual Meeting of the Transportation Research Board, Washington, DC, USA, 22–26 January 2006.
[49] Abramowitz, M.; Stegun, I. Handbook of Mathematical Functions. Courier Dover: New York, NY, USA, 1965.
[50] Francesco Zanlungo. Available online: https://sites.google.com/site/francescozanlungo/squarelinepicking (accessed on 9 January 2013).
[51] Mardia, K.V.; Jupp, P.E. Directional Statistics. John Wiley and Sons Ltd.: New York, NY, USA, 2000.
[52] Kose, C.; Wesel, R. Robustness of Likelihood Ratio Tests: Hypothesis Testing under Incorrect Models. Proceedings of the Thirty-Fifth Asilomar Conference on Signals, Systems and Computers, Pacific Grove, CA, USA, 4–7 November 2001; pp. 1738–1742.
[53] Hall, E.T. The Hidden Dimension. Doubleday: Garden City, NY, USA, 1966.
Which pedestrians are in a group? It is hard to tell from snapshots, since traditional image-based methods do not apply to surveillance footage. Trajectories are an important clue to group relations.
Experiment scenes from the datasets (a) Caviar; (b) BIWI-ETH; (c) BIWI-Hotel; and (d) APT. Pedestrians moving as a group are denoted with bounding boxes of the same color.
Pedestrians of the same group are denoted with the same color. Some positional and directional measures employed in the identification of groups are illustrated in reference to p[i].
Velocity distribution concerning all pedestrians in (a) BIWI-ETH and (b) APT datasets.
Distribution of δ[x], δ[y] and δ regarding G.
Observed and modeled distributions of δ regarding G for (a) BIWI-ETH and (b) APT. Figures (c) and (d) are organized similarly for Ḡ.
Observed and modeled distributions of θ regarding G for (a) BIWI-ETH and (b) APT. Figures (c) and (d) are organized similarly for Ḡ.
Pedestrians of the same group are denoted with the same marker and color, whereas pedestrians who do not belong to a group are denoted with gray circles. (a,b,c) Two pedestrians present meeting and splitting behavior; (d) groups behave in a non-coherent manner; (e) considerable occlusion; (f,g) two groups move along the same flow; groups pass through each other moving (h,i) in opposite directions and (j) in the same direction; (k) unrelated pedestrians present group-like behavior; (l,m) unrelated pedestrians follow trajectories and velocities similar to those of groups; (n,o) waiting people introduce ambiguity.
Specifications of datasets.

Dataset      Duration   Groups of size 2 / 3 / 4 / 5 / 6   Total # of pedestrians
Caviar       1′11″      5 / - / 1 / - / -                  17
BIWI-ETH     8′38″      38 / 10 / 6 / 1 / 4                360
BIWI-Hotel   12′54″     38 / 3 / - / - / -                 223
APT          30′00″     128 / 8 / - / - / -                531
The mean values and standard deviations of ν, σ and κ over the 50 runs.

Model          Parameter   Caviar        BIWI-ETH       BIWI-Hotel     APT
Δ[G](δ|ν,σ)    ν (m)       0.81 ± 0.04   0.76 ± 0.06    0.67 ± 0.03    0.71 ± 0.02
               σ (m)       0.33 ± 0.07   0.22 ± 0.05    0.14 ± 0.03    0.13 ± 0.02
Θ[G](θ|κ)      κ           6.36 ± 1.15   69.53 ± 9.18   164 ± 40.25    9.59 ± 2.11
Θ[Ḡ](θ|κ)      κ           0.32 ± 0.39   15.03 ± 1.38   36.29 ± 9.18   0.89 ± 0.13
Performance rates of the proposed method.

Dataset      G (%)          Ḡ (%)          Total (%)
Caviar       86.68 ± 0.33   94.36 ± 0.23   87.82 ± 0.32
BIWI-ETH     85.62 ± 0.00   91.15 ± 0.00   90.51 ± 0.00
BIWI-Hotel   95.89 ± 0.33   96.77 ± 1.61   96.57 ± 0.51
APT          94.77 ± 0.15   99.84 ± 0.10   99.10 ± 2.76
Improvement introduced by compound evaluation over individual decisions.

Dataset      Transition    G (%)   Ḡ (%)   Total (%)
Caviar       Δ → Δ + Θ     −6.5    22.4    −2.7
             Θ → Δ + Θ     4.96    3.89    4.77
BIWI-ETH     Δ → Δ + Θ     3.03    0.36    0.47
             Θ → Δ + Θ     13.62   2.65    3.18
BIWI-Hotel   Δ → Δ + Θ     0.06    0.04    0.04
             Θ → Δ + Θ     0.33    0.04    0.07
APT          Δ → Δ + Θ     5.74    0.56    3.14
             Θ → Δ + Θ     8.05    9.66    8.87
Performance comparison of the proposed method to the method of [31] and to the method of maximum absolute log-likelihood ratio.

             Proposed method (%)      Method of [31] (%)       Max. abs. log-likelihood ratio (%)
Dataset      G      Ḡ      Total      G      Ḡ      Total      G      Ḡ      Total
Caviar       86.68  94.36  87.82      57.50  94.83  63.17      55.88  86.28  60.49
BIWI-ETH     85.62  91.15  90.51      65.52  97.23  95.05      58.81  96.16  88.79
BIWI-Hotel   95.89  96.77  96.57      97.87  96.21  96.25      88.75  98.18  96.03
APT          94.77  99.84  99.10      88.08  89.81  88.33      86.72  99.65  97.77 | {"url":"http://www.mdpi.com/1424-8220/13/1/875/xml","timestamp":"2014-04-16T13:14:33Z","content_type":null,"content_length":"124294","record_id":"<urn:uuid:cc2c822e-8144-4d82-a218-59f41cdfef16>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00486-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Forum Discussions
- User Profile for: jeremys_@_md5.umd.edu
UserID: 36664
Name: Weissenburger - Jeremy S.
Registered: 12/6/04
Total Posts: 6
Show all user messages | {"url":"http://mathforum.org/kb/profile.jspa?userID=36664","timestamp":"2014-04-20T01:56:48Z","content_type":null,"content_length":"12489","record_id":"<urn:uuid:29479bfb-2706-47ba-83f9-b0a1989954f5>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00659-ip-10-147-4-33.ec2.internal.warc.gz"} |
Ultrasonic generator introduction
An ultrasonic generator is the device that drives an ultrasonic transducer, supplying it with ultrasonic-frequency electrical energy. Generators are excited in one of two ways: externally excited (separately driven) or self-excited. Classified by the type of device used in the final amplifier stage, there are four types: vacuum-tube ultrasonic generators, thyristor inverter ultrasonic generators, transistor ultrasonic generators, and power-module ultrasonic generators. The tube and thyristor inverter types are now essentially obsolete; transistor generators are in widespread use today.
An externally excited ultrasonic generator consists mainly of two stages: an oscillator followed by a power amplifier, generally coupled to the ultrasonic transducer through an output transformer. A self-excited ultrasonic generator combines the oscillator, amplifier, output transformer, and transducer into one closed loop; when the amplitude and phase feedback conditions are met, the loop forms a power oscillator that resonates at the mechanical resonance frequency of the transducer.
Based on these characteristics of ultrasonic generators, the following mainly discusses and analyzes the design of resonance tracking, the power amplifier, matching, and related issues.
I. The resonance problem (automatic frequency tracking)
The so-called resonance problem is whether the generator's output signal frequency can track changes in the transducer's resonant frequency during operation, known as automatic frequency tracking. The commonly used methods are roughly the following:
1. Acoustic tracking
In acoustic tracking, an electrical signal at the transducer's resonant frequency is picked up through acoustic coupling and fed back to the preamplifier, forming a self-excited oscillator. The circuit is a closed-loop system: at power-on, it momentarily produces a shock pulse, which passes through the preamplifier and power amplifier to excite the transducer, and the transducer vibrates at its natural frequency. This produces, on an acoustic receiver, an electrical signal of the same frequency. That signal passes through a phase-shift circuit, a frequency-selection stage, the preamplifier, and the power amplifier back to the transducer; if the phase and amplitude conditions for oscillation are met, the system self-oscillates, and the oscillation frequency tracks the transducer's resonance frequency.
2. Power tracking
The so-called "power track" also known as self-excited oscillator feedback. Roughly the following form
(1) Impedance-bridge dynamic feedback system
An automatic frequency tracking circuit built around an impedance bridge works as follows. The principle is to use bridge balance to compensate the reactive component of the transducer's electrical arm, and to use a differential transformer to extract a feedback voltage proportional to the oscillating current in the mechanical arm; the closed-loop system then self-oscillates at the mechanical resonance frequency of the transducer. Because this compensation of the transducer's electrical parameters is independent of frequency, good tracking is obtained over a wide frequency band.
The differential-transformer bridge circuit for automatic frequency tracking is shown in Figure 1.29. In the figure, Tf is the differential transformer; the positive feedback voltage Uf is taken from the secondary winding W3. The primary windings W1 and W2, together with impedances Z1 and Z2, form a four-arm bridge. Z1 is the transducer impedance (the mechanical arm impedance Zm in parallel with the electrical arm impedance Ze), Z2 is the impedance of the compensating component, and Z3 compensates the bridge reactance. Assuming Zm ≫ Ze, we have Z1 ≈ Ze. Clearly, if the condition W1·I1 = W2·I2 holds (W1 and W2 being the primary turns), the bridge is balanced and the feedback voltage on the secondary winding of Tf is Uf = 0. This balance can be expressed as

Z2/Ze = W2/W1, i.e. X2/Xe = R2/Re = W2/W1 = n

where Re and Xe are the real and imaginary parts of Ze, and R2 and X2 those of Z2. The coefficient n is the turns ratio of the two primary sections of the differential transformer, which equals the ratio of the compensating impedance to the electrical-arm impedance of the transducer.

Under normal operating conditions, the current Im flowing through Zm unbalances the bridge, and the secondary winding of Tf picks up a feedback voltage proportional to the oscillating current Im in the mechanical arm of the transducer:

Uf = Im · (W1/W3) · Rim

where Rim is the oscillator input resistance.
When the self-excited frequency of the system is fo = fr (the mechanical resonance frequency of the transducer), the current Im reaches its maximum and the system feedback is strongest; the feedback voltage Uf is at its maximum, satisfying the amplitude condition, and Uf is in phase with Uin, satisfying the phase condition, so the system self-oscillates at the transducer's mechanical resonance frequency. If the transducer's mechanical resonance frequency drops for some reason, so that fo > fr, Zm becomes an inductive load and the phase angle of Im goes negative, causing Uf to lag Uin (the original input voltage); the system's oscillation frequency is then reduced, achieving tracking.
The advantage of the differential-transformer bridge circuit is that its compensation of the transducer's electrical arm is independent of frequency; the feedback voltage therefore depends only on the mechanical oscillation current over a wide frequency band, giving reliable tracking with a small offset.
(2) Load voltage-divider feedback system.
This system is shown in Figure 1.30; the circuit forms a closed loop. When power is applied, the circuit momentarily produces an electrical pulse, which is amplified and applied across the transducer, exciting it into vibration. The transducer vibrates at its own natural frequency, and the oscillation signal across it is fed through a voltage divider to an adjustable phase shifter and then to the amplifier. When the adjustable phase shifter is set so that the self-excitation phase condition is met, the system self-oscillates at the transducer's natural frequency. For small changes in the transducer's resonant frequency, the circuit keeps tracking so that operation always stays near its optimum.
(3) Phase-locked automatic frequency tracking
A phase-locked tracking system is considerably more complex than the two self-excited systems above, but it achieves better automatic frequency tracking performance. For this reason it is increasingly widely used in generators for ultrasonic plastic welding machines.
The key to using phase-locking for automatic frequency tracking is how to obtain the phase difference between the voltage and current in the load circuit. The equivalent circuit of a piezoelectric transducer near its resonant frequency is shown in Figure 1.31; the part inside the dashed border is the transducer itself. C0 is the static capacitance of the transducer; Rm is the mechanical impedance reflected to the electrical side as a resistance (ignoring mechanical losses, it is the radiation resistance reflected to the electrical side); Lm is the dynamic equivalent mass reflected to the electrical side; Cm is the equivalent compliance reflected to the electrical side; and L0 is the parallel matching inductor used when the transducer operates at resonance.
The resonant frequency of this loop satisfies

ω² = 1/(L0·C0) = 1/(Lm·Cm)
In the resonant state, the voltage across the transducer is U = I·Rm, and the voltage and current are in phase. When the transducer is detuned, u and i are no longer in phase; the reactance-versus-frequency characteristic is shown in Figure 1.32, where ωs is the series resonant frequency and ωp the parallel resonant frequency. For ωs < ω < ωp the circuit is inductive, u leads i, and the phase difference is positive; at ω = ωs the circuit is purely resistive and the current i is in phase with the voltage u. The sign and magnitude of the phase difference between the voltage across the transducer and the current through it therefore indicate the relationship between the excitation frequency and the natural frequency of the vibrating system. If this phase difference is extracted and used as a control signal to steer the excitation frequency toward the vibrating system's resonance, frequency tracking is achieved. This is the basic principle of phase-locked frequency tracking.
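A hedged software sketch of this control loop (illustrative only; a real generator implements it in analog hardware or with a PLL IC such as the CD4046 mentioned below, and all names and gains here are made up):

def track_frequency(measure_phase, f0, f_min, f_max,
                    gain=0.5, alpha=0.1, steps=1000):
    # measure_phase(f): hypothetical probe returning the u-i phase
    # difference (radians) at drive frequency f; positive = inductive,
    # i.e. f is above the series resonance and should be lowered.
    f, err_filt = f0, 0.0
    for _ in range(steps):
        err = measure_phase(f)                # phase detector
        err_filt += alpha * (err - err_filt)  # low-pass loop filter
        f -= gain * err_filt                  # 'VCO': steer toward zero phase
        f = min(max(f, f_min), f_max)         # keep within a safe band
    return f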
A practical phase-locked automatic frequency tracking circuit is shown in the figure. It consists of a phase comparator, a voltage comparator, a low-pass filter, a voltage-controlled oscillator (VCO), an excitation amplifier, a power amplifier, and current-sensing and voltage-sampling stages. This is a closed-loop system; however, because the VCO can also run on its own, the system can operate open-loop as well, in which case it behaves like an externally excited ultrasonic generator rather than a self-excited oscillator. The system takes the phase difference between the voltage across and the current through the transducer at the output stage, obtains a phase-error signal from the phase comparison, low-pass filters it, and uses it to control the VCO output frequency so that it stays consistent with the mechanical resonance frequency of the vibrating system.
Although frequency and phase are related by a derivative-integral relationship, feedback systems built on the two behave differently: frequency feedback drives the frequency difference between the input and output signals as small as possible, whereas phase feedback drives the phase difference between the two signals as small as possible. Because of the derivative-integral relationship between frequency and phase, a phase-feedback system ultimately drives the frequency difference between the two signals to zero.
Application-specific integrated circuits for phase-locked loops, such as the CD4046, are available for building such tracking systems. They are not detailed here; refer to PLL integrated circuit data sheets and reference books.
Finally, to summarize, a phase-locked automatic frequency tracking system has the following characteristics:
(1) The phase-locked loop is an excellent band-pass filter, so the system does not erroneously lock onto other, non-resonant frequencies.
(2) The quality of the sampled voltage and current waveforms has little effect on the frequency control signal.
(3) The output power is relatively stable and does not change significantly with load.
(4) The control circuitry operates at small-signal levels, so it can run for long periods.
ultrasonic generator schematic | {"url":"http://www.ultrasonic-generators.com/2011/piezo-generator_1018/2.html","timestamp":"2014-04-21T09:45:41Z","content_type":null,"content_length":"17698","record_id":"<urn:uuid:ab36ca4d-05b2-4819-a988-c815d68d6634>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00350-ip-10-147-4-33.ec2.internal.warc.gz"} |
Term Projects
The main goal of the whole class is to do an research project in Neural Networks. It does not have to be very extensive (we only have one semester), but it should be original and address a specific
research question. Optimally it could be a start for a larger project that you can continue after class, even into a dissertation. Some of the most common kinds of projects are outlined below. Many
others are possible; talk to Risto if you have something unusual in mind.
Types of Projects
Many projects focus on some application of neural networks, like time series prediction or fault detection or robot control. Here the architecture is pretty standard, but the application is novel,
and a lot of thinking goes into how to pose the problem so that it can be solved with neural networks. Expect also to spend some time in getting the training data together. You should also compare
the performance to whatever the standard techniques are in the domain.
Cognitive Science applications often consists of modeling some human (or animal) data in an interesting behavioral task. Here you should be familiar with the psychological or linguistic phenomenon,
and the creative part is to come up with the architecture that will match that. A good model will lead to predictions that go beyond what you trained it to do, about the architecture or impairments
or behavior under different conditions. In other words, by using the neural network, you will learn something about the phenomenon you are modeling.
A third category of projects concerns the neural network techniques themselves. You'll come up with some idea of how to make the network learn faster, or generalize better, or solve harder or new
kinds of problems. You will implement your technique and compare the performance with prevailing other algorithms (other neural nets and perhaps other learning algorithms or standard ways of solving
the problem). Note that your idea may not work out in the end, but that's ok: not all ideas do. If we decided that it was worth checking out and you did a thorough study showing it doesn't work,
that's a perfectly good project for this class.
Your project can also be a theoretical one, where you prove something about the existing algorithms, or an algorithm that you proposed. For example, under what conditions the learning will converge,
or what kinds of problems it can solve. Often such a project requires that you have good command of the mathematical techniques already; most of the work will be in identifying a good problem.
Techniques often used in NN include statistical mechanics, nonlinear optimization, regularization, probability theory, nonlinear dynamics, etc.
Timeline etc.
You should start thinking about possible projects immediately. The topic talk is designed to give you both inspiration and the necessary background: as you are doing the reading and preparing the
talk, try to identify potential research problems. Usually you can come up with a number of them that way, and it will be an easy task to continue the topic talk into a project. Your project does not
necessarily have to be related to your topic talk though, and although you can do the project together with your topic talk partner, you don't have to.
As you are trying to define a project (even before your topic talk), talk to Risto; he'll give you feedback on the feasibility and scope of the project, and you'll set some concrete goals for it. He
may also be able to point you to the relevant literature and tools (e.g. simulator code).
A couple of weeks before the project talks begin, you should turn in (by email) your final project proposal to Risto; see the class schedule for the due date of project proposals.
You should have completed your project by the time you are giving your project presentation to the class. At the very least, you should have your model implemented and have at least preliminary
results. A talk where you just talk about what you are going to do is not very useful. For the remainder of the class, you can then obtain more substantial results (e.g., from multiple runs), and
write up the project in the project paper.
Sat Aug 18 16:33:44 CDT 2012 | {"url":"http://www.cs.utexas.edu/~risto/cs394n/projects.html","timestamp":"2014-04-23T13:33:45Z","content_type":null,"content_length":"4776","record_id":"<urn:uuid:2c372aad-ab55-4d2d-a0c4-85f1d78b2a15>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00329-ip-10-147-4-33.ec2.internal.warc.gz"} |
DRDB: SDAPCD 260-313 PERFORMANCE TESTS & COMPLIANCE PROVISIONS
RULE 260.313 - PERFORMANCE TESTS AND COMPLIANCE PROVISIONS
(a) Rule 260.8(d) and 260.8(f) do not apply to the performance test procedures required by this subpart.
(b) The owner or operator of an affected facility shall conduct an initial performance test as required under Rule 260.8(a) and thereafter a performance test each calendar month for each affected
facility according to the procedures in this rule.
(c) The owner or operator shall use the following procedures for determining monthly volume-weighted average emissions of VOC's in kilograms per liter of coating solids applied (G).
1. An owner or operator shall use the following procedures for any affected facility which does not use a capture system and control device to comply with the emissions limit specified under Rule
260.312. The owner or operator shall determine the composition of the coatings by formulation data supplied by the manufacturer of the coating or by an analysis of each coating, as received,
using Reference Method 24. The Control Officer may require the owner or operator who uses formulation data supplied by the manufacturer of the coating to determine the VOC content of coatings
using Reference Method 24. The owner or operator shall determine the volume of coating and the mass of VOC-solvent used for thinning purposes from company records on a monthly basis. If a common
coating distribution system serves more than one affected facility or serves both affected and existing facilities, the owner or operator shall estimate the volume of coating used at each
facility by using the average dry weight of coating and the surface area coated by each affected and existing facility or by other procedures acceptable to the Control Officer.
(i) Calculate the volume-weighted average of the total mass of VOC's consumed per unit volume of coating solids applied (G) during each calendar month for each affected facility, except as
provided under Rule 260.313(c)(2) and (c)(3). Each monthly calculation is considered a performance test. Except as provided in Subsection (c)(1)(iv) of this rule, the volume-weighted average of
the total mass of VOC's consumed per unit volume of coating solids applied (G) each calendar month will be determined by the following procedures.
(A) Calculate the mass of VOC's used (M[o] + M[d]) during each calendar month for each affected facility by the following equation:
(L[dj]D[dj] will be 0 if no VOC solvent is added to the coatings, as received.)
n is the number of different coatings used during the calendar month and m is the number of different diluent VOC-solvents used during the calendar month.
(B) Calculate the total volume of coating solids used (L[s]) in each calendar month for each affected facility by the following equation:
Select the appropriate transfer efficiency from Table 1. If the owner or operator can demonstrate to the satisfaction of the Control Officer that transfer efficiencies other than those
shown are appropriate, the Control Officer will approve their use on a case-by-case basis. Transfer efficiency values for application methods not listed below shall be determined by the
Control Officer on a case-by-case basis. An owner or operator must submit sufficient data for the Control Officer to judge the accuracy of the transfer efficiency claims.
TABLE 1. TRANSFER EFFICIENCIES
┃Application Methods │Transfer Efficiency (T)┃
┃Air atomized spray │0.25 ┃
┃Airless spray │.25 ┃
┃Manual electrostatic spray │.60 ┃
┃Nonrotational automatic electrostatic spray │.70 ┃
┃Rotating head electrostatic spray (manual & automatic) │.80 ┃
┃Dip coat and flow coat │.90 ┃
┃Electrodeposition │.95 ┃
Where more than one application method is used within a single surface coating operation, the owner or operator shall determine the composition and volume of each coating applied by each
method through a means acceptable to the Control Officer and compute the weighted average transfer efficiency by the following equation:
n is the number of coatings used and
p is the number of application methods used.
(C) Calculate the volume-weighted average mass of VOC's consumed per unit volume of coating solids applied (G) during the calendar month for each affected facility by the following equation:
(ii) Calculate the volume-weighted average of VOC emissions to the atmosphere (N) during the calendar month for each affected facility by the following equation:
(iii) Where the volume-weighted average mass of VOC discharged to the atmosphere per unit volume of coating solids applied (N) is less than or equal to 0.90 kilogram per liter, the affected
facility is in compliance.
(iv) If each individual coating used by an affected facility has a VOC content, as received, which when divided by the lowest transfer efficiency at which the coating is applied, results in a
value equal to or less than 0.90 kilogram per liter, the affected facility is in compliance provided no VOC's are added to the coatings during distribution or application.
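Note: the rule's equation images did not survive extraction above. Purely as an illustration of the prose, and not as the legal text of the rule, the monthly check described in (c)(1)-(c)(3) has roughly this shape (all names are hypothetical):

def monthly_compliance(coatings, diluents, T, R=0.0, limit=0.90):
    # Illustration only -- consult the rule's actual equations.
    # coatings: list of (volume_L, density_kg_per_L, voc_weight_fraction,
    #                    solids_volume_fraction), as received
    # diluents: list of (volume_L, density_kg_per_L) of VOC thinning solvent
    # T: volume-weighted average transfer efficiency (Table 1)
    # R: overall reduction efficiency of a control device (0 if none)
    m_o = sum(L * D * w for L, D, w, _ in coatings)  # VOC in coatings, kg
    m_d = sum(L * D for L, D in diluents)            # VOC added as diluent, kg
    l_s = sum(L * v * T for L, _, _, v in coatings)  # solids applied, liters
    G = (m_o + m_d) / l_s   # kg VOC per liter of coating solids applied
    N = G * (1.0 - R)       # kg VOC emitted per liter of solids applied
    return N, N <= limit    # compliant if N <= 0.90 kg/L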
2. An owner or operator shall use the following procedures for any affected facility that uses a capture system and a control device that destroys VOC's (e.g., incinerator) to comply with the emission limit specified in Rule 260.312.
(i) Determine the overall reduction efficiency (R) for the capture system and control device. For the initial performance test the overall reduction efficiency (R) shall be determined as
prescribed in Subsections (c)(2)(i)(A), (B), and (C) of this rule. In subsequent months, the owner or operator may use the most recently determined overall reduction efficiency (R) for the
performance test, provided that control device and capture system operating conditions have not changed. The procedure in Subsections (c)(2)(i)(A), (B), and (C) of this rule shall be repeated when
directed by the Control Officer or when the owner or operator elects to operate the control device or capture system at conditions different from the initial performance test.
(A) Determine the fraction (F) of total VOC's emitted by an affected facility that enters the control device using the following equation:
(C) Determine overall reduction efficiency (R) using the following equation:
(ii) Calculate the volume-weighted average of the total mass of VOC's per unit volume of coating solids applied (G) during each calendar month for each affected facility using equations in
Subsections (c)(1)(i)(A), (B), and (C) of this rule.
(iii) Calculate the volume-weighted average of VOC emissions to the atmosphere (N) during each calendar month by the following equation:
(iv) If the volume-weighted average mass of VOC's emitted to the atmosphere for each calendar month (N) is less than or equal to 0.90 kilogram per liter of coating solids applied, the affected
facility is in compliance. Each monthly calculation is a performance test.
3. An owner or operator shall use the following procedures for any affected facility which uses a control device that recovers the VOC's (e.g., carbon adsorber) to comply with the applicable emission limit specified under Rule 260.312.
(i) Calculate the total mass of VOC's consumed (M[o] + M[d]) and the volume-weighted average of the total mass of VOC's per unit volume of coating solids applied (G) during each calendar month
for each affected facility using equations in Subsection (c)(1)(i)(A), (B), and (C) of this rule.
(ii) Calculate the total mass of VOC's recovered (M[r]) during each calendar month using the following equation:
(iii) Calculate overall reduction efficiency of the control device (R) for each calendar month for each affected facility using the following equation:
(iv) Calculate the volume-weighted average mass of VOC's emitted to the atmosphere (N) for each calendar month for each affected facility using equation in Subsection (c)(2)(iii) of this rule.
(v) If the weighted average mass of VOC's emitted to the atmosphere for each calendar month (N) is less than or equal to 0.90 kilogram per liter of coating solids applied, the affected facility
is in compliance. Each monthly calculation is a performance test. | {"url":"http://www.arb.ca.gov/DRDB/SD/CURHTML/R260-313.HTM","timestamp":"2014-04-19T12:01:59Z","content_type":null,"content_length":"12345","record_id":"<urn:uuid:8e22ae1c-477c-409f-93fe-52a86ad8cbc8>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00304-ip-10-147-4-33.ec2.internal.warc.gz"} |
Logic programming with focusing proofs in linear logic
Results 1 - 10 of 272
"... When logic programming is based on the proof theory of intuitionistic logic, it is natural to allow implications in goals and in the bodies of clauses. Attempting to prove a goal of the form D ⊃
G from the context (set of formulas) Γ leads to an attempt to prove the goal G in the extended context Γ ..."
Cited by 306 (40 self)
Add to MetaCart
When logic programming is based on the proof theory of intuitionistic logic, it is natural to allow implications in goals and in the bodies of clauses. Attempting to prove a goal of the form D ⊃ G
from the context (set of formulas) Γ leads to an attempt to prove the goal G in the extended context Γ ∪ {D}. Thus during the bottom-up search for a cut-free proof contexts, represented as the
left-hand side of intuitionistic sequents, grow as stacks. While such an intuitionistic notion of context provides for elegant specifications of many computations, contexts can be made more
expressive and flexible if they are based on linear logic. After presenting two equivalent formulations of a fragment of linear logic, we show that the fragment has a goal-directed interpretation,
thereby partially justifying calling it a logic programming language. Logic programs based on the intuitionistic theory of hereditary Harrop formulas can be modularly embedded into this linear logic
setting. Programming examples taken from theorem proving, natural language parsing, and data base programming are presented: each example requires a linear, rather than intuitionistic, notion of
context to be modeled adequately. An interpreter for this logic programming language must address the problem of splitting contexts; that is, when attempting to prove a multiplicative conjunction
(tensor), say G1 ⊗ G2, from the context ∆, the latter must be split into disjoint contexts ∆1 and ∆2 for which G1 follows from ∆1 and G2 follows from ∆2. Since there is an exponential number of such
splits, it is important to delay the choice of a split as much as possible. A mechanism for the lazy splitting of contexts is presented based on viewing proof search as a process that takes a
context, consumes part of it, and returns the rest (to be consumed elsewhere). In addition, we use collections of Kripke interpretations indexed by a commutative monoid to provide models for this
logic programming language and show that logic programs admit a canonical model.
, 1996
"... We present the linear type theory LLF as the forAppeared in the proceedings of the Eleventh Annual IEEE Symposium on Logic in Computer Science --- LICS'96 (E. Clarke editor), pp. 264--275, New
Brunswick, NJ, July 27--30 1996. mal basis for a conservative extension of the LF logical framework. LLF c ..."
Cited by 217 (44 self)
Add to MetaCart
[Appeared in the proceedings of the Eleventh Annual IEEE Symposium on Logic in Computer Science --- LICS'96 (E. Clarke, editor), pp. 264--275, New Brunswick, NJ, July 27--30, 1996.] We present the linear type theory LLF as the formal basis for a conservative extension of the LF logical framework. LLF combines the expressive power of dependent types with linear logic to permit the natural and
concise representation of a whole new class of deductive systems, namely those dealing with state. As an example we encode a version of Mini-ML with references including its type system, its
operational semantics, and a proof of type preservation. Another example is the encoding of a sequent calculus for classical linear logic and its cut elimination theorem. LLF can also be given an
operational interpretation as a logic programming language under which the representations above can be used for type inference, evaluation and cut-elimination. 1 Introduction A logical framework is
a formal system desig...
- JOURNAL OF THE ACM , 1996
"... We show that a type system based on the intuitionistic modal logic S4 provides an expressive framework for specifying and analyzing computation stages in the context of functional languages. Our
main technical result is a conservative embedding of Nielson & Nielson's two-level functional language in ..."
Cited by 185 (22 self)
Add to MetaCart
We show that a type system based on the intuitionistic modal logic S4 provides an expressive framework for specifying and analyzing computation stages in the context of functional languages. Our main
technical result is a conservative embedding of Nielson & Nielson's two-level functional language in our language Mini-ML, which in
, 1978
"... linear logic to reason about sequent ..."
- ACM TRANSACTIONS ON COMPUTATIONAL LOGIC , 2004
"... This paper introduces a logical system, called BV, which extends multiplicative linear logic by a non-commutative self-dual logical operator. This extension is particularly challenging for the
sequent calculus, and so far it is not achieved therein. It becomes very natural in a new formalism, call ..."
Cited by 87 (15 self)
Add to MetaCart
This paper introduces a logical system, called BV, which extends multiplicative linear logic by a non-commutative self-dual logical operator. This extension is particularly challenging for the
sequent calculus, and so far it is not achieved therein. It becomes very natural in a new formalism, called the calculus of structures, which is the main contribution of this work. Structures are
formulae subject to certain equational laws typical of sequents. The calculus of structures is obtained by generalising the sequent calculus in such a way that a new top-down symmetry of derivations
is observed, and it employs inference rules that rewrite inside structures at any depth. These properties, in addition to allowing the design of BV, yield a modular proof of cut elimination.
- In Proceedings of 9th Annual IEEE Symposium On Logic In Computer Science , 1994
"... The theory of cut-free sequent proofs has been used to motivate and justify the design of a number of logic programming languages. Two such languages, λProlog and its linear logic refinement,
Lolli [12], provide data types, higher-order programming) but lack primitives for concurrency. The logic pro ..."
Cited by 86 (7 self)
Add to MetaCart
The theory of cut-free sequent proofs has been used to motivate and justify the design of a number of logic programming languages. Two such languages, λProlog and its linear logic refinement, Lolli
[12], provide for various forms of abstraction (modules, abstract data types, and higher-order programming) but lack primitives for concurrency. The logic programming language, LO (Linear Objects) [2] provides for concurrency but lacks abstraction
mechanisms. In this paper we present Forum, a logic programming presentation of all of linear logic that modularly extends the languages λProlog, Lolli, and LO. Forum, therefore, allows
specifications to incorporate both abstractions and concurrency. As a meta-language, Forum greatly extends the expressiveness of these other logic programming languages. To illustrate its expressive
strength, we specify in Forum a sequent calculus proof system and the operational semantics of a functional programming language that incorporates such nonfunctional features as counters and
references.
- Theoretical Computer Science , 1996
"... The theory of cut-free sequent proofs has been used to motivate and justify the design of a number of logic programming languages. Two such languages, λProlog and its linear logic refinement,
Lolli [15], provide for various forms of abstraction (modules, abstract data types, and higher-order program ..."
Cited by 85 (11 self)
Add to MetaCart
The theory of cut-free sequent proofs has been used to motivate and justify the design of a number of logic programming languages. Two such languages, λProlog and its linear logic refinement, Lolli
[15], provide for various forms of abstraction (modules, abstract data types, and higher-order programming) but lack primitives for concurrency. The logic programming language, LO (Linear Objects)
[2] provides some primitives for concurrency but lacks abstraction mechanisms. In this paper we present Forum, a logic programming presentation of all of linear logic that modularly extends λProlog,
Lolli, and LO. Forum, therefore, allows specifications to incorporate both abstractions and concurrency. To illustrate the new expressive strengths of Forum, we specify in it a sequent calculus proof
system and the operational semantics of a programming language that incorporates references and concurrency. We also show that the meta theory of linear logic can be used to prove properties of the
object languages specified in Forum.
, 2003
"... The Concurrent Logical Framework, or CLF, is a new logical framework in which concurrent computations can be represented as monadic objects, for which there is an intrinsic notion of
concurrency. It is designed as a conservative extension of the linear logical framework LLF with the synchronous con ..."
Cited by 73 (25 self)
Add to MetaCart
The Concurrent Logical Framework, or CLF, is a new logical framework in which concurrent computations can be represented as monadic objects, for which there is an intrinsic notion of concurrency. It is designed as a conservative extension of the linear logical framework LLF with the synchronous connectives ⊗, 1, !, and ∃ of intuitionistic linear logic, encapsulated in a monad. LLF is itself a conservative extension of LF with the asynchronous connectives ⊸, & and Π.
- Proceedings of the Tenth Annual Symposium on Logic in Computer Science , 1995
"... We present new proofs of cut elimination for intuitionistic, classical, and linear sequent calculi. In all cases the proofs proceed by three nested structural inductions, avoiding the explicit
use of multi-sets and termination measures on sequent derivations. This makes them amenable to elegant and ..."
Cited by 64 (8 self)
Add to MetaCart
We present new proofs of cut elimination for intuitionistic, classical, and linear sequent calculi. In all cases the proofs proceed by three nested structural inductions, avoiding the explicit use of
multi-sets and termination measures on sequent derivations. This makes them amenable to elegant and concise implementations in Elf, a constraint logic programming language based on the LF logical
framework.
- Theoretical Computer Science , 1997
"... In order to reason about specifications of computations that are given via the proof search or logic programming paradigm one needs to have at least some forms of induction and some principle
for reasoning about the ways in which terms are built and the ways in which computations can progress. The l ..."
Cited by 61 (19 self)
Add to MetaCart
In order to reason about specifications of computations that are given via the proof search or logic programming paradigm one needs to have at least some forms of induction and some principle for
reasoning about the ways in which terms are built and the ways in which computations can progress. The literature contains many approaches to formally adding these reasoning principles with logic
specifications. We choose an approach based on the sequent calculus and design an intuitionistic logic FOλ∆IN that includes natural number induction and a notion of definition. We have detailed elsewhere that this logic has a number of applications. In this paper we prove the cut-elimination theorem for FOλ∆IN, adapting a technique due to Tait and Martin-Löf. This cut-elimination proof is technically interesting and significantly extends previous results of this kind.
SIMMS Integrated Mathematics makes math accessible to all students by using real-world contexts from a variety of disciplines. It incorporates the use of technology as a learning tool in all facets
and at all levels of mathematics.
SIMMS Integrated Mathematics is a complete, NSF-funded, NCTM Standards-based curriculum for all students that spans Algebra through Pre-Calculus.
• Engaging real-world explorations/applications
• Clear learning objectives and expected outcomes
• Multiple assessments
• Daily practice problems
• Multiple learning styles
• Increased interest in higher-level math
• Improved student readiness for standardized tests and college-level math
The third edition of SIMMS Integrated Mathematics contains all of the basic elements found in previous editions, along with some new features. For example, each activity now offers an additional
problem set designed to hone mathematical skills before students solve application-based problems. Several individual modules were substantially revised, presenting fresh approaches to geometric
proofs, hypothesis testing, compositions of functions, and other topics. Each module at each level includes the following sections:
• Explorations
• Discussions
• Mathematics Notes
• Warm-Ups
• Assignments
• Research Projects (in most modules)
• Summary Assessment
• Module Summary
• Glossary
The teacher edition in the third edition of SIMMS Integrated Mathematics has been redesigned as a wrap-around textbook. A Teacher Resource CD is also included.
For information on the curriculum and how to order materials, contact:
Kendall/Hunt Publishing
4050 Westmark Drive
P.O. Box 1840
Dubuque, IA 52004-1840 | {"url":"http://www.ithaca.edu/compass/simms.htm","timestamp":"2014-04-16T22:32:04Z","content_type":null,"content_length":"26361","record_id":"<urn:uuid:5cb4eee2-a920-41a2-bfdb-cb50473a3c17>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00022-ip-10-147-4-33.ec2.internal.warc.gz"} |
Solving LaPlace Equation for x^3?
February 27th 2010, 03:21 PM #1
Solving LaPlace Equation for x^3?
Good evening everyone! I have a little bit of a headache from trying to figure out how to attack this complex analysis problem:
(2nd partial derivative of u with respect to y)+(2nd partial derivative of u with respect to x)=x^3
(Laplace's equation on the left, x^3 on the right). We are supposed to find u. Now, our teacher's hint was this:
4*d/dz*du/d(z-bar) is equivalent to LaPlace's equation.
My thinking was this: Integration by parts should suffice from there. But I have hit a huge mental block -- I cannot seem to get x^3 in terms of z. That would make this all go through a lot better. Do any of you guys have any suggestions? I would love to hear any of them! Thanks so much!
PS (I was thinking of just using my diffeq knowledge and stating that u could be equal to 1/20(x^5)+cx+dy+e*xy+f, but I couldn't help but feel that would be cheating...)
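A worked sketch of the hinted approach (added for reference, not part of the original thread): writing x = (z + z̄)/2 and integrating twice, first in z and then in z̄,

\[
4\,\partial_z\partial_{\bar z} u = x^3 = \Bigl(\frac{z+\bar z}{2}\Bigr)^{3}
\;\Longrightarrow\;
\partial_{\bar z} u = \frac{(z+\bar z)^4}{128} + g(\bar z),
\qquad
u = \frac{(z+\bar z)^5}{640} + G(\bar z) + h(z).
\]
\[
(z+\bar z)^5 = (2x)^5 = 32x^5
\;\Longrightarrow\;
u = \frac{x^5}{20} + H(x,y), \qquad \Delta H = 0.
\]

So u = x^5/20 plus an arbitrary harmonic function is the general solution; since cx, dy, e*xy and constants are all harmonic, the guess in the PS is correct rather than cheating.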
| {"url":"http://mathhelpforum.com/differential-geometry/131070-solving-laplace-equation-x-3-a.html","timestamp":"2014-04-20T12:59:09Z","content_type":null,"content_length":"29581","record_id":"<urn:uuid:cb3082b4-d485-43c6-a827-c591089ff539>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00489-ip-10-147-4-33.ec2.internal.warc.gz"}
Wei-Chi Yang
Calculus I - Math 151
• Lab Schedule
• Walker_221 Computer Lab 11/19/2008 2:00 PM - 4:00 PM
Walker_216 Computer Lab 12/01/2008 2:00 PM - 4:00 PM
Walker_216 Computer Lab 12/03/2008 2:00 PM - 4:00 PM Walker_216 Computer Lab 12/08/2008 2:00 PM - 4:00 PM
• Writing A Calculus I (Math 151) Project
1. Objective: To learn how Calculus is linked to your chosen major or field.
2. Guidelines:
a. You may choose up to three classmates to form a group.
b. You need to identify a real-life problem in your field that you would like to work on. (For example, it could be an optimization problem.)
c. You need to use technological tools to demonstrate how you use Calculus to achieve your answer.
3. Writing up your report: You need to prepare a Word or PowerPoint file
4. Oral presentation: December 10.
1. (skip) Exercises on Quadratic Functions
□ Polynomial functions (CASIO) (Maple)-lab 2
□ Rational functions (CASIO) (Maple)-lab 2
□ Find two rational functions f which satisfy all the following conditions, and use Maple to check your answers:
☆ the vertical asymptotes of f are at x=1 and x=-3;
☆ the horizontal asymptote of f is at y=-2;
☆ f(0) = 3.
2. (read on your own) How the number e is being found. (A Java applet, developed by IES)
3. Inverse Functions
4. **Explore Inverse Trig with ClassPad.
5. Limits (Maple file)
6. Continuous Function (html file)
7. Concept of tangent lines. (html)
8. Derivative Functions:
□ How do we find the derivative at one point? (an avi file)
9. Finding the derivative at one point numerically and algebraically. (Maple file)
10. An animation on finding the slope of the tangent line. (Maple file)
11. Explore the derivatives of sin(x) and tan(x) graphically with ClassPad (CP file, a Video Clip).
12. Relationship between f and f'. (Maple file)
13. (Investigating max/min and inflection points (Maple file)
14. Peer Instructions-Review for an old test 4.
15. A Review Sheet.
16. (December 1) Hints on some exercises-implicit differentiation
17. (December 1) Review for an old final exam.
18. Projects on Limits:
8 - Limits: Limits (Piecewise-Defined Functions and Squeeze Principle) (ClassPad; e8, Video v8)
9 - Limit2: Finding Limits Using Geometric Approaches (e9, v9)
10 - Shrinking Circle: A Shrinking Circle: A Limit Problem (e10, v10)
19. Projects on Optimizations:
12 - Shortest_Dist1: Shortest distance between a point and a curve (ClassPad; e12, Video v12)
13 - Shortest_Dist2: Minimum Distance Between Two Curves (e13, v13)
14 - Ladder: A Ladder Problem (e14, v14)
15 - Rope: A Calculus Problem (e15, v15)
20. ** Assessment (WebWork)
□ Students will login as:
Username = (their RU login, e.g. cmett)
Password = (their 6-digit RU id number)
and of course they should also change passwords at their first opportunity.
21. A link to my Trigonometry Class.
22. A link to my Precalculus Class.
Review from College Algebra:
1. Parabolic Functions
Some online interactive calculators and software
Jean Baptiste Joseph Fourier demonstrated around 1800 that any periodic function can be described as a sum of sine waves. This in fact means that you can create any sound, no matter how
complex, if you know which sine waves to add together.
This concept really excited the early pioneers of electronic music, who imagined that sine waves would give them the power to create any sound imaginable and previously unimagined. Unfortunately,
they soon realized that while adding sine waves is easy, interesting sounds must have a large number of sine waves which are constantly varying in frequency and amplitude, which turns out to be a
hugely impractical task.
However, additive synthesis can provide unusual and interesting sounds. Moreover, both the power of modern computers and the ability to manage data in a programming language offer new dimensions
for working with this old tool. As with most things in Csound there are several ways to go about it. We will try to show some of them, and see how they are connected with different programming
paradigms.
What are the main parameters of Additive Synthesis?
Before going into different ways of implementing additive synthesis in Csound, we should think about the parameters involved. As additive synthesis is the addition of several sine generators, the
parameters are on two different levels (a minimal numeric sketch follows the list below):
• For each sine, there is a frequency and an amplitude with an envelope.
□ The frequency is usually a constant value. But it can be varied, though. Natural sounds usually have very slight changes of partial frequencies.
□ The amplitude must at least have a simple envelope like the well-known ADSR. But more complex ways of continuously altering the amplitude will make the sound much more lively.
• For the sound as a whole, these are the relevant parameters:
□ The total number of sinusoids. A sound which consists of just three sinusoids is of course "poorer" than a sound which consists of 100 sinusoids.
□ The frequency ratios of the sine generators. For a classical harmonic spectrum, the multipliers of the sinusoids are 1, 2, 3, ... (If your first sine is 100 Hz, the others are 200, 300, 400,
... Hz.) For an inharmonic or noisy spectrum, there are probably no simple integer ratios. This frequency ratio is mainly responsible for our perception of timbre.
□ The base frequency is the frequency of the first partial. If the partials are showing an harmonic ratio, this frequency (in the example given 100 Hz) is also the overall perceived pitch.
□ The amplitude ratios of the sinusoids. This is also very important for the resulting timbre of a sound. If the higher partials are relatively strong, the sound appears more brilliant; if the
higher partials are soft, the sound appears dark and soft.
□ The duration ratios of the sinusoids. In simple additive synthesis, all single sines have the same duration, but they may also differ. This usually relates to the envelopes: if the envelopes
of different partials vary, some partials may die away faster than others.
It is not always the aim of additive synthesis to imitate natural sounds, but a lot can definitely be learned through the task of first analyzing and then attempting to imitate a sound using
additive synthesis techniques. This is what a guitar note looks like when spectrally analyzed:
Spectral analysis of a guitar tone in time (courtesy of W. Fohl, Hamburg)
Each partial has its own movement and duration. We may or may not be able to achieve this successfully in additive synthesis. Let us begin with some simple sounds and consider ways of programming
this with Csound; later we will look at some more complex sounds and advanced ways of programming this.
Simple Additions of Sinusoids inside an Instrument
If additive synthesis amounts to adding sine generators, it is straightforward to create multiple oscillators in a single instrument and to add the resulting audio signals together. In the
following example, instrument 1 shows a harmonic spectrum, and instrument 2 an inharmonic one. Both instruments share the same amplitude multipliers: 1, 1/2, 1/3, 1/4, ... and receive the base
frequency in Csound's pitch notation (octave.semitone) and the main amplitude in dB.
EXAMPLE 04A01_AddSynth_simple.csd
-o dac
;example by Andrés Cabrera
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1
giSine ftgen 0, 0, 2^10, 10, 1
instr 1 ;harmonic additive synthesis
;receive general pitch and volume from the score
ibasefrq = cpspch(p4) ;convert pitch values to frequency
ibaseamp = ampdbfs(p5) ;convert dB to amplitude
;create 8 harmonic partials
aOsc1 poscil ibaseamp, ibasefrq, giSine
aOsc2 poscil ibaseamp/2, ibasefrq*2, giSine
aOsc3 poscil ibaseamp/3, ibasefrq*3, giSine
aOsc4 poscil ibaseamp/4, ibasefrq*4, giSine
aOsc5 poscil ibaseamp/5, ibasefrq*5, giSine
aOsc6 poscil ibaseamp/6, ibasefrq*6, giSine
aOsc7 poscil ibaseamp/7, ibasefrq*7, giSine
aOsc8 poscil ibaseamp/8, ibasefrq*8, giSine
;apply simple envelope
kenv linen 1, p3/4, p3, p3/4
;add partials and write to output
aOut = aOsc1 + aOsc2 + aOsc3 + aOsc4 + aOsc5 + aOsc6 + aOsc7 + aOsc8
outs aOut*kenv, aOut*kenv
instr 2 ;inharmonic additive synthesis
ibasefrq = cpspch(p4)
ibaseamp = ampdbfs(p5)
;create 8 inharmonic partials
aOsc1 poscil ibaseamp, ibasefrq, giSine
aOsc2 poscil ibaseamp/2, ibasefrq*1.02, giSine
aOsc3 poscil ibaseamp/3, ibasefrq*1.1, giSine
aOsc4 poscil ibaseamp/4, ibasefrq*1.23, giSine
aOsc5 poscil ibaseamp/5, ibasefrq*1.26, giSine
aOsc6 poscil ibaseamp/6, ibasefrq*1.31, giSine
aOsc7 poscil ibaseamp/7, ibasefrq*1.39, giSine
aOsc8 poscil ibaseamp/8, ibasefrq*1.41, giSine
kenv linen 1, p3/4, p3, p3/4
aOut = aOsc1 + aOsc2 + aOsc3 + aOsc4 + aOsc5 + aOsc6 + aOsc7 + aOsc8
outs aOut*kenv, aOut*kenv
; pch amp
i 1 0 5 8.00 -10
i 1 3 5 9.00 -14
i 1 5 8 9.02 -12
i 1 6 9 7.01 -12
i 1 7 10 6.00 -10
i 2 0 5 8.00 -10
i 2 3 5 9.00 -14
i 2 5 8 9.02 -12
i 2 6 9 7.01 -12
i 2 7 10 6.00 -10
Simple Additions of Sinusoids via the Score
A typical paradigm in programming: if you find several almost identical lines in your code, consider abstracting them. For the Csound language this can mean moving parameter control to the score. In our
case, the lines
aOsc1 poscil ibaseamp, ibasefrq, giSine
aOsc2 poscil ibaseamp/2, ibasefrq*2, giSine
aOsc3 poscil ibaseamp/3, ibasefrq*3, giSine
aOsc4 poscil ibaseamp/4, ibasefrq*4, giSine
aOsc5 poscil ibaseamp/5, ibasefrq*5, giSine
aOsc6 poscil ibaseamp/6, ibasefrq*6, giSine
aOsc7 poscil ibaseamp/7, ibasefrq*7, giSine
aOsc8 poscil ibaseamp/8, ibasefrq*8, giSine
can be abstracted to the form
aOsc poscil ibaseamp*iampfactor, ibasefrq*ifreqfactor, giSine
with the parameters iampfactor (the relative amplitude of a partial) and ifreqfactor (the frequency multiplier) transferred to the score.
The next version simplifies the instrument code and defines the variable values as score parameters:
EXAMPLE 04A02_AddSynth_score.csd
-o dac
;example by Andrés Cabrera and Joachim Heintz
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1
giSine ftgen 0, 0, 2^10, 10, 1
instr 1
iBaseFreq = cpspch(p4)
iFreqMult = p5 ;frequency multiplier
iBaseAmp = ampdbfs(p6)
iAmpMult = p7 ;amplitude multiplier
iFreq = iBaseFreq * iFreqMult
iAmp = iBaseAmp * iAmpMult
kEnv linen iAmp, p3/4, p3, p3/4
aOsc poscil kEnv, iFreq, giSine
outs aOsc, aOsc
; freq freqmult amp ampmult
i 1 0 7 8.09 1 -10 1
i . . 6 . 2 . [1/2]
i . . 5 . 3 . [1/3]
i . . 4 . 4 . [1/4]
i . . 3 . 5 . [1/5]
i . . 3 . 6 . [1/6]
i . . 3 . 7 . [1/7]
i 1 0 6 8.09 1.5 -10 1
i . . 4 . 3.1 . [1/3]
i . . 3 . 3.4 . [1/6]
i . . 4 . 4.2 . [1/9]
i . . 5 . 6.1 . [1/12]
i . . 6 . 6.3 . [1/15]
You might say: Okay, where is the simplification? There are even more lines than before! - This is true, and this is certainly just a step on the way to better code. The main benefit now is
flexibility. Now our code is capable of realizing any number of partials, with any amplitude, frequency and duration ratios. Using the Csound score abbreviations (for instance a dot for repeating the
previous value in the same p-field), you can do a lot of copy-and-paste, and focus on what is changing from line to line.
Note also that you are now calling one instrument in multiple instances at the same time for performing additive synthesis. In fact, each instance of the instrument contributes just one partial for
the additive synthesis. This call of multiple and simultaneous instances of one instrument is also a typical procedure for situations like this, and for writing clean and effective Csound code. We
will discuss later how this can be done in a more elegant way than in the last example.
Creating Function Tables for Additive Synthesis
Before we continue on this road, let us go back to the first example and discuss a classical and abbreviated method of playing a number of partials. As we mentioned at the beginning, Fourier stated
that any periodic oscillation can be described as a sum of simple sinusoids. If the single sinusoids are static (no individual envelope or duration), the resulting waveform will always be the same.
You see four sine generators, each with fixed frequency and amplitude relations, and mixed together. At the bottom of the illustration you see the composite waveform which repeats itself at each
period. So - why not just calculate this composite waveform first, and then read it with just one oscillator?
This is what some Csound GEN routines do. They compose the resulting shape of the periodic wave, and store the values in a function table. GEN10 can be used for creating a waveform consisting of
harmonically related partials. After the common GEN routine p-fields
<table number>, <creation time>, <size in points>, <GEN number>
you just have to determine the relative strength of the harmonics. GEN09 is more complex and allows you to also control the frequency multiplier and the phase (0-360°) of each partial. We are able to
reproduce the first example in a shorter (and computationally faster) form:
EXAMPLE 04A03_AddSynth_GEN.csd
-o dac
;example by Andrés Cabrera and Joachim Heintz
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1
giSine ftgen 0, 0, 2^10, 10, 1
giHarm ftgen 1, 0, 2^12, 10, 1, 1/2, 1/3, 1/4, 1/5, 1/6, 1/7, 1/8
giNois ftgen 2, 0, 2^12, 9, 100,1,0, 102,1/2,0, 110,1/3,0, \
123,1/4,0, 126,1/5,0, 131,1/6,0, 139,1/7,0, 141,1/8,0
instr 1
iBasFreq = cpspch(p4)
iTabFreq = p7 ;base frequency of the table
iBasFreq = iBasFreq / iTabFreq
iBaseAmp = ampdb(p5)
iFtNum = p6
aOsc poscil iBaseAmp, iBasFreq, iFtNum
aEnv linen aOsc, p3/4, p3, p3/4
outs aEnv, aEnv
; pch amp table table base (Hz)
i 1 0 5 8.00 -10 1 1
i . 3 5 9.00 -14 . .
i . 5 8 9.02 -12 . .
i . 6 9 7.01 -12 . .
i . 7 10 6.00 -10 . .
i 1 0 5 8.00 -10 2 100
i . 3 5 9.00 -14 . .
i . 5 8 9.02 -12 . .
i . 6 9 7.01 -12 . .
i . 7 10 6.00 -10 . .
As you can see, for non-harmonically related partials, the construction of a table must be done with special care. If the frequency multipliers in our first example started with 1 and 1.02, the
resulting period is actually very long. For a base frequency of 100 Hz, you will have the frequencies of 100 Hz and 102 Hz overlapping each other. So you need 100 cycles from the 1.00 multiplier and
102 cycles from the 1.02 multiplier to complete one period and to start again both together from zero. In other words, we have to create a table which contains 100 and 102 periods respectively,
instead of 1 and 1.02. Then the table values are not related to 1 - as usual - but to 100. That is the reason we have to introduce a new parameter iTabFreq for this purpose.
This method of composing waveforms can also be used for generating the four standard historical shapes used in a synthesizer. An impulse wave can be created by adding a number of harmonics of the
same strength. A sawtooth has the amplitude multipliers 1, 1/2, 1/3, ... for the harmonics. A square has the same multipliers, but just for the odd harmonics. A triangle can be calculated as 1
divided by the square of the odd partials, with alternating positive and negative signs. The next example creates function tables with just ten partials for each standard form.
EXAMPLE 04A04_Standard_waveforms.csd
-o dac
;example by Joachim Heintz
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1
giImp ftgen 1, 0, 4096, 10, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1
giSaw ftgen 2, 0, 4096, 10, 1,1/2,1/3,1/4,1/5,1/6,1/7,1/8,1/9,1/10
giSqu ftgen 3, 0, 4096, 10, 1, 0, 1/3, 0, 1/5, 0, 1/7, 0, 1/9, 0
giTri ftgen 4, 0, 4096, 10, 1, 0, -1/9, 0, 1/25, 0, -1/49, 0, 1/81, 0
instr 1
asig poscil .2, 457, p4
outs asig, asig
i 1 0 3 1
i 1 4 3 2
i 1 8 3 3
i 1 12 3 4
Triggering Sub-instruments for the Partials
Performing additive synthesis by designing partial strengths into function tables has the disadvantage that once a note has begun there is no way of varying the relative strengths of individual
partials. There are various methods to circumvent the inflexibility of table-based additive synthesis such as morphing between several tables (using for example the ftmorf opcode). Next we will
consider another approach: triggering one instance of a sub-instrument for each partial, and exploring the possibilities of creating a spectrally dynamic sound using this technique.
Let us return to the second instrument (04A02.csd) which already made some abstractions and triggered one instrument instance for each partial. This was done in the score; but now we will trigger one
complete note in one score line, not just one partial. The first step is to assign the desired number of partials via a score parameter. The next example triggers any number of partials using this
one value:
EXAMPLE 04A05_Flexible_number_of_partials.csd
-o dac
;Example by Joachim Heintz
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1
giSine ftgen 0, 0, 2^10, 10, 1
instr 1 ;master instrument
inumparts = p4 ;number of partials
ibasfreq = 200 ;base frequency
ipart = 1 ;count variable for loop
;loop for inumparts over the ipart variable
;and trigger inumparts instances of the subinstrument
ifreq = ibasfreq * ipart
iamp = 1/ipart/inumparts
event_i "i", 10, 0, p3, ifreq, iamp
loop_le ipart, 1, inumparts, loop
instr 10 ;subinstrument for playing one partial
ifreq = p4 ;frequency of this partial
iamp = p5 ;amplitude of this partial
aenv transeg 0, .01, 0, iamp, p3-0.1, -10, 0
apart poscil aenv, ifreq, giSine
outs apart, apart
; number of partials
i 1 0 3 10
i 1 3 3 20
i 1 6 3 2
This instrument can easily be transformed to be played via a midi keyboard. The next example connects the number of synthesized partials with the midi velocity. So if you play softly, the sound will
have fewer partials than if a key is struck with force.
EXAMPLE 04A06_Play_it_with_Midi.csd
-o dac -Ma
;Example by Joachim Heintz
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1
giSine ftgen 0, 0, 2^10, 10, 1
massign 0, 1 ;all midi channels to instr 1
instr 1 ;master instrument
ibasfreq cpsmidi ;base frequency
iampmid ampmidi 20 ;receive midi-velocity and scale 0-20
inparts = int(iampmid)+1 ;exclude zero
ipart = 1 ;count variable for loop
;loop for inparts over the ipart variable
;and trigger inparts instances of the sub-instrument
ifreq = ibasfreq * ipart
iamp = 1/ipart/inparts
event_i "i", 10, 0, 1, ifreq, iamp
loop_le ipart, 1, inparts, loop
instr 10 ;subinstrument for playing one partial
ifreq = p4 ;frequency of this partial
iamp = p5 ;amplitude of this partial
aenv transeg 0, .01, 0, iamp, p3-.01, -3, 0
apart poscil aenv, ifreq, giSine
outs apart/3, apart/3
f 0 3600
Although this instrument is rather primitive, it is useful to be able to control the timbre in this way using key velocity. Let us continue to explore some other methods of creating parameter
variation in additive synthesis.
User-controlled Random Variations in Additive Synthesis
In natural sounds, there is movement and change all the time. Even the best player or singer will not be able to play a note in the exact same way twice. And within a tone, the partials have some
unsteadiness all the time: slight excitations in the amplitudes, uneven durations, slight frequency fluctuations. In an audio programming environment like Csound, we can achieve these movements with
random deviations. What matters is not so much whether we use randomness, but how we use it. The boundaries of random deviations must be adjusted as carefully as any other parameter in
electronic composition. If sounds using random deviations begin to sound like mistakes, it probably has less to do with using random functions as such and more to do with some poorly
chosen boundaries.
Let us start with some random deviations in our subinstrument. The following parameters can be affected (a small sketch of the underlying mappings follows the list):
• The frequency of each partial can be slightly detuned. The range of this possible maximum detuning can be set in cents (100 cent = 1 semitone).
• The amplitude of each partial can be altered, compared to its standard value. The alteration can be measured in Decibel (dB).
• The duration of each partial can be shorter or longer than the standard value. Let us define this deviation as a percentage. If the expected duration is five seconds, a maximum deviation of 100%
means getting a value between half the duration (2.5 sec) and the double duration (10 sec).
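Here is the small sketch of these three mappings promised above, written as plain Python for clarity. cent() and ampdb() mirror the Csound functions of the same names used in the example below, and dur_scale() is the 2^(percent/100) rule just described.

def cent(c):                 # cent deviation -> frequency ratio
    return 2 ** (c / 1200)

def ampdb(db):               # dB deviation -> amplitude ratio
    return 10 ** (db / 20)

def dur_scale(percent):      # +/-100% -> between half and double duration
    return 2 ** (percent / 100)

print(cent(100))                         # one semitone up: ~1.0595
print(ampdb(-6))                         # ~0.501, roughly half amplitude
print(dur_scale(-100), dur_scale(100))   # 0.5 2.0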
The following example shows the effect of these variations. As a base - and as a reference to its author - we take the "bell-like sound" which Jean-Claude Risset created in his Sound Catalogue.^1
EXAMPLE 04A07_Risset_variations.csd
-o dac
;Example by Joachim Heintz
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1
;frequency and amplitude multipliers for 11 partials of Risset's bell
giFqs ftgen 0, 0, -11,-2,.56,.563,.92, .923,1.19,1.7,2,2.74, \
 3, 3.76, 4.07
giAmps ftgen 0, 0, -11, -2, 1, 2/3, 1, 1.8, 8/3, 1.46, 4/3, 4/3, 1, 4/3, 4/3
giSine ftgen 0, 0, 2^10, 10, 1
seed 0
instr 1 ;master instrument
ibasfreq = 400
ifqdev = p4 ;maximum freq deviation in cents
iampdev = p5 ;maximum amp deviation in dB
idurdev = p6 ;maximum duration deviation in %
indx = 0 ;count variable for loop
ifqmult tab_i indx, giFqs ;get frequency multiplier from table
ifreq = ibasfreq * ifqmult
iampmult tab_i indx, giAmps ;get amp multiplier
iamp = iampmult / 20 ;scale
event_i "i", 10, 0, p3, ifreq, iamp, ifqdev, iampdev, idurdev
loop_lt indx, 1, 11, loop
instr 10 ;subinstrument for playing one partial
;receive the parameters from the master instrument
ifreqnorm = p4 ;standard frequency of this partial
iampnorm = p5 ;standard amplitude of this partial
ifqdev = p6 ;maximum freq deviation in cents
iampdev = p7 ;maximum amp deviation in dB
idurdev = p8 ;maximum duration deviation in %
;calculate frequency
icent random -ifqdev, ifqdev ;cent deviation
ifreq = ifreqnorm * cent(icent)
;calculate amplitude
idb random -iampdev, iampdev ;dB deviation
iamp = iampnorm * ampdb(idb)
;calculate duration
idurperc random -idurdev, idurdev ;duration deviation (%)
iptdur = p3 * 2^(idurperc/100)
p3 = iptdur ;set p3 to the calculated value
;play partial
aenv transeg 0, .01, 0, iamp, p3-.01, -10, 0
apart poscil aenv, ifreq, giSine
outs apart, apart
; frequency amplitude duration
; deviation deviation deviation
; in cent in dB in %
;;unchanged sound (twice)
r 2
i 1 0 5 0 0 0
;;slight variations in frequency
r 4
i 1 0 5 25 0 0
;;slight variations in amplitude
r 4
i 1 0 5 0 6 0
;;slight variations in duration
r 4
i 1 0 5 0 0 30
;;slight variations combined
r 6
i 1 0 5 25 6 30
;;heavy variations
r 6
i 1 0 5 50 9 100
For a midi-triggered descendant of the instrument, we can - as one of many possible choices - vary the amount of possible random variation with the key velocity. So a key pressed softly plays the
bell-like sound as described by Risset, but as a key is struck with increasing force the sound produced will be increasingly altered.
EXAMPLE 04A08_Risset_played_by_Midi.csd
-o dac -Ma
;Example by Joachim Heintz
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1
;frequency and amplitude multipliers for 11 partials of Risset's bell
giFqs ftgen 0, 0, -11, -2, .56,.563,.92,.923,1.19,1.7,2,2.74,3,\
 3.76, 4.07
giAmps ftgen 0, 0, -11, -2, 1, 2/3, 1, 1.8, 8/3, 1.46, 4/3, 4/3, 1,\
 4/3, 4/3
giSine ftgen 0, 0, 2^10, 10, 1
seed 0
massign 0, 1 ;all midi channels to instr 1
instr 1 ;master instrument
;;scale desired deviations for maximum velocity
;frequency (cent)
imxfqdv = 100
;amplitude (dB)
imxampdv = 12
;duration (%)
imxdurdv = 100
;;get midi values
ibasfreq cpsmidi ;base frequency
iampmid ampmidi 1 ;receive midi-velocity and scale 0-1
;;calculate maximum deviations depending on midi-velocity
ifqdev = imxfqdv * iampmid
iampdev = imxampdv * iampmid
idurdev = imxdurdv * iampmid
;;trigger subinstruments
indx = 0 ;count variable for loop
ifqmult tab_i indx, giFqs ;get frequency multiplier from table
ifreq = ibasfreq * ifqmult
iampmult tab_i indx, giAmps ;get amp multiplier
iamp = iampmult / 20 ;scale
event_i "i", 10, 0, 3, ifreq, iamp, ifqdev, iampdev, idurdev
loop_lt indx, 1, 11, loop
instr 10 ;subinstrument for playing one partial
;receive the parameters from the master instrument
ifreqnorm = p4 ;standard frequency of this partial
iampnorm = p5 ;standard amplitude of this partial
ifqdev = p6 ;maximum freq deviation in cents
iampdev = p7 ;maximum amp deviation in dB
idurdev = p8 ;maximum duration deviation in %
;calculate frequency
icent random -ifqdev, ifqdev ;cent deviation
ifreq = ifreqnorm * cent(icent)
;calculate amplitude
idb random -iampdev, iampdev ;dB deviation
iamp = iampnorm * ampdb(idb)
;calculate duration
idurperc random -idurdev, idurdev ;duration deviation (%)
iptdur = p3 * 2^(idurperc/100)
p3 = iptdur ;set p3 to the calculated value
;play partial
aenv transeg 0, .01, 0, iamp, p3-.01, -10, 0
apart poscil aenv, ifreq, giSine
outs apart, apart
f 0 3600
It will depend on the power of your computer whether you can play examples like this in realtime. Have a look at chapter 2D (Live Audio) for tips on getting the best possible performance from your
Csound orchestra.
In the next example we will use additive synthesis to make a kind of wobble bass. It starts as a bass, then evolves into something else, and then ends as a bass again. We will first generate all the
inharmonic partials with a loop. Ordinary partials are arithmetic: we add the same value to one partial to get to the next. In this example we will instead use geometric partials: we
multiply one partial by a certain number (kfreqmult) to get the next partial frequency. This number is not constant, but is generated by a sine oscillator. This is frequency modulation. Then
some randomness is added to make a more interesting sound, and a chorus effect to make the sound more "fat". The exponential function, exp, is used because if we move upwards in common musical scales,
the frequencies grow exponentially.
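The difference between arithmetic and geometric partials can be seen in a few lines of plain Python (illustrative numbers only; r stands in for a momentary value of kfreqmult):

f0 = 110.0
arithmetic = [f0 * k for k in range(1, 6)]   # 110, 220, 330, 440, 550 Hz
r = 1.5                                      # stand-in for a momentary kfreqmult value
geometric = [f0 * r ** k for k in range(5)]  # 110, 165, 247.5, 371.25, 556.875 Hz
print(arithmetic)
print(geometric)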
EXAMPLE 04A09_Wobble_bass.csd
<CsoundSynthesizer> ; Wobble bass made with additive synthesis
<CsOptions> ; and frequency modulation
; Example by Bjørn Houdorf, March 2013
sr = 44100
ksmps = 1
nchnls = 2
0dbfs = 1
instr 1
kamp = 24 ; Amplitude
kfreq expseg p4, p3/2, 50*p4, p3/2, p4 ; Base frequency
iloopnum = p5 ; Number of all partials generated
alyd1 init 0
alyd2 init 0
seed 0
kfreqmult oscili 1, 2, 1
kosc oscili 1, 2.1, 1
ktone randomh 0.5, 2, 0.2 ; A random input
icount = 1
loop: ; Loop to generate partials to additive synthesis
kfreq = kfreqmult * kfreq
atal oscili 1, 0.5, 1
apart oscili 1, icount*exp(atal*ktone) , 1 ; Modulate each partial
anum = apart*kfreq*kosc
asig1 oscili kamp, anum, 1
asig2 oscili kamp, 1.5*anum, 1 ; Chorus effect to make the sound more "fat"
asig3 oscili kamp, 2*anum, 1
asig4 oscili kamp, 2.5*anum, 1
alyd1 = (alyd1 + asig1+asig4)/icount ;Sum of partials
alyd2 = (alyd2 + asig2+asig3)/icount
loop_lt icount, 1, iloopnum, loop ; End of loop
outs alyd1, alyd2 ; Output generated sound
f1 0 128 10 1
i1 0 60 110 50
gbuzz, buzz and GEN11
gbuzz is useful for creating additive tones made up of harmonically related cosine waves. Rather than defining attributes for every partial individually, gbuzz allows us to define global aspects of the
additive tone: specifically, the number of partials in the tone, the partial number of the lowest partial present, and an amplitude coefficient multiplier which shifts the peak of spectral energy in
the tone. Although the number of harmonics (knh) and lowest harmonic (klh) are k-rate arguments, they are interpreted as integers by the opcode; therefore changes from integer to integer will result in
discontinuities in the output signal. The amplitude coefficient multiplier, however, allows smooth modulations.
In the following example a 100Hz tone is created in which the number of partials it contains rises from 1 to 20 across its 8 second duration. A spectrogram/sonogram displays how this manifests
spectrally. A linear frequency scale is employed so that partials appear equally spaced.
EXAMPLE 04A10_gbuzz.csd
-o dac
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1
; a cosine wave
gicos ftgen 0, 0, 2^10, 11, 1
instr 1
knh line 1, p3, 20 ; number of harmonics
klh = 1 ; lowest harmonic
kmul = 1 ; amplitude coefficient multiplier
asig gbuzz 1, 100, knh, klh, kmul, gicos
outs asig, asig
i 1 0 8
The total number of partials only reaches 19 because the line function only reaches 20 at the very conclusion of the note.
In the next example the number of partials contained within the tone remains constant but the partial number of the lowest partial rises from 1 to 20.
EXAMPLE 04A11_gbuzz_partials_rise.csd
-o dac
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1
; a cosine wave
gicos ftgen 0, 0, 2^10, 11, 1
instr 1
knh = 20
klh line 1, p3, 20
kmul = 1
asig gbuzz 1, 100, knh, klh, kmul, gicos
outs asig, asig
i 1 0 8
In the sonogram it can be seen how, as lowermost partials are removed, additional partials are added at the top of the spectrum. This is because the total number of partials remains constant at 20.
In the final gbuzz example the amplitude coefficient multiplier rises from 0 to 2. It can be heard (and seen in the sonogram) how, when this value is zero, the greatest emphasis is placed on the
lowermost partial, and when this value is 2, the uppermost partial has the greatest emphasis.
EXAMPLE 04A12_gbuzz_amp_coeff_rise.csd
-o dac
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1
; a cosine wave
gicos ftgen 0, 0, 2^10, 11, 1
instr 1
knh = 20
klh = 1
kmul line 0, p3, 2
asig gbuzz 1, 100, knh, klh, kmul, gicos
fout "gbuzz3.wav",4,asig
i 1 0 8
buzz is a simplified version of gbuzz with fewer parameters – it does not provide for modulation of the lowest partial number and amplitude coefficient multiplier.
GEN11 creates a function table waveform using the same parameters as gbuzz. When a gbuzz tone is required but no performance time modulation of its parameters is needed GEN11 may provide a more
efficient option. GEN11 also opens the possibility of using its waveforms in a variety of other opcodes. gbuzz, buzz and GEN11 may prove useful as a source in subtractive synthesis.
Additive synthesis can still be an exciting way of producing sounds. Today's computational power and programming structures open the way for new discoveries and ideas. The later examples were
intended to show some of this potential of additive synthesis in Csound.
1. Jean-Claude Risset, Introductory Catalogue of Computer Synthesized Sounds (1969), cited after Dodge/Jerse, Computer Music, New York / London 1985, p.94^^ | {"url":"http://en.flossmanuals.net/csound/_all/","timestamp":"2014-04-16T08:19:19Z","content_type":null,"content_length":"1048985","record_id":"<urn:uuid:c53accf7-f894-45ca-907a-2aab5787f754>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00482-ip-10-147-4-33.ec2.internal.warc.gz"} |
Boolean Functions
October 5th 2011, 04:01 AM #1
Boolean Functions
Hi I'm having trouble with this question and was hoping for a push in the right direction...
I've been given the boolean function
$f = w \oplus y \oplus xz \oplus wxz \oplus wyz \oplus wxyz$
and been asked to apply the karnaugh map method to it.
However I don't know how to get straight from that function to a Karnaugh map and I'm having trouble with the algebra in trying to get rid of the $\oplus$ signs.
I've gotten as far as
$f = (wy'\vee w'y)(x'\vee z')\vee (w'\vee y)(w\vee y')(xy) \oplus wxz \oplus wyz \oplus wxyz$
and I'm finding the rest of the algebra messy and intimidating. I don't think I'm on the right track.
I'd really appreciate some pointers, thanks in advance!
Re: Boolean Functions
As I understand, the Karnaugh map method goes from truth values to a formula, so you are not supposed to symbolically transform your formula into a DNF.
Draw a 4x4 Karnaugh map and label it as shown here. Then for each term in your sum put one mark into the corresponding cells. For example, if the top of the table says xy and the columns are marked
00, 01, 11, 10, and if the left of the table says zw and the rows are marked 00, 01, 11, 10, then for the term xz you would put one mark into each of the four cells of the lower-right quarter of
the table. You can use "xz" as the mark to keep track of the terms.
If there are n variables total, a term consisting of k variables corresponds to $2^{n-k}$ cells of the table. Therefore, you will have 8 + 8 + 4 + 2 + 2 + 1 = 25 marks in total. Finally, since
$\oplus$ is exclusive disjunction, cells with an even number of marks will have truth value F and those with an odd number of marks will have T. After that, you can construct a DNF corresponding to
this Karnaugh map.
Re: Boolean Functions
Thanks emakarov!
I really appreciate your prompt reply, but the question seems to suggest reducing the function to a DNF because it states "recall that $x \oplus y = xy' \vee x'y$ ..."
Should I apply the karnaugh map method without first going through the algebra regardless?
Thanks again,
Re: Boolean Functions
Also I got 27 marks... And I'm generally confused... Do I put a 1 (true) in a cell with an even number of marks and a 0 (false) in a cell with an odd number of marks?
In that case do I still put a 0 in the cells with no marks?
Re: Boolean Functions
First, what does the question exactly ask? The OP says "apply the Karnaugh map method," but what do you have to get in the end?
To create a Karnaugh map, one needs to know the truth table. In contrast, rewriting using logical equivalences neither uses nor gives the truth table, so I don't see how Karnaugh method is useful
Maybe $x \oplus y = xy' \vee x'y$ was given to remind the definition and the truth table of $\oplus$.
Re: Boolean Functions
The question says:
"recall that the boolean addition is defined by the rule $x \oplus y = xy' \vee x'y$. Consider the Boolean function in four variables $w, x, y z$ defined by the formula $f= w \oplus y \oplus xz \
oplus wxz \oplus wyz \oplus xyz \oplus wxyz$.
Apply the the Karnaugh map method to find all simple Boolean expressions corresponding to $f$."
Thanks again for your help emakarov!
Re: Boolean Functions
Well, do you agree with the following correspondence between the terms and the number of cells?
$\begin{array}{c|c}\mbox{Term} & \mbox{Number of cells (marks)}\\\hline w & 8\\y & 8\\xz & 4\\wxz & 2\\wyz & 2\\wxyz & 1\\\hline\mbox{Total} & 25\end{array}$
The other way around. It's as if you make a Karnaugh map for each of the 6 terms and then add (using ⊕) the corresponding cells together. The sum 1 ⊕ 1 ⊕ ... ⊕ 1 = 1 iff the left-hand side has an
odd number of 1's.
Yes because zero is an even number.
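The mark counting above is easy to verify mechanically. A small Python sketch (just a check, not part of the thread's required method) enumerates all 16 assignments for the seven-term version of f from the question:

from itertools import product

terms = [
    lambda w, x, y, z: w,
    lambda w, x, y, z: y,
    lambda w, x, y, z: x & z,
    lambda w, x, y, z: w & x & z,
    lambda w, x, y, z: w & y & z,
    lambda w, x, y, z: x & y & z,
    lambda w, x, y, z: w & x & y & z,
]
marks = 0
table = {}
for w, x, y, z in product((0, 1), repeat=4):
    hits = sum(t(w, x, y, z) for t in terms)   # marks landing in this cell
    marks += hits
    table[(w, x, y, z)] = hits % 2             # XOR = parity of marks in the cell
print(marks)                                   # 27 = 8 + 8 + 4 + 2 + 2 + 2 + 1
print(sum(table.values()))                     # number of cells where f is true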
Re: Boolean Functions
Ah I understand, that makes sense, however there is an additional term in the function that doesn't appear in your table $xyz$.
So with the inclusion of xyz, would it be 27 marks?
How would I go about answering this question by manipulating the function into a DNF? I'm confused because the question says to find "all simple Boolean expressions corresponding to $f$"
Thanks again. I really do appreciate your help.
Re: Boolean Functions
The concepts of "simple Boolean expressions" and which simple Boolean expressions correspond to a function are not standard. Could you give the definitions? Are simple Boolean expressions
conjunctions of variables and negation of variables?
Re: Boolean Functions
I'm sorry, I really don't know. I'm only a beginner, just started a discrete maths course this semester and I don't think we went into that much detail in the lectures...
Re: Boolean Functions
Well, you have to either look up the meaning of "a simple Boolean expression" in the textbook/lecture notes or ask the instructor. It is impossible to solve a problem if one does not know the
definitions of all concepts involved; otherwise it is not clear what to solve.
My guess would be that simple Boolean expressions corresponding to f are conjunctions of variables and their negations that occur in the DNF of f, but this needs to be double-checked.
Re: Boolean Functions
Thanks for the advice emakarov, I'll be sure to check.
Results 1 - 10 of 16
, 2005
"... We propose a new interior point based method to minimize a linear function of a matrix variable subject to linear equality and inequality constraints over the set of positive semidefinite
matrices. We show that the approach is very efficient for graph bisection problems, such as max-cut. Other appli ..."
Cited by 207 (17 self)
We propose a new interior point based method to minimize a linear function of a matrix variable subject to linear equality and inequality constraints over the set of positive semidefinite matrices.
We show that the approach is very efficient for graph bisection problems, such as max-cut. Other applications include max-min eigenvalue problems and relaxations for the stable set problem.
- Mathematical Programming , 1996
"... Jorge Nocedal z An algorithm for minimizing a nonlinear function subject to nonlinear inequality constraints is described. It applies sequential quadratic programming techniques to a sequence of
barrier problems, and uses trust regions to ensure the robustness of the iteration and to allow the direc ..."
Cited by 103 (17 self)
Jorge Nocedal. An algorithm for minimizing a nonlinear function subject to nonlinear inequality constraints is described. It applies sequential quadratic programming techniques to a sequence of
barrier problems, and uses trust regions to ensure the robustness of the iteration and to allow the
barrier problems, and uses trust regions to ensure the robustness of the iteration and to allow the direct use of second order derivatives. This framework permits primal and primal-dual steps, but
the paper focuses on the primal version of the new algorithm. An analysis of the convergence properties of this method is presented. Key words: constrained optimization, interior point method,
large-scale optimization, nonlinear programming, primal method, primal-dual method, SQP iteration, barrier method, trust region method.
- SIAM Review , 2002
"... Abstract. Interior methods are an omnipresent, conspicuous feature of the constrained optimization landscape today, but it was not always so. Primarily in the form of barrier methods,
interior-point techniques were popular during the 1960s for solving nonlinearly constrained problems. However, their ..."
Cited by 76 (4 self)
Abstract. Interior methods are an omnipresent, conspicuous feature of the constrained optimization landscape today, but it was not always so. Primarily in the form of barrier methods, interior-point
techniques were popular during the 1960s for solving nonlinearly constrained problems. However, their use for linear programming was not even contemplated because of the total dominance of the
simplex method. Vague but continuing anxiety about barrier methods eventually led to their abandonment in favor of newly emerging, apparently more efficient alternatives such as augmented Lagrangian
and sequential quadratic programming methods. By the early 1980s, barrier methods were almost without exception regarded as a closed chapter in the history of optimization. This picture changed
dramatically with Karmarkar’s widely publicized announcement in 1984 of a fast polynomial-time interior method for linear programming; in 1985, a formal connection was established between his method
and classical barrier methods. Since then, interior methods have advanced so far, so fast, that their influence has transformed both the theory and practice of constrained optimization. This article
provides a condensed, selective look at classical material and recent research about interior methods for nonlinearly constrained optimization.
- SIAM Journal on Optimization , 1999
"... The design and implementation of a new algorithm for solving large nonlinear programming problems is described. It follows a barrier approach that employs sequential quadratic programming and
trust regions to solve the subproblems occurring in the iteration. Both primal and primal-dual versions of t ..."
Cited by 74 (17 self)
The design and implementation of a new algorithm for solving large nonlinear programming problems is described. It follows a barrier approach that employs sequential quadratic programming and trust
regions to solve the subproblems occurring in the iteration. Both primal and primal-dual versions of the algorithm are developed, and their performance is illustrated in a set of numerical tests. Key
words: constrained optimization, interior point method, large-scale optimization, nonlinear programming, primal method, primal-dual method, successive quadratic programming, trust region method.
- Mathematical Programming , 1995
"... We present a generalization of a homogeneous self-dual linear programming (LP) algorithm to solving the monotone complementarity problem (MCP). The algorithm does not need to use any "big-M"
parameter or two-phase method, and it generates either a solution converging towards feasibility and compleme ..."
Cited by 24 (3 self)
We present a generalization of a homogeneous self-dual linear programming (LP) algorithm to solving the monotone complementarity problem (MCP). The algorithm does not need to use any "big-M"
parameter or two-phase method, and it generates either a solution converging towards feasibility and complementarity simultaneously or a certificate proving infeasibility. Moreover, if the MCP is
polynomially solvable with an interior feasible starting point, then it can be polynomially solved without using or knowing such information at all. To our knowledge, this is the first interior-point
and infeasible-starting algorithm for solving the MCP that possesses these desired features. Preliminary computational results are presented. Key words: Monotone complementarity problem, homogeneous
and self-dual, infeasible-starting algorithm. Running head: A homogeneous algorithm for MCP. Department of Management, Odense University, Campusvej 55, DK-5230 Odense M, Denmark, email:
eda@busieco.ou.dk. y De...
- SIAM J. OPTIM , 2000
"... We propose a BFGS primal-dual interior point method for minimizing a convex function on a convex set defined by equality and inequality constraints. The algorithm generates feasible iterates and
consists in computing approximate solutions of the optimality conditions perturbed by a sequence of posit ..."
Cited by 13 (1 self)
We propose a BFGS primal-dual interior point method for minimizing a convex function on a convex set defined by equality and inequality constraints. The algorithm generates feasible iterates and
consists in computing approximate solutions of the optimality conditions perturbed by a sequence of positive parameters µ converging to zero. We prove that it converges q-superlinearly for each fixed
µ. We also show that it is globally convergent to the analytic center of the primal-dual optimalset when µ tends to 0 and strict complementarity holds.
, 1997
"... Recently the authors have proposed a homogeneous and self-dual algorithm for solving the monotone complementarity problem (MCP) [5]. The algorithm is a single phase interior-point type method,
nevertheless it yields either an approximate optimal solution or detects a possible infeasibility of th ..."
Cited by 13 (1 self)
Recently the authors have proposed a homogeneous and self-dual algorithm for solving the monotone complementarity problem (MCP) [5]. The algorithm is a single phase interior-point type method,
nevertheless it yields either an approximate optimal solution or detects a possible infeasibility of the problem. In this paper we specialize the algorithm to the solution of general smooth convex
optimization problems that also possess nonlinear inequality constraints and free variables. We discuss an implementation of the algorithm for large-scale sparse convex optimization. Moreover, we
present computational results for solving quadratically constrained quadratic programming and geometric programming problems, where some of the problems contain more than 100,000 constraints and
variables. The results indicate that the proposed algorithm is also practically efficient. Department of Management, Odense University, Campusvej 55, DK-5230 Odense M, Denmark. E-mail:
eda@busieco.ou.dk y ...
- Computers & Chemical Engineering , 1999
"... A novel nonlinear programming (NLP) strategy is developed and applied to the optimization of differential algebraic equation (DAE) systems. Such problems, also referred to as dynamic
optimization problems, are common in chemical process engineering and remain challenging applications of nonlinear pr ..."
Cited by 6 (3 self)
A novel nonlinear programming (NLP) strategy is developed and applied to the optimization of differential algebraic equation (DAE) systems. Such problems, also referred to as dynamic optimization
problems, are common in chemical process engineering and remain challenging applications of nonlinear programming. These applications often consist of large, complex nonlinear models that result from
discretizations of DAEs. Variables in the NLP model include state and control variables, with far fewer control variables than states. Moreover, all of these discretized variables have associated
upper and lower bounds which can be potentially active. To deal with this large, highly constrained problem, an interior point NLP strategy is developed. Here a log barrier function is used to deal
with the large number of bound constraints in order to transform the problem to an equality constrained NLP. A modified Newton method is then applied directly to this problem. In addition, this
method uses an efficient decomposition of the discretized DAEs and the solution of the Newton step is performed in the reduced space of the independent variables. The resulting approach exploits many
of the features of the DAE system and is performed element by element in a forward manner. Several large dynamic process optimization problems are considered to demonstrate the effectiveness of this
approach; these include complex separation and reaction processes (including reactive distillation) with several hundred DAEs. NLP formulations with over 55,000 variables are considered. These
problems are solved in 5 to 12 CPU minutes on small workstations. Key words: interior point; dynamic optimization; nonlinear programming
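To illustrate the log-barrier idea described in the abstract above in its simplest possible form (a generic one-dimensional sketch in Python, not the paper's algorithm): minimizing c*x subject to x >= 0 via the barrier objective c*x - mu*log(x) has minimizer mu/c, which approaches the constrained solution as mu shrinks.

def barrier_min(c, mu, x=1.0, iters=100, tol=1e-12):
    """Newton's method on phi(x) = c*x - mu*log(x), x > 0 (convex for c, mu > 0)."""
    for _ in range(iters):
        grad = c - mu / x
        step = grad / (mu / x ** 2)     # Newton step: grad / hess
        if step > 0:
            step = min(step, 0.9 * x)   # damp so the iterate stays strictly feasible
        x -= step
        if abs(grad) < tol:
            break
    return x

for mu in (1.0, 0.1, 0.01):
    print(mu, barrier_min(c=2.0, mu=mu))  # minimizer mu/c -> 0 as mu -> 0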
- Department of Computational and Applied Mathematics, Rice University , 1995
"... In this work we define a trust-region feasible-point algorithm for approximating solutions of the nonlinear system of equalities and inequalities F(x, y) = 0, y ≥ 0, where F: R^n × R^m → R^p is
continuously differentiable. This formulation is quite general; the Karush-Kuhn-Tucker condi ..."
Cited by 4 (0 self)
In this work we define a trust-region feasible-point algorithm for approximating solutions of the nonlinear system of equalities and inequalities F(x, y) = 0, y ≥ 0, where F: R^n × R^m → R^p is
continuously differentiable. This formulation is quite general; the Karush-Kuhn-Tucker conditions of a general nonlinear programming problem are an obvious example, and a set of equalities and
inequalities can be transformed, using slack variables, into such form. We will be concerned with the possibility that n, m, and p may be large and that the Jacobian matrix may be
sparse and rank deficient. Exploiting the convex structure of the local model trust-region subproblem, we propose a globally convergent inexact trust-region feasible-point algorithm to minimize an
arbitrary norm of the residual, say ||F(x, y)||_a, subject to the nonnegativity constraints. This algorithm uses a trust-region globalization strategy to determine a descent direction as an inexact
solution of the local model trust-region subproblem and then, it uses linesearch techniques to obtain an acceptable steplength. We demonstrate that, under rather weak hypotheses, any accumulation
point of the iteration sequence is a constrained stationary point for f = ||F||_a, and that the sequence of constrained residuals converges to zero.
Compiled Overall Rankings (Updated 2-21-06)
When did Pineiro become a Phillie?
Thanks for all the comments everyone. To those who are posting that they disagree with my rankings, please keep in mind that I am simply averaging a number of sources, not infusing my own opinion.
To answer some of your questions, I have no idea how Pineiro became a Phillie on my list!
Also, here are some of the sources I'm using.
Sporting News
USA Today
Numerous paper publications
Projections from trusted websites like Baseball Prospectus
Injury History
Ballpark Factor
Lineup Factor
and others.
I hope the rankings are useful and helpful. Thanks for all the comments.
why not take Yahoo's (ESPN, Fox, CBS?) out to make it better?
I understand your reasoning, but I think using as many as possible makes things more accurate. I throw out the highest and lowest ranking for each player anyway, but I like including as many sources
as possible to give the truest representation.
By the way, could someone explain to me what "Standard Deviation" is? Thanks! I've had more than one person tell me it would be useful and easy to include since I'm using Excel, but I don't know what
it is!
Thank you.
By the by, a new update was just published to the rankings (hopefully I have everyone's teams correct this time!!).
btaylor1978 wrote: I understand your reasoning, but I think using as many as possible makes things more accurate. I throw out the highest and lowest ranking for each player anyway, but I like
including as many sources as possible to give the truest representation.
By the way, could someone explain to me what "Standard Deviation" is? Thanks! I've had more than one person tell me it would be useful and easy to include since I'm using Excel, but I don't know
what it is!
basically its a measure of dispersion (or variation) in the sample. it measures how spread out the numbers are. there seems to be a pretty solid consensus that vlad is the #3 pick. maybe a few
rankings he is #4. since there is very little variation in his rankings he has a low standard deviation. someone else like soriano or sizemore will likely have a higher standard deviation. a high
standard deviation for someone in the top 50 or 100 means that the sources have a hard time agreeing on that players rankings. standard deviation isnt quite as useful outside of the top 100. the
difference between a ranking #150 and #170 isnt nearly as big as the difference between #30 and #50.
if you have excel, the formula is pretty simple. its "=stdev(data set)" without the quotes. the data set would just be all the rankings for that particular player.
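For anyone without Excel, the same calculation works in Python (made-up sample rankings, just to show the call):

import statistics

rankings = [3, 3, 4, 3, 3, 4]       # one player's rank across six sources
print(statistics.stdev(rankings))   # sample standard deviation, like Excel's STDEV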
back from the dead
Add this thread to another reason why the cafe is a top three resource to fantasy baseball. We show the ratings sites how to rank their players
Very cool. Makes perfect sense.
After running the numbers, the highest StDev in my Top 25 belongs to Chris Carpenter and Miguel Tejada, each at about 10. A-Rod and Pujols, to no one's surprise, have the lowest StDev, both at 0.34.
Thanks for the help. Would you all recommend that I publish this as part of the spreadsheet rankings on the website?
I appreciate your help!
including Std. Dev. and an explaination at the bottom would be quality. | {"url":"http://www.fantasybaseballcafe.com/forums/viewtopic.php?t=170441&start=10","timestamp":"2014-04-18T21:13:10Z","content_type":null,"content_length":"76658","record_id":"<urn:uuid:f0bc5e56-0ccf-4126-827b-61617277b490>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00481-ip-10-147-4-33.ec2.internal.warc.gz"} |
Brook Taylor
Courtesy of The National Portrait Gallery, London
Brook Taylor, (born August 18, 1685, Edmonton, Middlesex, England—died December 29, 1731, London), British mathematician, a proponent of Newtonian mechanics and noted for his contributions to the
development of calculus.
Taylor was born into a prosperous and educated family who encouraged the development of his musical and artistic talents, both of which found mathematical expression in his later life. He was tutored
at home before he entered St. John’s College, Cambridge, in 1701 to study law. He completed his LL.B. in 1709 and his doctorate in 1714, but it is doubtful that he ever practiced as a lawyer.
Taylor’s first important mathematical paper, which provided a solution to the problem of the centre of oscillation of a body, was published in 1714, although he had actually written it by 1708. His
delay in publishing led to a priority dispute with the noted Swiss mathematician Johann Bernoulli. Taylor’s famous investigation of the vibrating string, a topic that played a large role in
clarifying what mathematicians meant by a function, was also published in 1714.
Taylor’s Methodus Incrementorum Directa et Inversa (1715; “Direct and Indirect Methods of Incrementation”) added to higher mathematics a new branch now called the calculus of finite differences.
Using this new development, Taylor studied a number of special problems, including the vibrating string, the determination of the centres of oscillation and percussion, and the path of a light ray
refracted in the atmosphere. The Methodus also contained the celebrated formula known as Taylor’s theorem, which Taylor had first stated in 1712 and the full significance of which began to be
recognized only in 1772 when the French mathematician Joseph-Louis Lagrange proclaimed it the basic principle of differential calculus.
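In modern notation, the theorem that bears his name expands a sufficiently differentiable function about a point a as

$$f(x) = \sum_{n=0}^{\infty} \frac{f^{(n)}(a)}{n!}\,(x-a)^n,$$

a form that postdates Taylor's own statement, which did not address questions of convergence.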
A gifted artist, Taylor set forth in Linear Perspective (1715) the basic principles of perspective. This work and his New Principles of Linear Perspective (1719) contained the first general treatment
of the principle of vanishing points. Taylor was elected a fellow of the Royal Society of London in 1712 and in the same year sat on the committee for adjudicating Sir Isaac Newton’s and Gottfried
Wilhelm Leibniz’s conflicting claims of priority in the invention of calculus. | {"url":"http://www.britannica.com/print/topic/584793","timestamp":"2014-04-20T22:39:24Z","content_type":null,"content_length":"10566","record_id":"<urn:uuid:2f48adae-b68b-4624-9ac4-221314862b54>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00325-ip-10-147-4-33.ec2.internal.warc.gz"} |
Math Forum Discussions
Topic: Calculate angles, sides and areas for any regular polygon without
using trig functions and Pi
Replies: 9 Last Post: May 29, 2013 9:55 PM
JT - Re: Calculate angles, sides and areas for any regular polygon without using trig functions and Pi
Posted: May 29, 2013 11:34 AM
On 29 Maj, 17:23, JT <jonas.thornv...@gmail.com> wrote:
> On 29 Maj, 17:07, Ray Vickson <RGVick...@shaw.ca> wrote:
> > On Tuesday, May 28, 2013 5:35:58 PM UTC-7, JT wrote:
> > > Do you think i could calculate all the angles in turns and the lengths
> > > of sides(perimeter) and area of any regular polygon without using
> > > trigonometric functions and Pi?
> > Why don't you do some reading about what is known, and has been known for hundreds of years? You just cannot avoid irrational numbers when getting the sides, etc., of most regular
polygons. That means that you can NEVER express the answer in terms of a nice fraction (i.e., rational number). If you claim to be able to do it you are provably doing something wrong,
because it cannot be done. Period. End of story.
> > A nice, expository article that discusses some of the history (what the Greeks knew, what Gauss discovered, etc) is given in
> >http://www.math.iastate.edu/thesisarchive/MSM/EekhoffMSMSS07.pdf
> > This article has actual derivations and proofs of some of the formulas. Of course you can avoid calculating pi, just by expressing the angles in degrees instead of radians. The issue
is how you can calculate trig functions of various angles, and that is what the various formulas are doing.
> I do intend to use radians nor degrees, i will use turns, ratios and
> triangles to solve it for any polygon and the forumula will be
> expressed as a ratio vs the vertex line. So the vertex line will be
> one and the vertices ratios of one.
Sorry, not radians, not degrees, and the edges will be expressed as
ratios of the vertex line, so edge/vertex where the vertex line is 1. | {"url":"http://mathforum.org/kb/message.jspa?messageID=9128627","timestamp":"2014-04-18T18:53:19Z","content_type":null,"content_length":"29534","record_id":"<urn:uuid:b1ed1894-14d0-41bb-86bc-3c8b19fdffdf>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00571-ip-10-147-4-33.ec2.internal.warc.gz"}
Department of Mathematics, University of North Alabama
[7] A Decomposition for a Class of Nonlinear Functionals, In AMS Southeastern Section Spring 2013 Conference (to appear), 2013.
[6] Establishing the Impact of a Computer Science/Mathematics Anti-Symbiotic Stereotype in CS Students, In Journal of Computing Sciences in Colleges (to appear), Consortium for Computing Sciences in
Colleges, 2013.
[5] Using Computer Programming to Teach Mathematical Reasoning and Sense Making in the High School Classroom, In ACTM Fall Forum 2012, 2012.
[4] Involving Undergraduates in Interdisciplinary Research to Support STEM Education, 2012. (Council on Undergraduate Research, 2012 Biennial Conference)
[3] A Measure Associated with a Non-linear Operator on a Banach Lattice, In MAA Southeastern Section Spring 2012 Conference, 2012.
[2] Level the Playing Field for Women with Immediate Immersion into Beginning Programming in a Problem Based Learning Environment, In Tennessee Women in Computing, 2011.
[1] The Influence of Stereotypes on STEM majors in a Problem-Based Curriculum Investigation: Computer Science vs. Mathematics Majors, In Proceedings of the 53rd Annual ACM Mid-Southeast Conference, | {"url":"http://www.una.edu/math/faculty/stovall.html","timestamp":"2014-04-17T18:35:43Z","content_type":null,"content_length":"8946","record_id":"<urn:uuid:ca4112cb-9a41-49ec-b598-d56e1b22a279>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00267-ip-10-147-4-33.ec2.internal.warc.gz"} |
Adelaide, WA Math Tutor
Find an Adelaide, WA Math Tutor
...In my first role as a software developer, I developed imaging software in Windows (SDK 3) when Windows just started offering software development kits. I continue to work in Windows in
current projects using Visual Studio. I have been working in MS Windows for over 20 years.
16 Subjects: including algebra 1, algebra 2, Microsoft Excel, geometry
...I also managed undergraduates in my laboratory and taught them scientific techniques, data analysis, 'laboratory math', and safety training. The bottom line is that I love teaching, and now
that I am graduated and pursuing a career in industry, I find that I miss it. I am confident that with a ...
14 Subjects: including algebra 2, trigonometry, anthropology, algebra 1
...In total, I have worked with over 400 physics students, so I am familiar with addressing many common conceptual misunderstandings and calculation difficulties. I took the general GRE in 2010,
scoring 790/800 on quantitative reasoning and 740/800 on verbal reasoning. Many of my GRE students have had backgrounds in the humanities, so have not taken math classes for several years.
17 Subjects: including prealgebra, English, linear algebra, algebra 1
I am a PhD in Physics and Mathematics since March 1987. I have excellent interpersonal and communication skills. I am an extensive professional with teaching experience in the fields of
Mathematics (Pre-calculus, Calculus, Differential Equations) and Medical and Biological Physics in a position of associate professor.
9 Subjects: including algebra 1, algebra 2, calculus, geometry
...Additionally, I have taken undergraduate and graduate level Biostatistics courses with success. I have a Ph.D. in Immunology. Genetics was part of my required coursework for both my
undergraduate and graduate degrees.
17 Subjects: including geometry, statistics, probability, algebra 1 | {"url":"http://www.purplemath.com/Adelaide_WA_Math_tutors.php","timestamp":"2014-04-16T13:22:19Z","content_type":null,"content_length":"23934","record_id":"<urn:uuid:1ee2f32a-f3df-4519-89eb-a087472c39f5>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00453-ip-10-147-4-33.ec2.internal.warc.gz"}
Van Nuys Algebra 2 Tutor
Find a Van Nuys Algebra 2 Tutor
...While obtaining my degrees, I had to take many public speaking courses. I passed all with flying colors! I also am currently employed by a television news station, where speaking in front of a
lot of people every day is required.
35 Subjects: including algebra 2, reading, English, writing
...I have also started my own asset holding company this year. I have worked with Microsoft Word on an almost daily basis for the past 12 years. I have written numerous scholastic papers (two thesis
papers 70+ pages), inter-office memorandums, resumes, agendas, work related reports, and technical (instructional) pieces.
31 Subjects: including algebra 2, reading, writing, English
...I recently graduated from UC San Diego where its mathematics program is ranked top 20 in the nation. Furthermore, I helped fellow peers with their math classes and through my tutoring
techniques, raised their grades by a substantial amount while helping them understand the material. While study...
13 Subjects: including algebra 2, chemistry, calculus, geometry
...That is how I approach tutoring. I graduated from California State University Northridge (CSUN) with a BS degree in electrical engineering. As an engineering student, I have a solid knowledge
of mathematics, and my engineering experience has given me an insight into the way in which mathematics relates to real life applications.
9 Subjects: including algebra 2, calculus, algebra 1, trigonometry
...I have studied sewing and quilting for several years. I sew my own costumes for Renaissance fairs and Halloween, and I've made two quilts. I embroider by hand and can make great finished
products by hand and by machine.
15 Subjects: including algebra 2, biology, algebra 1, elementary math | {"url":"http://www.purplemath.com/Van_Nuys_algebra_2_tutors.php","timestamp":"2014-04-18T15:52:50Z","content_type":null,"content_length":"23872","record_id":"<urn:uuid:e0d4a348-e317-47b8-bee8-61e61a532f83>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00245-ip-10-147-4-33.ec2.internal.warc.gz"} |
local homeomorphism
A continuous map $f : X \to Y$ between topological spaces is called a local homeomorphism if restricted to a neighbourhood of every point in its domain it becomes a homeomorphism.
One also says that this exhibits $X$ as an étale space over $Y$.
Notice that, despite the similarity of terms, local homeomorphisms are, in general, not local isomorphisms in any natural way. See the examples below.
A local homeomorphism is a continuous map $p : E \to B$ between topological spaces (a morphism in Top) such that
• for every $e \in E$, there is an open set $U \ni e$ such that the image $p_*(U)$ is open in $B$ and the restriction of $p$ to $U$ is a homeomorphism $p|_U: U \to p_*(U)$,
or equivalently
• for every $e \in E$, there is a neighbourhood $U$ of $e$ such that the image $p_*(U)$ is a neighbourhood of $p(e)$ and $p|_U: U \to p_*(U)$ is a homeomorphism.
See also etale space.
For $Y$ any topological space and for $S$ any set regarded as a discrete space, the projection
$X \times S \to X$
is a local homeomorphism.
For $\{U_i \to Y\}$ an open cover, let
$X := \coprod_i U_i$
be the disjoint union space of all the patches. Equipped with the canonical projection
$\coprod_i U_i \to Y$
this is a local homeomorphism.
In general, for every sheaf $A$ of sets on $Y$, there is a local homeomorphism $X \to Y$ such that over any open $U \hookrightarrow Y$ the set $A(U)$ is naturally identified with the set of sections of $X \to Y$. See étale space for more on this.
Revised on March 19, 2012 by Urs Schreiber | {"url":"http://www.ncatlab.org/nlab/show/local+homeomorphism","timestamp":"2014-04-16T19:01:41Z","content_type":null,"content_length":"31916","record_id":"<urn:uuid:e1625f91-58ff-4ed6-b0ce-fec170558f9b>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00013-ip-10-147-4-33.ec2.internal.warc.gz"}
How to Measure the Density of Metals
Edited by Maluniu, Jmuddy95, Awesome.Nicky
Density is one of the basic properties of matter. Density is defined as the mass of an object divided by its volume, giving mass per unit volume. If two objects have the same volume but different densities, the object with the higher density will weigh more than the identical-looking object with the lower density. While the densities of some materials, particularly gases, can be changed by outside pressure, the density of a metal is essentially fixed under ordinary conditions. Knowing the density of an object is therefore a valuable tool for determining the makeup of a sample of unknown material, since different metals generally have different densities. Use these tips to learn how to measure the density of metals.
1. Weigh the object. Use any accurate weighing technique, such as a pan balance, to measure the weight of the object. Weigh the object while it is dry so that absorbed water does not affect the accuracy of the weighing.
2. Determine the volume of the object by measurement. Measurements can be done if the object is regularly shaped and uniform.
□ Measure the object. Use the vernier calipers to measure the dimensions of regularly shaped and uniform objects. With the shape and dimensions of the object known, basic mathematical
calculations will yield the volume. For example, the volume of a regular rod is length times pi times radius squared, while the volume of a rectangle is the product of length width and depth.
3. Find the volume of the object by displacement. Displacement will be required to find the volume of an irregularly shaped object that cannot be accurately measured. For example, a piece of rock gathered by a geologist would be too irregular to measure, and its volume would have to be found by displacement.
□ Prepare the displacement measuring vessel. Put a beaker of water on the pan weighing balance. Make sure that there is enough water in the beaker to cover the object to be measured. Do not
overfill the beaker to the point where introduction of the object to the baker would cause the water to overflow and spill out of the beaker, which would ruin the measurement.
□ Record the water level in the beaker.
□ Test the object. Tie the object to the end of a light string and lower the object in to the water such that it is fully submerged. Record the new water level in the beaker.
□ Calculate the apparent volume change in the beaker. Multiply the increase in water level by the surface area of the water to obtain the volume added to the beaker by the introduction of the
object. This added volume is the volume of the object.
4. Calculate density. Divide the weight of the object by the volume of the object. The result of this division is the density of the object.
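As a quick worked sketch in Python (the numbers are illustrative assumptions, not from the article):

```python
def density_by_displacement(mass_g, level_rise_cm, water_area_cm2):
    # Displaced volume = rise in water level x surface area of the water
    volume_cm3 = level_rise_cm * water_area_cm2
    return mass_g / volume_cm3   # density in g/cm^3

# e.g. a 53.5 g sample raising the level 0.75 cm in a 9.6 cm^2 beaker:
print(round(density_by_displacement(53.5, 0.75, 9.6), 2))   # 7.43 g/cm^3
```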
Things You'll Need
• Pan weighing balance
• Vernier calipers
Categories: Physics | {"url":"http://www.wikihow.com/Measure-the-Density-of-Metals","timestamp":"2014-04-19T08:27:19Z","content_type":null,"content_length":"62404","record_id":"<urn:uuid:9155aa17-0038-4df1-88e1-6510cdeadd86>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00470-ip-10-147-4-33.ec2.internal.warc.gz"}
Estimating a Difference
A quick way to estimate the difference between two numbers is to round each number and then subtract the rounded numbers. This probably won't be the exact answer but it may be close enough for
some purposes.
How to Estimate a difference by rounding.
□ Round each term that will be subtracted
□ Subtract the rounded numbers
An estimate can sometimes be improved. If the difference 645 - 450 were estimated, we would round 645 to 600 and 450 to 500. The estimate would be 600 - 500 or 100. One number was rounded down and
the other was rounded up. The number 645 was rounded down by 45 and 450 was rounded up by 50. Adding 45 + 50 gives 95, which rounds to 100. Therefore, a better estimate would be 200. The actual
difference is 195.
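The method just described can be automated; below is a short Python sketch (added here as an illustration; half-up rounding is used so the example above reproduces exactly):

```python
def round_half_up(x, place):
    return int(x / place + 0.5) * place

def estimate_difference(a, b, place=100):
    ra, rb = round_half_up(a, place), round_half_up(b, place)
    estimate = ra - rb
    da, db = ra - a, rb - b   # signed rounding applied to each number
    # If one number rounded down and the other up, and the combined
    # rounding exceeds half the place value, adjust the estimate.
    if da * db < 0 and abs(da - db) > place / 2:
        estimate += place if da - db < 0 else -place
    return estimate

print(estimate_difference(645, 450))   # 200 (actual difference: 195)
```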
How to Improve the Estimate.
□ Round each term that will be subtracted
□ Subtract the rounded numbers
□ If one number is rounded down and the other up, see if the combined amount of rounding is more than 50. If it is, add 100 to or subtract 100 from the estimate.
□ If both numbers are rounded down or both are rounded up, a closer estimate will not be produced by this method. | {"url":"http://www.aaamath.com/g28b-subestimate.html","timestamp":"2014-04-20T18:22:40Z","content_type":null,"content_length":"8463","record_id":"<urn:uuid:3a87529b-63a6-496d-8b44-ef3cab8b36c7>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00622-ip-10-147-4-33.ec2.internal.warc.gz"}
Spirals and Time
Recreational APL
Eugene McDonnell
Winter 1977
Spirals and Time
In this issue I write about the number spiral and the Gregorian calendar. In discussing them I shall use the ⍺⍵ notation used by Iverson in his recent book [1].
The ⍺⍵ notation has many advantages in exposition. By removing as much as possible of the programming content, the reader is free to concentrate on the mathematical aspects of what is being
presented. The notation is used as follows: the name of the function is given first, then a colon, and then the body of the function. The left and right arguments to the function are represented in
the body by the symbols ⍺ and ⍵ , respectively. In a monadic function the argument can be represented by either symbol. For example, a function to determine square roots could be written sqrt:⍵*0.5 .
A function to determine the overtime pay (time and a half) for a worker whose rate is ⍺ and who worked a total of ⍵ hours could be written as ot:⍺×1.5×0⌈⍵-40 .
A function body can be in three parts, separated by colons, to permit conditional execution. In this form the central part is executed first. If the result is 1, the right part is executed; if 0, the
left part is executed. A recursive function to compute the greatest common divisor of its arguments could be written as gcd:(⍺|⍵)gcd⍺:0=⍺:⍵ .
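As a rough Python gloss on that three-part conditional form (my translation, not from the original article):

```python
def gcd(a, w):
    # gcd:(⍺|⍵)gcd⍺:0=⍺:⍵ -- the middle part (0=⍺) is evaluated first;
    # if it is 1, the right part (⍵) is the result, otherwise the left.
    if a == 0:
        return w
    return gcd(w % a, a)   # APL ⍺|⍵ is the residue of ⍵ modulo ⍺
```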
The Number Spiral
The long-time problems editor of this journal, Bob Smith (of Scientific Time Sharing Corporation) recently found a delightful use of the scan operator in connection with the number spiral. The number
spiral is an arrangement of the integers in a spiral fashion. This can be done in a number of ways; I shall use a clockwise spiral, beginning with 0. The spiral of order n has n windings. For
example, the spiral of order 9 looks like this (reconstructed from the functions given at the end of this article):

16 17 18 19 20
15  4  5  6 21
14  3  0  7 22
13  2  1  8 23
12 11 10  9 24
At the end of this article, I’ll present functions that permit the generation of spirals of order n . You may want to try your hand at your own solutions before looking at mine. Here are three
problems in connection with the number spiral:
1. Write a function ls which takes a positive integer argument n and gives as its result the length of a side (that is, the number of rows) in the spiral of order n . The result of ls 9 is 5.
2. Write a function cn which takes a positive integer argument n and gives as a result the number of elements in the spiral of order n . The result of cn 9 is 25.
3. Write a function sp to generate the spiral of order n for any positive integer n . Remember, the numbers in the result should wind around clockwise. If you can, write the function so that it
works in either index origin.
Bob Smith’s finding has to do with the elements that appear at the turns in the number spiral. In the spiral of order 9 , these numbers are 0 1 2 4 6 9 12 16 20 . Can you think of an expression that
generates these numbers? Bob’s is -\-\⍳9 . Isn’t that pretty? (0-origin indexing).
How Long Is a Year?
The problem the calendar designer faces is to make a civil artifact, in which the unit is the day, so that it faithfully measures the journey of the sun from one vernal equinox (or some other
convenient point) to the next. Since this journey does not take an integral number of days, the calendar maker must insert an extra day in the year from time to time. This action is called intercalation.
Astronomers have long known fairly accurately how long the physical year is. Hipparchus of Rhodes, in about 150 B.C., measured the year at 365.242 days. The current estimate puts it at 365.242199
days, which is accurate to about one-tenth of a second.
The Gregorian calendar, which was first put into use in 1582, attempted to correct deficiencies in the Julian calendar. The Julian calendar had three common years of 365 days, followed by one leap
year of 366 days. We can obtain the year length by (+/365 365 365 366)÷4 , which gives 365.25. The difference between 365.25 and 365.242199 produced enough error so that the calendar eventually was
out of phase with the seasons. By 1582 the discrepancy was 9 days.
The function op:÷⍵-365.242199 tells us how many years it will take a calendar whose year has length ⍵ to become one day out of phase with the physical year. The sign of the result indicates whether
the calendar year is too long (+) and needs fewer intercalated days, or too short (-) and needs more intercalated days. The value of op 365.25 is 128.1877 years. The Julian calendar year was thus too long.
The solution provided by the Gregorian calendar was a compound one. It said that every fourth year shall be a leap year unless the year is divisible by 100, in which case it should be a common year,
unless it is also divisible by 400, in which case it should be a leap year.
A function to determine if a given year is a common year or a leap year uses a wonderful inner product. The result of ly:0≠.=4 100 400∘.|⍵ is 0 if ⍵ is the number of a common year, and 1 if it is a
leap year. This inner product is so wonderful that it was turned down for inclusion in the first edition of the Gilman and Rose book on APL [2], because it would have been too much for the book’s
intended audience at the point in the book where it would have had to appear!
How long is the Gregorian year? We can answer this question with the help of the ly function. In any 400-year period the number of days is precisely the same. We need only to compute 365+.+ly⍳400 ,
which gives 146097, and to divide this by 400, which gives 365.2425. The Gregorian calendar will be one day out of phase with the physical year in op 365.2425 , or 3322.2591 years. The Gregorian
calendar, like the Julian calendar, is too long.
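In Python terms (a paraphrase of the APL above; the ≠-reduction in ly amounts to a parity test on how many of the moduli divide the year):

```python
def leap(year, moduli=(4, 100, 400)):
    # 0≠.=4 100 400∘.|⍵ : leap iff an odd number of the moduli divide year
    return sum(year % m == 0 for m in moduli) % 2 == 1

days = sum(365 + leap(y) for y in range(400))
print(days, days / 400)        # 146097 365.2425

def op(year_length):
    # years until a calendar of this length is one day out of phase
    return 1 / (year_length - 365.242199)

print(op(days / 400))          # about 3322 years
```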
There is nothing in the Gregorian calendar, as now instituted, to handle this excess. Let’s see what we might do to remedy this defect. The multiple of 400 nearest to 3322.2591 is 3200. Let’s propose
that every year divisible by 3200 be a common year. The ly function would be modified to ly:0≠.=4 100 400 3200∘.|⍵ ; and the length of the year would be (365+.+ly⍳3200)÷3200 , or 365.2421875 days. A
calendar with a year this long would become out of phase by one day in op 365.2421875 , or 86956.52153 years, and the year would be too short. How shall we handle this? Well, the nearest multiple of
3200 to 86956.52153 is 86400. Let’s propose then that a year divisible by 86400 be a leap year. The ly function now would be ly:0≠.=4 100 400 3200 86400∘.|⍵ ; and the length of the year would be
(365+.+ly⍳86400)÷86400 , or 365.2421991 days. The result of op 365.2421991 is 13500009.24, which means that the calendar would be one day out of phase in a little over thirteen and a half million
Do you feel like quitting? I do. By the way, do any of you recognize 86400? It is also the number of seconds in one day!
Back to the Number Spiral
I solved the three problems like this:
1. ls:⌈⍵÷2
2. cn:⌊(⍵*2)÷4
3. sp:(⊖⍉sp⍵-1),(cn⍵)+⍳ls⍵:⍵=1:1 1⍴⎕io
Notice that the result of cn ⍳9 is 0 1 2 4 6 9 12 16 20 , the same result obtained by -\-\⍳9 using 0-origin indexing. In the function sp , the expression ⊖⍉ is used to give its argument a
quarter-turn counterclockwise rotation. Appending the interval vector (cn⍵)+⍳ls⍵ to it gives the desired result. The recursion is terminated when the argument ⍵ is equal to 1, in which case the
starting matrix 1 1⍴⎕io is the result.
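For readers without APL at hand, a Python sketch of the same recursion (my translation; cn and ls are inlined, 0-origin):

```python
def spiral(n):
    if n == 1:
        return [[0]]
    prev = spiral(n - 1)
    rot = [list(row) for row in zip(*prev)][::-1]   # ⊖⍉: quarter turn CCW
    cn, ls = n * n // 4, -(-n // 2)                 # count so far; side length
    for i in range(ls):
        rot[i].append(cn + i)                       # append the new column
    return rot

for row in spiral(9):
    print(" ".join(f"{v:2d}" for v in row))
```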
1. Iverson, K. E., Elementary Analysis, APL Press, Swarthmore, Pa., 1976.
2. Gilman, L., and Rose, A. J., APL: An Interactive Approach. Wiley, New York, 1970.
First appeared in APL Quote-Quad, Volume 7, Number 4, Winter 1977. | {"url":"http://www.jsoftware.com/papers/eem/spirals.htm","timestamp":"2014-04-18T03:41:00Z","content_type":null,"content_length":"11366","record_id":"<urn:uuid:04f11fc4-60f9-4e41-9d11-e08a9490d2bc>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00430-ip-10-147-4-33.ec2.internal.warc.gz"}
A sufficient condition for the ring property that every prime ideal is maximal
Let $R$ be a commutative ring with $1 \neq 0$ and suppose that for all $a \in R$, there exists an integer $n > 1$ such that $a^n = a$. Prove that every prime ideal of $R$ is maximal.
Let $P \subseteq R$ be a prime ideal. Now $R/P$ is an integral domain. Let $a + P \in R/P$ be nonzero; there exists $n \geq 2$ such that $a^n = a$. In particular, $a^n + P = a + P$, so that $a(a^{n-1} - 1) + P = 0 + P$. Since $a \notin P$, we have $a^{n-1} + P = 1 + P$, so that $(a+P)(a^{n-2} + P) = 1 + P$. Thus $R/P$ is a division ring, and since $R$ is commutative, $R/P$ is a field. Thus $P \subseteq R$ is maximal.
By nbloomf, on September 30, 2010 at 11:00 am, under AA:DF. Tags: commutative ring, maximal ideal, prime ideal, ring. | {"url":"http://crazyproject.wordpress.com/2010/09/30/a-sufficient-condition-for-the-ring-property-that-every-prime-ideal-is-maximal/","timestamp":"2014-04-17T22:07:33Z","content_type":null,"content_length":"70604","record_id":"<urn:uuid:8f8cc47f-a396-43cc-bd09-4eb0ca1726fc>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00659-ip-10-147-4-33.ec2.internal.warc.gz"}
[Numpy-discussion] Slicing/selection in multiple dimensions simultaneously
Mike Ressler mike.ressler@alum.mit....
Tue Sep 11 17:11:53 CDT 2007
The following seems to be a wart: is it expected?
Set up a 10x10 array and some indexing arrays:
Suppose I want to extract only the "even" numbered rows from a - then
print a[q,:]
<works - output deleted>
Every fifth column:
print a[:,r]
<works - output deleted>
Only the even rows of every fifth column:
print a[q,r]
<type 'exceptions.ValueError'> Traceback (most recent call last)
/.../.../.../<ipython console> in <module>()
<type 'exceptions.ValueError'>: shape mismatch: objects cannot be
broadcast to a single shape
But, this works:
print a[q,:][:,r]
[[ 0 5]
[20 25]
[40 45]
[60 65]
[80 85]]
So why does the a[q,r] form have problems? Thanks for your insights.
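A minimal reconstruction of the setup (array shapes assumed, since the original definitions were elided): NumPy fancy indexing broadcasts the index arrays against each other, so a[q, r] needs q and r to have compatible shapes. To select the cross product of rows and columns, np.ix_ builds the open mesh:

```python
import numpy as np

a = np.arange(100).reshape(10, 10)
q = np.arange(0, 10, 2)    # even-numbered rows, shape (5,)
r = np.arange(0, 10, 5)    # every fifth column, shape (2,)

# a[q, r] raises a broadcast error: shapes (5,) and (2,) don't align.
print(a[np.ix_(q, r)])     # same result as a[q, :][:, r]
```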
More information about the Numpy-discussion mailing list | {"url":"http://mail.scipy.org/pipermail/numpy-discussion/2007-September/029164.html","timestamp":"2014-04-17T16:02:30Z","content_type":null,"content_length":"3741","record_id":"<urn:uuid:ccbc45b0-dc77-4ef0-909a-4b08dca322b9>","cc-path":"CC-MAIN-2014-15/segments/1398223204388.12/warc/CC-MAIN-20140423032004-00235-ip-10-147-4-33.ec2.internal.warc.gz"} |
Big Bang Nucleosynthesis: Probing the First 20 Minutes -
G. Steigman
2.4. Nonstandard BBN
The predictions of the primordial abundance of ^4He depend sensitively on the early expansion rate (the Hubble parameter H) and on the amount - if any - of a ν_e - ν̄_e asymmetry (the ν_e chemical potential μ_e or the neutrino degeneracy parameter ξ_e). In contrast to ^4He, the BBN-predicted abundances of D, ^3He and ^7Li are determined by the competition between the various two-body production/destruction rates and the universal expansion rate. As a result, the D, ^3He, and ^7Li abundances are sensitive to the post-e^± annihilation expansion rate, while that of ^4He depends on both the pre- and post-e^± annihilation expansion rates; the former determines the "freeze-in" and the latter modulates the importance of β-decay (e.g., Kneller & Steigman 2003). Also, the primordial abundances of D, ^3He, and ^7Li, while not entirely insensitive to neutrino degeneracy, are much less affected by a nonzero ξ_e (e.g., Kang & Steigman 1992). Each of these nonstandard cases will be considered below.
2.4.1. Additional Relativistic Energy Density
The most straightforward variation of SBBN is to consider the effect of a nonstandard expansion rate H' ≠ H. To quantify the deviation from the standard model it is convenient to introduce the "expansion rate factor" (or speedup/slowdown factor) S, where

S ≡ H'/H.

Such a nonstandard expansion rate might result from the presence of "extra" energy contributed by new, light (relativistic at BBN) particles "X". These might, but need not, be additional flavors of active or sterile neutrinos. For X particles that are decoupled, in the sense that they do not share in the energy released by e^± annihilation, it is convenient to account for the extra contribution to the standard-model energy density by normalizing it to that of an "equivalent" neutrino flavor (Steigman et al. 1977),

ρ_X ≡ ΔN_ν ρ_ν.

For SBBN, ΔN_ν = 0 (N_ν ≡ 3 + ΔN_ν) and for each such additional "neutrino-like" particle (i.e. any two-component fermion), if T_X = T_ν, then ΔN_ν = 1; if X should be a scalar, ΔN_ν = 4/7. However, it may well be that the X have decoupled even earlier in the evolution of the Universe and have failed to profit from the heating when various other particle-antiparticle pairs annihilated (or unstable particles decayed). In this case, the contribution to ΔN_ν from each such particle will be < 1 (< 4/7). Henceforth we drop the X subscript. Note that, in principle, we are considering any term in the energy density that scales like "radiation" (i.e. decreases with the expansion of the Universe as the fourth power of the scale factor). In this sense, the modification to the usual Friedman equation due to higher dimensional effects, as in the Randall-Sundrum model (Randall & Sundrum 1999a, b; see also Cline, Grojean, & Servant 1999; Binetruy et al. 2000; Bratt et al. 2002), may be included as well. The interest in this latter case is that it permits the possibility of an apparent negative contribution to the radiation density (ΔN_ν < 0; S < 1). For such a modification to the energy density, the pre-e^± annihilation energy density in Equation 1 is changed to

ρ' = ρ (1 + 7ΔN_ν/43).   (9)

Since any extra energy density (ΔN_ν > 0) speeds up the expansion of the Universe (S > 1), the right-hand side of the time-temperature relation in Equation 3 is smaller by the square root of the factor in parentheses in Equation 9.
In the post-e^± annihilation Universe the extra energy density is diluted by the heating of the photons, so that

ρ'/ρ = 1 + 0.135 ΔN_ν   (post-e^± annihilation).

While the abundances of D, ^3He, and ^7Li are most sensitive to the baryon density (η_10), the ^4He mass fraction (Y) provides the best probe of the expansion rate. This is illustrated in Figure 2 where, in the ΔN_ν - η_10 plane, are shown isoabundance contours for D/H and Y_P (the isoabundance curves for ^3He/H and for ^7Li/H, omitted for clarity, are similar in behavior to that of D/H). The trends illustrated in Figure 2 are easy to understand in the context of the discussion above. The higher the baryon density (η_10), the faster primordial D is destroyed, so the relic abundance of D is anticorrelated with η_10. But, the faster the Universe expands (ΔN_ν > 0), the less time is available for D destruction, so D/H is positively, albeit weakly, correlated with ΔN_ν. In contrast to D (and to ^3He and ^7Li), since the incorporation of all available neutrons into ^4He is not limited by the nuclear reaction rates, the ^4He mass fraction is relatively insensitive to the baryon density, but it is very sensitive to both the pre- and post-e^± annihilation expansion rates (which control the neutron-to-proton ratio). The faster the Universe expands, the more neutrons are available for ^4He. The very slow increase of Y_P with η_10 is a reflection of the fact that for a higher baryon density, BBN begins earlier, when there are more neutrons. As a result of these complementary correlations, the pair of primordial abundances y_D ≡ 10^5 (D/H)_P and Y_P, the ^4He mass fraction, provide observational constraints on both the baryon density (η_10) and on the expansion rate factor S (or on ΔN_ν) when the Universe was some 20 minutes old. Comparing these to similar constraints from when the Universe was some 380 Kyr old, provided by the WMAP observations of the CBR polarization and the spectrum of temperature fluctuations, provides a test of the consistency of the standard models of cosmology and of particle physics and further constrains the allowed range of the present-Universe baryon density (e.g., Barger et al. 2003a, b; Crotty, Lesgourgues, & Pastor 2003; Hannestad 2003; Pierpaoli 2003).
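In code, the speedup factor implied by the relations reconstructed above (the 7/43 and 0.135 coefficients are as assumed there):

```python
import math

def speedup(delta_n_nu, post_annihilation=False):
    # S = H'/H = sqrt(rho'/rho)
    coeff = 0.135 if post_annihilation else 7.0 / 43.0
    return math.sqrt(1.0 + coeff * delta_n_nu)

print(speedup(1.0))         # ~1.078 for one extra neutrino flavor (pre)
print(speedup(1.0, True))   # ~1.065 (post)
```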
Figure 2. Isoabundance curves for D and ^4He in the ΔN_ν - η_10 plane. The solid curves are for ^4He (from top to bottom: Y = 0.25, 0.24, 0.23). The dotted curves are for D (from left to right: y_D ≡ 10^5 (D/H) = 3.0, 2.5, 2.0). The data point with error bars corresponds to y_D = 2.6 ± 0.4 and Y_P = 0.238 ± 0.005; see the text for discussion of these abundances.
The baryon-to-photon ratio η provides a dimensionless measure of the universal baryon asymmetry, which is very small (η ~ 10^-9). By charge neutrality the asymmetry in the charged leptons must also be of this order. However, there are no observational constraints, save those to be discussed here (see Kang & Steigman 1992; Kneller et al. 2001, and further references therein), on the magnitude of any asymmetry among the neutral leptons (neutrinos). A relatively small asymmetry between electron type neutrinos and antineutrinos (ξ_e ~ 10^-2) can have a significant impact on the early-Universe ratio of neutrons to protons, thereby affecting the yields of the light nuclides formed during BBN. The strongest effect is on the BBN ^4He abundance, which is neutron limited. For ξ_e > 0, there is an excess of neutrinos (ν_e) over antineutrinos (ν̄_e), and the two-body reactions regulating the neutron-to-proton ratio (Eq. 5) drive down the neutron abundance; the reverse is true for ξ_e < 0. The effect of a nonzero ξ_e asymmetry on the relic abundances of the other light nuclides is much weaker. This is illustrated in Figure 3, which shows the D and ^4He isoabundance curves in the ξ_e - η_10 plane. The nearly horizontal ^4He curves reflect the weak dependence of Y_P on the baryon density, along with its significant dependence on the neutrino asymmetry. In contrast, the nearly vertical D curves reveal the strong dependence of y_D on the baryon density and its weak dependence on any neutrino asymmetry (^3He/H and ^7Li/H behave similarly: strongly dependent on η_10, only weakly on ξ_e). This complementarity between y_D and Y_P permits the pair {η_10, ξ_e} to be determined once the primordial abundances of D and ^4He are inferred from the appropriate observational data.
Figure 3. Isoabundance curves for D and ^4He in the ξ_e - η_10 plane. The solid curves are for ^4He (from top to bottom: Y_P = 0.23, 0.24, 0.25). The dotted curves are for D (from left to right: y_D ≡ 10^5 (D/H) = 3.0, 2.5, 2.0). The data point with error bars corresponds to y_D = 2.6 ± 0.4 and Y_P = 0.238 ± 0.005; see the text for discussion of these abundances. | {"url":"http://ned.ipac.caltech.edu/level5/March04/Steigman/Steigman2_4.html","timestamp":"2014-04-19T17:05:47Z","content_type":null,"content_length":"16639","record_id":"<urn:uuid:4e95a8a6-a884-4690-bf7d-60038c57fca3>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00265-ip-10-147-4-33.ec2.internal.warc.gz"}
Lemont, IL Math Tutor
Find a Lemont, IL Math Tutor
...I also offer online lessons with a special discount--contact me for more details if interested.Before I knew I was going to teach French, I was originally going to become a math teacher. I
have had numerous students tell me that I should teach math, as they really enjoy my step-by-step, simple breakdown method. I have helped a lot of people conquer their fear of math.
16 Subjects: including algebra 1, ACT Math, English, geometry
I will help your child learn by using personalized teaching solutions. I have been successful in the past tutoring high school students in math and science, but enjoy all ages and skill levels.
As a current dental student, my passion for learning is proven every day, and I hope to inspire your child to achieve their academic goals.
17 Subjects: including algebra 1, algebra 2, biology, statistics
...I was an advanced math student, completing the equivalent of Geometry before high school. I continued applying geometric skills in high school, where I was a straight A student and completed
calculus as a junior. Finally, I tutored math through college to stay fresh, and would be able to work with any student needing assistance at whatever point in their development they encounter
13 Subjects: including algebra 1, algebra 2, calculus, geometry
...Prealgebra is the precursor to future Algebra classes and essentially all Math courses. Work here can begin with simple multiplication and division, working up to beginning Algebraic
equations. The main focus, however, is generally on word problems.
11 Subjects: including algebra 1, algebra 2, calculus, geometry
...I have directed my own Archaeology project in Persia. I have taught archaeology at the Oriental Institute Museum, U of Sydney, and to private classes. I have been an archaeologist for almost
20 years.
10 Subjects: including calculus, geometry, trigonometry, algebra 1 | {"url":"http://www.purplemath.com/lemont_il_math_tutors.php","timestamp":"2014-04-19T10:02:11Z","content_type":null,"content_length":"23859","record_id":"<urn:uuid:fbf973a0-ef4e-4ad7-879f-2ad509f2afc0>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00480-ip-10-147-4-33.ec2.internal.warc.gz"}
Math game using playing cards?
Re: Math game using playing cards?
Hi Jc;
Welcome to the forum!
Jc wrote:
I didn't bookmark the page
Uh Oh! That's a killer.
I know of multiplication war which is a pretty good game. How about times table football? Any of that sound familiar?
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof. | {"url":"http://www.mathisfunforum.com/viewtopic.php?pid=159645","timestamp":"2014-04-18T10:39:07Z","content_type":null,"content_length":"9870","record_id":"<urn:uuid:6a898d52-6f09-4d06-84f5-f04242f09205>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00031-ip-10-147-4-33.ec2.internal.warc.gz"} |
We Use Math Blog
San Antonio Spurs center David Robinson received a BS in mathematics from the United States Naval Academy.
Read More
"My mathematics tools help me make life better for my community, they help me make wise choices as a consumer and they give me a valuable skill that all employers want."
Read More
"While education in any field is a worthwhile pursuit, an education in mathematics is one that is used as a basis for many academic career choices."
Read More
"I thoroughly enjoy the career I’ve chosen, and I have no question that I wouldn’t be here if I had not started my training with a degree in mathematics. The analytical problem-solving skills one
develops working through a mathematics curriculum are highly valuable and transferable to any future aspiration."
Read More | {"url":"http://weusemath.org/?page_id=346&paged=70","timestamp":"2014-04-20T23:27:55Z","content_type":null,"content_length":"36350","record_id":"<urn:uuid:7903d511-ae5d-4193-88fe-c2d90dce0946>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00378-ip-10-147-4-33.ec2.internal.warc.gz"} |
How do you graph xy + 2x - y = 5 ?
December 7th 2010, 09:59 AM
How do you graph xy + 2x - y = 5 ?
How do you graph xy + 2x - y = 5 ? I'm currently using Lybniz in Ubuntu 10.10
Or could you suggest any other software ?
Thank you.
December 7th 2010, 10:32 AM
Collect the y terms on one side $xy-y = 5-2x$ and factorise $y(x-1) = 5-2x$ and divide to give $y = \frac{5-2x}{x-1}$
Bear in mind there will be a vertical asymptote at $x=1$
As for graphing it I use kmplot but since it will pull in a lot of KDE/Qt dependencies. Of course plotting the graph should be simple in lybniz
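If Python is an option too, a minimal matplotlib sketch (my suggestion, not from the original reply) handles the asymptote by masking it:

```python
import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(-4, 6, 1000)
y = (5 - 2 * x) / (x - 1)
y[np.abs(x - 1) < 0.02] = np.nan   # hide the jump across x = 1
plt.plot(x, y)
plt.axvline(1, linestyle="--")     # the vertical asymptote
plt.ylim(-20, 20)
plt.show()
```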
Edit: yay for another linux user (Rock) | {"url":"http://mathhelpforum.com/math-software/165585-how-do-you-graph-xy-2x-y-5-a-print.html","timestamp":"2014-04-18T19:50:58Z","content_type":null,"content_length":"4493","record_id":"<urn:uuid:b7191eaa-65af-405c-bcb8-765fd8699b9c>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00006-ip-10-147-4-33.ec2.internal.warc.gz"} |
Westville Grove, NJ Precalculus Tutor
Find a Westville Grove, NJ Precalculus Tutor
...I have a background in mathematics that currently reaches up to calculus III. During my high school years, I had the chance to tutor younger students in topics with which they were having
trouble. I have tutored mathematics from algebra I up through precalculus.
7 Subjects: including precalculus, chemistry, physics, calculus
...I am a world-renowned expert in the Maple computer algebra system, which is used in many math, science, and engineering courses. My tutoring is guaranteed: During our first session, I will
assess your situation and determine a grade that I think you can get with regular tutoring. If you don't get that grade, I will refund your money, minus any commission I paid to this website.
11 Subjects: including precalculus, calculus, statistics, ACT Math
...This gave me the opportunity to tutor students in a variety of math subjects, including Differential Equations. I have a bachelor's degree in secondary math education. During my time in
college, I took one 3-credit course in Linear Algebra.
11 Subjects: including precalculus, calculus, geometry, algebra 1
...I have experience in both single and multiple variable calculus. I have experience in both derivatives and integration. I have taken several courses in geometry and have experience with shapes
and angles.
13 Subjects: including precalculus, calculus, geometry, GRE
...The Detailed Version of the Details: You might be asking yourself, "Why would a mechanical engineer be a good tutor for me or my child?" While I was working as a writing tutor, I was being
trained to think critically and solve complex problems using calculus and differential equations. I bring...
37 Subjects: including precalculus, reading, writing, English | {"url":"http://www.purplemath.com/Westville_Grove_NJ_Precalculus_tutors.php","timestamp":"2014-04-18T05:40:46Z","content_type":null,"content_length":"24668","record_id":"<urn:uuid:6396c25f-d5bf-4767-ab86-3c7ab736e425>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00660-ip-10-147-4-33.ec2.internal.warc.gz"}
Flushing, NY Prealgebra Tutor
Find a Flushing, NY Prealgebra Tutor
...Starting in high school and continuing till today, I have tutored students in a variety of subjects ranging from college-level writing and math to SAT and GRE prep. I have coached kids in
tennis and basketball as well as instructed beginners in guitar and piano. I am also an ordained rabbi and have tutored in Bible, Talmud, Cantillation, and Jewish Law.
16 Subjects: including prealgebra, reading, writing, biology
...I recently studied Elementary Education & Mathematics at Queens College. I am in the college Initial Certificate program now and I worked for a year as a tutor for the Elementary Mind Math.
Until now, I have been tutoring pre-Algebra, Algebra, and general Math.
14 Subjects: including prealgebra, calculus, algebra 1, algebra 2
...I also cover important strategies such as: when and how to guess; which questions, if any, to skip; and how to avoid common errors. For many GRE students, the quantitative reasoning (QR)
section is the most intimidating. Sometimes that's because students feel they weren't very good at math in school, or because it has been awhile since they have used formal equations in their
everyday life.
18 Subjects: including prealgebra, geometry, GRE, algebra 1
...For example, I set up a Khan Academy challenge to help my SAT students practice on their own while I tracked their progress and efforts remotely. Beyond tutoring, I have much experience
working with children. I mentor through Infinite Family and volunteer with youth development projects through New York Cares.
4 Subjects: including prealgebra, algebra 1, SAT math, elementary math
I believe that everyone can do well in Mathematics and Science. It does however require effort. I look forward to showing you the way, but the effort has to come from you.
11 Subjects: including prealgebra, calculus, GRE, GMAT | {"url":"http://www.purplemath.com/flushing_ny_prealgebra_tutors.php","timestamp":"2014-04-21T10:40:55Z","content_type":null,"content_length":"24181","record_id":"<urn:uuid:9fd7aa49-8d54-429c-bbb8-fcf5e4f90c46>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00422-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Posters
• Includes pentagon, hexagon, octagon, trapezoid, cone, and right triangle. Common Core: Geometry K.3.1, 1.G.1 2.G.1, 3.G.1
• Includes pentagon, hexagon, octagon, trapezoid, cone, and right triangle. Common Core: Geometry K.3.1, 1.G.1 2.G.1, 3.G.1
• Five colorful posters with number and picture. Great for early childhood rooms.
• Cute colorful poster for Flips, Slides, and Turns. Coordinates with PowerPoints and Interactives on the membership site.
• Includes circle, oval, square, rectangle, triangle, star, and diamond. Common Core: Geometry K.3.1, 1.G.1 2.G.1, 3.G.1
• These fish-themed pages serve as a colorful guide for word problems, displaying the mathematical symbols (+/-) that accompany the most common addition and subtraction keywords.
• Includes circle, oval, square, rectangle, triangle, star, and diamond. Common Core: Geometry K.3.1, 1.G.1 2.G.1, 3.G.1
• A set of three color posters that illustrate types of polygons, quadrilaterals and triangles. CC: Math: 5.G.B.3-4
• Five signs/posters that match our color version. Great for kids to color or to photocopy on colored paper.
• Graphic chart to help students study volumes and areas of geometric shapes. Common Core: Geometry 6.G.A1, 5.G.B3, 4.MD.3
• Five signs/posters that match our color version. Great for kids to color or to photocopy on colored paper.
• A colorful display of the rules for determining area and volume. Common Core: Geometry 6.G.A1, 5.G.B3, 4.MD.3
• Printable 10x10 board with squares numbered from 1-100. Use with hundred square activities and the 100th day of school. Use to teach addition and subtraction.
• One poster says "Addition Keywords" and one says "Subtraction Keywords"; these match our colorful fish-themed addition and subtraction posters.
• Beans, Pennies, Buttons... and more. Label your math manipulatives and tools with these colorful labels. Color illustrations. Eight labels per page, three pages.
• Posters with illustrative examples. One page each: mean, median, mode.
• A set of three posters featuring multiplication word problems with U.S. currency.
• A poster explaining the rules of division of negative numbers.
• Three poster set. Poster one says "Math Symbols." Posters two and three each show 18 labeled symbols related to basic mathematics.
• Includes line, line segment, points, end points, ray, intersecting, parallel and perpendicular line posters. Common Core: Geometry: 4.G.A.1, 5.GA.1
• These fish-themed pages serve as a colorful guide for word problems, displaying the mathematical symbol (÷) that accompanies the most common Division Keywords.
• These fish-themed pages serve as a colorful guide for word problems, displaying the mathematical symbol (x) that accompanies the most common Multiplication Keywords.
• Large signs for classroom display. One page for each numeral.
• Directions for determining circumference, area, diameter, etc. Common Core: Geometry 6.G.A1, 5.G.B3, 4.MD.3
• "A number is divisible by 2 if the number is even." Simple rules for divisibility.
• Commutative, associative, distributive, and identity property. Common Core: Math: 3.0A.5 3.0A.6, 4.OA.1
• These fish-themed pages serve as a colorful guide for word problems, displaying the mathematical symbols (+/-) that accompany the most common addition and subtraction keywords.
• Colorful math posters that help teach the basic concepts of subtraction for older students.
• Five pages of examples of word problems converted to equations.
• Colorful math posters that help teach the basic concepts of addition for older students.
• Colorful math posters that help teach the basic concepts of addition for older students.
• "When you read PRODUCT... multiply!" Five posters of guidelines to help with reading word problems as equations.
• Color poster with directions for determining circumference, area, diameter, etc. Common Core: Geometry 6.G.A1, 5.G.B3, 4.MD.3
• Eight data posters relating to data analysis, median, mode, range, and minimum and maximum value. Common Core: Grade 6 Statistics and Probability: 6.SP.A.1, 6.SPA.2
• Simple poster explains how to divide decimals by a whole number. Common Core: 6.NS.3
• "When you read PER... divide!" Five posters of guidelines to help with reading word problems as equations.
• A set of three posters featuring the use of addition word problems with U.S. currency.
• All 20 of our shape posters in one easy download: quadrilaterals (square, rectangle, rhombus, parallelogram), triangles (equilateral, isosceles, scalene, right, obtuse, acute), curved shapes (circle, oval, crescent), other polygons (pentagon, hexagon, octagon); one per page, each with a picture and a definition. Common Core: Geometry K.3.1, 1.G.1 2.G.1, 3.G.1
• A set of three posters featuring the use of subtraction word problems with U.S. money.
• Set of 9 insect-themed posters illustrating the addition of ten ones and further ones to find sums 11-19. Correlates to common core math standards. Common Core Math: K.NBT.1
• A set of five posters each with nine symbols related to mathematics. Includes operation symbols, relation symbols, geometry symbols, set symbols and miscellaneous symbols.
• Brief introductions to basic shapes: quadrilateral shapes, triangles, and curved shapes; one set per page, each with a picture and a definition.
• "When you read PER... divide!" Five posters of guidelines to help with reading word problems as equations. (with the / symbol)
• A black and white sign illustrates decimal point places: tenths, hundredths, thousandths.
• An enlarged version of abcteach's Jelly Bean Color Bar Graph file, for use on a board.
• A one page illustrated chart of roman numerals with their English words from 1 to 20 by ones and from 30 to 100 by tens.
• A one page illustrated chart of a clock face and color coded explanation of the small and large hands. Common Core Math: Measurement & Data 1.MD.3
• Eight colorful math posters that help teach the concepts of area, perimeter and dimensional figures. Common Core: Geometry 6.G.A1, 5.G.B3, 4.MD.3 | {"url":"http://www.abcteach.com/directory/subjects-math-math-posters-4118-2-0","timestamp":"2014-04-18T05:55:59Z","content_type":null,"content_length":"222559","record_id":"<urn:uuid:a55bce8a-4fd0-4a81-ba0d-ea77bc41247f>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00030-ip-10-147-4-33.ec2.internal.warc.gz"}
Parallel lines
From Greek: para allelois "beside one another"
Lines are parallel if they lie in the same plane, and are the same distance apart over their entire length
Try this
Drag any orange dot at the points P or Q. As the line PQ moves, the line RS will remain parallel to it.
Parallel lines remain the same distance apart over their entire length. No matter how far you extend them, they will never meet.
The arrows
To show that lines are parallel, we draw small arrow marks on them. In the figure above, note the arrows on the lines PQ and RS. This shows that these lines are parallel. If the diagram has another set of parallel lines they would have two arrows each, and so on.
Shorthand notation
When we write about parallel lines there is a shorthand we can use. We can write PQ ∥ RS, which is read as "the line PQ is parallel to the line RS".
Constructing a parallel line
In the Constructions chapter, there is an animated demonstration of how to construct a line parallel to another that passes through a given point, using only a compass and straightedge. See
Constructing a parallel line through a point.
Parallel planes
Planes can be parallel to each other also. It means that the two planes are the same perpendicular distance apart everywhere. So, for example, the cards in a deck of cards are parallel.
An example of this is a cylinder, where the two bases (ends) are always parallel to each other.
Other parallel topics
Angles associated with parallel lines
(C) 2009 Copyright Math Open Reference. All rights reserved | {"url":"http://www.mathopenref.com/parallel.html","timestamp":"2014-04-20T21:23:22Z","content_type":null,"content_length":"9981","record_id":"<urn:uuid:d42e1d46-8936-42c0-8031-654307b55f0a>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00319-ip-10-147-4-33.ec2.internal.warc.gz"}
January 15-16, 2004
ITEM 122-2701-R0104 Authorization to Confer The Title of Professor Emeritus of Mathematics upon Professor William Self; Montana State University-Billings
WHEREAS, Professor William Self has completed a distinguished teaching career of thirty-four years, including fifteen at Montana State University-Billings, formerly Eastern Montana College;
WHEREAS, Professor Self has received numerous awards including a National Science Foundation Fellowship and memberships in Pi Mu Epsilon Mathematics Honor Society and Kappa Mu Epsilon Mathematics
Honor Society;
WHEREAS, he has presented many papers and served as commentator at state, regional, national, and international conferences;
WHEREAS, he has published papers on harmonic analysis, robotics, Fourier analysis, and continued fractions in national and international journals;
WHEREAS, he is a member of a school of mathematicians, Frank Forelli and his students, who, over the past three decades, have made internationally recognized contributions to the theory of function
algebras, complex variables, and harmonic analysis;
WHEREAS, he is a member of learned societies such as The American Mathematical Society, the Mathematical Association of America, and the Society for Industrial and Applied Mathematics;
WHEREAS, he served Montana State University-Billings on numerous committees such as the Library Committee, the Professional Development Committee, and the Travel Committee;
WHEREAS, he served the faculty of Montana State University-Billings as Faculty Grievance Officer;
WHEREAS, he served the mathematics community of Montana as co-organizer of two Montana Academy of Science conferences, one Collegiate Mathematics Interface Conference, and four Big Sky Analysis
WHEREAS, he served the international mathematics community for many years as reviewer for Mathematical Reviews;
WHEREAS, he mentored many students, giving unstintingly of his time out of class and serving as faculty advisor to the Math and Computer Science Club;
WHEREAS, he has become a recognized expert in computer algebra systems and has lead the state of Montana in the incorporation of Mathematica into the mathematics curriculum;
AND WHEREAS, Montana State University-Billings wishes to honor Professor Self for his outstanding service to education and his discipline;
THEREFORE, the Board of Regents of Higher Education, on the recommendation of the Chancellor of Montana State University-Billings, confers upon Dr. William Self the title of Professor of Mathematics
Emeritus, with all the rights, privileges, and responsibilities pertaining thereto. | {"url":"http://www.mus.edu/board/meetings/Archives/ITEM122-2701-R0104.htm","timestamp":"2014-04-20T16:56:21Z","content_type":null,"content_length":"7547","record_id":"<urn:uuid:e0986705-d1f1-4a99-8f19-9bb2ed5b3058>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00238-ip-10-147-4-33.ec2.internal.warc.gz"} |
Albert Einstein 1911-12, 1922-23
Several events related to Albert Einstein's life have occurred in recent days and months. If you consider yourself a sort of Einstein fan, allow me to mention some of them.
First, you may finally buy Einstein's brain for $9.99 (or $13.99 now?), no strings or hair attached. See Google News or go to the iTunes AppStore.
Second, Caltech and Princeton University Presses teamed up and released the 13th volume of Einstein papers. They cover January 1922-March 1923 and you may already buy the book for $125 at the PUP
website: it has over 900 pages. Einstein is travelling a lot, is ahead of his time, already (or still) speaks Hebrew in British Palestine ;-), and doesn't even mention his Nobel prize. Wired has also written about the new book.
Third, there was a conference of Einstein fans among relativists three months ago in Prague. It was a centennial one because Einstein was a full professor (for the first time!) at the local Charles
University (German section: then named Karl-Ferdinands Universität) between 1911 and 1912. He left after the third semester, in July 1912, almost exactly 100 years ago.
You may want to read some of the dozens of presentations. I recommend the one by Prof Jiří Bičák, the main organizer and my undergraduate ex-instructor of general relativity:
Einstein’s Days and Works in Prague: Relativity Then and Now
It's a fun PDF file that shows a lot about the social context as well as the state of his thoughts about general relativity – which wasn't complete yet – at that time.
He would work at 7 (at that time: 3) Viničná Street (the name is related to wineries; see Street View), which is a building of the Faculty of Natural Sciences these days. Of course, it was four decades before the Faculty of Mathematics and Physics was established, but I know the place well because it's less than 500 meters (direct line) from the "Karlov" building of the Faculty of Mathematics and Physics, my Alma Mater.
In April 1911, he wrote this to his friend Grossmann:
I have a magnificent institute here in which I work very comfortably. Otherwise it is less homey (Czech language, bedbugs, awful water, etc.). By the way, Czechs are more harmless than one thinks.
As big a compliment to the Czechs as you can get. ;-) Two weeks later, he added this in a letter to M. Besso:
The city of Prague is very fine, so beautiful that it is worth a long journey for itself.
Bičák adds lots of fun stuff about the local reaction to Einstein's presence. But there's quite some physics he did in Prague – essential developments that had to occur for the general theory of
relativity to be born. Einstein himself summarized the importance of his year in Prague rather concisely, in a foreword to the Czech translation of a 1923 book explaining relativity:
I am pleased that this small book is finally released in the mother tongue of the country in which I finally found enough concentration that was needed for the basic idea of general relativity,
one that I have been aware of since 1908, to be gradually dressed up into less fuzzy clothes. In the silent rooms of the Institute for Theoretical Physics of Prague's German University in Viničná
Street, I discovered that the equivalence principle implied the bending of light near the Sun and that it was large enough to be observable, even though I was unaware that 100 years earlier, a
similar corollary was extracted from Newton's mechanics and his theory of light emission. In Prague, I also discovered the consequence of the principles that says that spectral lines are shifted
towards the red end, a consequence that hasn't been perfectly experimentally validated yet.
Well, be sure that as of today, it's been validated for half a century – in an experiment that took place in another building where I have worked for 6 years (and yes, the Wikipedia picture of the building is mine, too).
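If the experiment meant here is the Pound–Rebka measurement over a roughly 22.5-meter tower (my assumption; the post doesn't name it), the predicted fractional frequency shift follows from the weak-field limit of the rate-of-time formula derived below, Δν/ν = gh/c². A quick sketch:

```python
g = 9.81           # m/s^2, Earth's surface gravity
h = 22.5           # m, assumed tower height (Pound-Rebka used about this)
c = 299_792_458.0  # m/s

# Weak-field limit of the 1 + Phi/c^2 gravitational rate-of-time formula
shift = g * h / c**2
print(f"{shift:.2e}")  # ~2.5e-15, a tiny but measurable effect
```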
The Czech translation I used to translate it to modern Lumo English was probably obtained by translating a German original and you will surely forgive me some improvements.
Note that it was a pretty powerful combination of insights: gravitational red shift as well as the light bending (observed during the 1919 solar eclipse) were discovered in Prague. It was years
before Einstein had the final form of the equations of general relativity, Einstein's equations.
Today, people – including people who consider themselves knowledgeable about physics – often fail to understand that many insights may be physically deduced even if one doesn't possess the final
exact form of the equations. Principles imply a lot. They may be logically processed to derive new insights. At the beginning of the 20th century, people like Einstein could do such things very well.
Many people today would almost try to outlaw such reasoning – principled reasoning that used to define good theoretical physicists. They would love to outlaw principles themselves. They would love to
claim it is illegal to believe any principles, it is illegal to be convinced that any principles are true.
Albert Einstein and Johanna Fantová, his Czech yachting buddy while in the U.S.
The derivation of the bending of light is a somewhat annoying argument and the right numerical factor may only be obtained if you're careful about the equations of GR. So while I was not sure whether
Einstein got the right numerical coefficient in 1911-12, I feel that I wouldn't trust it, anyway. Up to the numerical coefficient, the answer may be calculated from Newton's mechanics. (Well, later I
searched for the answer and
Einstein's numerical coefficient in 1911
was indeed wrong, the same as the Newtonian one, i.e. one-half of the full GR result.)
Just imagine that you shoot a bullet whose speed is the speed of light – Newton's light corpuscle – around the Sun so that the bullet barely touches the Sun and you calculate how much the bullet's
trajectory is bent towards the Sun due to the star's gravitational attraction. You integrate the appropriate component of the acceleration to find out the change of the velocity, take the ratio of
this perpendicular velocity to the original component of the velocity, and that's the angle.
General relativity just happens to give you a result that is almost the same: well, it's exactly 2 times larger.
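To attach numbers to this comparison (my addition, not in the original post): integrating the perpendicular Newtonian acceleration GMb/(b²+x²)^(3/2) along a straight grazing path gives Δv⊥ = 2GM/(bc), hence a deflection angle α ≈ 2GM/(bc²), while full GR doubles it. A quick numerical check for the Sun:

```python
import math

G = 6.674e-11      # m^3 kg^-1 s^-2, gravitational constant
M = 1.989e30       # kg, solar mass
b = 6.957e8        # m, solar radius = impact parameter of a grazing ray
c = 299_792_458.0  # m/s

newton = 2 * G * M / (b * c**2)  # Newtonian (and Einstein's 1911) value
gr = 2 * newton                  # full general relativity

arcsec = math.degrees(1) * 3600  # radians -> arcseconds
print(f"Newtonian: {newton * arcsec:.2f} arcsec")  # about 0.88
print(f"GR:        {gr * arcsec:.2f} arcsec")      # about 1.75
```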
It's perhaps more insightful to discuss the derivation of the gravitational red shift where one may reasonably trust even the numerical coefficient and it is indeed right. His argument went like this
(optimization by your humble correspondent).
Consider a carousel rotating by the angular speed \(\omega\). Objects standing at the carousel at distance \(R\) from the axis of rotation will feel the centrifugal acceleration \(R\omega^2\). They
will also move by the speed \(v=R\omega\). Special relativity guarantees that their clocks will tick slower (time dilation), by the factor of \[
\frac{t_\text{at carousel}}{t_\text{at the center}} =
\sqrt{1-\frac{v^2}{c^2}} = \sqrt{ 1-\frac{R^2\omega^2}{c^2}} \approx 1 - \frac{R^2\omega^2}{2c^2} .
\] Observers in the rotating frame of the carousel will interpret the centrifugal force as a gravitational field. And because the velocity of all objects standing on the carousel is zero from the rotating frame's viewpoint, so that there is no special relativistic time dilation in this frame, the slowing down of time must be a consequence of the gravitational field.
The coefficient determining how much the time is slowed down only depends on \(R\omega\). How is this quantity related to some observables describing the gravitational field? Well, the gravitational
acceleration is \(R\omega^2\), as mentioned previously, and it may be integrated to get the gravitational potential:\[
\Phi = -\int_0^R \dd\rho \,\rho \omega^2 = -\frac{R^2\omega^2}{2}.
\] Note that the gravitational potential is negative away from \(R=0\) because the gravitational (=centrifugal) force is directed outwards so outwards corresponds to "down" in the analogy with the
usual Earth's gravitational field. Now, we see that the gravitational potential depends on \(R\omega\) only as well so it's the right quantity that determines the gravitational slowdown of the time,
i.e. the gravitational redshift. Substituting this formula for \(\Phi\), we see that\[
\frac{t_\text{at carousel}}{t_\text{at the center}} = \dots \approx 1+ \frac{\Phi}{c^2}.
\] So the gravitational potential \(\Phi\) is really related to the "rate of time" which we would call \(\sqrt{g_{00}}\) these days. Einstein could realize this fact several years before he could
write down the equations of the general theory of relativity.
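As a quick numerical sanity check of the last step (my addition), the exact special relativistic factor and the gravitational-potential form agree to the stated order in v²/c² for any sensible carousel parameters; the values below are purely illustrative:

```python
import math

c = 299_792_458.0  # m/s
R = 10.0           # m, illustrative carousel radius
omega = 100.0      # rad/s, illustrative angular speed

v = R * omega                       # rim speed in the lab frame
exact = math.sqrt(1 - v**2 / c**2)  # special relativistic time dilation
Phi = -R**2 * omega**2 / 2          # potential in the rotating frame
approx = 1 + Phi / c**2             # gravitational rate-of-time formula

# The two differ only at order (v/c)^4, far below float precision here.
print(exact, approx)
```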
Because special relativity guaranteed that motion unexpectedly influences the flow of time and because by 1911-12, he fully appreciated that the accelerated motion and the inertial forces it causes
may be physically indistinguishable from a gravitational field, he could see that the gravitational field influences the flow of time, too. And he could even extract the right formula in the
Newtonian limit.
Of course, it's safer to work with the full theory. However, such "semi-heuristic" derivations are valid very frequently and you shouldn't just dismiss them, especially if the author of such
arguments seems to know what he or she is doing.
And that's the memo.
P.S. Joseph sent me a link to a rather
fascinating 1912 or so notebook
of Einstein with handwritten puzzles and trivial tensor calculus that Einstein was just learning or co-discovering.
snail feedback (4) :
In the equation after 'Substituting this formula...', I think you have an extra 2.
I erased the factor while proofreading, 20 seconds before I saw your comment. ;-)
Nice explanation, I always need and appreciate step by step derivations such as the one in this article for example ... ;-)
A recent book about Einstein's attempts at unification, by Jeroen Van Dongen, seems interesting.
I have just seen the Amazon and Google Books previews so far, as the book is quite expensive, but those look good. Also, Isaacson's very good biography draws on Van Dongen's thesis on that topic,
which seems to be the basis of the book. | {"url":"http://motls.blogspot.cz/2012/09/albert-einstein-1911-12-1922-23.html","timestamp":"2014-04-19T22:52:37Z","content_type":null,"content_length":"204859","record_id":"<urn:uuid:f10a992e-8ca0-468d-8305-599a7c3cf2d0>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00165-ip-10-147-4-33.ec2.internal.warc.gz"}