Assume an equilateral triangle T of numbers. A move from any non-base number n in T ends at the number immediately below and to the left or right of n. A path in T is a sequence of moves starting at
the apex and ending at a number in the base. The path sum is the sum of the numbers along a path. Given a triangle, find a path with maximum sum among all paths through the triangle.
A brute-force solution is straightforward:
(define (max-sum-path triangle)
  ; Return a pair, the car of which is the path sum of the cdr, which
  ; is a max path through the given triangle. A path is a list of 0s
  ; and 1s indicating left (0) or right (1) moves.
  (let* ((rows (1- (triangle 'row-count)))
         (max-path '())
         (max-sum -1))
    (let loop
        ((row 0) (i 0) (path '()) (sum (triangle 'get-value 0 0)))
      (if (< row rows)
          ; Above the base.
          (let ((r (1+ row)))
            ; Go down and to the left.
            (loop r i (cons 0 path) (+ (triangle 'get-value r i) sum))
            ; Now go down and to the right.
            (let* ((i2 (1+ i))
                   (sum2 (+ (triangle 'get-value r i2) sum)))
              (loop r i2 (cons 1 path) sum2)))
          ; At the base; this path is done.
          (if (or (null? max-path) (> sum max-sum))
              (begin
                (set! max-path path)
                (set! max-sum sum)))))
    (cons max-sum (reverse max-path))))
A numeric triangle is a simple-object representing an equilateral triangle of numbers. For reasons explained below (and which you may have already anticipated), a numeric triangle is a wrapper around
a general triangular matrix.
(define (numeric-triangle rows)
  (let ((triangle (triangle-matrix rows)))
    ; Fill the triangle with random numbers.
    (do ((r 0 (1+ r))) ((>= r rows))
      (do ((i 0 (1+ i))) ((> i r))
        (triangle 'set-value! r i (- (random 100) 50))))
    (lambda (cmd . args)
      (case cmd
        ((row-count) (triangle 'row-count))
        ((get-value) (apply triangle 'get-value args))
        ((set-value!) (apply triangle 'set-value! args))
        (else (error "unrecognized command"))))))
A triangular matrix is represented as a vector of minimum necessary size. Perhaps the easiest way to understand the indexing is to imagine a triangle stored in a 2d matrix with all rows pushed to the
left; that is, stored in a lower triangular matrix.
(define (triangle-matrix rows)
  (define (gauss n) (/ (* n (1+ n)) 2))
  (define (index r i) (+ (gauss r) i))
  (let* ((N (gauss rows))
         (values (make-vector N '()))
         (get-value (lambda (r i) (vector-ref values (index r i))))
         (set-value! (lambda (r i v) (vector-set! values (index r i) v))))
    (lambda (cmd . args)
      (case cmd
        ((row-count) rows)
        ((get-value) (apply get-value args))
        ((set-value!) (apply set-value! args))
        (else (error "unrecognized command"))))))
This solution works, but it isn't good: it's exponential in the number of rows in the triangle. Increasing the triangle size by five rows increases the measured run time by at least an
order of magnitude; run in Guile 1.8.1 on a fairly modern laptop, the solution takes over four minutes to find a path through a 25-row triangle.
The solution is slow because it repeatedly recomputes max-sum paths for sub-triangles. The fix for such problems is to use dynamic programming: once a max-sum path is found for a sub-triangle,
remember it so the next time the max-sum path for that sub-triangle is needed, it can be found by indexing rather than more onerous computing.
Another trick to exploit is to find sub-triangle max-sum paths from the base up rather than from the apex down. This allows a third trick, which is to replace a full table with a pair of rows (as was
done for edit-distance); for simplicity, however, this last trick will be skipped.
(define (max-sum-path-dp triangle)
  ; Return a pair, the car of which is the path sum of the cdr, which
  ; is a max path through the given triangle. A path is a list of 0s
  ; and 1s indicating left (0) or right (1) moves.
  (let* ((rows (triangle 'row-count))
         (table (triangle-matrix rows)))
    ; Fill in the base.
    (let ((r (1- rows)))
      (do ((i 0 (1+ i))) ((> i r))
        (table 'set-value! r i
               (cons (triangle 'get-value r i) '()))))
    ; Work up from the base to the apex, row by row. When done,
    ; return the max-sum path found at the apex.
    (do ((row (- rows 2) (1- row))) ((< row 0) (table 'get-value 0 0))
      ; Within each row, work left to right (or right to left; it
      ; doesn't matter).
      (do ((i 0 (1+ i))) ((> i row))
        (let* ((row-below (1+ row))
               (left (table 'get-value row-below i)) (l-max (car left))
               (right (table 'get-value row-below (1+ i)))
               (r-max (car right)))
          (let ((v (triangle 'get-value row i)))
            (table 'set-value! row i
                   (if (< r-max l-max)
                       (cons (+ l-max v) (cons 0 (cdr left)))
                       (cons (+ r-max v) (cons 1 (cdr right)))))))))))
The dynamic-programming version of max-sum-path solves a 25-row triangle in 9 msec and a 300-row triangle in 36 sec.
Topic: Intersection between two cones
Replies: 6 Last Post: Feb 7, 2013 4:36 AM
Roja
Re: Intersection between two cones
Posted: Feb 7, 2013 2:47 AM

"Bill Whiten" <W.Whiten@uq.edu.au> wrote in message <keteb4$gmc$1@newscl01ah.mathworks.com>...
> "Doctor61" wrote in message <ket0to$214$1@newscl01ah.mathworks.com>...
> > I have two 3D circles (centre coordinates and radii) with their normals passing through the origin. If you consider the origin to be the vertex of a right cone having the circle as the base, how can I determine if there is an intersection between these cones?
> if centre is x1,y1,z1 and radius r set d^2=x1^2+y1^2+z1^2
> Cone is intersection of spheres a^2 (d^2+r^2)=x^2+y^2+z^2 and
> a^2 r^2 = (a x1-x)^2+(a y1-y)^2+(a z1-z)^2
> Eliminate a to get equation of cone:
> x^2+y^2+z^2 = (r^2+d^2) (x1 x+y1 y+z1 z)^2/d^4
> (check this)
> Set x^2+y^2+z^2=1 and solve for points on two cones and this sphere.
> E.g. Solve linear equations (1= (r^2+d^2) (x1 x+y1 y+z1 z)^2/d^4 etc)
> for x and y then substitute into x^2+y^2+z^2=1 to get quadratic for z.
> Regards
Hi Bruno,
I have actually thought about something like what you described, but I don't know how to implement it in matlab. I thought I can find the two circles at the same distance from origin and
see if they have intersections. But I don't know how to do that. Based on what I have found so far, if t is the parameter then any point P on the circle is given by:
P = R cos(t) u + R sin(t) (n × u) + c
Where u is a unit vector from the centre of the circle to any point on the circumference; R is the radius; n is a unit vector perpendicular to the plane and c is the centre of the circle.
So if I have u1,n1,c1 and u2,n2 and c2, then how can I find the intersection?
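One way to put that parametrization into code is sketched below (the function and variable names are my own, not from the thread):

```python
import math

def circle_point(t, c, u, n, R):
    """Point on a 3D circle, P = R*cos(t)*u + R*sin(t)*(n x u) + c.
    c is the centre, u a unit vector from the centre to a rim point,
    n the unit normal of the circle's plane, R the radius."""
    cross = (n[1]*u[2] - n[2]*u[1],   # n x u, written out by hand
             n[2]*u[0] - n[0]*u[2],
             n[0]*u[1] - n[1]*u[0])
    return tuple(R*math.cos(t)*u[i] + R*math.sin(t)*cross[i] + c[i]
                 for i in range(3))
```

Sampling t over [0, 2π) on both circles and looking for close pairs of points is one crude way to test for intersection numerically.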
Uniform and Exponential Distribution of Random Variables
Date: 01/16/2006 at 16:49:00
From: troy
Subject: Relationship between uniform distrib and exponential distrib
I've seen statements claiming if you take the natural log of a
uniformly distributed random variable, it becomes a exponentially
distributed random variable. I know it's a true statement, but I
wonder if you would provide a proof?
Date: 01/17/2006 at 09:28:45
From: Doctor George
Subject: Re: Relationship between uniform distrib and exponential distrib
Hi Troy,
Thanks for writing to Doctor Math.
The standard form of the problem includes a minus sign on the
logarithm, so I will include it.
Let X ~ U[0,1] and Y = -ln(X).

F_Y(y) = P(Y < y)
       = P[-ln(X) < y]
       = P[X > e^(-y)]
       = 1 - P[X < e^(-y)]
       = 1 - F_X[e^(-y)]

Now differentiate both sides with respect to y.

f_Y(y) = f_X[e^(-y)] e^(-y) = e^(-y)
Does that make sense? Write again if you need more help.
- Doctor George, The Math Forum
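The transformation is easy to check numerically. The simulation below is my own sketch, not part of the original exchange:

```python
import math
import random

random.seed(0)
n = 100_000
# 1 - random() lies in (0, 1], so the logarithm is always defined.
samples = [-math.log(1.0 - random.random()) for _ in range(n)]

# For an Exp(1) variable, the mean is 1 and P(Y < 1) = 1 - e^(-1), about 0.632.
mean = sum(samples) / n
frac_below_1 = sum(s < 1.0 for s in samples) / n
```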
Date: 01/17/2006 at 12:26:42
From: troy
Subject: Thank you (Relationship between uniform distrib and
exponential distrib)
Thank you, Doctor George, for the succinct proof. I'd like to
continue the discussion below. Although it's not a complex
derivation, it's definitely non-trivial. I wondered why most articles
skipped the proof and only made the statement. Could I ask you to
point me to a reference where I could find similar derivations?
Also on the last equation,

f_Y(y) = f_X[e^(-y)] e^(-y) = e^(-y),

it implies that f_X[e^(-y)] = 1.

What is the reason for this? Is it because, by definition, Y = -ln(X),
therefore x = e^(-y), so the probability of x = e^(-y) is always 1?
Thank you again for the beautiful derivation.
Date: 01/17/2006 at 13:58:33
From: Doctor George
Subject: Re: Thank you (Relationship between uniform distrib and
exponential distrib)
Hi Troy,
You are on the right track. Since X is uniform,

f_X(x) = 1

now substituting x = e^(-y) we get

f_X[e^(-y)] = 1
As for a reference, most any college level book on Mathematical
Statistics will contain examples similar to this. Look for a section
on transformation of variables. Sometimes a book will skip a short
proof like this and have the reader work it out as a problem.
- Doctor George, The Math Forum
Defining standard atmosphere
An ideal understanding of an environment that is anything but ideal.
By Karsten Shein
Comm-Inst, Climate Scientist
There are certain facts we know about the atmosphere and the things in it. For example, a strong low usually means low ceilings and storms. When it gets hot and humid the air density decreases until
it becomes difficult to get our birds into the air. Windspeed is a function of the difference in pressure between where the wind is blowing from and where it is blowing to.
We use these atmospheric facts to help us figure out what we might be up against when we take to the skies. But, beyond these basics, pilots actually rely quite heavily on approximations as well. In
particular, many aircraft performance specifications reference an estimate of an idealized atmosphere, devoid of convection, cold winters or hot summers.
When we turn to the operating handbook to determine a pressure altitude or a climb rate, we are using a set of commonly agreed on values of temperature, pressure and density for any given altitude.
These numbers constitute the Intl Standard Atmosphere (ISA). The US has a slight variation on the ISA known as the US Standard Atmosphere, but the product is essentially the same and the values are
identical up to at least 32 km (105,000 ft).
Since the 1920s, meteorologists have sent rockets, balloons and even aircraft aloft, rigged with barometers, thermometers and other instruments. Millions of measurements from around the planet have
allowed them to discover that, for at least the lowest layer of the atmosphere—the troposphere—average conditions can be approximated by a set of equations.
For example, air density decreases exponentially with increasing altitude. Since the pressure exerted by the air molecules at any given altitude is dependent on the density, pressure also decreases
exponentially. Air temperature also declines with increasing altitude, but the decrease is more linear the further one goes from the heating source—Earth's surface.
Using these equations, meteorologists were able to produce tables of what the average ambient conditions should be at any given altitude in the lower atmosphere. Because an aircraft's performance is
fundamentally a function of air density, aircraft designers used these equations to provide guidance on how well an aircraft could be expected to perform at different altitudes.
Service ceilings, engine power, rates of climb, pressure and density altitudes, takeoff and landing distances are all referenced against these average conditions, and corrective factors are built
into the tables for when the actual ambient conditions differ from standard.
Ideal gases
The basis of the standard atmosphere is the known relationship between temperature, pressure and density. Known as the Ideal Gas Law, the relationship is in the form of the equation pressure x volume
= amount of the gas x a gas constant x temperature.
Since the gas constant is, well, constant, we can ignore it—and if we divide the amount of gas by the volume, we get density, so we can rewrite the relationship in a more general sense as pressure =
density x temperature. In a nutshell this means that, as temperature decreases, so does pressure—and, as density (amount of the gas in the volume) increases, so does pressure.
And so on. When this equation is applied to the average temperature lapse rate of the lower atmosphere, the pressure, temperature and density for any altitude can be estimated.
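For dry air the full relation is p = ρRT, with R ≈ 287 J/(kg·K). A minimal numerical sketch (my own, using the standard sea-level state; not from the article):

```python
R_AIR = 287.053  # specific gas constant for dry air, J/(kg*K)

def air_pressure_pa(density_kg_m3, temp_k):
    """Ideal-gas pressure of dry air: p = rho * R * T."""
    return density_kg_m3 * R_AIR * temp_k

# Standard sea-level density and temperature recover standard pressure:
# 1.225 kg/m^3 at 288.15 K (15 C) gives about 101,325 Pa (1013.25 hPa).
```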
ISA is based on a set of fixed assumptions, including a constant surface temperature and a constant temperature lapse rate. Until the rapid warming of the atmosphere experienced since the 1980s,
Earth's long-term average global temperature was 59°F (15°C).
The average decrease in temperature with altitude through the lowest 11 km (approx 36,000 ft) is roughly 6.5°C per 1000 m (3.6°F or 2°C/1000 ft). Above 11 km, different lapse rates are used, and ICAO
has published ISA values all the way up to 262,500 ft.
What these fixed standards mean is that, using the ISA, you could estimate that, at 18,000 ft, the ISA temperature would be –21°C (–5°F), and the pressure would be 506 mb (506 hPa or 14.94 inches Hg)
—roughly half that at the surface. At 30,000 ft, you would expect a temperature of –44°C (–48°F) and a pressure of 300 mb (300 hPa or 9 inches Hg). Such known quantities are great to know when you
need to estimate the performance of your aircraft, but there are some caveats.
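Those tabulated values can be sketched in code. The following is my own implementation of the standard barometric formula for the troposphere; the function and constant names are assumptions, not from the article:

```python
def isa_conditions(alt_ft):
    """Approximate ISA temperature (deg C) and pressure (hPa) for
    altitudes below 36,000 ft, using the constant 6.5 C/km lapse rate."""
    T0, P0 = 288.15, 1013.25        # sea level: 15 C, 1013.25 hPa
    LAPSE = 0.0065                  # temperature lapse rate, K per metre
    G, R = 9.80665, 287.053         # gravity (m/s^2), gas constant for dry air
    alt_m = alt_ft * 0.3048
    temp_k = T0 - LAPSE * alt_m
    press = P0 * (temp_k / T0) ** (G / (R * LAPSE))
    return temp_k - 273.15, press

# isa_conditions(18000) gives roughly -21 C and 506 hPa, matching the text.
```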
When things are not standard
When was the last time you experienced a flight through a beautiful standard atmosphere? One of the great pieces of statistical wisdom is that averages rarely occur. They are simply a mathematical
expression of a middle point of the whole of all the observations.
This rarity is especially true in the atmosphere. These standard values are nothing more than estimates from equations that are designed to come close to the average conditions.
Unfortunately, in working to approximate the averages, the equations can and often are oversimplified. An example provided by a Pro Pilot reader serves as a great example of this.
The reader asked why altimeter settings at BOG (El Dorado, Bogotá, Colombia) were normally so high—almost never below 31.00. Indeed, that does seem strange, especially since Bogotá is very close to
the Equator. Because of the general circulation of Earth's atmosphere, surface air from both the Northern and Southern Hemispheres converges at the Equator and is heated along the way.
The result is a band of the atmosphere where the converging surface air meets and is forced to rise. We see this on satellite images as a ring of clouds that encircles the Earth at the tropics—also
known as the Intertropical Convergence Zone (ITCZ).
Fullerton, CA SAT Math Tutor
...At the private academy, I was an SAT instructor for small classrooms, with no more than eight students per session. I also developed and administered a full-semester curriculum in Geometry for
one very bright home-schooled student. My baseline strategy for tutoring is to get to know my student.
10 Subjects: including SAT math, calculus, physics, geometry
...I am the tutor of choice in my subject areas because of my intelligence. I graduated a salutatorian at one of California's most prestigious and rigorous public schools. At Dartmouth College, I
earned a Bachelor's degree in Economics and spent many, many evenings as a paid academic tutor to my classmates.
35 Subjects: including SAT math, reading, English, calculus
...I have taught all levels of mathematics, biology chemistry, physics, and calculus. I will be able to guide the students and overcome any obstacles in their studies and eventually improve their
studying skills and grades throughout their academic careers. I believe that each child is a unique in...
32 Subjects: including SAT math, chemistry, calculus, statistics
...I passed the California Bar Exam the first time I took it. Of course, I took a course including lectures and practice exams. I think it would be foolish for anyone to take the Bar Exam without
doing that.
63 Subjects: including SAT math, chemistry, English, physics
...I have taken the undergraduate course in Linear algebra. Additionally, physics is often referred to as the science of linear algebra and consequently I've picked up a lot of the more complex
methods from studying operators in graduate quantum mechanics courses. I have done three years of physics research, all of which was in MATLAB.
26 Subjects: including SAT math, calculus, physics, geometry
Low/High resistance, at equal wattage: what's the difference?
10-02-2012, 06:35 AM #1
I was wondering: what is the difference between a low and high resistance atty (or carto, or whatever), at the same wattage? In other words, if the only difference is that the same wattage can be
achieved with lower voltage for LR attys, then LR attys would seem to be the way to go in terms of battery life... What's the point in using medium or high resistance with a VV device? Does it
have to do with vapor heat? Help please...
Weirdly and counter-intuitively, the way electricity works (I read something about it on here and just now had to prove it to myself on paper) the higher resistance atomizer running at a higher
voltage actually draws less amperage from the battery and therefore the battery lasts longer. Seems strange but the formulae don't lie.
Low resistance atties were developed mainly for people with constant voltage devices that wanted more wattage (heat) from their vape. That's the only way to do it if you can't crank up your voltage.

Last edited by A17kawboy; 10-02-2012 at 07:18 AM.
Keep your powder dry and your wick wet!!
This is part of the difference... almost. More accurately, it has to do with the vapor temperature.

Heat is energy. The energy stored in a PV is expelled as heat. A given battery will provide a fixed total amount of heat regardless of the voltage, resistance, or output wattage. At higher wattage the heat is generated more quickly.

The power output of a PV is measured in watts, which is how fast energy (heat) is dissipated. The heat can be dissipated slowly or more quickly depending on the wattage (rate of energy output).

Vapor quality is not dependent on heat, but on temperature. At a fixed vapor temperature, the vapor quantity depends on the overall amount of heat. Transfer more heat to a given quantity of e-liquid and the vapor production is warmer (or hot); with more liquid in contact with the heat source, the vapor temperature is cooler. A fixed amount of heat transferred to one drop of e-liquid will have a higher temperature than the same amount of heat transferred to two drops of e-liquid.

Now this is where the difference between a "small" coil with low resistance and a "big" coil with high resistance at the same power output becomes evident. The "big" coil comes into contact with more e-liquid and creates more vapor at a lower temp than a "small" coil, even with the same power output rate (watts).

Now carry this one step further and compare a 2.5 ohm coil to a 2.5 ohm dual coil setup. In the dual coil setup you have two 5 ohm coils operating at the same time in parallel. At 5 volts and 2.5 ohms, the current is 2 amps and the total power output is 10 watts in both the single and dual coil configuration. But in the single coil all 10 watts are laid on a small 2.5 ohm coil, and in the dual coil the power is split between two larger 5 ohm coils with 5 watts each. Same power. Same heat. Totally different vapor temperature, quantity, and quality.
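The single vs. dual coil comparison is plain Ohm's-law arithmetic; here is a quick sketch of it (my code, not the poster's):

```python
def coil_powers(volts, coil_ohms):
    """Per-coil and total power for coils driven in parallel: P = V^2 / R."""
    per_coil = [volts ** 2 / r for r in coil_ohms]
    return per_coil, sum(per_coil)

# 5 V into one 2.5-ohm coil: all 10 W land on a single small coil.
# 5 V into two 5-ohm coils: the same 10 W total, split as 5 W per coil.
```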
Finally, comparing a single low resistance coil to a single high resistance coil at the same power output, the heat is dissipated over a greater area (and volume of e-liquid) in the high resistance coil, resulting in a lower vapor temp. The HV setup can push more power (watts) and generate more vapor at a given (desired) vapor temperature; or, at the same power output, the HV setup will generate more vapor at a lower temperature.

Last edited by dragginfly; 10-02-2012 at 05:57 PM.
"You know me to be a very smart man. Don’t you think that if I were wrong, I’d know it?" ~ Sheldon Cooper
Quoting the earlier reply:

> Weirdly and counter-intuitively, the way electricity works (I read something about it on here and just now had to prove it to myself on paper) the higher resistance atomizer running at a higher voltage actually draws less amperage from the battery and therefore the battery lasts longer. Seems strange but the formulae don't lie.

Unfortunately, that is only half the formula. Below is a post from another thread, but it boils down to this: voltage doesn't matter; as long as the wattage is the same, the battery drain current will be the same, regardless of the resistance.
A boost regulator has to transform low voltage into higher voltage. This can only be done by using more current on the input (battery) side. The amount of additional voltage needed is expressed as:

(Volts out - Volts in) / Volts out = Percentage of voltage increase, also known as the switch duty cycle.
Now that we know the duty cycle, we can figure the additional input current required to obtain the desired output voltage.
Amps out / ( 1 - Duty Cycle) = Amps in
Known factors:
3.7v in
8 watts out
3.0 ohm carto
Ohms Law tells us that we'll need 4.9v and 1.63 amps output to achieve 8 watts.
(4.9-3.7)/4.9 = .24 = 24% increase in voltage = 24% switch duty cycle
1.63 / (1-.24) = 2.16 amps input drawn from the battery.
Now for validation.
As deemed by the Law of Conservation of Energy, power (watts) in must equal power out. (The true statement of the law is power in equals power out + efficiency losses. But for simplicity sake,
we'll get to efficiency below.)
Power = Volts * Amps
3.7 * 2.16 = 8 watts input
4.9 * 1.63 = 8 watts output
But what about efficiency?
Typical efficiency for a boost converter is in the 75-90% range. Efficiency is not constant. It varies with the desired outputs. You can find the efficiency for certain [manufacturer chosen]
situations in the regulator's data sheet. Using an optimistic value of 90% efficiency we can figure our adjusted input current.
Power out / efficiency = adjusted power in
8 / .9 = 8.89 watts input.
Adjusted power in / Volts in = adjusted amps in
8.89 / 3.7 = 2.4 amps input.
And comparison:
A fixed voltage device @ 3.7v will achieve an 8 watt output using 2.16 amps (Ohms Law)
8 / 3.7 = 2.16 amps input
The above calculations explain why I say that boost regulators will get less battery life than a same size fixed volt, and also that you will not achieve better battery life by using a higher
resistance coil.
This holds true for buck (multiple battery, high voltage "bucked" down to a lower voltage) devices as well, but the math is slightly different.
| Brass M16 Clone | #8 | OKR-T/6 | Trident Clone | IGO-L |
"The illiterate of the 21st century will not be those who cannot read and write, but those who cannot learn, unlearn, and relearn."
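The arithmetic in the post above fits in a few lines of code. This is my own packaging of the formulas; the function name is an assumption:

```python
def boost_draw(v_batt, p_out, r_coil, efficiency=1.0):
    """Battery current drawn by a boost regulator driving a coil.
    Returns (output volts, output amps, duty cycle, input amps)."""
    v_out = (p_out * r_coil) ** 0.5      # V = sqrt(P * R), from Ohm's law
    i_out = (p_out / r_coil) ** 0.5      # I = sqrt(P / R)
    duty = (v_out - v_batt) / v_out      # fraction of the voltage boosted
    i_in = i_out / (1.0 - duty) / efficiency
    return v_out, i_out, duty, i_in

# The post's example: a 3.7 V battery pushing 8 W into a 3.0-ohm carto
# gives about 4.9 V / 1.63 A out and about 2.16 A drawn from the battery
# (about 2.40 A at 90% efficiency).
```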
@Rader2146: Thank you once again sir! Very much appreciate your time and patience explaining these fundamentals. So why the heck are manufacturers not working on variable resistance atomizers
that will work on fixed voltage devices?
If I understand the concept correctly that would enable users to vary power and still have maximum battery usage efficiency.
"Early days..." I guess.
Just look how far we have come in the past few years! At this pace the e-cigs of the future will not resemble what we have today at all.
Coil size... coil size... coil size..... Many times an LR atty has a small coil. Not always... but often.
Not many people talk about this... but the size of the coil matters in addition to the wattage supplied or ohms of the atty.
If you have one coil, three ohms, four wraps- and another three ohm coil with eight wraps... they vape differently.
It's not *just* wattage. It's also the coil, wicking material, and variations in the atomizers- none of them are the same.
So while the electrical math is useful. It doesn't paint the whole picture... which is why people have preferences in atomizers. Some coils are tight around the wicking material. Some coils are
loose around the wicking material. Some coils have no wicking material. Some coils are large.... small.... ect.
So really... wattage is very much a general measure for the purposes of vaping. Which is to say something like "I generally prefer 6.5 watts". But the particular hardware you might be using can
change that drastically- a good example being Vision Clearomizers which perform nicely at lower wattages but are about 2.4 ohm.
Vapelink Show: http://www.vapelink.com
Video Reviews and Commentary: http://www.youtube.com/user/cozzcon
Vapelink Announcement: http://www.youtube.com/watch?v=XCK2n...ature=youtu.be
I was just going to mention coil size. A longer coil would spread out the heat a bit more.
Excellent way to state it!
It took me nearly a full page of explanation to conclude with this above...
I have GOT to learn to be more concise.
Thirteen Ed Online - Fluid Power Systems
Fluid Power Systems
Procedures for Teachers is divided into four sections:
-- Preparing for the Lesson.
-- Conducting the Lesson.
-- Additional Activities.
-- Managing Resources and Student Activities.
Prep Materials:
○ 2 12cc veterinary syringes per team of students.
○ 2 20cc veterinary syringes per team of students.
○ 2 30cc veterinary syringes per team of students.
○ 36" plastic tubing per team of students.
○ 11"x17" graph paper.
○ 1' rulers.
○ Pencils.
○ Assorted rubber bands.
○ Wood glue. (Don't use hot glue because it stretches too much.)
○ Clear pine wood, assorted lengths.
○ 12"x12"x3/4" plywood.
○ Miter saw. (A power miter saw would be a big help.)
○ Assorted screws & fasteners.
○ Test tubes.
○ Test tube racks.
Computer Resources:
You will need at least one computer with Internet access to complete this lesson. While many configurations will work, we recommend:
-- Modem: 28.8 Kbps or faster.
-- Browser: Netscape Navigator 3.0 or above or Internet Explorer 3.0 or above.
-- Macintosh computer: System 7.0 or above and at least 16 MB of RAM.
-- IBM-compatible computer: 386 or higher processor with at least 16 MB of RAM, running Windows 3.1. Or, a 486/66 or Pentium with at least 16 MB of RAM, running Windows 95 or higher.
For more information, visit "What You Need to Get Connected" in wNetSchool's Internet Primer.
The following sites should be bookmarked:
Reeko's Mad Scientist Lab
Students find out about Pascal's Law by reading about an experiment to make a "submarine" sink. It's fun!
Digger Jr. Explains
This clearly outlined and vivid description of hydraulics and hydrostatics makes these concepts easy for all to understand.
Using this site, students discover the four states of matter and the laws that govern them.
Hydraulics Project
This site shows some good examples of homemade hydraulic devices that were created from simple materials.
Time Allotment:
This technology learning activity requires about 15 class periods.
Design Brief and Criteria:
Distribute and discuss the Design Brief, in Organizers for Students. Ask students the question, "What are fluids?"
Break students into teams of 2. Give each student team the Research Log, in Organizers for Students. Students should visit each site and "log in" any information that is relevant to their design
problem. Students should begin research on the topic of hydraulics/pneumatics with the following sites. (Encourage your students to search for more information. This is just a start.)
Reeko's Mad Scientist Lab
Digger Jr. Explains
(See the "Brain Teaser" for a sample problem to work out with your students to understand mechanical advantage in fluid systems.)
The Internet Science Room
Hydraulics Project
ANSWER KEY to the Research Log
1. What are the four states of matter?
• Solid, liquid, gas, and plasma.
2. What are the characteristics of fluids?
• They take on the shape of their container; they "flow."
3. Define the following terms as they relate to fluid power:
• Pascal's Law: Any force applied to an enclosed fluid is transmitted in all directions to the walls of the container.
• Area: The measurement of length x width; for a circle, (pi)r^2.
• Pressure: Expressed in pounds per square inch (PSI), it is the force divided by the area over which it acts.
• Force: Mass x acceleration.
• Mechanical Advantage: The ratio of the output force a mechanism produces to the input force applied to it.
4. Explain why gases are used for some fluid systems and liquids are used in others.
• Gases are used primarily because of the portability of the systems and ease of set-up. Liquid systems are used where higher power and precision are required.
5. Solve the problem portrayed in the diagram provided in the Research Log, in Organizers for Students.
• Area of Piston A: 3.14 x (.5)^2 = .785 in.sq.
• PSI of system: 50 lbs. / .785 in.sq. = 63.694 PSI
• Output force at Piston B: 63.694 PSI x 113.04 in.sq. (area of piston "B") = 7199.970 lbs.
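The arithmetic in this answer can be checked with a short script (a sketch; piston radii of 0.5 in. and 6 in. are inferred from the areas given, and pi is rounded to 3.14 as in the answer key):

```python
def output_force(input_force, radius_in, radius_out, pi=3.14):
    """Pascal's Law: pressure is transmitted undiminished through the
    fluid, so F_out = (F_in / A_in) * A_out."""
    area_in = pi * radius_in ** 2    # in. sq.
    area_out = pi * radius_out ** 2  # in. sq.
    pressure = input_force / area_in # PSI
    return pressure * area_out

# 50 lbs. on a 0.5 in. radius piston driving a 6 in. radius piston:
print(round(output_force(50, 0.5, 6), 2))  # about 7200 lbs.
```

Note that the area ratio (113.04 / 0.785 = 144) is the mechanical advantage of the system.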
6. Discuss the pros and cons of hydraulics and pneumatics.
• Answers may vary but here is one possibility: Pneumatic systems are portable and less expensive but do not provide the same power or precision as hydraulic systems. Hydraulic systems are more
accurate and provide more power than pneumatic systems. However, hydraulic systems are not as easily portable. They use chemical fluids that can pollute, and are more costly.
7. List a minimum of 5 applications of fluid power we see in our everyday lives.
• Answers may vary. A backhoe, a nail gun, a "Jaws of Life" tool, a car's brake system, and an industrial robot are some possibilities.
Students should sketch or download pictures of at least four possible designs for their chosen solution to the assignment in the Design Brief, in Organizers for Students. Students should make
notations on the designs, making reference to any special features. This should relate directly to the information gathered from the students' Internet research.
Teams should discuss the pros and cons of the design they have found (or come up with on their own). After comparing their choice to the design criteria, instruct them to choose the one they think
will work best for the given situation.
Have the teams complete a set of detailed, full size, working drawings of the design they have chosen. This should include dimensions and any other important information needed to build their design.
When the teams complete the design phase of the activity, they should begin construction using the tools and materials outlined in the Design Brief, in Organizers for Students.
Testing, Evaluation & Redesign:
At the end of the construction phase, organize a testing session for the hydraulic arms. Try to make this a BIG DEAL! Things like lab coats and safety gear will add to the atmosphere. Set up a mock
isolation chamber so you can "remotely" control the robots from "outside" the enclosure. Videotaping also makes for a good follow-up, as you will be able to review each step of the testing to
evaluate and discuss key points with your students. Students should record data as the testing takes place, make notations about the operation of their design, and record any problems that occur.
At the conclusion of testing, have each team compile the results and complete a portfolio that includes the research, design possibilities, chosen design with explanation, any pre-construction
sketches, and a summary and conclusion from the testing results for possible redesign and changes for a future attempt.
Science: Study the states of matter. Discuss what type of matter falls into the fluid category. Become familiar with and discuss Pascal's Law.
Language Arts: Have students give an oral presentation of their design to the class. Students should also turn in a documentation portfolio to reflect their work on this design problem.
Mathematics: Calculate the area of a circle using (pi)r^2. Calculate pressure, force, and mechanical advantage in a fluid system. Utilize the formulas: Pressure = Force / Area and Force = Pressure x Area.
One Computer in the Classroom
If you have access to one computer in your classroom, you can organize your class in several ways. Divide your class into two groups. Instruct one of the groups to do paper research while the second
group is working on the computer. Bring in books, encyclopedias, etc., from the library for the group doing paper research. When the groups have finished working, have them switch places.
If you have a big monitor or projection facilities, you can do Internet research together as a class. Make sure that every student in your class can see the screen, go to the relevant Web site(s),
and review the information presented there. You can also select a search engine page and allow your students to suggest the search criteria. Again, bookmark and/or print the pages that you think are
helpful for reference later.
Several Computers in the Classroom
Divide your class into small groups. Groups can do Internet research using pages you have bookmarked. Group members should take turns navigating the bookmarked site.
You can also set the class up so that each computer is dedicated to certain sites. Students will then move around the classroom, getting different information from each station.
Using a Computer Lab
A computer center or lab space, with a computer-to-student ratio of one to three, is ideal for doing Web-based projects. Generally, when doing Web-based research, it is helpful to put students in
groups of three. This way, students can help each other if problems or questions arise. It is often beneficial to bookmark sites for students ahead of time.
Submit a Comment: We invite your comments and suggestions based on how you used the lesson in your classroom.
East Bridgewater Algebra 2 Tutor
...I am flexible in how I teach so I'll fit the student's favorite learning style. My most recent student earned an 800. SAT writing is really testing understanding of vocabulary, grammar,
colloquial English, and common usage.
55 Subjects: including algebra 2, English, reading, algebra 1
...I enjoy helping others to reason through their ideas and develop a writing style. I have a BA in philosophy, which included a two-year "great books" program. Together this comprised an
extensive amount of reading, analyzing texts, developing theses, and putting ideas into words.
29 Subjects: including algebra 2, reading, English, geometry
...I have also tutored a multitude of students for their sociology classes, and all of them have received excellent grades. I have a law degree from Boston University, one of the top law schools
in the United States. I have tutored numerous students for their law courses.
67 Subjects: including algebra 2, English, calculus, reading
Hi! I have a bachelor's in Chemistry from UMass Amherst and am currently enrolled in my PharmD at Northeastern University. I have tutored Chemistry and Algebra before and have had a lot of success
with it.
15 Subjects: including algebra 2, chemistry, geometry, algebra 1
...I still enjoy playing now, and I have been branching out to the local fiddle and Irish Seisun music scene. I started teaching again in 2008 and have been steadily growing my own studio of
students. I have been salsa dancing since 2003 and I have been teaching salsa since 2006.
13 Subjects: including algebra 2, English, geometry, writing
Integral of Sin(theta)/Sin(theta/2)
Well, since the OP hasn't shown up yet, I am going to make it a little bit easier for him.
Like cristo suggested, you need to write sin(theta) in terms of sin(theta/2).
Notice that sin(theta) = sin(2(theta/2)). Now apply the double angle formula for sin, which comes from sin(x+y) = sin(x)cos(y) + cos(x)sin(y); just notice that in our case we have x = y. Can you go from here?
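To spell out the hint numerically (a quick sanity check, not part of the original thread): since sin(theta) = 2 sin(theta/2) cos(theta/2), the integrand collapses to 2 cos(theta/2) wherever sin(theta/2) is nonzero:

```python
import math

def integrand(theta):
    """The original integrand sin(theta)/sin(theta/2)."""
    return math.sin(theta) / math.sin(theta / 2)

def simplified(theta):
    """After the double angle formula: 2*cos(theta/2)."""
    return 2 * math.cos(theta / 2)

# The two agree at every test point away from sin(theta/2) = 0:
for theta in (0.3, 1.0, 2.5):
    assert abs(integrand(theta) - simplified(theta)) < 1e-12
```

Integrating the simplified form is then a one-line substitution.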
Re: st: calculating AIC for log-logistic model
From: rgutierrez@stata.com (Roberto G. Gutierrez, StataCorp.)
To: statalist@hsphsun2.harvard.edu
Subject: Re: st: calculating AIC for log-logistic model
Date: Fri, 26 Jul 2002 09:21:34 -0500
In our continuing discussion on this matter, Shige Song <sgsong@ucla.edu> wrote:
> Using the example from the book on page 224. Now the AIC should be
> calculated as:
> AIC = -2lnL + 2(k+c) = -2*(-42.241) + 2*(2+2) = 92.482
> as reported in Table 13.2 on page 231. Now I want to introduce the same set
> of covariates ("protect" and "age") to the shape parameter (gamma), should
> the AIC be like this:
> AIC = -2lnL + 2(k+c) = -2lnL + 2*(4+2)
> is that correct? Should I also include the constant term in the gamma
> parameter equation (in your answer you said the constant term in the main
> equation should be excluded but you did not say what to do about the
> constant term in the gamma parameter equation). Thanks!
Your calculations are correct. k+c is incremented by 2 to reflect the two
new estimated parameters. Whether you want to think of it as k increasing by
2 or c increasing by 2 is really a matter of personal taste. I like to think
of it as c since it represents things "ancillary" to the main equation, but it
really does not matter.
Just follow this rule: Set (k+c) to equal the _total_ number of all
estimated parameters for all equations, including _all_ constant terms.
As to your other question, the constant term in the main equation is not
counted in k because it is already counted in c: c = 2 for the standard
log-logistic model; 1 for shape parameter gamma + 1 for the scale parameter
(the constant term in the main equation), but again we counted the constant
term in c only as a convention. It really doesn't matter as long as it
is counted somewhere.
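The rule above is easy to encode (a sketch; the log likelihood and parameter counts are the ones from the book's example quoted earlier):

```python
def aic(log_likelihood, n_params):
    """AIC = -2*lnL + 2*(k + c), where n_params = k + c counts every
    estimated parameter in every equation, including all constant terms."""
    return -2 * log_likelihood + 2 * n_params

# Standard log-logistic model with "protect" and "age" in the main equation:
print(round(aic(-42.241, 4), 3))  # 92.482, matching Table 13.2
# Adding the same covariates to the gamma equation adds 2 parameters:
print(round(aic(-42.241, 6), 3))  # 96.482
```

(The second line assumes, for illustration, that the log likelihood is unchanged; in practice refitting the larger model would change lnL as well.)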
* For searches and help try:
* http://www.stata.com/support/faqs/res/findit.html
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/
Edge Prediction in a Social Graph: My Solution to Facebook's User Recommendation Contest on Kaggle
A couple weeks ago, Facebook launched a link prediction contest on Kaggle, with the goal of recommending missing edges in a social graph. I love investigating social networks, so I dug around a
little, and since I did well enough to score one of the coveted prizes, I’ll share my approach here.
(For some background, the contest provided a training dataset of edges, a test set of nodes, and contestants were asked to predict missing outbound edges on the test set, using mean average precision
as the evaluation metric.)
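For reference, mean average precision over ranked recommendation lists can be sketched like this (an illustration of the metric, not the contest's official scorer):

```python
def average_precision(recommended, actual, k=10):
    """Average precision at k for one user: precision is accumulated
    each time a true edge appears in the ranked recommendations."""
    hits, score = 0, 0.0
    for i, node in enumerate(recommended[:k]):
        if node in actual:
            hits += 1
            score += hits / (i + 1)
    return score / min(len(actual), k) if actual else 0.0

def mean_average_precision(all_recommended, all_actual, k=10):
    """Mean of the per-user average precisions."""
    pairs = list(zip(all_recommended, all_actual))
    return sum(average_precision(r, set(a), k) for r, a in pairs) / len(pairs)

# A perfect first guess followed by a miss and a late hit:
print(round(average_precision([1, 2, 3], {1, 3}), 4))  # 0.8333
```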
What does the network look like? I wanted to play around with the data a bit first just to get a rough feel, so I made an app to interact with the network around each node.
Here’s a sample:
(Go ahead, click on the picture to play with the app yourself. It’s pretty fun.)
The node in black is a selected node from the training set, and we perform a breadth-first walk of the graph out to a maximum distance of 3 to uncover the local network. Nodes are sized according to
their distance from the center, and colored according to a chosen metric (a personalized PageRank in this case; more on this later).
We can see that the central node is friends with three other users (in red), two of whom have fairly large, disjoint networks.
There are quite a few dangling nodes (nodes at distance 3 with only one connection to the rest of the local network), though, so let’s remove these to reveal the core structure:
And here’s an embedded version you can manipulate inline:
Since the default view doesn’t encode the distinction between following and follower relationships, we can mouse over each node to see who it follows and who it’s followed by. Here, for example, is
the following/follower network of one of the central node’s friends:
The moused over node is highlighted in black, its friends (users who both follow the node and are followed back in turn) are colored in purple, its followees are teal, and its followers in orange. We
can also see that the node shares a friend with the central user (triadic closure, holla!).
Here’s another network, this time of the friend at the bottom:
Interestingly, while the first friend had several only-followers (in orange), the second friend has none. (which suggests, perhaps, a node-level feature that measures how follow-hungry a user is…)
And here’s one more node, a little further out (maybe a celebrity, given it has nothing but followers?):
The Quiet One
Let’s take a look at another graph, one whose local network is a little smaller:
A Social Butterfly
And one more, whose local network is a little larger:
Again, I encourage everyone to play around with the app here, and I’ll come back to the question of coloring each node later.
Next, let’s take a more quantitative look at the graph.
Here’s the distribution of the number of followers of each node in the training set (cut off at 50 followers for a better fit – the maximum number of followers is 552), as well as the number of users
each node is following (again, cut off at 50 – the maximum here is 1566):
Nothing terribly surprising, but that alone is good to verify. (For people tempted to mutter about power laws, I’ll hold you off with the bitter coldness of baby Gauss’s tears.)
Similarly, here are the same two graphs, but limited to the nodes in the test set alone:
Notice that there are relatively more test set users with 0 followees than in the full training set, and relatively fewer test set users with 0 followers. This information could be used to better
simulate a validation set for model selection, though I didn’t end up doing this myself.
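These degree histograms come straight from the edge list (a sketch; the graph is assumed to be a plain list of (source, destination) follow pairs):

```python
from collections import Counter

def degree_distributions(edges):
    """Given (source, destination) follow edges, return histograms of
    followee counts (out-degree) and follower counts (in-degree):
    degree -> number of nodes with that degree."""
    out_degree, in_degree = Counter(), Counter()
    for src, dst in edges:
        out_degree[src] += 1
        in_degree[dst] += 1
    return Counter(out_degree.values()), Counter(in_degree.values())

edges = [(1, 2), (1, 3), (2, 3)]
following_hist, follower_hist = degree_distributions(edges)
# Node 1 follows two users, node 2 follows one:
print(following_hist[2], following_hist[1])
# Node 3 has two followers, node 2 has one:
print(follower_hist[2], follower_hist[1])
```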
Preliminary Probes
Finally, let’s move on to the models themselves.
In order to quickly get up and running on a couple prediction algorithms, I started with some unsupervised approaches. For example, after building a new validation set* to test performance offline, I tried:
• Recommending users who follow you (but you don’t follow in return)
• Recommending users similar to you (when representing users as sets of their followers, and using cosine similarity and Jaccard similarity as the similarity metric)
• Recommending users based on a personalized PageRank score
• Recommending users that the people you follow also follow
And so on, combining the votes of these algorithms in a fairly ad-hoc way (e.g., by taking the majority vote or by ordering by the number of followers).
This worked quite well actually, but I’d been planning to move on to a more machine learned model-based approach from the beginning, so I did that next.
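The first of these baselines is only a few lines (a sketch, with followers and followees kept as plain dicts of sets, and candidates ordered by follower count as described):

```python
def recommend_follow_backs(user, followers, followees, k=10):
    """Recommend users who follow `user` but aren't followed back,
    most-followed candidates first."""
    candidates = followers.get(user, set()) - followees.get(user, set())
    return sorted(candidates,
                  key=lambda c: -len(followers.get(c, set())))[:k]

followers = {1: {2, 3}, 2: {3}, 3: {1}}
followees = {1: {3}, 2: set(), 3: {1, 2}}
print(recommend_follow_backs(1, followers, followees))  # [2]
```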
*My validation set was formed by deleting random edges from the full training set. A slightly better approach, as mentioned above, might have been to more accurately simulate the distribution of the
official test set, but I didn’t end up trying this out myself.
Candidate Selection
In order to run a machine learning algorithm to recommend edges (which would take two nodes, a source and a candidate destination, and generate a score measuring the likelihood that the source would
follow the destination), it’s necessary to prune the set of candidates to run the algorithm on.
I used two approaches for this filtering step, both based on random walks on the graph.
Personalized PageRank
The first approach was to calculate a personalized PageRank around each source node.
Briefly, a personalized PageRank is like standard PageRank, except that when randomly teleporting to a new node, the surfer always teleports back to the given source node being personalized (rather
than to a node chosen uniformly at random, as in the classic PageRank algorithm).
That is, the random surfer in the personalized PageRank model works as follows:
• He starts at the source node $X$ that we want to calculate a personalized PageRank around.
• At step $i$: with probability $p$, the surfer moves to a neighboring node chosen uniformly at random; with probability $1-p$, the surfer instead teleports back to the original source node $X$.
• The limiting probability that the surfer is at node $N$ is then the personalized PageRank score of node $N$ around $X$.
Here’s some Scala code that computes approximate personalized PageRank scores and takes the highest-scoring nodes as the candidates to feed into the machine learning model:
Personalized PageRank
/**
 * Calculate a personalized PageRank around the given user, and return
 * a list of the nodes with the highest personalized PageRank scores.
 *
 * @return A list of (node, probability of landing at this node after
 *         running a personalized PageRank for K iterations) pairs.
 */
def pageRank(user: Int): List[(Int, Double)] = {
  // This map holds the probability of landing at each node, up to the
  // current iteration.
  val probs = Map[Int, Double]()
  probs(user) = 1 // We start at this user.

  val pageRankProbs = pageRankHelper(user, probs, NumPagerankIterations)
  pageRankProbs.toList
               .sortBy { -_._2 }
               .filter { case (node, score) =>
                 !getFollowings(user).contains(node) && node != user
               }
               .take(MaxNodesToKeep)
}

/**
 * Simulates running a personalized PageRank for one iteration.
 *
 * Parameters:
 * start - the start node to calculate the personalized PageRank around
 * probs - a map from nodes to the probability of being at that node at
 *         the start of the current iteration
 * numIterations - the number of iterations remaining
 * alpha - with probability alpha, we follow a neighbor; with probability
 *         1 - alpha, we teleport back to the start node
 *
 * @return A map of node -> probability of landing at that node after the
 *         specified number of iterations.
 */
def pageRankHelper(start: Int, probs: Map[Int, Double], numIterations: Int,
                   alpha: Double = 0.5): Map[Int, Double] = {
  if (numIterations <= 0) {
    probs
  } else {
    // Holds the updated set of probabilities, after this iteration.
    val probsPropagated = Map[Int, Double]()

    // With probability 1 - alpha, we teleport back to the start node.
    probsPropagated(start) = 1 - alpha

    // Propagate the previous probabilities...
    probs.foreach { case (node, prob) =>
      val forwards = getFollowings(node)
      val backwards = getFollowers(node)

      // With probability alpha, we move to a neighbor...
      // And each node distributes its current probability equally to
      // its neighbors.
      val probToPropagate = alpha * prob / (forwards.size + backwards.size)
      (forwards.toList ++ backwards.toList).foreach { neighbor =>
        if (!probsPropagated.contains(neighbor)) {
          probsPropagated(neighbor) = 0
        }
        probsPropagated(neighbor) += probToPropagate
      }
    }

    pageRankHelper(start, probsPropagated, numIterations - 1, alpha)
  }
}
Propagation Score
Another approach I used, based on a proposal by another contestant on the Kaggle forums, works as follows:
• Start at a specified user node and give it some score.
• In the first iteration, this user propagates its score equally to its neighbors.
• In the second iteration, each user duplicates and keeps half of its score S. It then propagates S equally to its neighbors.
• In subsequent iterations, the process is repeated, except that neighbors reached via a backwards link don’t duplicate and keep half of their score. (The idea is that we want the score to reach
followees and not followers.)
Here’s some Scala code to calculate these propagation scores:
Propagation Score
/**
 * Calculate propagation scores around the current user.
 *
 * In the first propagation round, we
 *
 * - Give the starting node N an initial score S.
 * - Propagate the score equally to each of N's neighbors (followers
 *   and followings).
 * - Each first-level neighbor then duplicates and keeps half of its score
 *   and then propagates the original again to its neighbors.
 *
 * In further rounds, neighbors then repeat the process, except that neighbors
 * traveled to via a backwards/follower link don't keep half of their score.
 *
 * @return a sorted list of (node, propagation score) pairs.
 */
def propagate(user: Int): List[(Int, Double)] = {
  val scores = Map[Int, Double]()

  // We propagate the score equally to all neighbors.
  val scoreToPropagate = 1.0 / (getFollowings(user).size + getFollowers(user).size)

  (getFollowings(user).toList ++ getFollowers(user).toList).foreach { x =>
    // Propagate the score...
    continuePropagation(scores, x, scoreToPropagate, 1)
    // ...and make sure it keeps half of it for itself.
    scores(x) = scores.getOrElse(x, 0: Double) + scoreToPropagate / 2
  }

  scores.toList.sortBy { -_._2 }
        .filter { nodeAndScore =>
          val node = nodeAndScore._1
          !getFollowings(user).contains(node) && node != user
        }
        .take(MaxNodesToKeep)
}

/**
 * In further rounds, neighbors repeat the process above, except that neighbors
 * traveled to via a backwards/follower link don't keep half of their score.
 */
def continuePropagation(scores: Map[Int, Double], user: Int, score: Double,
                        currIteration: Int): Unit = {
  if (currIteration < NumIterations && score > 0) {
    val scoreToPropagate = score / (getFollowings(user).size + getFollowers(user).size)

    getFollowings(user).foreach { x =>
      // Propagate the score...
      continuePropagation(scores, x, scoreToPropagate, currIteration + 1)
      // ...and make sure it keeps half of it for itself.
      scores(x) = scores.getOrElse(x, 0: Double) + scoreToPropagate / 2
    }

    getFollowers(user).foreach { x =>
      // Propagate the score...
      continuePropagation(scores, x, scoreToPropagate, currIteration + 1)
      // ...but backward links (except for the starting node's immediate
      // neighbors) don't keep any score for themselves.
    }
  }
}
I played around with tweaking some parameters in both approaches (e.g., weighting followers and followees differently), but the natural defaults (as used in the code above) ended up performing the best.
After pruning the set of candidate destination nodes to a more feasible level, I fed pairs of (source, destination) nodes into a machine learning model. From each pair, I extracted around 30 features
in total.
As mentioned above, one feature that worked quite well on its own was whether the destination node already follows the source.
I also used a wide set of similarity-based features, for example, the Jaccard similarity between the source and destination when both are represented as sets of their followers, when both are
represented as sets of their followees, or when one is represented as a set of followers while the other is represented as a set of followees.
Similarity Metrics
abstract class SimilarityMetric[T] {
  def apply(set1: Set[T], set2: Set[T]): Double;
}

object JaccardSimilarity extends SimilarityMetric[Int] {
  /**
   * Returns the Jaccard similarity between two sets, 0 if both are empty.
   */
  def apply(set1: Set[Int], set2: Set[Int]): Double = {
    val union = (set1.union(set2)).size

    if (union == 0) {
      0
    } else {
      (set1 & set2).size.toFloat / union
    }
  }

}

object CosineSimilarity extends SimilarityMetric[Int] {
  /**
   * Returns the cosine similarity between two sets, 0 if both are empty.
   */
  def apply(set1: Set[Int], set2: Set[Int]): Double = {
    if (set1.size == 0 && set2.size == 0) {
      0
    } else {
      (set1 & set2).size.toFloat / (math.sqrt(set1.size * set2.size))
    }
  }

}

// ************
// * FEATURES *
// ************

/**
 * Returns the similarity between user1 and user2 when both are represented as
 * sets of followers.
 */
def similarityByFollowers(user1: Int, user2: Int)
                         (implicit similarity: SimilarityMetric[Int]): Double = {
  similarity.apply(getFollowersWithout(user1, user2),
                   getFollowersWithout(user2, user1))
}

// etc.
Along the same lines, I also computed a similarity score between the destination node and the source node’s followees, and several variations thereof.
Extended Similarity Scores
/**
 * Iterate over each of user1's followings, compute their similarity with
 * user2 when both are represented as sets of followers, and return the
 * sum of these similarities.
 */
def followerBasedSimilarityToFollowing(user1: Int, user2: Int)
                                      (implicit similarity: SimilarityMetric[Int]): Double = {
  getFollowingsWithout(user1, user2)
    .map { similarityByFollowers(_, user2)(similarity) }
    .sum
}
Other features included the number of followers and followees of each node, the ratio of these, the personalized PageRank and propagation scores themselves, the number of followers in common, and
triangle/closure-type features (e.g., whether the source node is friends with a node X who in turn is a friend of the destination node).
If I had had more time, I would probably have tried weighted and more regularized versions of some of these features as well (e.g., downweighting nodes with large numbers of followers when computing
cosine similarity scores based on followees, or shrinking the scores of nodes we have little information about).
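A few of the simpler features above can be sketched directly from adjacency sets (an illustration with made-up names; followers and followees are dicts mapping a node to a set of nodes):

```python
def edge_features(src, dst, followers, followees):
    """A handful of the pair-level features described above, for one
    (source, candidate destination) pair."""
    f_src = followers.get(src, set())
    f_dst = followers.get(dst, set())
    g_dst = followees.get(dst, set())
    return {
        "dst_follows_src": src in g_dst,
        "dst_follower_count": len(f_dst),
        # Follower/followee ratio, smoothed to avoid division by zero.
        "dst_follow_ratio": len(f_dst) / (len(g_dst) + 1.0),
        "common_followers": len(f_src & f_dst),
        # Triangle closure: does src follow someone who follows dst?
        "closes_triangle": any(
            dst in followees.get(x, set()) for x in followees.get(src, set())),
    }

followees = {1: {2}, 2: {3}, 3: set()}
followers = {2: {1}, 3: {2}}
feats = edge_features(1, 3, followers, followees)
print(feats["closes_triangle"])  # True: 1 follows 2, who follows 3
```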
Feature Understanding
But what are these features actually doing? Let’s use the same app I built before to take a look.
Here’s the local network of node 317 (different from the node above), where each node is colored by its personalized PageRank (higher scores are in darker red):
If we look at the following vs. follower relationships of the central node (recall that purple is friends, teal is followings, orange is followers):
…we can see that, as expected (because edges that represented both following and follower were double-weighted in my PageRank calculation), the darkest red nodes are those that are friends with the
central node, while those in a following-only or follower-only relationship have a lower score.
How does the propagation score compare to personalized PageRank? Here, I colored each node according to the log ratio of its propagation score and personalized PageRank:
Comparing this coloring with the local follow/follower network:
…we can see that followed nodes (in teal) receive a higher propagation weight than friend nodes (in purple), while follower nodes (in orange) receive almost no propagation score at all.
Going back to node 1, let’s look at a different metric. Here, each node is colored according to its Jaccard similarity with the source, when nodes are represented by the set of their followers:
We can see that, while the PageRank and propagation metrics tended to favor nodes close to the central node, the Jaccard similarity feature helps us explore nodes that are further out.
However, if we look at the high-scoring nodes more closely, we see that they often have only a single connection to the rest of the network:
In other words, their high Jaccard similarity is due to the fact that they don’t have many connections to begin with. This suggests that some regularization or shrinking is in order.
So here’s a regularized version of Jaccard similarity, where we downweight nodes with few connections:
We can see that the outlier nodes are much more muted this time around.
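One simple way to get this effect (my own illustration; the post doesn't spell out the exact shrinkage used) is to multiply the raw Jaccard score by a factor that discounts small sets:

```python
def jaccard(set1, set2):
    """Raw Jaccard similarity, 0 if both sets are empty."""
    union = len(set1 | set2)
    return len(set1 & set2) / union if union else 0.0

def regularized_jaccard(set1, set2, prior=5.0):
    """Shrink the raw score toward 0 when the sets are small, so a
    single shared follower can't dominate the ranking."""
    n = len(set1 | set2)
    return jaccard(set1, set2) * n / (n + prior)

tiny = ({1}, {1})                          # identical, but almost no evidence
big = (set(range(50)), set(range(50)))     # identical, with lots of evidence
print(round(regularized_jaccard(*tiny), 2))  # 1.0 shrunk to ~0.17
print(round(regularized_jaccard(*big), 2))   # ~0.91
```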
For a starker difference, compare the following two graphs of the Jaccard similarity metric around node 317 (the first graph is an unregularized version, the second is regularized):
Notice, in particular, how the popular node in the top left and the popular nodes at the bottom have a much higher score when we regularize.
And again, there are other networks and features I haven’t mentioned here, so play around and discover them on the app itself.
For the machine learning algorithms on top of my features, I experimented with two types of models: logistic regression (using both L1 and L2 regularization) and random forests. (If I had more time,
I would probably have done some more parameter tuning and maybe tried gradient boosted trees as well.)
So what is a random forest? I wrote an old (layman’s) post on it here, but since nobody ever clicks on these links, let’s copy it over:
Suppose you’re very indecisive, so whenever you want to watch a movie, you ask your friend Willow if she thinks you’ll like it. In order to answer, Willow first needs to figure out what movies
you like, so you give her a bunch of movies and tell her whether you liked each one or not (i.e., you give her a labeled training set). Then, when you ask her if she thinks you’ll like movie X or
not, she plays a 20 questions-like game with IMDB, asking questions like “Is X a romantic movie?”, “Does Johnny Depp star in X?”, and so on. She asks more informative questions first (i.e., she
maximizes the information gain of each question), and gives you a yes/no answer at the end.
Thus, Willow is a decision tree for your movie preferences.
But Willow is only human, so she doesn’t always generalize your preferences very well (i.e., she overfits). In order to get more accurate recommendations, you’d like to ask a bunch of your
friends, and watch movie X if most of them say they think you’ll like it. That is, instead of asking only Willow, you want to ask Woody, Apple, and Cartman as well, and they vote on whether
you’ll like a movie (i.e., you build an ensemble classifier, aka a forest in this case).
Now you don’t want each of your friends to do the same thing and give you the same answer, so you first give each of them slightly different data. After all, you’re not absolutely sure of your
preferences yourself – you told Willow you loved Titanic, but maybe you were just happy that day because it was your birthday, so maybe some of your friends shouldn’t use the fact that you liked
Titanic in making their recommendations. Or maybe you told her you loved Cinderella, but actually you *really really* loved it, so some of your friends should give Cinderella more weight. So
instead of giving your friends the same data you gave Willow, you give them slightly perturbed versions. You don’t change your love/hate decisions, you just say you love/hate some movies a little
more or less (you give each of your friends a bootstrapped version of your original training data). For example, whereas you told Willow that you liked Black Swan and Harry Potter and disliked
Avatar, you tell Woody that you liked Black Swan so much you watched it twice, you disliked Avatar, and don’t mention Harry Potter at all.
By using this ensemble, you hope that while each of your friends gives somewhat idiosyncratic recommendations (Willow thinks you like vampire movies more than you do, Woody thinks you like Pixar
movies, and Cartman thinks you just hate everything), the errors get canceled out in the majority. Thus, your friends now form a bagged (bootstrap aggregated) forest of your movie preferences.
There’s still one problem with your data, however. While you loved both Titanic and Inception, it wasn’t because you like movies that star Leonardo DiCaprio. Maybe you liked both movies for
other reasons. Thus, you don’t want your friends to all base their recommendations on whether Leo is in a movie or not. So when each friend asks IMDB a question, only a random subset of the
possible questions is allowed (i.e., when you’re building a decision tree, at each node you use some randomness in selecting the attribute to split on, say by randomly selecting an attribute or
by selecting an attribute from a random subset). This means your friends aren’t allowed to ask whether Leonardo DiCaprio is in the movie whenever they want. So whereas previously you injected
randomness at the data level, by perturbing your movie preferences slightly, now you’re injecting randomness at the model level, by making your friends ask different questions at different times.
And so your friends now form a random forest.
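The second source of randomness (restricting each split to a random subset of attributes) can be sketched just as briefly; again, the names are mine:

```python
import random

def choose_split_attribute(attributes, score, rng, subset_size):
    """At each tree node, only a random subset of attributes may be
    asked about, so no 'friend' can ask the Leo question at every step.
    `score` rates how good a split on an attribute would be."""
    candidates = rng.sample(attributes, min(subset_size, len(attributes)))
    return max(candidates, key=score)
```

With `subset_size` equal to the number of attributes this degrades to an ordinary greedy split; shrinking it decorrelates the trees.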
Moving on, I essentially trained scikit-learn’s classifiers on an equal split of true and false edges (sampled from the output of my pruning step, in order to match the distribution I’d get when
applying my algorithm to the official test set), and compared performance on the validation set I made, with a small amount of parameter tuning:
Random Forest
########################################
# STEP 1: Read in the training examples.
########################################
from sklearn.ensemble import RandomForestClassifier

truths = []             # A truth is 1 (for a known true edge) or 0 (for a false edge).
training_examples = []  # Each training example is an array of features.

for line in open(TRAINING_SET_WITH_FEATURES_FILENAME):
    values = [float(x) for x in line.split(",")]
    truth = values[0]
    training_example_features = values[1:]
    truths.append(truth)
    training_examples.append(training_example_features)

#############################
# STEP 2: Train a classifier.
#############################
rf = RandomForestClassifier(n_estimators = 500, compute_importances = True, oob_score = True)
rf = rf.fit(training_examples, truths)
So let’s look at the variable importance scores as determined by one of my random forest models, which (unsurprisingly) consistently outperformed logistic regression.
The random forest classifier here is one of my earlier models (using a slightly smaller subset of my full suite of features), where the targeting step consisted of taking the top 25 nodes with the
highest propagation scores.
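For reference, pulling importance scores out of a fitted forest looks roughly like this in scikit-learn (a sketch on synthetic stand-in data; the feature names are illustrative, and newer scikit-learn versions expose importances via `feature_importances_` rather than the old `compute_importances` flag):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Stand-in data; in the contest this would be the edge-feature matrix.
X, y = make_classification(n_samples=200, n_features=4, n_informative=2,
                           n_redundant=0, random_state=0)
rf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

# Rank features by (Gini) importance, highest first.
names = ["ppr_score", "is_followed_by", "follower_sim", "followee_sim"]
ranked = sorted(zip(names, rf.feature_importances_),
                key=lambda pair: pair[1], reverse=True)
```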
We can see that the most important variables are:
• Personalized PageRank scores. (I put in both normalized and unnormalized versions, where the normalized versions consisted of taking all the candidates for a particular source node, and scaling
them so that the maximum personalized PageRank score was 1.)
• Whether the destination node already follows the source.
• How similar the source node is to the people the destination node is following, when each node is represented as a set of followers. (Note that this is more or less measuring how likely the
destination is to follow the source, which we already saw is a good predictor of whether the source is likely to follow the destination.) Plus several variations on this theme (e.g., how similar
the destination node is to the source node’s followers, when each node is represented as a set of followees).
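The normalization in the first bullet is simple to state precisely: within each source node's candidate set, divide every score by that set's maximum. A sketch (the function name is mine):

```python
def normalize_ppr(scores_by_source):
    """For each source node, rescale its candidates' personalized
    PageRank scores so that the maximum score is 1."""
    out = {}
    for source, candidates in scores_by_source.items():
        top = max(candidates.values())
        out[source] = {dest: s / top for dest, s in candidates.items()}
    return out
```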
Model Comparison
How do all of these models compare to each other? Is the random forest model universally better than the logistic regression model, or are there some sets of users for which the logistic regression
model actually performs better?
To enable these kinds of comparisons, I made a small module that allows you to select two models and then visualize their sliced performance.
(Go ahead, play around.)
Above, I bucketed all test nodes into buckets based on (the logarithm of) their number of followers, and compared the mean average precision of two algorithms: one that recommends nodes to follow
using a personalized PageRank alone, and one that recommends nodes that are following the source user but are not followed back in return.
We see that except for the case of 0 followers (where the “is followed by” algorithm can do nothing), the personalized PageRank algorithm gets increasingly better in comparison: at first, the two
algorithms have roughly equal performance, but as the source node gets more followers, the personalized PageRank algorithm dominates.
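The bucketed comparison can be sketched roughly as follows (the helper names are hypothetical; mean average precision is averaged over the test nodes falling in each bucket):

```python
import math
from collections import defaultdict

def average_precision(recommended, relevant):
    """AP of one ranked recommendation list against the true followed set."""
    hits, score = 0, 0.0
    for i, node in enumerate(recommended, start=1):
        if node in relevant:
            hits += 1
            score += hits / i
    return score / len(relevant) if relevant else 0.0

def map_by_follower_bucket(test_nodes, follower_count, recs, truth):
    """Bucket test nodes by the (integer) log of their follower count,
    then average AP within each bucket."""
    buckets = defaultdict(list)
    for node in test_nodes:
        b = int(math.log10(1 + follower_count[node]))
        buckets[b].append(average_precision(recs[node], truth[node]))
    return {b: sum(aps) / len(aps) for b, aps in sorted(buckets.items())}
```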
And here’s an embedded version you can interact with directly:
Admittedly, building a slicer like this is probably overkill for a Kaggle competition, where the set of variables is fairly limited. But imagine having something similar for a real world model, where
new algorithms are tried out every week and we can slice the performance by almost any dimension we can imagine (by geography, to make sure we don’t improve Australia at the expense of the UK; by
user interests, to see where we could improve the performance of topic inference; by number of user logins, to make sure we don’t sacrifice the performance on new users for the gain of the core).
Mathematicians do it with Matrices
Let’s switch directions slightly and think about how we could rewrite our computations in a different, matrix-oriented style. (I didn’t do this in the competition – this is more a preview of another
post I’m writing.)
Personalized PageRank in Scalding
Personalized PageRank, for example, is an obvious fit for a matrix rewrite. Here’s how it would look in Scalding’s new Matrix library:
(For those who don’t know, Scalding is a Hadoop framework that Twitter released at the beginning of the year; see my post on building a big data recommendation engine in Scalding for an introduction.)
Personalized PageRank, Matrix Style
// ***********************************************
// STEP 1. Load the adjacency graph into a matrix.
// ***********************************************

val following = Tsv(GraphFilename, ('user1, 'user2, 'weight))

// Binary matrix where cell (u1, u2) means that u1 follows u2.
val followingMatrix =
  following.toMatrix[Int, Int, Double]('user1, 'user2, 'weight)

// Binary matrix where cell (u1, u2) means that u1 is followed by u2.
val followerMatrix = followingMatrix.transpose

// Note: we could also form this adjacency matrix differently, by placing
// different weights on the following vs. follower edges.
val undirectedAdjacencyMatrix =
  (followingMatrix + followerMatrix).rowL1Normalize

// Create a diagonal users matrix (to be used in the "teleportation back
// home" step).
val usersMatrix =
  following.unique('user1)
    .map('user1 -> ('user2, 'weight)) { user1: Int => (user1, 1) }
    .toMatrix[Int, Int, Double]('user1, 'user2, 'weight)

// ***************************************************
// STEP 2. Compute the personalized PageRank scores.
// See http://nlp.stanford.edu/projects/pagerank.shtml
// for more information on personalized PageRank.
// ***************************************************

// Compute personalized PageRank by running for three iterations,
// and output the top candidates.
val pprScores = personalizedPageRank(usersMatrix, undirectedAdjacencyMatrix, usersMatrix, 0.5, 3)
pprScores.topRowElems(numCandidates).write(Tsv(OutputFilename))

/**
 * Performs a personalized PageRank iteration. The ith row contains the
 * personalized PageRank probabilities around node i.
 *
 * Note the interpretation:
 *   - with probability 1 - alpha, we go back to where we started.
 *   - with probability alpha, we go to a neighbor.
 *
 * Parameters:
 *   startMatrix - a (usually diagonal) matrix, where the ith row specifies
 *     where the ith node teleports back to.
 *   adjacencyMatrix - the (row-normalized) adjacency matrix of the graph.
 *   prevMatrix - a matrix whose ith row contains the personalized PageRank
 *     probabilities around the ith node.
 *   alpha - the probability of moving to a neighbor (as opposed to
 *     teleporting back to the start).
 *   numIterations - the number of personalized PageRank iterations to run.
 */
def personalizedPageRank(startMatrix: Matrix[Int, Int, Double],
                         adjacencyMatrix: Matrix[Int, Int, Double],
                         prevMatrix: Matrix[Int, Int, Double],
                         alpha: Double,
                         numIterations: Int): Matrix[Int, Int, Double] = {
  if (numIterations <= 0) {
    prevMatrix
  } else {
    val updatedMatrix = startMatrix * (1 - alpha) +
      (prevMatrix * adjacencyMatrix) * alpha
    personalizedPageRank(startMatrix, adjacencyMatrix, updatedMatrix, alpha, numIterations - 1)
  }
}
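For readers who would rather see the recursion numerically, here is a dense NumPy sketch of the same update, P ← (1 − α)·S + α·P·A, on toy-sized data (the point of the Scalding version, of course, is that it scales out):

```python
import numpy as np

def personalized_pagerank(adjacency, alpha=0.5, n_iter=3):
    """Row i of the result holds the personalized PageRank probabilities
    around node i; a dense mirror of the Scalding recursion above."""
    A = adjacency / adjacency.sum(axis=1, keepdims=True)  # rowL1Normalize
    S = np.eye(len(A))   # diagonal "users matrix": teleport back home
    P = S.copy()
    for _ in range(n_iter):
        P = (1 - alpha) * S + alpha * (P @ A)
    return P
```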
Not only is this matrix formulation a more natural way of expressing the algorithm, but since Scalding (by way of Cascading) supports both local and distributed modes, this code runs just as easily on a Hadoop cluster of thousands of machines (assuming our social network is orders of magnitude larger than the one in the contest) as on a sample of data on a laptop. Big data, big matrix style.
Cosine Similarity as L2-Normalized Multiplication
Here’s another example. Calculating cosine similarity between all users is a natural fit for a matrix formulation since, after all, the cosine similarity between two vectors is just their
L2-normalized dot product:
Cosine Similarity, Matrix Style
// A matrix where the cell (i, j) is 1 iff user i is followed by user j.
val followerMatrix = ...

// A matrix where cell (i, j) holds the cosine similarity between
// user i and user j, when both are represented as sets of their followers.
val followerBasedSimilarityMatrix =
  followerMatrix.rowL2Normalize * followerMatrix.rowL2Normalize.transpose
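The same identity, checked numerically in NumPy (rows are users, columns are followers):

```python
import numpy as np

def follower_based_similarity(follower_matrix):
    """Cell (i, j) = cosine similarity between users i and j, each
    represented as a (binary) vector over followers."""
    norms = np.linalg.norm(follower_matrix, axis=1, keepdims=True)
    normalized = follower_matrix / norms          # rowL2Normalize
    return normalized @ normalized.T
```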
A Similarity Extension
But let’s go one step further.
To change examples for ease of exposition: suppose you’ve bought a bunch of books on Amazon, and Amazon wants to recommend a new book you’ll like. Since Amazon knows similarities between all pairs of
books, one natural way to generate this recommendation is to:
1. Take every book B.
2. Calculate the similarity between B and each book you bought.
3. Sum up all these similarities to get your recommendation score for B.
In other words, the recommendation score for book B on user U is:
DidUserBuy(U, Book 1) * SimilarityBetween(Book B, Book 1) + DidUserBuy(U, Book 2) * SimilarityBetween(Book B, Book 2) + … + DidUserBuy(U, Book n) * SimilarityBetween(Book B, Book n)
This, too, is a dot product! So it can also be rewritten as a matrix multiplication:
// A matrix where cell (i, j) holds the similarity between books i and j.
val bookSimilarityMatrix = ...

// A matrix where cell (i, j) is 1 if user i has bought book j,
// and 0 otherwise.
val userPurchaseMatrix = ...

// A matrix where cell (i, j) holds the recommendation score of
// book j to user i.
val recommendationMatrix = userPurchaseMatrix * bookSimilarityMatrix
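A tiny numeric version of this multiplication, with toy numbers, just to show that each recommendation score is one row-times-column dot product:

```python
import numpy as np

# purchases[u, b] = 1 if user u bought book b, else 0.
purchases = np.array([[1.0, 0.0, 1.0],
                      [0.0, 1.0, 0.0]])
# similarity[i, j] = similarity between books i and j.
similarity = np.array([[1.0, 0.2, 0.8],
                       [0.2, 1.0, 0.1],
                       [0.8, 0.1, 1.0]])

# recommendation[u, b] = sum, over books u bought, of their similarity to b.
recommendation = purchases @ similarity
```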
Of course, there’s a natural analogy between this score and the feature I described a while back above, where I compute a similarity score between a destination node and a source node’s followees
(when all nodes are represented as sets of followers):
/**
 * Iterate over each of user1's followings, compute their similarity
 * with user2 when both are represented as sets of followers, and return
 * the sum of these similarities.
 */
def followerBasedSimilarityToFollowings(user1: Int, user2: Int)
    (implicit similarity: SimilarityMetric[Int]): Double = {
  getFollowingsWithout(user1, user2)
    .map { similarityByFollowers(_, user2)(similarity) }
    .sum
}

/**
 * The matrix version of the above function.
 *
 * Why are these the same? Note that the above function simply computes:
 *   DoesUserFollow(User A, User 1) * Similarity(User 1, User B) +
 *   DoesUserFollow(User A, User 2) * Similarity(User 2, User B) + ... +
 *   DoesUserFollow(User A, User n) * Similarity(User n, User B)
 */
val followingMatrix = ...
val followerBasedSimilarityMatrix =
  followerMatrix.rowL2Normalize * followerMatrix.rowL2Normalize.transpose

val followerBasedSimilarityToFollowingsMatrix =
  followingMatrix * followerBasedSimilarityMatrix
For people comfortable expressing their computations in a vector manner, writing your computations as matrix manipulations often makes experimenting with different algorithms much more fluid.
Imagine, for example, that you want to switch from L1 normalization to L2 normalization, or that you want to express your objects as binary sets rather than weighted vectors. Both of these become simple one-line changes when you have vectors and matrices as first-class objects, but are much more tedious (especially in the MapReduce setting this matrix library was designed for!) when you don't.
Finish Line
By now, I think I’ve spent more time writing this post than on the contest itself, so let’s wrap up.
I often get asked what kinds of tools I like to use, so for this competition my kit consisted of:
• Scala, for code that needed to be fast (e.g., extracting features) or that I was going to run repeatedly (e.g., scoring my validation set).
• Python, for my machine learning models, because scikit-learn is awesome.
• Ruby, for quick one-off scripts.
• R, for some data analysis and simple plotting.
• Coffeescript and d3, for the interactive visualizations.
Finally, I put up a Github repository containing some code, and here are a couple other posts I’ve written that people who like this entry might also enjoy:
How to Play Smart When You Play the Lottery
Submitted by BruceFrey
Posted Oct 31 2009 11:56 AM
Your odds of winning a big prize in a giant lottery are really, really small, no matter how you slice it. You do have some control over your fate, however. Here are some ways to give yourself an
advantage (albeit slight) over all the other lotto players who haven't bought this book.
In October of 2005, the biggest Powerball lottery winner ever was crowned and awarded $340 million. It wasn't me. I don't play the lottery because, as a statistician, I know that playing only
slightly increases my chances of winning. It's not worth it to me.
Of course, if I don't play, I can't win. Buying a lottery ticket isn't necessarily a bad bet, and if you are going to play, there are a few things you can do to increase the amount of money you will
win (probably) and increase your chances of winning (possibly). Whoever bought the winning $340 million ticket in Jacksonville, Oregon, that October day likely followed a few of these winning
strategies, and you should too.
Because Powerball is a lottery game played in most U.S. states, we will use it as our example. This hack will work for any large lottery, though.
Powerball, like most lotteries, asks players to choose a set of numbers. Random numbers are then drawn, and if you match some or all of the numbers, you win money! To win the biggest prizes, you have
to match lots of numbers. Because so many people play Powerball, many tickets are sold, and the prize money can get huge.
Of course, correctly picking all the winning numbers is hard to do, but it's what you need to do to win the jackpot. In Powerball, you choose five numbers and then a sixth number: the red powerball.
The regular white numbers can range from 1 to 55, and the powerball can range from 1 to 42. Table 4.14 shows the different combinations of matches that result in a prize, the amount of the prize, and
the odds and probability of winning the prize.
Table 4.14. Powerball payoffs
│ Match │ Cash │ Odds │ Percentage │
│Powerball only │$3 │1 in 69 │1.4 percent │
│1 white ball and the powerball │$4 │1 in 127 │0.8 percent │
│3 white balls │$7 │1 in 291 │0.3 percent │
│2 white balls and the powerball │$7 │1 in 745 │0.1 percent │
│3 white balls and the powerball │$100 │1 in 11,927 │0.008 percent │
│4 white balls │$100 │1 in 14,254 │0.007 percent │
│4 white balls and the powerball │$10,000 │1 in 584,432 │0.0002 percent │
│5 white balls │$200,000 │1 in 3,563,609 │0.00003 percent │
│5 white balls and the powerball │Grand prize│1 in 146,107,962│0.0000006 percent│
Armed with all the wisdom you likely now have as a statistician (unless this is the first hack you turned to in this book), you might have already made a few interesting observations about this
payoff schedule.
The easiest prize to win is the powerball only match, and even then there are slim chances of winning. If you match the powerball (and no other numbers), you win $3. The chances of winning this prize
are about 1 in 69.
This is not a good bet by any reasonable standard. It costs a dollar to buy a ticket, to play one time, and the expected payout schedule is $3 for every 69 tickets you buy. So, on average, after 69
plays you will have won $3 and spent $69.
Actually, your payoff will be a little better than that. The odds shown in Table 4.14 are for making a specific match and not doing any better than that. Some proportion of the time when you match
the powerball, you will also match a white ball and your payoff will be $4, not $3. Choosing five white ball numbers and matching at least 1 will happen 39 percent of the time.
So, after having matched the powerball, you have a little better than a third chance of hitting at least one white ball as well. Even so, your expected payoff is about $3.39 for every $69 you throw
down that rat hole (I mean, spend on the lottery), which is still not a good bet.
The odds for the powerball only match don't seem quite right. I said there were 42 different numbers to choose from for the powerball, so shouldn't there be 1 out of 42 chances to match it, not 1 in 69?
Yes, but remember this shows the chances of hitting that prize only and not doing better (by matching some other balls). Your odds of winning something, anything, if you combine all the winning
permutations together are 1 in 37, about 3 percent. Still not a good bet.
The odds for the grand prize don't seem quite right either. (Okay, okay, I don't really expect you to have "noticed" that. I didn't either until I did a few calculations.)
If there are 5 draws from the numbers 1 to 55 (the white balls) and 1 draw from the numbers 1 to 42 (the red ball), then a quick calculation would estimate the number of possibilities as:

55 × 55 × 55 × 55 × 55 × 42 = 21,137,943,750

In other words, the odds are 1 out of 21,137,943,750. Or, if you were thinking a little more clearly, realizing that the number of balls gets smaller as they are drawn, you might speedily calculate the number of possible outcomes as:

55 × 54 × 53 × 52 × 51 × 42 = 17,532,955,440

But the odds as shown are somewhat better than 1 out of 17.5 billion. The first time I calculated the odds, I didn't keep in mind that the order doesn't matter, so any of the remaining chosen numbers could come up at any time. Hence, here's the correct series of calculations:

(55 × 54 × 53 × 52 × 51) / (5 × 4 × 3 × 2 × 1) × 42 = 3,478,761 × 42 = 146,107,962
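These three counts are easy to check mechanically; a quick sketch:

```python
from math import comb, prod

# Naive overcount: repeats allowed, order matters.
naive = 55 ** 5 * 42
# Better: no repeats, but order (wrongly) still matters.
ordered = prod(range(51, 56)) * 42
# Correct: 5 unordered white balls out of 55, times 42 possible powerballs.
correct = comb(55, 5) * 42
```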
OK, Mr. Big Shot Stats Guy (you are probably thinking), you're going to tell us that we should never play the lottery because, statistically, the odds will never be in our favor. Actually, using the
criteria of a fair payout, there is one time to play and to buy as many tickets as you can afford.
In the case of Powerball, you should play anytime the grand prize increases to past $146,107,962 (or double that amount if you want the lump sum payout). As soon as it hits $146,107,963, buy, buy,
buy! Because the chances of matching five white balls and the one red ball are exactly one out of that big number, from a statistical perspective, it is a good bet anytime your payout is bigger than
that big number.
For Powerball and its number of balls and their range of values, 146,107,962 is the magic number. The idea that your chances of winning haven't changed but the payoff amount has increased to a level
where playing is worthwhile is similar to the concept of pot odds in poker.
You can calculate the "magic number" for any lottery. Once the payoff in that lottery gets above your magic number, you can justify a ticket purchase. Use the "correct series" of calculations in our
example for Powerball as your mathematical guide. Ask yourself how many numbers you must match and what the range of possible numbers is. Remember to lower the number you divide by one each time you
"draw" out another ball or number, unless numbers can repeat. If numbers can repeat, then the denominator stays the same in your series of multiplications.
One important hint about deciding when to buy lottery tickets has to do with determining the actual magic number, the prize amount, which triggers your buying spree. The amount that is advertised as
the jackpot is not, in fact, the jackpot. The advertised "jackpot" is the amount that the winner would get over a period of years in a regular series of smaller portions of that amount. The real
jackpot—the amount you should identify as the payout in the gambling and statistical sense—is the amount that you would get if you chose the one lump sum option. The one lump sum is typically a
little less than half of the advertised jackpot amount.
So, if you have determined that your lottery has grown a jackpot amount that says it is now statistically a good time to play, how many tickets should you buy? Why not buy one of each? Why not spend
$146,107,962 and buy every possible combination? You are guaranteed to win. If the jackpot is greater than that amount, then you'll make money, guaranteed, right? Well, actually not. Otherwise, I'd
be rich and I would never share this hack with you. Why wouldn't you be guaranteed to win? The problem is that you might be forced to...wait for it...split the prize! Argh! See the next section...
If you do win the lottery, you'd like to be the only winner, so in addition to deciding when to play, there are a variety of strategies that increase the likelihood that you'll be the only one who
picked your winning number.
First off, I'm working under the assumption that the winning number is randomly chosen. I tend not to be a conspiracy theorist, nor do I believe that God has the time or inclination to affect the
drawing of winning lottery numbers, so I'm going to not list any strategy that would work only if there were not randomness in the drawing of lottery numbers. Here are some more reasonable tips to
consider when picking your lottery numbers:
Let the computer pick
Let the computer do the picking, or, at least, choose random numbers yourself. Random numbers are less likely to have meaning for any other player, so they are less likely to have chosen them on
their own tickets. The Powerball people report that 70 percent of all winning tickets are chosen randomly by the in-store computer. (They also point out, in a bit of "We told you that results are
random" whimsy that 70 percent of all tickets purchased had numbers generated by the computer.)
Don't pick dates
Do not pick numbers that could be dates. If possible, avoid numbers lower than 32. Many players always play important dates, such as birthdays and anniversaries, prison release dates, and so on.
If your winning number could be someone's lucky date, that increases the chance that you will have to split your winnings.
Stay away from well-known numbers
Do not pick numbers that are well known. In the big October 2005 Powerball results, hundreds of players chose numbers that matched the lottery ticket numbers that play a large role in the popular
fictional TV show Lost. None of these folks won the big prize, but if they had, they would have had to divide the millions into hundreds of slices.
There is also a family of purely philosophical tips that have to do with abstract theories of cause and effect and the nature of reality. For example, some philosophers would say to pick last week's
winning numbers. Because, while you might not know for sure what is real and what can and cannot happen in this world, you do know that, at least, it is possible for last week's numbers to be this
week's winning numbers. It happened before; it can happen again.
Though your odds of winning a giant lottery prize are slim, you can follow some statistical principles and do a few things to actually control your own destiny. (The word for destiny in Italian, by
the way, is lotto.) Oh, and one more thing: buy your ticket on the day of the drawing. If too much time passes between your purchase and the announcement of the winning numbers, you have a greater
likelihood of being hit by lightning, drowning in the bathtub, or being struck by a minivan than you do of winning the jackpot. Timing is everything, and I'd hate for you to miss out.
Learn more about this topic from Statistics Hacks.
Math Question
Discus: SAT/ACT Tests and Test Preparation: May 2003 Archive: Math Question
Does anyone have an explanation to #25 in 10 real SAT's, November '96 test.. Section 2...
I couldn't type it here, it includes a picture..
muchas gracias
I got a 770 math score on this one... too bad I missed this one, I could've scored a 790!
Also, #10 in the same section:
10. If a line L is perpendicular to a segment AB at point E and AE=EB, how many points on line L are the same distance from point A as from point B?
E)All points
Answer - E
I just don't understand what they are asking.
These are the only ones I missed
that's too easy
well, think about it... draw any point on line L and measure its distance from point A and from point B. It's the same distance. There are an infinite # of points possible that'll work
I just took this test today too! Perpendicular means a line intersecting another, creating four 90 degree angles. It says line L is perpendicular through AB at point E and also that AE=EB. E is the
midpoint, so E is equidistant from point A to point B. Line L is perpendicular to line AB so any given point on line L is equidistant from both point A and B.
That probably sounds jumbled, but its hard to explain without being able to draw a picture to show you, but if still have any questions ask me and i'll clear things up.
I guess tiger said it better...
I thought #25 was hard too.. I got it with like seconds to spare...plug in each answer choice and work out the rest of the angles on the picture. To make it easier right now, start with choice E cuz
that one is right!
If you're still confused ask me a specific question. I'll help you out.
bart, what did you score?
one wrong, 800.
I got a 770... I really should watch out for those tricky questions...
Determinant of an updated Covariance matrix
I am faced with the following problem :
Originally (at time 0) I have a number of data samples $x^0_{1...n}$ (normalised: $E[x] = 0, Var[x] = 1$) from which I have calculated the covariance matrix $C^0 = X^T X$ (where $X$ is the matrix of
data samples), and the corresponding determinant $|C^0|$ (I could also store any and all minors as necessary).
Given this information I would like to perform the following iterative process incurring the smallest computational cost possible :
At time $t+1$ I am presented with a new data sample $x^{t+1}_{new}$ (similarly normalised) which can replace any of my existing data samples. Thus if I discard example $x^t_k$ in favour of this new
sample, I have a new covariance matrix $C^{t+1}_{k,new}$. I would like to calculate (given $C^0$, its minors and determinant), for all $t$, $\operatorname{argmax}_k |C^{t+1}_{k,new}|$.
Note that at each time step $t+1$, if I decide to discard $x^t_k$ in favour of $x^{t+1}_{new}$ then $x^{t+1}_k = x^{t+1}_{new}$ .
My question is, is there a method to calculate the determinants without incurring a cost of $n^3$ per determinant per time step?
Thanks for the help.
linear-algebra st.statistics matrices
1 Answer
First compute a Cholesky factorization of the covariance matrix. Now your tentative new covariance matrix is a rank-2 update of the old, $$ M_{new}=M_{old}+\frac{1}{n}x_{new}x_{new}^T-\frac{1}{n}x_{old}x_{old}^T. $$ You can use Sylvester's formula here to compute the determinant of the update; for this you'll only need to solve a linear system, which is $O(n^2)$ using your Cholesky factorization.
Then when your "replacement" takes place for real you just have to update the Cholesky factorization, and there are algorithms to do that (low-rank updates of Cholesky factorization) in $O(n^2)$ as well. Check Matlab's cholupdate for instance.
So you pay $O(n^3)$ at the first step and then $O(n^2)$ per step.
EDIT: better and clearer algorithm
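Here is an illustration of the rank-2 update trick in plain Python (my own sketch, not the answerer's code; the generic `solve` below stands in for the two $O(n^2)$ triangular solves one would actually do with a stored Cholesky factor):

```python
import random

def det(A):
    # determinant by Gaussian elimination with partial pivoting
    M = [row[:] for row in A]
    n, d = len(M), 1.0
    for i in range(n):
        p = max(range(i, n), key=lambda r: abs(M[r][i]))
        if p != i:
            M[i], M[p] = M[p], M[i]
            d = -d
        d *= M[i][i]
        for r in range(i + 1, n):
            f = M[r][i] / M[i][i]
            M[r] = [M[r][c] - f * M[i][c] for c in range(n)]
    return d

def solve(A, b):
    # solve A x = b; stands in for the cheap triangular solves one
    # would do with a precomputed Cholesky factor of A
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for i in range(n):
        p = max(range(i, n), key=lambda r: abs(M[r][i]))
        M[i], M[p] = M[p], M[i]
        for r in range(i + 1, n):
            f = M[r][i] / M[i][i]
            M[r] = [M[r][c] - f * M[i][c] for c in range(n + 1)]
    x = [0.0] * n
    for i in reversed(range(n)):
        x[i] = (M[i][n] - sum(M[i][c] * x[c] for c in range(i + 1, n))) / M[i][i]
    return x

random.seed(1)
n, m = 5, 20
X = [[random.gauss(0, 1) for _ in range(n)] for _ in range(m)]
C = [[sum(X[k][i] * X[k][j] for k in range(m)) for j in range(n)] for i in range(n)]

x_old, x_new = X[3], [random.gauss(0, 1) for _ in range(n)]

# Matrix determinant lemma for C_new = C + x_new x_new^T - x_old x_old^T:
#   det(C_new) = det(C) * det(I_2 + V^T C^{-1} U),  U = [x_new | x_old],
#   V = [x_new | -x_old], so U V^T is exactly the rank-2 update.
U = [[x_new[i],  x_old[i]] for i in range(n)]
V = [[x_new[i], -x_old[i]] for i in range(n)]
CinvU = list(zip(*[solve(C, [U[i][j] for i in range(n)]) for j in range(2)]))
S = [[(1.0 if i == j else 0.0) + sum(V[k][i] * CinvU[k][j] for k in range(n))
      for j in range(2)] for i in range(2)]
det_fast = det(C) * (S[0][0] * S[1][1] - S[0][1] * S[1][0])

C_new = [[C[i][j] + x_new[i] * x_new[j] - x_old[i] * x_old[j]
          for j in range(n)] for i in range(n)]
assert abs(det_fast - det(C_new)) < 1e-6 * abs(det(C_new))
```

The point is that `det_fast` needs only `det(C)` (already known from the previous step), a couple of linear solves, and a 2x2 determinant, instead of a fresh $O(n^3)$ factorization of `C_new`.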
Not the answer you're looking for? Browse other questions tagged linear-algebra st.statistics matrices or ask your own question. | {"url":"http://mathoverflow.net/questions/96239/determinant-of-an-updated-covariance-matrix","timestamp":"2014-04-18T23:20:48Z","content_type":null,"content_length":"50309","record_id":"<urn:uuid:24b202a1-87e3-4137-bc43-4e512b998f3d>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00034-ip-10-147-4-33.ec2.internal.warc.gz"} |
Mead, CO Science Tutor
Find a Mead, CO Science Tutor
...Please feel free to contact me with any questions you may have. I look forward to helping you achieve success in your studies!I have taught organic chemistry at the high school level for the
past seven years. It was part of the curriculum for a survey of chemistry class.
7 Subjects: including biology, chemistry, anatomy, physical science
...In addition, afternoon and evening hours were spent working individually with students of all types in credit accrual, subject tutoring, and test preparation. I teach Math through Algebra II
and some Trigonometry and Pre-Calculus; All science except advanced Physics; any history, and test prepar...
35 Subjects: including ACT Science, public speaking, ACT Math, ecology
...My concentration was in Biomedical sciences so my classes leaned more towards the medical side of biology. I have taken the courses Anatomy & Physiology with lab, Advanced Anatomy & Physiology
with lab, and Comparative vertebrate morphogenesis with lab. I have also completed dissections in sharks and cats.
13 Subjects: including ecology, biology, microbiology, anatomy
...My dissertation focused on evolutionary ecology and population genetics, and today I am an evolutionary developmental biologist. I have taught many classes at the college level including
general biology, genetics, conservation biology and non-majors biology. I also have extensive research experience in molecular biology and genetics.
13 Subjects: including physical science, chemistry, zoology, algebra 1
...I have home schooled both children (ages 8 and 11) and they excel in elementary math; both test above grade level. Recently, I also tutored a high school senior in AP Calculus. I have spent
years tutoring math and science, as well as volunteering with children of all ages.
7 Subjects: including physical science, chemistry, algebra 1, elementary math
| {"url":"http://www.purplemath.com/Mead_CO_Science_tutors.php","timestamp":"2014-04-21T11:01:43Z","content_type":null,"content_length":"23756","record_id":"<urn:uuid:c8a4fa62-d1c1-4740-9401-d56104fb88ec>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00257-ip-10-147-4-33.ec2.internal.warc.gz"} |
A Growing Affinity
THE CALCULUS OF FRIENDSHIP: What a Teacher and a Student Learned about Life While Corresponding about Math. Steven Strogatz. xiv + 166 pp. Princeton University Press, 2009. $19.95.
Take a minute to think back a few years—okay, maybe a few more than that—to your high school days. Think past the awkward dances, the tortured relationships, the overhyped football games, to your
high school math teachers. Who were they? What were their lives like? What did they do when they weren’t teaching you how to factor, what a logarithm is or how to take derivatives? What are they up
to now?
Unlike many of us, Steven Strogatz can actually answer these questions, at least with regard to his high school calculus teacher, Don “Joff” Joffray. Strogatz shares those answers and much more in
The Calculus of Friendship. Part biography, part autobiography and part off-the-beaten-path guide to calculus, this quick read details 30 years of correspondence between Strogatz (who is now Jacob
Gould Schurman Professor of Applied Mathematics at Cornell University) and Joffray.
Calculus, Isaac Newton’s ingenious invention for modeling change mathematically, serves as both text and subtext for the letters that pass between Strogatz and Joff. Focusing almost exclusively on
questions of mathematics, these brief notes frame the unlikely friendship of a teacher and his star student. With the precision of an award-winning mathematician and the clarity of a best-selling
science author, Strogatz leads us on an excursion through some of the lesser-known mathematical sights—the ones usually reserved for the “members only” tour. All the while, we see the relationship
between the two men gradually change as they slowly (and I do mean slowly) break down the walls that appropriately separate teacher from student.
The mathematics covered in these letters is impressive for such a short volume—and for an exchange with a high school math teacher. The two men discuss pursuit problems. For instance, a dog starting
at (0,1) in the x,y plane chases a man running at the same speed along the x-axis, with the dog swerving in such a way that it is always running straight toward the man’s current location. What is
the equation for the curve traced by the dog’s path? Strogatz regales Joff with a geometrical proof that the square root of 2 is irrational. He recalls Joff telling the class about a former student
finding a formula for the nth term of the Fibonacci sequence (which begins 1, 1, 2, 3, 5, 8, 13, ...): Binet's formula, F(n) = (φ^n − ψ^n)/√5, where φ = (1+√5)/2 and ψ = (1−√5)/2.
Then Strogatz provides a succinct two-page proof. One of Joff’s later students asks whether the series
converges, leading to a discussion of Fourier series (and the surprising sum of the series).
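The closed form for the nth Fibonacci number is Binet's formula, F(n) = (φ^n − ψ^n)/√5 with φ = (1+√5)/2 and ψ = (1−√5)/2; it is easy to verify numerically (my own sketch, not from the book):

```python
import math

phi = (1 + math.sqrt(5)) / 2   # golden ratio
psi = (1 - math.sqrt(5)) / 2   # conjugate root of x^2 = x + 1

def binet(n):
    # psi**n shrinks fast, so rounding removes the floating-point noise
    return round((phi ** n - psi ** n) / math.sqrt(5))

fib = [1, 1]
while len(fib) < 30:
    fib.append(fib[-1] + fib[-2])

assert [binet(n) for n in range(1, 31)] == fib
```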
Although the pair’s mathematical excursions pay off every two or three pages, the story of their relationship takes longer to develop. Five chapters into the book, you may feel like responding to the
book’s subtitle—What a Teacher and a Student Learned about Life While Corresponding about Math—with a resounding “Nothing!” Only Strogatz’s criticisms of the younger, more self-absorbed version of
himself keep this strand of the story alive, as when he admits the shame of never having acknowledged the death of Joff’s oldest son.
Over the years, their correspondence begins to include news of their lives along with the interesting mathematical tidbits. This broadening of their relationship comes just in time—another cold
mathematical exchange might have convinced us that mathematicians hail not from Mars or Venus but from Pluto. In the last few chapters, the two men write (and in some cases talk) about Strogatz’s
career path, Martin Gardner’s mountain-climbing-monk problem, Joff’s windsurfing, a clever use of dimensional analysis, Joff’s teaching award, and where to put the fuel gauge marks for a cylindrical
gas tank. The growing affinity of these two for each other as people, and not just as sources of mathematical knowledge, provides a more interesting frame for the still-captivating mathematical content.
Eventually Strogatz works up the nerve to travel to Joff’s home to discuss some of the personal things their letters glossed over. An account of the visit brings the book to a satisfying, if slightly
sappy, conclusion. Finally the author’s nonmathematical observations rise to the level of his mathematical ones, as in this wonderful passage about the complicated relationship between a teacher and
a gifted student:
I’m starting to realize what it was that he gave me.
He let me teach him.
Before I had any students, he was my student.
Somehow he knew that’s what I needed most. And he let me, and encouraged me, and helped me. Like all great teachers do.
As every educator knows, in the relationship between teacher and student, the learning happens in both directions.
David T. Kung is associate professor and chair of the department of mathematics and computer science at St. Mary’s College of Maryland. His interests include harmonic analysis, math education and the
| {"url":"http://www.americanscientist.org/bookshelf/pub/a-growing-affinity","timestamp":"2014-04-20T03:30:22Z","content_type":null,"content_length":"115382","record_id":"<urn:uuid:af8b65fb-83ff-411e-820c-17d639a5a206>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00027-ip-10-147-4-33.ec2.internal.warc.gz"} |
double integrals in polar coordinates
December 8th 2010, 08:26 PM #1
Dec 2010
I have to use a double integral in polar coordinates to find the area of the region inside the circle x^2 + y^2 = 4 and to the right of the line x =1.
I have it all graphed and I know how to do double integrals but I am confused as to what I am integrating, it is probably really simple but I was sick the day we went over this section in class
so I have no idea what I am doing.
Please help
Start by writing your double integral in cartesians first.
First, where do the two graphs intersect? When $\displaystyle x = 1$.
So $\displaystyle 1^2 + y^2 = 4$
$\displaystyle 1 + y^2 = 4$
$\displaystyle y^2 = 3$
$\displaystyle y = \pm \sqrt{3}$.
So if you take horizontal strips, your $\displaystyle x$ values range from $\displaystyle 1$ to the graph $\displaystyle x = \sqrt{4 - y^2}$, and your $\displaystyle y$ values range from $\displaystyle -\sqrt{3}$ to $\displaystyle \sqrt{3}$.
So what is your double integral in cartesians?
Last edited by Prove It; December 9th 2010 at 02:41 PM.
double integrals in cartesian coordinates
I have to use a double integral in polar coordinates to find the area of the region inside the circle x^2 + y^2 = 4 and to the right of the line x =1.
I have it all graphed and I know how to do double integrals but I am confused as to what I am integrating, it is probably really simple but I was sick the day we went over this section in class
so I have no idea what I am doing.
Please help
For future problems you can reference my tutorial on double integration at the top of this forum.
I'll offer a different approach to Prove It (actually it's the same approach just worded differently) and outline my approach via steps so that you can follow the same process for other
questions. Though I wouldn't write this in cartesian coordinates before moving to polar, I would go straight to polar.
Step 1: Draw the Domain
In this case we have a circle of radius 2 and a vertical line at x = 1. If we want the area to the right of this then we are looking at two quarter circles. For simplicity we will compute the
area of 1 of these quarters and multiply by 2.
Step 2: Set up the integral in general terms
$\iint_D f(x,y) \, dA$
Note that before we can approach this problem we need to find a set of bounds. Well, these bounds depend on which way we want to integrate our function.
This actually has some complexity to it because we are computing this integral in polar co-ordinates.
Step 3: Find Bounds
Let's look at the area of a regular circle of radius 2.
$A = \int_0^{2 \pi } d \theta \int_0^2 rdr = \pi (2^2)$
But in our case we have a nice line going right up the middle and our radius is no longer constant. If you draw a line from the origin to the radius of the circle you'll see that it crosses the
line x=1. Well, we dont want anything before that line so if we make a triangle we see that
$\cos \theta = \frac{ 1 }{R} \to R = \frac{1}{\cos \theta}$
Well this is good, so we now have our bounds for R
$\frac{1}{\cos \theta } \le R \le 2$
We now need to evaluate theta. Well clearly we start at 0 (defined from the x-axis) but where do we end? Well we end where the line x=1 intersects the circle. This happens when x=1 so when we sub
x=1 into our circle, we get the y bound of $y = \sqrt{3}$. Now to get the theta angle we have a triangle with a height of square root 3 and a width of 1.
$\tan \theta = \frac{ \sqrt{3} }{1} \to \theta = 60^\circ$
So theta runs from $0^\circ$ to $60^\circ$, i.e. from $0$ to $\frac{\pi}{3}$.
Step 4: Put it all together
$A = 2 \int_0^{ \frac{ \pi }{3} } d \theta \int_{ \frac{1}{\cos \theta} }^2 r \, dr$
thank you both for your help! <3
I also realise I have made a mistake which you have taken through to your working - the $\displaystyle x$ values range from $\displaystyle +1$ to $\displaystyle \sqrt{4 - y^2}$.
Also, if you are taking horizontal strips, then you are integrating with respect to $\displaystyle x$ first. So that means the double integral should be
$\displaystyle \int_{-\sqrt{3}}^{\sqrt{3}}{\int_{1}^{\sqrt{4-y^2}}{1\,dx}\,dy}$. Now follow Allan's steps to convert to polars.
Epilogue: You don't even need to convert to polars:
$\displaystyle \int_{-\sqrt{3}}^{\sqrt{3}}{\int_{1}^{\sqrt{4-y^2}}{\,dx}\,dy} = \int_{-\sqrt{3}}^{\sqrt{3}}{\left[x\right]_1^{\sqrt{4-y^2}}\,dy}$
$\displaystyle = \int_{-\sqrt{3}}^{\sqrt{3}}{\sqrt{4 - y^2} - 1\,dy}$
$\displaystyle = \int_{-\sqrt{3}}^{\sqrt{3}}{\sqrt{4 - y^2}\,dy} - \int_{-\sqrt{3}}^{\sqrt{3}}{1\,dy}$
Now you can make a trigonometric substitution to solve.
Yes when I was working it out (before allan's post) I found that slight mistake
again, thank you for all of your help.
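For anyone wanting to double-check the thread numerically: the integral works out to $\frac{4\pi}{3} - \sqrt{3} \approx 2.457$, and both the polar and the Cartesian set-ups agree with it. A quick sketch (mine, not from the thread):

```python
import math

exact = 4 * math.pi / 3 - math.sqrt(3)   # closed-form area, ~2.4567

def midpoint_quad(f, a, b, steps=100000):
    # composite midpoint rule
    h = (b - a) / steps
    return sum(f(a + (i + 0.5) * h) for i in range(steps)) * h

# Polar set-up: A = 2 * int_0^{pi/3} int_{sec t}^{2} r dr dt
#             = int_0^{pi/3} (4 - sec^2 t) dt
polar = midpoint_quad(lambda t: 4 - 1 / math.cos(t) ** 2, 0.0, math.pi / 3)

# Cartesian set-up: A = int_{-sqrt3}^{sqrt3} (sqrt(4 - y^2) - 1) dy
cart = midpoint_quad(lambda y: math.sqrt(4 - y * y) - 1,
                     -math.sqrt(3), math.sqrt(3))

assert abs(polar - exact) < 1e-6
assert abs(cart - exact) < 1e-6
```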
| {"url":"http://mathhelpforum.com/calculus/165764-double-integrals-polar-coordinates.html","timestamp":"2014-04-18T06:54:59Z","content_type":null,"content_length":"58676","record_id":"<urn:uuid:33d32a34-4945-41e9-81cb-bf9973fab286>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00423-ip-10-147-4-33.ec2.internal.warc.gz"} |
A Recurrence Relation
February 7th 2009, 05:03 PM #1
MHF Contributor
May 2008
A Recurrence Relation
Define the function $f: \mathbb{N} \longrightarrow \mathbb{N}$ recursively by: $\sum_{d \mid n}f(d)=2^n, \ \forall n \in \mathbb{N}.$ Prove that $f(n)$ is divisible by $n$ for all $n \in \mathbb{N}$.
A very small part of the answer
$\sum_{d \mid 1}f(d)=f(1)=2$
If n is prime $\sum_{d \mid n}f(d)=f(1)+f(n)=2^n$ therefore $f(n)=2^n-2$ which is divisible by n according to Fermat's little theorem
Fermat's little theorem - Wikipedia, the free encyclopedia
First, that recurrence relation completely determines the numbers $f(n)$, and they are in fact given by: $f(n) = \sum_{d \mid n} \mu\left(\tfrac{n}{d}\right) \cdot 2^d$ - by Möbius' inversion formula.
Okay, but the path we will take is rather different. Consider the primitive bit strings of length (*) $n$: their number satisfies the same recurrence relation, hence it must equal $f(n)$ - from what I said above.
(*) A string is not primitive if it is a concatenation of smaller (all of them equal) strings. For example $1010$ is not a primitive string, since it's made out of $10$.
Take $110$ for instance; this is a primitive string of bits of length 3. Next note that if we 'move' the places by 1 to the right (and the last one goes to the first, a sort of circular permutation) we get $011$, which is another primitive string of length 3, and if we do it again we get $101$, which is again a primitive string. Note also that if we do it again we get back to the initial one.
So this suggests that we can partition the set of primitive strings of length $n$ into sets of $n$ elements in which each of the primitive strings is obtained by applying the operation -the
operation has a name but it's on the tip of my tongue, oh well, I'll call it Circular permutation- we have just used repeatedly to a certain primtive string. (clearly an equivalence relation)
First, it's easy to see that this operation applied to a primitive string generates another ( suppose the contrary and you'll get a contradiction by using a method similar to the one below.)
We have to show that indeed we have sets of $n$ different strings, always.
So, suppose we have a primitive string of length $n$, which we will associate with a function $\delta : \mathbb{Z} \to \{0,1\}$, where $\delta(i)$ is what's in the place $i \pmod{n}$ if $i \not\equiv 0 \pmod{n}$, and what's in the place $n$ if $i \equiv 0 \pmod{n}$.
Suppose that applying our operation repeatedly $s<n$ times gets us back to the same initial string; this is equivalent to the condition $\delta(i+s) = \delta(i)$, and then $\delta(i + s \cdot m) = \delta(i)$ for all $m \in \mathbb{Z}$.
Now consider $M = \min\{k \in \mathbb{Z}^+ : \delta(i) = \delta(i+k) \ \forall i \in \mathbb{Z}\}$ and the integer division $n = M \cdot q + r$ with $q, r \in \mathbb{Z}$, $0 \leq r < M$. Since $n$ itself belongs to this set, we get $\delta(i) = \delta(i + (M \cdot q + r) \cdot h) = \delta(i + r \cdot h)$ for all $h \in \mathbb{Z}$. Hence, if $r$ were not $0$, we would have $r$ in the set with $r < M$, a contradiction; so $r=0$ and $M \mid n$.
But since $M \leq s < n$, this implies that our primitive string is a concatenation of $n/M > 1$ smaller, equal strings of length $M$, thus it's not a primitive string: CONTRADICTION
Now if $\xi$ represents a string of length $n$, let $D \xi$ be the string of length $n$ resulting from doing the operation on $\xi$.
We've just seen that, for any primitive string $\varepsilon$ of length $n$, $D^i \varepsilon \neq \varepsilon$ for all $0 < i < n$. (1)
So suppose that for a primitive string $\xi$ the strings $D^0 \xi, \ldots, D^{n-1} \xi$ are not all different, that is $D^i \xi = D^j \xi$ for some $n>j>i\geq 0$; then $D^i \xi = D^j \xi = D^{j-i}\left(D^i \xi\right)$ and we get a contradiction by using (1), since we know that $D^i \xi$ is also a primitive bit string of length $n$.
Therefore the claim is proven since we have $f(n) = n \cdot S$ where $S$ is the number of equivalence classes.
Last edited by PaulRS; February 9th 2009 at 08:22 AM.
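Both descriptions of $f$, the Möbius-inversion formula and the divisor-sum recurrence, together with the divisibility claim, are easy to check numerically. A small sketch (mine, not from the thread):

```python
def mobius(n):
    # Möbius function by trial division
    result, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            n //= p
            if n % p == 0:   # square factor
                return 0
            result = -result
        p += 1
    if n > 1:
        result = -result
    return result

def f(n):
    # Möbius inversion of sum_{d|n} f(d) = 2^n
    return sum(mobius(n // d) * 2 ** d for d in range(1, n + 1) if n % d == 0)

for n in range(1, 16):
    # the original recurrence holds ...
    assert sum(f(d) for d in range(1, n + 1) if n % d == 0) == 2 ** n
    # ... and n divides f(n)
    assert f(n) % n == 0
```

Here $f(n)/n$ is the number of rotation classes of primitive bit strings of length $n$ (e.g. $f(6) = 54$, giving $9$ classes); by NonCommAlg's remark it also counts the monic irreducible polynomials of degree $n$ over $\mathbb{F}_2$.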
that's an interesting combinatorial approach PaulRS. the problem has also an algebraic solution:
Fact: the number of monic irreducible polynomials of degree $n$ over a finite field with $p$ elements, where $p$ is prime, is: $\frac{1}{n}\sum_{d \mid n} \mu \left(\frac{n}{d} \right)p^d.$
i encourage anyone who's new to algebra (and number theory) to think about this beautiful result and see if they can prove it!
Last edited by NonCommAlg; February 9th 2009 at 07:13 PM.
that's an interesting combinatorial approach PaulRS. the problem has also an algebraic solution:
Fact: the number of irreducible polynomials of degree $n$ over a finite field with $p$ elements, where $p$ is prime, is: $\frac{1}{n}\sum_{d \mid n} \mu \left(\frac{n}{d} \right)p^d.$
i encourage anyone who's new to algebra (and number theory) to think about this beautiful result and see if they can prove it!
Here is a hint to this: Show $x^{p^n} - x = \prod_j p_j(x)$ --> then compare degrees.
Where the product is over all monic irreducible polynomials of degree dividing $n$.
| {"url":"http://mathhelpforum.com/number-theory/72378-recurrence-relation.html","timestamp":"2014-04-19T21:31:17Z","content_type":null,"content_length":"63979","record_id":"<urn:uuid:f0305413-315b-47f9-ba9d-241472742ae7>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00607-ip-10-147-4-33.ec2.internal.warc.gz"} |
Monterey Park Science Tutor
Find a Monterey Park Science Tutor
I am currently a senior Math major at Caltech. I tutored throughout high school (algebra, calculus, statistics, chemistry, physics, Spanish, and Latin) and tutored advanced math classes during
college. Above all other things, I love to learn how other people learn and to teach people new things in...
28 Subjects: including chemistry, physics, French, Spanish
...I have a PhD in epidemiology and my coursework included several biostatistics courses. I use biostatistical methods regularly in my doctoral research work. I have tutored graduate students in
introduction to biostatistics courses.
31 Subjects: including genetics, biostatistics, zoology, anatomy
...I break down the concepts into smaller parts. I like to use repetition, key phrases and mnemonics to help my students remember the concepts. For example, to remember the relationship of the
sides of a right triangle and the functions sine, cosine, tangent: Oscar Had A Headache Over Algebra (sin...
31 Subjects: including physics, ADD/ADHD, Aspergers, autism
...Finally, I am fluent in German and have lived in Germany for a year. I can help a student learn German with understanding how how locals can speak it as well as assist with strategies in
mastering a second language.Through my university I took a public speech course and received top grades in ea...
11 Subjects: including psychology, English, reading, writing
...Mary's Academy. In addition, I've had the privilege of working for Apple Inc. for two years. There I learned various software and operating systems, and I also led workshops in various software
and operating systems.
10 Subjects: including philosophy, Spanish, literature, elementary math | {"url":"http://www.purplemath.com/monterey_park_ca_science_tutors.php","timestamp":"2014-04-16T04:38:36Z","content_type":null,"content_length":"23905","record_id":"<urn:uuid:81d14100-cffa-4a8a-ac42-968299165f96>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00532-ip-10-147-4-33.ec2.internal.warc.gz"} |
Calculus Tutors
Holland, OH 43528
Private math instruction from a Ph.D.
...I've provided individual or small group tutoring for pre-algebra, algebra, geometry, and
. Oh, and don't forget 20+ years of experience being a math student! Struggling? Often problems with today's homework are the result of missing something last week,...
Offering 10+ subjects including calculus | {"url":"http://www.wyzant.com/Rossford_Calculus_tutors.aspx","timestamp":"2014-04-19T15:57:00Z","content_type":null,"content_length":"57369","record_id":"<urn:uuid:91c5eb7c-18f5-4df0-a376-763efb6d5885>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00472-ip-10-147-4-33.ec2.internal.warc.gz"} |
mrv1j: Month Ritzwoller-Lavely Coefficient Table
One-month time series (
) are merged into 3-month sets, centered on the second month. The time series are then transformed into the frequency domain using
. This creates power spectra, which are subsequently run through
, to find the mode frequencies. These are stored as STSDAS tables, which are then processed using
, which weeds out leaks and bad fits using the rejection criteria outlined in the sample table provided.
uses these files to construct a-coefficient tables, such as the
data products. The
files are a set of Legendre rotation coefficients for each [n,l] combination where there is sufficient good data. The relationship between the mode frequencies (
files) and Ritzwoller-Lavely coefficients (
files) is illustrated below:
• A description can be found in "Solar rotation inversions and the relationship between a-coefficients and mode splittings", Pijpers F.P. Astron. Astrophys. 1997 326:1235-1240. See section 2:
a-coefficient fitting.
• Details of their use can be found in "On comparing helioseismic two-dimensional inversion methods", Schou J., Christensen-Dalsgaard J., Thompson M.J., ApJ 1994 433:389-416.
• Sample table: Header and first 10 rows of mrv1j020523.txt (GONG Month 72, l=0-150)
In these files there is one row per coefficient, so a single multiplet (n,l combination) could occupy up to kmax (highest coefficient fit) rows.
Documentation Main Page | FITS Header Parameter Descriptions | IRAF GRASP Help Pages Revision: $Id: mrv1j.html,v 1.3 2004/10/08 22:27:40 khanna Exp $ | {"url":"http://gong.nso.edu/data/DMAC_documentation/Peakfind/mrv1j.html","timestamp":"2014-04-18T18:31:57Z","content_type":null,"content_length":"2858","record_id":"<urn:uuid:a1c66436-94b1-45dc-8df9-0ee1444e4298>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00217-ip-10-147-4-33.ec2.internal.warc.gz"} |
hard Dirichlet function
October 28th 2010, 09:56 AM #1
Senior Member
Sep 2009
for the equation here
Can someone give me a hint to prove it is a density...
I am really stuck on this one; I think I am supposed to integrate this twice, but I am not sure how to. | {"url":"http://mathhelpforum.com/advanced-statistics/161317-hard-direchlet-function.html","timestamp":"2014-04-23T20:11:28Z","content_type":null,"content_length":"28433","record_id":"<urn:uuid:2a888707-7e71-4dee-b585-33bdc6b77b65>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00274-ip-10-147-4-33.ec2.internal.warc.gz"} |
American Mathematical Monthly October 2004
MONTHLY, October 2004
Lost in a Forest
by Steven R. Finch and John E. Wetzel
Steven.Finch@inria.fr, j-wetzel@uiuc.edu
Fifty years ago, Richard Bellman posed an interesting search problem that can be phrased as follows: A hiker is lost in a forest whose shape and dimensions (but not its orientation) are precisely
known to him. What is the best path for him to follow to escape from the forest? Construing "best" as meaning "shortest," we survey what is known for regions of various shapes, we clarify the
relationship with Leo Moser's well-known "worm" problem, and we consider some related questions.
Potter, Wielandt, and Drazin on the Matrix Equation AB = ωBA: New Answers to Old Questions
by Olga Holtz, Volker Mehrmann, and Hans Schneider
holtz@math.Berkeley.edu, hans@math.wisc.edu, mehrmann@math.TU-Berlin.de
In this partly historical and partly research-oriented note, as part of our continuing examination of the unpublished mathematical diaries of Helmut Wielandt we display a page dated 1951. There he
gives a new proof of a theorem due to H. S. A. Potter on the matrix equation AB = ∞BA, which is related to the q-binomial theorem, and asks some further questions, which we mostly answer. We also
describe results by M. P. Drazin and others on this equation.
Iterated Exponential
by Joel Anderson
In this article we discuss the following fascinating problem. Suppose a is a positive number and consider the sequence a, a^a, a^(a^a), ... . For which values of a does this sequence converge? This
problem is remarkable both for its unexpected answer and the different threads that interweave in the development of its solution. We present a solution and provide some historical context.
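(The classical answer, going back to Euler, is that the sequence converges exactly when e^(-e) ≤ a ≤ e^(1/e). A quick numerical probe of the upper threshold; my own sketch, not code from the article:)

```python
import math

def tower_converges(a, iters=5000, tol=1e-9):
    # iterate x -> a**x starting from a
    x = a
    for _ in range(iters):
        prev, x = x, a ** x
        if x > 1e3:              # clearly escaping to infinity
            return False
        if abs(x - prev) < tol:  # settled at a fixed point
            return True
    return False                 # never settled (e.g. oscillation)

a_crit = math.e ** (1 / math.e)  # ~1.4447, the classical upper threshold

assert tower_converges(a_crit - 0.01)
assert not tower_converges(a_crit + 0.01)
```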
The Sixty-Fourth William Lowell Putnam Mathematical Competition
by Leonard F. Klosinski, Gerald L. Alexanderson, and Loren C. Larson
The First Sixty-Five Years of the Putnam Competition
by Joseph A. Gallian
We survey the results of the Putnam Competition from its inception in 1938 through 2003. We include tables that provide the number of participants in each contest, the number of times each team has
placed in top five, the number of Putnam Fellows each school has had, and the top five scores and medians for all competitions between 1967-2003. There is a section that identifies individuals who
have excelled in the competitions and another section that identifies distinguished mathematicians and scientists who have performed well in the competitions.
A Simple Proof of the Hook Length Formula
by Kenneth Glass and Chi-Keung Ng
k.glass@qub.ac.uk, ckng@nankai.edu.cn
Playing Catchup with Iterated Exponentials
by R. L. Devaney, K. Josic, M. Moreno Rocha, P. Seal, Y. Shapiro, and A. T. Frumosu
bob@math.bu.edu, josic@math.uh.edu, mmoren02@tufts.edu, pseal@math.bu.edu, yshapiro@math.mit.edu, tais@math.bu.edu
An Intuitive Derivation of Heron’s Formula
by Daniel A. Klain
A New Proof of Darboux’s Theorem
by Lars Olsen
Evolution of…
Global Geometry
by M. F. Atiyah
Problems and Solutions
A Companion to Analysis. A Second First and First Second Course in Analysis.
by T. W. Körner
Reviewed by Steven G. Krantz
Mathematics Elsewhere: An Exploration of Ideas Across Cultures.
by Marcia Ascher
Reviewed by Marion D. Cohen | {"url":"http://www.maa.org/publications/periodicals/american-mathematical-monthly/american-mathematical-monthly-october-2004?device=mobile","timestamp":"2014-04-21T16:36:38Z","content_type":null,"content_length":"24744","record_id":"<urn:uuid:61b5c7e2-5a84-459c-8790-16b89f9aba36>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00097-ip-10-147-4-33.ec2.internal.warc.gz"} |
Asymptotic least squares approximation for highly oscillatory differential equations
Seminar Room 1, Newton Institute
This talk presents a new approach for approximating highly oscillatory ordinary differential equations. By using the asymptotic expansion in a least squares system, we are able to obtain a result
that preserves the asymptotic accuracy of the expansion, while converging rapidly to the exact solution. We are thus able to accurately approximate such differential equations by solving a very small
linear system. We apply this method to the computation of highly oscillatory integrals, as well as second order oscillatory differential equations. | {"url":"http://www.newton.ac.uk/programmes/HOP/seminars/2007070511001.html","timestamp":"2014-04-18T20:43:56Z","content_type":null,"content_length":"4450","record_id":"<urn:uuid:ad5618e0-f2e3-445f-99ec-7d11d60d9be2>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00424-ip-10-147-4-33.ec2.internal.warc.gz"} |
5.1 Perturbative approach to angular momentum
We have already mentioned that when angular momentum is small, a critical exponent for the angular momentum can be derived for the perfect fluid (see Section 4.2) in first-order perturbation theory and for the massless scalar field (see Section 3.7), where second-order
perturbation theory in the scalar field is necessary to obtain an angular momentum perturbation in the stress-energy tensor [87]. However, neither of the predicted angular momentum scaling laws has
been verified in numerical evolutions.
For a perfect fluid with EOS p = kρ, where 0 < k < 1/9, precisely one mode that carries angular momentum is unstable, and this mode and the known spherical mode are the only two unstable modes of the spherical
critical solution. (Note by comparison that from dimensional analysis one would not expect an uncharged critical solution to have a growing perturbation mode carrying charge.) The presence of two
growing modes of the critical solution is expected to give rise to interesting phenomena [104]. Near the critical solution, the two growing modes compete. J and M of the final black hole are expected
to depend on the distance to the black hole threshold and the angular momentum of the initial data through universal functions of one variable that are similar to “universal scaling functions” in
statistical mechanics (see also the end of Section 2.7). While they have not yet been computed, these functions can in principle be determined from time evolutions of a single 2-parameter family of
initial data, and then determine J and M for all initial data near the black hole threshold and with small angular momentum. They would extend the simple power-law scalings of J and M into a region
of initial data space with larger angular momentum. | {"url":"http://www.univie.ac.at/EMIS/journals/LRG/Articles/lrr-2007-5/articlesu23.html","timestamp":"2014-04-19T17:13:28Z","content_type":null,"content_length":"7293","record_id":"<urn:uuid:29757dbc-b0b6-43fe-87b8-9ba057466013>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00535-ip-10-147-4-33.ec2.internal.warc.gz"} |
Mccook, IL Prealgebra Tutor
Find a Mccook, IL Prealgebra Tutor
...I have a Ph.D. in experimental nuclear physics. I have completed undergraduate coursework in the following math subjects: differential and integral calculus, advanced calculus, linear algebra, differential equations, advanced differential equations with applications, and complex analysis.
10 Subjects: including prealgebra, calculus, physics, geometry
...I received my undergraduate degree in math and cello performance last May from Indiana University. During my undergraduate years, I received national awards (the Goldwater scholarship, a top
25 finish in the Putnam exam), and high recognition from the IU math department (the Ciprian Foias Prize,...
13 Subjects: including prealgebra, calculus, geometry, statistics
...Whether it be the 3 Rs or preparation in the performing arts, a remarkable instructor must go well beyond the subject matter at hand and help imbue the student with a spirit of wonder that, in
due time, will foster enthusiasm necessary for excellence. A child emulates and learns to communicate i...
37 Subjects: including prealgebra, English, geometry, biology
...I am also trained in specific methods of ACT tutoring. I have been tutoring a student I met through my teaching job since she was in third grade. She is now in fifth grade.
33 Subjects: including prealgebra, reading, English, writing
...I spent 3 years working in my selective liberal arts college's admissions office. I led tours, completed a summer internship, and conducted over 100 evaluative interviews. I was allowed to
read applications and observe the committee's final decision meetings, and I came away with a good sense of what factors mattered most in admitting or rejecting a student.
15 Subjects: including prealgebra, reading, writing, grammar
Related Mccook, IL Tutors
Mccook, IL Accounting Tutors
Mccook, IL ACT Tutors
Mccook, IL Algebra Tutors
Mccook, IL Algebra 2 Tutors
Mccook, IL Calculus Tutors
Mccook, IL Geometry Tutors
Mccook, IL Math Tutors
Mccook, IL Prealgebra Tutors
Mccook, IL Precalculus Tutors
Mccook, IL SAT Tutors
Mccook, IL SAT Math Tutors
Mccook, IL Science Tutors
Mccook, IL Statistics Tutors
Mccook, IL Trigonometry Tutors
Nearby Cities With prealgebra Tutor
Argo, IL prealgebra Tutors
Brookfield, IL prealgebra Tutors
Countryside, IL prealgebra Tutors
Forest View, IL prealgebra Tutors
Hodgkins, IL prealgebra Tutors
La Grange Park prealgebra Tutors
La Grange, IL prealgebra Tutors
Lyons, IL prealgebra Tutors
Mc Cook, IL prealgebra Tutors
North Riverside, IL prealgebra Tutors
Riverside, IL prealgebra Tutors
Summit Argo prealgebra Tutors
Summit, IL prealgebra Tutors
Western, IL prealgebra Tutors
Willow Springs, IL prealgebra Tutors | {"url":"http://www.purplemath.com/Mccook_IL_Prealgebra_tutors.php","timestamp":"2014-04-20T16:21:50Z","content_type":null,"content_length":"24119","record_id":"<urn:uuid:1bbb7c8f-20da-459b-995e-1a1499afc2f1>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00654-ip-10-147-4-33.ec2.internal.warc.gz"} |
Exponential and Uniform distributions
January 23rd 2011, 10:42 AM   #1
Junior Member
Joined: Oct 2008

Exponential and Uniform distributions

I'm stuck on this question:

If RV X has an exponential distribution, does Y = ln(X) have a uniform distribution? Derive the cumulative distribution function and density of Y.

January 23rd 2011, 10:49 AM   #2
Grand Panjandrum
Joined: Nov 2005

No. Using the cumulative distribution function of X, find that of Y; you will find that the CDF of Y is not proportional to y.

January 27th 2011, 05:34 PM   #3

First of all:
If X is exponential, then its support is $(0,\infty)$.
If $Y=\ln X$, then Y's support is $(-\infty,\infty)$,
and you can't have a uniform distribution on the real line.

Use your CDF of X, $1-e^{-\lambda x}$ or $1-e^{-x/\lambda}$,
and make the appropriate substitution.

Last edited by matheagle; January 27th 2011 at 08:29 PM.
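To make the substitution concrete, here is a quick numerical check (a Python sketch, not from the thread; the rate λ = 2 is arbitrary):

```python
import math
import random

lam = 2.0  # arbitrary rate parameter for X ~ Exp(lam)
random.seed(0)

# From F_X(x) = 1 - exp(-lam*x), the CDF of Y = ln(X) is
# F_Y(y) = P(ln X <= y) = P(X <= e^y) = 1 - exp(-lam * e^y),
# so the density f_Y(y) = lam * e^y * exp(-lam * e^y) is not constant:
# Y is not uniform (and its support is all of R in any case).
def cdf_y(y):
    return 1.0 - math.exp(-lam * math.exp(y))

# Empirical check: sample X, take logs, compare with the derived CDF.
samples = [math.log(random.expovariate(lam)) for _ in range(200_000)]
for y in (-2.0, -1.0, 0.0, 1.0):
    empirical = sum(s <= y for s in samples) / len(samples)
    assert abs(empirical - cdf_y(y)) < 0.01
```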
First-Year Programs
Contact Information
Office of First-Year Programs
Ben Trapanick, Director, First-Year Programs
Dwight Hall Room 116
E-mail: btrapanick@framingham.edu
Ashlee Givins, Assistant Director, First-Year Programs
Orientation Coordinator
Dwight Hall Room 116
E-mail: agivins@framingham.edu
To download the Official Accuplacer iPhone Study App, click here.
Students should review the fundamentals of algebra covered in high school courses prior to taking the Accuplacer Mathematics test. On this page we provide many sample questions and answers for the
Elementary Algebra test and the College Level Mathematics test.
The initial assessment measure is the Accuplacer Elementary Algebra Test. If your score on this test is satisfactory, you will be able to take any first-year level (100 level) mathematics course. If
you do not receive a satisfactory score on the Elementary Algebra Test, you will not be allowed to register for a credit-bearing mathematics course in your first semester at Framingham State.
However, if you do very well on the Elementary Algebra Test, you will also take the College Level Mathematics test, which will determine whether or not you can proceed to a sophomore level (200
level) course, such as precalculus or calculus.
Topics on the Elementary Algebra Test:
• Arithmetic of real numbers (including integers, rational numbers, irrational numbers)
• Simplifying algebraic expressions
• Evaluating algebraic expressions
• Absolute value
• Simplifying rational expressions
• Definition of integer exponents and operations involving expressions containing exponents
• Solving linear equations
• Solving equations involving fractions
• Ratio and proportion
• Solving linear inequalities
• Graphing linear equations in two variables
• Writing equations of lines
• Solving systems of linear equations
• Operations with polynomials
• Factoring polynomials
• Simplifying square root expressions
• Solving quadratic equations
Sample Accuplacer Questions: Elementary Algebra Test
These sample questions are to help students understand the format used on the Accuplacer Mathematics Test. The questions themselves are not indicative of the range of difficulty a test-taker will encounter.
1. If a number is divided by 4, then 3 is subtracted, the result is 0. What is the number?
A. 12
B. 4
C. 3
D. 2
2. 16x - 8 =
A. 8x
B. 8(2x-x)
C. 8(2x-1)
D. 8(2x-8)
3. If x² - x - 6 = 0, then x is
A. -2 or 3
B. -1 or 6
C. 1 or -6
D. 2 or -3
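To double-check the three samples above, a few lines of Python can do the arithmetic (the choices marked below are worked out here for illustration, not taken from the official answer key):

```python
# Question 1: "divided by 4, then 3 is subtracted, the result is 0"
# translates to x/4 - 3 = 0, so x = 12  -> choice A
assert 12 / 4 - 3 == 0

# Question 2: 16x - 8 has a common factor of 8: 16x - 8 = 8(2x - 1)  -> choice C
for x in range(-10, 11):
    assert 16 * x - 8 == 8 * (2 * x - 1)

# Question 3: x^2 - x - 6 factors as (x - 3)(x + 2),
# so the roots are x = -2 or x = 3  -> choice A
for x in (-2, 3):
    assert x**2 - x - 6 == 0
```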
More Elementary Algebra sample problems
Get the answers to the above questions here
Additional websites for algebra practice: www.mymathtest.com, www.algebrahelp.com, www.khanacademy.org/, Varsity Tutors
For further review, ask your high school or college mathematics department for review materials or textbooks to borrow to prepare for the test. Also, for a helpful website for additional review in
algebra, click here.
Additional Resources: | {"url":"http://www.framingham.edu/first-year-programs/placement-testing/math.html","timestamp":"2014-04-16T07:14:41Z","content_type":null,"content_length":"27038","record_id":"<urn:uuid:771b9530-4d9a-4556-9318-c8050afb4fb1>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00408-ip-10-147-4-33.ec2.internal.warc.gz"} |
Help my life is in game
August 18th 2010, 06:19 AM   #1
Joined: Aug 2010

Help my life is in game

I need help with this (I know it's easy, but I don't know how to solve it):

cos alfa = 5/13; sin alfa is -, cos alfa is +, tg alfa is -, and ctg alfa is -.

I need to find a) sin(60`-alfa) b) cos(alfa+45`)

I found out that sin alfa is -12/13.

P.S. ` is degree.

Please help.

August 18th 2010, 06:58 AM   #2

[Note that in the below I substituted x for your "alfa", to simplify things]

Your question is hard to understand; maybe try writing the problem in LaTeX. It's not hard, and it's actually kind of cool when you get it to work. I can mostly see your equations, but I'm not certain what they are — does "is" mean "="? I recommend trying LaTeX. Here's how you do it. You write these symbols (without the quotation marks, but including the "[" and "]" outside of the quotation marks) around any of your math:

["math] equations go here [/math"]

Note that the last slash is a forward slash, not a backslash; that messes me up all the time. Now, if you type in (without the quotation marks):

["math]cos(x) = 5/13 [/math"]

then you'll get:

$cos(x) = 5/13$

To make it a little more compact you can use a fraction, which is made by putting "\frac{top of fraction here}{bottom of fraction here}" between the math tags. So if you type (without the quotation marks):

["math] cos(x) = \frac{5}{13} [/math"]

you'll get a nice pretty fraction like this:

$cos(x) = \frac{5}{13}$

Standard notation uses "tan(x)" to denote the tangent of x, not "tg x"; it will just confuse people. Same with "ctg x" — most people will not know that you meant "cot(x)". As for typing these into the math symbol magic maker, do the same thing, like this (without the quotation marks, of course):

["math] sin(x) < 0 [/math"]
["math] cos(x) > 0 [/math"]
["math] tan(x) < 0 [/math"]
["math] cot(x) < 0 [/math"]

These give you:

$sin(x) < 0$
$cos(x) > 0$
$tan(x) < 0$
$cot(x) < 0$

I'm starting to understand what your question is; it took me a while to realize what you were asking. Just so you know, the best way to say "the tangent of x is negative" is to write "tan(x) < 0", as seen above. Now, if you don't mind, try rewriting your problem with these math symbols, add any details you may have left out, and also please provide the exact wording of the question from your textbook or worksheet, if you have it. Post that all back here, and I'll be happy to help you out with this problem. You might also want to take a quick look at this thread:

The problems in there sort of relate to what I think your problem is.

Last edited by mfetch22; August 18th 2010 at 08:49 AM.

August 18th 2010, 09:59 AM   #3
Super Member
Joined: May 2006
Lexington, MA (USA)

Hello, TheGame!

$\cos\alpha = \frac{5}{13},\;\sin\alpha < 0$

. . $(a)\;\sin(60^o - \alpha)$
. . $(b)\;\cos(\alpha + 45^o)$

Cosine is positive: $\alpha$ is in quadrant 1 or 4.
Sine is negative: $\alpha$ is in quadrant 3 or 4.
. . Hence: $\alpha$ is in quadrant 4.
. . And: . $\sin\alpha \:=\:\text{-}\frac{12}{13}$

$(a)\;\sin(60^o-\alpha) \;=\;\sin60^o\cos\alpha - \cos60^o\sin\alpha$
. . . . $=\;\left(\dfrac{\sqrt{3}}{2}\right)\left(\dfrac{5}{13}\right) - \left(\dfrac{1}{2}\right)\left(\text{-}\dfrac{12}{13}\right)$
. . . . $=\;\dfrac{5\sqrt{3}+12}{26}$

$(b)\;\cos(\alpha + 45^o) \;=\;\cos\alpha\cos45^o - \sin\alpha\sin45^o$
. . . . $=\;\left(\dfrac{5}{13}\right)\left(\dfrac{1}{\sqrt{2}}\right) - \left(\text{-}\dfrac{12}{13}\right)\left(\dfrac{1}{\sqrt{2}}\right)$
. . . . $=\;\dfrac{5+12}{13\sqrt{2}} \;=\;\dfrac{17}{13\sqrt{2}}$
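Soroban's two results are easy to verify numerically; here is a short Python check (not part of the original thread):

```python
import math

# alpha lies in quadrant IV with cos(alpha) = 5/13, so sin(alpha) = -12/13.
alpha = -math.acos(5 / 13)  # acos returns a quadrant-I angle; negate for Q4
assert math.isclose(math.sin(alpha), -12 / 13)

# (a) sin(60 deg - alpha) = (5*sqrt(3) + 12) / 26
a = math.sin(math.radians(60) - alpha)
assert math.isclose(a, (5 * math.sqrt(3) + 12) / 26)

# (b) cos(alpha + 45 deg) = 17 / (13*sqrt(2))
b = math.cos(alpha + math.radians(45))
assert math.isclose(b, 17 / (13 * math.sqrt(2)))
```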
Extension of Functional & Hahn-Banach
April 15th 2009, 02:05 PM #1
Junior Member
Mar 2009
Extension of Functional & Hahn-Banach
I am to illustrate a particular theorem by considering a functional f on $R^2$ defined by $f(x)=\alpha_1 \xi_1 + \alpha_2 \xi_2$, $x=(\xi_1,\xi_2)$, its linear extensions $\bar{f}$ to $R^3$ and
the corresponding norms.
I'm having a couple problems with this problem. For one, I haven't ever had to find linear extensions before, so I have no clue how to figure that out.
The Theorem to apply this to is the Hahn-Banach Theorem for Normed Spaces. I would want to show that the norms of f and the extensions are the same to illustrate this.
I think the norm of f is $\sup|f(x)|$ over all x in $R^2$ with ||x|| = 1, and the norm of the extension is $\sup|\bar{f}(x)|$ over all x in $R^3$ with ||x|| = 1.
As you can see, I'm pretty lost on most of this. I think I know what I need to figure out, but I just don't have any idea how to get at that. Can anyone offer some guidance? Thank you so much.
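Your guess about the norms is the right one. With the Euclidean norm, ||f|| = sqrt(a1^2 + a2^2), while a linear extension f̄(x) = a1·ξ1 + a2·ξ2 + a3·ξ3 to R^3 has norm sqrt(a1^2 + a2^2 + a3^2); the norm is preserved exactly when a3 = 0, which is the norm-preserving extension whose existence Hahn–Banach guarantees. A small numerical sketch (the coefficients 3 and 4 are arbitrary):

```python
import math

a1, a2 = 3.0, 4.0  # arbitrary coefficients of f(x) = a1*x1 + a2*x2 on R^2

# ||f|| = sup |f(x)| over unit vectors x = (cos t, sin t);
# approximate the sup by brute force over a fine grid of angles.
angles = [2 * math.pi * k / 10_000 for k in range(10_000)]
norm_f = max(abs(a1 * math.cos(t) + a2 * math.sin(t)) for t in angles)
assert abs(norm_f - math.hypot(a1, a2)) < 1e-4  # equals sqrt(a1^2 + a2^2)

# A linear extension to R^3 adds a third coefficient a3; its norm is
# sqrt(a1^2 + a2^2 + a3^2) >= ||f||, with equality exactly when a3 = 0 --
# the norm-preserving extension that the Hahn-Banach theorem guarantees.
for a3 in (0.0, 1.0, 2.0):
    norm_ext = math.sqrt(a1**2 + a2**2 + a3**2)
    assert norm_ext >= norm_f - 1e-4
assert math.isclose(math.sqrt(a1**2 + a2**2), norm_f, rel_tol=1e-4)
```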
Thomas Wiegelmann
Publications (25) | 76.99 Total impact
ABSTRACT: Magnetic reconnection is one of the primary mechanisms for triggering solar eruptive events, but direct observation of this rapid process has been a challenge. In this Letter, using a
nonlinear force-free field (NLFFF) extrapolation technique, we present a visualization of field line connectivity changes resulting from tether-cutting reconnection over about 30 minutes during
the 2011 February 13 M6.6 flare in NOAA AR 11158. Evidence for the tether-cutting reconnection was first collected through multiwavelength observations and then by analysis of the field lines
traced from positions of four conspicuous flare 1700 Å footpoints observed at the event onset. Right before the flare, the four footpoints are located very close to the regions of local maxima of
the magnetic twist index. In particular, the field lines from the inner two footpoints form two strongly twisted flux bundles (up to ~1.2 turns), which shear past each other and reach out close
to the outer two footpoints, respectively. Immediately after the flare, the twist index of regions around the footpoints diminishes greatly and the above field lines become low-lying and less
twisted (<=0.6 turns), overarched by loops linking the two flare ribbons formed later. About 10% of the flux (~3 × 10^19 Mx) from the inner footpoints undergoes a footpoint exchange. This portion
of flux originates from the edge regions of the inner footpoints that are brightened first. These rapid changes of magnetic field connectivity inferred from the NLFFF extrapolation are consistent
with the tether-cutting magnetic reconnection model.
The Astrophysical Journal Letters 11/2013; 778(2):L36. · 6.35 Impact Factor
ABSTRACT: In this paper, we study the magnetic energy (ME) structure contained in the solar corona over active region NOAA 11158. The period studied runs from 00:00 to 06:00 UT on 2011 February 15, during which an X-class flare occurred. Nonlinear force-free field (NLFFF) and potential field extrapolations are carried out to model the coronal magnetic field over this active region, using high-quality photospheric vector magnetograms observed by the Helioseismic and Magnetic Imager on board the Solar Dynamics Observatory as boundary conditions. We find that the volume distributions of the ME density (B²/8π) and the ohmic dissipation power (ODP, j²/σ), in which j is the electric current density ((c/4π)∇ × B) and σ is the conductivity in the corona, can be readily fitted by a broken-down double-power law. The turn-over density for the spectrum of the ME and ODP is found to be fixed at ~1.0 × 10⁴ erg cm⁻³ and ~2.0 × 10⁻¹⁵ W cm⁻³ (assuming σ = 10⁵ Ω⁻¹ m⁻¹), respectively. Compared with their first power-law spectra (fitted below the corresponding turn-over value), which remain unchanged, the second power-law spectra (fitted above the corresponding turn-over value) for the NLFFF's ME and ODP show flare-associated changes. The potential field remains steady. These results indicate that a magnetic field with energy density larger than the turn-over energy density plays a dominant role in powering the flare.
The Astrophysical Journal 01/2013; 764(1):86. · 6.73 Impact Factor
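As a sanity check on the numbers in this abstract, the magnetic energy density B²/8π is easy to evaluate in Gaussian units (the field values below are illustrative, not taken from the paper):

```python
import math

def energy_density(b_gauss):
    """Magnetic energy density B^2 / (8*pi), in erg/cm^3 for B in gauss."""
    return b_gauss**2 / (8 * math.pi)

# The quoted turn-over density, ~1.0e4 erg/cm^3, corresponds to a
# field strength of sqrt(8*pi * 1.0e4) ~ 500 G:
b_turnover = math.sqrt(8 * math.pi * 1.0e4)
assert 495 < b_turnover < 505
assert math.isclose(energy_density(b_turnover), 1.0e4)
```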
ABSTRACT: The moss is the area at the footpoint of the hot (3 to 5 MK) loops forming the core of the active region where emission is believed to result from the heat flux conducted down to the
transition region from the hot loops. Studying the variation of Doppler shift as a function of line formation temperatures over the moss area can give clues on the heating mechanism in the hot
loops in the core of the active regions. We investigate the absolute Doppler shift of lines formed at temperatures between 1 MK and 2 MK in a moss area within active region NOAA 11243 using a
novel technique that allows determining the absolute Doppler shift of EUV lines by combining observations from the SUMER and EIS spectrometers. The inner (brighter and denser) part of the moss
area shows roughly constant blue shift (upward motions) of 5 km/s in the temperature range of 1 MK to 1.6 MK. For hotter lines the blue shift decreases and reaches 1 km/s for Fe XV 284 Å (~2
MK). The measurements are discussed in relation to models of the heating of hot loops. The results for the hot coronal lines seem to support the quasi-steady heating models for non-symmetric hot
loops in the core of active regions.
Astronomy and Astrophysics 11/2012; · 5.08 Impact Factor
ABSTRACT: The structure and dynamics of the solar corona is dominated by the magnetic field. In most areas in the corona magnetic forces are so dominant that all non-magnetic forces like plasma
pressure gradient and gravity can be neglected in the lowest order. This model assumption is called the force-free field assumption, as the Lorentz force vanishes. This can be obtained by either
vanishing electric currents (leading to potential fields) or the currents are co-aligned with the magnetic field lines. First we discuss a mathematically simpler approach that the magnetic field
and currents are proportional with one global constant, the so-called linear force-free field approximation. In the generic case, however, the relation between magnetic fields and electric
currents is nonlinear and analytic solutions have been only found for special cases, like 1D or 2D configurations. For constructing realistic nonlinear force-free coronal magnetic field models in
3D, sophisticated numerical computations are required and boundary conditions must be obtained from measurements of the magnetic field vector in the solar photosphere. This approach is currently
of large interests, as accurate measurements of the photospheric field become available from ground-based (for example SOLIS) and space-born (for example Hinode and SDO) instruments. If we can
obtain accurate force-free coronal magnetic field models we can calculate the free magnetic energy in the corona, a quantity which is important for the prediction of flares and coronal mass
ejections. Knowledge of the 3D structure of magnetic field lines also help us to interpret other coronal observations, e.g., EUV-images of the radiating coronal plasma.
Living Reviews in Solar Physics. 08/2012;
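The linear force-free case mentioned above, ∇ × B = αB with a single global constant α, admits simple closed-form solutions. One 1D example (a sketch for illustration, not taken from the review) can be checked with finite differences:

```python
import math

alpha, B0 = 0.7, 1.0  # illustrative force-free parameter and field amplitude

def B(z):
    # A 1D linear force-free field: B(z) = B0 * (cos(alpha*z), -sin(alpha*z), 0)
    return (B0 * math.cos(alpha * z), -B0 * math.sin(alpha * z), 0.0)

# For a field depending only on z (with Bz = 0), curl B = (-dBy/dz, dBx/dz, 0).
h = 1e-6
for z in (0.0, 0.3, 1.1):
    bx, by, _ = B(z)
    dBx = (B(z + h)[0] - B(z - h)[0]) / (2 * h)
    dBy = (B(z + h)[1] - B(z - h)[1]) / (2 * h)
    curl = (-dBy, dBx, 0.0)
    # Verify curl B = alpha * B componentwise, i.e. currents aligned with B.
    assert abs(curl[0] - alpha * bx) < 1e-6
    assert abs(curl[1] - alpha * by) < 1e-6
```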
ABSTRACT: Both magnetic and current helicities are crucial ingredients for describing the complexity of active-region magnetic structure. In this Letter, we present the temporal evolution of
these helicities contained in NOAA active region 11158 during five days from 2011 February 12 to 16. The photospheric vector magnetograms of the Helioseismic and Magnetic Imager on board the
Solar Dynamics Observatory were used as the boundary conditions for the coronal field extrapolation under the assumption of a nonlinear force-free field, from which we calculated both relative
magnetic helicity and current helicity. We construct a time-altitude diagram in which altitude distribution of the magnitude of current helicity density is displayed as a function of time. This
diagram clearly shows a pattern of upwardly propagating current helicity density over two days prior to the X2.2 flare on February 15, with an average propagation speed of ~36 m s⁻¹. The
propagation is synchronous with the emergence of magnetic flux into the photosphere, and indicative of a gradual energy buildup for the X2.2 flare. The time profile of the relative magnetic
helicity shows a monotonically increasing trend most of the time, but a pattern of increasing and decreasing magnetic helicity above the monotonic variation appears prior to each of two major
flares, M6.6 and X2.2, respectively. The physics underlying this bump pattern is not fully understood. However, the fact that this pattern is apparent in the magnetic helicity evolution but not
in the magnetic flux evolution makes it a useful indicator in forecasting major flares.
The Astrophysical Journal Letters 05/2012; 752(1):L9. · 6.35 Impact Factor
ABSTRACT: Polar plumes are seen as elongated objects starting at the solar polar regions. Here, we analyze these objects from a sequence of images taken simultaneously by the three spacecraft
telescopes STEREO/EUVI A and B, and SOHO/EIT. We establish a method capable of automatically identifying plumes in solar EUV images close to the limb, at 1.01–1.39 R⊙, in order to study their
temporal evolution. This plume-identification method is based on a multiscale Hough-wavelet analysis. Then two methods to determined their 3D localization and structure are discussed: First,
tomography using the filtered back-projection and including the differential rotation of the Sun and, secondly, conventional stereoscopic triangulation. We show that tomography and stereoscopy
are complementary to study polar plumes. We also show that this systematic 2D identification and the proposed methods of 3D reconstruction are well suited, on one hand, to identify plumes
individually and on the other hand, to analyze the distribution of plumes and inter-plume regions. Finally, the results are discussed focusing on the plume position with their cross-section area.
Solar Physics 11/2011; 283(1). · 3.26 Impact Factor
ABSTRACT: Understanding the solar outer atmosphere requires concerted, simultaneous solar observations from the visible to the vacuum ultraviolet (VUV) and soft X-rays, at high spatial resolution
(between 0.1" and 0.3"), at high temporal resolution (on the order of 10 s, i.e., the time scale of chromospheric dynamics), with a wide temperature coverage (0.01 MK to 20 MK, from the
chromosphere to the flaring corona), and the capability of measuring magnetic fields through spectropolarimetry at visible and near-infrared wavelengths. Simultaneous spectroscopic measurements
sampling the entire temperature range are particularly important. These requirements are fulfilled by the Japanese Solar-C mission (Plan B), composed of a spacecraft in a geosynchronous orbit
with a payload providing a significant improvement of imaging and spectropolarimetric capabilities in the UV, visible, and near-infrared with respect to what is available today and foreseen in
the near future. The Large European Module for solar Ultraviolet Research (LEMUR), described in this paper, is a large VUV telescope feeding a scientific payload of high-resolution imaging
spectrographs and cameras. LEMUR consists of two major components: a VUV solar telescope with a 30 cm diameter mirror and a focal length of 3.6 m, and a focal-plane package composed of VUV
spectrometers covering six carefully chosen wavelength ranges between 17 and 127 nm. The LEMUR slit covers 280" on the Sun with 0.14" per pixel sampling. In addition, LEMUR is capable of
measuring mass flows velocities (line shifts) down to 2 km/s or better. LEMUR has been proposed to ESA as the European contribution to the Solar C mission.
ABSTRACT: We have developed a method to automatically segment chromospheric fibrils from Hα observations and further identify their orientation. We assume that chromospheric fibrils are
magnetic field-aligned. By comparing the orientation of the fibrils with the azimuth of the embedding chromospheric magnetic field extrapolated from the photosphere or chromosphere with the help
of a potential field model, the shear angle, a measure of nonpotentiality, along the fibrils is readily deduced. Following this approach, we make a quantitative assessment of the nonpotentiality
of fibrils in the active region NOAA 9661 and NOAA 11092. The spatial distribution and the histogram of the shear angle along fibrils are presented.
ABSTRACT: We investigate the fine structure of magnetic fields in the atmosphere of the quiet Sun. We use photospheric magnetic field measurements from Sunrise/IMaX with unprecedented spatial resolution to extrapolate the photospheric magnetic field into higher layers of the solar atmosphere with the help of potential and force-free extrapolation techniques. We find that most magnetic loops which reach into the chromosphere or higher have one foot point in relatively strong magnetic field regions in the photosphere. 91% of the magnetic energy in the mid chromosphere (at a height of 1 Mm) is in field lines whose stronger foot point has a strength of more than 300 G, i.e. above the equipartition field strength with convection. The loops reaching into the chromosphere and corona are also found to be asymmetric in the sense that the weaker foot point has a strength B < 300 G and is located in the internetwork. Such loops are expected to be strongly dynamic and have short lifetimes, as dictated by the properties of the internetwork fields. Comment: accepted for the ApJL Sunrise special issue, 8 pages, 4 figures.
The Astrophysical Journal Letters 09/2010; · 6.35 Impact Factor
ABSTRACT: We present the results from a method to determine the 3D position and orientation of polar plumes from three corresponding images observed simultaneously by three spacecraft, STEREO/SECCHI A and B, and SOHO/EIT. We have applied both conventional stereoscopic triangulation and a new detection tool based on a combination of Hough and wavelet transforms. We show that the obtained plume orientation can help to verify magnetic field models in the pole region, where surface observations are difficult and their extrapolation may be problematic. This automatic and systematic 3D reconstruction is well suited to identifying plumes individually in time and following their intensity variation. Typical observed lifetimes were between 1 and 2 days. The plumes we reconstructed were not always rooted at simultaneous EUV bright points and were sometimes associated with a jet.
ABSTRACT: We present the propagation of polar jets observed across the fields of view of EUVI, COR1, and COR2 on board STEREO. We provide a method to test the free-fall model in both 2D and 3D by comparing the height-time images extracted from observations with the free-fall model. By assuming that all the particles in a polar jet are ejected at the same time, when the jet is initiated,
this method could produce the initial velocity distribution of the particles and tell us during the propagation whether the particles are ionized/recombined or experience some other processes.
The derived 3D orientations of the polar jets are used to test different magnetic field models around polar regions where the observation and extrapolation are not reliable. The estimated 3D
leading edge velocities by different telescopes are also investigated.
ABSTRACT: In solar eruptions, like flares and coronal mass ejections, free magnetic energy stored in the solar corona is converted into kinetic energy. Unfortunately the coronal magnetic field
cannot be measured directly. We can, however, reconstruct the coronal magnetic field from measurements of the photospheric magnetic field vector under the reasonable assumption of a force-free
coronal plasma. With a procedure dubbed preprocessing we derive force-free consistent boundary conditions, which are extrapolated into the solar corona with a nonlinear force-free extrapolation
code. The resulting 3D coronal magnetic field allows us to derive the magnetic topology and to compute the magnetic energy as well as an upper limit of the free energy available for driving
eruptive phenomena. We apply our code to measurements from several ground based vector magnetographs, e.g. the Solar Flare Telescope, SOLIS and the Big Bear Solar Observatory. Within our studies
we find a clear relationship between the stored magnetic energy and the strength of eruptions. In most cases not the entire free energy is converted to kinetic energy, but only a fraction.
Consequently, the post-flare magnetic field configuration is usually not entirely current free, but significantly closer to a potential field than before the flare.
ABSTRACT: SDO/HMI provides us with high-resolution full-disk measurements of the photospheric magnetic field vector. We compute the field in the higher layers of the solar atmosphere from the measured
photospheric field under the assumption that the corona is force-free. However, those measured data are inconsistent with the above force-free assumption. Therefore, one has to apply some
transformations dubbed preprocessing to these data before nonlinear force-free extrapolation codes can be applied. Our force-free code is based on an optimization principle and takes the
spherical geometry of the sun into account. Until now, these extrapolations were applied only to a small surface area of the Sun so that Cartesian geometry could be applied. We carry out both
full disk computations as well as computations of active regions. The code has been well tested with model equilibria and used with the ground based observations from SOLIS. We plan to show the
first nonlinear force-free coronal magnetic fields extrapolated from SDO/HMI in comparison with the coronal plasma in SDO/HMI.
ABSTRACT: Nonlinear force-free field (NLFFF) models are thought to be viable tools for investigating the structure, dynamics and evolution of the coronae of solar active regions. In a series of
NLFFF modeling studies, we have found that NLFFF models are successful in application to analytic test cases, and relatively successful when applied to numerically constructed Sun-like test
cases, but they are less successful in application to real solar data. Different NLFFF models have been found to have markedly different field line configurations and to provide widely varying
estimates of the magnetic free energy in the coronal volume, when applied to solar data. NLFFF models require consistent, force-free vector magnetic boundary data. However, vector magnetogram
observations sampling the photosphere, which is dynamic and contains significant Lorentz and buoyancy forces, do not satisfy this requirement, thus creating several major problems for force-free
coronal modeling efforts. In this article, we discuss NLFFF modeling of NOAA Active Region 10953 using Hinode/SOT-SP, Hinode/XRT, STEREO/SECCHI-EUVI, and SOHO/MDI observations, and in the process
illustrate the three such issues we judge to be critical to the success of NLFFF modeling: (1) vector magnetic field data covering larger areas are needed so that more electric currents
associated with the full active regions of interest are measured, (2) the modeling algorithms need a way to accommodate the various uncertainties in the boundary data, and (3) a more realistic
physical model is needed to approximate the photosphere-to-corona interface in order to better transform the forced photospheric magnetograms into adequate approximations of nearly force-free
fields at the base of the corona. We make recommendations for future modeling efforts to overcome these as yet unsolved problems.
The Astrophysical Journal 02/2009; 696(2). · 6.73 Impact Factor
ABSTRACT: We compare a variety of nonlinear force-free field (NLFFF) extrapolation algorithms, including optimization, magneto-frictional, and Grad–Rubin-like codes, applied to a solar-like
reference model. The model used to test the algorithms includes realistic photospheric Lorentz forces and a complex field including a weakly twisted, right helical flux bundle. The codes were
applied to both forced “photospheric” and more force-free “chromospheric” vector magnetic field boundary data derived from the model. When applied to the chromospheric boundary data, the codes
are able to recover the presence of the flux bundle and the field’s free energy, though some details of the field connectivity are lost. When the codes are applied to the forced photospheric
boundary data, the reference model field is not well recovered, indicating that the combination of Lorentz forces and small spatial scale structure at the photosphere severely impact the
extrapolation of the field. Preprocessing of the forced photospheric boundary does improve the extrapolations considerably for the layers above the chromosphere, but the extrapolations are
sensitive to the details of the numerical codes and neither the field connectivity nor the free magnetic energy in the full volume are well recovered. The magnetic virial theorem gives a rapid
measure of the total magnetic energy without extrapolation though, like the NLFFF codes, it is sensitive to the Lorentz forces in the coronal volume. Both the magnetic virial theorem and the
Wiegelmann extrapolation, when applied to the preprocessed photospheric boundary, give a magnetic energy which is nearly equivalent to the value derived from the chromospheric boundary, but both
underestimate the free energy above the photosphere by at least a factor of two. We discuss the interpretation of the preprocessed field in this context. When applying the NLFFF codes to solar
data, the problems associated with Lorentz forces present in the low solar atmosphere must be recognized: the various codes will not necessarily converge to the correct, or even the same, solution.
Solar Physics 01/2008; 247(2):269-299. · 3.26 Impact Factor
ABSTRACT: In this study new results are presented regarding the relationships between the coronal magnetic field and the intensities and Doppler shifts of ultraviolet emission lines. This
combination of magnetic field and spectroscopic data is used here to study material flows in association with the coronal field. We introduce the term ``coronal circulation'' to describe this
flow, and to indicate that the plasma is not static but flows everywhere in the extended solar atmosphere. The blueshifts and redshifts often seen in transition region and coronal ultraviolet
emission lines are interpreted as corresponding to upflows and downflows of the plasma on open (funnels) and closed (loops) coronal magnetic field lines, which tightly confine and strongly lead
the flows in the low-beta plasma. Evidence for these processes exists in the ubiquitous redshifts mostly seen at both legs of loops on all scales, and the sporadic blueshifts occurring in strong
funnels. Therefore, there is no static magnetically stratified plasma in the corona, since panta rhei, but rather a continuous global plasma circulation, being the natural perpetuation of
photospheric convection which ultimately is the driver.
The Astrophysical Journal 01/2008; · 6.73 Impact Factor
ABSTRACT: The photospheric magnetic field vector is routinely measured with high accuracy from ground based and space-borne instruments. We use these measurements to prescribe suitable boundary
conditions for modelling the coronal magnetic field. Because of the low-beta plasma the magnetic field is in lowest order assumed to be force-free in the corona and upper chromosphere, but not in
the high-beta photosphere. We developed a program package which contains a preprocessing program and a nonlinear force-free coronal magnetic extrapolation code. Both programs are based on
optimization principles. The preprocessing routine uses the measured photospheric vector magnetogram as input and approximates the magnetic field vector in the force-free upper chromosphere.
These data are used as boundary condition for a nonlinear force-free extrapolation of the coronal magnetic field. We applied our method to study the temporal evolution of a flaring active region
as a sequence of nonlinear force-free equilibria. We found that magnetic energy was built up before the occurrence of a flare and released after it. Furthermore, the 3D-magnetic field model allows
us to trace the temporal evolution of the energy flows in the flaring region.
ABSTRACT: CONTEXT: As the coronal magnetic field can usually not be measured directly, it has to be extrapolated from photospheric measurements into the corona. AIMS: We test the quality of a
non-linear force-free coronal magnetic field extrapolation code with the help of a known analytical solution. METHODS: The non-linear force-free equations are numerically solved with the help of
an optimization principle. The method minimizes an integral over the force-free and solenoidal condition. As boundary condition we use either the magnetic field components on all six sides of the
computational box in Case I or only on the bottom boundary in Case II. We check the quality of the reconstruction by computing how well force-freeness and divergence-freeness are fulfilled and by
comparing the numerical solution with the analytical solution. The comparison is done with magnetic field line plots and several quantitative measures, like the vector correlation, Cauchy
Schwarz, normalized vector error, mean vector error and magnetic energy. RESULTS: For Case I the reconstructed magnetic field shows good agreement with the original magnetic field topology,
whereas in Case II there are considerable deviations from the exact solution. This is corroborated by the quantitative measures, which are significantly better for Case I. CONCLUSIONS: Despite
the strong nonlinearity of the considered force-free equilibrium, the optimization method of extrapolation is able to reconstruct it; however, the quality of reconstruction depends significantly
on the consistency of the input data, which is given only if the known solution is provided also at the lateral and top boundaries, and on the presence or absence of flux concentrations near the
boundaries of the magnetogram.
Astronomy and Astrophysics 01/2007; · 5.08 Impact Factor
ABSTRACT: AIMS: We develop an optimization principle for computing stationary MHD equilibria. METHODS: Our code for the self-consistent computation of the coronal magnetic fields and the coronal
plasma uses non-force-free MHD equilibria. Previous versions of the code have been used to compute non-linear force-free coronal magnetic fields from photospheric measurements. The program uses
photospheric vector magnetograms and coronal EUV images as input. We tested our reconstruction code with the help of a semi-analytic MHD-equilibrium. The quality of the reconstruction was judged
by comparing the exact and reconstructed solution qualitatively by magnetic field-line plots and EUV-images and quantitatively by several different numerical criteria. RESULTS: Our code is able
to reconstruct the semi-analytic test equilibrium with high accuracy. The stationary MHD optimization code developed here has about the same accuracy as its predecessor, a non-linear force-free
optimization code. The computing time for MHD-equilibria is, however, longer than for force-free magnetic fields. We also extended a well-known class of nonlinear force-free equilibria to the
non-force-free regime for purposes of testing the code. CONCLUSIONS: We demonstrate that the code works in principle using tests with analytical equilibria, but it still needs to be applied to
real data.
Astronomy and Astrophysics 01/2007; · 5.08 Impact Factor
ABSTRACT: We compare six algorithms for the computation of nonlinear force-free (NLFF) magnetic fields (including optimization, magnetofrictional, Grad–Rubin based, and Green's function-based
methods) by evaluating their performance in blind tests on analytical force-free-field models for which boundary conditions are specified either for the entire surface area of a cubic volume or
for an extended lower boundary only. Figures of merit are used to compare the input vector field to the resulting model fields. Based on these merit functions, we argue that all algorithms yield
NLFF fields that agree best with the input field in the lower central region of the volume, where the field and electrical currents are strongest and the effects of boundary conditions weakest.
The NLFF vector fields in the outer domains of the volume depend sensitively on the details of the specified boundary conditions; best agreement is found if the field outside of the model volume
is incorporated as part of the model boundary, either as potential field boundaries on the side and top surfaces, or as a potential field in a skirt around the main volume of interest. For input
field (B) and modeled field (b), the best method included in our study yields an average relative vector error En = 〈 |B−b|〉/〈 |B|〉 of only 0.02 when all sides are specified and 0.14 for the
case where only the lower boundary is specified, while the total energy in the magnetic field is approximated to within 2%. The models converge towards the central, strong input field at speeds
that differ by a factor of one million per iteration step. The fastest-converging, best-performing model for these analytical test cases is the Wheatland, Sturrock, and Roumeliotis (2000)
optimization algorithm as implemented by Wiegelmann (2004).
Solar Physics 04/2006; 235(1):161-190. · 3.26 Impact Factor
• 2013
□ Chinese Academy of Sciences
Peping, Beijing, China
• 2012
□ Kyung Hee University
Sŏul, Seoul, South Korea
• 2006–2012
□ Max Planck Institute for Solar System Research
Göttingen, Lower Saxony, Germany
• 1998–2000
□ Ruhr-Universität Bochum
☆ Institut für Theoretische Physik IV
Bochum, North Rhine-Westphalia, Germany | {"url":"http://www.researchgate.net/researcher/13173411_Thomas_Wiegelmann","timestamp":"2014-04-17T16:11:31Z","content_type":null,"content_length":"401308","record_id":"<urn:uuid:16484af4-163d-47eb-bb26-368d16a64f4d>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00193-ip-10-147-4-33.ec2.internal.warc.gz"} |
Finding area of 3D triangle
September 12th 2010, 01:37 PM #1
Mar 2010
Finding area of 3D triangle
As you've probably guessed, I've started multi-variable calculus. I'm struggling a bit to wrap my head around it.
I'm sure this is an easy question, but I don't know where to begin.
Find the area of the triangle with vertices at A(1,1,1), B(2,3,4), C(5,2,6)
Any help is appreciated.
Got it! Terribly easy once I worked through it. I think I'm trying to visualize and look too deep when the answer isn't as complicated as I'm making it.
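For anyone finding this thread later, the standard approach is Area = ½|AB × AC|. A small sketch of that calculation (the variable names are mine, not from the thread):

```javascript
// Area of the triangle A(1,1,1), B(2,3,4), C(5,2,6) via the cross product.
function sub(p, q) { return [p[0] - q[0], p[1] - q[1], p[2] - q[2]]; }
function cross(u, v) {
  return [
    u[1] * v[2] - u[2] * v[1],
    u[2] * v[0] - u[0] * v[2],
    u[0] * v[1] - u[1] * v[0]
  ];
}
function norm(u) { return Math.sqrt(u[0] * u[0] + u[1] * u[1] + u[2] * u[2]); }

var A = [1, 1, 1], B = [2, 3, 4], C = [5, 2, 6];
var AB = sub(B, A);        // (1, 2, 3)
var AC = sub(C, A);        // (4, 1, 5)
var n = cross(AB, AC);     // (7, 7, -7), a normal to the triangle's plane
var area = norm(n) / 2;    // (7 * sqrt(3)) / 2, roughly 6.06
```

The length of the cross product is the area of the parallelogram spanned by AB and AC, so halving it gives the triangle's area.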
September 13th 2010, 12:21 AM #2
September 13th 2010, 11:27 AM #3
Mar 2010 | {"url":"http://mathhelpforum.com/calculus/155942-finding-area-3d-triangle.html","timestamp":"2014-04-17T13:23:25Z","content_type":null,"content_length":"35801","record_id":"<urn:uuid:87ae5df4-f23b-4b99-babb-d3b591848522>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00242-ip-10-147-4-33.ec2.internal.warc.gz"} |
Visual concealment for games using Javascript
In this article I will discuss how to do visual concealment calculations as they relate to two-dimensional top-down games. By visual concealment I mean the act of hiding things that are either
outside of the field of view or behind an obstacle.
There is a JsFiddle available if you want to play around with the implementation.
After playing around a bit with a very cool game I realized that I wanted to try to do something similar for an Android game. Having just watched Jonathan Blow's talk on prototyping (which is a really
good talk, by the way) I decided to knock up a simple prototype first and I felt that it would be more efficient to do that not on the Android platform as I then wouldn't have to wait for emulators
or debuggers to start on physical devices. I went for a JavaScript approach instead and, while the result is very different from what I hope the finished Android game will be, it proved a very
convenient way of making sure I could implement all the mechanics required.
Using the code
There are two downloads for this article; visualconcealment.zip which contains the basic code for testing this stuff and prototype.zip that contains a (kind-of) playable alpha version of a game.
The code is all HTML and JavaScript organised as two Eclipse projects, one to show the basic math and one that is a very simple game. There is no need to have Eclipse installed to run the code, just open
either example.html or game.html in your browser of choice that supports HTML5 Canvases.
I've tried to go for an Object-Oriented approach and also tried to organize my classes and files as I would have had this been a C# or Java project. That means the project contains a lot of .js
files, more than you see in most web projects of this size, and while this leads to a solution not as well suited for redistribution (as a library, for example) it does help, I feel, to have such a structure.
In addition to the parts I'm going to cover in this article, there is quite a lot of stuff relating to collision detection and moving around in a physics based (kind of) fashion. These might make the
code a bit messier to read but I've left them in there as it makes it easier to play around and check out the effects of the concealment logic.
A limited number of unit tests, using the QUnitJs framework, are included for the math-heavy classes.
The prototype will be a two-dimensional, top-down view game (like Pac-Man, for example) and obstacles will be represented by lines defined by two points.
The concealment logic must cater for hiding what is outside the field of view (essentially what is behind the player) as well as hiding things that are behind obstacles. Areas not visible
to the player must be rendered as a solid colour, obscuring any walls or other entities (such as computer controlled enemies), but entities must remain partially visible when they are not completely hidden.
To achieve this I've gone for an implementation approach that is quite different to the one used in the game mentioned above, a filled polygon will be rendered over any areas found to be obscured.
This means it is harder to do dynamic lighting effects, such as having the viewable area get darker the further away it is from the observer, but for the game I am aiming for that will work just fine, I think.
The approach
There are essentially two different scenarios where a polygon has to be calculated;
• Outside of field of view
• Behind an obstacle
While both of them are fairly similar there are some differences between them so I'll go through each one separately. This section is math-heavy and it will help if you have a bit of Vector-math
experience because I won't go into details on how to add, subtract or scale vectors for example, and pretty much all of this logic boils down to a few vector operations.
I'm going to try to explain the math used using diagrams rather than equations, and leave the actual formulas in the code. What I want to describe is more about how to find the relevant points and
areas rather than how to calculate them (if that makes sense).
Outside field of view
Given an observer (blue triangle) in a "room" without any obstacles except the room's walls, and given the look-direction (green arrow) what falls outside of the field of view?
The obscured area would be the one shown in grey;
To find the vertices of that polygon the following steps are taken;
• Extend a line along the left boundary of the field-of-view and see which wall it intersects and where, call this point A.
• Extend a line along the right boundary of the field-of-view and see which wall it intersects and where, call this point B.
• Extend a line from the player to each "corner" on the outside walls and take the ones where the line falls outside the field of view, call these P1-Pn.
In the diagram below the red arrows find points A and B, while the yellow arrows find points P1 to P3. The dashed yellow line shows the result of step 3 in a case where the line falls within the field of
view, so it is ignored.
To find the intersection between two lines I'm relying to two classes; Vector2D and Segment.
The Vector2D class is (obviously) a representation of a two-dimensional vector holding an x and a y value. It supplies standard vector operations such as add and length.
The Segment class represents a line or a "segment", i.e. a distance running from point A to point B. It is built using two Vector2D s. The Segment class has a method for finding intersections aptly
named findIntersection, this method implements a standard line-line intersection equation (which can be found along with a very good explanation over here).
Segment.findIntersection = function(a, b) {
    var x1 = a.a.x;
    var y1 = a.a.y;
    var x2 = a.b.x;
    var y2 = a.b.y;
    var x3 = b.a.x;
    var y3 = b.a.y;
    var x4 = b.b.x;
    var y4 = b.b.y;
    // A zero denominator means the lines are parallel: no intersection.
    var denominator = (x1 - x2)*(y3 - y4) - (y1 - y2)*(x3 - x4);
    if (denominator == 0)
        return new SegmentIntersection(a, b, false, null, false, false);
    var xNominator = (x1*y2 - y1*x2)*(x3 - x4) - (x1 - x2)*(x3*y4 - y3*x4);
    var yNominator = (x1*y2 - y1*x2)*(y3 - y4) - (y1 - y2)*(x3*y4 - y3*x4);
    var px = xNominator / denominator;
    var py = yNominator / denominator;
    var point = new Vector2D(px, py);
    return new SegmentIntersection(a, b, true, point, a.contains(point), b.contains(point));
};
The reason the method findIntersection returns a SegmentIntersection object rather than just the intersection point is that line-line intersection can get a bit complicated.
By complicated I mean that whilst the equation will give the intersection point (or nothing if the lines are parallel) with bounded lines starting at A and ending at B the intersection calculated
might not be within the bounds of the line but on some point on the extension of that line.
The SegmentIntersection class is used to record this information: along with the actual intersection point it also provides information on whether it intersected line a, line b, or indeed both.
Being able to detect if a line-line intersection has occurred is convenient as it makes it easy to calculate things like whether an enemy sees the player or not. This is done by creating a Segment from
the enemy's position to the player's position and then iterating over all obstacles to see if there are any intersections; if there are, the player is obscured.
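The line-of-sight check described above can be sketched as follows. This is a self-contained illustration using plain point objects and the standard parametric segment-intersection test, not the article's Segment/SegmentIntersection classes:

```javascript
// Returns true if the bounded segments (p1,p2) and (p3,p4) cross.
// t and u are the parametric positions of the intersection along each
// segment; both must lie in [0, 1] for a bounded crossing.
function segmentsCross(p1, p2, p3, p4) {
  var d = (p1.x - p2.x) * (p3.y - p4.y) - (p1.y - p2.y) * (p3.x - p4.x);
  if (d === 0) return false;  // parallel lines never cross
  var t = ((p1.x - p3.x) * (p3.y - p4.y) - (p1.y - p3.y) * (p3.x - p4.x)) / d;
  var u = ((p1.x - p3.x) * (p1.y - p2.y) - (p1.y - p3.y) * (p1.x - p2.x)) / d;
  return t >= 0 && t <= 1 && u >= 0 && u <= 1;
}

// The enemy sees the player if no obstacle segment crosses the sight line.
function canSee(enemy, player, obstacles) {
  for (var i = 0; i < obstacles.length; ++i) {
    if (segmentsCross(enemy, player, obstacles[i].a, obstacles[i].b))
      return false;
  }
  return true;
}
```

For example, with an enemy at (0,0) and a player at (2,0), a vertical obstacle from (1,-1) to (1,1) blocks the sight line, while the same obstacle moved to x = 3 does not.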
As a side note I think it's worth to mention that clever people would have just gone and used an already implemented math library (of which there are many), but since I like this stuff I've gone and
hand-rolled all of it.
Building a polygon
Having identified all the vertices of the polygon there is still one more thing we must do to get the polygon to render correctly. Since the vertices of a polygon have to appear "in order" for the
polygon to render as a single surface it is important that the found points are sorted. This can be either clockwise or counter-clockwise, it doesn't matter which one. If the vertices are not sorted
in this way the polygon might render incorrectly;
In this image the points (as numbered) are not sorted in a clockwise or counter-clockwise direction so the polygon intersects itself, yielding an incorrect area covered.
Sorting the vertices is a matter of sorting them by how many degrees they are off the direction the player looks in (shown by the green arrow). Since JavaScript arrays support a convenient sort
method this can be done like this;
var direction = Vector2D.normalize(entity.lookDirection);
var origin = entity.position;
polygonPoints.sort(function(lhs, rhs) {
    if (lhs.equals(origin))
        return 1;
    if (rhs.equals(origin))
        return -1;
    if (lhs.equals(rPoint))
        return -1;
    if (rhs.equals(rPoint))
        return 1;
    if (lhs.equals(lPoint))
        return 1;
    if (rhs.equals(lPoint))
        return -1;
    var vl = Vector2D.normalize(Vector2D.sub(lhs, origin));
    var vr = Vector2D.normalize(Vector2D.sub(rhs, origin));
    var al = Vector2D.wrappedAngleBetween(vl, direction);
    var ar = Vector2D.wrappedAngleBetween(vr, direction);
    if (al < ar)
        return 1;
    return -1;
});
Where wrappedAngleBetween gives a signed angle that wraps to all positive angles;
Vector2D.signedAngleBetween = function(lhs, rhs) {
    var na = Vector2D.normalize(lhs);
    var nb = Vector2D.normalize(rhs);
    return Math.atan2(nb.y, nb.x) - Math.atan2(na.y, na.x);
};

Vector2D.wrappedAngleBetween = function(lhs, rhs) {
    var angle = Vector2D.signedAngleBetween(lhs, rhs);
    if (angle < 0)
        angle += TWO_PI;
    return angle;
};
Walls and outer boundaries are owned by a Level class, so that class is best suited to do this calculation on a per entity (an entity being a player or a monster or whatever);
Level.prototype.calculateOutOfViewArea = function(entity) {
    var polygonPoints = [];
    // The two field-of-view boundary directions.
    var rDir = Vector2D.rotate(entity.lookDirection, entity.fieldOfView / 2);
    var lDir = Vector2D.rotate(entity.lookDirection, -entity.fieldOfView / 2);
    var rSegment = new Segment(entity.position, Vector2D.add(entity.position, rDir));
    var lSegment = new Segment(entity.position, Vector2D.add(entity.position, lDir));
    var lPoint = null;
    var rPoint = null;
    for (var i = 0; i < this.bounds.length; ++i) {
        var bound = this.bounds[i];
        // Where the right field-of-view boundary hits this wall (point B).
        var rBoundIntersection = Segment.findIntersection(bound, rSegment);
        if (rBoundIntersection.isIntersection && rBoundIntersection.pointIsOnA && !fltEquals(0, Vector2D.angleBetween(rDir, Vector2D.sub(entity.position, rBoundIntersection.point)))) {
            rPoint = rBoundIntersection.point;
            polygonPoints.push(rPoint);
        }
        // Where the left field-of-view boundary hits this wall (point A).
        var lBoundIntersection = Segment.findIntersection(bound, lSegment);
        if (lBoundIntersection.isIntersection && lBoundIntersection.pointIsOnA && !fltEquals(0, Vector2D.angleBetween(lDir, Vector2D.sub(entity.position, lBoundIntersection.point)))) {
            lPoint = lBoundIntersection.point;
            polygonPoints.push(lPoint);
        }
        // Corners that fall outside the field of view (points P1-Pn).
        var toCorner = Vector2D.sub(bound.a, entity.position);
        var angle = Vector2D.angleBetween(toCorner, entity.lookDirection);
        if (Math.abs(angle) > entity.fieldOfView / 2) {
            polygonPoints.push(bound.a);
        }
    }
    // The observer's own position closes the polygon.
    polygonPoints.push(entity.position);
    var direction = Vector2D.normalize(entity.lookDirection);
    var origin = entity.position;
    polygonPoints.sort(function(lhs, rhs) {
        if (lhs.equals(origin))
            return 1;
        if (rhs.equals(origin))
            return -1;
        if (lhs.equals(rPoint))
            return -1;
        if (rhs.equals(rPoint))
            return 1;
        if (lhs.equals(lPoint))
            return 1;
        if (rhs.equals(lPoint))
            return -1;
        var vl = Vector2D.normalize(Vector2D.sub(lhs, origin));
        var vr = Vector2D.normalize(Vector2D.sub(rhs, origin));
        var al = Vector2D.wrappedAngleBetween(vl, direction);
        var ar = Vector2D.wrappedAngleBetween(vr, direction);
        if (al < ar)
            return 1;
        return -1;
    });
    return polygonPoints;
};
In the above code snippet the variables rDir and lDir represent the outer bounds of the field-of-view. These are calculated by taking the look-direction and then rotating that vector by half the
field-of-view in radians, and that rotation has to be done twice: once negative to find the left boundary and once positive to find the right boundary;
var rDir = Vector2D.rotate(entity.lookDirection, entity.fieldOfView / 2);
var lDir = Vector2D.rotate(entity.lookDirection, -entity.fieldOfView / 2);
These vectors then form what is illustrated as the two red vectors in the diagram above.
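The article doesn't show the body of Vector2D.rotate, but it is presumably a standard 2D rotation matrix applied to the vector. A self-contained sketch of that assumption:

```javascript
// Rotate (x, y) by `angle` radians counter-clockwise (assumed convention;
// the article's Vector2D.rotate implementation is not shown).
function rotate(x, y, angle) {
  var c = Math.cos(angle), s = Math.sin(angle);
  return { x: x * c - y * s, y: x * s + y * c };
}

// A look-direction of (1, 0) with a 90-degree field of view gives
// boundary directions rotated plus/minus 45 degrees:
var fov = Math.PI / 2;
var r = rotate(1, 0, fov / 2);   // roughly (0.707, 0.707)
var l = rotate(1, 0, -fov / 2);  // roughly (0.707, -0.707)
```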
Calculating the vertices and sorting them using the above method yields a polygon like this;
Behind an obstacle
Calculating the obscured area behind an obstacle in the field-of-view is basically calculating the shadow cast by the obstacle had the observer been a light source. This is very similar to
calculating the polygon that covers everything outside the field-of-view but differs in that it starts with the obstacle, rather than the observers field of view.
• Extend a line from the observer through point a on the obstacle (Segment) and see where it hits a boundary wall, call this point A.
• Extend a line from the observer through point b on the obstacle (Segment) and see where it hits a boundary wall, call this point B.
• Extend a line from the player to each "corner" on the outside walls and take the ones where the line falls between the obstacle's a and b, call these P1-Pn.
In the diagram below, the red arrows find points A and B, while the yellow arrows find the corners; note that in this example there is only one yellow arrow meeting the criteria of falling between A
and B.
The shadow polygon that needs to be drawn is then the one made up of the points on the obstacle, the points where the red arrows intersect the outer walls, and the point where the yellow (not dashed)
arrow meets the outer boundary corner.
Code-wise this is very similar to the code finding the area outside the field of view, the main difference being that it is obstacle based: it is called once per obstacle. Again, the same sort of
ordering of the vertices is required, as the polygon has to be rendered in order.
Level.prototype.calculateShadowArea = function(entity, wall) {
    var polygonPoints = [];
    var endpoints = wall.getEndPoints();
    for (var i = 0; i < endpoints.length; ++i) {
        var wallPoint = endpoints[i];
        // The obstacle's own endpoints are part of the shadow polygon.
        polygonPoints.push(wallPoint);
        // Extend a line from the observer through the endpoint and find
        // where it hits a boundary wall (points A and B).
        var observerToWallEndPoint = new Segment(entity.position, wallPoint);
        for (var j = 0; j < this.bounds.length; ++j) {
            var bound = this.bounds[j];
            var wallIntersection = Segment.findIntersection(observerToWallEndPoint, bound);
            if (wallIntersection.isIntersection && wallIntersection.pointIsOnB && fltEquals(0, Vector2D.angleBetween(observerToWallEndPoint.getAtoB(), Vector2D.sub(wallIntersection.point, entity.position)))) {
                polygonPoints.push(wallIntersection.point);
            }
        }
    }
    // Boundary corners that fall inside the shadow (points P1-Pn).
    for (var i = 0; i < this.bounds.length; ++i) {
        var bound = this.bounds[i];
        var observerToBoundryCornerA = new Segment(entity.position, bound.a);
        var cornerIntersection = Segment.findIntersection(wall, observerToBoundryCornerA);
        if (cornerIntersection.isIntersection && cornerIntersection.pointIsOnA && !fltEquals(0, Vector2D.angleBetween(observerToBoundryCornerA.getAtoB(), Vector2D.sub(entity.position, cornerIntersection.point)))) {
            polygonPoints.push(bound.a);
        }
    }
    var source = new Segment(entity.position, wall.a);
    polygonPoints.sort(function(lhs, rhs) {
        if (lhs.equals(wall.a))
            return 1;
        if (rhs.equals(wall.a))
            return -1;
        if (lhs.equals(wall.b))
            return -1;
        if (rhs.equals(wall.b))
            return 1;
        var p2lhs = Vector2D.sub(lhs, source.a);
        var p2rhs = Vector2D.sub(rhs, source.a);
        var lhsAngle = Vector2D.angleBetween(p2lhs, source.getAtoB());
        var rhsAngle = Vector2D.angleBetween(p2rhs, source.getAtoB());
        if (lhsAngle < rhsAngle) {
            return 1;
        }
        else {
            if (lhsAngle > rhsAngle) {
                return -1;
            }
        }
        return 0;
    });
    return polygonPoints;
};
As a convenience the Segment class exposes its two endpoints (Vector2Ds a and b) as an array as well, through the method getEndPoints; this tidies up code that needs to process all endpoints, as
such code can be expressed as loops.
The Segment class also has a method called getAtoB which returns a vector v such that v = segment.b - segment.a, which of course represents the line from a to b. Or in code;
Segment.prototype.getAtoB = function() {
    return Vector2D.sub(this.b, this.a);
};
Again this is there mainly so that code that uses Segment becomes a little neater, and an appropriate optimization would probably have been to cache this value.
The observant reader might point out that this approach isn't very efficient: if an obstacle is completely behind another obstacle, the shadow for it will still be calculated and rendered! This is a correct observation, and I've deliberately left that optimisation out as I am going to redo all of this anyway for my Android version. The solution would be to check whether both end-points of an obstacle are inside an already calculated shadow-area.
Because the non-visible areas of the map are calculated as polygons to be filled with a solid color, the level should be rendered in this order:
• Level background
• Enemies and any other entities
• Field-of-view exclusion and obstacle shadow areas
• Walls/obstacles
• Player
This rendering order allows enemies to be partially visible as they come out of an obscured area.
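The list above can be sketched as a draw loop. The layer names and the draw stub below are stand-ins of my own (there's no canvas here; each call just records its layer so the ordering itself is visible):

```javascript
// Hypothetical sketch of the rendering order; draw() is a stub that
// records which layer was drawn, in order.
var drawOrder = [];
function draw(layer) {
  drawOrder.push(layer);
}

function renderLevel() {
  draw("background"); // level background
  draw("entities");   // enemies and any other entities
  draw("shadows");    // field-of-view exclusion and obstacle shadow areas
  draw("obstacles");  // walls/obstacles, drawn on top of the shadow fills
  draw("player");
}
```

Because the shadow polygons are filled before the entities' layer is covered, an enemy straddling a shadow edge is clipped rather than popping in and out.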
The images below show an example rendering with no occlusion, with obstacle occlusion only, and finally with both obstacle and field-of-view occlusion (red lines are obstacles):
• No obscured areas rendered; the map is fully visible.
• Areas hidden by obstacles are dark-gray.
• Areas hidden by obstacles and outside the field-of-view are dark-gray.
Points of Interest
I was using JavaScript for prototyping but the experience of writing simple games in JavaScript was quite pleasant, and I might consider simply turning this little prototype into an actual game.
I was impressed by QUnitJs, the unit-test framework I used for testing the math-heavy parts. I haven't really used any other test framework for JavaScript, so I don't have anything to compare it to, but it worked really well and it was easy to create test cases with it.
If you're not already unit testing your code you should check it out at qunitjs.com.
The game prototype in the prototype.zip archive is far from finished; it is possible to play around with it, but you can't beat it.
Use WASD to control the blue guy, avoid the red guys, eat the green flashing food, and drop off in the pink-ish area. Shoot red guys with SPACE, but that drains your energy.
2013-02-13: First version.
V22.0436 - Prof. Grishman
Lecture 6: Logic Design -- Sequential Circuits
Representing sequential circuits (text, sec. C.10)
• a sequential circuit can be described as a finite state machine
• state machine includes a next-state function and an output function
(text, figure C.10.1)
◦ we will consider a simple version where the output is just part of the current state
• next-state function can be represented by a graph or a table
• graph: finite state transition network
• transition table: table which gives new state as a function of current state and inputs
Finite state transition network
• a network consists of a set of nodes connected by arcs
• each node represents a state of the circuit (so a circuit with n bits of registers may have 2^n states)
• each arc represents a possible transition between states; it is labeled by the condition (if any) under which the transition is made
Transition table
In a transition table, the input columns represent the state of the circuit and the inputs to the circuit; the output columns represent the next state of the circuit.
Designing a sequential circuit
• (sometimes) draw a transition network for the circuit
• build a transition table
• use the transition table as the truth table for the "next state" combinatorial circuit
• convert this table to a circuit
• if necessary, build an output function truth table and convert it to a circuit
• example: binary up-counter; binary up-down counter
• example: "traffic light" circuit from text
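As a quick sketch of the table-as-function idea (the encoding and names here are illustrative, not from the text), the next-state function of a 2-bit binary up-counter can be written directly as a transition table:

```javascript
// Transition table for a 2-bit binary up-counter.
// Each key is the current state "q1q0"; the value is the next state,
// i.e. the current count plus one, modulo four.
var nextState = {
  "00": "01",
  "01": "10",
  "10": "11",
  "11": "00"
};

// One clock tick: look up the next state in the table.
function step(state) {
  return nextState[state];
}
```

Feeding the output back in as the current state on each tick cycles through 00, 01, 10, 11 and wraps around; building the real circuit means converting this table, viewed as a truth table, into combinational logic.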
P-value of constant in regression - Statistics and econometrics help forum
When running a multiple linear regression I get very low p-values for most of the independent variables and for the model as a whole, but the p-value of the constant remains high. What causes this?
How does it influence my results? How can it be corrected?
Thank you in advance for any help.
Norristown, PA Trigonometry Tutor
Find a Norristown, PA Trigonometry Tutor
...I focus on the area(s) where the student needs help guiding his/her progress while encouraging the student to learn and solve problems independently. It is my goal to make sure each of my
students feels safe, comfortable, and appropriately challenged. As I hold myself to the highest standards, it is my policy to make sure that each student is completely satisfied with their
tutoring session.
21 Subjects: including trigonometry, reading, chemistry, writing
...If you need help with mathematics, physics, or engineering, I'd be glad to help out. With dedication, every student succeeds, so don’t despair! Learning new disciplines keeps me very aware of
the struggles all students face.
14 Subjects: including trigonometry, calculus, physics, geometry
...I have taught students from kindergarten to college age, and I build positive tutoring relationships with students of all ability and motivation levels. All of my students have seen grade
improvement within their first two weeks of tutoring, and all of my students have reviewed me positively. T...
38 Subjects: including trigonometry, English, Spanish, reading
...Aside from that, I occasionally tutored high school mathematics and other more advanced college courses, such as Advanced Calculus, Logic and Set Theory, Foundations of Math, and Abstract
Algebra. Many of these subjects I also tutored privately. In addition to this, I've done substantial work i...
26 Subjects: including trigonometry, reading, English, algebra 2
...Math can also require work but it should never be so hard as to make a student give up or cry. I am passionate about Math in the early years, from Pre-Algebra through Pre-Calculus. Middle
school and early High School are the ages when most children develop crazy ideas about their abilities regarding math.
9 Subjects: including trigonometry, geometry, algebra 1, algebra 2
Posts about Point-Set Topology on The Unapologetic Mathematician
Let’s say we have a compact space $X$. A subset $C\subseteq X$ may not be itself compact, but there’s one useful case in which it will be. If $C$ is closed, then $C$ is compact.
Let’s take an open cover $\{F_i\}_{i\in\mathcal{I}}$ of $C$. The sets $F_i$ are open subsets of $C$, but they may not be open as subsets of $X$. But by the definition of the subspace topology, each
one must be the intersection of $C$ with an open subset of $X$. Let’s just say that each $F_i$ is an open subset of $X$ to begin with.
Now, we have one more open set floating around. The complement of $C$ is open, since $C$ is closed! So between the collection $\{F_i\}$ and the extra set $X\setminus C$ we’ve got an open cover of $X$
. By compactness of $X$, this open cover has a finite subcover. We can throw out $X\setminus C$ from the subcover if it’s in there, and we’re left with a finite open cover of $C$, and so $C$ is
In fact, if we restrict to Hausdorff spaces, $C$ must be closed to be compact. Indeed, we proved that if $C$ is compact and $X$ is Hausdorff then any point $x\in X\setminus C$ can be separated from
$C$ by a neighborhood $U\subseteq X\setminus C$. Since there is such an open neighborhood, $x$ must be an interior point of $X\setminus C$. And since $x$ was arbitrary, every point of $X\setminus C$
is an interior point, and so $X\setminus C$ must be open.
Putting these two sides together, we can see that if $X$ is compact Hausdorff, then a subset $C\subseteq X$ is compact exactly when it’s closed.
An amazingly useful property for a space $X$ is that it be “compact”. We define this term by saying that if $\{U_i\}_{i\in\mathcal{I}}$ is any collection of open subsets of $X$ indexed by any
(possibly infinite) set $\mathcal{I}$ so that their union $\bigcup\limits_{i\in\mathcal{I}}U_i$ is the whole of $X$ — the sexy words are “open cover” — then there is some finite collection of the
index set $\mathcal{A}\subseteq\mathcal{I}$ so that the union of this finite number of open sets $\bigcup\limits_{i\in\mathcal{A}}U_i$still contains all of $X$ — the sexy words are “has a finite
So why does this matter? Well, let’s consider a Hausdorff space $X$, a point $x\in X$, and a finite collection of points $A\subseteq X$. Given any point $a\in A$, we can separate $x$ and $a$ by open
neighborhoods $x\in U_a$ and $a\in V_a$, precisely because $X$ is Hausdorff. Then we can take the intersection $U=\bigcap\limits_{a\in A}U_a$ and the union $V=\bigcup\limits_{a\in A}V_a$. The set $U$
is a neighborhood of $X$, since it’s a finite intersection of neighborhoods, while the set $V$ is a neighborhood of $A$. These two sets can’t intersect, and so we have separated $x$ and $A$ by
But what if $A$ is an infinite set? Then the infinite intersection $\bigcap\limits_{a\in A}U_a$ may not be a neighborhood of $x$! Infinite operations sometimes cause problems in topology, but
compactness can make them finite. If $A$ is a compact subset of $X$, then we can proceed as before. For each $a\in A$ we have open neighborhoods $x\in U_a$ and $a\in V_a$, and so $A\subseteq\bigcup\
limits_{a\in A}V_a$ — the open sets $V_a$ form a cover of $A$. Then compactness tells us that we can pick a finite collection $A'\subseteq A$ so that the union $V=\bigcup\limits_{a\in A'}V_a$ of that
finite collection of sets still covers $A$ — we only need a finite number of the $V_a$ to cover $A$. The finite intersection $U=\bigcap\limits_{a\in A'}U_a$ will then be a neighborhood of $x$ which
doesn’t touch $V$, and so we can separate any point $x\in X$ and any compact set $A\subseteq X$ by neighborhoods.
As an exercise, do the exact same thing again to show that in a Hausdorff space $X$ we can separate any two compact sets $A\subseteq X$ and $B\subseteq X$ by neighborhoods.
In a sense, this shows that while compact spaces may be infinite, they sometimes behave as nicely as finite sets. This can make a lot of things simpler in the long run. And just like we saw for
connectivity, we are often interested in things behaving nicely near a point. We thus define a space to be “locally compact” if every point has a neighborhood which is compact (in the subspace
There’s an equivalent definition in terms of closed sets, which is dual to this one. Let’s say we have a collection $\{F_i\}_{i\in\mathcal{I}}$ of closed subsets of $X$ so that the intersection of
any finite collection of the $F_i$ is nonempty. Then I assert that the intersection of all of the $F_i$ will be nonempty as well if $X$ is compact. To see this, assume that the intersection is empty:

$\bigcap\limits_{i\in\mathcal{I}}F_i=\varnothing$

Then the complement of this intersection is all of $X$. We can rewrite this as the union of the complements of the $F_i$:

$X=X\setminus\bigcap\limits_{i\in\mathcal{I}}F_i=\bigcup\limits_{i\in\mathcal{I}}\left(X\setminus F_i\right)$

Since we're assuming $X$ to be compact, we can find some finite subcollection $\mathcal{A}\subseteq\mathcal{I}$ so that

$X=\bigcup\limits_{i\in\mathcal{A}}\left(X\setminus F_i\right)$

which, taking complements again, implies that

$\bigcap\limits_{i\in\mathcal{A}}F_i=\varnothing$

but we assumed that all of the finite intersections were nonempty!
Now turn this around and show that if we assume this “finite intersection property” — that if all finite intersections of a collection of closed sets $F_i$ are nonempty, then the intersection of all
the $F_i$ are nonempty — then we can derive the first definition of compactness from it.
Now that we have some vocabulary about separation properties down we can talk about properties of spaces as a whole, called the separation axioms.
First off, we say that a space is $T_0$ if every two distinct points can be topologically distinguished. This fails, for example, in the trivial topology on a set $X$ if $X$ has at least two points,
because every point has the same collection of neighborhoods — $\mathcal{N}(x)=\{X\}$ for all points $x\in X$. As far as the topology is concerned, all the points are the same. This turns out to be
particularly interesting in conjunction with other separation axioms, since we often will have one axiom saying that a property holds for all distinct points, and another saying that the property
holds for all topologically distinguishable points. Adding $T_0$ turns the latter version into the former.
Next, we say that a space is $R_0$ if any two topologically distinguishable points are separated. That is, we never have a point $x$ in the closure of the singleton set $\{y\}$ without the point $y$
being in the closure of $\{x\}$. Adding $T_0$ to this condition gives us $T_1$. A $T_1$ space is one in which any two distinct points are not only topologically distinguishable, but separated. In
particular, we can see that the singleton set $\{x\}$ is closed, since its closure can’t contain any other points than $x$ itself.
A space is $R_1$ if any two topologically distinguishable points are separated by neighborhoods. If this also holds for any pair of distinct points we say that the space is $T_2$, or “Hausdorff”.
This is where most topologists start to feel comfortable, though the topologies that arise in algebraic geometry are usually non-Hausdorff. To a certain extent (well, to me at least) Hausdorff spaces
feel a lot more topologically natural and intuitive than non-Hausdorff spaces, and you almost have to try to construct pathological spaces to violate this property. Back in graduate school, some of
us adapted the term to apply more generally, as in “That guy Steve is highly non-Hausdorff.”
One interesting and useful property of Hausdorff spaces is that the image of the diagonal map $\Delta:X\rightarrow X\times X$ defined by $\Delta(x)=(x,x)$ is closed. To see this, notice that it means
the complement of the image is open. That is, if $(x,y)$ is a pair of points of $X$ with $x\neq y$ then we can find an open neighborhood containing the point $(x,y)$ consisting only of pairs $(z,w)$ with $z\neq w$. In fact, we have a base for the product topology on $X\times X$ consisting of products of two open sets in $X$. That is, we can pick our open neighborhood of $(x,y)$ to be the set of all
pairs $(z,w)$ with $z\in Z$ and $w\in W$, where $Z$ is an open subset of $X$ containing $x$ and $W$ is an open subset containing $y$. To say that this product doesn’t touch the diagonal means that $Z
\cap W=\varnothing$, which is just what it means for $x$ and $y$ to be separated by neighborhoods!
We can strengthen this by asking that any two distinct points are separated by closed neighborhoods. If this holds we say the space is $T_{2\frac{1}{2}}$. There’s no standard name for the weaker
version discussing topologically distinguishable points. Stronger still is saying that a space is "completely Hausdorff" or completely $T_2$, which asks that any two distinct points be separated by a function.
A space $X$ is “regular” if given a point $x\in X$ and a closed subset $C\subseteq X$ with $xotin C$ we can separate $\{x\}$ and $C$ by neighborhoods. This is a bit stronger than being Hausdorff,
where we only asked that this hold for two singletons. For regular spaces, we allow one of the two sets we’re separating to be any closed set. If we add on the $T_0$ condition we’re above $T_1$, and
so singletons are just special closed sets anyhow, but we’re strictly stronger than regularity now. We call this condition $T_3$.
As for Hausdorff, we say that a space is completely regular if we can actually separate $\{x\}$ and $C$ by a function. If we take a completely regular space and add $T_0$, we say it’s $T_{3\frac{1}
{2}}$, or “completely regular Hausdorff”, or “Tychonoff”.
We say a space is “normal” if any two disjoint closed subsets are separated by neighborhoods. In fact, a theorem known as Urysohn’s Lemma tells us that we get for free that they’re separated by a
function as well. If we add in $T_1$ (not $T_0$ this time) we say that it is “normal Hausdorff”, or $T_4$.
A space is “completely normal” if any two separated sets are separated by neighborhoods. Adding in $T_1$ we say that the space is “completely normal Hausdorff”, or $T_5$.
Finally, a space is “perfectly normal” if any two disjoint closed sets are precisely separated by a function. Adding $T_1$ makes the space “perfectly normal Hausdorff”, or $T_6$.
The Wikipedia entry here is rather informative, and has a great schematic showing which of the axioms imply which others. Most of these axioms I won’t be using, but it’s good to have them out here in
case I need them.
There’s a whole list of properties of topological spaces that we may want to refer to called the separation axioms. Even when two points are distinct elements of the underlying set of a topological
space, we may not be able to tell them apart with topological techniques. Points are separated if we can tell them apart in some way using the topology. Today we’ll discuss various properties of
separation, and tomorrow we’ll list some of the more useful separation axioms we can ask that a space satisfy.
First, and weakest, we say that points $x$ and $y$ in a topological space $X$ are “topologically distinguishable” if they don’t have the same collection of neighborhoods — if $\mathcal{N}(x)eq\
mathcal{N}(y)$. Now maybe one of the collections of neighborhoods strictly contains the other: $\mathcal{N}(x)\subseteq\mathcal{N}(y)$. In this case, every neighborhood of $x$ is a neighborhood of
$y$. A fortiori, it contains a neighborhood of $y$, and thus contains $y$ itself. Thus the point $x$ is in the closure of the set $\{y\}$. This is really close. The points are topologically
distinguishable, but still a bit too close for comfort. So we define points to be “separated” if each has a neighborhood the other one doesn’t, or equivalently if neither is in the closure of the
other. We can extend this to subsets larger than just points. We say that two subsets $A$ and $B$ are separated if neither one touches the closure of the other. That is, $A\cap\bar{B}=\varnothing$
and $\bar{A}\cap B=\varnothing$.
We can go on and give stronger conditions, saying that two sets are “separated by neighborhoods” if they have disjoint neighborhoods. That is, there are neighborhoods $U$ and $V$ of $A$ and $B$,
respectively, and $U\cap V=\varnothing$. Being a neighborhood here means that $U$ contains some open set $S$ which contains $A$, and $V$ contains some open set $T$ which contains $B$. We see that the closure of $B$ is contained in the complement of $S$, and
similarly the closure of $A$ is in the complement of $T$, so neither $A$ nor $B$ can touch the other’s closure. Stronger still is being “separated by closed neighborhoods”, which asks that $U$ and
$V$ be disjoint closed neighborhoods. These keep $A$ and $B$ even further apart, since these neighborhoods themselves can’t touch each other’s closures.
The next step up is that sets be “separated by a function” if there is a continuous function $f:X\rightarrow\mathbb{R}$ so that for every point $a\in A$ we have $f(a)=0$, and for every point $b\in B$
we have $f(b)=1$. In this case we can take the closed interval $\left[-1,\frac{1}{3}\right]$ whose preimage must be a closed neighborhood of $A$ by continuity. Similarly we can take the closed
interval $\left[\frac{2}{3},2\right]$ whose preimage is a closed neighborhood of $B$. Since these preimages can’t touch each other, we have separated $A$ and $B$ by closed neighborhoods. Stronger
still is that $A$ and $B$ are "precisely separated by a function", which adds the requirement that only points from $A$ go to $0$ and only points from $B$ go to $1$.
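For a concrete instance of the strongest condition, there is a standard metric-space construction (standard, though not spelled out in the post): for disjoint closed sets $A$ and $B$, the function $f(x)=d(x,A)/(d(x,A)+d(x,B))$ is continuous, equals $0$ exactly on $A$ and $1$ exactly on $B$, and so precisely separates them. A sketch for finite sets of real numbers, which are closed:

```javascript
// Distance from a point x to a finite set of reals.
function dist(x, set) {
  var best = Infinity;
  for (var i = 0; i < set.length; ++i) {
    best = Math.min(best, Math.abs(x - set[i]));
  }
  return best;
}

// Build the separating function f(x) = d(x,A) / (d(x,A) + d(x,B)).
// Assumes A and B are disjoint, so the denominator is never zero.
function separator(A, B) {
  return function(x) {
    var dA = dist(x, A), dB = dist(x, B);
    return dA / (dA + dB);
  };
}
```

With A = [0] and B = [1], the resulting f sends 0 to 0, 1 to 1, and 0.5 to 0.5.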
This list of separation conditions runs from weakest to strongest.
One theorem turns out to be very important when we’re dealing with connected spaces, or even just with a connected component of a space. If $f$ is a continuous map from a connected space $X$ to any
topological space $Y$, then the image $f(X)\subseteq Y$ is connected. Similarly, if $X$ is path-connected then its image is path-connected.
The path-connected version is actually more straightforward. Let’s say that we pick points $y_0$ and $y_1$ in $f(X)$. Then there must exist $x_0$ and $x_1$ with $f(x_0)=y_0$ and $f(x_1)=y_1$. By
path-connectedness there is a function $g:\left[0,1\right]\rightarrow X$ with $g(0)=x_0$ and $g(1)=x_1$, and so $f(g(0))=y_0$ and $f(g(1))=y_1$. Thus the composite function $g\circ f:\left[0,1\right]
\rightarrow Y$ is a path from $y_0$ to $y_1$.
Now for the connected version. Let’s say that $f(X)$ is disconnected. Then we can write it as the disjoint union of two nonempty closed sets $B_1$ and $B_2$ by putting some connected components in
the one and some in the other. Taking complements we see that both of these sets are also open. Then we can consider their preimages $f^{-1}(B_1)$ and $f^{-1}(B_2)$, whose union is $X$ since every
point in $X$ lands in either $B_1$ or $B_2$.
By the continuity of $f$, each of these preimages is open. Seeing as each is the complement of the other, they must also both be closed. And neither one can be empty because some points in $X$ land
in each of $B_1$ and $B_2$. Thus we have a nontrivial clopen set in $X$, contradicting the assumption that it’s connected. Thus the image $f(X)$ must have been connected, as was to be shown.
From this theorem we see that the image of any connected component under a continuous map $f$ must land entirely within a connected component of the range of $f$. For example, any map from a
connected space to a totally disconnected space (one where each point is a connected component) must be constant.
When we specialize to real-valued functions, this theorem gets simple. Notice that a connected subset of $\mathbb{R}$ is just an interval. It may contain one or both endpoints, and it may stretch off
to infinity in one or both directions, but that’s about all the variation we’ve got. So if $X$ is a connected space then the image $f(X)$ of a continuous function $f:X\rightarrow\mathbb{R}$ is an
An immediate corollary to this fact is the intermediate value theorem. Given a connected space $X$, a continuous real-valued function $f$, and points $x_1,x_2\in X$ with $f(x_1)=a_1$ and $f(x_2)=a_2$
(without loss of generality, $a_1<a_2$), then for any $b\in\left(a_1,a_2\right)$ there is a $y\in X$ so that $f(y)=b$. That is, a continuous function takes all the values between any two values it
takes. In particular, if $X$ is itself an interval in $\mathbb{R}$ we get back the old intermediate value theorem from calculus.
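As an aside, this corollary is exactly the guarantee behind the bisection method; here is a minimal sketch (the function and names are mine, not from the post):

```javascript
// Bisection sketch: assumes f is continuous on [a, b] with f(a) < t < f(b).
// The intermediate value theorem guarantees some y in (a, b) with f(y) = t;
// we halve the bracketing interval until it is shorter than tol.
function ivtBisect(f, a, b, t, tol) {
  var lo = a, hi = b;
  while (hi - lo > tol) {
    var mid = (lo + hi) / 2;
    if (f(mid) < t) {
      lo = mid; // the crossing lies in the right half
    } else {
      hi = mid; // the crossing lies in the left half
    }
  }
  return (lo + hi) / 2;
}
```

For example, ivtBisect(function(x) { return x * x; }, 0, 2, 2, 1e-9) approximates $\sqrt{2}$: continuity of $x\mapsto x^2$ is what guarantees the value $2$ is actually attained somewhere in $(0,2)$.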
Tied in with the fundamental notion of continuity for studying topology is the notion of connectedness. In fact, once two parts of a space are disconnected, there’s almost no topological influence of
one on the other, which should be clear from an intuitive idea of what it might mean for a space to be connected or disconnected. This intuitive notion can be illustrated by considering subspaces of
the real plane $\mathbb{R}^2$.
First, just so we’re clear, a subset of the plane is closed if it contains its boundary and open if it contains no boundary points. Here there’s a lot more between open and closed than there is for
intervals with just two boundary points. Anyhow, you should be able to verify this by a number of methods. Try using the pythagorean distance formula to make this a metric space, or you could work
out a subbase of the product topology. In fact, not only should you get the same answer, but it’s interesting to generalize this to find a metric on the product of two arbitrary metric spaces.
Anyhow, back to connectedness. Take a sheet of paper to be your plane, and draw a bunch of blobs on it. Use dotted lines sometimes to say you’re leaving out that section of the blob’s border. Have
fun with it.
Now we’ll consider that collection of blobs as a subspace $X\subseteq\mathbb{R}^2$, and thus it inherits the subspace topology. We can take one blob $B$ and throw an open set around it that doesn’t
hit any other blobs (draw a dotted curve around the blob that doesn’t touch any other). Thus the blob is an open subset $B\subseteq X$ because it’s the intersection of the open set we drew in the
plane and the subspace $X$. But we could also draw a solid curve instead of a dotted one and get a closed set in the plane whose intersection with $X$ is $B$. Thus $B$ is also a closed subset of $X$.
Some people like to call such a subset in a topological space “clopen”.
In general, given any topological space we can break it into clopen sets. If the only clopen sets are the whole space and the empty subspace, then we’re done. Otherwise, given a nontrivial clopen
subset, its complement must also be clopen (why?), and so we can break it apart into those pieces. We call a space with no nontrivial clopen sets “connected”, and a maximal connected subspace $B$ of
a topological space $X$ we call a “connected component”. That is, if we add any other points from $X$ to $B$, it will be disconnected.
An important property of connected spaces is that we cannot divide them into two disjoint nonempty closed subsets. Indeed, if we could then the complement of one closed subset would be the other. It
would be open (as the complement of a closed subset) and closed (by assumption) and nontrivial since neither it nor its complement could be empty. Thus we would have a nontrivial clopen subset,
contrary to our assumptions.
If we have a bunch of connected spaces, we can take their coproduct — their disjoint union — to get a disconnected space with the original family of spaces as its connected components.
Sometimes we’re just looking near one point or another, like we’ve done for continuity or differentiability of functions on the real line. In this case we don’t really care whether the space is
connected in general, but just that it looks like it’s connected near the point we care about. We say that a space is “locally connected” if every point has a connected neighborhood.
Sometimes just being connected isn’t quite strong enough. Take the plane again and mark axes. Then draw the graph of the function defined by $f(x)=\sin\left(\frac{1}{x}\right)$ on the interval $\left
(0,1\right]$, and by $f(0)=0$. We call this the “topologist’s sine curve”. It’s connected because any open set we draw containing the wiggly sine bit gets the point $(0,0)$ too. The problem is, we
might want to draw paths between points in the space, and we can’t do that here. For two points in the sine part, we just follow the curve, but we can never quite get to or away from the origin.
Incidentally, it’s also not locally connected because any small ball around the origin contains a bunch of arcs from the sine part that aren’t connected to each other.
So when we want to draw paths, we ask that a space be “path-connected”. That is, given points $x_0$ and $x_1$ in our space $X$, there is a function $f:\left[0,1\right]\rightarrow X$ with $f(0)=x_0$
and $f(1)=x_1$. Slightly stronger, we might want to require that we can choose this function to be a homeomorphism from the closed interval $\left[0,1\right]$ onto its image in $X$. In this case we
say that the space is “arc-connected”.
Arc-connectedness clearly implies path-connectedness, and we’ll see in more detail later that path-connectedness implies connectedness. However, the converses do not hold. The topologist’s sine curve
gives a counterexample where connectedness doesn’t imply path-connectedness, and I’ll let you try to find a counterexample for the other converse.
Just like we had local connectedness, we say that a space is locally path- or arc-connected if every point has a neighborhood which is path- or arc-connected. We also have path-components and
arc-components defined as for connected components.
[UPDATE]: As discussed below in the comments, I made a mistake here, implicitly assuming the same thing edriv said explicitly. As I say below, point-set topology and analysis really live on the edge
of validity, and there’s a cottage industry of crafting counterexamples to all sorts of theorems if you weaken the hypotheses just slightly.
We’ve defined the real numbers $\mathbb{R}$ as a topological field by completing the rational numbers $\mathbb{Q}$ as a uniform space, and then extending the field operations to the new points by
continuity. Now we extend the order on the rational numbers to make $\mathbb{R}$ into an ordered field.
First off, we can simplify our work greatly by recognizing that we just need to determine the subset $\mathbb{R}^+$ of positive real numbers — those $x\in\mathbb{R}$ with $x\geq0$. Then we can say $x
\geq y$ if $x-y\geq0$. Now, each real number is represented by a Cauchy sequence of rational numbers, and so we say $x\geq0$ if $x$ has a representative sequence $x_n$ with each point $x_n\geq 0$.
What we need to check is that the positive numbers are closed under both addition and multiplication. But clearly if we pick $x_n$ and $y_n$ to be nonnegative Cauchy sequences representing $x$ and
$y$, respectively, then $x+y$ is represented by $x_n+y_n$ and $xy$ is represented by $x_ny_n$, and these will be nonnegative since $\mathbb{Q}$ is an ordered field.
Now for each $x$, $x-x=0\geq0$, so $x\geq x$. Also, if $x\geq y$ and $y\geq z$, then $x-y\geq0$ and $y-z\geq0$, so $x-z=(x-y)+(y-z)\geq0$, and so $x\geq z$. These show that $\geq$ defines a preorder
on $\mathbb{R}$, since it is reflexive and transitive. Further, if $x\geq y$ and $y\geq x$ then $x-y\geq0$ and $y-x\geq0$, so $x-y=0$ and thus $x=y$. This shows that $\geq$ is a partial order.
Clearly this order is total because any real number either has a nonnegative representative or it doesn’t.
One thing is a little hazy here. We asserted that if a number and its negative are both greater than or equal to zero, then it must be zero itself. Why is this? Well if $x_n$ is a nonnegative Cauchy
sequence representing $x$ then $-x_n$ represents $-x$. Now can we find a nonnegative Cauchy sequence $y_n$ equivalent to $-x_n$? The lowest rational number that $y_n$ can be is, of course, zero, and
so $\left|y_n-(-x_n)\right|\geq x_n$. But for $-x_n$ and $y_n$ to be equivalent we must have for each positive rational $r$ an $N$ so that $r\geq\left|y_n-(-x_n)\right|\geq x_n$ for $n\geq N$. But
this just says that $x_n$ converges to ${0}$!
So $\mathbb{R}$ is an ordered field, so what does this tell us? First off, we get an absolute value $\left|x\right|$ just like we did for the rationals. Secondly, we’ll get a uniform structure as we
do for any ordered group. This uniform topology has a subbase consisting of all the half-infinite intervals $(x,\infty)$ and $(-\infty,x)$ for all real $x$. But this is also a subbase for the metric
we got from completing the rationals, and so the two topologies coincide!
One more very important thing holds for all ordered fields. As a field, $\mathbb{F}$ is a kind of ring with unit, and like any ring with unit there is a unique ring homomorphism $\mathbb{Z}\rightarrow\mathbb{F}$. Now since $1>0$ in any ordered field, we have $2=1+1>0$, and $3=2+1>0$, and so on, to show that no nonzero integer can become zero under this map. Since we have an injective
homomorphism of rings, the universal property of the field of fractions gives us a unique field homomorphism $\mathbb{Q}\rightarrow\mathbb{F}$ extending the ring homomorphism from the integers.
Now if $\mathbb{F}$ is complete in the uniform structure defined by its order, this homomorphism will be uniformly continuous. Therefore by the universal property of uniform completions, we will find a
unique extension $\mathbb{R}\rightarrow\mathbb{F}$. That is, given any (uniformly) complete ordered field there is a unique uniformly continuous homomorphism of fields from the real numbers to the
field in question. Thus $\mathbb{R}$ is the universal such field, which characterizes it uniquely up to isomorphism!
So we can unambiguously speak of “the” real numbers, even if we use a different method of constructing them, or even no method at all. We can work out the rest of the theory of real numbers from
these properties (though for the first few we might fall back on our construction) just as we could work out the theory of natural numbers from the Peano axioms.
We’ve defined the topological space we call the real number line $\mathbb{R}$ as the completion of the rational numbers $\mathbb{Q}$ as a uniform space. But we want to be able to do things like
arithmetic on it. That is, we want to put the structure of a field on this set. And because we’ve also got the structure of a topological space, we want the field operations to be continuous maps.
Then we’ll have a topological field, or a “field object” (analogous to a group object) in the category $\mathbf{Top}$ of topological spaces.
Not only do we want the field operations to be continuous, we want them to agree with those on the rational numbers. And since $\mathbb{Q}$ is dense in $\mathbb{R}$ (and similarly $\mathbb{Q}\times\mathbb{Q}$ is dense in $\mathbb{R}\times\mathbb{R}$), we will get unique continuous maps to extend our field operations. In fact the uniqueness is the easy part, due to the following general property
of dense subsets.
Consider a topological space $X$ with a dense subset $D\subseteq X$. Then (at least when $X$ is sequential, as all the spaces we care about here are) every point $x\in X$ has a sequence $x_n\in D$ with $\lim x_n=x$. Now if $f:X\rightarrow Y$ and $g:X\rightarrow Y$ are two continuous functions which agree at every point of $D$, then they agree at all points of $X$. Indeed, picking a sequence in $D$ converging to $x$ we have
$f(x)=f(\lim x_n)=\lim f(x_n)=\lim g(x_n)=g(\lim x_n)=g(x)$.
So if we can show the existence of a continuous extension of, say, addition of rational numbers to all real numbers, then the extension is unique. In fact, the continuity will be enough to tell us
what the extension should look like. Let’s take real numbers $x$ and $y$, and sequences of rational numbers $x_n$ and $y_n$ converging to $x$ and $y$, respectively. We should have
$s(x,y)=s(\lim x_n,\lim y_n)=s(\lim(x_n,y_n))=\lim(x_n+y_n)$
but how do we know that the limit on the right exists? Well if we can show that the sequence $x_n+y_n$ is a Cauchy sequence of rational numbers, then it must converge because $\mathbb{R}$ is complete.
Given a rational number $r$ we must show that there exists a natural number $N$ so that $\left|(x_m+y_m)-(x_n+y_n)\right|<r$ for all $m,n\geq N$. But we know that there's a number $N_x$ so that $\left|x_m-x_n\right|<\frac{r}{2}$ for $m,n\geq N_x$, and a number $N_y$ so that $\left|y_m-y_n\right|<\frac{r}{2}$ for $m,n\geq N_y$. Then we can choose $N$ to be the larger of $N_x$ and $N_y$, and the triangle inequality gives $\left|(x_m+y_m)-(x_n+y_n)\right|\leq\left|x_m-x_n\right|+\left|y_m-y_n\right|<\frac{r}{2}+\frac{r}{2}=r$ for all $m,n\geq N$.
So the sequence of sums is Cauchy, and thus converges.
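As a quick numeric sanity check (an illustration, not part of the proof), the sketch below represents $\sqrt{2}$ and $\sqrt{3}$ by exact rational Cauchy sequences and confirms that the termwise sums eventually stay within a small rational tolerance of one another. The helper name `babylonian` is mine, not from the original post.

```python
# Illustrative sketch only: exact rational Cauchy sequences for
# sqrt(2) and sqrt(3), and a check that their termwise sums are
# themselves settling down (the Cauchy condition in action).
from fractions import Fraction

def babylonian(a, n):
    """n-th term of a rational sequence converging to sqrt(a),
    via the iteration x -> (x + a/x) / 2 starting from x = 2."""
    x = Fraction(2)
    for _ in range(n):
        x = (x + Fraction(a) / x) / 2
    return x

r = Fraction(1, 10**6)   # the tolerance "r" from the argument above
N = 5                    # an index large enough for this tolerance
sums = [babylonian(2, n) + babylonian(3, n) for n in range(N, N + 4)]
# every pair of terms beyond N is within r of every other
all_close = all(abs(a - b) < r for a in sums for b in sums)
```

The iteration converges quadratically, so by the fifth term each sequence already agrees with its limit to many digits and the Cauchy tolerance is met comfortably.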
What if we chose different sequences $x'_n$ and $y'_n$ converging to $x$ and $y$? Then we get another Cauchy sequence $x'_n+y'_n$ of rational numbers. To show that addition of real numbers is
well-defined, we need to show that it’s equivalent to the sequence $x_n+y_n$. So given a rational number $r$ does there exist an $N$ so that $\left|(x_n+y_n)-(x'_n+y'_n)\right|<r$ for all $n\geq N$?
This is almost exactly the same as the above argument that each sequence is Cauchy! As such, I’ll leave it to you.
So we’ve got a continuous function taking two real numbers and giving back another one, and which agrees with addition of rational numbers. Does it define an Abelian group? The uniqueness property
for functions defined on dense subspaces will come to our rescue! We can write down two functions from $\mathbb{R}\times\mathbb{R}\times\mathbb{R}$ to $\mathbb{R}$ defined by $s(s(x,y),z)$ and $s(x,s(y,z))$. Since $s$ agrees with addition on rational numbers, and since triples of rational numbers are dense in the set of triples of real numbers, these two functions agree on a dense subset of their domains, and so must be equal. If we take the ${0}$ from $\mathbb{Q}$ as the additive identity we can also verify that it acts as an identity for real number addition. We can also find the negative of a real number $x$ by negating each term of a Cauchy sequence converging to $x$, and verify that this behaves as an additive inverse, and we can show this addition to be commutative, all using the same techniques as above. From here we'll just write $x+y$ for the sum of real numbers $x$ and $y$.
What about the multiplication? Again, we’ll want to choose rational sequences $x_n$ and $y_n$ converging to $x$ and $y$, and define our function by
$m(x,y)=m(\lim x_n,\lim y_n)=m(\lim(x_n,y_n))=\lim(x_ny_n)$
so it will be continuous and agree with rational number multiplication. Now we must show that for every rational number $r$ there is an $N$ so that $\left|x_my_m-x_ny_n\right|<r$ for all $m,n\geq N$.
This will be a bit clearer if we start by noting that for each rational $r_x$ there is an $N_x$ so that $\left|x_m-x_n\right|<r_x$ for all $m,n\geq N_x$. In particular, for sufficiently large $n$ we
have $\left|x_n\right|<\left|x_{N_x}\right|+r_x$, so the sequence $\left|x_n\right|$ is bounded above by some $b_x$. Similarly, given $r_y$ we can pick $N_y$ so that $\left|y_m-y_n\right|<r_y$ for $m,n\geq N_y$ and get an upper bound $b_y\geq\left|y_n\right|$ for all $n$. Then choosing $N$ to be the larger of $N_x$ and $N_y$ we will have
$\left|x_my_m-x_ny_n\right|=\left|(x_m-x_n)y_m+x_n(y_m-y_n)\right|\leq r_xb_y+b_xr_y$
for $m,n\geq N$. Now given a rational $r$ we can (with a little work) find $r_x$ and $r_y$ so that the expression on the right will be less than $r$, and so the sequence is Cauchy, as desired.
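The inequality above comes from the identity $x_my_m-x_ny_n=(x_m-x_n)y_m+x_n(y_m-y_n)$ together with the bounds, and it is easy to spot-check; the sketch below (my own notation) does so for two concrete rational Cauchy sequences.

```python
# Spot-check of |x_m y_m - x_n y_n| <= |x_m - x_n| b_y + b_x |y_m - y_n|
# for two bounded rational Cauchy sequences.
from fractions import Fraction

xs = [Fraction(3) + Fraction((-1) ** n, n) for n in range(1, 60)]  # -> 3
ys = [Fraction(5) - Fraction(1, n * n) for n in range(1, 60)]      # -> 5
b_x = max(abs(t) for t in xs)   # bound on |x_n|
b_y = max(abs(t) for t in ys)   # bound on |y_n|
for m in range(0, 59, 7):
    for n in range(0, 59, 11):
        lhs = abs(xs[m] * ys[m] - xs[n] * ys[n])
        rhs = abs(xs[m] - xs[n]) * b_y + b_x * abs(ys[m] - ys[n])
        assert lhs <= rhs       # holds for every pair of indices
```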
Then, as for addition, it turns out that a similar proof will show that this definition doesn’t depend on the choice of sequences converging to $x$ and $y$, so we get a multiplication. Again, we can
use the density of the rational numbers to show that it’s associative and commutative, that $1\in\mathbb{Q}$ serves as its unit, and that multiplication distributes over addition. We’ll just write
$xy$ for the product of real numbers $x$ and $y$ from here on.
To show that $\mathbb{R}$ is a field we need a multiplicative inverse for each nonzero real number. That is, for each Cauchy sequence of rational numbers $x_n$ that doesn’t converge to ${0}$, we
would like to consider the sequence $\frac{1}{x_n}$, but some of the $x_n$ might equal zero and thus throw us off. However, there can only be a finite number of zeroes in the sequence or else ${0}$
would be an accumulation point of the sequence and it would either converge to ${0}$ or fail to be Cauchy. So we can just change each of those to some nonzero rational number without breaking the
Cauchy property or changing the real number it converges to. Then another argument similar to that for multiplication shows that this defines a function from the nonzero reals to themselves which
acts as a multiplicative inverse.
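To see the patching step concretely, here is an illustrative sketch (hypothetical sequence, my naming): a rational sequence converging to $\frac{1}{2}$ whose first two terms happen to be zero. Replacing those zeros by any nonzero rational and inverting termwise yields a sequence closing in on $2$.

```python
# Patch the finitely many zero terms of a Cauchy sequence (here the
# first two), then invert termwise to approximate the multiplicative
# inverse of its limit.
from fractions import Fraction

def seq(n):
    """Rational sequence converging to 1/2, with seq(0) = seq(1) = 0."""
    if n < 2:
        return Fraction(0)
    return Fraction(1, 2) + Fraction(1, n)

terms = [seq(n) for n in range(200)]
patched = [t if t != 0 else Fraction(1) for t in terms]  # any nonzero value works
inverses = [1 / t for t in patched]
```

Changing finitely many terms never affects the Cauchy property or the limit, which is why the patch is harmless.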
Okay, in a uniform space we have these things called “Cauchy nets”, which are ones where the points of the net are getting closer and closer to each other. If our space is sequential — usually a
result of assuming it to be first- or second-countable — then we can forget the more complicated nets and just consider Cauchy sequences. In fact, let’s talk as if we’re looking at a sequence to
build up an intuition here.
Okay, so a sequence is Cauchy if no matter what entourage we pick to give a scale of closeness, there’s some point along our sequence where all of the remaining points are at least that close to each
other. If we pick a smaller entourage we might have to walk further out the sequence, but eventually every point will be at least that close to all the points beyond it. So clearly they’re all
getting pressed together towards a limit, right?
Unfortunately, no. And we have an example at hand of where it can go horribly, horribly wrong. The rational numbers $\mathbb{Q}$ are an ordered topological group, and so they have a uniform
structure. We can give a base for this topology consisting of all the rays $(a,\infty)=\{x\in\mathbb{Q}|a<x\}$, the rays $(-\infty,a)=\{x\in\mathbb{Q}|x<a\}$, and the intervals $(a,b)=\{x\in\mathbb{Q}|a<x<b\}$ for rational $a$ and $b$, which is clearly countable and thus makes $\mathbb{Q}$ second-countable, and thus sequential.
Okay, I’ll take part of that back. This is only “clear” if you know a few things about cardinalities which I’d thought I’d mentioned but it turns out I haven’t. It was also pointed out that I never
said how to generate an equivalence relation from a simpler relation in a comment earlier. I’ll wrap up those loose ends shortly, probably tomorrow.
Back to the business at hand: we can now just consider Cauchy sequences, instead of more general Cauchy nets. Also we can explicitly give entourages that comprise a base for the uniform structure,
which is all we really need to check the Cauchy condition: $E_a=\{(x,y)\in\mathbb{Q}\times\mathbb{Q}|\left|x-y\right|<a\}$. I did do absolute values, didn’t I? So a sequence $x_i$ is Cauchy if for
every rational number $a$ there is an index $N$ so that for all $i\geq N$ and $j\geq N$ we have $\left|x_i-x_j\right|<a$.
We also have a neighborhood base $\mathcal{B}(q)$ for each rational number $q$ given by the basic entourages. For each rational number $r$ we have the neighborhood $\{x\in\mathbb{Q}|\left|x-q\right|
<r\}$. These are all we need to check convergence. That is, a sequence $x_i$ of rational numbers converges to $q$ if for all rational $r$ there is an index $N$ so that for all $i\geq N$ we have $\left|x_i-q\right|<r$.
And finally: for each natural number $n\in\mathbb{N}$ there are only finitely many square numbers less than $2n^2$. We'll let $a_n^2$ be the largest such number, and consider the rational number $x_n=\frac{a_n}{n}$. We can show that this sequence is Cauchy, but it cannot converge to any rational number. In fact, if we had such a thing, this sequence would be trying to converge to the square root of two.
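That sequence is easy to compute exactly. The sketch below (my notation, following the description above) builds $x_n=\frac{a_n}{n}$ and checks that its squares pin down $2$ from below, so the only candidate limit would be $\sqrt{2}$.

```python
# The sequence described above: a_n^2 is the largest square below
# 2 n^2, and x_n = a_n / n is rational and Cauchy, but its would-be
# limit is sqrt(2), which is not in Q.
from fractions import Fraction
from math import isqrt

def x(n):
    a = isqrt(2 * n * n)       # floor of sqrt(2 n^2)
    if a * a == 2 * n * n:     # keep the strict inequality a^2 < 2 n^2
        a -= 1                 # (never triggers: 2 n^2 is not a square)
    return Fraction(a, n)

# x_n^2 sits just below 2, and nudging x_n up by 1/n overshoots it
for n in (10, 100, 1000):
    assert x(n) ** 2 < 2 < (x(n) + Fraction(1, n)) ** 2
```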
The uniform space $\mathbb{Q}$ is shot through with holes like this, making tons of examples of Cauchy sequences which “should” converge, but don’t. And this is all just in one little uniform space!
Clearly Cauchy nets don’t converge in general. But we dearly want them to. If we have a uniform space in which every Cauchy sequence does converge, we call it “complete”.
Categorically, a complete uniform space is sort of like an abelian group: completeness is an extra property which we may forget when convenient. That is, we have a category $\mathbf{Unif}$ of uniform spaces and a full subcategory $\mathbf{CUnif}$ of complete uniform spaces. The inclusion functor of the subcategory is our forgetful functor, and we'd like an adjoint to this functor which assigns to each uniform space $X$ its “completion” $\overline{X}$. This will contain $X$ as a dense subspace — the closure $\mathrm{Cl}(X)$ in $\overline{X}$ is the whole of $\overline{X}$ — and will satisfy the universal property that if $Y$ is any other complete uniform space and $f:X\rightarrow Y$ is a uniformly continuous map, then there is a unique uniformly continuous $\bar{f}:\overline{X}\rightarrow Y$ extending $f$.
To construct such a completion, we’ll throw in the additional assumption that $X$ is second-countable so that we only have to consider Cauchy sequences. This isn’t strictly necessary, but it’s
convenient and gets the major ideas across. I’ll leave you to extend the construction to more general uniform spaces if you’re interested.
What we want to do is identify Cauchy sequences in $X$ — those which should converge to something in the completion — with their limit points in the completion. But more than one sequence might be
trying to converge to the same point, so we can’t just take all Cauchy sequences as points. So how do we pick out which Cauchy sequences should correspond to the same point? We’ll get at this by
defining what the uniform structure (and thus the topology) should be, and then see which points have the same neighborhoods.
Given an entourage $E$ of $X$ we can define an entourage $\overline{E}$ as the set of those pairs of sequences $(x_i,y_j)$ where there exists some $N$ so that for all $i\geq N$ and $j\geq N$ we have
$(x_i,y_j)\in E$. That is, the sequences which get eventually $E$-close to each other are considered $\overline{E}$-close.
Now two sequences will be equivalent if they are $\overline{E}$-close for all entourages $E$ of $X$. We can identify these sequences and define the points of $\overline{X}$ to be these equivalence
classes of Cauchy sequences. The entourages $\overline{E}$ descend to define entourages on $\overline{X}$, thus defining it as a uniform space. It contains $X$ as a uniform subspace if we identify $x
\in X$ with (the equivalence class of) the constant sequence $x, x, x, ...$. It’s straightforward to show that this inclusion map is uniformly continuous. We can also verify that the
second-countability of $X$ lifts up to $\overline{X}$.
Now it also turns out that $\overline{X}$ is complete. Let’s consider a sequence of Cauchy sequences $(x_k)_i$. This will be Cauchy if for all entourages $\overline{E}$ there is an $\bar{N}$ so that
if $i\geq\bar{N}$ and $j\geq\bar{N}$ the pair $((x_k)_i,(x_k)_j)$ is in $\overline{E}$. That is, there is an $N_{i,j}$ so that for $k\geq N_{i,j}$ and $l\geq N_{i,j}$ we have $((x_k)_i,(x_l)_j)\in E$. We can't take the limits in $X$ of the individual Cauchy sequences $(x_k)_i$ — the limits along $k$ — but we can take the limits along $i$! This will give us another Cauchy sequence, which will
then give a limit point in $\overline{X}$.
As for the universal property, consider a uniformly continuous map $f:X\rightarrow Y$ to a complete uniform space $Y$. Then every point $\bar{x}$ in $\overline{X}$ comes from a Cauchy sequence $x_i$
in $X$. Being uniformly continuous, $f$ will send this to a Cauchy sequence $f(x_i)$ in $Y$, which must then converge to some limit $\bar{f}(\bar{x})\in Y$ since $Y$ is complete. On the other hand,
if $x_i'$ is another representative of $\bar{x}$ then the uniform continuity of $f$ will force $\lim f(x_i)=\lim f(x_i')$, so $\bar{f}$ is well-defined. It is unique because there can be only one
continuous function on $\overline{X}$ which agrees with $f$ on the dense subspace $X$.
So what happens when we apply this construction to the rational numbers $\mathbb{Q}$ in an attempt to patch up all those holes and make all the Cauchy sequences converge? At long last we have the
real numbers $\mathbb{R}$! Or, at least, we have the underlying complete uniform space. What we don’t have is any of the field properties we’ll want for the real numbers, but we’re getting close to
what every freshman in calculus thinks they understand.
Now I want to toss out a few assumptions that, if they happen to hold for a topological space, will often simplify our work. There are a lot of these, and the ones that I’ll mention I’ll dole out in
small, related collections. Often we will impose one of these assumptions and then just work in the subcategory of $\mathbf{Top}$ of spaces satisfying them, so I’ll also say a few things about how
these subcategories behave. Often this restriction to “nice” spaces will end up breaking some “nice” properties about $\mathbf{Top}$, and Grothendieck tells us that it’s often better to have a nice
category with some bad objects than to have a bad category with only nice objects. Still, the restrictions can come in handy.
First I have to toss out the concept of a neighborhood base, which is for a neighborhood filter like a base for a topology. That is, a collection $\mathcal{B}(x)\subseteq\mathcal{N}(x)$ of
neighborhoods of a point $x$ is a base for the neighborhood filter $\mathcal{N}(x)$ if for every neighborhood $N\in\mathcal{N}(x)$ there is some neighborhood $B\in\mathcal{B}(x)$ with $B\subseteq N$.
Just like we saw for a base of a topology, we only need to check the definition of continuity at a point $x$ on a neighborhood base at $x$.
Now we’ll say that a topological space is “first-countable” if each neighborhood filter has a countable base. That is, the sets in $\mathcal{B}(x)$ can be put into one-to-one correspondence with some
subset of the natural numbers $\mathbb{N}$. We can take this collection of sets in the order given by the natural numbers: $B_i$. Then we can define $U_0=B_0$, $U_1=U_0\cap B_1$, and in general $U_n=U_{n-1}\cap B_n$. This collection $U_i$ will also be a countable base for the neighborhood filter, and it satisfies the extra property that $m\geq n$ implies that $U_m\subseteq U_n$. From this point
we will assume that our countable base is ordered like this.
Why does it simplify our lives to only have a countable neighborhood base at each point? One great fact is that a function $f:X\rightarrow Y$ from a first-countable space $X$ will be continuous at
$x$ if each neighborhood $V\in\mathcal{N}_Y(f(x))$ contains the image of some neighborhood $U\in\mathcal{N}_X(x)$. But $U$ must contain a set from our countable base, so we can just ask if there is
an $i\in\mathbb{N}$ with $f(B_i)\subseteq V$.
We also picked the $B_i$ to nest inside of each other. Why? Well we know that if $f$ isn’t continuous at $x$ then we can construct a net $x_\alpha\in X$ that converges to $x$ but whose image doesn’t
converge to $f(x)$. But if we examine our proof of this fact, we can look only at the base $B_i$ and construct a sequence that converges to $x$ and whose image fails to converge to $f(x)$. That is, a
function from a first-countable space is continuous if and only if $\lim f(x_i)=f(\lim x_i)$ for all sequences $x_i\in X$, and sequences are a lot more intuitive than general nets. When this happens
we say that a space is “sequential”, and so we have shown that every first-countable space is sequential.
Every subspace of a first-countable space is first-countable, as is every countable product. Thus the subcategory of $\mathbf{Top}$ consisting of first-countable spaces has all countable limits, or
is “countably complete”. Disjoint unions of first-countable spaces are also first-countable, so we still have coproducts, but quotients of first-countable spaces may only be sequential. On the other
hand, there are sequential spaces which are not first-countable whose subspaces are not even sequential, so we can’t just pass to the subcategory of sequential spaces to recover colimits.
A stronger condition than first-countability is second-countability. This says that not only does every neighborhood filter have a countable base, but that there is a countable base for the topology
as a whole. Clearly given any point $x$ we can take the sets in our base which contain $x$ and thus get a countable neighborhood base at that point, so any second-countable space is also
first-countable, and thus sequential.
Another nice thing about second-countable spaces is that they are “separable”. That is, in a second-countable space $X$ there will be a countable subset $S\subseteq X$ whose closure $\mathrm{Cl}(S)$ is all of $X$: given any point $x\in X$ there is a sequence $x_i\in S$ — we don't need nets because $X$ is sequential — converging to $x$. In some sense we can “approximate” points of $X$ by sequences of points in $S$, and $S$ itself has only countably many points.
The subcategory of all second-countable spaces is again countably complete, since subspaces and countable products of second-countable spaces are again second-countable. Again, we have coproducts,
but not coequalizers since a quotient of a second-countable space may not be second-countable. However, if the map $X\rightarrow X/\sim$ sends open sets in $X$ to open sets in the quotient, then the
quotient space is second-countable, so that’s not quite as bad as first-countability.
Second-countability (and sometimes first-countability) is a property that makes a number of constructions work out a lot more easily, and which doesn’t really break too much. It’s a very common
assumption since pretty much every space an algebraic topologist or a differential geometer will think of is second-countable. However, as is usually the case with such things, “most” spaces are not
second-countable. Still, it’s a common enough assumption that we will usually take it as read, making explicit those times when we don’t assume that a space is second-countable.
Dividing 3n couples into triplets
January 9th 2009, 08:02 AM
There are $3n$ married couples. In how many ways can they be arranged into triplets if no triplet contains a married couple?
I should solve this using the Principle of Inclusion-Exclusion, but don't really know where to start and how to solve this problem. I'd be very grateful for any help!
January 9th 2009, 02:00 PM
Let $a_n$ be the desired number and $b_n$ be the number of arrangements into triplets in which at least one triplet contains a married couple. Then $a_n+b_n$ is the total number of arrangements of our $6n$ persons into triplets. That number is: $C=\binom{6n}{3}\cdot\binom{6n-3}{3}\cdots\binom{3}{3}$
We have: $C=\frac{(6n)!}{3!\,(6n-3)!}\cdot\frac{(6n-3)!}{3!\,(6n-6)!}\cdots=\frac{(6n)!}{6^{2n}}$
Let's name the married couples $C_1,\dots,C_{3n}$; each couple is a set of 2 persons (and the couples are disjoint).
Consider now $P_k$, the set of all arrangements of the $3n$ couples into triplets such that $C_k$ is contained in one triplet. Then the size of the union of $P_1,\dots,P_{3n}$ is $b_n$. Compute that by using Inclusion-Exclusion and then remember that $a_n=C-b_n$
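For small $n$ the inclusion-exclusion answer can be checked by brute force. Here is a sketch (my own helper names, not from the thread) that enumerates unordered partitions into triples, labeling the spouses of couple $k$ as persons $2k$ and $2k+1$, and rejects any partition containing a couple.

```python
from itertools import combinations

def triple_partitions(people):
    """Yield every unordered partition of `people` into triples."""
    if not people:
        yield []
        return
    first, rest = people[0], people[1:]
    for pair in combinations(rest, 2):   # who joins the first person
        remaining = tuple(p for p in rest if p not in pair)
        for sub in triple_partitions(remaining):
            yield [(first,) + pair] + sub

def count_valid(n):
    """Partitions of 6n people into triples with no couple together;
    persons 2k and 2k+1 are spouses."""
    couples = [{2 * k, 2 * k + 1} for k in range(3 * n)]
    return sum(
        all(not c <= set(t) for c in couples for t in part)
        for part in triple_partitions(tuple(range(6 * n)))
    )
# For n = 1 (3 couples, 2 triples): 10 partitions in total, 4 valid,
# matching C/(2n)! = 10 minus the inclusion-exclusion count 6.
```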
January 9th 2009, 02:33 PM
Paul, your $C$ counts labeled partitions, whereas the groups of three are not labeled. Therefore divide by $(2n)!$.
Date: 2 May 1995 07:40:45 -0400
From: Milli Kutuphane
Subject: (none)
Date: 2 May 1995 11:13:06 -0400
From: Dr. Sydney
Subject: Re: Equations!!
There are a couple of ways to approach this problem... I'll give you a few
hints, and then you can see if you can take it from there, okay?
First, what if you squared both sides of the first equation? Write out
(x + y)^2 = 5^2
and then see if you can see something that will help you solve the problem.
A more conventional way of doing the problem would be to first solve for x
and then solve for y (or first solve for y and then solve for x). Rewrite
the first equation as y = 5 - x. Now Substitute this y in the second
equation. Now your second equation has only x's in it, and you should be
able to solve for x. Once you have x, figure out what y is, and then plug
those numbers into the third equation and see what you get!
I tend to think the first way of doing it is neater, just because it
involves less work and has sort of a trick involved, but either way will get
you the answer you want. If you have any questions about this, write back!
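The original question did not survive in this archive copy, so purely for illustration suppose the system were x + y = 5 and x^2 + y^2 = 13 (hypothetical numbers chosen so the roots come out whole). The substitution route then looks like this:

```python
# Hypothetical system for illustration: x + y = 5 and x^2 + y^2 = 13.
# Substituting y = 5 - x gives x^2 + (5 - x)^2 = 13, i.e. x^2 - 5x + 6 = 0.
import math

a, b, c = 1.0, -5.0, 6.0                 # coefficients of x^2 - 5x + 6 = 0
disc = b * b - 4 * a * c
roots = [(-b + s * math.sqrt(disc)) / (2 * a) for s in (1, -1)]
solutions = [(x, 5 - x) for x in roots]  # recover y from y = 5 - x
# solutions == [(3.0, 2.0), (2.0, 3.0)] -- both satisfy both equations
```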
--Sydney, "Dr. Math"
DSpace at IIT Bombay: Fast algorithms for binary cross-correlation
Please use this identifier to cite or link to this item: http://dspace.library.iitb.ac.in/jspui/handle/100/2657
Title: Fast algorithms for binary cross-correlation
Authors: MUKHERJI, S
Issue Date: 2005
Publisher: IEEE
Citation: IGARSS 2005: IEEE International Geoscience and Remote Sensing Symposium, Vols 1-8, Proceedings,340-343
Abstract: Cross-correlation is widely used to match images. Cross-correlation of windows where pixels have binary values is necessary when thresholded sign-of-laplacian images are matched. Nishihara proposed that the sign of the laplacian of an image be used as a characteristic (that is robust to illumination changes and noise) to match images. We have reduced the number of multiplications required in the computation of the laplacian by half by making use of the symmetry of the convolution masks. Thresholding the sign of the laplacian of an image results in a binary image and image matching can then be done by matching corresponding windows of the binary images using cross-correlation. Leberl proposed fast algorithms for computing binary cross-correlation. We propose a fast implementation of his best algorithm. We also propose, in this paper, a bit-based algorithm which makes use of the fact that the binary data in the windows can be represented by bits. The algorithm packs the bits into integer variables and then uses the logical operations to identify matching bits in the windows. This very packing of bits into integer variables (that enables us to use the logical operations, hence speeding up the process) renders the step of counting the number of matching pixels difficult. This step, then, is the bottleneck in the algorithm. We solve this problem of counting the number of matching pixels by an algebraic property. The bit-method is exceedingly simple and is most efficient when the number of bits is close to multiples of 32. Binary cross-correlation can also be computed and, hence, speeded up by the FFT. We conclude the paper by comparing the bit-based method, Leberl's algorithm and the FFT-based method for computing binary cross-correlation.
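The bit-packing idea can be sketched in a few lines. This is one reading of the abstract's description, not the paper's implementation: pack each binary window into an integer, XOR the two integers so that mismatched pixels become 1-bits, and recover the match count as the window size minus the popcount of the XOR.

```python
def pack_bits(window):
    """Pack a flat list of 0/1 pixel values into one integer."""
    value = 0
    for bit in window:
        value = (value << 1) | bit
    return value

def matching_pixels(a, b, n_bits):
    """Pixels on which two packed windows agree: mismatches are the
    1-bits of a XOR b, so matches are n_bits minus that popcount."""
    return n_bits - bin(a ^ b).count("1")

w1 = [1, 0, 1, 1, 0, 0, 1, 0]
w2 = [1, 0, 0, 1, 0, 1, 1, 0]
score = matching_pixels(pack_bits(w1), pack_bits(w2), len(w1))
# w1 and w2 differ at positions 2 and 5, so score is 6
```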
URI: http://dspace.library.iitb.ac.in/xmlui/handle/10054/16066
ISBN: 0-7803-9050-4
Appears in Collections: Proceedings papers
Rolling Window Regression (For Beginners)
28 Sep 2011
A beginners tool for analysing time varying coefficients within regression analysis.
%Author: Karan Puri (PhD Researcher in Finance, University of Bath) [Email: kp212@bath.ac.uk]
%--------------------------------------------------------------------------
%A simple example of rolling window regression.
%The purpose of this file is to show the user a starting point in understanding time varying coefficients.
%The example used here is based on the function 'regress', which performs OLS on the given parameters. However, the loop structure can be applied to other methods.
%I am currently working on making the file more efficient. Comments are highly welcome.
%--------------------------------------------------------------------------
clc; clear;
Y=rand(211,10); %replace with dependent variable values. NOTE: regress needs a vector response, so the outer loop below treats each column of Y in turn
X=rand(211,1); %replace with independent variable values; X can be a vector or a matrix - if the latter, update the call to regress as well!
steps=size(Y,1); %number of observations (rows of Y)
d=24; %discrete window length you wish to regress over - change as per requirements
p=1; %number of regressors - change as per requirements
adjustment=(d-1)/(d-p-1); %regressor adjustment for multiple regression
j=1; %increment level: a rolling regression with a fixed window of d points that advances j points at a time (the loop works for j>1 as well)
for k=1 %dependent variable column, so 1=col 1 and so on; increase as per requirements
    for i=1:j:steps-d
        [b,bint,r,rint,stats]=regress(Y(i:i+d-1,k),[ones(d,1),X(i:i+d-1,1)]);
        Alpha(i,k)=b(1); %rolling intercept
        Beta(i,k)=b(2); %rolling slope
        CI_Alpha(i,:,k)=bint(1,:); %95% confidence interval for the intercept
        CI_Beta(i,:,k)=bint(2,:); %95% confidence interval for the slope
        R_Squared(i,k)=stats(1); %stats provides other information as well, such as the F-stat and p-value, that can be added to this easily
        %Adj_R_Squared(i)=1-(1-R_Squared(i))*adjustment; %use this when the number of regressors is greater than 1
        resid(:,i,k)=r(:,1); %residuals (renamed from 'error', which shadows a MATLAB built-in); if k>1 the residuals are 3D but easily understood from the matrix structure
        %a coefficient is insignificant at the 5% level when its confidence interval straddles zero (endpoints of opposite sign)
        if CI_Alpha(i,1,k)*CI_Alpha(i,2,k)<0
            Alpha_Significant(i,k)=0;
        else
            Alpha_Significant(i,k)=1;
        end
        if CI_Beta(i,1,k)*CI_Beta(i,2,k)<0
            Beta_Significant(i,k)=0;
        else
            Beta_Significant(i,k)=1;
        end
    end
end
subplot(3,2,1),plot(Alpha); legend('Rolling Alpha')
subplot(3,2,2),plot(Beta); legend('Rolling Beta')
%you can add as many here by changing the structure of the subplot command. Alternatively use the plot command for individual figures.
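For readers outside MATLAB, the same loop translates directly to plain Python. The sketch below mirrors the logic above rather than reproducing the file (my naming, simple OLS with one regressor only):

```python
# Plain-Python sketch of rolling window regression: re-estimate the
# OLS intercept and slope over each position of a sliding window.
def ols_line(xs, ys):
    """Closed-form simple OLS of ys on xs: returns (intercept, slope)."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    beta = sxy / sxx
    return my - beta * mx, beta

def rolling_ols(xs, ys, window):
    """One (intercept, slope) pair per position of the sliding window."""
    return [ols_line(xs[i:i + window], ys[i:i + window])
            for i in range(len(ys) - window + 1)]
```

On exactly linear data every window recovers the same line, which makes the function easy to test before feeding it real series.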
%Author: Karan Puri (PhD Researcher in Finance University of Bath) [Email:kp212@bath.ac.uk] %-------------------------------------------------------------------------- %A simple exampe of rolling
window regression %The purpose of this file is to show the user a starting point in understading time varying coefficients %The example used here is based on on the function 'regress' which performs
OLS on the given parameters. However, the loop structure can be applied to other methods %I am currently working on making the file more efficient. Comments are highly welcome
%-------------------------------------------------------------------------- clc; clear; Y=rand(211,10); %replace with dependant variable values, NOTE: Y cannot be a matrix; I have worked around it by
creating a higher loop which goes through each col of Y and repeats the analysis X=rand(211,1); %replace with independant variable values,X can be a vector or matrix, if latter then update regress as
well! steps=size(Y,1); % this function gives you the size of the colume vector d=24;%this is discrete window length you wish to regress over - Change as per requirements p=1; %number of regressors -
Change as per requirements adjustment=(d-1)/(d-p-1); % regressor adjustment for multiple regression j=1; %increment level so in this case we are running a rolling regression with a fixed window
length of 24 data points that updates with 1 point increment. The loop will work for j>1 as well. for k=1; % this is for the dependant variable so 1=col 1 and so on..increase k as per requirements
for i=1:j:steps-d; [b bint r rint stats]=regress(Y(i:23+i,k),[ones(24,1),X(i:23+i,1)]); Alpha(i,k)=b(1); Beta(i,k)=b(2); R_Squared(i,k)=stats(1); %stats provides other information as well such as
F-stat;p-value etc that can be added to this easily %Adj_R_Squared(i)=1-(1-R_Squared(i))*adjustment; %Use this when the number of regressors is greater than 1 error(:,i,k)=r(:,1); %if k>1 then the
residulas will be 3D but are easily understood based on the matrix structure CI_Alpha(i,:,k)=bint(1,:); CI_Beta(i,:,k)=bint(2,:); if CI_Alpha(i,1,k)*CI_Alpha(i,2,k)<0; Significance_Alpha(i,k)=0; else
Significance_Alpha(i,k)=1; end if CI_Beta(i,1,k)*CI_Beta(i,2,k)<0; Significance_Beta(i,k)=0; else Significance_Beta(i,k)=1; end end end %Graphs subplot(3,2,1),plot(Alpha); legend('Rolling_Alpha')
subplot(3,2,2),plot(Beta); legend('Rolling_Beta') %you can add as many here by changing the structure of the subplot command. Alternativly use the plot command for individual figures. | {"url":"http://www.mathworks.com/matlabcentral/fileexchange/33051-rolling-window-regression-for-beginners/content/Rolling_Window_Regression.m","timestamp":"2014-04-17T10:15:15Z","content_type":null,"content_length":"21211","record_id":"<urn:uuid:6b68fa0b-83a1-4e3c-ad12-413014c629c6>","cc-path":"CC-MAIN-2014-15/segments/1397609527423.39/warc/CC-MAIN-20140416005207-00222-ip-10-147-4-33.ec2.internal.warc.gz"} |
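For readers without MATLAB, the rolling-window idea in the script above can be sketched in pure Python. This is an illustration only, not a port of `regress`: it fits a single regressor with closed-form OLS and skips confidence intervals; the window length mirrors the script's `d=24` and the 1-point increment mirrors `j=1`.

```python
# Rolling-window OLS sketch: refit alpha/beta over a sliding fixed window.
import random

def ols(x, y):
    """Return (alpha, beta) for y = alpha + beta*x by least squares."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    beta = sxy / sxx
    return my - beta * mx, beta

def rolling_ols(x, y, window=24, step=1):
    """Slide a fixed window over the series and refit at each position."""
    return [ols(x[i:i + window], y[i:i + window])
            for i in range(0, len(y) - window + 1, step)]

# Synthetic data with known coefficients (alpha=2.0, beta=0.5) plus noise.
random.seed(0)
x = [random.random() for _ in range(211)]
y = [2.0 + 0.5 * xi + random.gauss(0, 0.1) for xi in x]
coeffs = rolling_ols(x, y)   # one (alpha, beta) pair per window position
```

Each window's estimates hover around the true coefficients, with more scatter than a full-sample fit because each window uses only 24 points.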
How to Find Perpendicular Vectors in 3 Dimensions
Edited by Lucky7, Puddy, Justpeachy567, Denise and 3 others
In order to find perpendicular vectors in three dimensions, you need to know how to find the determinant of a matrix and how to find the cross product of two vectors.
1. 1
Allow a general form for three vectors. For the purposes of this article, a=<a[1], a[2], a[3]>, b=<b[1], b[2], b[3]> and c=<c[1], c[2], c[3]> where c is perpendicular to both a and b.
2. 2
Know the definition of the cross product. The cross product will give a vector c that is perpendicular to both a and b.
3. 3
Find the cross product of a and b.
The wikiHow How to Find the Determinant of a 3X3 Matrix may be useful.
4. 4
Put in component form. Finished!
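The steps above can be sketched numerically. This is a minimal illustration in Python, not part of the original article: it computes the cross product via the cofactor (determinant) expansion and then checks perpendicularity with dot products.

```python
# Cross product of two 3-D vectors, plus a dot-product perpendicularity check.
def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def dot(a, b):
    return sum(x*y for x, y in zip(a, b))

a = (1, 2, 3)
b = (4, 5, 6)
c = cross(a, b)
print(c)                    # (-3, 6, -3)
print(dot(a, c), dot(b, c)) # 0 0  -> c is perpendicular to both a and b
```

Scaling `c` by any nonzero constant `k` gives the other perpendicular (parallel) vectors the tips below mention.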
• If you are given one vector and one point, you will need to find a second vector using the components of the vector and the ordered triple of the point.
• You need either one vector and a point or two vectors to use this method.
• Remember that you can insert a "k" as a scalar to find all parallel vectors to c.
• If you are only given one vector, come up with another point in the plane so you can find a second vector b. | {"url":"http://www.wikihow.com/Find-Perpendicular-Vectors-in-3-Dimensions","timestamp":"2014-04-18T03:09:41Z","content_type":null,"content_length":"65638","record_id":"<urn:uuid:a28363f6-9fd0-470c-ac09-dd6357652754>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00231-ip-10-147-4-33.ec2.internal.warc.gz"} |
Simplify : e^(2lnx)+3e^ln(2x)
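The thread's answers were not captured here, but the standard simplification is e^(2 ln x) = (e^(ln x))² = x² and e^(ln 2x) = 2x, so the whole expression collapses to x² + 6x (for x > 0, where ln x is defined). A quick numeric check of that claim:

```python
# Verify e^(2 ln x) + 3 e^(ln 2x) == x^2 + 6x for several positive x.
import math

def original(x):
    return math.exp(2 * math.log(x)) + 3 * math.exp(math.log(2 * x))

def simplified(x):
    return x**2 + 6*x

for x in (0.5, 1.0, 3.0, 10.0):
    assert math.isclose(original(x), simplified(x))
print(simplified(2.0))   # 16.0
```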
| {"url":"http://openstudy.com/updates/4fb94557e4b05565342e447e","timestamp":"2014-04-20T18:59:08Z","content_type":null,"content_length":"60409","record_id":"<urn:uuid:267fbfd0-fa15-4235-8f87-1c7164a654e0>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00613-ip-10-147-4-33.ec2.internal.warc.gz"}
NumberFormat.format()

Hi there,
I stepped through the following code:
When reaching line 13 the format(double d) method returns the String "987.12346" instead of "987.12345". Why is the format(double d) method returning "987.12346" with the digit 6 instead of 5?
Thanks,
Max

Because the last (fifth) digit is rounded up. If the 5 were followed by digits 0 to 4, the result would be 987.12345.

Thanks Zandis, where is this rule stated? Does this rule apply to float types too?
So if the double had been 987.123459, the output would be 987.12346.
If the double had been 987.123451, the output would be 987.12345.
I found this document on Wikipedia about rounding floating point, but I don't understand it: http://en.wikipedia.org/wiki/Rounding#Types_of_roundings.

Besides the rounding problem, note that you should not expect a float or double to be able to hold numbers of arbitrary precision. A float has about 6 decimal digits of precision, and a double about 15 digits. Floats and doubles are stored as binary fractions and they can't represent some decimal numbers to exact precision; for example, the number 0.1 cannot be stored in a float or double exactly. Some calculations will inevitably lead to rounding errors. See the API documentation of DecimalFormat.

Thanks Jesper.

I think if you require strict floating-point numbers you would need to specify strictfp in either the method or class declaration. Wouldn't this give you the exact number? Or would you need BigDecimal involved? Don't mind me, I'm new.

Hi Rob,
I just posted the code to understand the rules of rounding double primitive types. You would probably use BigDecimal.

OK, so the rule is 0 to 4 rounds down and 5 to 9 rounds up. For example, formatting the double gives 987.12346. If I use a float primitive type instead, it gives 987.12347.
Why do we get 7 instead of 6? Where do I find these rules in the Java documentation?
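The original code listings did not survive in this capture, but the float result asked about can be reproduced with any IEEE-754 single-precision round-trip, shown here in Python via the `struct` module. A 32-bit float carries only about 7 significant decimal digits, so the nearest representable value to 987.12345 is roughly 987.123474, and formatting that to five places yields 987.12347. (For the rounding rule itself, `DecimalFormat` rounds half-even by default, per its API documentation.)

```python
# Round-trip 987.12345 through IEEE-754 single precision to see what value
# is actually stored before the formatter ever runs.
import struct

def as_float32(x):
    return struct.unpack('f', struct.pack('f', x))[0]

d = 987.12345
f = as_float32(d)
print(f'{d:.5f}')   # 987.12345 (a double keeps ~15-16 significant digits)
print(repr(f))      # ~987.123474: the nearest 32-bit float to 987.12345
print(f'{f:.5f}')   # 987.12347 (the formatter rounds the *stored* value)
```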
subject: NumberFormat.format() | {"url":"http://www.coderanch.com/t/471123/java-programmer-SCJP/certification/NumberFormat-format","timestamp":"2014-04-17T16:26:28Z","content_type":null,"content_length":"39039","record_id":"<urn:uuid:3c3e7bfd-ed37-4ab5-8fd7-fde50bd8d5b9>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00309-ip-10-147-4-33.ec2.internal.warc.gz"} |
A Real Inversion Formula for the Laplace Transform in a Sobolev Space
K. Amano, S. Saitoh and A. Syarif
All authors: Gunma Univ. Japan, Dept. Math., Fac. Eng., Kiryu, 376-8515, Japan
Abstract: For the real-valued Sobolev-Hilbert space on $[0,\infty)$ comprising absolutely continuous functions $F = F(t)$ normalized by $F(0) = 0$ and equipped with the inner product $(F_1,F_2) = \int_0^\infty (F_1(t)F_2(t) + F_1'(t)F_2'(t))\,dt$, we shall establish a real inversion formula for the Laplace transform.
Keywords: Laplace transform, real inversion formula, Sobolev space, reproducing kernel, Mellin transform, Szegö space
Classification (MSC2000): 44A10, 30C40
Full text of the article:
Electronic fulltext finalized on: 7 Aug 2001. This page was last modified: 9 Nov 2001.
© 2001 Heldermann Verlag
© 2001 ELibM for the EMIS Electronic Edition | {"url":"http://www.emis.de/journals/ZAA/1804/12.html","timestamp":"2014-04-18T15:46:40Z","content_type":null,"content_length":"3604","record_id":"<urn:uuid:6c1eab43-57a3-44be-a0fb-1b63ebf03839>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00002-ip-10-147-4-33.ec2.internal.warc.gz"} |
Math Snacks - All Snacks
View Snack
Evil monsters have been awakened by an ancient force, and it's up to you to stop them and untangle the mystery behind their return.
Addresses: Place values and decimals. Building different numbers on both sides of the decimal place. Creating facility with 10ths and 100ths places. Improving general addition, multiplication,
division, and subtraction skills, in the context of working with decimals. | {"url":"http://mathsnacks.com/snacks.php","timestamp":"2014-04-18T15:39:30Z","content_type":null,"content_length":"21032","record_id":"<urn:uuid:e3fa08f5-7800-4305-9682-7f7e3f0dcdcc>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00157-ip-10-147-4-33.ec2.internal.warc.gz"} |
What Is a Natural Number?
A natural number, which can also be called a counting number, is represented by the digits from 1, 2, 3 through to infinity. The number 0 is included if natural numbers are defined as non-negative
integers, but not if they are defined as only positive integers. In math, there must be an infinite number of natural number digits, since each natural number is defined in part by having a number
that follows it. These numbers are also whole numbers, not fractions or decimals, and can be used for counting or ordering.
The main distinction between a natural number and an integer is that natural numbers, with the exception of zero, are only positive. There is no number below zero, and a natural number can't be
followed by zero, such as is the case with -1,0. Essentially this defines natural numbers as anything zero or above that is whole and not fractional. Zero is generally considered to be the only
natural number that is not positive.
The concept of zero evolved long after civilizations began using counting numbers. Earliest records of counting numbers from 1-10 date to over 4000 years ago, when the use of specific written code to
signify place were used by the Babylonians. The Egyptians wrote hieroglyphs for each digit, but it wasn't until about 1000 BC that the concept of zero was created by the Mayan and Olmec civilizations.
Though the Olmec and Mayan groups show the first records of the use of zero, the concept of zero also developed in India, in the 7th century CE. It was the Indian use, rather than Mesoamerican use
that was adopted by civilizations like the Greeks.
There are many ways in which natural numbers can be used in math applications. They can limit problems by suggesting that the answer must be a natural number. They are also studied in specific
application in set theory, mathematics that evaluates sets of things. Number theory may evaluate natural numbers as part of the set of integers or independently to see if they behave in certain ways
or exhibit certain properties.
Perhaps one of the widest uses of natural numbers comes to us very "naturally." When we are young we learn to count from 0 onward. Even young children can easily begin to learn the difference between
one and two, or explain how old they are. This study continues as children begin school and learn to manipulate natural numbers, how to multiply, divide, add and subtract them. Only after the concept
of natural numbers is learned is the concept of integers introduced, and the possibility of negative numbers, which can confound some kids at first, is usually learned in fourth or fifth grade at | {"url":"http://www.wisegeek.com/what-is-a-natural-number.htm","timestamp":"2014-04-19T05:19:23Z","content_type":null,"content_length":"71299","record_id":"<urn:uuid:0e7b8029-1236-4528-a1f7-7b1a11edfed7>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00506-ip-10-147-4-33.ec2.internal.warc.gz"} |
The Tetrahedron Model in the Context of a Complete Conservation Cycle
Authors: John A. Gowan
"Noether's Theorem" states that in a multicomponent field such as the electromagnetic field (or the metric field of spacetime), symmetries are associated with conservation laws. In matter, light's
(broken) symmetries are conserved by charge and spin; in spacetime, light's symmetries are protected by inertial forces, and conserved (when broken) by gravitational forces. All forms of energy
originate as light; matter carries charges which are the symmetry/entropy debts of the light which created it (both concepts are required to fully integrate gravity - which has a double conservation
role - with the other forces). Charges produce forces which act to return the material system to its original symmetric state, repaying matter's symmetry/entropy debts. Repayment is exampled by any
spontaneous interaction producing net free energy, including: chemical reactions and matter-antimatter annihilation reactions; radioactivity, particle and proton decay; the nucleosynthetic pathway of
stars, and Hawking's "quantum radiance" of black holes. Identifying the broken symmetries of light associated with each of the 4 charges and forces of physics is the first step toward a conceptual
unification. The charges of matter are the symmetry debts of light.
Comments: 6 pages.
Download: PDF
Submission history
[v1] 12 Nov 2009
[v2] 25 Sep 2010
Unique-IP document downloads: 73 times
| {"url":"http://vixra.org/abs/0911.0033","timestamp":"2014-04-21T07:14:08Z","content_type":null,"content_length":"8120","record_id":"<urn:uuid:5bc4b421-4aad-4460-962d-d3892d236b0d>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00202-ip-10-147-4-33.ec2.internal.warc.gz"}
Pulse a LED
Anyone can blink a LED,
but you'll really impress your friends* if you can pulse a LED!
There are numerous ways you could go about this: use separate "for" loops to ramp the brightness up and down, retrieve values from a look-up table, etc. It turns out there's an easy and effective way
to do this using trigonometry. If you don't know what that means, or DO know and are scared, don't worry - it's very simple. I used this technique on the Elevator TARDIS to generate the pulsing blue
light on top of Doctor Who's time machine, but it's also great for warning lights, art projects, and just plain looking cool.
Instead of boring you with theory, let's jump right in with an example. Copy and paste this into the Arduino of your choice, and hook up a LED and resistor to pin 11: (This example requires a pin
capable of PWM, hence pin 11)
const int LED = 11;

void setup() {
  pinMode(LED, OUTPUT);
}

void loop() {
  float in, out;
  for (in = 0; in < 6.283; in = in + 0.001) {
    out = sin(in) * 127.5 + 127.5;
    analogWrite(LED, out);
  }
}
That's pretty neat! But that code's pretty weird... what's going on here?
Waves 101
If you ever had a trigonometry class, you probably learned (and may have forgotten) about the sine function. (If you haven't taken trigonometry yet, you'll be able to impress your teacher with this
Sine is a function (you put a number in, you get a number out) that generates sine waves. Sine waves appear in nature all the time (ocean waves, audio waves, radio waves, pendulum motion), and
because they vary smoothly over time (and are easy to make, as we'll see below), we'll borrow them to make our LED vary smoothly between on and off.
In the Arduino, the sine function is named sin(). It takes a floating-point value (known as a "float") as an input and returns a float as an output. Unlike integers (whole numbers), which you use for
most things, floating-point values give you the ability to use decimal points (such as 3.14159). The most basic equation for sin() is:
out = sin(in)
For reasons that we'll gloss over for the moment (see Wikipedia if you're curious), the input number is actually an angle. When you vary this angle from 0 to 360 degrees (all the way around a
circle), the wave will complete one full circuit. If you step through a bunch of angles and graph the output of sin(), you'll get a sine wave:
In this graph, the numbers you put into the function are along the bottom, and the number the function returns are on the left. For example, if you put in 90 degrees, you'll get back 1.0. By stepping
through the intermediate values between 0 and 360, you can generate a nice smooth wave.
Rule #1
Now's a good time to learn rule #1: sin() doesn't want the angle in degrees, it wants it in radians. Radians are angles based on pi. When you start from an angle of 0, halfway around a circle (180
degrees) is pi radians (3.141), and all the way around a circle (360 degrees) is 2 * pi radians (6.283). (If this sounds familiar, it is related to the circumference of a circle being 2 * pi * the
radius. More at Wikipedia). So, to put angles into sin(), use numbers between 0 and 2 * pi.
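A quick check of the radians rule, sketched here in Python (Arduino's sin() likewise takes radians): 90 degrees is pi/2 radians, and sin(pi/2) = 1.0, the peak of the wave in the graph above.

```python
# Degrees-to-radians sanity check for the sin() function.
import math

print(math.radians(90))            # 1.5707... which is pi/2
print(math.sin(math.radians(90)))  # 1.0, the peak of the wave
print(2 * math.pi)                 # 6.283..., one full wave, as in the loop
```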
This explains the weird numbers in the "for" loop in the above code:
for (in = 0; in < 6.283; in = in + 0.001)
What we're doing is stepping along the wave, with input values from 0 to 6.283 (2 * pi). We're stepping by 0.001 so we get a lot of intermediate points which makes a very smooth curve, and
incidentally slowing the computer down so the pulse isn't too fast. (More on modifying the speed in "Tips" below).
Home on the range
Next let's look at the output. The sine function will always return a floating-point number that ranges from -1 to +1. You can see this in the above graphs (look at the output numbers on the left
side). This is a slight problem for us, since we want to use the analogWrite() function to drive the LED at various brightnesses; analogWrite() takes a value from 0 to 255, not -1 to +1.
To solve this problem, we'll add a bit more math to our equation:
out = sin(in) * range + offset
This will multiply and shift the -1 to +1 output to anything we want!
Here's a shortcut when picking range and offset: if you're trying to get to a number that ranges from 0 to n, use 1/2 n for both the range and offset. For example, we want our wave to range from 0 to
255 for analogWrite(), so we'll use 127.5 for both our range and offset (127.5 = 255 / 2). Multiplying -1 to +1 by 127.5 gives us a range from -127.5 to +127.5. (Multiplication always happens before
addition when the computer is processing an equation.) Then when we add in the offset of 127.5, we get a range from 0 to 255! This explains the other line of code:
out = sin(in) * 127.5 + 127.5;
Which gives us our final input-to-output graph:
Now when you step the input from 0 to 6.283, the output will vary from 0 to 255. Pretty neat, huh! Just pipe this into analogWrite(), and your LED will pulse away.
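A quick sketch of that mapping in Python: sample eight points around the wave and confirm every output lands inside analogWrite()'s 0-255 range.

```python
# Scale sin()'s -1..+1 output into the 0..255 PWM range.
import math

def led_level(angle):
    return math.sin(angle) * 127.5 + 127.5

levels = [led_level(2 * math.pi * i / 8) for i in range(8)]
print([round(v, 1) for v in levels])
# [127.5, 217.7, 255.0, 217.7, 127.5, 37.3, 0.0, 37.3]
print(0.0 <= min(levels) <= max(levels) <= 255.0)   # True: always in range
```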
Varying the speed
The speed at which the LED pulses is controlled by the amount of time it takes you to step through a whole waveform (from 0 to 2 * pi).
We're stepping by 0.001 in the above code. Try changing that to 0.0001. The LED will pulse 10 times slower. Try changing it to 0.01. The LED will pulse 10 times faster. Often, you can find a pleasing
value by modifying this number a few times until it's where you want it.
If you want (or need) greater control, you can add a precise time delay and be more formal about the values in the loop. For example, let's say you want to pulse your LED once per second. First, we
want to decide how many samples we want to take of the waveform. Let's say 1000, which will make it easy to use a 1ms delay between samples. Next we'll divide 6.283 by 1000 to find our step size,
which is 0.00628. Putting this value into the loop and adding the 1ms delay, we get this:
void loop() {
  float in, out;
  for (in = 0; in < 6.283; in = in + 0.00628) {
    out = sin(in) * 127.5 + 127.5;
    analogWrite(LED, out);
    delay(1);
  }
}
This code pulses the LED exactly once per second by taking 1000 samples, and waiting 1ms between each one. You can easily change this to any speed you wish.
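The timing arithmetic above can be checked in a couple of lines (a Python sketch, not Arduino code):

```python
# One wave is 2*pi radians; 1000 samples gives a step of 2*pi/1000, and
# 1000 samples * 1 ms of delay = one pulse per second.
import math

samples = 1000
step = 2 * math.pi / samples
print(round(step, 5))        # 0.00628, the step size used in the loop above
print(samples * 1 / 1000)    # 1.0 second of delay per wave
```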
Starting from 0 brightness
If you start with an input angle of 0 as the above code does, you can see from the above graphs that the output will start with a value of 127.5, or 50% brightness. This may not look great if you
want your LED to start pulsing from 0% brightness. Here's a way to fix that:
First, instead of starting from 0, we'll start from the point on the wave where we'll get an output of 0. From the above graph, we can see that the output value of 0 (on the left-hand side),
corresponds with an input value of 4.712 (on the bottom edge). So, if we start our for loop from 4.712, we'll begin with an output of 0. Great - but if we just go from 4.712 to 6.283 and start over
again, we won't be getting the whole wave, just the last quarter, which may look cool (try it!), but isn't what we're looking for.
Here's a trick. The sine wave actually repeats forever even after 6.283. In theory, you could keep increasing the angle to infinity and keep getting a sine wave back (though in practice you'll
eventually run out of precision on the floating-point variable). So all we need to do is change the start and stop points of the "for" loop.
We've already got the new starting point, 4.712. To find our new end point, we'll add 2 * pi (the length of one wave) to 4.712. By adding one wavelength to any output value, we'll end up back there
again when the wave has finished, or in other words, a full wavelength starts and stops at the same output value. Our new stopping point is 10.995 (4.712 + 6.283). Replace the "for" loop in the above
example with the one below, and see what it does (you can hit the reset button on the Arduino whenever you want to restart the code and see that it really does start at zero brightness):
for (in = 4.712; in < 10.995; in = in + 0.001)
Now we're starting at 4.712, and tracing an entire wave from there (output = 0) to 10.995 (output = 0). The LED now nicely starts at 0% brightness!
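The start and end points of that trick are easy to verify numerically (Python sketch):

```python
# The wave segment from 4.712 (about 3*pi/2) to 10.995 (4.712 + 2*pi)
# begins and ends at an output of 0, with the peak one half-wave later.
import math

def led_level(angle):
    return math.sin(angle) * 127.5 + 127.5

print(round(led_level(4.712), 2))    # 0.0 at the new start point
print(round(led_level(7.854), 2))    # 255.0 at the peak, half a wave later
print(round(led_level(10.995), 2))   # 0.0 again one full wave later
```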
Doing other things while pulsing a LED
The above code shows how to pulse a LED in a "for" loop. But sometimes you might want to do other things such as wait for input, sound an alarm, etc. while pulsing the LED. One way to do this is to
break the for loop apart and to rely on the main loop() function to keep running your code:
void loop() {
  static float in = 4.712;
  float out;

  // do input, etc. here - as long as you don't pause, the LED will keep pulsing

  in = in + 0.001;
  if (in > 10.995)
    in = 4.712;
  out = sin(in) * 127.5 + 127.5;
  analogWrite(LED, out);
}
As long as your other code doesn't pause and prevent the LED code from running, the LED will keep pulsing. (This is a great way to simulate "multitasking" on a small computer).
One thing to notice is the "static" modifier in the above declaration of "in". If we didn't use this, "in" would be initialized to 4.712 each time loop() loops, which would keep the LED output
permanently at 0. By adding this modifier, you're telling the compiler to not keep initializing that variable every time we go through the loop.
Have fun!
We hope that this tutorial adds to your toolkit for making better looking Arduino programs. Let us know if you have any questions or comments, and we always love to hear about your projects!
-Your friends at SparkFun
* This statement strongly depends on the exact friends involved. But if your friends are anything like us, they'll definitely be impressed!
Thank you for this code it is very useful for the project I am working on. I am completely new to all this so you help has been invaluable. I know this may sound so basic for you guys but how can
I change the code so that I can run LED lights from a few pins at once?
If I understand your question correctly, you are trying to make more than 1 LED pulse.
The modification would be fairly simple. Let’s say you’re putting 3 LEDs on pins 9, 10 and 11. You could then modify the code as follows.. replace
const int LED = 11;
const int LED1 = 9;
const int LED2 = 10;
const int LED3 = 11;
Note that these are only for ease-of-use. Next you would replace…
Of course, this makes all 3 LEDs pulse exactly the same, and you might as well have put them in parallel with each other and used a transistor off just 1 pin to drive them (the Arduino
wouldn’t be happy sourcing the current for all 3 from 1 pin).
So you could make them pulse one after the other:
for (in = 0; in < 6.283; in = in + 0.001) {
  out = sin(in) * 127.5 + 127.5;
  analogWrite(LED1, out);
  out = sin(in + 2.094) * 127.5 + 127.5;
  analogWrite(LED2, out);
  out = sin(in + 4.188) * 127.5 + 127.5;
  analogWrite(LED3, out);
}
Those numbers being added are 1/3rd and 2/3rd along the wave. Check the ‘starting from 0 brightness’ in this article for how this works.
As things get more complex - which may be a ways off for you yet - you’ll probably want to start looking into the addressable LED products so that you only need to use a few pins with which
you can individually control hundreds of LEDs. For now, I hope this answers your question :)
You might also consider using cos (cosine) instead of sine to take care of starting at zero. This would eliminate the other math since cosine naturally begins at 0, and you can still use the same
range and offset values. Just a thought to make things a little easier.
Awesome, thanks guys!
Very useful for my project right now, I have been using arrays of sin waves, this will give me much more control.
What would be a good way to do this and not bog down the processor as much???
I know I could just run it once then store it into an array, but I would like change some of the values and timing using a potentiometer.
I would also like to automate it to drastically change the wave for different effects, but Im afraid that running multiple sin() rhythms in a loop would drastically slow the program down.
Does the CPU not handle floats as easily as ints?
It’s very true that floating point operations take considerably more processor time than integer operations. But processors are still pretty speedy, so don’t discount calculating sine
waves directly until you do some testing.
For example, I was curious about your question, so I just ran a test on an Arduino Uno (16MHz), and it could do 1000 sine calculations, including range and offset, and setting an
analog out, in 7.5ms. That’s 133,000 per second. If you’re updating all six PWM outputs with different sine waves, that’s still over 20,000 per second.
That might be good enough, but if it isn’t, note that the eye can be fooled by update rates well under 100 times per second. You may be able to do less-frequent sine calculations and
still have the output look good, with plenty of processor cycles left over for whatever else you need to do.
Optimization is great (and sometimes essential), but don’t forget that the better solution might be the “inefficient” one if it still works correctly and gets your project up and
running faster. =)
Very true, thank you. I didn't realize it blazed through that many a second; that's actually incredible.
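The array idea floated earlier in this thread (run the sine once, store it, then just index it) looks roughly like this, sketched in Python; on a microcontroller the table would typically live in flash as a byte array.

```python
# Precompute one wave into a 256-entry lookup table of 0..255 levels, then
# index it with a free-running phase counter instead of calling sin().
import math

TABLE_SIZE = 256
SINE_TABLE = [int(math.sin(2 * math.pi * i / TABLE_SIZE) * 127.5 + 127.5)
              for i in range(TABLE_SIZE)]

def level(phase):
    return SINE_TABLE[phase % TABLE_SIZE]   # wraps around automatically

print(level(0), level(64), level(128), level(192))   # 127 255 127 0
```

Trading a little memory for speed this way also makes it cheap to run several out-of-phase waves: just offset each LED's phase counter.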
Wonderful! Now.. Let me ask you this. Lets say I would like to simulate a rotating beacon as it would be seen from afar.. a Coastal light house for example. This would have the LED slowly ramp up
to peak brightness hold for a second, then fade back out, repeat, etc, etc
Actually, I think I got it pretty close to what I’m trying to do here by playing with the speed and delay
Neat project! You might try breaking it into two for() loops, one for the rising and one for the falling, something like this:
void loop() {
  float in, out;

  // rising
  for (in = 4.712; in < 7.854; in = in + 0.001) {
    out = sin(in) * 127.5 + 127.5;
    analogWrite(LED, out);
  }
  delay(1000); // pause at peak brightness

  // falling
  for (in = 1.570; in < 4.712; in = in + 0.001) {
    out = sin(in) * 127.5 + 127.5;
    analogWrite(LED, out);
  }
  delay(5000); // pause while dark
}
I pulled the numbers from the above graphs to find the waveform start and end values for the rising and falling sections of the wave. If it seems to work, you can play with the delay numbers
to get a timing you like.
Thanks! Got it tweaked a bit and looking pretty decent. The idea is to break it down for and put it on a ATtiny and it’s own little dedicated board for instillation in a piece of
Lighthouse shaped garden statuary, y'know.. over by the goldfish pond. the idea is to simulate in code.. the circuit described here: | {"url":"https://www.sparkfun.com/tutorials/329","timestamp":"2014-04-17T12:36:43Z","content_type":null,"content_length":"70719","record_id":"<urn:uuid:d77e2b59-aab4-4a64-8a7c-879f6129f7a8>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00085-ip-10-147-4-33.ec2.internal.warc.gz"} |
Big O
January 5th, 2013, 04:20 PM #1
I got this question; it is a theory question on Big O:

1. If n is 2, what is the final value of counter?
2. If n is 5, what is the final value of counter?
3. In light of this, derive an expression for the number of times the inner loop will run in terms of n.
4. State the Big O complexity.
int counter = 0;
for (int i = n; i <= 3*n; i++) {
    while (i > n) {
        i--;
        counter++;
    }
}

Can anyone tell me how to get the values for counter?
What happens when you compile and execute the code and print out the results?
Please edit your post and wrap your code with [code] tags to get highlighting and preserve formatting.
If you don't understand my answer, don't ignore it, ask a question.
It is not a question about the code; it is a theory question on Big O complexity. It asks you to figure out the value of counter when the values are put in for n.
Have you tried executing the code? what does it do?
program keeps running
Then it has infinite complexity.
If you suspect that it is intended to finish eventually then post the code you are using. And describe the program's behaviour: especially the output suggested by Norm in #2.
[Edit] I think a couple of values for n as suggested by the original question is insufficient to see anything much. Try getting the values of counter for a whole bunch of values of n - say from
zero to ten - and then graphing them (counter vs n).
The question is not about the code, and it is not meant to be run as a program; it is a theory question to figure out the value for counter when you put in the values, firstly n=2 and then n=5.
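As posted, the inner while decrements the very same i that the for loop increments, so for n >= 1 the loop never terminates, which matches the "program keeps running" observation above. A quick Python sketch; the counter_intended variant with a separate inner variable j is my guess at the assignment's intent, not the posted code:

```python
def counter_as_written(n, step_cap=1000):
    # faithful to the posted code: the while loop decrements the SAME i
    # the for loop increments, so i bounces between n and n+1 forever
    counter, i, steps = 0, n, 0
    while i <= 3 * n:
        while i > n:
            i -= 1
            counter += 1
            steps += 1
            if steps > step_cap:
                return None   # never terminates for n >= 1
        i += 1
        steps += 1
        if steps > step_cap:
            return None
    return counter

def counter_intended(n):
    # hypothetical fix: give the inner loop its own variable j
    counter = 0
    for i in range(n, 3 * n + 1):
        j = i
        while j > n:
            j -= 1
            counter += 1
    return counter

print(counter_as_written(2))   # None: infinite loop detected
print(counter_intended(2))     # 10
print(counter_intended(5))     # 55
```

With the separate variable, the inner loop runs 0 + 1 + ... + 2n = n(2n+1) times in total, so counter is 10 for n = 2 and 55 for n = 5, and the complexity is O(n^2).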
| {"url":"http://www.javaprogrammingforums.com/java-theory-questions/21553-big-o.html","timestamp":"2014-04-19T08:44:02Z","content_type":null,"content_length":"71495","record_id":"<urn:uuid:82efc5f6-9a3b-44f7-b7f6-70381fb321d3>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00146-ip-10-147-4-33.ec2.internal.warc.gz"}
Where to find Excel template to calculate interest compounded monthly?
Cannot seem to find this anywhere, and do not have time to create an Excel macro. Thanks in advance.
do you have a fixed monthly interest rate? . . . are there any principal additions or subtractions involved?
Fixed monthly interest rate of 1.0%.
I could probably use one of the many calculators/templates available online, but I cannot for the life of me figure out how to convert 1.0% monthly interest to an annual interest rate…
Microsoft has this solution. Otherwise, go here and click on the sentence that says “If algebra isn’t your cup of tea, use our template here. Fill in the yellow cells to see the final amount.”
There’s a link to an Excel spreadsheet with the formula.
XL: How to Calculate Compound Interest
Suppose you have $1,000.00 in an investment account. The account pays 8 percent interest and this interest is compounded annually. How much will the investment be worth at the end of three years?
There are two ways to find the amount. Both rest on the compound interest formula

FV = PV*(1+R)^N

where PV is present value, R is the interest rate, and N is the number of investment periods.
Use a Fixed Formula
The following formula, typed into a cell on a worksheet, returns the correct value of $1,259.71:

=1000*(1+0.08)^3
However, all of the information is ‘hard-coded’ into the formula and you must manually change the formula any time the figures change.
Create a Function Macro to Determine Compound Interest
Function Yearly_Rate(PV As Double, R As Double, N As Double) As Double
    Yearly_Rate = PV * (1 + R) ^ N  ' Performs the computation
End Function
12 months in a year, so multiply your monthly rate by 12 to get the nominal annual rate; on the flip side, if you have a nominal annual rate and want the monthly rate, divide by 12. Note that with monthly compounding the effective annual rate is a bit higher: (1 + monthly rate)^12 - 1, which for 1.0% per month works out to about 12.68% per year.
If you must have a template, there's one here: http://www.fido.gov.au/fido/fido.nsf/byheadline/Compound+interest+calculator?openDocument
The sample is defaulting to annual interest. You can make adjustments to the template by providing monthly interest rate instead of annual interest rate and indicate number of months instead of years
for the calculation period.
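The same arithmetic is easy to check outside a spreadsheet. A small Python sketch (the function names are just for illustration):

```python
def future_value(pv, rate_per_period, periods):
    """Compound interest: FV = PV * (1 + R)^N."""
    return pv * (1 + rate_per_period) ** periods

def effective_annual_rate(monthly_rate):
    """Annual rate equivalent to compounding a monthly rate 12 times."""
    return (1 + monthly_rate) ** 12 - 1

print(round(future_value(1000, 0.08, 3), 2))        # 1259.71
print(round(effective_annual_rate(0.01) * 100, 2))  # 12.68
```

The second call shows why 1.0% per month is not quite "12% per year": compounded monthly, it is about 12.68% per year.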
| {"url":"http://www.fluther.com/78278/where-to-find-excel-template-to-calculate-interest-compounded-monthly/","timestamp":"2014-04-18T02:12:35Z","content_type":null,"content_length":"40402","record_id":"<urn:uuid:01908f50-1d03-407c-822c-307bcc4a53a3>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00607-ip-10-147-4-33.ec2.internal.warc.gz"}
directional derivative
April 29th 2007, 09:32 AM #1
let f(x,y) = 4x^3y^2.
a)Find a unit vector in the direction in which f decreases most rapidly from the point (2,1).
b) what is the rate of change of f in the direction given in (a).
a function f(x,y) increases most rapidly at a point (x0,y0) in the direction of gradf(x0,y0) and decreases most rapidly in the direction of -gradf(x0,y0)
gradf(x,y) = <fx , fy> = <12(y^2)(x^2) , 8(x^3)y>
gradf(2,1) = <48 , 64>
so -gradf(2,1) = <-48, -64>
u = <-48, -64>/(sqrt(48^2 + 64^2)) = <-48/80 , -64/80> = <-3/5 , -4/5> ....unit vector in the direction of most rapid decrease
b) what is the rate of change of f in the direction given in (a).
the rate of change in the direction of u is given by
|-gradf(2,1)| = 80
Hi Jhevon,
I got -80 for my part B answer which is slightly different from your answer 80. I am checking why?
Thank you again.
i guess you used -|gradf(2,1)|
you are probably correct. i know that the rate of change is concerned with the magnitude of the gradient vector, so i said to myself, it couldn't be negative and put the minus sign inside the
absolute values. but now that i think about it, it is valid to have a negative rate of change, as it denotes the rate of change in the opposite direction
I agree with you.
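The computation above is easy to verify numerically; a small Python sketch with the partial derivatives written out by hand:

```python
import math

def grad_f(x, y):
    # f(x, y) = 4*x**3 * y**2, so fx = 12*x**2*y**2 and fy = 8*x**3*y
    return 12 * x**2 * y**2, 8 * x**3 * y

gx, gy = grad_f(2, 1)        # (48, 64)
mag = math.hypot(gx, gy)     # sqrt(48^2 + 64^2) = 80.0
u = (-gx / mag, -gy / mag)   # (-3/5, -4/5): direction of fastest decrease
rate = -mag                  # rate of change along u is -80

print(u, rate)
```

This confirms both the unit vector <-3/5, -4/5> and the negative rate of change -80 in that direction.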
| {"url":"http://mathhelpforum.com/calculus/14296-directional-derivative.html","timestamp":"2014-04-19T03:48:44Z","content_type":null,"content_length":"42531","record_id":"<urn:uuid:e6893b46-446c-4fd0-b7a9-f322474a7c9f>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/CC-MAIN-20140423032003-00118-ip-10-147-4-33.ec2.internal.warc.gz"}
vertex tranforms for skeletal animation [Archive] - OpenGL Discussion and Help Forums
04-13-2004, 08:27 AM
i have been successful in animating a mesh with skeletal animation (joint-based skeleton). the process was simple and straightforward:
1. produce the absolute matrix for each joint by concatenating the joint's parent's absolute matrix with the joint's relative matrix
2. for each vert in the mesh, translate it into joint-space for whatever joint it is associated with
3. each time through the loop, transform the vert by its joint's matrix.
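The three steps can be sketched like this in Python with numpy (the two-joint chain and its offsets are made up for illustration, not the poster's code):

```python
import numpy as np

def translation(t):
    # 4x4 homogeneous translation matrix
    m = np.eye(4)
    m[:3, 3] = t
    return m

# hypothetical chain: joint 1 is the child of joint 0
rel = [translation([0.0, 2.0, 0.0]),   # joint 0 relative to the origin
       translation([0.0, 1.0, 0.0])]   # joint 1 relative to joint 0

# step 1: absolute matrix = parent's absolute matrix @ own relative matrix
absm = [rel[0], rel[0] @ rel[1]]

# step 2: move a mesh vertex into the space of its joint (joint 1 here)
v_world = np.array([0.5, 3.0, 0.0, 1.0])       # homogeneous coordinates
v_joint = np.linalg.inv(absm[1]) @ v_world

# step 3: each frame, transform the joint-space vertex by the joint's
# current absolute matrix; with the unanimated pose we recover v_world
v_out = absm[1] @ v_joint
```

For skinning, the usual extension (linear blend skinning) repeats steps 2 and 3 once per influencing joint and sums the results weighted by the vertex's joint weights.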
this worked great. now i want to implement vertex skinning. my first thought is that putting the vert into joint-space won't work, because the vert could be associated with more than one joint. i
just can't get my head around how to create the correct matrices and how exactly to transform the verts. i know i need to take the joint's position into account when transforming the vert. i'm just
not sure exactly how to do it.
i'd appreciate any input you guru's may have</brownnosing> | {"url":"https://www.opengl.org/discussion_boards/archive/index.php/t-159343.html?s=24c82c39e50d3641abb26a79534ac854","timestamp":"2014-04-20T13:43:55Z","content_type":null,"content_length":"7927","record_id":"<urn:uuid:f04cb784-617e-4c32-9b28-9821a4304e65>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00275-ip-10-147-4-33.ec2.internal.warc.gz"} |
Posts from December 30, 2008 on The Unapologetic Mathematician
We’ve laid out the spaces of symmetric and antisymmetric tensors. We even showed that if $V$ has dimension $d$ and a basis $\{e_i\}$ we can set up bases for $S^n(V)$ and $A^n(V)$. Now let’s count how
many vectors are in these bases and determine the dimensions of these spaces.
The easy one will be the antisymmetric case. Every basic antisymmetric tensor is given by antisymmetrizing an $n$-tuple $e_{i_1}\otimes...\otimes e_{i_n}$ of basis vectors of $V$. We may as well
start out with this collection in order by their indices: $i_1\leq...\leq i_n$. But we also know that we can’t have any repeated vectors or else the whole thing collapses. So the basis for $A^n(V)$
consists of subsets of the basis for $V$. There are $d$ basis vectors overall, and we must pick $n$ of them. But we know how to count these. This is a number of combinations:

$\dim A^n(V)=\binom{d}{n}$
Now what about symmetric tensors? We can’t do quite the same thing, since now we can allow repetitions in our lists. Instead, what we’ll do is this: instead of just a list of basis vectors of $V$,
consider writing the indices out in a line and drawing dividers between different indices. For example, consider the basic tensor of $\left(\mathbb{F}^5\right)^{\otimes 4}$: $e_1\otimes e_3\otimes e_3
\otimes e_4$. First, it becomes the list of indices $(1,3,3,4)$.
Now we divide $1$ from $2$, $2$ from $3$, $3$ from $4$, and $4$ from $5$.
Since there are five choices of an index, there will always be four dividers. And we’ll always have four indices since we’re considering the fourth tensor power. That is, a basic symmetric tensor
corresponds to a choice of which of these eight slots to put the four dividers in. More generally if $V$ has dimension $d$ then a basic tensor in $S^n(V)$ has $n$ indices separated by $d-1$ dividers.
Then the dimension is again given by a number of combinations:

$\dim S^n(V)=\binom{n+d-1}{n}$
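Both counts are easy to verify by brute force, since the antisymmetric basis corresponds to strictly increasing index lists and the symmetric basis to weakly increasing ones. A quick Python check, with d = 5 and n = 4 as in the example above:

```python
from itertools import combinations, combinations_with_replacement
from math import comb

d, n = 5, 4   # dim V = 5, fourth tensor power, as in the example above

# antisymmetric basis: strictly increasing index lists (no repeats)
antisym = list(combinations(range(d), n))
# symmetric basis: weakly increasing index lists (repeats allowed)
sym = list(combinations_with_replacement(range(d), n))

print(len(antisym), comb(d, n))       # should agree: C(5, 4) = 5
print(len(sym), comb(n + d - 1, n))   # should agree: C(8, 4) = 70
```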
| {"url":"https://unapologetic.wordpress.com/2008/12/30/","timestamp":"2014-04-18T23:24:01Z","content_type":null,"content_length":"43283","record_id":"<urn:uuid:9c8d8206-fed7-4f6a-ac44-8db0886d6370>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00426-ip-10-147-4-33.ec2.internal.warc.gz"}
Physics Forums - View Single Post - The Refutation of Bohmian Mechanics
The strength of standard QM is that
-- it can safely ignore all irrelevant variables,
-- it can transform to arbitrary symplectic coordinate systems in phase space,
-- it can work on arbitrary Lie groups adapted to the problem,
without leaving the framework of the theory.
BM has no such option, hence is strictly inferior to the standard view.
Thus it is fully justified that the main stream ignores BM.
The presentation ''Not even wrong. Why does nobody like pilot-wave theory?'' at
diagnoses the disease but only has a historical view rather than an answer to that question. The real answer is that the need for BM is marginal compared to the need for QM. BM subtracts from QM too
much without giving anything relevant in return.
Though through lip service it encompasses all of QM, in practice it excludes many systems of practical interest because they are not formulated with enough pointer degrees of freedom (and often
cannot be
formulated with few enough pointer degrees of freedom to be tractable by BM means). Simulating quantum computing via BM would be a nightmare. | {"url":"http://www.physicsforums.com/showpost.php?p=3272358&postcount=100","timestamp":"2014-04-20T00:55:17Z","content_type":null,"content_length":"9827","record_id":"<urn:uuid:419bbb91-1995-415c-aeaa-0040903c6dee>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00159-ip-10-147-4-33.ec2.internal.warc.gz"} |
integration by parts
February 3rd 2007, 06:42 PM
hi, i have to integrate the following problem by parts. i started to do it, but then i got stuck halfway through. if anyone could show me how it is done, i would be grateful.
INT (square root of y) * (ln(y)) dy
February 3rd 2007, 07:09 PM
Let u = ln(y)
du = (1/y) dy
dv = sqrt(y) dy
v = (2*y^(3/2))/3
uv - int(v du)
[2*y^(3/2)*ln(y)]/3 - int((2*sqrt(y))/3) dy
[2*y^(3/2)*ln(y)]/3 - (4*y^(3/2))/9 + C
Which is equivalent to:
[2*y^(3/2)*(3*ln(y) - 2)]/9 + C
February 3rd 2007, 07:10 PM
You have,
$\int \sqrt{x}\ln x dx$
$u=\ln x$ and $v'=\sqrt{x}=x^{1/2}$
$u'=1/x$ and $v=(2/3)x^{3/2}$
$\frac{2}{3}\ln x\cdot x^{3/2}-\frac{2}{3}\int \frac{x^{3/2}}{x} dx$
$\frac{2}{3}\ln x \cdot x^{3/2}-\frac{2}{3} \int x^{1/2} dx$
You can do it from there.
February 3rd 2007, 07:13 PM
By parts.
INT.[u]dv = u*v -INT.[v]du -------(i)
INT.[sqrt(y) *ln(y)]dy -------------(1)
You can try two ways. I did, and I found this is the correct assumption.
Let u = ln(y)
Hence, du = (1/y)dy
And so, dv = sqrt(y) dy = (y^(1/2))dy
Hence, v = 1/(3/2) *y^(3/2) = (2/3)y^(3/2)
INT.[sqrt(y) *ln(y)]dy -------------(1)
Apply the assumptions, and (i),
= ln(y)*[(2/3)y^(3/2)] -INT.[(2/3)y^(3/2)][(1/y)dy]
= (2/3)(y^(3/2))ln(y) -(2/3)INT.[y^(1/2)]dy
= (2/3){y^(3/2) ln(y) -(2/3)(y^(3/2))} +C
= [(2/3)y^(3/2)]{ln(y) -2/3} +C
= (2/3)(y)(sqrt(y))[ln(y) -2/3] +C --------------answer.
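As a sanity check, differentiating the answer numerically should recover the integrand; a short Python sketch using a central difference:

```python
import math

def F(y):
    # candidate antiderivative: (2/3) * y^(3/2) * (ln(y) - 2/3)
    return (2.0 / 3.0) * y ** 1.5 * (math.log(y) - 2.0 / 3.0)

def f(y):
    # the integrand sqrt(y) * ln(y)
    return math.sqrt(y) * math.log(y)

h = 1e-6
for y in (0.5, 1.0, 2.0, 5.0):
    dF = (F(y + h) - F(y - h)) / (2 * h)   # numerical derivative of F
    assert abs(dF - f(y)) < 1e-5
print("antiderivative checks out")
```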
| {"url":"http://mathhelpforum.com/calculus/11094-integration-parts-print.html","timestamp":"2014-04-17T00:50:46Z","content_type":null,"content_length":"8610","record_id":"<urn:uuid:e7890066-cd95-478a-8e1b-b14220252488>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00020-ip-10-147-4-33.ec2.internal.warc.gz"}
On Bicycles, and.... what else is there?
As always happens in fall, the Low-Key Hillclimbs have taken up a large chunk of my time, leaving less time for blog posts. But it was worth it: the series was an unqualified success, with every
climb coming off well, the last few finding valuable seams in the weather. At Hamilton riders experienced moderate rain on the descent, and for some towards the end of the climb, but it was warm
enough that the long descent was still tolerable in the wet.
One aspect of the series worthy of revision, however, is the scoring system. Never before were artifacts in the median-time-normalized scoring more obvious. So for 2012, I am finally overcoming
inertia and changing from the median-based scoring we've essentially used since 2006.
I've described in preceding posts a scheme to calculate a reference "effective" time for each climb. With this scheme, instead of taking a median each week, we take a geometric mean where effective
times for riders (adjusted for male, female, hybrid-electric) are adjusted by the rider's "rating", which represents how the riders tend to do relative to the reference time. It's an iterative
calculation which is repeated until rider ratings and reference times are self-consistent, weighting means by heuristic weighting factors to give higher priority to riders who do more climbs, and
climbs with more riders, since these provide better statistics.
Here's a comparison of this approach with the median-based system used this year. I plot on the x-axis each rider's rating and on the y-axis that rider's score for each designated week. In this case
I used weeks 5 (Palomares Road) and 6 (Mix Canyon Road). These climbs are at opposite ends of a spectrum: Palomares is short with plenty of low-grades, while Mix Canyon is relatively long with
extended steep grades.
Here are the plots. I've omitted riders who did only one climb, since for them the score from that one climb is simply equal to their rating.
With the 2011 scoring scheme, you can clearly see that there is a lack of low-weighted riders at Mix Canyon relative to Palomares. As a result, moderately-rated riders in particular were given low scores, since
the median rider was, relative to the entire series, above average (rated over 100). In contrast at Palomares there were more low-weighted riders.
So then I replace the median time with a reference time, adjusting each rider's effective time by his/her rating. Now you can see the scores for Mix Canyon have been boosted:
But there's an issue here: the curve for Mix Canyon is steeper. So relatively slower riders score lower, while relatively faster riders score higher, than they did or would at Palomares. So I added a bit of complexity: I compare the spread in scores with the spread in rider ratings and I make sure that the ratio of these spreads is the same week after week. I call the adjustment factor the "slope factor". The result is here:
Now the curves line up nicely! Sure, each rider may score in a given week more or less than his rating, but the overall trend is very similar.
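The slope-factor adjustment can be sketched in a few lines of Python. Using the standard deviation as the measure of spread is my assumption, since the post doesn't say exactly how spread is computed, and the sample numbers are invented:

```python
import statistics

def apply_slope_factor(week_scores, week_ratings):
    """Rescale a week's scores about their mean so the ratio of score
    spread to rating spread is the same (here, 1) every week."""
    mean = statistics.fmean(week_scores)
    k = statistics.stdev(week_ratings) / statistics.stdev(week_scores)
    return [mean + k * (s - mean) for s in week_scores]

# hypothetical steep week: score spread is exaggerated relative to ratings
scores = [130.0, 110.0, 100.0, 80.0]
ratings = [115.0, 105.0, 100.0, 90.0]
adjusted = apply_slope_factor(scores, ratings)
print(adjusted)
```

The mean score is preserved, while the spread is pulled in to match the spread of the riders' ratings.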
I'll add in the other weeks. First, here's the 2011 formula:
You can see distinct curves for different weeks. Some weeks a rider of a given ability is more likely to score higher, some lower. This isn't what we're after, as we want riders to have the
opportunity to excel on any week.
So I add in the adjusted effective reference time, and then the slope factor, and here's what we get:
All of the weeks have generally overlapping curves. No more fear of turning out for a tough climb or a climb in difficult conditions, and having your score buried in obscurity because there's a
disproportionate number of fast riders. Or similarly, no more volunteering for a week only to have your volunteer score end up lower than riders you finish ahead of week after week, simply because
the median times were relatively long due to rider turn-out.
To me, this system looks like it's working nicely.
In two previous posts, I described an attempt to revise the scoring code for the Low-Key Hillclimbs. The scoring has placed a priority on simplicity. At first, we normalized times to the fastest man
and woman each week. But then everyone's score was exceptionally sensitive to the fastest rider. Then I switched to using the median time for normalization, first separately for men and women, then
combining them with an empirically determined conversion factor for women. But while the median is less sensitive to any single individual showing up, nevertheless the most challenging climbs tend to
attract fewer beginner riders, deflating the scores for these weeks. So the alternative approach is to iteratively rate each climb using a reference time based on the rating of riders who show up,
and assign each rider a rating based on the reference times (and their results) of the climbs they do.
A concern about this approach is that if I use all available information equally, I re-rate each rider and each climb after each week's results. This yields scores for previous weeks changing each
time results for a new week become available. This is in principle an undesirable feature. It could be avoided by forcibly freezing ratings for climbs each week, rating only new climbs using results
including those which precede it. You might call this approach causal scoring (nothing is affected by future events). However, before taking such a compromise, I wanted to test whether pragmatically
this is a problem. Obviously if relative scores from previous weeks are highly volatile then it makes tactical decisions difficult. For example, your previous scores might all be better than the
scores of another rider, then you mark him and out-sprint him in this week's climb, but afterwards you've fallen behind in the standings because of a re-evaluation of the reference times for previous
weeks. This is something of a pathological example, but it's in principle possible, so it needs to be tested using realistic data.
So I ran the scoring code for 2011 data, which exist for seven weeks of climbs. Two climbs, Kings Mountain Road and Mount Hamilton Road, have not yet occurred.
After week 1, Mountebello Road, there is only one climb on which to determine a reference time, so I revert back to using the median time. I could also use the geometric mean, which would be closer
to what I do when there's multiple times, but the median works well so I stick with that. The climb's field is then by definition average. There is no adjustment for the strength of the field.
Then I add data from week 2, Sierra Road. Now we see that some riders did both weeks. On one or the other week, using median times, these riders would score lower (it turns out they generally would
score lower on Sierra Road). I then assume that on the week they score lower the average rider was stronger, and adjust the climb reference time so riders who did both, on average, score the same
(using geometric means). Then each week other riders are scored relative to these repeat riders. This causes a re-evaluation of the reference time for the first week: it's no longer the median time.
Now I add week 3, and I can use all data from riders who did at least two of the climbs to rate the fields of riders relative to each other. These riders are used to re-establish reference times for
the first two weeks.
And the process continues until I have added data from all seven weeks.
Here's the test results. First I plot the ratio of each week's median time to its reference time. So if this number is more than 100%, that means that the reference time will be less than the median
time, and riders will score lower than with the 2011 algorithm. This adjustment is because, according to the algorithm, on average there were more slower riders and fewer faster riders on that climb.
The plot shows this ratio for each climb plotted at the end of each week. After one week, there is only one point: for week 1, Montebello, and of course since that climb uses the median time it is at
100%. After two weeks there are two points: one for Montebello and one for Sierra Road. That curve is orange. Here Montebello is around 102.5% and Sierra Road is around 97.5%, so there were stronger
riders at Sierra Road. Week 3 was Page Mill and that came out between the first two climbs. You can see how each week the reference time for each climb is adjusted, generally upward since as the
series has continued later climbs have attracted on average stronger riders, it seems. So each week scores from Montebello, week 1, would tend to drop a bit as the algorithm assigns relatively higher
scores to riders with a similar relative placing at later climbs.
This seems like it might be a problem, having things change over time. And this is true for someone who has a particular score goal, like 100 points. They may have 100.1 points after Montebello only
to find that has dropped to 95.1 points later in the series. But for the standings, all that matters is how points from one climb compare to points of another. For example, if after two weeks rider
A, who climbed only Montebello, scored 101 points there while rider B, who climbed only Sierra Road, scored 100 points there than rider A is ahead of rider B. After week 3 perhaps rider A's score for
week 1 drops to 99 points and rider B's score drops to 98 points, but that's okay as long as the gap between the two doesn't change much.
So next I plot the ratio of a climb's reference time to the reference time for Montebello. If the two change by the same proportion this ratio doesn't change, and a comparison of riders between the
two climbs won't change much. As hoped, this ratio doesn't change much as new results are added to the analysis.
The resolution of that plot is limited, so in the next plot I show how much each of these ratios changes after each week of results is added. Using the example before of riders A and B, for rider A to
keep his 1-point gap over rider B, we want this ratio to be stable to around 1%. From the plot you can see that none of the comparisons between any of the weeks and week 1 changes by more than 0.5%.
The biggest change is between week 2 and week 3, but still these change relative to each other by barely over 1%. So scores shifting relative to each other over the course of the series doesn't seem
to be a big problem. So the scoring system seems to work pretty well, at least if you don't mind scores drifting a bit together.
I seem to have debugged the new Low-Key Hillclimbs scoring algorithm, so tested it on 2011 data for the completed first six weeks.
Recall the method is to calculate a rider's rating (not used for overall rankings) based on the natural logarithm of the ratio of his time each week to that climb's reference time. Meanwhile the
climb's reference time is calculated as the average the natural logs of the times of the riders in the climb, subtracting their ratings. These "averages" are weighted by heuristic statistical weights
which assign more importance to riders who did more climbs, and to a lesser extent to climbs with more riders. Each of these factors depends on the others, so the solution is done self-consistently
until it converges, in this case until the sum of the squares of the reference times changes by less than 10^-6 seconds^2. This took 8 iterations in my test.
To avoid contaminating the results I check for annotations that a rider has experienced a malfunction or wrong turn during a climb, or that he was on a tandem, unicycle, or was running. These factors
would generally invalidate week-to-week comparisons for these results, so I don't use them. So a rider whose wheel pops out of true during a climb and is forced to make time-consuming adjustments
before continuing won't have his rating penalized by this, assuming that incident makes it into the results data.
All times here are adjusted for division (male, female, or hybrid-electric), as I've described.
week   median (s)   reference (s)     ratio   quality
   1      2149.50         2054.26  104.636%    0.0398
   2      1760.50         1762.51   99.886%    0.0096
   3      2614.00         2559.27  102.139%    0.0237
   4      2057.50         2119.96   97.054%   -0.0140
   5      1237.50         1246.35   99.290%    0.0310
   6      2191.00         2322.56   94.335%   -0.0254
Here the week "quality" is the average rating score of riders in the climb. You can see in general the ratio of the median to reference times tracks this quality score, although one is based on a
weighted geometric mean, and the other is a population median.
In general less steep more popular climbs (1, 3, 5) have rider "qualities" which are positive, meaning times were somewhat slower, while steeper, more challenging climbs (4 and 6, but to a lesser
extent 2) tended to have negative "qualities", indicating riders were generally faster. The exception here is week 2, Sierra Road. While this road is considered cruelly steep by the Tour of
California, apparently Low-Keyers have a higher standard of intimidation, and it still managed a positive quality score with a ratio quite close to 100%. It essentially fell between the super-steep
climbs and the more gradual climbs.
A side effect of this, even if I don't use this analysis for the overall scores (this year's score algorithm can't be changed mid-stream, obviously, although it's tempting, I admit...), is I get to
add a new ranking to the overall result: rider "rating". This is a bit like the ratings that are sometimes published in papers for rating professional teams, not a statement of accomplishment, but a
guide to betters on who is likely to beat whom. Don't take these results to Vegas, though, as they're biased towards riders who did steeper climbs, which produce a greater spread in scores. I could
compensate for this with an additional rating for climbs (how spread the scores were), but I'll leave it as it is. I like "rewarding" riders for tackling the steep stuff, even if it's only in such an
indirect fashion.
For the test, I posted the overall results with the official algorithm and with this test scoring algorithm so they can be compared. One thing to note is only this single page is available with the
test algorithm, any linked results will be the official score:
Riders who did both Mix (week 6) and Bohlman (week 4) really benefit from this new approach. Coincidentally that includes me and my "team" for the series (Team Low-Key, even though my racing team is
Team Roaring Mouse, which I strongly support).
The whole key to comparing scores from week-to-week is to come up with a set of reference times for each week. Then the rider's score is 100 × this reference time / the rider's time, where times have
first been adjusted if the rider is a woman or a hybrid-electric rider. Presently this reference time is the time of the median rider finishing the climb that week. But if riders who would normally
finish in more than the median time don't show up one week, for example Mix Canyon Road, everyone there gets a lower than normal score. That's not fair. So instead we can do an iterative calculation.
Iterative calculations are nice because you can simplify a complicated problem by converting it into a series of simpler problem. The solution of each depends on the solution of every other. But if
you solve them in series, then solve them again, then again, eventually you approach the self-consistent solution which you would have gotten with a single solution of the full, unsimplified problem,
except that problem might be too difficult to solve directly. So here's how we proceed:
1. For each climb, there is a reference time, similar to the median time now. The reference time is the average of the adjusted times for riders doing the climb.
2. For each rider, there is a time adjustment factor. The time adjustment factor is the average of the ratio of the rider's time for a week to that week's reference time. So if a rider always does a
climb 10% over that climb's reference time, that rider's adjustment factor will be 1.1.
We have a problem here. The climb reference times depend on rider adjustment factors, and rider adjustment factors depend on climb reference times. We need to know the answer to get the answer. But this is where an iterative solution comes in. We begin by assuming each rider's adjustment factor is 1. Then we calculate the reference times for the climbs. Then we assume these reference times are correct and we calculate the rider adjustment factors. Then we assume these are correct and we recalculate the climb reference times. Repeat this process enough times and we get the results we're after.

Once we have a reference time for each climb, we plug these into the present scoring algorithm where we now use the median time, and we're done. The rest is the same.

One minor tweak: not everyone's time should contribute equally to a climb's reference time, and not every climb should contribute equally to a rider's adjustment factor. This is in the realm of weighted statistics. Riders doing more climbs get a higher weighting factor, and climbs with more riders get a higher weighting factor. The climb weighting factor depends on the sums of the weighting factors of riders doing the climb, and the rider adjustment factor depends on the sum of the weights of the climbs the rider did. So this is another part of the iterative solution. But this tweak is unlikely to make a significant difference. The basic idea is as I described it.

There's an alternative which was suggested by BikeTelemetry in comments on my last post on this topic. That would freeze scores for each week rather than re-evaluating them based on global ratings. That I haven't had time to test, but the code for the algorithm described here is basically done; just ironing out a few bugs.
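The iterative calculation can be sketched in Python. The rider names and times below are invented, the heuristic weighting factors are left out for brevity, and means of log-times stand in for the geometric means:

```python
import math

# hypothetical results: rider -> {week: time in seconds}
results = {
    "ann":  {1: 2100, 2: 1700, 3: 2500},
    "ben":  {1: 2400, 2: 1950},
    "cleo": {2: 1600, 3: 2350},
    "dev":  {1: 2250, 3: 2700},
}
weeks = sorted({w for t in results.values() for w in t})

log_ref = {w: 0.0 for w in weeks}    # log of each week's reference time
rating = {r: 0.0 for r in results}   # log of each rider's adjustment factor

for _ in range(1000):
    prev = dict(log_ref)
    # reference: mean of (log time - rider rating) over riders doing the week
    for w in weeks:
        vals = [math.log(times[w]) - rating[r]
                for r, times in results.items() if w in times]
        log_ref[w] = sum(vals) / len(vals)
    # rating: mean of (log time - week reference) over the rider's weeks
    for r, times in results.items():
        rating[r] = sum(math.log(t) - log_ref[w]
                        for w, t in times.items()) / len(times)
    if max(abs(log_ref[w] - prev[w]) for w in weeks) < 1e-12:
        break   # reference times and ratings are now self-consistent

ref_time = {w: math.exp(log_ref[w]) for w in weeks}
# score: 100 * reference time / rider time, as in the present algorithm
scores = {r: {w: 100 * ref_time[w] / t for w, t in times.items()}
          for r, times in results.items()}
print(ref_time)
print(scores["ben"])
```

Ben is slower than Ann every week he rides, so the iteration gives him a higher (slower) rating, and each week's reference time ends up consistent with the others rather than hostage to who happened to show up.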
The San Francisco mayor's election was yesterday, and it looks like Ed Lee won it with around 30% of eligible voters voting.
Quoting the San Francisco Examiner, referencing critics:
"...the career bureaucrat would be nothing more than a shill for powerful City Hall insiders. Lee also was dogged by accusations of voter manipulation by an independent expenditure committee that
supported the mayor and other backers laundering campaign donations, which prompted a District Attorney’s Office investigation..."
He attracted a huge number of donations, driving up the amount the city needed to pay in public financing. His donations were largely from out-of-city donors, many laundered through low-income
workers to circumvent the $500 donation limit. Then there were the nominally unaffiliated supporters, for example those who produced and distributed free copies of the book of his life story.
Meanwhile, he violated the law by refusing to disclose details of public contacts within the required time limit. This was so obviously fee-for-service it couldn't have been any clearer.
There was massive fraud in the early voting. Housing managers in Chinatown were reportedly collecting the absentee ballots of tenants and filling them in en masse. Shady "voting booths" were set up
where voters brought their absentee ballots for "assistance". "Helpers" were filmed filling ballots out for people, and in other cases stencils were handed out so voters could fill in Lee's slot
without the risk of voting for any other candidate. No doubt these absentee ballots were procured with further "assistance"
Lee sat out most of the series of mayoral debates, instead choosing to go hang out at bars and get close to his people. His campaign slogan, "Gets it Done", couldn't have embraced mediocrity any
more, mediocrity where the city desperately needs vision and leadership to get it back on track to fiscal responsibility. Indeed, even the deal he claims to have brokered himself to get high-income
city employees to contribute to their pensions and health care, Proposition C, he himself is gutting by promising these same workers (SFFD and SFPD) a compensating pay raise. Not only does that cover
the pension contribution, but actively increases the city's unfunded liability further by increasing pension payouts, which are proportional to salary. Oh, pity the poor police officers, most of whom
make well over $100k/year, and can retire on that with full health benefits at an age when private-sector workers are typically mid-way through their careers, wondering how they're going to be able to make ends meet.
Lee supports Proposition B, which pays for road maintenance with debt. Why is debt needed? Because the budget for road work has been diverted to other things, like the massive city worker salary &
benefits budget. Lee, "who gets it done", was the Public Works director from 2000 to 2005, the one most directly responsible for road infrastructure. He's part of the reason the roads are in such
sorry condition, and why Proposition B is claimed to be needed.
Lee is a shill, a puppet, a tool of the money machine. It was clear as can be he had to go. Nobody I've interacted with admits to supporting him, and I even walked door-to-door in Mission Terrace as
a David Chiu volunteer. Yet how can someone who is so clearly unpopular, so clearly corrupt, so clearly a tool of outside money, be elected?
Well, it doesn't help that the ten-thousand-member San Francisco Bike Coalition campaigned for him. SFBike endorses candidates based on a member survey on which members were explicitly asked to rank
candidates based on responses to a lame curve-ball set of questions, and Lee finished 3rd. Yet surely the stories of corruption and fraud which followed that survey could have tempered the zeal with
which SFBike daily bombarded the internet with messages encouraging a position for Lee on the 3-slot ranked-choice ballot? I even asked an officer of the coalition: had Lee made a racist comment,
would you continue to support him? He admitted probably not. Yet naked corruption and fraud is okay?
And I'm also sure he got support from many of the 27 thousand city employees, 3% of the San Francisco population. The number of city employees has reportedly increased under Lee's brief tenure as
mayor, and he's taken no steps toward making the efficiency improvements which are so desperately needed (efficiency means fewer workers, people working harder, and people learning new skills; none
of these are popular with existing workers).
But the real culprit here is the lethargy of the voters. Turn-out was weak. It was reportedly very low even among protesters at Occupy San Francisco. Perhaps many of these protesters were from
out-of-the-city, I don't know. But if you're going to smugly call for the downfall of the Man in the streets and then not exercise your legal obligation to vote for representative government, you're
destroying the integrity of your own message. The reason the banks are able to get away with so much is voters aren't providing adequate oversight of their elected representatives. And there's no
better example of that than yesterday's election. This is especially true because several candidates made an explicit point to support and defend the rights of free speech and public assembly of
these same protesters.
San Franciscans will continue to whine and complain about how the city is in such a sorry state, how budgets are unsustainable, how public transit is little more than a fiscal tar pit, how there is
no vision for how the city is to move forward. Yet many of those same people felt they had more important things to do with their time than participate in yesterday's election. Those people are the
definition of passive-aggressive, an affliction which is in epidemic proportions. These people deserve mediocrity, corruption, and insider deals. They deserve Ed Lee.
If you live in San Francisco and didn't vote, either yesterday or absentee, I extend my finger in your general direction. You, my friend, are what is wrong with this city.
In nature, if you can't do what it takes to survive, you die, your genes are eliminated from the pool, and someone else takes your place.
Maybe what takes your place is better, maybe not. But if not, it will also die, be eliminated, until eventually something able to do what it takes comes along and so, by this process, things
generally improve over time.
This is my theory of voting. Rule #1: if the incumbent isn't doing a good job, vote them out.
So often in elections I hear about the "lesser of two evils". "I don't like the incumbent XXX, but he's better than YYY." Sorry: the rule of natural selection says I vote XXX out of office anyway.
Maybe YYY is even worse. But then I vote YYY out at the first opportunity.
Eventually corrupt and unqualified candidates will stop running. Eventually you get someone good in office.
But if you vote "lesser of two evils", things will never change. You'll always have candidates who suck, just slightly less than the competition. We'll remain mired in the corrupt stagnation which
we've had at all levels of government for as long as I remember.
So my first rule is if I don't like the way things are going, the incumbent doesn't get a vote. I pick from the alternatives.
Low-Key scoring has gone through various phases.
In the 1990's, we scored based on the fastest rider. The fastest man and the fastest woman each week would score 100. Those slower would score based on the percentage of the fastest rider's score. This
was super-simple, but when an exceptionally fast rider would show up, everyone else would score lower than normal. Additionally, this was frustrating for the fastest rider (typically Tracy Colwell
among the men), since no matter how hard he or she pushed, the result would be the same 100 points.
So with Low-Key 2.0 in 2006, we switched to using the median rider (again treating men and women separately). The median is much less sensitive to whether a particular individual shows up or not, so
scores were now more stable. However, there was still an issue with women, and most especially with our hybrid-electric division, since smaller turnouts in these again made the score sensitive to who
showed up.
So in 2010 I updated the system so that all riders were scored using a single median time, except instead of actual time, I used an "effective men's time", using our history of Low-Key data to
generate conversion factors from women's and hybrid-electric riders' times to men's times. Mixed tandems were scored by averaging a men's and a women's effective time.
This worked even better. Now if just a few women show, it's possible for them to all score over 100 points, as happened at Mix Canyon Road this past Saturday.
But the issue with Mix Canyon Road was that because the climb is so challenging, and for many it was a longer than normal drive to reach, the turn-out among more endurance-oriented riders was relatively
poor. The average rider at Mix would have scored over 100 points during, for example, Montebello (data here). It seems almost everyone who did both climbs had "a bad day" at Mix. That is far from the truth.
There is another scoring scheme I've been contemplating for many years. It's one which doesn't use a median time for each week, but rather compares the times of riders who did multiple weeks to come
up with a relative time ratio for each climb. So if, for example, five riders did both Montebello and Mix, and if each one of them took exactly 10% longer to climb Mix, then a rider on Mix should
score the same as a different rider on Montebello as long as the Mix rider's time was exactly 10% longer than the Montebello rider's time, once again after adjusting for whether the rider is a male,
female, or hybrid-electric.
So why haven't I made this switch yet? It sounds good, right?
Well, for one it's more work for me. I'd need to code it. But that's not too bad because I know exactly what I need to do to make it work.
Another is it's harder to explain. It involves iterative solution, for example. I like things which are easy to explain. Median time is simple.
But another is it would mean scores for any week wouldn't be final until the entire series was complete. So a rider might celebrate scoring 100.01 points on Montebello, only to see that score drop to
below 100 points later in the series. Why? Because the time conversion factor for a given climb would depend on how all riders did on that climb versus other climbs. And it's not as simple as I
described: for example if rider A does climbs 1 and 2, and rider B does climbs 2 and 3, then that gives me valuable information about how climb 1 compares to climb 3. In effect I need to use every
such connection to determine the conversion factor between these climbs.
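The cross-climb comparison described above can be sketched as an alternating-update loop. This is my own illustrative sketch of the idea, not Low-Key's actual code: the function name and sample times are hypothetical, and it ignores the men's/women's/hybrid-electric conversion step.

```python
from statistics import geometric_mean

def climb_factors(times, iters=50):
    """Estimate a relative difficulty factor per climb.

    times: dict mapping (rider, climb) -> time in seconds.
    Models each time as (rider strength) x (climb factor) and alternates
    geometric-mean updates; riders who did several climbs link those climbs
    together, so two climbs with no riders in common are still comparable
    through chains of shared riders.
    """
    riders = {r for r, _ in times}
    climbs = {c for _, c in times}
    rider_f = {r: 1.0 for r in riders}
    climb_f = {c: 1.0 for c in climbs}
    for _ in range(iters):
        # Each climb's factor: geometric mean of its times, normalized by rider strength.
        for c in climbs:
            climb_f[c] = geometric_mean(
                t / rider_f[r] for (r, c2), t in times.items() if c2 == c)
        # Each rider's strength: geometric mean of their times, normalized by climb factor.
        for r in riders:
            rider_f[r] = geometric_mean(
                t / climb_f[c] for (r2, c), t in times.items() if r2 == r)
    return climb_f

# Two riders, each exactly 10% slower on Mix than on Montebello:
times = {("a", "montebello"): 3600.0, ("a", "mix"): 3960.0,
         ("b", "montebello"): 4000.0, ("b", "mix"): 4400.0}
cf = climb_factors(times)
print(round(cf["mix"] / cf["montebello"], 3))   # -> 1.1
```

Note that only the ratio between climb factors is meaningful; the overall scale splits arbitrarily between rider strengths and climb factors, which is one reason the iterative version is harder to explain than a simple median.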
But while scores might change for a climb, the ranking between riders during the climb would not. That's the most important thing. Finish faster than someone and you get a higher score. The
conversion factor between men and women, for example, would stay the same. That's based on close to 10 years of data, so no need to continue to tweak that further.
I'll need to get to work on this and see if I can make progress. I'll describe my proposed algorithm next post.
approaching the Low-Key finish (Cara Coburn photo)
Yesterday I rode the Diabolical Duo at Mount Vaca.
First: Mix Canyon Road. Coordinator Barry Burr did an excellent job organizing this one, definitely the "road trip" ride for many in the 2011 Low-Key Hillclimb schedule. For my car pool it wasn't a
big deal: one hour from San Francisco, even stopping for gas. Rides like Alba Road, Bonny Doon, Henry Coe, Jamison Creek Road, and Hicks Road we've done in the past are all substantially further,
with plenty more of comparable distance. But most of our riders live closer to San Jose than to San Francisco, and for them the trip was further.
But even from San Jose this trip was worth it. A big part of it was our Strava event: The Diabolical Duo. The Low-Key Hillclimb covered just the first part of this: to complete the Duo, riders needed
to also climb nearby Gates Canyon Road.
Inspiration for the Duo event came from The Toughest Ascent Blog. I won't even try to describe these roads: the blog already does an excellent job. All I'll say is they are seriously tough climbs.
Take Mix: after a mellow start, it hits riders with a few steep pitches. Wow, that was tough, I thought, and then I hit the chalk "Hammer Gel Time" coordinator Barry had written on the road, marking
the beginning of the "tough part". Tough? What had we just gone through?!?!
But what followed was truly impressive. When I hit the tight switchback from the Toughest Ascent Blog, I had to laugh. This was just insane!
But I got through that in my 34/27. Coming out of these switchbacks, you might expect some relief, but no luck: the road continues steeply until a following switchback. In the climb yesterday we had
a photographer in that corner. Cool, I thought, the end of the steepness.
However, after turning that corner my hopes were crushed like a walnut shell under a cycling shoe. Not only was the steep stuff not over, but the road actually got even steeper still beyond this
corner. This final part of the climb has been suppressed from lactic acid poisoning of the short-term memory centers of my brain. It wasn't even the raw grade numbers, but all that we had been
through leading into those numbers.
Cara at the happy face marking the approaching finish (Lisa Penzel photo)
But like all climbs, this one finally ended. After the finish line for the Low-Key, Barry had set up a second line further up Blue Ridge Road, a gravel road which Mix Canyon intersects. This road was
a bit wash-boarded but ridable even on my 20mm, 182 gram sew-up tires. It took only a few minutes to hit this second line, the true summit of this climb. But I'd been wasted from my effort leading to
the top of Mix, so my pace here was far from impressive, especially given the distraction of the rough surface.
Returning to the Low-Key finish line at the top of Mix, I was a wasted shell of a human. I was still warm from climbing, but eventually the chilly 12C air penetrated, and I started to shiver.
Fortunately I had warm clothes in the volunteer car, so was able to rectify this. I then got some fruit from the event refreshments. I was done.
Done, except that Ammon (with whom Cara and I had carpooled that morning) was waiting for me so we could do the second climb in the Duo, Gates Canyon Road. I couldn't disappoint him, I figured, I had
to try.
So eventually we were off back down Mix. Ammon descends much faster than I can, especially with my flaky carbon Edge rims with their unsmooth braking surface which makes them prone to skidding, and
my narrow tires pumped to 140 psi hardly inspired confidence. But eventually I made it down where Ammon and other riders were grouped. We set off together towards Gates Canyon, a few miles away. It
was a nice ride along well-named Pleasants Valley Road.
Gates is interesting. It starts out almost flat, barely ascending from the valley. Then soon after the proper climbing begins, still not steep, one hits the sign indicating the unpaved road. This is
steep enough that grip is an issue. Ammon had zoomed ahead, and I was riding at that point with Low-Key regular James Porter, and while he was able to ride this, my tire skidded on the gravel. I
could have let out some air pressure, but didn't want to risk damaging my rims if the pressure got too low, so just walked here. It was slow going. James was by now gone.
In sections, the road became just plain dirt. This had not been indicated on the Toughest Ascent Blog, so represented a recent change. I worried about clogging my Speedplay cleats, but they were fine.
But the dirt didn't last too long, and beyond it the pavement was very good, surprisingly good for what seemed to be a road to nowhere. But as soon as the pavement re-appeared, it bent into a
disturbing, highly vertical angle. This road was even steeper than Mix had been! I was barely turning my 34/27, grinding away up the ferocious grade.
Many riders reported this had been tougher than Mix. Honestly I hadn't felt that way. Sure, it was steep, but I wasn't in nearly the same hurry, and I could focus on just getting up the thing rather
than spending every second getting the most out of my legs. So here I was substantially less traumatized when I reached the end of the pavement, the end of the Strava segment. I then walked a bit on the
gravel road which continued on to Mix Canyon Road, but just far enough to get a good view, then turned back.
As I got ready to descend, a local rider I'd passed arrived at the top, then soon after a group of Low-Keyers descended the dirt road I'd just walked down. The Low-Keyers had climbed to the high
point of that road, almost to Mix, but had turned back. One had crashed and was bleeding. Just a flesh wound, though...
The local told us how he climbed Gates Canyon every Saturday, Mix Canyon every Sunday. He lived in Vacaville, he explained, and these were the local climbs. I was in awe. Deja-vu from Maui, where
riders would climb 10 thousand foot Haleakala every week or two. No matter how extraordinary a climb, there are those for whom it becomes the ordinary.
We said goodbye to the local rider, descended together to Pleasants Valley Road. The descent wasn't bad. The fine gravel which I couldn't grip was fine descending, and the deeper dirt I could easily
run. It was fun. At the bottom we then split up as we rode to our respective rides home.
Ammon, it turned out, had completed the route to Mix Canyon, and descended that instead of Gates. Very cool.
I didn't hear a single rider complain the longer-than-normal drive hadn't been worth it. This day absolutely made the 2011 Low-Key series. If every one of the three remaining climbs is canceled by
rain, I'd still say the series was a success. I'll never forget climbing these two roads.
SHADOW CARVING
Silvio Savarese*, Marco Andreetto*, Holly Rushmeier **, Fausto Bernardini **, Pietro Perona*
* California Institute of Technology ** IBM T.J. Watson Research Center
Abstract. The shape of an object may be estimated by observing the shadows on its surface. Assuming that a conservative estimate of the object shape is available, our method analyzes images of
the object illuminated with known point light sources and taken from known camera locations. The surface estimate is adjusted using the shadow regions to produce a refinement that is still a
conservative estimate. A proof of correctness is provided. The method has been tested and validated with experimental results.
Introduction and motivation. We introduce a method for using self-shadows to estimate the shape of an object that can be implemented using inexpensive lighting and imaging equipment. We assume that we
have as a starting point a conservative estimate of object shape - that is, the volume enclosed by the current surface estimate completely contains the physical object. We analyze images of the
object illuminated with known point light sources taken from known camera locations. We adjust the current surface estimate using the shadow regions to produce improved shape estimates that
remain conservative. A proof of correctness is provided. Shape from shadows has the advantage that it does not rely on surface texture for establishing correspondence, or on a model of the
surface reflectance characteristics. However, past methods for shape from shadows can give either poor results or fail to converge when applied to physical data that contains error. Our method is
robust with respect to a conservative classification of shadow regions - pixels in shadow may be mislabelled as being lit. No assumptions about the surface topology are made (multi-part objects
and occlusions are allowed), although any non-smooth regions over the object's surface are supposed to be detected. Our motivation for pursuing this work is the construction of an inexpensive and
robust scanning system based on shape from silhouettes and shape from shadows. Furthermore, the method can be located within a similar volumetric framework as that employed by traditional
shape-from-silhouette and recent multiple-view algorithms.
Setup and Geometry. Consider an object in 3-D space and a point light source L illuminating it. A certain number of shadows are cast over the object by parts of the object itself. The scene is
observed by a calibrated camera with center Oc. See figure below.
In order to simplify the discussion we consider a slice of the scene. A slice is defined by one of the (infinitely many) planes defined by the light source and the center of the camera. Thus,
now we talk about image line, object's area and contour rather than image plane, object's surface and volume. The extension to the 3D case is immediate by observing that the slice may sweep the
entire object's volume.
The method. We start from a conservative estimate of the object contour (upper bound estimate). See figure below. The object is depicted in brown whereas its upper bound is in yellow. The light
source L illuminates the object and casts a shadow S on its surface. Let us suppose that we are able to process the image in order to separate the shadows from the rest of the observed object. We
call Si the observed shadow. What do the object's observed self-shadows tell us about the real surface? The main information comes from the inconsistencies between the shadows which would be
produced by the estimated surface and the observed shadows which are actually produced by the real surface. Thus, we could remove (carve out) regions from the current object estimate in order to
reduce the inconsistencies, and therefore incrementally compute better estimates of the object's shape, while making sure that at each step an upper bound estimate is maintained (conservative
carving). We call this procedure shadow carving because we carve out areas from the object by using observed shadows. Here is an example of how to find a carvable area:
1. The observed shadow Si and the center of the camera define the green sector in figure.
2. Call Se the projection of the observed shadow Si into the current upper bound object estimate
3. The light source and Se defines the pink sector in figure.
4. The intersection between the green sector, the pink sector and the current upper bound object area (in yellow) give the carvable area (in red).
Thus, the carvable area can be removed from the current upper bound estimate of the object's contour, generating an improved conservative approximation.
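The four numbered steps above amount to a membership test: a point of the current estimate is carvable exactly when it lies in both the camera sector and the light sector. The 2-D sketch below is our own simplification for illustration, not the authors' implementation; all names and coordinates are hypothetical, and it assumes each sector spans less than pi and does not cross the ±pi angle cut.

```python
import math

def angle(apex, p):
    """Direction of the ray apex -> p, in radians."""
    return math.atan2(p[1] - apex[1], p[0] - apex[0])

def in_sector(p, apex, a, b):
    """True if the ray apex -> p lies within the angular interval [a, b].
    Assumes the sector spans less than pi and avoids the +/-pi cut."""
    lo, hi = min(a, b), max(a, b)
    return lo <= angle(apex, p) <= hi

def carvable(p, camera, shadow_si, light, shadow_se, inside_estimate):
    """Steps 1-4 above: p is carvable iff it lies in the camera sector through
    the observed shadow Si, in the light sector through its projection Se,
    and inside the current upper-bound estimate."""
    cam = [angle(camera, q) for q in shadow_si]
    lit = [angle(light, q) for q in shadow_se]
    return (in_sector(p, camera, *cam)
            and in_sector(p, light, *lit)
            and inside_estimate(p))

# Toy slice: camera at the origin, light to the upper right, shadow endpoints
# at (1,2) and (-1,2); the estimate is taken as "everywhere" for simplicity.
print(carvable((0.0, 2.0), (0.0, 0.0), [(1.0, 2.0), (-1.0, 2.0)],
               (10.0, 5.0), [(1.0, 2.0), (-1.0, 2.0)], lambda q: True))  # -> True
```

Removing the carvable points found this way shrinks the estimate while, per the paper's theorem, keeping it conservative.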
The process can be iterated by using multiple light sources. As the light source changes, different shadows are cast over the object's contour. Thus, for each light source position, different
carvable areas can be found and removed, producing incrementally better estimates of the object's shape. See figure below.
We provide a proof of correctness. Namely, we show that the carvable area is well defined in the general case (i.e. complex surface topology, shadows occluded by other object parts, surfaces with
low albedo or highly specular regions) and always lies outside the actual object. See the technical paper for details.
Results. The method has been implemented and tested in the simple trial system shown below. Using each of several lights in turn, and a camera in front, allows multiple sets of shadow data to be
obtained for each object position.
Some results of the carving process are shown below. Upper panels: two views of the original object. Middle panels: reconstruction by using space carving only. Lower panels: reconstruction by
shadow carving. Notice shadow carving successfully recovers the concavities which cannot be handled by space carving.
Conclusions. We presented an analysis of the problem of refining a conservative estimate of an object's shape by observing the shadows on the object when it is lit by a known point light source.
We showed that a well defined portion of volume can be removed from the current object estimate. We proved a theorem that guarantees that when this portion is carved away from the shape, the
shape still remains conservative. We demonstrated that this insight leads to an algorithm that can work correctly on real images. We called this method shadow carving.
Shadow carving improves previous work on shape from shadow in that it is more robust with respect to the classification of shadow regions and is not restricted to 2.5D terrain surfaces, but
rather it may be applied to measuring the objects in the round. In order to validate our theory, we have implemented a reconstruction system that combines information from silhouettes and
shadows. The new system uses inexpensive digital cameras and lamps. Our experiments with real and synthetic objects confirm that the property of conservative carving is achievable in practice
and show that shadow carving produces a much better surface estimate than shape from silhouettes alone. Future research directions include finding the optimal configuration of lights and cameras
that maximizes the amount of volume that can be carved away at each iteration.
S. Savarese, M. Andreetto, H. Rushmeier, F. Bernardini, P. Perona, “3D Reconstruction by Shadow Carving: Theory and Practical Evaluation”, International Journal of Computer Vision (IJCV), vol. 71,
no. 3, pp. 305-336, March 2007.
S. Savarese, Holly Rushmeier, Fausto Bernardini and P. Perona, "Shadow Carving", in Proc. of the Int. Conf. on Computer Vision, Vancouver, Canada, June 2001.
S. Savarese, H. Rushmeier, F. Bernardini and P. Perona, "Implementation of a Shadow Carving System for Shape Capture", in Proc. of 1st International Symposium on 3D Data Processing, Visualization
and Transmission, IEEE Press, 2002, pp. 107-114.
Three Methods of Noise Figure Measurement
Abstract: Three different methods to measure noise figure are presented: Gain method, Y-factor method, and the Noise Figure Meter method. The three approaches are compared in a table.
In wireless communication systems, the "Noise Figure (NF)," or the related "Noise Factor (F)," defines the noise performance and contributes to the receiver sensitivity. This application note
describes this important parameter and details ways to measure it.
Noise Figure and Noise Factor
Noise Figure (NF) is sometimes referred to as Noise Factor (F). The relationship is simply:
NF = 10 * log10 (F)
Noise Figure (Noise Factor) contains the important information about the noise performance of an RF system. The basic definition is:

F = (S_IN/N_IN) / (S_OUT/N_OUT)

that is, the ratio of the signal-to-noise ratio at the input to the signal-to-noise ratio at the output. From this definition, many other popular equations of the Noise Figure (Noise Factor) can be derived.
Below is a table of typical RF system Noise Figures:
| Category | MAXIM Products | Noise Figure* | Applications | Operating Frequency | System Gain |
| LNA | MAX2640 | 0.9dB | Cellular, ISM | 400MHz ~ 1500MHz | 15.1dB |
| LNA | MAX2645 | HG: 2.3dB; LG: 15.5dB | WLL | 3.4GHz ~ 3.8GHz | HG: 14.4dB; LG: -9.7dB |
| Mixer | MAX2684 | 13.6dB | LMDS, WLL | 3.4GHz ~ 3.8GHz | 1dB |
| Mixer | MAX9982 | 12dB | Cellular, GSM | 825MHz ~ 915MHz | 2.0dB |
| Receiver System | MAX2700 | 3.5dB ~ 19dB | PCS, WLL | 1.8GHz ~ 2.5GHz | < 80dB |

* HG = High Gain Mode, LG = Low Gain Mode
Measurement methods vary for different applications. As shown in the table above, some applications have high gain and low noise figure (Low Noise Amplifiers under HG mode), some have low gain and
high noise figure (mixers and LNAs under LG mode), some have very high gain and wide range of noise figure (receiver systems). Measurement methods have to be chosen carefully. In this article, a
Noise Figure Meter as well as two other popular methods - "gain method" and "Y factor method" - will be discussed.
Using a Noise Figure Meter
Noise Figure Meter/Analyzer is employed as shown in Figure 1.
Figure 1.
The noise figure meter, such as Agilent N8973A Noise Figure Analyzer, generates a 28VDC pulse signal to drive a noise source (HP346A/B), which generates noise to drive the device under test (DUT).
The output of the DUT is then measured by the noise figure analyzer. Since the input noise and Signal-to-Noise ratio of the noise source is known to the analyzer, the noise figure of the DUT can be
calculated internally and displayed. For certain applications (mixers and receivers), a LO signal might be needed, as shown in Figure 1. Also, certain parameters need to be set up in the Noise
Figure Meter before the measurement, such as frequency range, application (Amplifier/Mixer), etc.
Using a noise figure meter is the most straightforward way to measure noise figure. In most cases it is also the most accurate. An engineer can measure the noise figure over a certain frequency
range, and the analyzer can display the system gain together with the noise figure to help the measurement. A noise figure meter also has limitations. The analyzers have certain frequency limits.
For example, the Agilent N8973A works from 10MHz to 3GHz. Also, when measuring high noise figures, e.g., noise figure exceeding 10dB, the result can be very inaccurate. This method requires very
expensive equipment.
Gain Method
As mentioned above, there are other methods to measure noise figure besides directly using a noise figure meter. These methods involve more measurements as well as calculations, but under certain
conditions, they turn out to be more convenient and more accurate. One popular method is called "Gain Method", which is based on the noise factor definition given earlier:
In this definition, "Noise" is due to two effects. One is the interference that comes to the input of a RF system in the form of signals that differ from the desired one. The second is due to the
random fluctuation of carriers in the RF system (LNA, mixer, receiver, etc.). The second effect is a result of Brownian motion. It applies in thermal equilibrium to any electronic device, and the
available noise power from the device is:
P_NA = kTΔF,

where k = Boltzmann's constant (1.38 * 10^-23 Joules/°K),
T = temperature in °K,
ΔF = noise bandwidth (Hz).

At room temperature (290°K), the noise power density P_NAD = -174dBm/Hz.
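The -174dBm/Hz figure can be checked directly from the constants above (a quick numerical sketch, no measurement involved):

```python
import math

k = 1.38e-23                      # Boltzmann's constant, J/K
T = 290.0                         # room temperature, K
p_watts_per_hz = k * T            # available noise power density, W/Hz
# Convert to dBm/Hz: decibels relative to 1 mW, per hertz of bandwidth.
p_dbm_per_hz = 10 * math.log10(p_watts_per_hz / 1e-3)
print(round(p_dbm_per_hz, 1))     # -> -174.0
```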
Thus we have the following equation:
NF = P_NOUT - (-174dBm/Hz + 10 * log10(BW) + Gain)

In the equation, P_NOUT is the measured total output noise power. -174dBm/Hz is the noise density of 290°K ambient noise. BW is the bandwidth of the frequency range of interest. Gain is the system
gain. NF is the noise figure of the DUT. Everything in the equation is in log scale. To make the formula simpler, we can directly measure the output noise power density (in dBm/Hz), and the equation becomes:

NF = P_NOUTD + 174dBm/Hz - Gain
To use the "Gain Method" to measure the noise figure, the gain of the DUT needs to be pre-determined. Then the input of the DUT is terminated with the characteristic impedance (50Ω for most RF
applications, 75Ω for video/cable applications). Then the output noise power density is measured with a spectrum analyzer.
The setup for Gain Method is shown in Figure 2.
Figure 2.
As an example, we measure the noise figure of the MAX2700. At a specified LNA gain setting and V_AGC, the gain is measured to be 80dB. Then, set up the device as shown above, and terminate the RF
input with a 50Ω termination. We read the output noise density to be -90dBm/Hz. To get a stable and accurate reading of the noise density, the optimum ratio of RBW (resolution bandwidth) and VBW
(video bandwidth) is RBW/VBW = 0.3. Thus we can calculate the NF to be:
-90dBm/Hz + 174dBm/Hz - 80dB = 4.0dB.
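The arithmetic of this example can be wrapped in a small helper (a sketch of the stated formula; the function name is ours, not Maxim's):

```python
def nf_gain_method(p_out_dbm_per_hz, gain_db):
    """Gain method: NF = P_NOUTD + 174dBm/Hz - Gain, all in log units."""
    return p_out_dbm_per_hz + 174.0 - gain_db

# MAX2700 example: -90dBm/Hz measured output noise density, 80dB gain.
print(nf_gain_method(-90.0, 80.0))   # -> 4.0
```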
The "Gain Method" can cover any frequency range, as long as the spectrum analyzer permits. The biggest limitation comes from the noise floor of the spectrum analyzer. As shown in the equations, when
the Noise Figure is low (sub-10dB), (P_NOUTD - Gain) is close to -170dBm/Hz. Normal LNA gain is about 20dB. In that case, we need to measure a noise power density of -150dBm/Hz, which is lower than the
noise floor of most spectrum analyzers. In our example, the system gain is very high, thus most spectrum analyzers can accurately measure the noise figure. Similarly, if the Noise Figure of the DUT
is very high (e.g., over 30dB), this method can also be very accurate.
Y Factor Method
Y Factor method is another popular way to measure Noise Figure. To use the Y factor method, an ENR (Excess Noise Ratio) source is needed. It is the same thing as the noise source we mentioned
earlier in the "Noise Figure Meter" section. The setup is shown in the Figure 3:
Figure 3.
The ENR head usually requires a high DC voltage supply. For example, HP346A/B noise sources need 28VDC. Those ENR heads work over a very wide band (e.g., 10MHz to 18GHz for the HP346A/B) and they have
a standard ENR parameter of their own at specified frequencies. An example table is given below. The ENR at frequencies between those markers is extrapolated.
Table 1. Example of ENR of Noise Heads

Frequency (Hz)   HP346A ENR (dB)   HP346B ENR (dB)
1G               5.39              15.05
2G               5.28              15.01
3G               5.11              14.86
4G               5.07              14.82
5G               5.07              14.81
Turning the noise source on and off (by switching the DC voltage on and off), the engineer measures the change in the output noise power density with a spectrum analyzer. The formula to calculate
noise figure is:
NF = ENR - 10*log10(10^(Y/10) - 1)
where ENR is the number given in the table above (it is normally marked on the ENR head) and Y is the difference, in dB, between the output noise power density with the noise source on and with it
off.
The equation comes from the following. An ENR noise head provides a noise source at two "noise temperatures": a hot T = T[H] (when the DC voltage is applied) and a cold T = 290K. The ENR of the
noise head is defined as:
ENR(dB) = 10*log10((T[H] - 290)/290)
The excess noise is achieved by biasing a noisy diode. Now consider the ratio of the amplifier (DUT) output power with the hot T = T[H] input to that with the cold T = 290K input:
Y = G(T[H] + T[n])/G(290 + T[n]) = (T[H]/290 + T[n]/290)/(1 + T[n]/290).
This is the Y factor, from which the method gets its name.
In terms of noise figure, F = T[n]/290 + 1, where F is the noise factor (NF = 10*log10(F)). Thus Y = ENR/F + 1, with every quantity here in linear terms; solving for F gives the equation above.
Again, let's use the MAX2700 as an example of how to measure noise figure with the Y-factor method. The setup is shown above in Figure 3. Connect an HP346A ENR noise head to the RF input and a 28V
DC supply to the noise head. Monitor the output noise density on a spectrum analyzer. Turning the DC supply off and then on, the noise density increases from -90dBm/Hz to -87dBm/Hz, so Y = 3dB.
Again, to get a stable and accurate reading of the noise density, RBW/VBW is set to 0.3. From Table 1, at 2GHz, ENR = 5.28dB. Thus we calculate the NF to be 5.3dB.
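The Y-factor bookkeeping — NF = ENR - 10*log10(Y - 1), with Y in linear terms — is also easy to script. A sketch (function names are ours):

```python
import math

def nf_y_factor(enr_db, y_db):
    """Noise figure (dB) from a Y-factor measurement.

    enr_db : ENR of the noise head at the measurement frequency (dB)
    y_db   : on/off difference in output noise density (dB)

    NF = ENR - 10*log10(Y - 1), with Y converted to linear terms first.
    """
    y_lin = 10.0 ** (y_db / 10.0)
    return enr_db - 10.0 * math.log10(y_lin - 1.0)

# MAX2700 example from the text: ENR = 5.28 dB at 2GHz, Y = 3 dB
print(round(nf_y_factor(5.28, 3.0), 1))  # 5.3
```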
In this article, three methods of measuring the noise figure of RF devices have been discussed. Each has advantages and disadvantages, and each suits certain applications; a summary of the pros
and cons follows. In theory the measurement results for the same RF device should be identical, but given the limitations of RF equipment (availability, accuracy, frequency range, noise floor,
etc.), we have to choose carefully the method that best yields correct results.
Method              Suitable Applications            Advantages                                             Disadvantages
Noise Figure Meter  Super-low NF                     Convenient; very accurate when measuring super-low     Expensive equipment; limited frequency range
                                                     (0-2dB) NF
Gain Method         Very high gain or very high NF   Easy setup; very accurate at measuring very high NF;   Limited by the spectrum analyzer noise floor; can't
                                                     suitable for any frequency range                       handle systems with both low gain and low NF
Y Factor Method     Wide range of NF                 Measures a wide range of NF at any frequency,          When measuring very high NF, the error can be large
                                                     regardless of gain
Related Parts
MAX2105 Direct-Conversion Tuner ICs for Digital DBS Applications
MAX2640 300MHz to 2500MHz SiGe Ultra-Low-Noise Amplifiers
MAX2641 300MHz to 2500MHz SiGe Ultra-Low-Noise Amplifiers
MAX2642 900MHz SiGe, High-Variable IP3, Low-Noise Amplifier
MAX2643 900MHz SiGe, High-Variable IP3, Low-Noise Amplifier
MAX2645 3.4GHz to 3.8GHz SiGe Low-Noise Amplifier/PA Predriver
MAX2648 5GHz to 6GHz Low-Noise Amplifier in 6-Pin UCSP
MAX2649 5GHz Low-Noise Amplifier with Shutdown
MAX2654 1575MHz/1900MHz Variable-IP3 Low-Noise Amplifiers
MAX2655 1575MHz/1900MHz Variable-IP3 Low-Noise Amplifiers
MAX2656 1575MHz/1900MHz Variable-IP3 Low-Noise Amplifiers
MAX2684 3.5GHz Downconverter Mixers with Selectable LO Doubler
MAX2700 1.8GHz to 2.5GHz, Direct-Downconversion Receivers
MAX9982 825MHz to 915MHz, SiGe High-Linearity Active Mixer
APP 2875: Nov 21, 2003
Photo-Realistic Computer Graphics
Spring 2002
Photo-Realistic Computer Graphics TTh 2:00
CIS 5930 499 Dirac Science Library
Dr. David C. Banks
08 January
Create a notebook for the course. Put printed copies of papers in it.
Create a Web area for the course.
Find a Web log tool (such as blogger or slashcode or PHP-nuke) so you can add comments each week. Google info on "web logging".
(Jan 13, 2002) Note: The above assignment is cancelled. Do not attempt to use a Web logging tool for this course. Too much time and effort are required.
Homework 00
Visit POVray. Download POVray to a machine of your choice. Try out their examples. Create a scene of your own. Create an animation by moving the camera, rendering, and saving frames. Put your
results under your course Web page.
An Improved Illumination Model
Distributed Ray Tracing
Framework for Realistic Image Synthesis
Distribution Ray Tracing [pdf version]
15 January
Homework 01 Visit radiance. Download. Create images. Put them on your Web page.
Part 2 Create a sphere in OpenInventor. Sample the sphere using rejection on a random vector p: if (0.0 < p.length() && p.length() < sphere.radius) p.normalize(); At each sample point, calculate
the emittance in the direction of the camera. Put a small sphere at the sample point, with color given by the emittance.
You are welcome to use or modify my code. The header files are not included, nor is pointOnSphere.cxx included.
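The rejection step above fits in a few lines. Here is an illustrative Python version (the course code is C++/OpenInventor; this is not the instructor's implementation):

```python
import math
import random

def random_sphere_sample():
    """Uniform random direction: draw p from the cube [-1, 1]^3,
    reject points outside the unit ball (or too close to the origin),
    then normalize the survivor onto the unit sphere."""
    while True:
        p = [random.uniform(-1.0, 1.0) for _ in range(3)]
        r = math.sqrt(sum(c * c for c in p))
        if 1e-9 < r < 1.0:              # the rejection test from the text
            return [c / r for c in p]

d = random_sphere_sample()
print(math.isclose(sum(c * c for c in d), 1.0))  # True
```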
Adaptive Quadrature
Monte Carlo Methods
Practitioner's assessment
22 January
Homework 02
Download BMRT. Make images for your Web page.
Gordon Erlebacher
Department of Mathematics
Florida State University
"On the Challenge of Visualizing Vector Fields"
Friday, January 25, 2002 4:30 P.M.
499 Dirac Science Library
Part 2 Make your Inventor program read the scene-description from a file. For example:
center 1.0 2.0 -3.0
radius 1.5
[ 1050.3 ], # red
[ 1692.8 ], # green
[ 2319.2 ] # blue
Make the number of samples be a command-line flag. Distribute the samples across the entire scene. As the application runs, continually delete old samples and resample at new locations.
The emittance can be any large value, so you must somehow convert it to something between 0.0 and 1.0: divide all the colors by maxEmittance, then raise to a power gamma (supplied by the command
line).
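That normalize-then-gamma step is a simple tone-mapping operator. A minimal sketch (function and parameter names are ours):

```python
def tone_map(emittance_rgb, max_emittance, gamma):
    """Map raw emittance into the displayable [0, 1] range: divide by
    the scene-wide maximum, then raise to the power gamma."""
    return [(e / max_emittance) ** gamma for e in emittance_rgb]

# e.g. the sample values from the scene file above, gamma = 0.5
print(tone_map([1050.3, 1692.8, 2319.2], 2319.2, 0.5))
```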
Optical Models
Display of Surfaces
The Foundations of Photo-realistic Rendering
29 January
Homework 03
Download and run bv. Download and run brdfview
Part 2 Create an isotropic emittance format. The emittance varies with the dot product D between a surface normal and vector vOut. Then create a scene with some spheres having different emittance
functions. Use linear interpolation, or else use some filter function of your choice.
DEF MY_EMITTANCE Emittance
[ # red
0.0, 0.0, 0.0, 0.0, 0.0, # negative D
0.1, 2.0, 3.0, 8.5, 9.0
# green
0.0, 0.0, 0.0, 0.0, 0.0, # negative D
0.1, 1.0, 2.0, 3.0, 3.5
# blue
0.0, 0.0, 0.0, 0.0, 0.0, # negative D
0.1, 1.5, 4.0, 6.0, 7.0
emittance USE MY_EMITTANCE
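One way to evaluate such a table — assuming the ten samples per channel are spaced evenly over D in [-1, 1], which the format above suggests but does not state — is plain linear interpolation:

```python
def eval_emittance(samples, d):
    """Linearly interpolate one emittance channel at dot product d.

    samples : channel values, assumed evenly spaced over d in [-1, 1]
    d       : dot(surfaceNormal, vOut), clamped to [-1, 1]
    """
    d = max(-1.0, min(1.0, d))
    t = (d + 1.0) / 2.0 * (len(samples) - 1)   # fractional sample index
    i = min(int(t), len(samples) - 2)
    frac = t - i
    return samples[i] * (1.0 - frac) + samples[i + 1] * frac

red = [0.0, 0.0, 0.0, 0.0, 0.0, 0.1, 2.0, 3.0, 8.5, 9.0]
print(eval_emittance(red, 1.0))   # 9.0  (surface faces vOut head-on)
print(eval_emittance(red, -1.0))  # 0.0  (surface faces away)
```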
Progressive radiosity.
Radiosity overview.
Photon mapping.
Light field.
05 February
Homework 04
Download the Light field package and make some images.
Create rays around a sphere having center c. When the sphere is sampled at point p, create a cylinder through c and p. Assign the cylinder a length L and radius r according to some reasonable guess
that depends on the number of samples (specified in the command line). Where a cylinder passes through another sphere, indicate the intersection by placing a small sphere at that point. Color the
intersection-sphere marker according to the emittance function from the sphere that is stabbed.
You can use or modify the following sphere-intersection code.
Light field
View Interpolation (Shenchang Eric Chen) (search the Web)
12 February
Homework 05
Download or implement a ray-triangle intersection routine.
Download or implement a marching-cubes triangle generator. Create a dragger to select the isovalue of the scalar function.
Download or create a 3D scalar field (such as a 512-byte header, 217x217x217 1-byte dataset of a human brain). Consult the man page for fopen() if you haven't done file I/O under unix before. Or
ask fellow students for help.
Specify a viewpoint (from the command line, or via a dragger) in the scene with the brain isosurface. Randomly sample directions from a sphere around the viewpoint. Show where each ray intersects
the isosurface (by looping over the triangles in the isosurface to see which one is stabbed).
Implicit Surfaces
Polygonizing a Scalar Field
19 February
Homework 06
Kiril Vidimce
"Normal Meshes"
11:00 am Monday, February 25, 2002
Dirac 499 Seminar Room
Part 1 Search the Web for "trilinear interpolation". This is a simple scheme for determining the value of a function f(x,y,z) at points that are not on the grid. Implement it to use with your
scalar function on a 3D grid from last week (copy; paste).
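For reference, a minimal trilinear interpolation routine in Python (a sketch; the nested-list grid layout is our assumption, not the course data format):

```python
def trilinear(grid, x, y, z):
    """Trilinearly interpolate a 3D grid (nested lists, indexed
    [x][y][z]) at a continuous in-bounds point (a sketch: only the
    base cell is clamped)."""
    i = min(int(x), len(grid) - 2)
    j = min(int(y), len(grid[0]) - 2)
    k = min(int(z), len(grid[0][0]) - 2)
    fx, fy, fz = x - i, y - j, z - k
    c = 0.0
    for di, wx in ((0, 1 - fx), (1, fx)):
        for dj, wy in ((0, 1 - fy), (1, fy)):
            for dk, wz in ((0, 1 - fz), (1, fz)):
                c += wx * wy * wz * grid[i + di][j + dj][k + dk]
    return c

# Trilinear interpolation is exact on the linear field f = x + 2y + 3z.
g = [[[x + 2*y + 3*z for z in range(3)] for y in range(3)] for x in range(3)]
print(trilinear(g, 0.5, 1.25, 0.75))  # 5.25
```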
Part 2
The partial derivative d/dx of a scalar function can be approximated by a central difference of function values in the x direction:
d/dx f(x, y, z) = (f(x+1, y, z) - f(x-1, y, z)) / 2
The partial derivatives d/dy and d/dz are defined in a similar way.
Notice that you can compute these partials even when the point p=(x,y,z) is off the grid. Just use your interpolation function.
You must guard against going off the grid when you add or subtract 1 from the coordinate of the point.
Write a routine that computes the normal vector at a vertex on your isosurface. Use the gradient. The components of the gradient are the partials (df/dx,df/dy,df/dz). The normal of the isosurface
through point p lies parallel to the gradient. Should the normal point in the positive or negative direction of the gradient? It depends on the dataset. Be prepared to try both ways.
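Putting the pieces together, a Python sketch of the vertex-normal computation (it assumes an `f(x, y, z)` that already handles off-grid points via your interpolation routine; the `flip` flag covers the dataset-dependent sign choice):

```python
import math

def isosurface_normal(f, x, y, z, flip=False):
    """Unit normal at (x, y, z): the normalized gradient of the scalar
    field f, estimated by central differences with a step of 1 grid
    unit; `flip` selects the opposite sign when the dataset needs it."""
    g = [(f(x + 1, y, z) - f(x - 1, y, z)) / 2.0,
         (f(x, y + 1, z) - f(x, y - 1, z)) / 2.0,
         (f(x, y, z + 1) - f(x, y, z - 1)) / 2.0]
    if flip:
        g = [-c for c in g]
    length = math.sqrt(sum(c * c for c in g)) or 1.0  # guard zero gradient
    return [c / length for c in g]

# For f = x^2 + y^2 + z^2, the normal at (1, 0, 0) points along +x.
print(isosurface_normal(lambda x, y, z: x*x + y*y + z*z, 1, 0, 0))
```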
For every vertex in the isosurface, sample some spherical directions in the hemisphere defined by the normal.
Part 3
To make sampling reasonable on the brain isosurface, first resample the data at a coarser resolution. This is easy to do. Create a new coordinate system x2,y2,z2 with dimensions x2Dim, y2Dim,
z2Dim. Write a function to convert from x2,y2,z2 coordinates to x,y,z coordinates. Loop through the new grid. At each point p2 in the new coordinates, find the corresponding point p in the old
coordinates. Use the interpolate routine to evaluate f(p).
Part 4
Make a globally illuminated image of the brain. Define 2 or 3 light sources (triangles that you put somewhere, with a simple cosine emittance distribution). For each vertex on the isosurface,
sample some directions on the hemisphere around the vertex. Collect the radiance that is received from any emitter hit by a ray. Scale that radiance by albedo*cos(theta).
Paul S. Heckbert and Michael Garland, "Multiresolution Modeling for Fast Rendering".
He et al., "Voxel-based object simplification"
26 February
Homework 07
Download and run the Stanford volPack volume rendering package. Try it on your scalar volume.
Part 2 For each ray coming from your viewing sphere, follow the ray through a scalar-valued volume. The points q on the ray are defined by the equation q = p + t*dp, where p is the viewpoint and dp
is the unit vector defining the ray direction. Different values of t yield different points q along the ray. Specify (either in the command line or in your parameter file) how closely spaced the
samples are along the ray. Call this distance dt.
Assume the volume is emissive, with emittance E given by the dot product between vOut and the normal (plus/minus the unit gradient), multiplied by a "transfer function" g(). Integrate the emittance
along the ray.
Lin += E(q)*dt
To make a certain isovalue c be the dominant one, make the transfer function spike near c. For example, if h(x,y,z) is the volumetric scalar function, let g(h) = exp( -(h-c)*(h-c)/(s*s) ). The
value of s determines the width of the spike.
Color the sphere sample according to the accumulated radiance.
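The Part 2 integration loop might look like this Python sketch (names are ours; `emittance(q)` stands in for E(q) = dot(vOut, normal) * g(h(q)) described above):

```python
def integrate_emission(emittance, p, dp, t_max, dt):
    """Accumulate radiance along the ray q = p + t*dp by summing
    emittance(q) * dt at evenly spaced samples (a Riemann sum)."""
    L_in, t = 0.0, 0.0
    while t < t_max:
        q = [p[i] + t * dp[i] for i in range(3)]
        L_in += emittance(q) * dt          # Lin += E(q)*dt from the text
        t += dt
    return L_in

# Sanity check: unit emittance along a ray of length 2 integrates to ~2.
print(integrate_emission(lambda q: 1.0, [0, 0, 0], [1, 0, 0], 2.0, 0.01))  # ≈ 2.0
```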
Part 3 Instead of being emissive, make each point in the volume have a transmittance function f(vIn, vOut). The function is purely transmissive in the forward direction, so f(v, v) = 1.0; that is,
the radiance L coming into a point continues forward, so you can simply add the radiances along the ray. Single scattering is accomplished by adding L(vIn)*f(vIn, vOut) for light coming from a
source. You can make the source be one or two isolated points, or you can sample them from a region like a triangle or rectangle. Use f = normal.dot(-vIn)*g(h(q)), where h is the scalar function
and g is the transfer function.
Debevec, Acquiring the Reflectance Field of a Human Face
05 March
Homework 08
Download Paul Heckbert's Radiosity Visualizer. Make images for your Web page.
Part 2 Add absorption to your volume renderer. Create a second transfer function tau(h), where h() is the scalar function on the 3D domain. The function tau is the extinction coefficient in Max's
paper from Homework 02. When tau=0, no absorption occurs; when tau=infty, light is completely absorbed. The radiance L(p[k]) along the ray has two components: emission and scattering. Assume the
emittance comes from a couple of polygons outside the volume. The scattering uses your scattering function from the previous homework, plus absorption.
If points p and p' lie on the ray from the viewpoint, with p closer than p', then you can implement absorption as follows. Let dp be the ray direction (passing through p and p'). When vIn = -dp,
light is being transferred from p' to p. At this point and in this direction you get
L(p, vOut) = exp(-tau(p)*dt) * L(p', vIn)
where dt is the length of the step from p to p', assuming tau to be constant over the step.
When you select random directions about each point p for sampling the incoming radiance L, be sure that -dp is one of these directions. You can use different step sizes for different directions
surrounding the point p. In particular, you are free to take big step sizes toward the lights.
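Folding absorption into the marching loop gives front-to-back accumulation. Again a sketch with invented names, lumping emission and single scattering into one `source(q)` term:

```python
import math

def integrate_with_absorption(source, tau, p, dp, t_max, dt):
    """Front-to-back marching: attenuate each sample's contribution by
    the transmittance T accumulated from the viewpoint so far."""
    L, T, t = 0.0, 1.0, 0.0
    while t < t_max:
        q = [p[i] + t * dp[i] for i in range(3)]
        L += T * source(q) * dt            # emission + scattering term
        T *= math.exp(-tau(q) * dt)        # absorption over this step
        t += dt
    return L

# With tau = 0 this reduces to the purely emissive integral (~2 here);
# with large tau the far end of the ray contributes almost nothing.
print(integrate_with_absorption(lambda q: 1.0, lambda q: 0.0,
                                [0, 0, 0], [1, 0, 0], 2.0, 0.01))  # ≈ 2.0
```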
Siggraph 2002. Look over the list of courses. Make a short list of courses you want to attend. Some possibilities are listed below.
│ 2│Advanced Global Illumination │Doutre, Bala │Sunday │8:30-12:15 │
│ 5│Image-Based Lighting │Debevec │Sunday │1:30-5:15 │
│ 7│Introducing X3D │Daly │Sunday │8:30-5:15 │
│10│Level Set and PDE Methods for Computer Graphics │Breen, Sapiro, Fedkiw, Osher │Sunday │8:30-5:15 │
│12│Modeling Techniques for Medical Applications │Metaxas │Sunday │8:30-5:15 │
│16│RenderMan in Production │Gritz │Sunday │8:30-5:15 │
│17│State of the Art in Hardware Shading │Olano, Boyd, McCool, Mark, Mitchell │Sunday │8:30-5:15 │
│25│Using Tensor Diagrams to Represent and Solve Geometric Problems │Blinn │Monday │1:30-5:15 │
│29│Beyond Blobs: Recent Advances in Implicit Surfaces │Yoo, Turk, Dinh, Hart, O'Brien, Whitaker│Monday │8:30-5:15 │
│36│Real-Time Shading Languages │Olano, Hart, Heidrich, Mark, Perlin │Monday │8:30-5:15 │
│39│Acquiring Material Models Using Inverse Rendering │Marschner, Ramamoorthi │Monday │8:30-12:15 │
│43│A Practical Guide to Global Illumination Using Photon Mapping │Jensen │Tuesday │1:30-5:15 │
│44│Image-Based Modeling │Grzeszczuk │Tuesday │1:30-5:15 │
│54│Obtaining 3D Models With a Hand-Held Camera │Pollefeys │Wednesday│10:30-12:15│
12 March
Semester break
19 March Homework 09
Download Garland's code; try it out.
They Might Be Giants
Friday, March 22 2002
10:30 Cow Haus
Tickets $16
Template Graphics Software has a visualization product called Amira. Install an evaluation copy on your machine at home, or use it from the Vis machines. Type "help" in the command window, then go
through the list of demos. Under "geometry reconstruction", click the bottom example (surface simplification). Practice simplifying the mesh. Press the pencil icon to see what the buttons can do.
Modify the vertices. Modify the edges.
• Load colin's brain and make an isosurface. Convert it to a surface and decimate it repeatedly. Save a sequence of simplified meshes.
• Load your .iv file from your own isosurface tool. Let Amira decimate it. Save. Repeat. Produce a sequence of simpler meshes.
• Use Garland's code to simplify your mesh. Save. Repeat. Produce a sequence of simpler meshes.
Create an LOD node (man SoLOD) for each sequence of simplified meshes. For example,
#Inventor V2.1 ascii
LOD {
  # add a range [ ... ] field to set the switch distances
  File { name "data/brain-amiraIso-amiraSimplify.200000.iv" }
  File { name "data/brain-amiraIso-amiraSimplify.100000.iv" }
  File { name "data/brain-amiraIso-amiraSimplify.050000.iv" }
  File { name "data/brain-amiraIso-amiraSimplify.025000.iv" }
  File { name "data/brain-amiraIso-amiraSimplify.012500.iv" }
}
Surface Simplification Using Quadric Error Metrics
26 March Homework 10
Create two scenes of Colin's brain. Scene1 is a high-resolution isosurface. Scene2 is a decimated version of scene1. While you are debugging, let scene1 have maybe 5,000 polygons and scene2 about
1,000.
Part 1 Include a luminaire in the scene (maybe a couple of triangles, or maybe a triangulated sphere). The luminaire has an emittance distribution function of your choosing; start with a simple
cosine distribution. For each vertex in scene1, randomly shoot rays (either in the whole sphere, or just in the hemisphere defined by the surface normal). If a ray hits an emitter, compute the
contribution the incoming radiance makes to that vertex and accumulate it to the total of incident radiance. The incident radiance is diminished as the light source deviates from the normal
direction, obeying a cosine law. Increase the radiance by the reflected radiance.
gather(SceneGraph &scene1)
  foreach vert in scene1
    vert.reflective = 0                    // start a new gather
    receivedRadiance = 0
    do NumSamples times                    // gather radiance
      vIn = randomSphereSample(vert.surfaceNormal());   // incident direction
      vert2 = nearestPoint(vert, scene1.intersect(vIn));
      if (vert2 != Null)
        receivedRadiance += vert2.radiance(vIn)         // radiance = emissive + reflective
                            * (-vIn).dot(vert.normal);  // dot > 0
    vert.reflective += receivedRadiance * vert.reflectance;
Initialize the scene by setting vert.emissive = vert.reflective = 0 except for the luminaires. In the pseudocode above, a vertex has a reflectance (the BRDF), a surfaceNormal, and radiance
(composed of emissive radiance and reflective radiance).
You can do the gather() step just one time and produce a close approximation to the correct radiance. If you perform the gather step two or more times, you will account for inter-reflections in
the scene.
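The same gather pass in executable form — a toy Python sketch over a stub scene (the vertex fields and `intersect` callback are our stand-ins, not the course API; unlike the pseudocode, this version divides by the sample count to form an average):

```python
import math
import random

def random_sphere_sample(normal):
    """Uniform direction in the hemisphere about `normal`, by rejection."""
    while True:
        p = [random.uniform(-1.0, 1.0) for _ in range(3)]
        r = math.sqrt(sum(c * c for c in p))
        if not (1e-9 < r < 1.0):
            continue                       # outside unit ball: reject
        d = [c / r for c in p]
        if sum(d[i] * normal[i] for i in range(3)) > 0.0:
            return d                       # keep the normal's hemisphere

def gather(verts, intersect, num_samples=64):
    """One gather pass: each vertex averages the cosine-weighted
    radiance arriving along random hemisphere directions."""
    for v in verts:
        received = 0.0
        for _ in range(num_samples):
            d = random_sphere_sample(v["normal"])
            hit = intersect(v, d)          # what does the ray see?
            if hit is not None:
                cos = sum(d[i] * v["normal"][i] for i in range(3))
                received += (hit["emissive"] + hit["reflective"]) * cos
        v["reflective"] = v["reflectance"] * received / num_samples

# Stub scene: a single receiver; every ray hits the same unit emitter.
emitter = {"emissive": 1.0, "reflective": 0.0}
recv = {"normal": [0.0, 0.0, 1.0], "reflectance": 0.5,
        "emissive": 0.0, "reflective": 0.0}
gather([recv], lambda v, d: emitter, num_samples=2000)
print(0.0 < recv["reflective"] < 0.5)  # True (expected value is 0.25)
```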
Part 2 Accelerate the basic illumination scheme. Replace scene1 with the decimated scene2 in the intersection test. (Extra credit: to accelerate further, replace the original scene with the
decimated scene2, compute one or more passes of light transport, then assign radiance to each vert of scene1 by interpolating. You only need to find the triangle in scene2 where the vert from
scene1 should lie, then interpolate the three values from the vertices of the triangle in scene2.)
Part 3 Create an animation. When you render the scene, turn off the headlight (and all lights). Make each vertex have zero diffuse and specular components. Set the emissive component to be the
vertex's radiance. Change the isovalue, make a new image, and repeat. Change the isovalue in steps from about value=70 to about value=90 for this dataset.
Part 4 In case your Web area for the course is not up to date, create Web pages for homework 0, homework 1, homework 2, and homework 3. Link them from your course web page, putting a thumbnail
image with each link. Put descriptive information on your homework page so that a casual visitor will understand what you did, what machine you used, how long it took to run, how much code was
involved, what steps were needed to get code to compile. In general, make it be like a page you yourself would like to visit.
02 April
Homework 11
Part 1 Update or create Web pages for homework 04, homework 05, homework 06, homework 07.
Part 2 Final Project. Choose a project from among the following, or propose your own. The final project will include a Web page that describes the project, with links to code, documentation,
images, and animations. Project demos will be given during the final two weeks of classes, once or twice as a "work in progress" and once as a final presentation.
• *Blurred radiance. Create a 3D array radianceMesh[] containing a radiance value and a weight at each location. Loop through isovalues f(x,y,z) in Colin's brain data, from fMin to fMax, in
increments of df. Maybe use df=0.01*(fMax-fMin) to produce 100 isosurfaces. For each isovalue, produce an isosurface and compute the radiance at each vertex (you will need to introduce one or
more luminaires).
Assign the vertex's radiance to nearby grid points in radianceMesh[]. Use a filter function filter() to weight the contribution.
for (f = fMin; f < fMax; f += df)
  scene = scalar3Dmesh.getIsosurface(f)
  foreach vert in scene
    foreach gridpoint in radianceMesh near vert
      gridpoint.value += filter(|vert - gridpoint|) * radiance(vert)
      gridpoint.weight += filter(|vert - gridpoint|)
foreach gridpoint in radianceMesh
  if (gridpoint.weight > 0.0)
    gridpoint.value /= gridpoint.weight
You now have produced (or estimated) the radiance at every point in the entire 3D volume.
Combine the estimated radiance in radianceMesh with your isosurface routine for the brain data. When you interpolate a vertex location on an isosurface, interpolate the radiance from the
corresponding radianceMesh. If your isosurface generation runs in real time, the illuminated isosurface will also be generated in real time.
• Photon mapping. Search the Web, read the book on photon mapping. Implement photon mapping for some datasets we have used (seminar room; colin's brain).
• *Hybrid rendering. Use a commercial-grade renderer (such as BMRT or povray or radiance) to render images of the brain isosurfaces. Use equally-spaced isovalues from fMin to fMax. Take the image
and project the colors onto the vertices in the mesh (ray-trace from the viewpoint to the vertex, finding where the ray intersects the image). Blend the vertex's color into the 3D volume's
nearby grid points and save the resulting weighted RGB colors. Modify your isosurface program to interpolate the 3D grid of RGB colors and paint them on each isosurface when it is created.
• *Acquiring reflectance. Use a digital camera to collect reflectance from a surface. Control the direction of the light and the camera to sample the 4-dimensional space of vectors vIn, vOut.
Build a table containing the averages of these reflectances across many points on the surface. Then reconstruct the surface appearance using this reflectance function by rendering a polygonal
mesh whose reflectance is determined by the table.
• Light field. Use a digital camera to collect images from an array of viewpoints in a plane. Take pictures of one of the plastic brains in the Vis Lab, perhaps from a 10x10 array of viewpoints.
Then use a commercial-grade renderer (your own, BMRT, povray, radiance, etc) to render a similar set of images of Colin's brain. Use the Stanford lightfield viewer to display the two scenes.
• View-dependent illumination. Modify the gather() step from your previous homework so that the gathering is direction-dependent, and the reflectance function is direction-dependent as well.
• Volume rendering. Download the open-source volumizer code from SGI onto a Vis Lab linux or irix machine. Modify their demo code so that it will read in a dataset like Colin's brain and display
it volumetrically. Include command-line flags to specify the dimensions, etc. (xDim 217 yDim 217 zDim 217 byte 1 header 512 big-endian). Put draggers in the scene to allow the user to change
the transfer function. If the transfer function is exp(-(f*f)/(s*s)), let one dragger specify f and the other specify s.
• * Volume/surface illumination. Define a transfer function exp(-(f*f)/(s*s)) for a ray in the volume. The ray issues from a 3D grid point p0 in a scalar field f(p) (Colin's brain). Let f(p0)
define the transfer function, so that the volume is transparent except for regions having the same value as p0. Apply opacity and single scattering to accumulate the radiance along the ray.
Shoot multiple rays from each grid point in the volume. Collect the incident radiance into a 3D radiance grid. Combine it with your isosurface program so that each vertex in an isosurface has
radiance interpolated from the radiance grid.
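The splatting loop in the blurred-radiance project reduces, in one dimension, to the following sketch (names are ours; the 3D version loops over a small neighborhood of grid points instead of the whole grid):

```python
def splat(samples, grid_size, filter_radius=1.5):
    """Scatter (position, value) samples onto a 1D grid with a hat
    filter, then normalize each grid point by its accumulated weight."""
    value = [0.0] * grid_size
    weight = [0.0] * grid_size
    for pos, val in samples:
        for g in range(grid_size):
            w = max(0.0, 1.0 - abs(pos - g) / filter_radius)  # hat filter
            value[g] += w * val
            weight[g] += w
    return [v / w if w > 0.0 else 0.0 for v, w in zip(value, weight)]

# Two samples sharing one value reconstruct that value wherever the
# filter has support.
print(splat([(1.0, 5.0), (2.0, 5.0)], 4))  # each entry ≈ 5.0
```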
09 April
Homework 12
Part 1 Update or create Web pages for homework 08, homework 09, homework 10, homework 11.
16 April
Final Project Demos
22 April
Thursday April 25 7:30-9:30 am DSL 499.
Each assignment is due one week after it is given, unless otherwise noted. Make your Web assignments and programming assignments available for me to fetch via a script as of 11:59pm each Monday.
They should be rooted in a directory such as www.server/~user/cis5930/hw00/hw00.tar.gz for fetching. Be sure to include a readme.txt text file that describes your project, gives credit for any
code you copied, and explains how to compile and run your code.
This course requires a significant amount of reading. Be prepared to lead discussion on the reading during class.
Homework will be demonstrated in class each week, using the PowerWall in the Dirac Science Library Seminar Room. The programs are described informally below, and in more detail during class. The
goal of these programs is to allow you to investigate and demonstrate aspects of global illumination and radiative heat transfer.
If I am invited to review an actual paper submitted to this year's SIGGRAPH conference, you will help with the process. The ethical aspects of the review process are important; they can be found
at www.siggraph.org/s2001/review on the Web.
\[\sqrt[3]{-729x^3}\] simplify
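A worked simplification, for reference (our solution; the thread above does not include one):

```latex
\sqrt[3]{-729x^{3}} = \sqrt[3]{(-9x)^{3}} = -9x
```

since (-9x)^3 = (-9)^3 x^3 = -729x^3.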
I forgot to ask complete question, if I want to find the wavlength that gives rise to the third line, would the wavelength occur on n=3 or between n=1 and n=3?
in order to an atom to emit light, its electrons have to drop between levels
thanks. So the right answer is between n=3 and n=1? Thanks.
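For reference — assuming the question concerns a hydrogen emission line, the wavelength of the n=3 → n=1 transition follows from the Rydberg formula (our illustration, not part of the original thread):

```python
RYDBERG = 1.097e7  # Rydberg constant for hydrogen, in 1/m

def transition_wavelength_nm(n_lo, n_hi):
    """Wavelength (nm) of the photon emitted when an electron drops
    from level n_hi to n_lo: 1/lambda = R * (1/n_lo^2 - 1/n_hi^2)."""
    inv_lambda = RYDBERG * (1.0 / n_lo**2 - 1.0 / n_hi**2)
    return 1e9 / inv_lambda

print(round(transition_wavelength_nm(1, 3), 1))  # ≈ 102.6 (far ultraviolet)
```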
 The Graduate School of Mathematical Sciences was established in 1992 in order to foster a culture of mathematics and mathematical sciences from an international standpoint, as well as to
contribute to the overall development of society. It is a unified graduate school for mathematics and related areas and the Graduate School of Mathematical Sciences is in full charge of mathematics
education at the University of Tokyo.
  Each year we accept 53 graduate students into our Master's program and 32 into our Ph.D. program. The Graduate School offers courses in all fields of the mathematical sciences, from algebra,
geometry, and analysis to applied mathematics. Courses and seminars are given in English when students who do not speak Japanese are enrolled. Besides these courses, we invite many researchers
from industry and private universities to teach application-oriented subjects, including economics, finance, and information technology. We offer courses that train students in the actuarial and
statistical sciences, which are directly connected to real-world practice. Students conduct research in an independent and fulfilling environment, supported from time to time by their thesis
advisors. They study as independent scholars with free and ample access to various facilities; for example, the library of the Graduate School of Mathematical Sciences is one of the best
mathematics libraries in the world. Graduates of the School work at universities and colleges, research institutes, government ministries, finance and insurance institutions, information
technology companies, and so forth, contributing to the development of society in many fields. The Graduate School grew out of two independent departments of mathematics within the University of
Tokyo: one in the Faculty of Science on the Hongo campus and the other in the College of Arts and Sciences on the Komaba campus. All faculty members of these two departments joined the new
graduate school in 1992; hence this year, 2012, is the 20th anniversary of the establishment of the Graduate School of Mathematical Sciences. The Graduate School's building has stood at the
southeast edge of the Komaba campus since 1995.
  Presently, the number of tenured professors and associate professors of the Graduate School of Mathematical Sciences is about 55. Besides tenured professors and associate professors, we have
visiting professors and overseas visiting professors. Members of the Graduate School conduct leading-edge research in all fields of mathematical sciences, from algebra, geometry, and analysis to
applied mathematics. The long tradition of advanced scholarly research since before the merger of the two departments of mathematics helps the Graduate School of Mathematical Sciences function as an
international research center. We host over 150 researchers from around the world each year and there are many overseas exchange students in the Graduate School. Thus we are truly an international
mathematics hub. In 2005, we established the Tambara Institute of Mathematical Sciences in Gunma Prefecture, a mountain villa devoted to seminars and summer schools with a full hostel service, as a
venue for international researchers to meet and interact.
  During these 20 years, we have experienced a new stage in the evolution of mathematics. There has been tremendous progress in areas where pure mathematics and other branches of science interact,
and mathematical knowledge has become the backbone of various sciences such as physics, biology, chemistry, information theory, engineering, and economics. These developments show the importance of
collaboration with other branches of science as well as with society.
  We are collaborating intimately with the Kavli Institute for the Physics and Mathematics of the Universe (Kavli-IPMU), which is the first institute of the Todai Institutes for Advanced Study
(TODIAS). It was founded in 2007 under the World Premier International Research Center Initiative (WPI) of the Japanese government. It received a very high international evaluation and became a
member of the Kavli institutes this year, 2012.
  Within the University of Tokyo, the department of mathematics has a long history. Founded in 1881, it has always maintained its long tradition of high academic standards. It has kept a rich
library collection and a common research room, and it has succeeded in sending graduates to fulfill a wide variety of roles in society. In keeping with these fine traditions, the Graduate School
of Mathematical Sciences aims to fulfill its social duty by offering excellent education and by producing outstanding research results. All members of the Graduate School of Mathematical Sciences
will make every effort to meet these exciting challenges.
cone and the plane intersection
June 27th 2013, 11:14 PM #1
The curve is the intersection of the cone z = √(x² + y²) and the plane 3z = y + 4.
How do I find the intersection? I know it sounds like a stupid question.
Do I put x, y, z in terms of cylindrical coordinates or spherical?
So far I am getting some value in terms of r.
Guidance is much appreciated.
June 27th 2013, 11:33 PM #2
Re: cone and the plane intersection
You already have z in terms of x and y; substitute it into the second equation to give the relation that is your intersection...
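Following that hint with a quick numeric check (not part of the original thread): substituting z = (y + 4)/3 into z = √(x² + y²) and squaring gives 9x² + 8y² - 8y - 16 = 0, i.e. the ellipse x²/2 + (y - 1/2)²/(9/4) = 1 in the xy-plane. The sketch below parametrizes that ellipse, lifts it back into the plane, and verifies that sample points lie on both surfaces:

```python
import math

# Cone: z = sqrt(x^2 + y^2); plane: 3z = y + 4.
# Substituting z = (y + 4)/3 and squaring gives
#   (y + 4)^2 / 9 = x^2 + y^2  =>  9x^2 + 8y^2 - 8y - 16 = 0,
# i.e. the ellipse x^2/2 + (y - 1/2)^2/(9/4) = 1 in the xy-plane.

def on_curve(t):
    """Parametrize the ellipse and lift it back into 3D via the plane."""
    x = math.sqrt(2) * math.cos(t)
    y = 0.5 + 1.5 * math.sin(t)
    z = (y + 4) / 3          # the point lies in the plane by construction
    return x, y, z

# Sample points satisfy BOTH surface equations.
for t in (0.0, 0.7, 2.1, 4.5):
    x, y, z = on_curve(t)
    assert abs(z - math.hypot(x, y)) < 1e-9   # on the cone
    assert abs(3 * z - (y + 4)) < 1e-9        # on the plane
```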
Abstract Algebra - Define and Prove
December 15th 2012, 09:26 AM
Abstract Algebra - Define and Prove
Attachment 26247
This looks like an application of the First Isomorphism Theorem. I asked one of my classmates how she would go about doing this, and she said that I would want to define a homomorphism f: ℝ^2 -> ℝ
where f(ℝ^2) = ℝ and ker(f) = N. She also said that, given my definition for N, she would suggest using f(x,y) = 3x - 4y as my homomorphism.
I am having trouble following her advice. Can anyone further explain what I should do to tackle this problem? Thanks!
December 15th 2012, 09:34 AM
Re: Abstract Algebra - Define and Prove
What's the problem? Using the first isomorphism theorem, following her advice, you have that ℝ^2/N ≅ ℝ. It is clear that ker(f) is N.
December 15th 2012, 10:03 AM
Re: Abstract Algebra - Define and Prove
Remember the isomorphism theorem says: if G, H are groups and $\phi:G \to H$ is a homomorphism with $\phi(G) = H$ (every element in H gets hit under the homomorphism), then $G/\ker(\phi) \cong H$.
Clearly, you have your homomorphism from $\mathbb{R}^2 \to \mathbb{R}$, $\phi(x,y) = 3x-4y$. You also have your kernel, which is your set N; all you need to show is that the homomorphism is
surjective (onto) $\mathbb{R}$.
December 15th 2012, 10:45 AM
Re: Abstract Algebra - Define and Prove
I have the following:
Define mapping f: ℝ^2 -> ℝ as follows:
f(x,y) = 3x - 4y
Claim: f is a homomorphism
Pick any (x,y) in ℝ^2. Then f(x,y) = f(x)*f(y) = 3x - 4y = (x+x+x)-(y+y+y+y) = x+x+x-y-y-y-y = f(x*y). Hence, it preserves the operation.
Claim: f is onto.
Pick any (x,y) in ℝ^2 such that (x,y) = (1,0). Then...
I don't think I'm going the right way on showing f is onto.
December 15th 2012, 11:08 AM
Re: Abstract Algebra - Define and Prove
Your claim of showing f is a homomorphism is a bit off.
An element of your domain $x \in \mathbb{R}^2$ is of the form $x = (x_0, y_0)$. So to show this is a group homomorphism, take $x, \bar{x} \in \mathbb{R}^2$, so $x = (x_1, y_1), \bar{x} = (x_2, y_2)$.
Then $\phi(x * \bar{x}) = \phi( (x_1 + x_2, y_1 + y_2) ) = \phi((x_1, y_1)) + \phi((x_2, y_2))$
As for onto:
Say I pick an element $p \in \mathbb{R}$; I just want an $x = (x_0, y_0) \in \mathbb{R}^2$ such that $\phi(x) = 3x_0 - 4y_0 = p$. Doing some manipulation we get $x_0 = \frac{p + 4y_0}{3}$, and
clearly $(\frac{p + 4y_0}{3}, y_0) \in \mathbb{R}^2$. So $\phi(x) = \phi((\frac{p + 4y_0}{3}, y_0)) = p$ for any p they picked, so we can indeed hit any element in $\mathbb{R}$, so it is onto.
December 15th 2012, 11:13 AM
Re: Abstract Algebra - Define and Prove
Thank you, thank you, thank you, jackncoke. You have a way of explaining things. Thanks!
December 15th 2012, 12:18 PM
Re: Abstract Algebra - Define and Prove
E-mailed my professor and she said the following:
Second part is not correct. You said immediately to take x such that f(x)=p. But you have to show that such an x actually exists, you can't assert it as true.
You need to start by taking an element b and then actually finding an x such that f(x)=b.
December 15th 2012, 12:23 PM
Re: Abstract Algebra - Define and Prove
R^2 is a vector space. the map f(x,y) = 3x - 4y is a linear map. linear maps are "vector space homomorphisms" meaning, among other things, that they are group homomorphisms of the underlying
abelian group of the vector space of their domain.
N is the null space of a linear map, meaning it is also the kernel of a group homomorphism. by the rank-nullity theorem (this is the "vector space analogue" of the first isomorphism theorem for
groups), 2 = dim(N) + dim(im(f)). if we can find ONE non-zero pair (x,y) in N and f(x,y) in im(f), this means this sum is 2 = 1 + 1.
clearly, (4,3) is in N, and is non-zero. also, f(1,1) = 3 - 4 = -1. thus dim(N) = 1, and dim(im(f)) = 1. since im(f) is a 1-dimensional subspace of R, it is all of R: that is, f is onto. hence AS
VECTOR SPACES:
R^2/N and R are isomorphic, meaning they are isomorphic as (abelian) groups.
now one can do this directly (as jakncoke did) solely within "the group arena":
f((x,y) + (x',y')) = f((x+x',y+y')) = 3(x+x') -4(y+y') = 3x + 3x' - 4y - 4y' = 3x - 4y + 3x' - 4y' = f((x,y)) + f((x',y')) <--f is a homomorphism
f((4-a,3-a)) = 3(4-a) - 4(3-a) = 12 - 3a - 12 + 4a = a <--f is onto
so why did i use vector spaces? to show that group theory's usefulness actually extends BEYOND groups, and that sometimes "extra structure" actually makes things EASIER.
December 15th 2012, 01:06 PM
Re: Abstract Algebra - Define and Prove
E-mailed my professor and she said the following:
Second part is not correct. You said immediately to take x such that f(x)=p. But you have to show that such an x actually exists, you can't assert it as true.
You need to start by taking an element b and then actually finding an x such that f(x)=b.
That is what I did! Look, I took a $p \in \mathbb{R}$ (any random number p), and I said the element $(\frac{p+4y_0}{3}, y_0)$ for some arbitrary number $y_0$; let's just say $y_0 = 1$. So
$(\frac{p+4}{3}, 1) \in \mathbb{R}^2$. The said element: $\phi( (\frac{p+4}{3}, 1) ) = 3\cdot\frac{p+4}{3} - 4 = p$. I just took an element from my domain group, and I showed you that that
element maps to the arbitrary element you gave me from the codomain group.
December 15th 2012, 01:08 PM
Re: Abstract Algebra - Define and Prove
Okay, maybe I didn't make myself clear with my professor. Thank you for explaining once again.
December 15th 2012, 02:03 PM
Re: Abstract Algebra - Define and Prove
some other values you might have used:
(x,y) = (-p,-p)
(x,y) = (p/3,0)
(x,y) = (0,-p/4)
in general: any vector (x,y) of the form:
a(4,3) - (p,p) will do.
jakncoke's solution has a = (p+1)/3 (or more generally: (p+y[0])/3), my solution has a = 1. i just don't like fractions.
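For anyone who wants to sanity-check the thread numerically, here is a small Python sketch (not from the thread itself) of f(x, y) = 3x - 4y: additivity, surjectivity via jakncoke's element, and the kernel spanned by (4, 3):

```python
import random

# The homomorphism from the thread: f : R^2 -> R, f(x, y) = 3x - 4y.
def f(v):
    x, y = v
    return 3*x - 4*y

random.seed(0)

# Homomorphism: f(u + v) = f(u) + f(v) for the additive groups.
for _ in range(100):
    u = (random.uniform(-10, 10), random.uniform(-10, 10))
    v = (random.uniform(-10, 10), random.uniform(-10, 10))
    s = (u[0] + v[0], u[1] + v[1])
    assert abs(f(s) - (f(u) + f(v))) < 1e-9

# Surjectivity: for any target p, the element ((p + 4*y0)/3, y0) maps to p.
for _ in range(100):
    p = random.uniform(-100, 100)
    y0 = random.uniform(-10, 10)
    assert abs(f(((p + 4*y0)/3, y0)) - p) < 1e-9

# Kernel N: every multiple of (4, 3) maps to 0.
for a in (-2.5, 0.0, 1.0, 7.0):
    assert f((4*a, 3*a)) == 0
```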
Physics Archive | January 27, 2006 | Chegg.com
• Anonymous asked
A person is moving in one dimension, with coordinate x, starting at x = 0. Assume
i) that she/he moves in steps of length l;
ii) that the probability that she/he takes a step to the left is p, while the probability of taking a step to the right is q = 1 - p;
iii) that all the steps are independent (i.e. the probability of taking the (n+1)th step left or right is independent of what the previous n steps were).
1. Find the probability P(N,m) that after N successive steps the person will be at position x = ml, with m greater than or equal to 0, i.e. m steps to the right of the origin.
2. Find the average number of steps to the left, <nL>, taken after N steps.
3. Find the average number of steps to the right, <nR>, taken after N steps. How is <nR> related to <nL>?
4. Find the average of the difference of the right minus the left steps, <nR - nL>, taken after N steps.
5. Find the average of the square of the number of steps to the right, <nR^2>.
6. Find the dispersion ΔnR of nR. Compare it to the average value. Comment on the large-N limit of the ratio ΔnR/<nR>.
7. Find the mean square displacement from the origin after N steps, i.e. <(nR - nL)^2>.
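The requested quantities can be checked against the binomial distribution directly; the sketch below (not part of the original question) uses the standard closed forms <nR> = Nq, (ΔnR)² = Npq, and <(nR - nL)²> = 4Npq + N²(q - p)²:

```python
from math import comb

# N independent steps, left with probability p, right with q = 1 - p.
# Final position x = m*l with m = n_R - n_L, so n_R = (N + m) / 2.
def prob_at(N, m, p):
    """P(N, m): probability of ending m steps to the right of the origin."""
    if (N + m) % 2 != 0 or abs(m) > N:
        return 0.0
    n_right = (N + m) // 2
    q = 1 - p
    return comb(N, n_right) * q**n_right * p**(N - n_right)

N, p = 20, 0.3
q = 1 - p
support = range(-N, N + 1)

mean_nR = sum(((N + m) // 2) * prob_at(N, m, p) for m in support)
var_nR  = sum((((N + m) // 2) - mean_nR)**2 * prob_at(N, m, p) for m in support)
msd     = sum(m**2 * prob_at(N, m, p) for m in support)

assert abs(sum(prob_at(N, m, p) for m in support) - 1) < 1e-12
assert abs(mean_nR - N*q) < 1e-9                      # <n_R> = Nq
assert abs(var_nR - N*p*q) < 1e-9                     # (Δn_R)^2 = Npq
assert abs(msd - (4*N*p*q + (N*(q - p))**2)) < 1e-9   # <(n_R - n_L)^2>
```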
Gainesville, GA Math Tutor
Find a Gainesville, GA Math Tutor
...It doesn't have to be boring and tedious. I like to make silly characters out of the concepts from the book and it has proved to help students understand and remember them better. I have been
a school teacher for the last 21 years.
21 Subjects: including prealgebra, algebra 1, English, reading
...I have an undergraduate degree in Finance and a MBA that covered all aspects of business; Marketing, Economics, Finance, Accounting and Management. I played 4 years of high school tennis in
the State of Georgia. In my sophomore year we were 3rd in the State for 3AAA schools.
18 Subjects: including algebra 2, algebra 1, prealgebra, finance
I hold a bachelor's degree in Secondary Education and a master's degree in Education. I am certified to teach in both PA and GA. I have teaching experience at both the Middle School and High
School level in both private and public schools.
10 Subjects: including linear algebra, logic, algebra 1, algebra 2
...The past four years of college have really helped me determine which subjects I love to teach. These include psychology, language arts, reading, writing, English, Spanish, and science.
Personally, math requires more of a mental effort, but since it is one of the hardest subjects to master, it is important to get as much outside help as possible!
46 Subjects: including algebra 1, algebra 2, biology, ACT Math
Hi, My name is Brittany and I tutor English, language arts and math for all grades. Currently, I am teaching middle school reading and language arts. I have a master's in teaching from Georgia
State University.
13 Subjects: including algebra 1, SAT math, prealgebra, English
Patent US6999928 - Method and apparatus for speaker identification using cepstral covariance matrices and distance metrics
This invention relates to speaker identification using cepstral covariance matrices and distance metrics.
Automatic verification or identification of a person by their speech is attracting greater interest as an increasing number of business transactions are being performed over the phone, where
automatic speaker identification is desired or required in many applications. In the past several decades, three techniques have been developed for speaker recognition, namely (1) Gaussian mixture
model (GMM) methods, (2) vector quantization (VQ) methods, and (3) various distance measure methods. The invention is directed to the last class of techniques.
The performance of current automatic speech and speaker recognition technology is quite sensitive to certain adverse environmental conditions, such as background noise, channel distortions, speaker
variations, and the like. The handset distortion is one of the main factors that contribute to degradation of the speech and speaker recognizer. In the current speech technology, the common way to
remove handset distortion is the cepstral mean normalization, which is based on the assumption that handset distortion is linear.
In the art of distance metrics speech identification, it is well known that covariance matrices of speech feature vectors, or cepstral vectors, carry a wealth of information on speaker
characteristics. Cepstral vectors are generally obtained by inputting a speech signal and dividing the signal into segments, typically 10 milliseconds each. A fast Fourier transform is performed on
each segment and the energy calculated for each of N frequency bands. The logarithm of the energy for each band is subject to a cosine transformation, thereby yielding a cepstral vector having N
elements. The frequency bands are not usually equally spaced, but rather are scaled, such as mel-scaled, for example, as by the equation mf=1125 log(0.0016f+1), where f is the frequency in Hertz and
mf is the mel-scaled frequency.
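As a quick numeric illustration of the mel scaling quoted above (assuming the logarithm is natural; the constants here are the ones given in this text, which differ slightly from the more common 2595·log10(1 + f/700) form):

```python
import math

# Mel scaling as quoted in the text: m_f = 1125 * log(0.0016*f + 1),
# taking log as the natural logarithm.
def mel(f_hz):
    return 1125.0 * math.log(0.0016 * f_hz + 1.0)

# The scale compresses high frequencies: doubling f less than doubles m_f,
# which is why the bands are spaced more widely at the top of the spectrum.
assert mel(0.0) == 0.0
assert mel(2000.0) < 2 * mel(1000.0)
```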
Once a set of K cepstral vectors, c_1, c_2, . . . , c_K (one per frame, each with N elements), has been obtained, a covariance matrix may be derived by the equation:

S = [(c_1 − m)^T (c_1 − m) + (c_2 − m)^T (c_2 − m) + . . . + (c_K − m)^T (c_K − m)] / K   (1)

where T indicates a transposed matrix, m is the mean vector m = (c_1 + c_2 + . . . + c_K)/K, K is the number of frames of the speech signal, and S is the N×N covariance matrix.
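A minimal pure-Python sketch of equation (1) (illustrative only; the row-vector convention follows the text, so each outer product (c_k − m)^T(c_k − m) is an N×N matrix):

```python
# Covariance matrix of a set of N-dimensional cepstral (row) vectors,
# one vector per frame, per equation (1).
def covariance(vectors):
    K = len(vectors)                  # number of frames
    N = len(vectors[0])               # cepstral order
    m = [sum(c[i] for c in vectors) / K for i in range(N)]   # mean vector
    S = [[0.0] * N for _ in range(N)]
    for c in vectors:
        d = [c[i] - m[i] for i in range(N)]                  # c_k - m
        for i in range(N):
            for j in range(N):
                S[i][j] += d[i] * d[j] / K
    return S

# Tiny example: three 2-dimensional "cepstral" vectors.
S = covariance([[1.0, 2.0], [3.0, 0.0], [2.0, 1.0]])
# S is symmetric, with S[0][0] = S[1][1] = 2/3 and S[0][1] = -2/3.
```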
Let S and Σ be covariance matrices of cepstral vectors of clips of the testing and training speech signals, respectively; that is to say, S is the matrix for the sample of speech that we wish to
identify and Σ is the matrix for the voice signature of a known individual. If the sample and signature speech signals are identical, then S = Σ, which is to say that SΣ^−1 is an identity matrix,
and the speaker is thereby identified as the known individual. Therefore, the matrix SΣ^−1 is a measure of the similarity of the two voice clips and is referred to as the "similarity matrix" of
the two speech signals.
The arithmetic, A, geometric, G, and harmonic, H, means of the eigenvalues $\lambda_i$ (i = 1, . . . , N) of the similarity matrix are defined as follows:

$A(\lambda_1, \ldots, \lambda_N) = \frac{1}{N} \sum_{i=1}^{N} \lambda_i = \frac{1}{N} \mathrm{Tr}(S\Sigma^{-1})$   (2a)

$G(\lambda_1, \ldots, \lambda_N) = \left( \prod_{i=1}^{N} \lambda_i \right)^{1/N} = \left( \mathrm{Det}(S\Sigma^{-1}) \right)^{1/N}$   (2b)

$H(\lambda_1, \ldots, \lambda_N) = N \left( \sum_{i=1}^{N} \frac{1}{\lambda_i} \right)^{-1} = N \left( \mathrm{Tr}(\Sigma S^{-1}) \right)^{-1}$   (2c)
where Tr( ) is the trace of a matrix and Det( ) is the determinant of a matrix.
These values can be obtained without explicit calculation of the eigenvalues and are therefore significantly efficient in computation. Also, they satisfy the following properties:

$A(\frac{1}{\lambda_1}, \ldots, \frac{1}{\lambda_N}) = \frac{1}{H(\lambda_1, \ldots, \lambda_N)}$   (3a)

$G(\frac{1}{\lambda_1}, \ldots, \frac{1}{\lambda_N}) = \frac{1}{G(\lambda_1, \ldots, \lambda_N)}$   (3b)

$H(\frac{1}{\lambda_1}, \ldots, \frac{1}{\lambda_N}) = \frac{1}{A(\lambda_1, \ldots, \lambda_N)}$   (3c)
Various distance measures have been constructed based upon these mean values, primarily for purposes of speaker identification, the most widely known being:

$d_1(S, \Sigma) = \frac{A}{H} - 1$   (4a)

$d_2(S, \Sigma) = \frac{A}{G} - 1$   (4b)

$d_3(S, \Sigma) = \frac{A^2}{GH} - 1$   (4c)

$d_4(S, \Sigma) = A - \log(G) - 1$   (4d)
wherein if the similarity matrix is positive definite, the mean values satisfy $A \geq G \geq H$, with equality if and only if $\lambda_1 = \lambda_2 = \ldots = \lambda_N$. Therefore, all of the
above distance measures satisfy the positivity condition. However, if we exchange S and Σ (or the positions of the sample and signature speech signals), $S\Sigma^{-1}$ becomes $\Sigma S^{-1}$ and
each $\lambda_i$ becomes $1/\lambda_i$, and we find that $d_1$ satisfies the symmetry property while $d_2$, $d_3$, and $d_4$ do not. The symmetry property is a basic mathematical requirement of
distance metrics; therefore $d_1$ is generally in more widespread use than the others.
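The means and d₁ can be checked with a short numeric sketch (not from the patent; the eigenvalues are supplied directly rather than extracted from SΣ⁻¹):

```python
import math

# Arithmetic, geometric and harmonic means of the eigenvalues (eqs. 2a-2c).
def means(lams):
    N = len(lams)
    A = sum(lams) / N
    G = math.prod(lams) ** (1.0 / N)
    H = N / sum(1.0 / l for l in lams)
    return A, G, H

def d1(lams):
    A, _, H = means(lams)
    return A / H - 1.0

lams = [0.5, 1.0, 2.0, 4.0]
A, G, H = means(lams)
assert A >= G >= H                    # the A >= G >= H inequality
assert d1([1.0, 1.0, 1.0]) == 0.0     # identical matrices give distance 0

# Symmetry: exchanging S and Sigma replaces each eigenvalue by 1/lambda,
# and d1 is unchanged (while d2, d3, d4 would not be).
swapped = [1.0 / l for l in lams]
assert abs(d1(lams) - d1(swapped)) < 1e-9
```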
As stated, the cepstral mean normalization assumes linear distortion, but in fact the distortion is not linear. When applied to cross-handset speaker identification (meaning that the handset used to
create the signature matrices is different than the one used for the sample) using the Lincoln Laboratory Handset Database (LLHD), the cepstral mean normalization technique has an error rate in
excess of about 20%. Consider that the error rate for same-handset speaker identification is only about 7%, and it can be seen that channel distortion caused by the handset is not linear. What is
needed is a method to remove the nonlinear components of handset distortion.
Disclosed is a method of automated speaker identification, comprising receiving a sample speech input signal from a sample handset; deriving a cepstral covariance sample matrix from said first sample
speech signal; calculating, with a distance metric, all distances between said sample matrix and one or more cepstral covariance signature matrices; determining if the smallest of said distances is
below a predetermined threshold value; and wherein said distance metric is selected from

$d_5(S, \Sigma) = A + \frac{1}{H} - 2$, $d_6(S, \Sigma) = \left(A + \frac{1}{H}\right)\left(G + \frac{1}{G}\right) - 4$, $d_7(S, \Sigma) = \frac{A}{2H}\left(G + \frac{1}{G}\right) - 1$, $d_8(S, \Sigma) = \frac{A + \frac{1}{H}}{G + \frac{1}{G}} - 1$, $d_9(S, \Sigma) = \frac{A}{G} + \frac{G}{H} - 2$,

fusion derivatives thereof, and fusion derivatives thereof with $d_1(S, \Sigma) = \frac{A}{H} - 1$.
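The proposed metrics can be exercised numerically. In the sketch below, the fraction placement in d₇ and d₈ follows one plausible reading of the formulas (the extracted text is ambiguous), chosen so that every metric is symmetric under λᵢ → 1/λᵢ, positive, and zero when S = Σ:

```python
import math

def means(lams):
    N = len(lams)
    A = sum(lams) / N
    G = math.prod(lams) ** (1.0 / N)
    H = N / sum(1.0 / l for l in lams)
    return A, G, H

# The five proposed metrics (reconstructed reading; see the caveat above).
def d5(l): A, G, H = means(l); return A + 1/H - 2
def d6(l): A, G, H = means(l); return (A + 1/H) * (G + 1/G) - 4
def d7(l): A, G, H = means(l); return A / (2*H) * (G + 1/G) - 1
def d8(l): A, G, H = means(l); return (A + 1/H) / (G + 1/G) - 1
def d9(l): A, G, H = means(l); return A/G + G/H - 2

lams = [0.25, 0.8, 1.3, 5.0]
inv = [1.0 / l for l in lams]
for d in (d5, d6, d7, d8, d9):
    assert abs(d(lams) - d(inv)) < 1e-9   # symmetry under S <-> Sigma
    assert d(lams) > 0                    # positivity (eigenvalues unequal)
    assert abs(d([1.0, 1.0])) < 1e-12     # zero when S = Sigma
```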
In another aspect of the invention, the method further comprises identifying said sample handset; identifying a training handset used to derive each said signature matrix; wherein for each said
signature matrix, an adjusted sample matrix is derived by adding to said sample matrix a distortion matrix comprising distortion information for said training handset used to derive said signature
matrix; and wherein for each signature matrix, an adjusted signature matrix is derived by adding to each said signature matrix a distortion matrix comprising distortion information for said sample
In another aspect of the invention, the step of identifying said sample handset further comprises calculating, with a distance metric, all distances between said sample matrix and one or more
cepstral covariance handset matrices, wherein each said handset matrix is derived from a plurality of speech signals taken from different speakers through the same handset; and determining if the
smallest of said distances is below a predetermined threshold value.
In another aspect of the invention, said distance metric satisfies symmetry and positivity conditions.
In another aspect of the invention, said distance metric is selected from $d_1(S, \Sigma) = \frac{A}{H} - 1$, $d_5(S, \Sigma) = A + \frac{1}{H} - 2$, $d_6(S, \Sigma) = \left(A + \frac{1}{H}\right)\left(G + \frac{1}{G}\right) - 4$, $d_7(S, \Sigma) = \frac{A}{2H}\left(G + \frac{1}{G}\right) - 1$, $d_8(S, \Sigma) = \frac{A + \frac{1}{H}}{G + \frac{1}{G}} - 1$, $d_9(S, \Sigma) = \frac{A}{G} + \frac{G}{H} - 2$, and fusion derivatives thereof.
In another aspect of the invention, the step of identifying said training handset for each said signature matrix further comprises calculating, with a distance metric, all distances between said
signature matrix and one or more cepstral covariance handset matrices, wherein each said handset matrix is derived from a plurality of speech signals taken from different speakers through the same
handset; and determining if the smallest of said distances is below a predetermined threshold value.
In another aspect of the invention, said distance metric satisfies symmetry and positivity conditions.
In another aspect of the invention, said distance metric is selected from $d_1(S, \Sigma) = \frac{A}{H} - 1$, $d_5(S, \Sigma) = A + \frac{1}{H} - 2$, $d_6(S, \Sigma) = \left(A + \frac{1}{H}\right)\left(G + \frac{1}{G}\right) - 4$, $d_7(S, \Sigma) = \frac{A}{2H}\left(G + \frac{1}{G}\right) - 1$, $d_8(S, \Sigma) = \frac{A + \frac{1}{H}}{G + \frac{1}{G}} - 1$, $d_9(S, \Sigma) = \frac{A}{G} + \frac{G}{H} - 2$, and fusion derivatives thereof.
Disclosed is a method of automated speaker identification, comprising receiving a sample speech input signal from a sample handset; deriving a cepstral covariance sample matrix from said first sample
speech signal; calculating, with a distance metric, all distances between an adjusted sample matrix and one or more adjusted cepstral covariance signature matrices, each said signature matrix derived
from training speech signals input from a training handset; determining if the smallest of said distances is below a predetermined threshold value; wherein for each said signature matrix, said
adjusted sample matrix is derived by adding to said sample matrix a distortion matrix comprising distortion information for said training handset used to derive said signature matrix; and wherein
each said adjusted signature matrix is derived by adding to each said signature matrix a distortion matrix comprising distortion information for said sample handset.
In another aspect of the invention, said distance metric satisfies symmetry and positivity conditions.
In another aspect of the invention, said distance metric is selected from $d_1(S, \Sigma) = \frac{A}{H} - 1$, $d_5(S, \Sigma) = A + \frac{1}{H} - 2$, $d_6(S, \Sigma) = \left(A + \frac{1}{H}\right)\left(G + \frac{1}{G}\right) - 4$, $d_7(S, \Sigma) = \frac{A}{2H}\left(G + \frac{1}{G}\right) - 1$, $d_8(S, \Sigma) = \frac{A + \frac{1}{H}}{G + \frac{1}{G}} - 1$, $d_9(S, \Sigma) = \frac{A}{G} + \frac{G}{H} - 2$, and fusion derivatives thereof.
In another aspect of the invention, said sample handset is identified by a method comprising calculating, with a distance metric, all distances between said sample matrix and one or more cepstral
covariance handset matrices, wherein each said handset matrix is derived from a plurality of speech signals taken from different speakers through the same handset; and determining if the smallest of
said distances is below a predetermined threshold value.
In another aspect of the invention, said distance metric satisfies symmetry and positivity conditions.
In another aspect of the invention, said distance metric is selected from $d_1(S, \Sigma) = \frac{A}{H} - 1$, $d_5(S, \Sigma) = A + \frac{1}{H} - 2$, $d_6(S, \Sigma) = \left(A + \frac{1}{H}\right)\left(G + \frac{1}{G}\right) - 4$, $d_7(S, \Sigma) = \frac{A}{2H}\left(G + \frac{1}{G}\right) - 1$, $d_8(S, \Sigma) = \frac{A + \frac{1}{H}}{G + \frac{1}{G}} - 1$, $d_9(S, \Sigma) = \frac{A}{G} + \frac{G}{H} - 2$, and fusion derivatives thereof.
In another aspect of the invention, for each said signature matrix, said training handset is identified by a method comprising calculating, with a distance metric, all distances between said
signature matrix and one or more cepstral covariance handset matrices, wherein each said handset matrix is derived from a plurality of speech signals taken from different speakers through the same
handset; and determining if the smallest of said distances is below a predetermined threshold value.
In another aspect of the invention, said distance metric satisfies symmetry and positivity conditions.
In another aspect of the invention, said distance metric is selected from $d_1(S, \Sigma) = \frac{A}{H} - 1$, $d_5(S, \Sigma) = A + \frac{1}{H} - 2$, $d_6(S, \Sigma) = \left(A + \frac{1}{H}\right)\left(G + \frac{1}{G}\right) - 4$, $d_7(S, \Sigma) = \frac{A}{2H}\left(G + \frac{1}{G}\right) - 1$, $d_8(S, \Sigma) = \frac{A + \frac{1}{H}}{G + \frac{1}{G}} - 1$, $d_9(S, \Sigma) = \frac{A}{G} + \frac{G}{H} - 2$, and fusion derivatives thereof.
Disclosed is a program storage device, readable by machine, tangibly embodying a program of instructions executable by the machine to perform method steps for automated speaker identification, said
method steps comprising receiving a sample speech input signal from a sample handset; deriving a cepstral covariance sample matrix from said first sample speech signal; calculating, with a distance
metric, all distances between said sample matrix and one or more cepstral covariance signature matrices; determining if the smallest of said distances is below a predetermined threshold value; and
wherein said distance metric is selected from $d 5 ( S , Σ ) = A + 1 H - 2 , d 6 ( S , Σ ) = ( A + 1 H ) ( G + 1 G ) - 4 , d 7 ( S , Σ ) = A 2 H ( G + 1 G ) - 1 , d 8 ( S , Σ ) = ( A +
1 H ) ( G + 1 G ) - 1 , d 9 ( S , Σ ) = A G + G H - 2 ,$
fusion derivatives thereof, and fusion derivatives thereof with $d 1 ( S , Σ ) = A H - 1.$
Disclosed is a program storage device, readable by machine, tangibly embodying a program of instructions executable by the machine to perform method steps for automated speaker identification, said
method steps comprising receiving a sample speech input signal from a sample handset; deriving a cepstral covariance sample matrix from said first sample speech signal; calculating, with a distance
metric, all distances between an adjusted sample matrix and one or more adjusted cepstral covariance signature matrices, each said signature matrix derived from training speech signals input from a
training handset; determining if the smallest of said distances is below a predetermined threshold value; wherein for each said signature matrix, said adjusted sample matrix is derived by adding to
said sample matrix a distortion matrix comprising distortion information for said training handset used to derive said signature matrix; and wherein each said adjusted signature matrix is derived by
adding to each said signature matrix a distortion matrix comprising distortion information for said sample handset.
FIG. 1 is a flowchart of an embodiment of the invention.
FIG. 2 is a graph of experimental data.
FIG. 3 is a graph of experimental data.
Referring to FIG. 1, the process of the invention begins at node 10, wherein a cepstral covariance sample matrix S is generated from a speech signal received from a test subject whom the
practitioner of the invention wishes to identify. The derivation may be by any one of a number of known methods, such as those described in A. Cohen et al., On text-independent speaker identification using
automatic acoustic segmentation, ICASSP, pp. 293–296, 1985; and S. B. Davis et al., Comparison of parametric representations for monosyllabic word recognition in continuously spoken sentences, IEEE,
1980, the disclosures of both of which are incorporated by reference herein in their entirety.
At node 20, the optional step of determining the test subject's handset is executed. In real-world applications, it is most probable that the handset used to create the signature matrices is
different from the one used to receive the test subject's voice for the sample matrix. The distortion added to the matrices by the different handsets blurs the similarity of the voices and
results in higher misidentification rates.
The method of determining the test subject's handset is by calculating the distances between the sample matrix and a database of handset matrices, each representative of a particular make and model
of handset. The shortest distance determines the identity of the test subject's handset. Such a method is described in commonly assigned copending U.S. patent application Wang et al., METHOD AND
APPARATUS FOR HANDSET IDENTIFICATION, filed Aug. 21, 2001, U.S. patent application Ser. No. 09/934,157, the disclosures of which are incorporated by reference herein in their entirety.
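The nearest-handset selection described here can be sketched abstractly (the function and handset names are hypothetical, and the stand-in distance merely exercises the control flow; in practice a symmetric metric over covariance matrices would be used):

```python
# Node 20 sketch: pick the handset whose matrix is nearest to the sample
# matrix under a distance d, or report no match if nothing is close enough.
def classify_handset(sample, handset_matrices, d, threshold):
    best_id, best_dist = None, float("inf")
    for handset_id, M in handset_matrices.items():
        dist = d(sample, M)
        if dist < best_dist:
            best_id, best_dist = handset_id, dist
    return best_id if best_dist < threshold else None

# Stand-in 1x1 "matrices" and absolute-difference distance, for illustration.
d = lambda a, b: abs(a - b)
handsets = {"carbon": 1.0, "electret": 3.0}
assert classify_handset(1.2, handsets, d, threshold=1.0) == "carbon"
assert classify_handset(10.0, handsets, d, threshold=1.0) is None
```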
The generation of handset matrices is performed for a particular handset by having a substantial number of different speakers provide speech samples through the handset, preferably at least ten such
samples, more preferably at least twenty. A cepstral covariance matrix is then generated for the handset from all the samples, thereby creating a handset matrix M. Because all the speakers are
different, the speaker characteristics of the covariance matrix are smeared away, leaving the handset information.
In a preferred embodiment, a database of handset matrices will be kept and updated periodically as new makes and models of handsets come on the market.
Flow now moves to optional node 30, where a database of signature matrices is accessed and the first signature matrix Σ is retrieved.
Control now flows to node 40, where the optional adjustment of the sample and signature matrices is performed. This part of the process corrects the cross-handset distortion caused by the test
subject using a different handset than that which was used to generate the signature matrix (the training handset). This is done by first adding to the sample matrix S a distortion matrix D_h
corresponding to the training handset. Of course, if it has been determined that the test subject is using the same handset as the training handset, then this step may be skipped altogether,
though executing it anyway will do no harm. Preferably, information identifying the training handset will be stored with each signature matrix for rapid identification. A slower method would be
to test the signature matrix in the same manner the sample matrix was tested at node 20.
To generate the distortion matrix, the handset matrix of the training handset is multiplied by a scaling factor I, such that:

D_h = I · M_h

where D_h is the distortion matrix for handset h and M_h is the handset matrix taken over all the speech samples for handset h. The scaling factor I will be chosen so as to provide the greatest
accuracy of speaker identification. This is determined experimentally, but may be expected to be about 0.4 to 0.8, as can be seen in FIG. 2. The graph in FIG. 2 was generated
from the example described below.
By adding the sample and distortion matrices, there is generated an adjusted sample matrix S′ that now contains information about the distortion caused by the training handset. Note that S′ already
has information about the distortion of the handset used by the test subject, because that was the handset used to generate S.
Note, however, that if this is a cross-handset situation, then the signature matrix must also be adjusted. Therefore, the signature matrix is added to the distortion matrix corresponding to the
handset detected at node 20, namely the handset being used by the test subject. Now both the adjusted sample matrix S′ and the adjusted signature matrix Σ′ have distortion information for both
handsets in addition to voice information.
Control now flows to node 50 where the distance between the adjusted sample matrix S′ and the adjusted signature matrix Σ′ is calculated. Because we have adjusted for cross-handset situations, the distance
will be a function of the difference in voice information rather than handset information. As stated above, there are four well-known distance formulae in use, as described in H. Gish, Robust
discrimination in automatic speaker identification, Proceedings ICASSP 1990, vol. 1, pp. 289–292; F. Bimbot et al., Second-order statistical measures for text-independent speaker identification, ESCA
workshop on automatic speaker recognition, identification and verification, 1994, pp. 51–54; and S. Johnson, Speaker tracking, MPhil thesis, University of Cambridge, 1997, and references therein; the
disclosures of all of which are incorporated by reference herein in their entirety. Of those, the first, d_1, is the most favored for its symmetry and positivity. To this collection may be added five
new inventive distance measures:

    d_5(S, Σ) = A + 1/H - 2                  (6a)
    d_6(S, Σ) = (A + 1/H)(G + 1/G) - 4       (6b)
    d_7(S, Σ) = (A/(2H))(G + 1/G) - 1        (6c)
    d_8(S, Σ) = (A + 1/H)/(G + 1/G) - 1      (6d)
    d_9(S, Σ) = A/G + G/H - 2                (6e)
all of which satisfy the positivity and symmetry conditions. Along with d_1, these distance metrics may be fused in any combination as described in K. R. Farrell, Discriminatory measures for speaker
recognition, Proceedings of Neural Networks for Signal Processing, 1995, and references therein, the disclosures of which are incorporated by reference herein in their entirety. The example at the
end of this disclosure demonstrates how fusion is accomplished.
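For illustration, two of these metrics and the fusion rule can be sketched in Python. The excerpt does not define A, G and H; the sketch below assumes they are the arithmetic, geometric and harmonic means of the eigenvalues of S·Σ⁻¹, a common construction for covariance-based measures, so treat this as an illustrative sketch rather than the patented method:

```python
import numpy as np

def mean_stats(S, Sigma):
    # ASSUMPTION: A, G, H are the arithmetic, geometric and harmonic
    # means of the eigenvalues of S·Σ⁻¹ (not stated in this excerpt).
    lam = np.linalg.eigvals(S @ np.linalg.inv(Sigma)).real
    A = lam.mean()
    G = np.exp(np.log(lam).mean())
    H = 1.0 / (1.0 / lam).mean()
    return A, G, H

def d5(S, Sigma):
    A, G, H = mean_stats(S, Sigma)
    return A + 1.0 / H - 2.0          # equation (6a)

def d9(S, Sigma):
    A, G, H = mean_stats(S, Sigma)
    return A / G + G / H - 2.0        # equation (6e)

def fused(d_a, d_b, a):
    # Linear fusion in the style of equation (7): (1 - a)·d_a + a·d_b
    return lambda S, Sigma: (1 - a) * d_a(S, Sigma) + a * d_b(S, Sigma)
```

Under this assumption both metrics are zero when S = Σ, symmetric in their arguments, and non-negative, matching the positivity and symmetry conditions claimed above.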
Control now flows through nodes 60 and 30 in a loop until the distances between the adjusted sample matrix S′ and every adjusted signature matrix Σ′ are calculated.
After all the signature matrices have been run through and distances calculated for all of them, control flows to node 70 where the smallest distance is examined to determine if it remains below a
predetermined threshold value. If not, control flows to termination node 80, indicating that the sampled voice failed to match any of those in the signature database. If, however, the distance is
below the chosen threshold, then control flows to termination node 90, indicating that a positive identification has been made.
The method of the invention may be embodied in a software program on a computer-readable medium and rigged so that the identification process initiates as soon as a call comes in and the person on
the line has spoken his first words.
An LLHDB (Lincoln Laboratory Handset Database) corpus of recorded utterances was used, such as is described in D. A. Reynolds, HTIMIT and LLHDB: speech corpora for the study of handset transducer
effects, ICASSP, pp. 1535–1538, May 1997, Munich, Germany, the disclosures of which are incorporated by reference herein in their entirety. Twenty-eight female and 24 male speakers were asked to
speak ten sentences extracted from the TIMIT corpus and the rainbow passage (from the LLHDB corpus) over nine handsets and a Sennheiser high-quality microphone. The average length of the spoken
rainbow passages was 61 seconds. In this experiment, the rainbow passage was used for generating signature matrices, and the remaining utterances for sample matrices. One handset chosen at random was
designated “cb1” and another “cb2”. These are the actual handsets used for same-handset and cross-handset testing.
Thirteen static mel-cepstra and 13 delta mel-cepstra were calculated over a five-frame interval. For each utterance, one full covariance matrix was calculated. The results are shown in Table I:
TABLE I

Metric   Static (%)   Static + Δ (%)   Improvement (%)
d_1      10.79        6.94             35.7
d_2      14.64        9.25             36.8
d_3      11.56        7.13             38.3
d_4      26.20        26.20            0
d_5      14.45        11.56            20
d_6      26.20        23.89            8.8
d_7      11.75        8.09             31.1
d_8      10.98        6.94             36.8
d_9      10.79        6.94             36.7
As can be seen from the data in Table I, inclusion of delta cepstra in the cepstral vectors greatly improves accuracy. It can also be seen that some of the novel distance measures yield results
comparable to that of the widely used d_1. A data fusion of d_1 and d_7 was performed to yield a new distance metric:
    d_{1-7} = (1 - a) d_1 + a d_7    (7)

where a is referred to as the fusion coefficient and 0 ≤ a ≤ 1. The linear combination of two symmetric and positive distance metrics yields a fused metric that is also symmetric and positive.
The distances between the handset matrices and the sample matrices were calculated for each distance formula and the error rates of handset detection derived, resulting in the data of Table II:

TABLE II

d_1    d_2    d_3    d_4   d_5    d_6    d_7    d_8    d_9
11.9%  12.6%  11.7%  15%   13.3%  17.5%  13.2%  11.5%  11.9%
The error rate of handset detection was graphed using distance metric d[1], as can be seen in FIG. 2. From the graph of FIG. 2, it was decided to set the distortion scaling factor I=0.4 for this
experiment. It should be noted that the setting of an optimal distortion scaling factor is optional because, as can be seen from FIG. 2, simply setting I=1.0 does not introduce that much more
error. The distortion scaling factor is largely independent of the distance metric used to graph it, but rather is dependent upon handset characteristics. Therefore, if highly accurate results are
desired, a different distortion scaling factor graph should be generated for every different handset pair.
Distortion matrices were generated for each handset and the distances calculated to the speech samples.
FIG. 3 shows the error rate of speaker identification as a function of the fusion coefficient a. The experiment demonstrates that when N=26 and a=0.25, the error rate obtained is only 6.17% for the
fused metric, as opposed to 6.94% for the d_1 metric alone, an 11% improvement.
It can therefore be seen that the invention provides good speaker identification performance with a variety of choices of symmetrical and positive distance metrics. It can be seen that the
addition of delta cepstra in the cepstral vectors can decrease the error rate by as much as 38%, and the use of data fusion with novel distance metrics further decreases error rates by about 11%. A
further reduction in error rates of about 17% may be obtained through a novel method of cross-handset adjustment.
It is to be understood that all physical quantities disclosed herein, unless explicitly indicated otherwise, are not to be construed as exactly equal to the quantity disclosed, but rather about equal
to the quantity disclosed. Further, the mere absence of a qualifier such as “about” or the like, is not to be construed as an explicit indication that any such disclosed physical quantity is an exact
quantity, irrespective of whether such qualifiers are used with respect to any other physical quantities disclosed herein.
While preferred embodiments have been shown and described, various modifications and substitutions may be made thereto without departing from the spirit and scope of the invention. Accordingly, it is
to be understood that the present invention has been described by way of illustration only, and such illustrations and embodiments as have been disclosed herein are not to be construed as limiting to
the claims.
Some Probability And Statistics On The Individual Time Trial
The Tour de France (TDF) is the marquee cycling event on the calendar for any top international pro cyclist as well as their squads. Everyone wants to do well here because it's arguably the biggest
and most glamorous stage for displaying athletic talent. The competition is tough, the fans are many, the stages are epic and the prize money is fat.
In this post, I'm trying to figure out what kind of a statistical distribution is seen in the finishing times from this year's prologue TT (Tour de France). I will also try to quantify the
probability of getting close to the fastest time trialist in the world. Alberto Contador tried pretty darn well. How well?
Only one way to find out these things.
So here's what I did.
Step 1 : I obtained Cyclingnews.com data for the TDF Prologue TT on July 4, 2009. I obtained 180 data points corresponding to all the competing cyclists.
Step 2 : To make sense of this data clutter, I put them into Microsoft Excel 2007 and ran a descriptive statistics analysis on it. Here's what I obtained. What you're about to see is powerful.
Fig 1 : Descriptive statistical figures for the finishing times of a sample set of 180 cyclists from the Tour de France 2009.
So is my sample set taken from a normal distribution or something different?
Let's try to answer that reasonably with the table above.
The mean, median and mode are very close to each other, which MAY indicate it's normally distributed. The average deviation of each cyclist from the mean was 0.63 min, or 37.8 seconds.
The minimum time belonged to Fabian Cancellara, with a blitzy 19.53 mins, whereas the maximum time belonged to Yauheni Hutarovich. I also have a Kurtosis and Skewness of 0.558 and -0.068 respectively.
Positive Kurtosis indicates a relatively appreciable peak, which makes me suspect the distribution is leptokurtic (too tall instead of normally high). The book Using Multivariate Statistics (Tabachnick & Fidell, 1996) explains that if my Kurtosis statistic is more than 2 × sqrt(24/180) = 0.73, the data is not normally distributed. Since 0.558 is less than 0.73, we're ok.
Negative Skewness indicates that my data is left skewed. The same book explains that if the magnitude of my Skewness statistic is more than 2 × sqrt(6/180) = 0.365, the distribution is not
normal. Since |-0.068| is less than 0.365, we're ok here as well.
Step 3 : The above only gives rough indications of the type of distribution. Nothing beats setting up a visual of the spread. So I made a histogram, with a chosen bin width of 0.20 min.
Fig 2 : The histogram for the data set. Please see source of data on CyclingNews.
The graph agrees with the skewness and kurtosis statistics. The data has central tendency but is ever so slightly skewed towards the left. This is the data for the best cyclists in the world. Not
really a Gaussian, but not too far away from it either. What kind of distribution it is will take more analysis and tests for goodness of fit, which I'm going to tackle some other time.
So What Does All This Mean?
Looking at the data and Fig 2, we can say that the course conditions in Monaco on that July day were such that nearly 48% of all 180 cyclists managed to get times below the average, which might mean
they were pretty fit and came well prepared (or something else worked in their favor which I can't quantify). Thus, the 48th percentile is the average time, i.e. 21 min and 30 seconds.
To put it in another fashion, the probability of a world class cyclist racing on this course in a time less than the average time is 0.48.
52% of the 180 performed under par, with about 8% of those 52 giving exactly average times. The probability is 0.52 that a cyclist is at average time or above it on this course.
We can also say that 72% of the 180 cyclists lie between one standard deviation on both sides of the average, 93% lie between two standard deviations about the average and 99% lie between 3 standard
deviations. Pretty close to the 68-95-99 rule obeyed by normal distributions eh?
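The rule-of-thumb cutoffs used above are easy to reproduce. Here's a small Python sketch (these are the simple moment-based estimators; Excel's SKEW/KURT apply extra bias corrections, so its numbers differ slightly):

```python
import math

def skew_kurtosis(xs):
    """Sample skewness and excess kurtosis via simple moment estimators
    (Excel's SKEW/KURT use bias-corrected formulas, so values differ a bit)."""
    n = len(xs)
    mean = sum(xs) / n
    m2 = sum((x - mean) ** 2 for x in xs) / n
    m3 = sum((x - mean) ** 3 for x in xs) / n
    m4 = sum((x - mean) ** 4 for x in xs) / n
    return m3 / m2 ** 1.5, m4 / m2 ** 2 - 3.0

def normality_cutoffs(n):
    """Tabachnick & Fidell rule of thumb: suspect non-normality if
    |skewness| > 2*sqrt(6/n) or |kurtosis| > 2*sqrt(24/n)."""
    return 2 * math.sqrt(6 / n), 2 * math.sqrt(24 / n)

skew_cut, kurt_cut = normality_cutoffs(180)
print(skew_cut, kurt_cut)  # roughly 0.365 and 0.730, the thresholds used above
```

Feeding in the 180 finishing times would give the 0.558 / -0.068 figures from the descriptive-statistics table, up to Excel's bias corrections.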
Alberto Contador Vs Fabian Cancellara As Time Trialists
Our last question is the most interesting. So if you're a top pro at the peak of your abilities, what are your chances of ever getting close to Fabian Cancellara's blitzkrieg results? Then the next
question would be, how close do you want to get to 'Spartacus'? Within 2%? 3%?
Let's do 2% as a start. Within 2% is 23 seconds difference. Now that's probably the limit of what a time trialist can accept to cap the gap, so to speak!!
Let's look at what Contador obtained that day from the data. Bert raced the course 18 seconds slower than Cancellara for an amazing second place. In other words, there was a mere 1.54% time
difference between the best all round cyclist in the world and the fastest time trialist in the world. Just 4 cyclists managed to come within 2% of Cancellara's time - Contador, Wiggins, Kloden and
Evans. 4/180 = 0.02 = 2%.
In other words, just 2% of the 180 cyclists got a time less than or equal to 19 minutes and 55 seconds (this 2% window we're talking about).
Put in another way, this is the 2nd percentile. This is where the glory is at. And the money. And the kisses from the long legged European girls.
The probability that you're in this 23 second window from the best man on the bike is low. Just 0.022 or 1 in 45 chance. Keep in mind this is for the best in the world.
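The 2% window arithmetic checks out in a few lines, treating Cancellara's 19.53 as decimal minutes (an interpretation, but it reproduces the 19 min 55 s cutoff quoted above):

```python
fastest = 19.53                  # Cancellara's winning time, in decimal minutes
cutoff = fastest * 1.02          # the "within 2%" window
minutes = int(cutoff)
seconds = round((cutoff - minutes) * 60)
print(f"2% cutoff: {minutes} min {seconds} s")  # reproduces the 19 min 55 s figure

riders_within = 4                # Contador, Wiggins, Kloden, Evans
print(riders_within / 180)       # about 0.022, i.e. the "1 in 45" chance
```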
Now you know why you and I are not racing in the Tour de France. Let's just scratch our butts and cheer these beasts on.
* * *
8 comments:
1. Anonymous 7:32 PM
I hear there's a 1 in 40 chance of winning "Mega Millions" http://www.encyclopediabranigan.com/2009/04/there-is-1-in-40-chance-of-winning-with.html
2. Yeah damn the Tour de France. Go win the bucks haha
3. Contador is definitely in league to thwart Cancellara. He just needs to mature as a time trialist but considering he has overall GC plans, I doubt he would give one discipline so much undue
attention. Definitely two of the best athletes in pro cycling of today.
4. A bit late here but terrific stuff, Ron.
5. Anonymous 5:17 PM
The main conclusion I find after reading your analysis is that if you want to win the Tour de France, your time trial skills have to be pretty close to the best time trial rider, Mr. Cancellara.
That's why Contador, Wiggins and Evans are there in the top!!! Andy would be the exception though. So for the next Tour de France, look for the 3 or 4 first riders in the prologue and gamble some
money on them!!!
6. Nice blog. Sorry to nit pick, but your analysis is flawed. It is not using a random sample.
Short of Cancellara crashing, I know that I, as a recreational rider, have ZERO chance of coming within 2% of his TT time. There are very few in the peloton that have a comparable sustainable
power output as Cancellara.
The proper way to calculate the "Probability of being within 2% of Fabian Cancellara's TT Times at the TDF" one would take a sample or recent history of Cancellara's say 40K TT times or average
power output for such effort and build an appropriate confidence interval.
7. Sorry I titled it wrong. It is not the "Probability of being within 2% of Fabian Cancellara's TT Times at the TDF" but rather a calculation of the 2nd percentile of the distribution. I will
change this title.
Otherwise, I don't see any flaws .
8. I understand you better, but your post-hoc analysis is more than just the title.
"Alberto Contador Vs Fabian Cancellara As Time Trialists
Our last question is the most interesting. So if you're a top pro at the peak of your abilities, what are you chances of ever getting close to Fabian Cancellara's blitzkrieg results?"
The answer is to increase your average power output over 40K. Your distribution of "top pros" includes sprinters (high power output over a short period of time), TT specialists and climbers (high power-to-weight ratio).
If Fabian Cancellara's average power output (or average speed) is: X and Alberto Contador's is X - Y, then you can build _respective_ (normal) distributions for them and then see where their
distributions overlap. (Alternatively, confidence intervals work well, too.) If you do the same for Fabian and another "Top Pro" cyclist, who incidentally rides as a domestique for a team, you
can see how such distributions do not overlap and hence the domestique has zero chance of winning or coming within % percent of Fabian's time.
P.S. I enjoyed reading your "The Anatomy Of A Cancellara Attack" work.
Trig Identity - I'm so close (i think..)
October 14th 2009, 09:58 AM #1
Oct 2009
Trig Identity - I'm so close (i think..)
I'm still trying to figure out these identities. I've never had so much trouble with math, ugh. I think I almost figured this one out but I must have taken a wrong turn somewhere. Here is the
I started with the left side and used the lcm to get...
Then simplified to get...
and finally...
I obviously messed something up somewhere. Can anyone tell me where I went wrong?
I'm still trying to figure out these identities. I've never had so much trouble with math, ugh. I think I almost figured this one out but I must have taken a wrong turn somewhere. Here is the
I started with the left side and used the lcm to get...
Then simplified to get...
e^(i*pi): you're fine up to here
and finally...
I obviously messed something up somewhere. Can anyone tell me where I went wrong?
$\frac{2-2\sin\theta}{(1-\sin\theta)(\cos\theta)} = \frac{2(1-\sin\theta)}{(1-\sin\theta)(\cos\theta)}$
$1-\sin\theta$ will cancel leaving $\frac{2}{\cos\theta} = 2\sec\theta$
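If you want to double-check the cancellation numerically, a quick Python spot-check (not part of the original thread) evaluates both sides at a few angles:

```python
import math

def lhs(t):
    # the expression reached after combining over the LCM
    return (2 - 2 * math.sin(t)) / ((1 - math.sin(t)) * math.cos(t))

def rhs(t):
    return 2 / math.cos(t)  # 2*sec(t)

# sample angles away from the singularities at cos(t) = 0 and sin(t) = 1
for t in (0.3, 1.0, -0.7, 2.5):
    assert abs(lhs(t) - rhs(t)) < 1e-9
print("identity holds at the sampled angles")
```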
Oh! Awesome Thanks!
Active learning with heteroscedastic noise
András Antos, Varun Grover and Csaba Szepesvari
Theoretical Computer Science, Volume 411, Numbers 29-30, 2010. ISSN 0304-3975
We consider the problem of actively learning the mean values of distributions associated with a finite number of options. The decision maker can select which option to generate the next observation
from, the goal being to produce estimates with equally good precision for all the options. If sample means are used to estimate the unknown values then the optimal solution, assuming that the
distributions are known up to a shift, is to sample from each distribution proportional to its variance. No information other than the distributions’ variances is needed to calculate the optimal
solution. In this paper we propose an incremental algorithm that asymptotically achieves the same loss as an optimal rule. We prove that the excess loss suffered by this algorithm, apart from
logarithmic factors, scales as n^(-3/2), which we conjecture to be the optimal rate. The performance of the algorithm is illustrated on a simple problem.
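The optimal static rule described in the abstract, sampling each option in proportion to its variance, can be illustrated with a small greedy simulation. This sketch shows the allocation target only, not the paper's actual incremental algorithm:

```python
import random

def allocate(sigmas, budget, warmup=10, seed=1):
    """Greedy sketch of the target n_k proportional to sigma_k^2:
    after a forced warmup, always sample the option whose estimated
    variance per pull, var_k / n_k, is currently largest.
    Illustrative only; not the algorithm from the paper."""
    rng = random.Random(seed)
    k = len(sigmas)
    n, s1, s2 = [0] * k, [0.0] * k, [0.0] * k  # counts, sums, sums of squares

    def pull(i):
        x = rng.gauss(0.0, sigmas[i])
        n[i] += 1
        s1[i] += x
        s2[i] += x * x

    def est_var(i):
        m = s1[i] / n[i]
        return max(s2[i] / n[i] - m * m, 1e-12)

    for i in range(k):
        for _ in range(warmup):
            pull(i)
    for _ in range(budget - warmup * k):
        pull(max(range(k), key=lambda i: est_var(i) / n[i]))
    return n
```

With standard deviations 1 and 3 (variances 1 and 9), the pull counts settle near a 1:9 split, matching the proportional-to-variance allocation.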
How many stickers does a cube need?
This is a continuation of
When I saw that topic I suddenly wanted to peel stickers off to get an optimally unstickered cube. But I didn't like how the problem was left unsolved
The problem
You have a 3x3 cube, and a limited number of stickers. You want to save as many stickers as you can. Any two pieces stickered identically are considered indistinguishable. Any state where all faces
have one color on them is considered a solved state, even if it wasn't the same as the original "solved" state. You have no memory of which stickers were removed, but you do remember the color scheme.
For simplicity, its the BOY color scheme. When scrambled, the cube can be in any spatial orientation.
How many stickers can you remove and still keep one unique solution?
Limitations of a legal cube position:
Only an even number of edges can have swapped orientation
The orientation twists of corners must add up to a whole number of twists
Only an even number of (edge-swaps + corner-swaps + center axis quarter turns) are allowed
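Those three invariants can be checked mechanically. A small Python sketch, assuming permutations are given as index lists and the centers are held fixed (so the center-axis quarter-turn term is zero):

```python
def perm_parity(p):
    """+1 for an even permutation, -1 for odd (p given as an index list)."""
    seen, parity = [False] * len(p), 1
    for i in range(len(p)):
        if not seen[i]:
            j, length = i, 0
            while not seen[j]:
                seen[j] = True
                j, length = p[j], length + 1
            if length % 2 == 0:      # each even-length cycle flips the sign
                parity = -parity
    return parity

def is_reachable(corner_perm, corner_twist, edge_perm, edge_flip):
    """The three limitations above. Centers are assumed fixed here,
    so the 'center axis quarter turns' contribution is zero."""
    return (sum(edge_flip) % 2 == 0                    # even number of flipped edges
            and sum(corner_twist) % 3 == 0             # twists sum to whole turns
            and perm_parity(corner_perm) == perm_parity(edge_perm))  # matching parity
```

For example, a lone swapped pair of corners with untouched edges fails the parity check, but pairing it with a swapped pair of edges makes the state reachable again.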
Now here's my solution
I tried to make a huge even-numbered cycle of adjacent corners, since adjacent corners with 1 sticker are interchangeable.
Keeping 1 sticker is better than keeping 2, but keeping none severely limits your options, because any 0-sticker corner can swap with 0- or 1-sticker corners indistinguishably.
I used an even number because that requires an odd number of swaps, which means the edges must have an odd number of swaps as well. I can't get past 6, because there are only 6 colors. An 8-cycle
leaves 2 pairs of identical edges.
This is not a unique state, but the other solved state requires at least 1 pair of edges to swap. If I fix the edges, then this alternative solution will be visually apparent.
I started with the 6-cycle for edges, and I made a small modification. Luckily, I came across a state with only 1 visibly solved state. At least I think. Please verify this step - it's the weakest.
There is a 3-cycle of indistinguishable edges, but if you use it, you end up with an odd number of orientation swaps. So it can't be legally solved from there. I think.
Only 2 centers are needed to uniquely fix the center axis. There will only be 1 visibly correct state.
Corners II
There are still 2 corners with 3 stickers. You only need 2 stickers to identify any corner. So I removed one from each. As for why I specifically removed a yellow and a blue sticker...you'll see
Final Roundup
All centers are fixed. The corners has 1 alternate state, but that leaves at least 1 swap with the edges. The edges are fixed. So there is only 1 solution.
-12 stickers from the corners
-6 stickers from the edges
-4 stickers from the centers
-2 stickers from the corners (II)
24 stickers removed (30 stickers remaining)
How pretty, every face has an equal number of stickers! (That's why I specifically removed 1 yellow and 1 blue from Corners II)
You can test this cube out using this applet. I can't find any others that lets you blank out stickers and play by dragging.
http://www.randelshofer.ch/rubik/virtua ... tions.html
So ... is this the optimal solution? Is this a solution?
I'm going to try learning puzzle design!
*10 years later...*
Yes! I made a sphere! Now, what was step 2 again?
I remember going through that post a while back because of someone else's cube they had taken stickers off of. I now stand by my guess of 24 because someone made it. Who knows, maybe someone can take
off 25 or 26 in the near future, and maintain a single solution
For all of you that bought a KO 8x8x8: You should have bought a V8!
Nice problem. But why restrict the color scheme to BOY? For your stickering, if you don't know it's BOY, is there another solution?
I don't really know how to go about solving this if the color scheme is unknown, but I don't think it's easy to solve a cube originally in a certain color scheme into another color scheme, so it's
not really going to invalidate a solution. Here's what I think:
Most solutions would contain the 2 centers. If you fix everything else, you can get away with 1. The centers don't move, so to solve in a different color scheme, only the other 4-5 centers are
allowed to be exchange colors. Say you wanted to exchange the colors of a set of faces (at least 2 faces). Then all cubies on either of these faces must change places. But at least 3 cubies (likely
6-12) of them intersect with the faces of the known centers, and when these pieces move, they will look out of place. And the new ones that take their place will also look out of place. So you have
to give 2 homes for 6-24 pieces.
Any non-center piece can be identified by 2 stickers, so any piece with 2 stickers will look misplaced unless it's where it should be. So you have at least 6 cubies with 1 or 0 colors. Having <=1
color is not enough - that one color must be a color that the piece should have in its new position (after the color scheme swap). It takes some effort to find a state where these 6-24 cubies fulfill
these conditions and cannot swap with one another.
bhearn wrote:
Nice problem. But why restrict the color scheme to BOY? For your stickering, if you don't know it's BOY, is there another solution?
Looking at it, no two corners are identical, so it's possible to eventually figure it out.
My budding baby blog, Twisted Interests!
Ok, I tried to solve this cube a few times in the virtual simulation. It's a lot harder than a fully stickered 3x3.
First solve: Cannot do the last F2L pair without making huge changes to the cross - the 2 candidates for the last edge were both in the cross. When I finally solved F2L, the last layer was easy and
the solved cube looked identical to how I set it up.
Second solve: F2L went by acceptably, and I tried to make it different from the original. However, the last layer had both pseudo-OLL and pseudo-PLL parity. According to how I designed the
scheme, that means I used the alternate corner configuration and the 3-cycle in the edges.
Third solve: Again, the last F2L pair was blocked, but I overcame the obstacle in a different way. I ended up with only pseudo-OLL parity. The reason why I didn't get only pseudo-PLL parity so far is
probably because I consciously tried to make the white face look different from how it was set-up.
Fourth solve: I focused more on the corners and left the edges in the original configuration. When I got to the last layer, I figured that the orientation of the last 2 corners couldn't work out.
There was no valid place for the two yellow stickers. But neither of those corners actually had a yellow sticker, so it was possible to make each face of the cube the same color an alternate way.
Thus I found a secondary solution.
Fine, my solution failed. Here's why:
Recall the 12 stickers I took from the corners. I intended them to only have 1 alternative solution, and that was supposed to be the 6-cycle. But it turns out that you can also do 2 swaps and some
legal orientation changes; that won't affect the edges.
The blue lines represent the alternative solution I had under control. The red lines show the pairs of swaps that I didn't notice. The yellow and white cubies stay on the same face, while the red and
orange ones get oriented in opposite directions. This is what the cube looks like after the swap:
I'm trying to find another way to remove 14 stickers from the corners without introducing too many alternative solutions. I'll tell you if I find one. But for now, you can remove 13, and here's how
it looks.
This one is unique, and it would work on a 2x2x2 as well.
JubilantJD wrote:
I'm trying to find another way to remove 14 stickers from the corners without introducing too many alternative solutions. I'll tell you if I found one. But for now, you can remove 13, and here's how
it looks.
This one is unique, and it would work on a 2x2x2 as well.
I believe this is exactly the solution I presented for the 2x2x2 in the following post:
Anyway, I think it's cool that this topic has been revived, and I hope you succeed in finding a minimal stickering!
I believe this is exactly the solution I presented for the 2x2x2 in the following post:
Oh, I vaguely remembered reading this stickering pattern and testing it out before. I checked just now and found that it was from your post in the old topic. So, I guess I should've given credit!
I have a case with 14 removed corner stickers in the works. It seems promising: there are no indistinguishable 2-swaps, and there is my intended 6-cycle. But who knows, maybe a wild 5-cycle lurks in there!
This thread on the SpeedSolving forum discusses this topic. It might help. Also, a video was made based on one of the proposed solutions.
The SpeedSolving thread was started in 2010. That was before this one (which has already been mentioned). Despite the difference in dates, I don't think that there was any sharing of information. I'm not sure though; I didn't completely go through both threads.
I just read that speedsolving topic, and I thought the solution with 24 removed stickers and 2 solvable color schemes is actually really cool. It certainly wasn't too hard to solve though, compared
to the various prototypes I made and tested on the virtual cube.
Yes, the corner solution I thought I had really had a 5-cycle lurking in there...ironic
But I found a way to easily determine whether extra solutions exist or not: number the cubies, write the numbers down in a circle, and draw arrows whenever a piece can take the place of another - but
the pieces don't necessarily have to swap with one another. Any complete cycle in the picture means the numbered cubies can cycle with one another in that way.
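That drawing test can be mechanized. A simplified Python sketch, where "can take the place of" is modeled as having an identical visible sticker signature, keeping only even permutations (the other piece type held fixed); the real test is looser, since a one-sticker corner can sit in any slot sharing that color:

```python
from itertools import permutations

def perm_parity(p):
    """+1 for an even permutation, -1 for odd."""
    seen, parity = [False] * len(p), 1
    for i in range(len(p)):
        if not seen[i]:
            j, length = i, 0
            while not seen[j]:
                seen[j] = True
                j, length = p[j], length + 1
            if length % 2 == 0:
                parity = -parity
    return parity

def visually_equal_solutions(appearance):
    """All even permutations that place a piece with the same visible
    signature into every slot, i.e. rearrangements a solver could not
    detect. Odd ones are dropped because they would force a visible
    change in the other piece type."""
    n = len(appearance)
    return [p for p in permutations(range(n))
            if all(appearance[p[i]] == appearance[i] for i in range(n))
            and perm_parity(p) == 1]
```

Running it on a handful of one-sticker corners shows which hidden swaps survive the parity filter: for two interchangeable pairs, only the identity and the double swap remain.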
Here's a solution with 14 removed corner stickers that passes the drawing test (with only 1 swap possible) and worked out every time I solved it in the virtual simulator.
It's modified from the 13-sticker solution posted earlier.
I then removed the edge stickers and center stickers according to my previous plans, adjusting the orientation of each set to make each face have the same number of stickers and to increase
Interestingly, this scheme can be solved into something that looks almost exactly like its mirror reflection, color scheme and all. Only 1 face was not reflected; it was rotated 180 degrees instead. I
see that color schemes are not too hard to change after all!
JubilantJD wrote:
But I found a way to easily determine whether extra solution exist or not: number the cubies, write the numbers down in a circle, and draw arrows whenever a piece can take the place of another - but
the pieces doesn't necessarily have to swap with one another. Any complete cycle in the picture means the numbered cubies can cycle with one another in that way.
This ignores parity. If there is an odd-cycle in one piece type, there needs to also be an odd-cycle in the other piece type; otherwise the parity restriction of the edges+corners would prevent using
the cycle.
You also need to account for pairs of odd-cycles in one piece type because that keeps the permutation parity the same. So for example two 2-cycles in the edges would allow for a 2-2 swap which
doesn't need to affect corners.
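A sketch of that parity bookkeeping (the function names are mine; cycles are given by their lengths):

```python
def cycle_parity(cycle_lengths):
    """Parity of a permutation from its disjoint cycle lengths:
    a k-cycle decomposes into k - 1 transpositions."""
    return sum(k - 1 for k in cycle_lengths) % 2

def parity_allowed(edge_cycles, corner_cycles):
    """On a 3x3x3 every face turn permutes edges and corners with the same
    parity, so a reachable state needs edge parity == corner parity."""
    return cycle_parity(edge_cycles) == cycle_parity(corner_cycles)

# A lone 2-cycle of corners with untouched edges is blocked by parity:
assert not parity_allowed(edge_cycles=[], corner_cycles=[2])
# Two 2-cycles in the edges (even overall) need not affect corners at all:
assert parity_allowed(edge_cycles=[2, 2], corner_cycles=[])
```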
Prior to using my real name I posted under the account named bmenrigh.
Yeah, that's how I checked my states. I intended my corners to only have one even permutation available, because the edges must also be messed up to make that permutation. And since my edges are
visibly wrong when messed up at all, the one even permutation cannot make another solution.
What I didn't know is that there can also be orientation blocking. My edges have a 3-cycle available, but using it and orienting them to look solved will make the cube unsolvable with OLL parity. I
don't know how that works.
I'm going to try learning puzzle design!
*10 years later...*
Yes! I made a sphere! Now, what was step 2 again?
JubilantJD wrote:
What I didn't know is that there can also be orientation blocking. My edges have a 3-cycle available, but using it and orienting them to look solved will make the cube unsolvable with OLL parity. I
don't know how that works.
Yep, you'll have to make sure that you track the total twist of each cycle and then only choose cycles such that the total twist modulo 3 is 0.
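The twist condition is just a sum modulo 3; a minimal sketch (the twist values are illustrative):

```python
def twist_solvable(corner_twists):
    """Each corner twist is 0, 1, or 2 (thirds of a turn). The total twist
    of any reachable state is 0 modulo 3, so only choose cycles whose
    accumulated twist satisfies this."""
    return sum(corner_twists) % 3 == 0

assert twist_solvable([1, 2, 0])      # 1 + 2 + 0 = 3, fine
assert not twist_solvable([1, 1, 0])  # total twist 2 is blocked
```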
I'm pretty sure all of this is basically the same as using the Stabilizer() function in GAP and then asking GAP what the size of the group is.
Prior to using my real name I posted under the account named bmenrigh.
Thinking about your proposal more... This will only check if there is a unique solution for a given color scheme. It can't find alternative color schemes. This is the same problem Taus had with his
GAP code.
Prior to using my real name I posted under the account named bmenrigh.
Finding a limit
I'm familiarized with finding limits of most kinds of functions. I was struck by a problem: What if the variables of the function belong to different sets of numbers?
My point being, given the function:
With n belonging to the set of natural numbers and q belonging to the set of rational numbers.
How do I evaluate the following limit (if possible):
lim f(n,q) as n→∞ and q→∞
This may be a silly question, but I'd appreciate an answer.
Thank you
You have to define your n and q better.
As an example, let's take the limit ##\lim_{x \rightarrow \infty} \frac{\lfloor x \rfloor}{x}##, where ##x \in \mathbb{R}##
You can't simply apply L'Hôpital's rule because the numerator is a discontinuous function.
But you can find the limit by defining the fractional part of x as ##\{x\}## and rewriting the limit as:
##\lim_{x \rightarrow \infty} \frac{x - \{x\}}{x} = 1 - \lim_{x \rightarrow \infty} \frac{\{x\}}{x} = 1##
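A quick numerical sanity check of this limit (not a proof, just an illustration that the fractional part becomes negligible):

```python
import math

def f(x):
    # floor(x)/x = 1 - {x}/x, and 0 <= {x} < 1, so the ratio tends to 1
    return math.floor(x) / x

for x in (10.5, 1_000.7, 1_000_000.3):
    print(x, f(x))

assert abs(f(1_000_000.3) - 1) < 1e-6
```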
So a lot depends on exactly what you intend the numerator and denominator to signify.
Quantum Linguistics & Cartesian Logic | Part 2 of 2 | Cartesian Questioning Model
Posted by: admin, In Category: Communication Strategy, Effective Communication, Fallacies / Logic, NLP (Neurolinguistic Programming)
Becoming a Communication Expert with Cartesian Linguistics Part 2.
During any effective communication, a communication expert will seek to understand fully and multidimensionally what their communication partner is saying. When I use the term multidimensional, this
simply means that there are many different directions by which to look at and evaluate subject matter. The concept of multidimensionality comes from the field of quantum linguistics which I define a
bit more thoroughly in part 1 of this series and my post on Sleight of Mouth Patterns.
In referencing yesterday’s post, we talked about seeking out boundary conditions when communicating with someone so that we may better understand what their thought processes are made up of. As a
communication expert, in fully understanding the subject matter boundary conditions of a person’s communication, you will come closer and closer to objectively understanding them and their
In Neuro Linguistic Programming (NLP), one of the most powerful questioning strategies that has arisen from their communication models that allows us to define these boundary conditions is the usage
of Cartesian logic. Mathematical Cartesian Logic, as most already know, has its roots in geometry. The beauty of this type of logic is that it allows us to see all of the “material” surrounding a
thought. I know it seems a bit space aged, but stick with me…
Cartesian coordinates are represented by the following:
• A, B = Theorem (Quadrant I)
• -A, B = Converse (Quadrant II)
• -A, -B = Non mirror image reverse (Quadrant III)
• A, -B = Inverse (Quadrant IV)
So how the heck does this apply to linguistics and communication? Let's look at it in a literal sense, then we can apply it to a real world example… This is ESPECIALLY helpful as a decision making
Let’s use the term “Kevin Weber teaches”. Now from a Cartesian perspective, the literal boundary conditions for a communication are sought out by contemplating the 4 coordinates.
□ The theorem is:
☆ A(Kevin Weber)
☆ B(teaches)
□ The inverse is:
☆ A(Kevin Weber)
☆ –B(Not teaches [doesn’t teach])
□ The converse is:
☆ –A(Not Kevin Weber[anyone other than Kevin])
☆ B(teaches)
□ The non mirror image reverse is:
☆ –A(Not Kevin Weber[anyone other than Kevin])
☆ –B(Not teaches [doesn’t teach])
Whew! OK, and you thought you were done with this stuff in high school. We are going to get to a real world communication example where you can apply this today in a moment. But before we do that,
please let me point out the beauty of this first. What you see above is an extremely thorough and eye-opening view of the subject of Kevin Weber teaching. By looking at each coordinate separately you
can see/uncover so much viable, pertinent and directly linked information pertaining to your subject matter. In a decision making process, this is essential to prevent your blind spots… OK, now on to
Let’s say you are in a situation where you are attempting to sell something… Or maybe you are trying to get your teenager to take out the garbage. The “client” as we will call them is really on the
fence as to what they should do… Cognitive dissonance and/or Inertia are very difficult enemies in the decision making process and will generally (understatement of the year) push someone toward the
path of least resistance. So let's help them apply some logic to this… The following questioning model is derived from the Cartesian Coordinates:
□ The theorem is: “What will happen if you take out the trash?”
☆ A(I will)
☆ B(take out the trash)
□ The inverse is: “What will happen if you DON’T take out the trash?”
☆ A (What will happen)
☆ –B(if I don’t take out the trash?)
□ The converse is: “What WON’T happen if you take out the trash?”
☆ –A (What WON’T happen)
☆ B ( if you take out the trash?)
□ The non mirror image reverse is: “What WON’T happen if you DON’T take out the trash?”
☆ –A (What WON’T happen)
☆ –B (if you DON’T take out the trash?)
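Since the four questions follow the same template for any action, they are easy to generate mechanically. A sketch (the wording is illustrative, not canonical NLP phrasing):

```python
def cartesian_questions(action):
    """Return the four Cartesian questions for an action B,
    with the listener ("you") playing the role of A."""
    return [
        f"What WILL happen if you DO {action}?",      # theorem: A, B
        f"What WILL happen if you DON'T {action}?",   # inverse: A, -B
        f"What WON'T happen if you DO {action}?",     # converse: -A, B
        f"What WON'T happen if you DON'T {action}?",  # non mirror image reverse: -A, -B
    ]

for question in cartesian_questions("take out the trash"):
    print(question)
```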
So as you can see, when the “problem” or subject of the communication is assessed in this way, this leaves little room for cognitive dissonance to grab a hold of.
Let’s take a look at a situation where you are seeking out boundary conditions on a potential “problem”. Imagine that you are at a night club and you want to ask a woman out on a date. Although the
conversation has been going fairly well, she seems (based on your sensory acuity) to be on the fence about whether she is interested in you or not. So where do you begin to sell your “attributes”?
Which ones should you talk about? How should they be represented?
Well, if you don’t know the boundary conditions to her indecision about whether she is interested or not, how would you know what to talk about? So take a look at the Cartesian coordinates…
What we want to find out is what could make you interesting enough for her to say “yes” so let’s start there.
□ The theorem is: “What WILL happen if you do go out with me?”
☆ A(Will you go out)
☆ B(with me?)
□ The inverse is: “What WILL happen if you don’t go out with me?”
☆ A (What WILL happen)
☆ –B (if you don’t go out with me?)
□ The converse is: “What WON’T happen if you do go out with me?”
☆ –A (What WON’T happen)
☆ B (if you do go out with me?)
□ The non mirror image reverse is: “What WON’T happen if you don’t go out with me?”
☆ –A (What WON’T happen)
☆ –B (if you don’t go out with me?)
The answers to this type of questioning will give you incredibly valuable insight into what she may be afraid of, uninterested in or uncomfortable with. Now although Cartesian Questioning may not
tackle the emotional aspect of a problem, it can be a powerful tool in your communication arsenal… The resulting information is a thorough and much more objective evaluation bringing you one step
closer to being a communication expert.
To learn more about Quantum Linguistics, Cartesian Logic and other information that can help you refine your own effective communication, please explore the rest of my blog The Communication Expert,
or if I am online, please feel free to connect with me via Skype.
The Communication Expert | David J. Parnell
12 Responses to Quantum Linguistics & Cartesian Logic | Part 2 of 2 | Cartesian Questioning Model
1. I love this model, I work with people in addiction and when we apply the questioning model it completely scrambles their strategies. Ultimately, I find there are two effects: 1. If the client is
willing to find a solution to their problem, the solution appears like a sudden revelation. 2. If the client is resistant to change, the result is usually an aggressive response.
So, a lot of my time is spent dealing/rolling with resistance.
2. Hi Tony,
I agree, it is a great model for the willing participant
Especially from an addiction standpoint, although someone is “willingly” working with you it is very possible (even likely) that this is their explicit (cognitive) mind imposing direct volitional
control (which is inherently temporary) over implicit drives/urges (which have considerably more “staying” power). The result often is just plain cognitive dissonance and unearthing the roots of
that dissonance NEEDS to happen first, then the actual change models can be used effectively. I think that you will agree that if this doesn’t take place, the addict’s brain is unbelievably well
trained to rationalize and defeat any external change mechanisms
Again, I appreciate your comment and please feel free to contact me if there is anyway that I might be of assistance.
3. I love your article. It is explained well with good examples. I am just a bit confused: isn’t it true that the inverse is the same as the converse? The difference lies with the negativity. One lies in
the first part of the sentence while the second one lies in the latter part. the answer to these two questions is no different. it will be great if you can add answers to those questions and see
the boundaries. Great article btw.
4. If only more people would read about this.
5. Super great writing. Honest!
6. http://www.davidjparnell.com‘s done it once more! Superb read.
7. Robbie,
Thanks so much for the question. The difference does indeed lie in where the negative (or absence) is placed. In an A – B type of statement, whether the opposite or absence of A as opposed to B
(or vice versa) is present will make a world of difference. Take the example of “Jason is the boss.” If Jason is A and being the boss is B, I think you can see where negating one or the other
will dramatically change the meaning of the statement.
8. I love it!
9. This is such a great resource that you are providing and you give it away for free. I enjoy seeing websites that understand the value of providing a prime resource for free. I truly loved reading
your post. Thanks!
10. Hi,
I studied a lot of NLP stuff, but I dont know how these questions can help someone. Please, give an example.
11. I really love quantum linguistics. I like the fact you are trying to correlate the four questions (what will happen if….) with Cartesian Coordinates.
Yet I can’t see which logic you have used to get from “What will happen if you take out the trash” to the axis of “I will take out the trash….etc” You have taken one sentence and made up the axis
by deleting the sentence to your ‘fancy’ “What will happen if you take out the trash” does not in any way mean “I will take out the trash” – so the premise is slanted right at the beginning. And
that’s the theme all through the examples. Cartesian coordinates take a premix and then filters it to find the ‘truth’ of it. But if you dissect the theorem before you start working on the actual
sentence, the truth has already been corrupted. The coordinated do need two things to begin with (I.E. cause and effect – the means this), yet when you decide ‘this means this’ before you start,
the process does not work well from a training perspective. I think you need to keep the four questions and the coordinates separate. Let me know if I have missed something?
12. Hi Terry,
Thanks for the question/s. I’ll try to answer them below:
“Yet I can’t see which logic you have used to get from “What will happen if you take out the trash” to the axis of “I will take out the trash….etc””
This is because these sentences aren’t connected through a string of formal logic. This would require if/then’s, or either/or’s, for example, that connect two or more complete thoughts. The various
coordinates are not (necessarily) connected in such a way.
“You have taken one sentence and made up the axis by deleting the sentence to your ‘fancy’ “What will happen if you take out the trash” does not in any way mean “I will take out the trash” – so
the premise is slanted right at the beginning. And that’s the theme all through the examples.”
(i) No, I didn’t make up the axis, the coordinates make up the axis. I simply applied the terms of the coordinates to the initial theorem. (ii) Yes, “What will happen…” does not mean “I will take
out the trash.” It is not supposed to–I’m not sure where you’ve arrived at the conclusion that it should. (iii) That the premise is slanted presupposes that it is an attempt at determining an
objective truth, which it, along with any other Cartesian efforts (in the fields of NLP/Linguistics), are/is not.
“Cartesian coordinates take a premix and then filters it to find the ‘truth’ of it.”
No, they don’t. Cartesian coordinates simply take the participants through a number of different perspectives, as they relate to the initial premise/theorem, so as to jar or potentially shift any
static, preconceived notions/understandings/beliefs/etc, by illuminating the information “attached” to the premise/theorem.
“But if you dissect the theorem before you start working on the actual sentence, the truth has already been corrupted. The coordinated do need two things to begin with (I.E. cause and effect –
the means this), yet when you decide ‘this means this’ before you start, the process does not work well from a training perspective.”
This statement is confusing to me. If you dissect the theorem, you are simply left with an incomplete sentence–the separation of the subject from the predicate–and left with little more than
words. So, in that sense, yes, you are correct, because an incomplete sentence really can’t convey a “truth.” However, from a training perspective, your perspective on the way that Cartesian
logic should be used is flawed (in the NLP sense). Cartesian logic is no more a tool for proving right/wrong, truth/lie, etc., than a bulldozer is for flying, or a rain shower is for drying off.
Cartesian logic, in a therapeutic or conversational/persuasion sense, is, again, for the purpose of providing different perspectives on a subject and illuminating the criteria/beliefs/information
/etc. that is/are attached to, and support, that subject. If you want to prove someone wrong, or find a “truth,” you’ll need to use formal systems of logic. Cartesian logic is not that tool,
regardless of the term “logic” in its title.
“I think you need to keep the four questions and the coordinates separate.”
Agreed. They are.
I hope this helps you to better understand the nature and proper deployment of Cartesian logic. Thanks for the questions/comments.
Systematic review and meta-analysis of strategies for the diagnosis of suspected pulmonary embolism | BMJ
1. ^2 Department of Clinical Epidemiology, INSERM U 729, Université Paris V, Assistance Publique Hopitaux de Paris, Hôpital Européen Georges Pompidou, Paris, France
1. Correspondence to: G Meyer, Service de Pneumologie-soins intensifs, Hôpital Européen Georges Pompidou, 20 rue Leblanc, 75015 Paris, France
Objectives To assess the likelihood ratios of diagnostic strategies for pulmonary embolism and to determine their clinical application according to pretest probability.
Data sources Medline, Embase, and Pascal Biomed and manual search for articles published from January 1990 to September 2003.
Study selection Studies that evaluated diagnostic tests for confirmation or exclusion of pulmonary embolism.
Data extracted Positive likelihood ratios for strategies that confirmed a diagnosis of pulmonary embolism and negative likelihood ratios for diagnostic strategies that excluded a diagnosis of
pulmonary embolism.
Data synthesis 48 of 1012 articles were included. Positive likelihood ratios for diagnostic tests were: high probability ventilation perfusion lung scan 18.3 (95% confidence interval 10.3 to 32.5),
spiral computed tomography 24.1 (12.4 to 46.7), and ultrasonography of leg veins 16.2 (5.6 to 46.7). In patients with a moderate or high pretest probability, these findings are associated with a
greater than 85% post-test probability of pulmonary embolism. Negative likelihood ratios were: normal or near normal appearance on lung scan 0.05 (0.03 to 0.10), a negative result on spiral computed
tomography along with a negative result on ultrasonography 0.04 (0.03 to 0.06), and a D-dimer concentration < 500 μg/l measured by quantitative enzyme linked immunosorbent assay 0.08 (0.04 to 0.18).
In patients with a low or moderate pretest probability, these findings were associated with a post-test probability of pulmonary embolism below 5%. Spiral computed tomography alone, a low probability
ventilation perfusion lung scan, magnetic resonance angiography, a quantitative latex D-dimer test, and haemagglutination D-dimers had higher negative likelihood ratios and can therefore only exclude
pulmonary embolism in patients with a low pretest probability.
Conclusions The accuracy of tests for suspected pulmonary embolism varies greatly, but it is possible to estimate the range of pretest probabilities over which each test or strategy can confirm or
rule out pulmonary embolism.
Pulmonary embolism is a common and serious disease. Clinical signs and symptoms allow the clinician to determine the pretest probability of someone having pulmonary embolism (the clinical
probability) but are insufficient to diagnose or rule out the condition.1 Laboratory tests and imaging are thus required in all patients with suspected pulmonary embolism.2 Since 1990 a large number
of diagnostic tests and strategies have been evaluated for pulmonary embolism. As the design, clinical setting, and reference methods differ between studies, the diagnostic value of most tests may
seem inconsistent. Although several reviews have been published on this topic,1^–3 systematic reviews that may clarify the role of the different diagnostic tests are lacking.
We carried out a systematic review to assess the likelihood ratios of the diagnostic tests used for suspected pulmonary embolism. For clinical purposes, we estimated the range of pretest
probabilities over which each test can accurately confirm or exclude pulmonary embolism.
Materials and methods
We searched Medline, Embase, and Pascal Biomed for studies published from January 1990 to September 2003 using the search terms ((pulmonary embol* or pulmonary thromboembol*) and (diagnosis or
diagnostic) and (angiography or arteriography or (follow adj up) or followup or (management adj stud*)) and (PY = 1990-2003) and (study or studies or trial) and (LA = ENGLISH). We also manually
searched published bibliographies and our own personal libraries. We retained only studies published in English. We excluded abstracts, editorials, reviews, case reports, and case series.
Data selection
Two reviewers (PMR, GM) independently selected potentially relevant studies. Studies were included if they evaluated tests or strategies aimed at confirming or excluding pulmonary embolism
(confirmation or exclusion diagnostic strategies, respectively) and they met the following criteria: the reference method was pulmonary angiography for confirmation strategies and clinical follow-up
or pulmonary angiography for exclusion strategies; the study was prospective; participants were recruited consecutively; and the test being evaluated and the reference test were interpreted
We excluded retrospective studies; follow-up studies with more than 5% of patients lost to follow-up or those that used additional imaging to pulmonary angiography in patients with a negative
experimental test result; studies in which crude data could not be extracted for the calculation of positive and negative likelihood ratios; and studies that had specific populations. Each study was
graded according to the reference method and the characteristics of the patients (see table A on bmj.com). For studies with multiple publications, we used data from the most recent publication.
Data extraction
Two investigators (PMR, GM) independently abstracted data on the design; study size; setting; characteristics of the patients; type of reference standard; and the number of true positive, true
negative, false positive, and false negative test results.
When we used follow-up as the reference method, we considered all the patients with a negative test result to have a false negative result if they developed deep vein thrombosis or pulmonary embolism
during the three month follow-up period. We classified deaths believed to be caused by pulmonary embolism as thromboembolic events. When we could not extract data from published articles, we
contacted the authors. Discrepancies in data abstraction between investigators were resolved by a third author (PD).
Statistical analysis
We calculated the positive likelihood ratio for confirmation diagnostic strategies and the negative likelihood ratio for exclusion diagnostic strategies. We used the adjusted Wald method to calculate
95% confidence intervals.4 Summary estimates of the likelihood ratios were calculated as a weighted average, and we calculated the confidence intervals using the DerSimonian and Laird random effects
method.5 Homogeneity tests were carried out to evaluate the consistency of findings across the studies. We used Cochran's Q heterogeneity statistic and the quantity I^2 to determine the percentage of
total variation across the studies due to heterogeneity rather than to chance.6 When I^2 was more than 0%, we explored possible reasons for heterogeneity, such as patient populations (selected or
unselected patients) and the nature of the reference method (angiography or composite reference standard), using subgroup analysis based on the three categories for study quality (see table A on bmj.com).
Analyses were carried out in STATA (release 6).
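For reference, the I² quantity has a simple closed form in terms of Cochran's Q and its degrees of freedom (a sketch, not the authors' STATA code; the example numbers are hypothetical):

```python
def i_squared(q, df):
    """I^2: percentage of total variation across studies attributable to
    heterogeneity rather than chance; truncated at 0 when Q <= df."""
    if q <= df:
        return 0.0
    return 100.0 * (q - df) / q

# e.g. Q = 20 across 9 studies (df = 8) gives substantial heterogeneity:
assert i_squared(20, 8) == 60.0
assert i_squared(5, 8) == 0.0
```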
Clinical practice perspectives
We considered that a confirmation strategy was accurate enough to diagnose pulmonary embolism when the post-test probability was above 85%, and that an exclusion strategy was accurate enough to
exclude pulmonary embolism when the post-test probability was below 5%.3 We used Bayes's theorem to calculate the probability of pulmonary embolism, conditioned by the likelihood ratio as a function
of the pretest probability.7
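In odds form, Bayes's theorem reduces to a one-line calculation; a sketch of the computation behind figures 4 and 5 (an illustration using summary likelihood ratios from the results, not the authors' code):

```python
def post_test_probability(pretest_p, likelihood_ratio):
    """Bayes's theorem in odds form:
    post-test odds = pretest odds * likelihood ratio."""
    pretest_odds = pretest_p / (1 - pretest_p)
    post_odds = pretest_odds * likelihood_ratio
    return post_odds / (1 + post_odds)

# A high probability lung scan (LR+ = 18.3) at 50% pretest probability
# exceeds the 85% threshold used here to confirm pulmonary embolism:
assert post_test_probability(0.50, 18.3) > 0.85
# A normal or near normal scan (LR- = 0.05) at the same pretest probability
# falls below the 5% threshold used to exclude it:
assert post_test_probability(0.50, 0.05) < 0.05
```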
We identified 1012 potentially eligible articles. After scanning the abstracts and titles we screened 93 for possible retrieval. We selected 66 articles for more detailed evaluation; 48 of these were
included in the final analysis (see figure on bmj.com).8^–55 The studies totalled 11 004 patients with suspected pulmonary embolism. The condition was confirmed in 3329 patients and excluded in 7675
(prevalence 30%). We did not analyse studies that used electron beam computed tomography as this technique is no longer used.8 9 See tables B-D on bmj.com for characteristics of the included studies.
Confirmation diagnostic strategies
Table 1 and figure 1 summarise the confirmation diagnostic strategies and their pooled positive likelihood ratios.
Two studies evaluated lung scintigraphy.10 11 The prospective investigation of pulmonary embolism diagnosis study assessed the performances of ventilation and perfusion lung scans.10 Miniati et al
studied the value of a perfusion lung scan without ventilation.11 We were unable to pool the results of these two studies as they used different diagnostic criteria and evaluated two different
We found significant heterogeneity among the five studies on magnetic resonance angiography.24^–28
Exclusion diagnostic strategies
Table 2 and figure 2 summarise the exclusion diagnostic strategies and their pooled negative likelihood ratios.
Nine studies analysed the value of a negative result on spiral computed tomography for excluding pulmonary embolism; however, one used a specific definition for negative results.37 We detected
significant heterogeneity in the study group, but not in the two grade A studies.12 13
We found heterogeneity in the group of ultrasonography studies. Five of the six studies were carried out in patients with a non-diagnostic ventilation and perfusion lung scan and one in patients
selected on the basis of clinical probability and D-dimer testing.18^–21 42 43 Wells et al studied the negative diagnostic value of serial ultrasonography after a non-diagnostic ventilation and
perfusion lung scan.44
Table 2 and figure 3 summarise the studies that evaluated D-dimers for the exclusion of pulmonary embolism (see also table D on bmj.com). In the analysis we included 12 studies that evaluated three
different quantitative D-dimer enzyme linked immunosorbent assays, including two classic microplate methods45^–50 53 and one rapid quantitative method.34 41 43 51 52 One study used a different
cut-off threshold so we excluded it from the calculation of summary negative likelihood ratios.53 We detected significant heterogeneity in the study group, but we found no heterogeneity in the grade
B 34 41 45 46 or grade C studies.43 47^–52
Studies that used seven different quantitative D-dimer latex agglutination assays met our inclusion criteria.13 36 49 50 53 55 Two studies evaluated several latex D-dimer tests in the same patients
so we excluded them from the calculation of summary negative likelihood ratios.49 50 One study used a different cut-off value so we excluded that from the calculation of the summary negative
likelihood ratios too.53 Three studies could be pooled.13 36 55
Two studies that evaluated a semiquantitative agglutination latex assay had significant heterogeneity.49 54 A whole blood agglutination D-dimer assay was evaluated in three studies, with no
significant heterogeneity.31 35 51
Clinical practice perspectives
For each strategy we calculated the post-test probability as a function of the pretest probability (figs 4 and 5). For each diagnostic strategy we express the accuracy of diagnostic decisions as a
function of the pretest probability (fig 6).
Relation to pretest probability
Confirmation of pulmonary embolism
In patients with a high pretest probability; a positive result with spiral computed tomography, ultrasonography, echocardiography, or magnetic resonance angiography; or a high probability ventilation
perfusion lung scan are associated with a post-test probability of over 85%, allowing pulmonary embolism to be accurately diagnosed. Patients with a moderate pretest probability require additional
imaging after a positive echocardiography result. In patients with a low pretest probability, the post-test probability was below 85% for all tests and therefore further investigations would be
needed to confirm pulmonary embolism (fig 6).
Exclusion of pulmonary embolism
In patients with a low clinical probability; negative test results for D-dimers or with spiral computed tomography or magnetic resonance angiography; or a normal or low probability lung scan are
associated with a post-test probability of below 5%. In this situation, additional testing would not be needed to rule out pulmonary embolism. Conversely, patients with a negative echocardiography
result and a normal venous ultrasonography result would require additional testing to rule out pulmonary embolism, even when the clinical probability was low. In patients with a moderate pretest
probability, a negative quantitative D-dimer enzyme linked immunosorbent assay result, a normal or near normal lung scan, or a combination of normal spiral computed tomography results and normal
venous ultrasonography results accurately exclude pulmonary embolism. In patients with a high pretest probability, the residual post-test probability remained above 5% for all diagnostic tests (fig 6
). In these patients, additional testing would be required to confidently exclude pulmonary embolism.
Large differences exist in the accuracy of diagnostic tests used to confirm or rule out pulmonary embolism. Ventilation perfusion lung scanning, spiral computed tomography, and ultrasonography of the
leg veins all had positive likelihood ratios above 10. When these tests are positive in patients with a moderate or high clinical probability of pulmonary embolism they provide a post-test
probability greater than 85%. A normal or near normal ventilation perfusion lung scan result, a combination of spiral computed tomography and ultrasonography, and quantitative D-dimer enzyme linked
immunosorbent assay results had negative likelihood ratios below 0.10 and can exclude pulmonary embolism in patients with a low or moderate pretest probability. Spiral computed tomography alone, a
low probability ventilation perfusion lung scan, magnetic resonance angiography, the latex Tinaquant D-dimer test, and the haemagglutination D-dimer test have higher negative likelihood ratios and
can exclude pulmonary embolism only in patients with a low clinical probability. Echocardiography and ultrasonography seem unable to exclude pulmonary embolism.
The most straightforward approach for determining the accuracy of a diagnostic test is to carry out a cross sectional study in unselected patients, with independent, blinded assessments of test and
reference methods.56 Our literature search identified only three studies that used such a stringent design in patients with suspected pulmonary embolism.9 12 13 Pulmonary angiography is the reference
method for the diagnosis of pulmonary embolism, but it has the limitations of being an invasive procedure with associated risks, and physicians are reluctant to carry it out in all patients.57 58
Clinical follow-up of untreated patients with negative test results is considered a valuable alternative to this risky reference method,59 as the number of symptomatic thromboembolic events during a
three month follow-up period without anticoagulant treatment reflects the number of false negative tests.60 Nevertheless, inclusion of follow-up studies in our analysis is associated with some
drawbacks. Blinding is not maintained and some false negative test results may be undetected. In addition, in most of these studies a positive angiogram was not used to confirm the diagnosis of
pulmonary embolism, and the rate of false positive test results may have been miscalculated. However, the criteria used to confirm pulmonary embolism (positive results with computed tomography or
ultrasonography, high probability ventilation perfusion lung scan) are widely accepted.2 3
We expressed test performance as likelihood ratios. According to Bayes's theorem, the likelihood ratio indicates the extent of change in the odds of disease after a test result.56 Likelihood ratios
can be calculated irrespective of the format of the test result: dichotomous (spiral computed tomography, ultrasonography, magnetic resonance angiography, qualitative D-dimer test), ordinal
(ventilation perfusion lung scan), or continuous (quantitative D-dimer tests).
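The odds form of Bayes's theorem used throughout this discussion is simple to mechanize. Below is a minimal Python sketch; the function name and the input values are illustrative, not data taken from the review:

```python
def post_test_probability(pretest_p, likelihood_ratio):
    """Bayes's theorem in odds form: post-test odds = pretest odds x likelihood ratio."""
    pretest_odds = pretest_p / (1.0 - pretest_p)
    post_odds = pretest_odds * likelihood_ratio
    return post_odds / (1.0 + post_odds)

# Example: a moderate pretest probability of 0.30 combined with a negative test
# whose likelihood ratio is 0.10 leaves a residual probability of about 4%,
# below the 5% threshold for safely ruling out pulmonary embolism.
residual = post_test_probability(0.30, 0.10)
```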
Some of our findings are limited by the low number of studies meeting our quality criteria, especially those of lung scanning. Systematic reviews of diagnostic studies are hampered by the
heterogeneity of results, even when attempts to define the most homogeneous set of studies are made, as in our study.61 In studies dealing with quantitative D-dimer enzyme linked immunosorbent assay
results, the negative likelihood ratios of grade B studies were homogeneous but lower than those of grade C studies. This discrepancy may be explained by differences in the study population and by the
use of pulmonary angiography as the reference method in most grade C studies as opposed to follow-up in most grade B studies. Pulmonary angiography is difficult to interpret in patients selected on
the basis of a previous lung scan result (grade C studies), leading to a high risk of misclassification.19 These patients are also likely to have small pulmonary emboli, which may increase the rate
of false negative D-dimer test results.52
What is already known on this topic
The accuracy of diagnostic tests for suspected pulmonary embolism varies largely between studies, and the appropriate clinical setting for their use is unclear
What this study adds
When the clinical probability is moderate or high, pulmonary embolism is confirmed by a high probability lung scan and a positive result on spiral computed tomography or venous ultrasonography
When clinical probability is low, these results require confirmation by pulmonary angiography
In patients with a low or moderate clinical probability, the condition can be excluded by a negative quantitative D-dimer test result, a normal or near normal lung scan, or normal findings on spiral
computed tomography and venous ultrasonography
When clinical probability is high, these results require confirmation by pulmonary angiography
As proposed by Kearon, we assumed that the diagnosis of pulmonary embolism was accurate when the post-test probability was above 85% and that pulmonary embolism could be safely ruled out when the
risk of venous thromboembolism was below 5%.3 We defined a range of pretest probabilities at which each test can confirm or rule out pulmonary embolism, with an acceptable risk of misdiagnosis (fig 6
). As a general rule, the results suggest that discordance between clinical probability and the diagnostic test result requires additional studies.
Our findings allow the calculation of the post-test probability of pulmonary embolism provided that the pretest probability has been estimated before the test. Our results also suggest that the
performances of several diagnostic tests remain poorly defined and need additional evaluation.
• Additional tables and a figure are on bmj.com
• Contributors GM had the idea for the study; he is guarantor for the paper. PMR, GM, and PD designed the study. PMR and GM assessed the studies for inclusion, extracted the data, and drafted the
paper. IC, GC, and HS provided advice on interpretation of the results. IC did the statistical analysis. All authors revised the paper critically and approved the final manuscript.
• Funding None.
• Competing interests None declared.
• Ethical approval Not required. | {"url":"http://www.bmj.com/content/331/7511/259?view=long&pmid=16052017","timestamp":"2014-04-18T00:40:30Z","content_type":null,"content_length":"273711","record_id":"<urn:uuid:aafde321-e7bc-436c-a581-f0870ca1ea14>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00600-ip-10-147-4-33.ec2.internal.warc.gz"} |
| {"url":"http://openstudy.com/users/mimi_x3/medals/1","timestamp":"2014-04-20T03:25:52Z","content_type":null,"content_length":"115726","record_id":"<urn:uuid:db5a5236-ed16-4583-bf81-b73d2215d6a8>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00576-ip-10-147-4-33.ec2.internal.warc.gz"}
Exam II Questions
Can't you take the exam for me?
What exam are you taking? Could I bring all my reference materials with me? I still remember how to do this stuff, but it sure is nice having the references readily available when you don't remember
the exact formulas, values of constants, etc. | {"url":"http://www.physicsforums.com/showthread.php?t=139328&page=4","timestamp":"2014-04-19T12:31:41Z","content_type":null,"content_length":"54259","record_id":"<urn:uuid:6e80dc87-5dae-44c6-a2a3-fa3d32d89169>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00198-ip-10-147-4-33.ec2.internal.warc.gz"} |
Physics 836: Classical Electrodynamics (Spring, 2001)
[ Introduction and General Format | Syllabus ]
[ Problem Sets | Problem Set 1]
[Problem Set 2| Problem Set 3| Problem Set 4| Problem Set 5]
[Problem Set 6|Problem Set 7|Problem Set 8| Problem Set 9]
[ Term Project ]
[Office Hours; Grader| Random Information]
Physics 836 is the third quarter of a full-year sequence on Classical Electrodynamics. The text will continue to be ``Classical Electrodynamics,'' by J. D. Jackson (3rd Edition). However, I expect to
deviate significantly from the material in this book during the latter part of the quarter. The course will meet MWF from 9:25 to 10:25 in Smith 2186, with (rare) meetings Friday at 8:30. The
instructor is David Stroud.
You can choose to be graded in either of two ways. Method I: Homework and final only. If you choose this method, your grade will be based on the problem sets (about 40%) and a final (about 60%).
Method II: Homework, paper, and final. If you choose this method, the grade will be based on problem sets (about 25%), a final (about 35%) and a paper (about 40%). The paper can be either a written
paper, an oral presentation, or a computer project. Possible topics will be announced in a few days, and I will also consider your own choices. You can also work in teams of two, with one person
giving a written presentation and one a talk. There will not be a midterm this quarter.
The final exam will be given on Wednesday, June 6 from 9:30 - 11:30 in our usual classroom. It will be open book (that is, you may bring any or all of Jackson, a book of math tables, your class
notes, and homework solutions). The exam will be comprehensive - that is, it will cover topics from any part of Physics 836.
If you have not already given me your email address, please do so in class or by email (stroud@ohpsty.mps.ohio-state.edu). I plan to send problem sets and other announcements by email.
I expect to cover Special Relativity (i. e., roughly, the material in Jackson, Ch. 11 and 12) early in the quarter. Beyond that, I will discuss a range of special topics. These will be selected from
the following: radiation from moving charges; Cherenkov radiation; photonic band gap materials and electromagnetic waves propagating in periodic media; X-ray diffraction from solids; inhomogeneous
dielectric media and percolation theory. (I don't expect to cover all of these.)
I expect to have problem sets as in previous quarters, approximately weekly but probably slightly less frequently. Unless otherwise specified, they are due each Wednesday by 5PM in my mailbox (or
preferably, the grader's mailbox) in the physics office (or you can turn them in at classtime, of course). Unless otherwise stated, each problem is worth 10 points.
Due Wednesday, April 4 by 5PM
1. Jackson, Problem 11.3.
2. Jackson, Problem 11.6.
3. Jackson, Problem 11.8 (a).
4. Jackson, Problem 11.5. [In this problem, prove only the second identity, since the first was derived in class. You can start from eq. (11.31).]
Due Wednesday, April 11 by 5PM
1. Write out Jackson (11.141) and (11.143) and show that they give the Maxwell equations (i. e. the eight scalar equations which make up the Maxwell equations).
2. Jackson, problem 11.15
3. Jackson 11.14 (a), (b).
4. Consider an infinitely long wire of charge \lambda per unit length (in its rest frame). The wire moves with speed v parallel to its length, relative to the lab frame K
Calculate the magnetic induction B in the lab frame in two ways:
(a) Find the current density in the lab frame from a Lorentz transform, and and use Ampere's Law.
(b). Find the electric field in the rest frame of the wire and Lorentz-transform the fields.
Show that both methods give the same answer.
5. Write out the components of Jackson, eq. (11.144) explicitly and show that they give the Lorentz force equation and the corresponding power equation as discussed in class.
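Both routes in problem 4 must land on the same lab-frame field, B = \mu_0\gamma\lambda v/(2\pi r). Here is a rough numerical cross-check in SI units; the charge density, speed, and distance below are arbitrary illustrative values:

```python
import math

MU0 = 4e-7 * math.pi            # vacuum permeability, SI
EPS0 = 8.8541878128e-12         # vacuum permittivity, SI
C = 1.0 / math.sqrt(MU0 * EPS0)

lam0 = 1e-9      # C/m, charge per unit length in the wire's rest frame (illustrative)
v = 0.6 * C      # wire speed in the lab frame
r = 0.05         # distance from the wire, m
gamma = 1.0 / math.sqrt(1.0 - (v / C) ** 2)

# (a) Transform the sources: length contraction gives lambda_lab = gamma * lam0,
# so the lab-frame current is I = gamma * lam0 * v; Ampere's law then gives B.
B_ampere = MU0 * (gamma * lam0 * v) / (2.0 * math.pi * r)

# (b) Transform the fields: in the rest frame E = lam0/(2*pi*eps0*r) and B = 0;
# the boost gives B' = gamma * v * E / c^2.
E_rest = lam0 / (2.0 * math.pi * EPS0 * r)
B_boost = gamma * v * E_rest / C ** 2
```

Both expressions reduce algebraically to \mu_0\gamma\lambda_0 v/(2\pi r), so they agree to rounding error.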
Due Wednesday, April 18
1. Jackson 11.22(a).
2. Jackson, 12.3
3. Jackson 12.5(b).
4. Starting from the Lagrangian (12.12) for a charged particle in a specified electromagnetic field, fill in the steps to obtain Jackson's equations (12.16) and (12.17).
Due Wednesday, April 25 by 5PM
1.Jackson 11.21(a).
2. As we showed in class, in a large electromagnetic cavity of volume V = abc, the allowed k points are (k_x, k_y, k_z), where k_x = 2\pi n_x/a, k_y = 2\pi n_y/b, and k_z = 2\pi n_z/c for integers n_x, n_y, n_z.
(a). Suppose for simplicity that all three box edges are of length L. Find the density of modes per unit volume in k-space, recalling that there are two modes for each k point.
(b). Calculate the number of modes between angular frequency \omega and \omega + d\omega, in the limit of very large L.
(c). Suppose that the total energy in a mode of frequency \omega at temperature T is \hbar\omega/[exp(\hbar\omega/k_BT) - 1], where \hbar is the reduced Planck constant. Let E(\omega)d\omega equal the energy
in the cavity between frequency \omega and \omega + d\omega. Find the frequency for which E(\omega) is maximum at a given temperature. Estimate this frequency for (a) room temperature, and (b) the
surface temperature of the sun.
3. Jackson 11.27 (a).
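As a numerical companion to problem 2(c): with x = \hbar\omega/k_BT, maximizing E(\omega) \propto \omega^3/[exp(\hbar\omega/k_BT) - 1] leads to the transcendental condition x = 3(1 - e^{-x}). A short Python estimate, assuming 300 K for room temperature and roughly 5800 K for the solar surface:

```python
import math

HBAR = 1.054571817e-34   # reduced Planck constant, J s
KB = 1.380649e-23        # Boltzmann constant, J/K

def wien_x(tol=1e-12, max_iter=200):
    """Solve x = 3(1 - exp(-x)) by fixed-point iteration (x ~ 2.8214)."""
    x = 3.0
    for _ in range(max_iter):
        x_new = 3.0 * (1.0 - math.exp(-x))
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

def omega_peak(T):
    """Angular frequency at which E(omega) is maximum, in rad/s."""
    return wien_x() * KB * T / HBAR

omega_room = omega_peak(300.0)    # ~1.1e14 rad/s (far infrared)
omega_sun = omega_peak(5800.0)    # ~2.1e15 rad/s (near infrared, close to visible)
```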
Due Wednesday, May 2 5PM
1. (a). Fill in the steps needed to go from Jackson, eq. (14.25) to Jackson, eq. (14.26).
2. Verify eq. (14.66).
3. Jackson, problem (14.4). Here, the word ``discuss'' means ``calculate.''
4. Jackson, problem (14.5)
5. (Optional) Repeat the calculation done in class for the solution to the wave equation for a ``blip source,'' but carry out a contour integral suitable for the advanced solution rather than the
retarded solution, and obtain the advanced solution.
Due Wednesday, May 9 by 5PM
1. Jackson, problem (14.9). (a), (b), (c)
2. (a) Consider the reciprocal lattice of a Bravais lattice. Show that the reciprocal of this reciprocal lattice is the original Bravais lattice. (b). Show that the reciprocal of a
face-centered-cubic lattice is a body-centered cubic lattice.
(c). Show that the reciprocal of a simple hexagonal lattice is also a simple hexagonal lattice, but rotated with respect to the original one. (Note: ``simple hexagonal'' means the same thing as what
I called in class ``stacked triangular'').
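Problem 2(b) can be checked numerically straight from the definition b_1 = 2\pi (a_2 x a_3)/(a_1 . a_2 x a_3) (and cyclic permutations). A short NumPy sketch with the fcc primitive vectors:

```python
import numpy as np

def reciprocal_vectors(a1, a2, a3):
    """Reciprocal lattice basis b_i = 2*pi (a_j x a_k) / (a_1 . a_2 x a_3)."""
    V = np.dot(a1, np.cross(a2, a3))
    return (2 * np.pi * np.cross(a2, a3) / V,
            2 * np.pi * np.cross(a3, a1) / V,
            2 * np.pi * np.cross(a1, a2) / V)

a = 1.0  # fcc conventional cube edge (arbitrary units)
a1 = 0.5 * a * np.array([0.0, 1.0, 1.0])
a2 = 0.5 * a * np.array([1.0, 0.0, 1.0])
a3 = 0.5 * a * np.array([1.0, 1.0, 0.0])

b1, b2, b3 = reciprocal_vectors(a1, a2, a3)
# Each b_i is (2*pi/a) times a (+-1, +-1, +-1)-type vector: the primitive basis
# of a body-centered-cubic lattice with conventional cube edge 4*pi/a.
```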
Due Wednesday, May 23 by 5PM
1. As stated in class, a diamond structure can be viewed as two interpenetrating face-centered-cubic lattices, one of which is displaced by the vector b = (a/4, a/4, a/4) relative to the other, where a
is the cube edge.
(a). Write down the basis vectors for the reciprocal lattice of the face centered cubic structure.
(b). Show that the X-ray scattering cross-section from silicon (in which the atoms are located on a diamond lattice) vanishes for certain reciprocal lattice vectors of the fcc lattice, and find those
reciprocal lattice vectors.
(c). For which reciprocal lattice vectors do the scattered waves from the two sublattices interfere constructively?
2. Suppose that a given crystal has one atom per Bravais lattice point R, and that the total electron density has the form
n_e(x) = \sum_{\bf R} n_{at}(x - R),
where n_{at}(x) is the electron density associated with a given atom. As shown in class, the differential scattering cross-section at reciprocal lattice vector K is proportional to |n_e(K)|^2, where
n_e(K) is the Fourier transform of the electron density.
Find an expression for |n_e(K)| in the case when n_{at}(x) is the electron density characteristic of the 1s state of hydrogen.
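For this problem, the closed-form answer, n_e(K) = [1 + (K a_0/2)^2]^{-2} for the 1s density n(r) = e^{-2r/a_0}/(\pi a_0^3), can be cross-checked against direct numerical integration of the radial Fourier transform (4\pi/K) \int_0^\infty n(r) \sin(Kr) r \, dr. A sketch with a_0 = 1:

```python
import numpy as np

A0 = 1.0  # Bohr radius (working in units of a_0)

def n1s(r):
    """Electron density of the hydrogen 1s state."""
    return np.exp(-2.0 * r / A0) / (np.pi * A0 ** 3)

def form_factor_numeric(K, rmax=40.0, npts=200000):
    """Radial Fourier transform (4*pi/K) * integral of n(r) sin(K r) r dr."""
    r = np.linspace(1e-8, rmax, npts)
    f = n1s(r) * np.sin(K * r) * r
    dr = r[1] - r[0]
    integral = 0.5 * np.sum(f[1:] + f[:-1]) * dr   # composite trapezoid rule
    return 4.0 * np.pi * integral / K

def form_factor_analytic(K):
    return 1.0 / (1.0 + (K * A0 / 2.0) ** 2) ** 2
```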
3. Consider a periodic layered magnetically permeable medium, with alternating layers of permeability \mu_1 and \mu_2, and thicknesses d_1 and d_2. The dielectric constant of each layer is unity. (We
are using esu units here.) Consider a linearly polarized plane electromagnetic wave of frequency \omega propagating in a direction perpendicular to the layers. The first layer of medium 1 occupies the
region from z = 0 to z = d_1; the first layer of medium 2 occupies d_1 < z < d_1 + d_2.
(a). Write down the most general form of the wave (i) for 0 < z < d_1; (ii) for d_1 < z < d_1 + d_2.
(b). Use Bloch's theorem, combined with the known boundary conditions at d_1 and d_1 + d_2, to write down four conditions on the four unknown amplitudes you found in part (a).
(c). Hence, write down a determinantal condition which determines the frequencies for a given Bloch vector k. Do not attempt to solve this condition.
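For parts (b) and (c), the determinantal condition is equivalent to the standard transfer-matrix statement cos K(d_1 + d_2) = (1/2) Tr(M_2 M_1), where M_j propagates the (E, H) field pair across layer j. A numerical sketch in units with c = 1 (this is a check of the machinery, not a solution write-up for the problem):

```python
import numpy as np

def layer_matrix(mu, d, omega):
    """(E, H) transfer matrix across a layer with epsilon = 1, permeability mu."""
    n = np.sqrt(mu)        # refractive index sqrt(eps * mu) with eps = 1
    Y = 1.0 / np.sqrt(mu)  # wave admittance, H/E
    delta = n * omega * d  # optical phase across the layer (c = 1)
    return np.array([[np.cos(delta), 1j * np.sin(delta) / Y],
                     [1j * Y * np.sin(delta), np.cos(delta)]])

def bloch_halftrace(mu1, d1, mu2, d2, omega):
    """Equals cos(K*(d1 + d2)); |value| <= 1 marks a propagating Bloch band."""
    M = layer_matrix(mu2, d2, omega) @ layer_matrix(mu1, d1, omega)
    return 0.5 * np.trace(M)

# Sanity check: with mu1 = mu2 = 1 the medium is uniform, so the half-trace
# must reduce to cos(omega * (d1 + d2)).
ht = bloch_halftrace(1.0, 0.3, 1.0, 0.5, 0.7)
```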
The optional term project for Physics 836 will consist of an oral or written presentation, or a computer project, on a special topic involving applications of electromagnetism to a problem of current
research interest. The project should be designed as follows:
You do not need to do any original research yourself.
The term paper may consist of an oral or a written presentation, or a computer project.
If written, the paper should be of order 10-15 pages, TYPED and double-spaced. It should include suitable references and may also include a few carefully selected figures. Note: the equations DO NOT
have to be typed.
The deadline for the paper is Monday, June 4 at 11 AM. No extensions will be allowed. I would very much prefer to receive the papers by Friday, June 1.
If an oral presentation, the talk should be about 30 minutes, of which about 5-8 should be left for discussion. The paper can be presented using transparencies (preferred) or as a chalk talk. If I
can find a way to get a projector into the classroom (doubtful), I might consider a powerpoint presentation.
The paper will be graded on both presentation and content. By ``presentation,'' I mean clarity and organization of the exposition. By ``content,'' I mean how well you understand the chosen topic, as
revealed in the paper or talk. In judging content, I will take into account the difficulty of the topic chosen - I do not expect the same degree of mastering of the most difficult topics, and I
expect GREATER understanding of those topics which have already been partly covered in class (i. e., you will have to go beyond the classroom material). At the moment I expect to give about 40% for
presentation and 60% for content.
A non-inclusive list of some possible topics is given below. I will be adding to these in the next few days, and I invite you to propose your own topics (but these must be approved by me).
I will consider allowing projects done in teams of two. In this case, one person will give the talk and one will write the paper. Such projects must be approved in advance.
The time table for these projects is given at the end of this assignment (subject to small changes).
Some possible term paper topics for Physics 836:
Synchrotron radiation: frequency-distribution, applications to studies of solids, biology, etc.
Electromagnetic wave propagation through anisotropic materials: this includes several possible topics, including magneto-optics, optical rotation, optically active materials (such as biological molecules).
Problems in nonlinear optics: optical Kerr effect, second harmonic generation, self-focusing of electromagnetic waves.
Cherenkov radiation and its applications.
Topics in magnetohydrodynamics
Electrodynamics of superconductors (London equations, Ginzburg-Landau equations, Abrikosov flux lattice).
Models for the complex dielectric functions of real materials.
Problems in light scattering: Rayleigh scattering, Brillouin scattering, Raman scattering, critical opalescence, electromagnetic scattering from periodic arrays of scatterers.
Aspects of the Hall effect and magnetoresistance of materials.
Interactions between gravitation and electromagnetic radiation.
Quantization of the electromagnetic field.
Light scattering from rough surfaces
Medical applications of electromagnetic radiation.
Electrorheological and/or magnetorheological fluids
Left-handed materials: nature and applications
Percolation theory: network models and simulations (this would make a good computer project).
Photonic band gap materials (this will be covered in class, but would make a good, if challenging, computer project)
Optical traps, optical lattices, etc.
Note: further topics will be added in the next several days. Some of the topics listed above are actually groups of several topics.
Time Table:
By Monday, April 23: written proposal of topic due. (This does not commit you to that topic, nor does it commit you to do a project.)
By Friday, May 4: half-page outline plus two or three references are due. Also, you should choose a written or oral presentation or a computer project by that date.
Monday, May 21: written papers due at beginning of class. Also, oral presentations will begin on or about this date. N. B.: All students required to attend all oral presentations.
My office hours will be Monday from 4-5PM and Tuesdays and Wednesdays 11:30 A. M. to 12:30 PM, in Smith 4034. If you are unable to make the office hours, please email me and I will arrange an
appointment. My office phone is (292)-8140, and I can easily be reached by email (stroud@ohstpy.mps.ohio-state.edu).
The grader will continue to be Masa-oki Kusunoki (masa@pacific.mps.ohio-state.edu).
o Charles Augustin de Coulomb
o Simeon Denis Poisson
o Pierre Simon Marquis de Laplace
o NOT Pierre Simon Marquis de Laplace
o Johann Karl Friedrich Gauss
o Wilhelm Bessel
o Andre Marie Ampere
o Michael Faraday
o James Clerk Maxwell
o Hendrik Kramers
o Hendrik Lorentz
o Albert Michelson
o Albert Einstein | {"url":"http://www.physics.ohio-state.edu/~stroud/Eandm.html","timestamp":"2014-04-20T23:26:58Z","content_type":null,"content_length":"16879","record_id":"<urn:uuid:da619432-6d79-4300-98d6-ae09d6fb92ff>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00095-ip-10-147-4-33.ec2.internal.warc.gz"}
How the **** do i figure out the volume of a triangle
02-12-2009 #1
How the **** do i figure out the volume of a triangle
Obviously i fail at math but im trying to figure out the volume of this triangle-ish box im working on designing. I think im going in the wrong direction here but i need about 7 cubes net, and a
box no taller than 15 inches, has to be slant face as the subs are 15.5 inches wide. Im gonna attempt to do this myself but im having trouble planning everything out before i get the materials
Heres my max specs. I could go bigger but i wanna leave a little airspace around the box too.
15.5 inches tall
18 inches deep
55 inches wide (this is pushing the limits of my trunk, i have 60 inches of width in total, at the trunk opening)
heres what i got so far
Im better than you.
Re: How the **** do i figure out the volume of a triangle
add top and bottom depth and divide by 2, thats the length of the depth
woc47, nigro013, defconx3, louisiana_crx, bld 25, supa_c, lancer1978, sxkicker89, 310w6, more, i forget
Re: How the **** do i figure out the volume of a triangle
separate that into a triangle and a rectangle. for the triangle its (1/2)(base)(height)(length) and then for the rectangle its just (H)(W)(L)
Re: How the **** do i figure out the volume of a triangle
Measure the depth and 1/2 the height; use that as your "D" in (L * W * D / 1728) and there is your answer
"humility in the face of overwhelming genius will obviously amount to hypocrisy"
Re: How the **** do i figure out the volume of a triangle
If all else fails, build it, and fill it with packing materials.
Then empty the packing materials into a square or rectangular container.
Measure said packing materials.
Maxima is SOLD.
New project COMING SOON!
Re: How the **** do i figure out the volume of a triangle
use geometry skills learned in 10th grade
1994 Thunderbird
JVC KD-AV7010
Alpine 6x9 3-ways
Pioneer 6x8 3-ways (rear)
JVC CH-X1500 12-disc MP3 Changer
Alpine Type X 12" 2 cubes @ 30hz
MTX 404 runs the inside
MTX 1501d runs the X
Optima G75/25 RT up front
2x Optima G31 YT in the trunk
200a EA HO Alt (1/0 Big 3)
ALL MY WIRE IS WELDING CABLE
DO NOT SEND ME ANYTHING USING UPS. I WILL NOT DEAL WITH THEM EVER AGAIN
OFFICIAL FEEDBACK THREAD
Re: How the **** do i figure out the volume of a triangle
im getting just under 7 like 6.8 assuming those are outer dimensions using 3/4"
XBL: ANGRYCHEESETROL
PSN: ANGRYCHEESETROLL
Re: How the **** do i figure out the volume of a triangle
You are a fucking retarded
Re: How the **** do i figure out the volume of a triangle
A triangle is two dimensional; thus no volume.
A prism is the word you are looking for.
Black cat, white cat...all that matters is that it catches mice. -Deng Xiaoing
I do homework for money. I specialize in finance. PM me for estimates.
If someone uses me as a reference, please contact me first.
Re: How the **** do i figure out the volume of a triangle
that will get me about 4 1/2 cubes before displacement. Im gonna have to figure out something else
Im better than you.
Re: How the **** do i figure out the volume of a triangle
Area of that triangle = (1/2)(12)(14) + (4)(14)
Multiply it by how many inches you want to extrude it and you'll get its volume in cubic inches.
Divide it by 12*12*12 to get cubic feet.
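In code, the whole thread boils down to a few lines. Using the dimensions read off the sketch above (a 12 in by 14 in triangle on top of a 4 in by 14 in rectangle, extruded across a 55 in width — treat these as illustrative and plug in your own cuts):

```python
def box_volume_ft3(tri_base, tri_height, rect_height, width):
    """Volume of a wedge-shaped box: triangular + rectangular cross-section,
    extruded along the box width. All inputs in inches; result in cubic feet."""
    area_sq_in = 0.5 * tri_base * tri_height + tri_base * rect_height
    return area_sq_in * width / 1728.0  # 1728 cubic inches per cubic foot

vol = box_volume_ft3(tri_base=14, tri_height=12, rect_height=4, width=55)
# ~4.46 cubic feet gross, before subtracting driver, port, and bracing displacement
```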
Ben Hemp
1992 GMC Sonoma Ext Cab NEW BUILD
2008 MECA S1 KY State Champion, 151.0 dB
2008 MECA S1 WorldChampion, 151.8 dB
2010 MECA scores: 150.9 dB S2, 151.6 dB S3
Refs: smd4life, Subst4nce, wccoug21, dacheatham, thegreatestpenn, jshak07
Re: How the **** do i figure out the volume of a triangle
there is a calculator for that specific box shape on BCAE.
2005 Saab 93 turbo Linear Build coming soon.
Rear Bench Seat: 2 fusion 6.5” marine speakers
Driver and Passenger Panels: 2 Fusion 6.5” Power Plant
Profile 4ch ap1040^
Tower: 4 Soundstream Picasso 6.5”
Profile 4ch ap1040^
Bow: 2 Infinity 5.5”
Profile 2ch ap600^
Subs: 2 Alpine Type R’s in separate 2.5cube boxes with 4” aero ports tuned at 32hz. Waterproofed with 4 coats of resin and marine paint.
Hifonics bxi1606d
200a H/O marine grade alternator (100 at idle)
1/0 knukonceptz to and from all the amps, same 1/0 connecting 2 Optima Yellow Top batteries. (batteries a on either side of engine compartment)
Re: How the **** do i figure out the volume of a triangle
half way down
ref: looney tune, blackbonnie, BallinByNature, MzLady, MUD, craigzter, boltpride, 01Ranger, dennit469, sundownz, stishdr, Flipx99, SocialStealth, CgFire, Xprime4, loudwhat?
ebay: farmerehsan(old) siddiquiehsan(new)
<img src="http://www.bouncingtrucks.com/gallery/albums/userpics/10001/004%7E0.GIF">
PM ME IF U NEED TRAIN HORNS OR ANYTTHING TO DO WITH AIR/BAGGING i get it for the cheapness
Its going to take 50 bucks sahwee
paypal is gregoriahson@gmail.com
let me make sure i still have em though
But really 50 for noods of the sis lol
Re: How the **** do i figure out the volume of a triangle
2.982 cubic feet?
Re: How the **** do i figure out the volume of a triangle
Just seperate it into two pieces...the rectangle on the right and the triangle on the left....then simply find the area of the rectangle and then just (height x width = y....then y/2 will get you
the area of the triangle)...then add that to the rectangle's area and multiply it by your depth of the object for your volume
<First and only dual-alternator 'real' Saturn (pre 2002) on the planet>
Team Concepts, Team Shok Industries Team XS Power
Soon: Mystery subs + AQ20k
12.75 cubes net ~25Hz tuning 200 sq inches of port (lowz?)
AC position- 250 amp alt
Stock alt position- 225 amp alt
(2) XS Power D3100's
4 power+ 4 ground runs of 1/0-gauge to rear
152.1 @ 40Hz legal, certified...just getting started.
| {"url":"http://www.caraudio.com/forums/enclosure-design-construction-help/384702-how-****-do-i-figure-out-volume-triangle.html","timestamp":"2014-04-17T13:16:31Z","content_type":null,"content_length":"102991","record_id":"<urn:uuid:1c39e5fe-cbca-4b46-bb62-64a3dadbed00>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00096-ip-10-147-4-33.ec2.internal.warc.gz"}
Here's the question you clicked on:
the diff. between 2 integers is at least 14. the smaller integer is 2. what is the larger integer? ex: x<50
| {"url":"http://openstudy.com/updates/513d74a6e4b01c4790d2b145","timestamp":"2014-04-18T00:16:39Z","content_type":null,"content_length":"37087","record_id":"<urn:uuid:63288f67-b461-4642-85a2-946ecdc73f39>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00105-ip-10-147-4-33.ec2.internal.warc.gz"}
PIMS Distinguished Lecturer: Prof. Mladen Bestvina
PIMS Distinguished Lecturer: Prof. Mladen Bestvina
• Start Date: 05/28/2012
• End Date: 05/30/2012
Prof. Mladen Bestvina (University of Utah)
University of British Columbia
Monday, May 28, 3:00 pm
Wednesday, May 30, 3:00 pm
WMAX 110, 1933 West Mall
May 28:
Asymptotic dimension
Most of the talk will be an introduction to asymptotic dimension, a notion of dimension invariant under quasi-isometries. I will try to outline a proof of a recent result (joint with Bromberg and
Fujiwara) that mapping class groups have finite asymptotic dimension.
May 30:
Second bounded cohomology
Most of the talk will be an introduction to (second) bounded cohomology of a discrete group. I will explain classical constructions of bounded cocycles and recent results (joint with Bromberg and
Fujiwara) regarding mapping class groups and a construction of bounded cocycles with coefficients in an arbitrary unitary representation.
Mladen Bestvina works in the area of geometric group theory. He is a Distinguished Professor in the Department of Mathematics at the University of Utah.
Bestvina is a three-time medalist at the International Mathematical Olympiad. He received the Alfred P. Sloan Fellowship in 1988–89 and a Presidential Young Investigator Award in 1988–91. He gave an
Invited Address at the International Congress of Mathematicians in Beijing in 2002, as well as a Unni Namboodiri Lecture in Geometry and Topology at the University of Chicago.
He is an associate editor of the Annals of Mathematics (http://en.wikipedia.org/wiki/ | {"url":"http://www.pims.math.ca/scientific-event/120528-pdlpmb","timestamp":"2014-04-17T06:55:41Z","content_type":null,"content_length":"36048","record_id":"<urn:uuid:c7eedcbc-1110-44e4-8939-85b236e214f5>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00319-ip-10-147-4-33.ec2.internal.warc.gz"}
Summary: Comput. Maths. Math. Phys., Vol.35, No.5, pp. 539551, 1995
Pergamon c 1994 Elsevier Science Ltd
Printed in Great Britain. All rights reserved
A.S. ANTIPIN
(Revised version 16 December 2002)
It is proved that proximal methods converge to the sharp, strongly convex, and degenerate
fixed points of extremal mappings.
1. STATEMENT OF THE PROBLEM
We will consider the problem of calculating the fixed point of the extremal mapping
v* ∈ Argmin{Φ(w, v*) : w ∈ Ω}, v* ∈ Ω, (1.1)
where the function Φ(w, v) is defined on the square Ω × Ω and Ω ⊆ R^n. We shall assume that
Φ(w, v) is convex with respect to the variable w for each fixed v. Also, the set Argmin{Φ(w, v) : | {"url":"http://www.osti.gov/eprints/topicpages/documents/record/423/2520042.html","timestamp":"2014-04-17T19:23:45Z","content_type":null,"content_length":"8029","record_id":"<urn:uuid:69094c51-dae7-456e-ac6b-9aa111c90469>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00141-ip-10-147-4-33.ec2.internal.warc.gz"}
Theoretical quantum optics is my main research interest, that is, the quantum scale interaction of light with atoms, molecules, or solid state materials.
Quantum optics not only displays the principles of quantum mechanics in their purest form but also makes them visible on a rather macroscopic scale which can lead to many surprising effects. One of
those is so-called electromagnetically induced transparency, i.e., the suppression of resonant light absorption, which happens without decreasing the interaction strength. Based on this and similar
effects are many of the subfields studied in my group, e.g., quantum information processing, superradiance, or metamaterials. The success of these projects is based on (a) the fact that light can
probe and manipulate the response of atoms (and other quantum systems) and (b) the use of so-called quantum interference.
Recent highlights of our research include ideas on "light storage" and a quantum optics-based negative index of refraction without absorption, as well as quantum optics with molecules and cooperative
(superradiant) systems. As is typical for this field, we have some very close collaborations with experimental groups. | {"url":"http://www.phys.uconn.edu/~syelin/overview.htm","timestamp":"2014-04-18T08:03:15Z","content_type":null,"content_length":"6911","record_id":"<urn:uuid:450ad289-9fa9-4cf1-8190-0ad2079e7eaf>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00613-ip-10-147-4-33.ec2.internal.warc.gz"} |
RDF Semantics / error in RDFS entailment lemma
From: Herman ter Horst <herman.ter.horst@philips.com> Date: Fri, 28 Oct 2005 13:51:36 +0200 To: www-rdf-comments@w3.org, phayes@ihmc.us Message-ID:
Following the period during which I reviewed preliminary
versions of the RDF Semantics spec (on behalf of the
WebOnt WG) I investigated various aspects of the RDF
semantics, which led to two papers [1] [2] that contain
information that can be viewed as complementing the spec [3].
In this message I include several comments relating to [3],
derived from [1] [2].
In particular I discovered an error that remains
in the RDFS entailment lemma and that was missed earlier.
A) In [3] decidability and complexity of RDFS entailment are
not considered. The completeness proof in [3] does not prove
decidability of RDFS entailment because the RDFS closure graphs
used are infinite.
In [2] it is proved that these infinite RDFS closure graphs
can be replaced by finite, partial RDFS closure graphs that
can be computed in polynomial time. This is used in [2]
to prove that RDFS entailment is decidable, NP-complete, and
in P if the target RDF graph has no blank nodes.
More specifically, a decision procedure for RDFS entailment
can be described as follows.
Given: a finite set S of finite RDF graphs, a finite RDF graph G:
- form a merge M(S) of S
- add to M(S) all RDF, RDFS axiomatic triples, except those
including a container membership property rdf:_i
(this is a finite set of axiomatic triples)
- for each container membership property rdf:_i that appears
in either S or G, add the four (RDF, RDFS) axiomatic triples
that include rdf:_i; if S and G do not include a container
membership property, then add the four axiomatic triples
for just one container membership property
- apply rule lg to any triple containing a literal, in such
a way that distinct, well-formed XML literals with the same
value are associated with the same surrogate blank node
- apply rules rdf2 and rdfs1 to each triple forming a
well-formed XML literal or a plain literal, respectively
- apply rule rdf1, rule gl and the remaining RDFS entailment
rules until the graph is unchanged
In [2] I call the graph H that is now obtained a partial
RDFS closure: it is finite (see [2] for the proof).
It can be decided whether S RDFS-entails G by checking whether
H contains an instance of G as a subset or whether H
contains an XML-clash (see [2] for the proof).
In this decision procedure, the entailment rules are interpreted
in a broader way than in [3]: blank nodes are allowed
in predicate position. See the following comment.
B) There is an error in the RDFS entailment lemma [3].
For example, the three RDF triples
p subPropertyOf b .
b domain u .
v p w .
(where b is a blank node) RDFS-entail the triple
v type u .
This RDFS-entailment cannot be found with the axiomatic
triples and entailment rules for RDFS.
These entailment rules are therefore not complete.
In [2] it is proved that this problem can be solved by extending
RDF graphs to allow blank nodes in predicate position.
The origin of the problem seems to be a 'mismatch' between
the notion of RDF graph and the semantics of RDF.
The semantics allows blank nodes to refer to properties,
while RDF graphs do not allow blank nodes in predicate position.
It seems that if blank nodes would be allowed in predicate
position in RDF graphs, then the abstract syntax (i.e.
RDF graphs) would fit more closely with the semantics.
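As a toy illustration (mine, not from [2] or [3]): a Python sketch of the forward-chaining fixpoint computation from comment A, restricted to just three RDFS rules (rdfs2, rdfs5, rdfs7), with blank nodes treated like any other term so that they may occur in predicate position. The counterexample from comment B then becomes derivable:

```python
# Toy forward-chaining sketch; NOT the full decision procedure of [2].
# Rules implemented: rdfs2 (domain), rdfs5 (subPropertyOf transitivity),
# rdfs7 (subPropertyOf propagation). Triples are (subject, predicate,
# object) tuples; blank nodes like "_:b" are ordinary terms here, so
# they may appear in predicate position.

RDF_TYPE = "rdf:type"
RDFS_DOMAIN = "rdfs:domain"
RDFS_SUBPROP = "rdfs:subPropertyOf"

def rdfs_closure(triples):
    """Apply the three rules until no new triple is derived (fixpoint)."""
    graph = set(triples)
    while True:
        new = set()
        for s, p, o in graph:
            if p == RDFS_SUBPROP:
                # rdfs7: (u s y) and (s subPropertyOf o) => (u o y)
                new |= {(s2, o, o2) for (s2, p2, o2) in graph if p2 == s}
                # rdfs5: subPropertyOf is transitive
                new |= {(s, RDFS_SUBPROP, o2) for (s2, p2, o2) in graph
                        if p2 == RDFS_SUBPROP and s2 == o}
            elif p == RDFS_DOMAIN:
                # rdfs2: (u s y) and (s domain o) => (u rdf:type o)
                new |= {(s2, RDF_TYPE, o) for (s2, p2, o2) in graph if p2 == s}
        if new <= graph:
            return graph
        graph |= new

# The triples from comment B: with the blank node _:b usable as a
# predicate, (v rdf:type u) is derivable.
g = {("p", RDFS_SUBPROP, "_:b"),
     ("_:b", RDFS_DOMAIN, "u"),
     ("v", "p", "w")}
assert ("v", "_:b", "w") in rdfs_closure(g)      # via rdfs7
assert ("v", RDF_TYPE, "u") in rdfs_closure(g)   # via rdfs2
```

This is only meant to make the fixpoint idea concrete; the real procedure also handles the axiomatic triples, the literal rules (lg, gl, rdf1, rdf2, rdfs1), and the final subgraph-instance check against G.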
C) The definition of RDFS interpretations in [3] does not make it
obvious that the many uses of the functions IEXT and ICEXT in
this definition are inside their domains, as required (i.e.
that IEXT is used only for properties and ICEXT only for classes).
In [2] it is proved that the description of RDFS interpretations
in [3] is well-defined. This is done by using a slightly
reformulated, equivalent version of the definition.
D) In [2] the completeness, decidability and complexity results
for RDFS are extended to a version of datatyped reasoning
that weakens D-entailment.
Earlier discussion on points C and D took place on rdf-comments
in the period during which I reviewed preliminary versions of
the RDF Semantics spec.
Herman ter Horst
Philips Research
[1] H. ter Horst, Extending the RDFS Entailment Lemma,
Proceedings 3rd International Semantic Web Conference (ISWC2004),
Springer LNCS 3298, pp. 77-91.
The following is a revised and extended version of [1]
[2] H. ter Horst, Completeness, decidability and complexity of
entailment for RDF Schema and a semantic extension involving the
OWL vocabulary, Journal of Web Semantics 3 (2005) pp. 79-115.
[3] http://www.w3.org/TR/rdf-mt/
Received on Friday, 28 October 2005 11:53:44 GMT
This archive was generated by hypermail 2.2.0+W3C-0.50 : Friday, 21 September 2012 14:16:34 GMT | {"url":"http://lists.w3.org/Archives/Public/www-rdf-comments/2005OctDec/0003.html","timestamp":"2014-04-19T09:34:24Z","content_type":null,"content_length":"13291","record_id":"<urn:uuid:3a292fcf-96be-47f4-a492-3b2642871a3d>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00429-ip-10-147-4-33.ec2.internal.warc.gz"} |
Homework Help
10th grade Chemistry
go to tenth grade sience from google then go to the site that you think will help you the most
Thursday, December 3, 2009 at 10:37am
4th grade
Please indicate you subject, not your grade, and avoid name changes.
Monday, November 30, 2009 at 11:46pm
For the 2nd part i got around 1100k by playing around, but not sure if it is correct
Sunday, November 22, 2009 at 2:52am
juazez-lincoln elemen tary
dere mr robles 2nd room 305 your the best.
Tuesday, November 17, 2009 at 10:23pm
Find the center and ardius of the circle (x-1)2nd power+y2nd power=36
Sunday, November 15, 2009 at 2:56pm
I couldn't find anything in the 1st link, but thanks for the 2nd one!
Wednesday, November 11, 2009 at 6:00pm
4th grade
The only word that I can think of that would fit is Creux which is a harbour. A somewhat hard question at Grade 4.
Wednesday, November 11, 2009 at 4:13am
9th Chemistry
Why is there a big space in the periodic table between the 2nd & 3rd periods? The activity of the electrons is related somehow.
Sunday, November 8, 2009 at 8:54pm
3rd grade
3rd grade Explain how to add with carrying give example
Wednesday, November 4, 2009 at 8:43pm
The cicular graph is constructed based on percentages of each grade. Each percentage is then distributed over the full circle of 360°. If there are twice as many of each grade, the percentage of each
grade will not change, therefore the circular graph (pie-chart) will not ...
Sunday, November 1, 2009 at 6:56pm
3rd grade
Are you sure that is 3rd grade? I am a 4th grader and I don't know any thing about it. ps. When you are done with homework go to clubpegiun. It is really cool.
Sunday, November 1, 2009 at 8:43am
12th grade (Physics)
What does km stand for (I'm only in 4th grade)?
Sunday, November 1, 2009 at 8:36am
12th Grade Math
you should Pre-Calculus exams grade 12 exams joined week
Thursday, October 29, 2009 at 10:06pm
3 grade
what i dont get your question and your in 3rd grade and they ask you this
Thursday, October 29, 2009 at 9:38pm
2nd grade
Try here: http://www.google.com/search?q=shrinking+patterns+in+math&ie=utf-8&oe=utf-8&aq=t&rls=org.mozilla:en-US:official&client=firefox-a Sra
Thursday, October 29, 2009 at 2:13am
3rd grade
As for how you do it with the new math, no I can not, but the "old way" yes I can! Put 1500 down on paper and directly under it write 1132. Now the last 0 of 1500 represents 10. Subtract 2 from that
10 and get 8. Carry the 1 from 10 to add to the 3 = 4. Subtract 4 ...
Wednesday, October 28, 2009 at 8:00pm
3rd grade
This is not third grade.
Saturday, October 24, 2009 at 3:27pm
where exactly did you get the 38 and 36 from in the 2nd portion of the question?
Thursday, October 22, 2009 at 8:36pm
Grade 8 Science
scroll all the way down to Grade 8 Science help needed on this page.
Wednesday, October 21, 2009 at 8:04pm
my 2nd grader has a spelling word "cultures" and need to put into a sentence..
Monday, October 19, 2009 at 3:10pm
Linear Programming
OKay My question is: Two factories manufacture 3 different grades per paper. the company that owns the factories has contracts to supply at least 16 tons of low grade, 5 tons of medium grade, and at
least 20 tons of high grade paper. It cost $1000 per day to operate the first ...
Friday, October 16, 2009 at 8:30pm
beneath is the 1st one and I think douma is the 2nd one
Monday, October 12, 2009 at 6:01pm
Th3 first word is error nd the 2nd analyze. Sra
Saturday, October 10, 2009 at 11:49am
Algebra 1A
From a base elevation of 8200 ft. a mountain peak in colorado rises to a summit elevation of 14588 ft. over a horizonal distance of 15842 ft. find the grade of the peak. The grade of the peak. The
grade of the peak is %. Round to the nearest tenth as needed. Someone help ...
Thursday, October 8, 2009 at 1:54pm
6th grade Reading
Worksheet for An Ant's Life and The Best School Year Ever Vocabulary im in 6th grade
Wednesday, October 7, 2009 at 3:43pm
english 7th grade
Patrick, you get to take 7th grade Eng over again. Read the Wikepedia article.
Tuesday, October 6, 2009 at 7:46pm
3rd grade
3RD GRADE? Technically, I would say the Equivalence Property is the correct answer. But at third grade, I wonder if the teacher is focusing on the Additive property. Ask the student if he/she has
heard these terms.
Tuesday, October 6, 2009 at 5:19pm
3rd grade
what does property mean in 3 rd grade math example 4+8= 12 find the property
Tuesday, October 6, 2009 at 5:15pm
12th grade History?
It would also help to indicate the subject rather than your grade in the "School Subject" space.
Tuesday, October 6, 2009 at 11:37am
spread false stories of roman Catholic plots:this word has 5 letters, 2nd letter is a, and the last letter is s.
Sunday, October 4, 2009 at 9:33pm
this act made it illegal to to hold an individual in prison without trial: this word has 12 letters in it and the 2nd letter is a
Sunday, October 4, 2009 at 9:08pm
To. Ms. Sue
This act made it illegal to hold an individual in prison without trial: It has 12 letters and the 2nd letter is A : Habeas Corpus
Sunday, October 4, 2009 at 8:30pm
To. Ms. Sue
This act made it illegal to hold an individual in prison without trial: It has 12 letters and the 2nd letter is A
Sunday, October 4, 2009 at 8:23pm
To. Ms. Sue
spread false stories of roman Catholic plots: it has 5 letters, the 2nd letter is a and the last one is s.
Sunday, October 4, 2009 at 8:21pm
x =# fifth grade students. y = # fourth grade students z = # third grade students. ========================= x + y + z = 33 x = z + 6 y = z Solve for x, y, z.
Thursday, October 1, 2009 at 11:46pm
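For what it's worth, the substitution the answer above sets up can be checked in a couple of lines:

```python
# x = z + 6 and y = z, so (z + 6) + z + z = 33 gives 3z = 27.
z = (33 - 6) / 3
x, y = z + 6, z
print(x, y, z)  # -> 15.0 9.0 9.0
assert x + y + z == 33
```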
5th grade Math
i was in 5th grade last year and believe me its EASIER than you will think it is!! ;)
Wednesday, September 30, 2009 at 5:13pm
go on google type in probability and click on the 2nd search.
Monday, September 28, 2009 at 10:18pm
7th grade Pre-Algebra
Using the definition, 10(to the negative sixth power) = 1/10to the six power and x(negative2) = 1/xto the second power The definition also shows that 1/10to the six power = 10to the negative 6th
power and 1/xto the 2nd power = x(negative2) I think thats the directions :S 1. ...
Tuesday, September 22, 2009 at 9:02pm
7th grade
This is a bit heavy reading for 7th grade BUT if you scroll through it you will be able to find a number of proteins listed.
Tuesday, September 22, 2009 at 7:34pm
the 2nd problems answer is 0.26 and the way i got that is by (2x-1)=1, 1x0.1=0.1x, then (4x+4)=8x, 8x0.02=0.16x, 0.16+0.1=0.26x
Monday, September 21, 2009 at 5:18pm
Math algebra
1/5(x +1) + 2/5(x-1) = 3/10 2nd 0.1(2x-1) + 0.02(4x+4)= 0.05x
Monday, September 21, 2009 at 4:04pm
Math algebra
1/5(x +1) + 2/5(x-1) = 3/10 2nd 0.1(2x-1) + 0.02(4x+4)= 0.05x
Monday, September 21, 2009 at 3:57pm
maths (2nd time)
Solve the equation. Round to the nearest hundredth: 1. 1.59x+4.23=3.56x+2.12 2. -3(1.25-2.48x)=8.15+5.86
Sunday, September 20, 2009 at 4:12pm
Can anybody figure out how to find the maximum value on this? -b/2a doesn't work either since it's not a 2nd degree polynomial.
Friday, September 18, 2009 at 4:06am
simple subject of 1st sentence ( cup ) and 2nd sentence ( crinkle )
Monday, September 14, 2009 at 10:29pm
Solve the following 2x (2nd power) + 5x - 12 = 0
Monday, September 14, 2009 at 12:41pm
Simplify the following expression 2(3 x - 2)(2nd Power) - 3x (x + 1) + 4
Sunday, September 13, 2009 at 9:36pm
you are standing on one side of the river. Figure out the width of the river. All calculations have to be made on the side you are standing on. You cant step into the river or cross it. You may use
the 2 laws ( sine and cos ). no guessing allowed. has to be accurate. The ...
Sunday, September 13, 2009 at 1:37pm
oh by the way to change the answer from mixed to a fraction or vice versa thaen just use the "2nd" button also known as "shift"
Thursday, September 10, 2009 at 9:41pm
5th grade
Will someone tell me a webpage my daughter can get help on solving equations, word problems w/ algebraic chains...or for 5th grade math. Thank you
Thursday, September 10, 2009 at 6:12pm
3/4(-8/-2+3{2}) In written words it three fourths multiplied neg 8 div neg 2 plus 3 2nd power
Tuesday, September 8, 2009 at 6:09pm
(a+b)2nd +a2nd + b2nd (a+b)squared + (a+b)squared its reverse distribution i think if its not im sorry
Wednesday, September 2, 2009 at 6:25pm
math problem
2nd 8000 1st 4000
Saturday, August 8, 2009 at 10:30pm
math problem
2nd 16,000 1st 8,000
Saturday, August 8, 2009 at 10:17pm
math problem
2nd $1600 1st $800
Saturday, August 8, 2009 at 10:10pm
How would the information about grade and frequency change if there were twice as many of each grade? Grade Frequency A 2 B 8 C 11 D 2 F 1 I think that the grades would double if there was twice as
many. Thanks.
Wednesday, July 29, 2009 at 8:27pm
If you think about it, that won't work logically. Your 2nd pot is wider than your first pot, so it will have to be more shallow to fill the same amount of water. V= pi x r^2 x h So the first thing we
have to do is figure out the volume of the first cylinder. Pi = 3.14 The ...
Wednesday, July 29, 2009 at 8:49am
which is the complete subject,simple subject The two-and-a-half-month-old child received 2nd degree burns across the bridge of his nose. simple subject is: received 2nd degree burns :complete subject
is two-and-a-half-month-old
Sunday, July 26, 2009 at 6:57pm
You do the same thing, except replace 10 with -6, something like 6 +/- (2nd) LN
Monday, July 20, 2009 at 10:40pm
F - F2 = m1*a1 Get F2 from the FBD and Newton's 2nd law for block 2, as explained earlier.
Sunday, July 12, 2009 at 4:47pm
shouldnt the 2nd diabram have 5 squares. I believe the CHANGE in the number of squares grows by 4 each time. So, 1, 5, 13, 25, and so on.
Wednesday, June 17, 2009 at 4:23pm
I think the 2nd one would be easy to do.
Sunday, June 14, 2009 at 7:36pm
This is my question I am just wanting to make sure that I get a good grade. I have bombed this class and need the grade
Saturday, June 13, 2009 at 4:44pm
english is my 2nd language.
Friday, June 12, 2009 at 10:58pm
french help please!
no Imean it wat grade r u in, ill tellu wat grade i'm in if u tell me
Tuesday, June 9, 2009 at 8:28pm
the 2nd one does, but u wouldn't have to use because
Monday, June 8, 2009 at 12:30am
1st grade math
... since when has this become 1st grade maths? we never started this until high school, as simple as it is
Saturday, June 6, 2009 at 5:25am
Sí on your 2nd thought! Sra
Wednesday, June 3, 2009 at 11:17am
7th grade
Also indicate your subject rather than your grade in the School Subject line.
Monday, May 18, 2009 at 11:14am
Thanks to Mike, look at that 2nd one again. Are you saying the "imagen" is called? = llamada or "Cristo" is called = llamado. Sra
Thursday, May 14, 2009 at 7:21pm
I looked and couldn't find anything on the internet about the lowest temperatures on April 2nd in the U.S., excluding Alaska.
Monday, May 11, 2009 at 10:21pm
algebra tahnks to damon
can you help with the 2nd question? I have stressed myself over this and I am frustrated. I appreciate you help so much
Sunday, May 10, 2009 at 11:35am
help me with sequences problem. So if the sequence is 2, 5, 10, 17, 26 (1st differences: +3 +5 +7 +9; 2nd differences: +2 +2 +2), then the nth term can be worked out by writing out: n: 1 2 3 4; n squared: 1 4 9 16;
original sequence: 2 5 10 17; nth term = n squared + 1. But what about this question...
Sunday, May 10, 2009 at 10:27am
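The quoted rule can be checked directly (a one-liner, just to confirm the pattern):

```python
# n^2 + 1 reproduces the sequence 2, 5, 10, 17, 26 for n = 1..5.
print([n * n + 1 for n in range(1, 6)])  # -> [2, 5, 10, 17, 26]
```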
visual basic
Cant get my message dialog box to display anbody see waht wrong with it? The message box is located in my if then else statement Dim total As Integer = 0 Dim gradeCounter As Integer = 0 Dim grade As
Integer = 0 Dim average As Double = 0 ' sum grades in ListBox Do ' ...
Thursday, May 7, 2009 at 4:58pm
5th grade Math
i know right. this is hard and im in the 9th grade
Monday, May 4, 2009 at 6:10pm
What would you like us to do to help you. The 2nd phrase is NOT a complete sentence. Sra
Wednesday, April 29, 2009 at 11:34pm
5th grade
The title of this post would be 5th grade math not fifth grade
Wednesday, April 29, 2009 at 7:25pm
please answer asap
for my project i have to do research on a mathematician born in the 2nd centuary ... any help???
Tuesday, April 28, 2009 at 7:00pm
Quick science question #2
Why are the reactions in the 1st and 2nd stages of photosynthesis called "light reactions."
Saturday, April 25, 2009 at 5:55pm
2nd year macroecnomics
i have just come across this website i am not familiar with it thanks for the advice
Thursday, April 23, 2009 at 5:23pm
grandma (me) trying to help 2nd grader with pyramids, triangles, rectangular prisms and cylinders. how many cylindeers are needed to make 5 solid figures?
Thursday, April 16, 2009 at 5:15pm
8th grade
essay on should students only have to take reading andmath to go to 9th grade agree or disagree?
Monday, April 13, 2009 at 9:41pm
1. y=3/2x+7 so the equation is y=-2/3x+7 sorry, not sure about 2nd question
Tuesday, April 7, 2009 at 8:18pm
2nd #1. (pedir) = conjugate it to go with Miguel = pide very well done! Sra
Monday, April 6, 2009 at 9:14pm
Thanks Alice for trying, but that's not correct, the first answer is 50 I figured it out, but I'm still having a hard time with the 2nd question.
Monday, April 6, 2009 at 7:55pm
3rd grade
in the book " how to be cool in the third grade. Do you think Robbie was cool and why
Monday, March 30, 2009 at 7:47pm
2nd grade
One inch divided into 8 equal sections. In this oversize image, each inch is divided into 16 equal sections: http://www.mrmyers.org/Teacher_Resources/GIFs/rulers_combo.jpg The shortest markings
indicate the 1/16th-inch sections; the next shortest markings indicate the 1...
Monday, March 30, 2009 at 3:24pm
"Typical" might also apply to the 2nd post. Look up "average" on an online thesaurus. I hope this helps a little more. Thanks for asking.
Friday, March 27, 2009 at 12:17pm
10th grade English
What are some common 10th grade action words?
Wednesday, March 25, 2009 at 9:46am
6th grade math
I guess I do know what grade level, all I had to do was look at the subject title.
Sunday, March 22, 2009 at 3:58pm
10th grade (science)
For faster service, it is more important to indicate your subject rather than your grade level when posting questions.
Thursday, March 19, 2009 at 12:19am
1st grade
This is 4th grade math.
Sunday, March 15, 2009 at 1:53pm
1st grade
I would agree that this is not first grade math at all. The only thing that I could come up with is a cube. I didn't learn anything like this when I was in first grade.
Friday, March 13, 2009 at 8:37pm
dividing moniomials. 2 to the 2nd/2to the -3
Sunday, March 8, 2009 at 2:17pm
Grade 8 science
I remember in grade 8, we studied cells and we examined cork under a microscope. Corks contain cells, but is a non-living object.
Thursday, March 5, 2009 at 8:17pm
:) I read your 2nd post first. Sra (aka Mme) P.S. Thanks for YOUR help!
Wednesday, March 4, 2009 at 6:47pm
chem help
sorry the 2nd question is the right one
Sunday, March 1, 2009 at 12:26am
2nd grade
Here is a great site on the history of air transportation. http://inventors.about.com/library/inventors/bl_history_of_transportation.htm http://en.wikipedia.org/wiki/Aviation
Wednesday, February 25, 2009 at 3:56pm
4th grade
brandon, get real, or I will block you. This is a Fourth grade question.
Tuesday, February 24, 2009 at 6:50pm
3rd grade science
EASY FOR ME! Ok,ignous rock. I'm in 4th grade and I remember! YAY!
Monday, February 23, 2009 at 10:33pm
Pages: <<Prev | 10 | 11 | 12 | 13 | 14 | 15 | 16 | 17 | 18 | 19 | 20 | 21 | 22 | 23 | 24 | Next>> | {"url":"http://www.jiskha.com/2nd_grade/?page=18","timestamp":"2014-04-19T00:30:40Z","content_type":null,"content_length":"29523","record_id":"<urn:uuid:2ff5044f-bbd0-4d9f-b9ae-6c19b3432563>","cc-path":"CC-MAIN-2014-15/segments/1398223203235.2/warc/CC-MAIN-20140423032003-00217-ip-10-147-4-33.ec2.internal.warc.gz"} |
I'm not sure if anyone has asked this.
Currently I have just installed HCL on my hosting site, and every time someone posts a long message, it seems to refresh the page and delete the message.
Like above, I would have to type that out in short paragraphs for it to be accepted, like
Currently I have just installed HCL on my Hosting Site
and everytime someone posts a long message,
it seems to refresh the page and delete the message. | {"url":"http://www.helpcenterlive.com/smf/problem-t559.0.html;msg2706","timestamp":"2014-04-18T05:32:21Z","content_type":null,"content_length":"40986","record_id":"<urn:uuid:f3bc4dda-1a12-41ad-8505-0ffc22890ea4>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00384-ip-10-147-4-33.ec2.internal.warc.gz"} |
square roots
April 20th 2008, 01:28 PM
square roots
1.the square root of y+4 -2=3
2.the cubed root of 6x+9 +5=2
the space implies that the other numbers are not under the square root
April 20th 2008, 01:53 PM
any help would be greatly appreciated
April 20th 2008, 02:33 PM
April 20th 2008, 02:44 PM
$\sqrt{y+4}-2 = 3$
April 20th 2008, 02:53 PM
this is how it was suposed to read
April 20th 2008, 11:07 PM
Your original equation is $\sqrt{y+4}-3=2$. Add 3 on both sides.
You'll get bobak's equation.
But since squaring both sides of an equation is not an equivalence transformation, you must check whether the solution is valid. Plug the solution into the original equation and check whether the
equation holds:
$\sqrt{21+4}-3 \ \buildrel {\rm?} \over {\rm=} \ 2$
$5-3 \buildrel {\rm?} \over {\rm=} \ 2$
$2 \ \buildrel {\rm!} \over {\rm=} \ 2$. So y = 21 is a valid solution.
to #2:
$\sqrt[3]{6x+9} + 5 = 2~\iff~ \sqrt[3]{6x+9}=-3$
Cube both sides. You'll get:
$6x+9 = -27 ~\implies~ 6x =-36~\implies~x = -6$
Plug in this value into the original equation:
$\sqrt[3]{6 \cdot (-6)+9}+5 \buildrel {\rm?} \over {\rm=} 2~\iff~\sqrt[3]{-27}+5 \buildrel {\rm?} \over {\rm=} 2~\iff~ -3+5 \buildrel {\rm?} \over {\rm=} 2$....... which is obviously true.
And therefore x = -6 is a solution of this equation. | {"url":"http://mathhelpforum.com/algebra/35255-square-roots-print.html","timestamp":"2014-04-20T21:18:56Z","content_type":null,"content_length":"9020","record_id":"<urn:uuid:9b4f19ff-2a80-4291-95e5-87b701d070b5>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00663-ip-10-147-4-33.ec2.internal.warc.gz"} |
Modelling Tank Velocity
Hi guys. I've started working on a 3D tank game in OpenGL and I need some advice on how to model the speed and direction of the tank based on the speed of its two tracks.
This is a C++ project and I'm fairly confident with vector math and trig. The tank's left and right tracks can be controlled independently (ie. accelerated or even reversed). When both tracks are
engaged at full speed, the tank should move forward. If the left track is spinning faster than the right track, the tank should move forward while turning to the right. Likewise, if the left track
is reversed and the right track is at full speed, the tank should turn on the spot towards the left. Just like a real tank
I need some advice on how to model this. Each track has its own speed and the tank itself has a position and a direction vector.
One idea - give each track a vector facing away from the tank's center position (ie. left and right) and increase the length as that track accelerates. Then build the direction vector from the sum of
the two?
I'm sure this sort of thing has been done before
Posts: 283
Joined: 2006.05
Ugh... this sounds like a tough one to get right, because as the difference between the two track speeds changes the centre of rotation changes. i.e. if the left track has speed -1 and the right
track has speed +1 it rotates about the very centre of the two. But if left = +1 and right = 0 it rotates about the centre of the right track.
I don't have any idea how to help you. Just wanted to make life more difficult for anyone that wants to try.
Posts: 1,140
Joined: 2005.07
You can think of the treads as 2 vectors on either side of the center of the tank, and connected by a bar at the middle. For example, in ASCII it would look something like this:
To calculate the total forward motion, you add the vectors of the 2 tracks. To calculate the rotation, you need to do the torque on the bar in the center. The bar is the simplest to calculate, but a
box would probably be more correct for angular momentum. Of course, you could also just scale it as necessary, since game physics are all about approximations.
Posts: 336
Joined: 2004.07
If I were modeling it, I'd probably fudge the physics a little bit.
So you have the two treads, and the speed for each can be any value between a min and a max (say -1 and +1.) At rest, these values are 0. +1 is full speed ahead, and -1 is full reverse.
The tank itself has its speed and orientation. For the speed (along the orientation) just sum the values of the two vectors. (You could also have a tank speed variable somewhere you could multiply
by.) Example:
Tank speed = 10.0
Both treads forward: (1 + 1) * 10.0 = 20.0
One tread forward, one back: (1 + -1) * 10.0 = 0
Both treads back: (-1 + -1) * 10.0 = -20.0
For orientation, I would take the difference between those two values, and again, multiply by some turn speed.
Tank turn speed = 5.0
So both treads equal: (x - x) * 5.0 = 0
Left tread forward, right back: (1 - (-1)) * 5.0 = 10.0 (positive = right turn)
Right tread forward, left back: (-1 - 1) * 5.0 = -10.0 (negative = left turn)
Right tread forward, left at rest: (0 - 1) * 5.0 = -5.0 (slight left turn)
That gives you an angular velocity which you can apply to your orientation.
Justin Ficarrotta
"It is better to be The Man than to work for The Man." - Alexander Seropian
Thankyou very much for your help guys! I'll try out these ideas and I'll let you know how I get on.
JustinFic - I tried out your idea and with a little tweaking it works great! Thanks.
At some point I may need to implement real torque rotation but for now this works and looks fine.
Possibly Related Threads...
Thread: Author Replies: Views: Last Post
Suggestions for controls in a tank driving game iamflimflam1 0 1,803 Oct 7, 2010 03:33 AM
Last Post: iamflimflam1
OpenGL Velocity and Grouping of Lines (C++, Xcode) TheThirdL3g 2 3,126 Jul 29, 2010 01:28 PM
Last Post: SethWillits
Combining then splitting angular and linear velocity reubert 4 4,701 Sep 5, 2006 12:47 AM
Last Post: reubert
Speed distance velocity and other headaches Thinker 6 3,524 Jul 3, 2003 09:55 AM
Last Post: Thinker
Getting XY speed from an angle/velocity macboy 14 8,471 Apr 19, 2003 01:32 PM
Last Post: OneSadCookie | {"url":"http://idevgames.com/forums/thread-2794-post-46680.html","timestamp":"2014-04-20T16:12:40Z","content_type":null,"content_length":"28482","record_id":"<urn:uuid:f52b28fb-1fcc-4065-87ea-a6d379ebc4a7>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00611-ip-10-147-4-33.ec2.internal.warc.gz"} |
Fixed Effects Model: Answers the question whether the studies included in the meta-analysis show that the treatment or exposure produced the effect on average
Random Effects Model: Answers the question whether, on the basis of the studies examined, the treatment or exposure can be expected to produce the effect more generally, beyond the studies at hand
A random effects model is computationally more intense than a fixed effects model. Also, if the studies are homogeneous, fixed effects and random effects models are similar.
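To make the homogeneous case concrete, here is a minimal sketch (mine, not from the tutorial): the fixed effects estimate uses inverse-variance weights, while the DerSimonian-Laird random effects estimate adds an estimated between-study variance tau^2 to every study's variance, so the two coincide when tau^2 = 0:

```python
def fixed_effect(effects, variances):
    # Inverse-variance weighted mean of the study effects.
    w = [1.0 / v for v in variances]
    return sum(wi * e for wi, e in zip(w, effects)) / sum(w)

def random_effects(effects, variances):
    # DerSimonian-Laird: estimate the between-study variance tau^2 from
    # Cochran's Q, then reweight by 1 / (variance + tau^2).
    w = [1.0 / v for v in variances]
    mu = fixed_effect(effects, variances)
    q = sum(wi * (e - mu) ** 2 for wi, e in zip(w, effects))
    c = sum(w) - sum(wi * wi for wi in w) / sum(w)
    tau2 = max(0.0, (q - (len(effects) - 1)) / c)
    w_star = [1.0 / (v + tau2) for v in variances]
    return sum(wi * e for wi, e in zip(w_star, effects)) / sum(w_star)

# Three homogeneous (similar) study estimates: tau^2 comes out 0 and
# both models give the same pooled effect.
effects, variances = [0.30, 0.32, 0.29], [0.01, 0.02, 0.015]
print(abs(fixed_effect(effects, variances)
          - random_effects(effects, variances)) < 1e-9)  # -> True
```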
In this tutorial, we will also examine special situations when the outcomes are on a continuous scale, rather than discrete counting of events.
Finally, check out this table about the methods to be used for different types of models. | {"url":"http://www.pitt.edu/~super1/lecture/lec1171/012.htm","timestamp":"2014-04-17T01:33:43Z","content_type":null,"content_length":"3041","record_id":"<urn:uuid:b0bb4d0d-c34d-4c65-8c8a-44f65bc65648>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00551-ip-10-147-4-33.ec2.internal.warc.gz"} |
Backus @ NYU (research)
David Backus
Miscellaneous research papers
• "Identifying Taylor rules in macro-finance models," with Chernov and Zin, paper, slides. A draft of a paper outlining the structure you need to identify the Taylor rule in a macro-finance model.
• "Demography and low-frequency capital flows," with Cooley and Henriksen, conference draft, slides, earlier CMSG slides, R code for data work. We've been interested in low-frequency movements in
capital flows and consider demography as a possible driving force.
• "Sources of entropy in representative agent models," with Chernov and Zin,
Forthcoming, JF. The paper is part methodology and part intuitive explanation for how popular representative agent models work. We propose two metrics for summarizing the properties of asset
pricing models and apply them to representative agent models with recursive preferences, habits, and jumps. The metrics describe the pricing kernel's dispersion (the entropy of the title) and
dynamics (horizon dependence, a measure of how entropy varies over different time horizons). Most popular models generate lots of entropy, but several have difficulty doing it without also
producing too much horizon dependence. All of this depends on parameter values, so there's more to come on how to hit both targets.
• "Monetary policy and the uncovered interest rate parity puzzle," with Gavazzoni, Telmer, and Zin,
There's a longstanding issue in international finance, that high interest rate currencies tend to rise in value. That gives investors a double kick (high rate, appreciation on average): what
people have come to call the carry trade. Presumably the excess return from this strategy reflects risk of some sort. We build a relatively simple macro model in which the risk can be traced to
monetary policy. It's an example more than a complete explanation, but suggests where you might look for a solution.
• "Disasters implied by equity index options," with Chernov and Martin,
Matlab programs
. Published JF 2011. The possibility of large adverse events -- "disasters" -- can generate much larger risk premiums than you would get with normal (Gaussian) risks. The question is how much of
this kind of thing is plausible. If you believe, as we do, that most risk reflects the economy as a whole, the difficulty in assessing the probabilities of extreme events is that they don't
happen enough to allow precise estimates. We back the probabilities out of equity index options, which value such risks whether or not they occur in our sample. The resulting disaster
probabilities are much smaller than those calculated by Barro and his coauthors.
• "Cyclical component of US asset returns," with Routledge and Zin,
. We look at two old facts -- the stock market and the yield spread both lead the economy -- and note that they imply excess returns lead the cycle. We construct an exchange economy with similar
features, a close relative of the Bansal-Yaron model in which changes in risk lead directly to changes in expected growth.
• "Asset prices in business cycle analysis," with Routledge and Zin,
. First step toward a quasi-analytic solution of a business cycle model with recursive preferences. We know more now than we did then, will fix this up in some form in the near future.
• "Taxes and the global allocation of capital," with Henriksen and Storesletten,
. Published JME, 2008. We see significant differences across countries in investment rates (ratio of investment to GDP) and capital intensity (ratio of capital to GDP). We attribute some of this
to differences in capital taxes. The main issue, here, is explaining why taxes based on revenue data are so different from those derived from the tax code.
• "Cracking the conundrum," with Wright,
. Published Brookings Papers, 2007. We interpret movements at the long end of the yield curve as reflecting, in large part, movements in risk premiums. It's an old story, but we bring some new
evidence to bear on it.
• "Current account fact and fiction," with Henriksen, Lambert, and Telmer,
updated figures
(US net worth, saving, investment). We never did anything with this, but it makes a useful point: that we're not really sure what international capital flows are telling us. Inflows could be
signs of good news or they could be portents of disaster. As it turns out, disaster did occur, but in our view for much different reasons.
• "Exotic preferences for macroeconomists," with Routledge and Zin,
Matlab programs
. Published in the 2004 NBER Macro Annual, but the link above has some corrections that were made after publication. The idea is to review recent advances in the theory of recursive preferences
in user-friendly terms. Later papers with Zin and others developed loglinear approximations further.
• "Discrete time models of bond pricing," with Foresi and Telmer,
NBER version
Matlab files - Self-Extracting
. Published in a volume edited by former NYU colleague Bruce Tuckman that seems to have disappeared from sight. Bruce's
fixed income textbook
is available, though, and very good. Our paper is a review and synthesis of bond pricing models, including Vasicek, CIR, HJM, and many others. The idea is to describe these models in common
language and simple terms, which we take to mean pricing kernels in discrete time.
• "Oil prices and the terms of trade," with Crucini,
NBER version
. Published in JIE, 2000. A model of the impact of oil prices on business cycles, exchange rates, and capital flows. Mario is responsible for the novel parts, including the treatment of oil.
• "Accounting for biases in Black-Scholes," with Foresi, Li, and Wu,
. We never got around to revising this one, but it has a nice result, mostly due to Liuren: that the slope and curvature of volatility smiles has a simple connection to the skewness and kurtosis,
respectively, of the risk-neutral distribution. Based on a Gram-Charlier approximation.
Conference discussions
Essays and op eds
• "Theory and reality in Europe," Huffington Post, November 29, 2012.
• "The euro: bad idea, poorly executed, hard to fix," with Kim Schoenholtz, Huffington Post, October 15, 2012.
• "Clear thinking about economic policy," with Tom Cooley, Vox EU, November 9, 2011.
• "Global imbalances and the crisis," with Tom Cooley, WSJ, January 11, 2010.
• "Formula for fiscal fitness," with Matt Richardson and Nouriel Roubini, New York Post, January 16, 2009.
• "Stimulus ambivalence," FT Forum, January 10, 2009.
• "Government money should have strings attached," with Viral Acharya and Raghu Sundaram, FT Forum, January 6, 2009. | {"url":"http://people.stern.nyu.edu/dbackus/index_research.htm","timestamp":"2014-04-18T13:32:49Z","content_type":null,"content_length":"19975","record_id":"<urn:uuid:fa1a8dd2-f92b-4641-b8c7-d3d926eb8718>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00346-ip-10-147-4-33.ec2.internal.warc.gz"} |
Command-line options for Sage
Running Sage, the most common options
• file.[sage|py|spyx] – run the given .sage, .py or .spyx files (as in sage my_file.sage)
• -h, -?, --help – print a short help message
• -v, --version – print the Sage version
• --advanced – print (essentially this) list of Sage options
• -c cmd – evaluate cmd as sage code. For example, sage -c 'print factor(35)' will print “5 * 7”.
Running Sage, other options
• --preparse file.sage – preparse file.sage, a file of Sage code, and produce the corresponding Python file file.sage.py. See the Sage tutorial for more about preparsing and the differences between
Sage and Python.
• -q – quiet; start with no banner
• --grep [options] <string> – grep through all the Sage library code for string. Any options will get passed to the “grep” command; for example, sage --grep -i epstein will search for epstein, and
the -i flag tells grep to ignore case when searching. Note that while running Sage, you can also use the function search_src to accomplish the same thing.
• --grepdoc [options] <string> – grep through all the Sage documentation for string. Note that while running Sage, you can also use the function search_doc to accomplish the same thing.
• --min [...] – do not populate global namespace (must be first option)
• -gthread, -qthread, -q4thread, -wthread, -pylab – pass the option through to IPython
• --nodotsage – run Sage without using the user’s .sage directory: create and use a temporary .sage directory instead. Warning: notebooks are stored in the .sage directory, so any notebooks created
while running with --nodotsage will be temporary also.
Running the notebook
• -n, --notebook – start the Sage notebook, passing all remaining arguments to the ‘notebook’ command in Sage
• -bn [...], --build-and-notebook [...] – build the Sage library (as by running sage -b) then start the Sage notebook
• --inotebook [...] – start the insecure Sage notebook
Running external programs and utilities
• --cython [...] – run Cython with the given arguments
• --ecl [...], --lisp [...] – run Sage’s copy of ECL (Embeddable Common Lisp) with the given arguments
• --gap [...] – run Sage’s Gap with the given arguments
• --gp [...] – run Sage’s PARI/GP calculator with the given arguments
• --hg [...] – run Sage’s Mercurial with the given arguments
• --ipython [...] – run Sage’s IPython using the default environment (not Sage), passing additional options to IPython
• --kash [...] – run Sage’s Kash with the given arguments
• --M2 [...] – run Sage’s Macaulay2 with the given arguments
• --maxima [...] – run Sage’s Maxima with the given arguments
• --mwrank [...] – run Sage’s mwrank with the given arguments
• --python [...] – run the Python interpreter
• -R [...] – run Sage’s R with the given arguments
• --scons [...] – run Sage’s scons
• --singular [...] – run Sage’s singular with the given arguments
• --twistd [...] – run Twisted server
• --sh [...] – run a shell with Sage environment variables set
• --gdb – run Sage under the control of gdb
• --gdb-ipython – run Sage’s IPython under the control of gdb
• --cleaner – run the Sage cleaner. This cleans up after Sage, removing temporary directories and spawned processes. (This gets run by Sage automatically, so it is usually not necessary to run it yourself.)
Installing packages and upgrading
• -i [options] [packages] – install the given Sage packages (unless they are already installed); if no packages are given, print a list of all installed packages. Options:
□ -c – run the packages’ test suites, overriding the settings of SAGE_CHECK and SAGE_CHECK_PACKAGES.
□ -f – force build: install the packages even if they are already installed.
□ -s – do not delete the spkg/build directories after a successful build – useful for debugging.
• -f [options] [packages] – shortcut for -i -f: force build of the given Sage packages.
• --info [packages] – display the SPKG.txt file of the given Sage packages.
• --standard – list all standard packages that can be installed
• --optional – list all optional packages that can be installed
• --experimental – list all experimental packages that can be installed
• --upgrade [url] – download, build and install standard packages from given url. If url not given, automatically selects a suitable mirror. If url=’ask’, it lets you select the mirror.
The Sage-combinat package manager
Sage-combinat is a collection of experimental patches (i.e. extensions) on top of Sage, developed by a community of researchers, with a focus, at least to some extent, in combinatorics. Many of those
patches get eventually integrated into Sage as soon as they are mature enough, but you can install the still-experimental ones by running sage -combinat install. This creates a new branch, called
sage-combinat by default, containing the new patches. More information on sage-combinat, including details on the --combinat command-line option, is available at the Sage wiki.
Building and testing the Sage library
• --root – print the Sage root directory
• --branch – print the current Sage branch
• --clone [new branch] – clone a new branch of the Sage library from the current branch
• -b [branch] – build Sage library – do this if you have modified any source code files in $SAGE_ROOT/devel/sage/. If branch is given, switch to the branch in $SAGE_ROOT/devel/sage-branch and build it.
• -ba [branch] – same as -b, but rebuild all Cython code. This could take a while, so you will be asked if you want to proceed.
• -ba-force [branch] – same as -ba, but don’t query before rebuilding
• --br [branch] – switch to, build, and run Sage with the given branch
• -t [options] <files|dir> – test examples in .py, .pyx, .sage or .tex files. Options:
□ --long – include lines with the phrase ‘long time’
□ --verbose – print debugging output during the test
□ --optional – also test all examples labeled # optional
□ --only-optional[=tags] – if no tags are specified, only run blocks of tests containing a line labeled # optional. If a comma separated list of tags is specified, only run blocks containing a
line labeled # optional tag for any of the tags given and in these blocks only run the lines which are unlabeled or labeled #optional or labeled #optional tag for any of the tags given.
□ --randorder[=seed] – randomize order of tests
• -tnew [...] – like -t above, but only tests files modified since last commit
• -tp <N> [...] – like -t above, but tests in parallel using N threads with 0 interpreted as minimum(8, cpu_count())
• --testall [options] – test all source files, docs, and examples; options are the same as for -t.
• -bt [...] – build and test, options like -t above
• -btp <N> [...] – build and test in parallel, options like -tp above
• -btnew [...] – build and test modified files, options like -tnew
• --fixdoctests file.py [output_file] [--long] – writes a new version of file.py to output_file (default: file.py.out) that will pass the doctests. With the optional --long argument the long time
tests are also checked. A patch for the new file is printed to stdout.
• --startuptime [module] – display how long each component of Sage takes to start up. Optionally specify a module (e.g., “sage.rings.qqbar”) to get more details about that particular module.
• --coverage <files> – give information about doctest coverage of files
• --coverageall – give summary info about doctest coverage of all files in the Sage library
• --sync-build – delete any files in $SAGE_ROOT/devel/sage/build/ which don’t have a corresponding source file in $SAGE_ROOT/devel/sage/sage/
• --docbuild [options] document (format | command) – build or return information about the Sage documentation.
□ document – name of the document to build
□ format – document output format
□ command – document-specific command
A document and either a format or a command are required, unless a list of one or more of these is requested.
□ help, -h, --help – print a help message
□ -H, --help-all – print an extended help message, including the output from the options -h, -D, -F, -C all, and a short list of examples.
□ -D, --documents – list all available documents
□ -F, --formats – list all output formats
□ -C DOC, --commands=DOC – list all commands for document DOC; use -C all to list all
□ -i, --inherited – include inherited members in reference manual; may be slow, may fail for PDF output
□ -u, --underscore – include variables prefixed with _ in reference manual; may be slow, may fail for PDF output
□ -j, --jsmath – render math using jsMath; formats: html, json, pickle, web
□ --no-pdf-links – do not include PDF links in document website; formats: html, json, pickle, web
□ --check-nested – check picklability of nested classes in document reference
□ -N, --no-colors – do not color output; does not affect children
□ -q, --quiet – work quietly; same as --verbose=0
□ -v LEVEL, --verbose=LEVEL – report progress at level 0 (quiet), 1 (normal), 2 (info), or 3 (debug); does not affect children
Advanced – use these options with care:
□ -S OPTS, --sphinx-opts=OPTS – pass comma-separated OPTS to sphinx-build
□ -U, --update-mtimes – before building reference manual, update modification times for auto-generated ReST files
Making Sage packages or distributions
• --pkg dir – create the Sage package dir.spkg from the directory dir
• --pkg_nc dir – as --pkg, but do not compress the package
• --merge – run Sage’s automatic merge and test script
• --bdist VER – build a binary distribution of Sage, with version VER
• --sdist – build a source distribution of Sage
• --crap sage-ver.tar – detect suspicious garbage in the Sage source tarball
Valgrind memory debugging
• --cachegrind – run Sage using Valgrind’s cachegrind tool
• --callgrind – run Sage using Valgrind’s callgrind tool
• --massif – run Sage using Valgrind’s massif tool
• --memcheck – run Sage using Valgrind’s memcheck tool
• --omega – run Sage using Valgrind’s omega tool
• --valgrind – this is an alias for --memcheck | {"url":"http://sagemath.org/doc/reference/cmd/options.html","timestamp":"2014-04-18T23:17:19Z","content_type":null,"content_length":"39161","record_id":"<urn:uuid:01298557-1529-4889-b91e-a9f4c6504c8e>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00055-ip-10-147-4-33.ec2.internal.warc.gz"} |
Sunny Isles Beach, FL Algebra 2 Tutor
Find a Sunny Isles Beach, FL Algebra 2 Tutor
...If a student reschedules for a later day or time within the same week, there will be a 20 minute charge. If I go to the student's house and the tutoring cannot be held for whatever reason, the
cancellation charge is 1 h. Thank you for your understanding and I hope to assist you in the best possible way!
16 Subjects: including algebra 2, chemistry, Spanish, geometry
...I taught many graduate students basic skills in a wet laboratory. I feel like a I have a strong Mathematical and Biological background for which I can help students with their assignments. I
am a very approachable and positive person.
18 Subjects: including algebra 2, chemistry, calculus, geometry
...Though I have worked with tutoring centers, most of my clients over the years have come as a result of recommendations by parents as well as from colleagues with whom I have worked and shared
students for the past 14 years. I have helped students to increase their scores on standardized tests by...
24 Subjects: including algebra 2, calculus, geometry, statistics
...My major is Biology: Environmental Science, and currently along with a finished minor in Political Science (although this will most likely be turning into a double major situation). Because of
this diversity, my areas of helpfulness are wide ranging. Need help with English? How about writing?
36 Subjects: including algebra 2, chemistry, reading, English
...I am open to all levels from K-college level students aspiring to perform better in school. My goal for each student is to provide the tools to the learning material by simplifying and
providing examples to illustrate how to effectively solve problems in science and math related topics. I have experience as a tutor at MDC in chemistry, as well as local tutoring for people in my
17 Subjects: including algebra 2, chemistry, physics, biology
| {"url":"http://www.purplemath.com/Sunny_Isles_Beach_FL_Algebra_2_tutors.php","timestamp":"2014-04-17T04:46:40Z","content_type":null,"content_length":"24738","record_id":"<urn:uuid:63160bff-94be-4183-9ddf-6f74947c952b>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00561-ip-10-147-4-33.ec2.internal.warc.gz"} |
Ted Bunn’s Blog
I asked my brother Andy what the consequences for climate science would be from the failed launch of the Orbiting Carbon Observatory satellite this week. He pointed me to this, which has a lot of
good information, both on this satellite and on the difficulty of doing science with space-based instruments in general.
For those who don’t know about it, RealClimate.org is full of great stuff. | {"url":"http://blog.richmond.edu/physicsbunn/2009/02/","timestamp":"2014-04-17T02:41:56Z","content_type":null,"content_length":"50797","record_id":"<urn:uuid:f4919b13-7d3d-4c78-9b41-8567e1020e30>","cc-path":"CC-MAIN-2014-15/segments/1398223206770.7/warc/CC-MAIN-20140423032006-00330-ip-10-147-4-33.ec2.internal.warc.gz"} |
Describe the motion of a freely falling body
Best Results From Wikipedia Yahoo Answers Youtube
From Wikipedia
Free fall
[Image: Scott conducting an experiment during the Apollo 15 moon landing.]
Free fall is any motion of a body where gravity is the only or dominant force acting upon it, at least initially. These conditions produce an inertial trajectory so long as gravity remains the only
force. Since this definition does not specify velocity, it also applies to objects initially moving upward. Since free fall in the absence of forces other than gravity produces weightlessness or
"zero-g," sometimes any condition of weightlessness due to inertial motion is referred to as free-fall. This may also apply to weightlessness produced because the body is far from a gravitating body.
Although strict technical application of the definition excludes motion of an object subjected to other forces such as aerodynamic drag, in nontechnical usage, falling through an atmosphere without a
deployed parachute, or lifting device, is also often referred to as free fall. The drag forces in such situations prevent them from producing full weightlessness, and thus a skydiver's "free fall"
after reaching terminal velocity produces the sensation of the body's weight being supported on a cushion of air.
[Video: objects free-falling 215 feet (65 m) down a metal well, a type of drop tube.]
Examples of objects in free fall include:
• A spacecraft (in space) with propulsion off (e.g. in a continuous orbit, or on a suborbital trajectory going up for some minutes, and then down).
• An object dropped at the top of a drop tube.
• An object thrown upwards or a person jumping off the ground at low speed (i.e. as long as air resistance is negligible in comparison to weight). Technically, the object or person is in free fall
even when moving upwards or instantaneously at rest at the top of their motion, since the acceleration is still g downwards. However in common usage "free fall" is understood to mean downwards
Since all objects fall at the same rate in the absence of other forces, objects and people will experience weightlessness in these situations.
Examples of objects not in free fall:
• Flying in an aircraft: there is also an additional force of lift.
• Standing on the ground: the gravitational acceleration is counteracted by the normal force from the ground.
• Descending to the Earth using a parachute, which balances the force of gravity with an aerodynamic drag force (and with some parachutes, an additional lift force).
The example of a falling skydiver who has not yet deployed a parachute is not considered free fall from a physics perspective, since they experience a drag force which equals their weight once they
have achieved terminal velocity (see below). However, the term "free fall skydiving" is commonly used to describe this case in everyday speech, and in the skydiving community. It is not clear,
though, whether the more recent sport of wingsuit flying fits under the definition of free fall skydiving.
On Earth and on the Moon
Near the surface of the Earth, an object in free fall in a vacuum will accelerate at approximately 9.8 m/s^{2}, independent of its mass. With air resistance acting upon an object that has been
dropped, the object will eventually reach a terminal velocity, around 56 m/s (200 km/h or 120 mph) for a human body. Terminal velocity depends on many factors including mass, drag
coefficient, and relative surface area, and will only be achieved if the fall is from sufficient altitude.
Free fall was demonstrated on the moon by astronaut David Scott on August 2, 1971. He simultaneously released a hammer and a feather from the same height above the moon's surface. The hammer and the
feather both fell at the same rate and hit the ground at the same time. This demonstrated Galileo's discovery that in the absence of air resistance, all objects experience the same acceleration due
to gravity. (On the Moon, the gravitational acceleration is much less than on Earth, approximately 1.6 m/s^{2}).
Free fall in Newtonian mechanics
Uniform gravitational field without air resistance
This is the "textbook" case of the vertical motion of an object falling a small distance close to the surface of a planet. It is a good approximation in air as long as the force of gravity on the
object is much greater than the force of air resistance, or equivalently the object's velocity is always much less than the terminal velocity (see below).
v(t) = v_0 - g t \,
y(t) = y_0 + v_0 t - \frac{1}{2} g t^2 \,
where:
v_{0}\, is the initial velocity (m/s).
v(t)\, is the vertical velocity with respect to time (m/s).
y_0\, is the initial altitude (m).
y(t)\, is the altitude with respect to time (m).
t\, is time elapsed (s).
g\, is the acceleration due to gravity (9.81 m/s^2 near the surface of the earth).
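These kinematic relations translate directly into code. A minimal sketch, taking upward as positive and g = 9.81 m/s^2; the thrown-ball example is illustrative:

```python
G = 9.81  # m/s^2

def velocity(t, v0=0.0):
    # v(t) = v0 - g t  (upward positive; a dropped object has v < 0)
    return v0 - G * t

def altitude(t, y0=0.0, v0=0.0):
    # y(t) = y0 + v0 t - (1/2) g t^2
    return y0 + v0 * t - 0.5 * G * t**2

# Example: a ball thrown straight up at 10 m/s from the ground.
t_apex = 10.0 / G                            # velocity reaches zero at the apex
print(velocity(t_apex, v0=10.0))             # ~0.0 m/s (momentarily at rest)
print(round(altitude(t_apex, v0=10.0), 2))   # 5.1  (= v0^2 / 2g metres)
```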
Uniform gravitational field with air resistance
This case, which applies to skydivers, parachutists or any bodies with Reynolds number well above the critical Reynolds number, has an equation of motion:
m\frac{dv}{dt}=\frac{1}{2} \rho C_{\mathrm{D}} A v^2 - mg \, ,
m is the mass of the object,
g is the gravitational acceleration (assumed constant),
C[D] is the drag coefficient,
A is the cross-sectional area of the object, perpendicular to air flow,
v is the fall (vertical) velocity, and
ρ is the air density.
Assuming an object falling from rest and no change in air density with altitude, the solution is:
v(t) = -v_{\infty} \tanh\left(\frac{gt}{v_\infty}\right),
where the terminal speed is given by
v_{\infty}=\sqrt{\frac{2mg}{\rho C_D A}} \, .
The object's velocity versus time can be integrated over time to find the vertical position as a function of time:
y = y_0 - \frac{v_{\infty}^2}{g} \ln \cosh\left(\frac{gt}{v_\infty}\right).
When the air density cannot be assumed to be constant, such as for objects or skydivers falling from high altitude, the equation of motion generally must be solved numerically.
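The closed-form tanh solution can be checked against a direct numerical integration of the drag equation above. The skydiver parameters here (mass, drag coefficient, frontal area, air density) are illustrative assumptions, not values from the text:

```python
import math

M, G = 80.0, 9.81            # kg, m/s^2
RHO, CD, A = 1.2, 1.0, 0.7   # air density, drag coefficient, frontal area (assumed)

v_inf = math.sqrt(2.0 * M * G / (RHO * CD * A))  # terminal speed, ~43 m/s here

def v_closed(t):
    # v(t) = -v_inf * tanh(g t / v_inf), falling from rest (downward negative)
    return -v_inf * math.tanh(G * t / v_inf)

# Euler integration of m dv/dt = (1/2) rho Cd A v^2 - m g, from rest
dt, vn = 1e-4, 0.0
for _ in range(int(20.0 / dt)):
    vn += dt * (0.5 * RHO * CD * A * vn**2 / M - G)

print(v_inf)               # ~43.2 m/s
print(v_closed(20.0), vn)  # the closed form and the integration agree closely
```

After 20 s both values sit essentially at the terminal speed, as the tanh solution predicts.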
Rigid body
In physics, a rigid body is an idealization of a solid body of finite size in which deformation is neglected. In other words, the distance between any two given points of a rigid body remains
constant in time regardless of external forces exerted on it. Even though such an object cannot physically exist due to relativity, objects can normally be assumed to be perfectly rigid if they are
not moving near the speed of light.
In classical mechanics a rigid body is usually considered as a continuous mass distribution, while in quantum mechanics a rigid body is usually thought of as a collection of point masses. For
instance, in quantum mechanics molecules (consisting of the point masses: electrons and nuclei) are often seen as rigid bodies (see classification of molecules as rigid rotors).
Linear and angular position
The position of a rigid body is the position of all the particles of which it is composed. To simplify the description of this position, we exploit the property that the body is rigid, namely that
all its particles maintain the same distance relative to each other. If the body is rigid, it is sufficient to describe the position of at least three non-collinear particles. This makes it possible
to reconstruct the position of all the other particles, provided that their time-invariant position relative to the three selected particles is known. However, typically a different and
mathematically more convenient approach is used. The position of the whole body is represented by:
1. the linear position or position of the body, namely the position of one of the particles of the body, specifically chosen as a reference point (for instance its center of mass or its centroid, or
the origin of a coordinate system fixed to the body), together with
2. the angular position (or orientation) of the body.
Thus, the position of a rigid body has two components: linear and angular, respectively. The same is true for other kinematic and kinetic quantities describing the motion of a rigid body, such as
velocity, acceleration, momentum, impulse, and kinetic energy.
The linear position can be represented by a vector with its tail at an arbitrary reference point in space (often the origin of a chosen coordinate system) and its tip at a point of interest on the
rigid body (often its center of mass or centroid).
There are several ways to numerically describe the orientation of a rigid body, including a set of three Euler angles, a quaternion, or a direction cosine matrix (also referred to as a rotation
In general, when a rigid body moves, both its position and orientation vary with time. In the kinematic sense, these changes are referred to as translation and rotation, respectively. Indeed, the
position of a rigid body can be viewed as a hypothetic translation and rotation (roto-translation) of the body starting from a hypothetic reference position (not necessarily coinciding with a
position actually taken by the body during its motion).
Linear and angular velocity
Velocity (also called linear velocity) and angular velocity are measured with respect to a frame of reference.
The linear velocity of a rigid body is a vector quantity, equal to the time rate of change of its linear position. Thus, it is the velocity of a reference point fixed to the body. During purely
translational motion (motion with no rotation), all points on a rigid body move with the same velocity. However, when motion involves rotation, the instantaneous velocity of any two points on the
body will generally not be the same. Two points of a rotating body will have the same instantaneous velocity only if they happen to lay on an axis parallel to the instantaneous axis of rotation.
Angular velocity is a vector quantity that describes the angular speed at which the orientation of the rigid body is changing and the instantaneous axis about which it is rotating.
Linear motion
Linear motion is motion along a straight line, and can therefore be described mathematically using only one spatial dimension. It can be uniform, that is, with constant velocity (zero acceleration),
or non-uniform, that is, with a variable velocity (non-zero acceleration). The motion of a particle (a point-like object) along the line can be described by its position x, which varies with t
(time). Linear motion is sometimes called rectilinear motion.
An example of linear motion is that of a ball thrown straight up and falling back straight down.
The average velocity v during a finite time span of a particle undergoing linear motion is equal to
v = \frac {\Delta d}{\Delta t}.
The instantaneous velocity of a particle in linear motion may be found by differentiating the position x with respect to the time variable t. The acceleration may be found by differentiating the
velocity. By the fundamental theorem of calculus the converse is also true: to find the velocity when given the acceleration, simply integrate the acceleration with respect to time; to find
displacement, simply integrate the velocity with respect to time.
This can be demonstrated graphically. The gradient of a line on the displacement time graph represents the velocity. The gradient of the velocity time graph gives the acceleration while the area
under the velocity time graph gives the displacement. The area under an acceleration time graph gives the velocity.
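The derivative/area relationships described above can be verified numerically. A small sketch using x(t) = 3t^2 + 2t as an arbitrary example position function (not from the text):

```python
def x(t):
    return 3.0 * t**2 + 2.0 * t    # position

def v(t):
    return 6.0 * t + 2.0           # dx/dt, the exact velocity

# Differentiating position gives velocity: central-difference approximation
h, t0 = 1e-6, 1.5
v_numeric = (x(t0 + h) - x(t0 - h)) / (2.0 * h)
print(v_numeric, v(t0))            # both ~11.0

# The area under the velocity curve on [0, 2] recovers the displacement
n, T = 1000, 2.0
area = sum((v(i * T / n) + v((i + 1) * T / n)) / 2.0 * (T / n) for i in range(n))
print(area, x(2.0) - x(0.0))       # both ~16.0
```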
Linear motion is the most basic of all motions. According to Newton's first law of motion, objects not subjected to forces will continue to move uniformly in a straight line indefinitely. Under
every-day circumstances, external forces such as gravity and friction will cause objects to deviate from linear motion and can cause them to come to a rest.
For linear motion embedded in a higher-dimensional space, the velocity and acceleration should be described as vectors, made up of two parts: magnitude and direction. The direction part of these
vectors is the same and is constant for linear motion, and only for linear motion.
From Yahoo Answers
Question: Q1. A body moving with constant speed has uniform motion, while a freely falling object has non-uniform motion. Why? Q2. Can a body have acceleration while at rest? Give an example. Q3. What type of motion is associated with a pendulum? Q4. What does the rate of change of angular velocity refer to? Q5. A boy, after going around a circular track of radius 'r', comes back to the starting point. a) What is the displacement of the body? b) What is the total distance covered by him?
Answers: Q1. Uniform motion means the same distance is covered every second, i.e. constant velocity; a freely falling object gains speed every second, so its motion is non-uniform. Q2. Yes: a ball thrown straight up is momentarily at rest at the top of its flight, yet its acceleration is still g downward. Q3. Simple harmonic motion (approximately, for small swings). Q4. Angular acceleration. Q5. a) The displacement is 0 (final position minus starting position). b) The total distance covered is 2*pi*r (the circumference).
Question:Determined to test the law of gravity himself, a student walks off a skyscraper 180 m high, stopwatch in hand, and starts his free fall(0 initial velocity). Five sec. later, Superman arrives
at the scene and dives off the roof to save the student. a.) Superman leaves the roof with an initial speed v0 that he produces by pushing himself downward from the edge of the roof with his legs of
steel. He then falls with the same acceleration as any freely falling body. What must the value of v0 be so that Superman catches the student just before they reach the ground? b.)if the height of
the skyscraper is less than some minimum value, even Superman can't reach the student before he hits the ground. What is this minimum height?
Answers:The working formula is S = Vo(T) + (1/2)gT^2, where S = distance travelled, Vo = initial velocity, T = time interval, and g = acceleration due to gravity = 9.8 m/sec^2. For the student, S = 0 + (1/2)(9.8)(T^2) = 4.9T^2 -- call this Equation 1. For Superman, who jumps 5 sec later, S = Vo(T - 5) + (1/2)(9.8)(T - 5)^2. For Superman to catch the student, 4.9T^2 = Vo(T - 5) + 4.9(T^2 - 10T + 25) = Vo(T - 5) + 4.9T^2 - 49T + 122.5. Solving for Vo, Vo = (49T - 122.5)/(T - 5) -- call this Equation 3. Going back to Equation 1, since Superman catches the student just before they reach the ground, 180 = 4.9(T^2), and solving for T gives T = 6.06 sec. Substituting T = 6.06 into Equation 3, Vo = (49*6.06 - 122.5)/(6.06 - 5), so Vo is about 164.5 m/sec. (b) During Superman's 5-sec head start, the student falls (1/2)(9.8)(5^2) = 122.5 m. If the skyscraper is shorter than 122.5 m, the student reaches the ground before Superman even leaves the roof, no matter how large Vo is, so the minimum height is 122.5 m. Hope this helps.
Question:In the Galapagos Islands at the equator, the acceleration of a freely falling body is 9.780 m/s^2, while at the latitude of Oslo, Norway, it is 9.831 m/s^2. Why does the acceleration differ?
Answers:Two effects. First, the Earth bulges at the equator, so a point on the equator is farther from the Earth's center than a point at Oslo's latitude, and the gravitational pull weakens with distance from the center. Second, the Earth's rotation produces a centrifugal effect that is strongest at the equator and further reduces the measured acceleration there.
Question:Can someone please help me!
Answers:What can you conclude about the nature of vertical acceleration for a freely falling projectile? The weight of the object is the force which is causing the vertical velocity to increase as
the object falls toward the surface of the earth. When you are asked to calculate the weight of an object, you need to know the mass of the object. The equation that you use to determine weight is
shown below! Weight = mass * g. Did you ever wonder what this equation actually means? Newton's 2nd Equation: Force = mass * acceleration. In this equation, the acceleration is dependent on the magnitude of the force that is exerted and the magnitude of the mass of the object. The mass actually measures the resistance of the object to a change in its velocity. Mass measures inertia! An object with greater mass is more resistant to acceleration, so a greater force is required to accelerate it. The same is true when an object is freely falling: an object with greater mass requires a greater force (its weight) to accelerate as it falls. However, when you simultaneously drop 2 objects of different masses from the same vertical position, the 2 objects fall at the same rate. The vertical position and the vertical velocity are identical at any specific time during the fall. Weight = mass * g. The weight of a 10 kg object equals (10 kg * g) = (10 kg *
9.8 m/s^2) = 98 Newtons. Weight = Force = 98 N. Force = mass * acceleration: 98 = 10 * acceleration, so acceleration = 9.8 m/s^2. Now let's double the mass and see what happens to the vertical acceleration! Weight = mass * g. The weight of a 20 kg object equals (20 kg * g) = (20 kg * 9.8 m/s^2) = 196 Newtons. Weight = Force = 196 N. Force = mass * acceleration: 196 = 20 * acceleration, so acceleration = 9.8 m/s^2. The vertical acceleration for a freely falling projectile is constant! The actual force that causes the acceleration of a falling object is the Universal force of gravitational attraction between
any two objects anywhere in the Universe! The Universal Gravitational Force is described by the equation below. Fg = (G * m1 * m2) / r^2, where G = 6.67 * 10^-11, m1 = mass of earth = 5.98 * 10^24 kg, m2 = mass of the object, and r = distance between the centers of mass of the earth and the object. When the object is on or near the surface of the earth, r = radius of the earth = 6.38 * 10^6 meters. Fg = (6.67 * 10^-11 * 5.98 * 10^24 * m2) / (6.38 * 10^6)^2 = 9.8 * m2. This force is pulling the object down toward the center of the earth. Notice that 9.8 is the constant that relates the mass of an object to the force of attraction between the object and the earth, and Fg / mass always equals 9.8 m/s^2. The symbol, g, is the constant acceleration of all freely falling objects. The equation for the Universal Gravitational Force, Fg = (G * m1 * m2) / r^2, illustrates the magnificent structure of the Universe. The one simple equation describes the force that maintains all planets, stars, comets, and all other
bodies in their relative position, velocity, and acceleration with respect to each other! And this equation can be simplified down to the equation, Weight = mass * g, for all objects on or near the
surface of the earth. I am very happy that we all accelerate at the same rate as we fall. I had great fun with my sons as we held hands and jumped off cliffs at Turkey Run State Park in Indiana. We
all fell at the same rate, and we all hit the water at the same time. Great memories due to the simple structure of the Universe. Thank God for such blessings!!
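The derivation of g from the Universal Gravitational Force can be checked in a few lines (a sketch using the G, Earth-mass, and Earth-radius values quoted in the answer):

```python
# Derive g from Fg = G*m1*m2 / r^2 for an object at the Earth's surface.
G = 6.67e-11        # gravitational constant, N*m^2/kg^2
m_earth = 5.98e24   # kg
r_earth = 6.38e6    # m

g = G * m_earth / r_earth ** 2   # Fg / m2, the same for every object

# Weight = mass * g for a 10 kg and a 20 kg object; Force = mass *
# acceleration then gives the SAME acceleration g for both masses.
w10, w20 = 10 * g, 20 * g
print(round(g, 2), round(w10 / 10, 2), round(w20 / 20, 2))
```

The mass cancels out of Force = mass * acceleration, which is why the vertical acceleration of every freely falling object is the same.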
From Youtube
James Horner - Suite from the Motion Picture Score LEGENDS OF THE FALL (1994) :Legends of the Fall is a 1994 drama film based on the 1979 novella of the same title by Jim Harrison. It was directed by
Edward Zwick and stars Brad Pitt, Anthony Hopkins and Aidan Quinn. The film won the Academy Award for Best Cinematography. The movie's timeframe spans the decade before World War I through the
Prohibition era, and into the 1930s, ending with a brief scene set in 1963. The film centers on the Ludlow family of Montana, including veteran of the Indian Wars, Colonel Ludlow, his sons, Alfred,
Tristan, and Samuel, and object of the brothers' love, Susannah. This movie was shot in Alberta and British Columbia, Canada. The film opened in limited release on December 23, 1994 and made $14
million in its first weekend in wide release a month later. It went on to have a final box office total of $66 million. Although released in the hopes of being an Academy Award frontrunner, the film
was nominated for just three awards, in none of the major categories. It won one award, for best cinematographer John Toll. The film has a 70% positive review from critics on Rotten Tomatoes, with
acclaimed critic Roger Ebert describing it as "pretty good ... with full-blooded performances and heartfelt melodrama." In one scene (at 25 min, 24 sec), the book "Report of a Reconnaissance of the
Black Hills of Dakota" is shown and referred to as the work of the fictional William Ludlow. This is a real book, written by the real William Ludlow, who served as a Colonel during the Plains ...
How do indices work?
Starting with multiplication how can we rewrite (click to see):
Did you know the answer? The rule is that when you multiply two powers, and the bases are the same, then you add the indices. If you knew this then good. Now do you know why that is the thing to do?
Here is a description of why you add the indices
The brackets are needed to show the order of operations.
Yet it is still the same result if you leave the brackets out.
Just count up the factors.
Notice that in the first and third lines we used the definition of indices. In the second line we used the fact that multiplications can be done in any order whatsoever. Once you know this the "add
the indices" rule follows from the fact that addition is a short cut way to count up the factors.
We need to know that multiplication can be done in any order because the order of operations changes when we add the indices. With we only ever multiply by 2, since firstly we do , then , then and so
on. With one of the multiplications is . Check out the following expression to see the different order of operations. Click on a number other than 2 to factorize. Click on an operation to implement
that operation and all the others that precede it. To get the answer 128 you must click on the operation that is done last. Which one is the last?
Here is a formal statement of the rule for adding indices.
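The "add the indices" rule can also be checked numerically. The sketch below (the bases and exponents are arbitrary illustrative choices) verifies a**m * a**n == a**(m + n), including the 2**7 = 128 example discussed above:

```python
# For a common base a, multiplying powers adds the indices:
# a**m * a**n == a**(m + n), because the factors of a just pile up.
def indices_rule_holds(a, m, n):
    return a**m * a**n == a**(m + n)

checks = [indices_rule_holds(2, 3, 4),    # 8 * 16 == 128 == 2**7
          indices_rule_holds(5, 2, 6),
          indices_rule_holds(10, 0, 3)]
print(all(checks))   # True
```

The rule only applies when the bases match; 2**3 * 3**4 cannot be combined this way, since the counted factors are not all the same number.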
Re: Finite State Automaton Question
Nathan Moore <nathan.moore@sdc.cox.net>
25 Mar 2005 21:56:29 -0500
From comp.compilers
| List of all articles for this month |
From: Nathan Moore <nathan.moore@sdc.cox.net>
Newsgroups: comp.compilers
Date: 25 Mar 2005 21:56:29 -0500
Organization: Cox Communications
References: 05-03-058
Keywords: lex
Posted-Date: 25 Mar 2005 21:56:29 EST
I just thought up the following. It is possible that it is a
more English version of one of the already mentioned algorithms,
but those are hard to read and this should be easier to understand.
Use an adjacency matrix to represent each of the FSAs, and start with
M1 and M2 in reduced DFA form to avoid headaches. M1 accepts L1 and M2
accepts L2.
Find all ending nodes in M1.
Find nodes in M2 that correspond to the ending nodes in M1.
Make a copy of M2 without marking the existing start node as the start node.
Make a new start node in `M2 with lambda transitions to all
of the nodes that were found to correspond to the final nodes
of M1.
`M2 will accept the language L3, but you will probably want to
remove all the unreachable nodes and make it deterministic before
you call it M3.
This should yield the full L3 (M3 accepts L3).
Anyone spot any errors?
Nathan Moore
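[A sketch of the construction described in the post, not from the post itself. States are strings, `None` labels a lambda transition, and the names `quotient_machine` and `S*` are invented for illustration.]

```python
# Build `M2 from a copy of M2 by adding a fresh start state that has
# lambda (epsilon) transitions into the states of M2 that were found
# to correspond to M1's final states.
def quotient_machine(m2_transitions, corresponding_states, new_start="S*"):
    """Copy M2's transition table and attach the new start node."""
    m = {state: dict(edges) for state, edges in m2_transitions.items()}
    m[new_start] = {None: set(corresponding_states)}  # lambda moves
    return m

# Tiny illustrative M2: q0 --a--> q1 --b--> q2
m2 = {"q0": {"a": {"q1"}}, "q1": {"b": {"q2"}}, "q2": {}}
m2p = quotient_machine(m2, {"q1"})
print(m2p["S*"])
```

As the post notes, the result is nondeterministic; a subset construction and removal of unreachable states would turn it into the final M3.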
match corresponding equation with graph and explain
July 4th 2012, 02:54 PM #1
match corresponding equation with graph and explain
July 4th 2012, 03:21 PM #2
Re: match corresponding equation with graph and explain
what are your matches?
July 5th 2012, 11:21 AM #3
Re: match corresponding equation with graph and explain
For matching problems, first look to see how the equations change with respect to each other. For this problem we see that the amplitude of sin(x) is changing, since the coefficients in front of the equations are changing. With that in mind we can proceed to match the equations based on how their amplitudes look.
What are your matches?
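[Not part of the thread: a small numerical illustration of the amplitude idea. For y = a*sin(x), the peak height over one period equals the coefficient a, which is what lets the graphs be told apart; the coefficients below are arbitrary examples.]

```python
import math

def peak(a, samples=10000):
    """Numerical maximum of a*sin(x) over one full period."""
    return max(a * math.sin(2 * math.pi * i / samples)
               for i in range(samples))

# Each graph's height matches its coefficient, so a taller sine curve
# pairs with the equation that has the larger coefficient.
peaks = {a: peak(a) for a in (0.5, 1, 2, 3)}
print(peaks)
```

Matching then reduces to reading the maximum height off each graph and pairing it with the equation whose coefficient equals that height.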