May 14th 2011, 07:10 PM #1
Junior Member
May 2009
Consider the following set.
$\Lambda = \left\{ f \in C\left( [0,1],\mathbb{R} \right) \;;\; \left| f(x) - f(y) \right| \leqslant \pi \left| x - y \right| \ \wedge\ \left\| f \right\|_\infty \leqslant 1 \right\}$
I have been asked to prove that $\Lambda$ is compact in $\left( C\left( [0,1],\mathbb{R} \right), \left\| \cdot \right\|_\infty \right)$.
I really don't know how to do it. I have no idea where to start. Please help.
Use Arzela-Ascoli: The pointwise boundedness follows from the bound on the supremum norm, and the equicontinuity from the uniform Lipschitz condition.
The problem is that in my course the Arzelà–Ascoli theorem hasn't been taught...
Here's an outline of an easy proof in this case:
1. Enumerate the rationals in [0,1], and use a diagonal argument to prove that there is a subsequence (of your function sequence) that converges at every rational.
2. Prove that this limit function (with the rationals as domain) satisfies the same Lipschitz condition.
3. Prove that there is an extension of this function to [0,1] and, finally, that it satisfies the conditions.
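For step 1, here is a sketch of the diagonal extraction (the notation is mine):

Let $(f_k) \subset \Lambda$ and enumerate $\mathbb{Q} \cap [0,1] = \{q_1, q_2, \dots\}$. Since $|f_k(q_1)| \le \|f_k\|_\infty \le 1$, Bolzano–Weierstrass gives a subsequence $(f_{k^1_m})_m$ converging at $q_1$. Inductively, choose $(f_{k^j_m})_m$ a subsequence of $(f_{k^{j-1}_m})_m$ converging at $q_j$. The diagonal sequence $g_m := f_{k^m_m}$ is, from index $j$ onwards, a subsequence of $(f_{k^j_m})_m$, so $\lim_m g_m(q_j)$ exists for every $j$.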
OK, I'll try that.
Just for the record, could you do the exercise using the Arzelà–Ascoli theorem?
Thank you!
Well, my first post shows you how to use the theorem; you would just have to check that the limit function is in your set, but that is easy to see (take pointwise limits).
Why does the equicontinuity follow from the uniform Lipschitz condition?
$|f(x)-f(y)| \leq \pi |x-y| <\varepsilon$ whenever $|x-y|< \delta = \varepsilon / \pi$
OK, thanks.
But... the Arzelà–Ascoli theorem also says that the set must be closed... How do I show that?
You need only notice that the norm map $n$ (where $n(f)=\|f\|_\infty$) is (of course) continuous, and that the map $\varphi_{x,y}:C[0,1]\to\mathbb{R}:f\mapsto |f(x)-f(y)|$ is continuous, since the absolute value map and the evaluation functionals $e_z:C[0,1]\to\mathbb{R}:f\mapsto f(z)$ are continuous (and differences are continuous, etc.). But we have that $\displaystyle \Lambda=\bigcap_{x,y\in [0,1]}\varphi_{x,y}^{-1}\left([0,\pi|x-y|]\right)\cap n^{-1}([0,1])$, an intersection of closed sets, hence closed.
Here's an outline of an easy proof in this case:
1. Enumerate the rationals in [0,1], and use a diagonal argument to prove that there is a subsequence (of your function sequence) that converges at every rational.
2. Prove that this limit function (with the rationals as domain) satisfies the same Lipschitz condition.
3. Prove that there is an extension of this function to [0,1] and, finally, that it satisfies the conditions.
I'm sorry, but I don't see how to prove that there is a subsequence (of the function sequence) that converges at every rational.
A possibly easier way than proving it in full might be to prove that the set is totally bounded and complete. Completeness comes for free, since it is a closed (as I showed) subspace of a complete space, and the fact that it is totally bounded is not too bad.
Hi. Believe me, I tried to do this exercise, but I could not; that's why I posted it. I'm not lazy or anything like that. It's just that I'm taking analysis for the first time and it's been a very hard course for me. That's all.
closed loop of classical string
Actually, there is a good treatment of this in Zwiebach's string theory book. Just check out the earlier chapters, and forget about trying to quantize the string.
Basically, you take the wave equation for a non-closed string, and give it periodic boundary conditions, that's all.
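To make this concrete, here is a small numerical sketch (my own illustration, not from Zwiebach): the 1-D wave equation on a periodic grid, stepped with a leapfrog scheme. At Courant number c·dt/dx = 1 the scheme reproduces the d'Alembert solution exactly on the grid, so a standing wave on the closed string returns to its initial profile after one full period:

```python
import numpy as np

# Closed string of length 1, wave speed c = 1, periodic boundary conditions.
N = 100
x = np.arange(N) / N                 # grid spacing dx = 1/N
u0 = np.sin(2 * np.pi * x)           # initial profile, zero initial velocity

# Leapfrog at Courant number c*dt/dx = 1 (dt = dx):
#   u_i^{n+1} = u_{i+1}^n + u_{i-1}^n - u_i^{n-1}
# np.roll implements the periodic (closed-loop) boundary condition.
u_prev = u0.copy()
u = 0.5 * (np.roll(u0, 1) + np.roll(u0, -1))    # first step, v0 = 0
for _ in range(N - 1):                           # N steps total => t = 1
    u, u_prev = np.roll(u, 1) + np.roll(u, -1) - u_prev, u

# One period has elapsed: the string is back where it started.
err = np.max(np.abs(u - u0))
print(err)                           # tiny (round-off level)
```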
Attleboro Prealgebra Tutor
...Apart from my strong background in mathematics, being a student at Brown University has allowed me to garner experience in a large variety of studies outside of my area of focus. I have a
strong foundation in non-mathematical fields ranging from the sciences to the humanities. I also have 5 years of experience in Spanish.
22 Subjects: including prealgebra, Spanish, reading, geometry
...My positive approach allows my students to relax and start focusing on the material to be learned. I have had excellent success in motivating students and creating real results in academic
achievement. I welcome the opportunity to share my mathematical knowledge, effective study skills and posi...
13 Subjects: including prealgebra, physics, calculus, geometry
I've been tutoring math to high-school and middle-school students in the Canton, Dedham, Sharon, Norwood and Stoughton area for ten years. I've tutored nearly all the students I've worked with for
many years, and I've also frequently tutored their brothers and sisters - also for many years. I enjo...
11 Subjects: including prealgebra, geometry, algebra 1, precalculus
...In Pensacola, FL, I also had the opportunity to volunteer with a group of inner-city students on their homework, as part of an inner-city mission named Milk and Honey Outreach Ministry. The most rewarding part of my participation was the relationships that I built with the students. Not only was ...
21 Subjects: including prealgebra, English, reading, study skills
...I enjoy tutoring Calculus! Calculus comes naturally to me in ways that I can translate "how it works" to students. My approach has yielded successful results.
11 Subjects: including prealgebra, physics, calculus, geometry
Vector matrix orientation
Posted Monday, 16 April, 2012 - 08:08 by phq in
This row-major/column-major business has been a big issue of mine, especially since I understand the basics, which turn out to have little to do with the actual issue.
Once I realized that row-major in graphics also means that vectors are row vectors, it became much clearer to me.
So I have made a few observations of which I would like a better understanding behind the choices.
1. OpenGL by default uses column-major matrix indexing, so why is OpenTK.Math.Matrix4 using a row-major representation?
2. OpenGL is designed for column vectors and thus calculates like this: M.v (matrix * vector), rather than v.M.
This is also how the OpenGL online tutorials and books do it, as well as how shader code is written.
But I found out that the Matrix.Create... functions generate matrices that assume row vectors.
3. When I had almost got it, I tried to verify whether OpenTK was using row or column vectors. I looked for code using either M.v or v.M, but found neither.
The only indicator was the Matrix.Create... methods, or rather the output they generate.
Why is there no Mult(Vector4, Matrix4) in the OpenTK.Math namespace?
Posted Tuesday, 17 April, 2012 - 16:38 by mOfl
This row-major/column-major has been a big issue of mine, especially since I understand the basics which has nothing to do with the actual issue. Now when I realized that row-major in graphics also
means that vectors are row-vectors it has become much clearer to me.
There's been a thread on this matter already, http://www.opentk.com/node/2794, maybe some of the information in this thread will help to resolve your troubles.
1. OpenGL uses by default column-major matrix indexing, why is OpenTK.Math.Matrix4 using a row-major representation.
The correct question is: "CPU-based libraries like OpenTK.Math, GLM, IMath, ... use a row-major representation, so why does OpenGL use column-major matrices?" It is common to use the row-major representation; only OpenGL uses column-major matrices, for some reason (I think they state it's more intuitive). As already explained in the other thread, in the end the two representations are nothing but different conventions for how to lay out the 16 (float) values of a matrix in memory. A row-major matrix has the same memory layout as its transpose in column-major form, which also explains the difference of M * v vs. v * M.
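The transpose relation is easy to check outside of OpenTK; here is an illustrative numpy sketch (not OpenTK code) showing that a matrix stored row-major occupies memory exactly like its transpose stored column-major, and that the row-vector product v·M gives the same numbers as the column-vector product Mᵀ·v:

```python
import numpy as np

M = np.arange(16, dtype=float).reshape(4, 4)    # an arbitrary 4x4 matrix
v = np.array([1.0, 2.0, 3.0, 4.0])

# M laid out row-major ('C' order) == M.T laid out column-major ('F' order).
same_bytes = M.tobytes(order="C") == M.T.tobytes(order="F")

# Row-vector convention v*M agrees with column-vector convention M^T*v.
same_product = np.allclose(v @ M, M.T @ v)

print(same_bytes, same_product)   # True True
```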
Posted Sunday, 6 May, 2012 - 18:41 by Robmaister
Why is there no Mult(Vector4, Matrix4) in the OpenTK.Math namespace?
It's called Vector4.Transform(Vector4, Matrix4)
Posted Sunday, 27 May, 2012 - 14:29 by lid6j86
So I'm going to sound stupid, but this means that it is set up like:

[x1, x2, x3, 0
 y1, y2, y3, 0
 z1, z2, z3, 0
 T1, T2, T3, 1]

instead of

[x1, y1, z1, T1
 x2, y2, z2, T2
 x3, y3, z3, T3
 0,  0,  0,  1]

is this correct?
Posted Thursday, 31 May, 2012 - 00:53 by phq
Yes, the translation components are on the bottom row.
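A quick numpy illustration of the two conventions (a convention demo only, not OpenTK's actual code): with row vectors the translation sits in the bottom row and the point multiplies from the left; transposing the matrix gives the OpenGL-style column-vector form with the translation in the last column:

```python
import numpy as np

t = np.array([5.0, 6.0, 7.0])           # translation
p = np.array([1.0, 2.0, 3.0, 1.0])      # homogeneous point, w = 1

# Row-vector convention: translation in the bottom row, v * M.
M_row = np.eye(4)
M_row[3, :3] = t
p_row = p @ M_row                        # -> [6, 8, 10, 1]

# Column-vector convention: translation in the last column, M * v.
M_col = M_row.T
p_col = M_col @ p                        # same result

print(p_row, p_col)
```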
Second order linear equation 10
November 27th 2010, 05:28 AM #1
MHF Contributor
Nov 2008
Second order linear equation 10
y(x0) = y0,  y'(x0) = z0
How do I solve this one?
And I need to decide which of the following statements is true:
On which of the following intervals containing x0 does the equation above have a unique solution?
a. every interval which contains x0 but does not contain 0
b. every interval which contains x0
c. every interval which contains x0 and contains 0
d. every interval which contains x0 but does not contain 1
e. there is no such interval
Last edited by transgalactic; November 27th 2010 at 05:57 AM.
Haskell Code by HsColour
{-# OPTIONS -fno-implicit-prelude #-}
{- |
Copyright : (c) Henning Thielemann 2008
License : GPL
Maintainer : synthesizer@henning-thielemann.de
Stability : provisional
Portability : requires multi-parameter type classes
-}
module Synthesizer.SampleRateContext.Noise
(white, whiteBandEnergy, randomPeeks,
whiteGen, whiteBandEnergyGen, randomPeeksGen,
) where
import qualified Synthesizer.Plain.Noise as Noise
import qualified Synthesizer.SampleRateContext.Signal as SigC
import qualified Synthesizer.SampleRateContext.Rate as Rate
import qualified Algebra.OccasionallyScalar as OccScalar
import qualified Algebra.Algebraic as Algebraic
import qualified Algebra.Field as Field
import qualified Algebra.Ring as Ring
import System.Random (Random, RandomGen, randomRs, mkStdGen)
import NumericPrelude
import PreludeBase as P
{- |
Uniformly distributed white noise.
The volume is given by two values:
The width of a frequency band and the volume caused by it.
The width of a frequency band must be given
in order to achieve independence from sample rate.
See 'whiteBandEnergy'.
-}
white :: (Ring.C yv, Random yv, Algebraic.C q') =>
q' {-^ width of the frequency band -}
-> q' {-^ volume caused by the given frequency band -}
-> Rate.T t q' -> SigC.T y q' yv
{-^ noise -}
white = whiteGen (mkStdGen 6746)
whiteGen :: (Ring.C yv, Random yv, RandomGen g, Algebraic.C q') =>
g {-^ random generator, can be used to choose a seed -}
-> q' {-^ width of the frequency band -}
-> q' {-^ volume caused by the given frequency band -}
-> Rate.T t q' -> SigC.T y q' yv
{-^ noise -}
whiteGen gen bandWidth volume sr =
   SigC.Cons
      (sqrt (3 * bandWidth / Rate.toNumber sr) * volume)
      (Noise.whiteGen gen)
{- |
Uniformly distributed white noise.
Instead of an amplitude you must specify a value
that is like an energy per frequency band.
It makes no sense to specify an amplitude
because if you keep the same signal amplitude
while increasing the sample rate by a factor of four
the amplitude of the frequency spectrum halves.
Thus deep frequencies would be damped
when higher frequencies enter.
If your signal is a function from time to voltage,
the amplitude must have the unit @volt^2*second@,
which can be also viewed as @volt^2\/hertz@.
Note that the energy is proportional to the square of the signal amplitude.
In order to double the noise amplitude,
you must increase the energy by a factor of four.
Using this notion of amplitude
the behaviour amongst several frequency filters
is quite consistent but a problem remains:
When the noise is quantised
then noise at low sample rates and noise at high sample rates
behave considerably different.
This indicates that quantisation should not just pick values,
but it should average over the hold periods.
-}
whiteBandEnergy :: (Ring.C yv, Random yv, Algebraic.C q') =>
q' {-^ energy per frequency band -}
-> Rate.T t q' -> SigC.T y q' yv
{-^ noise -}
whiteBandEnergy = whiteBandEnergyGen (mkStdGen 6746)
whiteBandEnergyGen :: (Ring.C yv, Random yv, RandomGen g, Algebraic.C q') =>
g {-^ random generator, can be used to choose a seed -}
-> q' {-^ energy per frequency band -}
-> Rate.T t q' -> SigC.T y q' yv
{-^ noise -}
whiteBandEnergyGen gen energy sr =
SigC.Cons (sqrt (3 * Rate.toNumber sr * energy)) (Noise.whiteGen gen)
{-
The Field.C q constraint could be lifted to Ring.C
if we used direct division instead of toFrequencyScalar.
-}
randomPeeks ::
(Field.C q, Random q, Ord q,
Field.C q', OccScalar.C q q') =>
Rate.T q q'
-> SigC.T q q' q {- ^ momentary densities (frequency),
@p@ means that there is about one peak
in the time range of @1\/p@. -}
-> [Bool]
{- ^ Every occurence of 'True' represents a peak. -}
randomPeeks =
randomPeeksGen (mkStdGen 876)
randomPeeksGen ::
(Field.C q, Random q, Ord q,
Field.C q', OccScalar.C q q',
RandomGen g) =>
g {-^ random generator, can be used to choose a seed -}
-> Rate.T q q'
-> SigC.T q q' q {- ^ momentary densities (frequency),
@p@ means that there is about one peak
in the time range of @1\/p@. -}
-> [Bool]
{- ^ Every occurence of 'True' represents a peak. -}
randomPeeksGen g sr dens =
let amp = SigC.toFrequencyScalar sr (SigC.amplitude dens)
in zipWith (<)
(randomRs (0, recip amp) g)
      (SigC.samples dens)
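One detail worth noting in the amplitudes above: the factor of 3 under the square root in `sqrt (3 * bandWidth / Rate.toNumber sr)` matches the statistics of uniform noise, since a uniform random variable on [-a, a] has RMS a/sqrt 3, so the sqrt 3 normalizes the power. A quick numerical check of that fact (illustrative Python, not part of this package):

```python
import numpy as np

rng = np.random.default_rng(0)
a = 1.0
samples = rng.uniform(-a, a, size=1_000_000)   # uniform white noise on [-a, a]

rms = np.sqrt(np.mean(samples ** 2))
print(rms, a / np.sqrt(3))                     # both close to 0.5774
```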
Area of a Circular Segment
August 23rd 2010, 12:37 PM #1
Aug 2010
While working on a problem, I had to look up a formula for the area of a circle segment. I have noticed that there is some ambiguity in the results.
According to Wolfram Alpha, the formula is:
½r^2(θ − sin θ)
According to two other sources, the formula is:
½r^2(θ × pi/180)
Wolfram Alpha has been very reliable in the past, however I would prefer to not get this wrong due to a formula error. Please clarify which or what is correct. Thanks.
While working on a problem, I had to look up a formula for the area of a circle segment. I have noticed that there is some ambiguity in the results.
According to Wolfram Alpha, the formula is:
½r^2(θ − sin θ)
This is correct for angles expressed in radians.
The triangle from the centre of the circle to the 2 points on the circumference
is subtracted from the sector to obtain the segment.
According to two other sources, the formula is:
½r^2(θ × pi/180)
This is the area of the sector for the angle expressed in degrees.
The formula is converting degrees to radians and calculating the sector area.
The triangle still needs to be subtracted to obtain the segment area.
Wolfram Alpha has been very reliable in the past, however I would prefer to not get this wrong due to a formula error. Please clarify which or what is correct. Thanks.
One formula calculates segment area, the second one calculates sector area.
You can read about the difference between a circular segment and a circular sector here:
Circular Segment -- from Wolfram MathWorld
Circular Sector -- from Wolfram MathWorld
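Putting the two replies together as a short sketch (illustrative code, with the angle taken in degrees as in the second formula): sector = ½r²θ with θ in radians, and segment = sector − triangle = ½r²(θ − sin θ):

```python
import math

def sector_area(r, theta_deg):
    """Circular sector: (1/2) r^2 theta, with theta in radians."""
    theta = math.radians(theta_deg)          # theta_deg * pi / 180
    return 0.5 * r * r * theta

def segment_area(r, theta_deg):
    """Circular segment: sector minus the chord triangle (1/2) r^2 sin(theta)."""
    theta = math.radians(theta_deg)
    return 0.5 * r * r * (theta - math.sin(theta))

# Sanity checks: a 360-degree sector is the full disc,
# and a 180-degree segment is exactly half the disc.
print(sector_area(2.0, 360.0), math.pi * 2.0**2)        # both ~ 12.566
print(segment_area(2.0, 180.0), math.pi * 2.0**2 / 2)   # both ~ 6.283
```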
Kearny, NJ Calculus Tutor
Find a Kearny, NJ Calculus Tutor
...Overall, I am a very experienced tutor. I worked with students in a tutoring environment for two and a half years at Montclair State University. I have also done private tutoring for several
years now in subjects from algebra to physics to calculus III and differential equations.
7 Subjects: including calculus, physics, algebra 1, astronomy
...I have taught it many times in a community college setting. I was an actuary for about 7 years. I have passed the first 4 exams of the Casualty Actuarial Society.
28 Subjects: including calculus, physics, GRE, geometry
...Recent tutoring successes, especially in SAT Prep, have resulted in students receiving hefty scholarships (up to $50,000 a year for some) to colleges including Gettysburg College, Boston
University, Drexel University, Rutgers University, Stony brook, Drew, Montclair, The College Of New Jersey, H...
33 Subjects: including calculus, physics, geometry, GRE
...I have made a lifetime of learning. Having earned three master's degrees and working on a doctoral degree, all in different fields, I have become very aware of the importance of approaching
material in a way that minimizes the anxiety of what may seem an overwhelming task. This involves learning how to strategize learning.
50 Subjects: including calculus, chemistry, physics, geometry
Hello, my name is Andres. I was a language teacher in my native country, teaching English as a second language to native students and Spanish as a second language to foreign students. I am currently finishing my second major in engineering science.
9 Subjects: including calculus, Spanish, algebra 2, geometry
Geometry and Physics Seminars
All seminars are held at 11:00am in room M/2.06, Senghennydd Road, Cardiff unless stated otherwise. All are welcome.
Programme Organiser and Contact: Professor Stefan Hollands
7 October 2011
Speaker: Jan Holland (Cardiff)
Title: Algebraic structures in quantum field theory.
Abstract: TBC
14 October 2011
Speaker: Andrew Bruce.
Title: L_{\infty}-algebroids and higher Schouten structures.
Abstract: L_{\infty}-algebras are the "homotopy-relative" of Lie algebras. That is, we have a series of n-ary brackets (n \geq 0) on a vector space that satisfies a series of higher-order Jacobi identities. In this talk I describe how to pass from a vector space to a vector bundle and define the notion of an L_{\infty}-algebroid. I will also show that such algebroids can be encoded in an L_{\infty}-algebra version of a Schouten bracket.
18 November 2011
Speaker: Florian Robl (Cardiff)
Title: Nuclearity and split property in Algebraic Quantum Field Theory
Abstract: TBC
25 November 2011
Speaker: Roger Behrend (Cardiff)
Title: Proof of the alternating sign matrix and descending plane partition conjecture.
Abstract: Alternating sign matrices (ASMs) and descending plane partitions (DPPs) are combinatorial objects, each of which arose about 30 years ago, but in somewhat different contexts. However, it
was conjectured by Mills, Robbins and Rumsey in 1983 that these objects are closely connected. Specifically, they conjectured that certain finite sets of ASMs have the same sizes as certain finite
sets of DPPs, where these sets are comprised of all ASMs or DPPs with fixed values of particular statistics. In this talk, some background on ASMs and DPPs will be discussed, and some details of the
first known proof of the conjecture will be presented. The proof will use various intermediate combinatorial objects and results, but no prior knowledge of these will be assumed. This is joint work
with Philippe Di Francesco and Paul Zinn-Justin.
9 December 2011
Speaker: Stephen Fairhurst (Cardiff).
Title: TBC
Abstract: TBC
17 February 2012
Speaker: Thomas Prellberg (Queen Mary, University of London)
Title: Exact solution of a model of a vesicle attached to a wall subject to mechanical deformation.
Abstract: Area-weighted Dyck paths are a two-dimensional model for vesicles attached to a wall. We model the mechanical response of a vesicle to a pulling force by extending this model, and find a resulting non-trivial phase transition.
We obtain an exact solution using two different approaches, leading to a q-deformation of an algebraic functional equation, and a q-deformation of a linear functional equation with a catalytic variable, respectively.
While the non-deformed linear functional equation is solved by substitution of special values of the catalytic variable (the so-called "kernel method"), the q-deformed case is solved by iterative substitution of the catalytic variable.
This talk will be a gentle introduction to the modern techniques available for studying this and a large class of related problems.
27 April 2012
Speaker: Mark Hannam (Cardiff).
Title: Worse than dummies: black holes for computers.
Abstract: Numerical solutions of Einstein's equations for configurations of colliding black holes are an essential part of efforts to detect gravitational waves. They are also a rich source of
problems in mathematical relativity, numerical analysis, and numerical methods. In this talk I will focus on the novel representations of black-hole geometries that have appeared in numerical
simulations, some of the issues we have in disentangling physical effects from "mere" coordinate (gauge) artifacts, and some of the remaining open questions in the field.
The difference of 4 times a number and 3 is the same as the difference of two times the number 7. What is the difference?
This is the problem I have; I don't understand it.
Did you mean to finish the question as "what is the number" instead of "what is the difference"?
You need to think about how the words translate into symbols
difference means (-) subtraction
four times means (4x) multiplying something by 4
a number (n) n can represent anything for the time being
the same means (=) equals
two times means (2x) multiplying something by 2
Then you put the sentence into symbols and numbers
4 x n - 3 = 2 x 7
4n - 3 = 14
4n = 17
n = 17/4 = 4.25
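Under that reading of the sentence (the right-hand side being 2 × 7), the arithmetic checks out mechanically:

```python
n = 17 / 4            # the candidate answer, 4.25

left = 4 * n - 3      # "the difference of 4 times a number and 3"
right = 2 * 7         # "two times the number 7"

print(left, right)    # 14.0 14
```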
DiversitySampler: Functions for re-sampling a community matrix to compute diversity indices at different sampling levels
There are two functions in this package, which can be used together to estimate Shannon's diversity index at different levels of sample size. A Monte-Carlo procedure is used to re-sample a given observation at each level of sampling, the expectation being that the mean of the re-sampling will approach Shannon's diversity index at that sample level.
Version: 2.1
Published: 2012-12-22
Author: Matthew K. Lau
Maintainer: Matthew K. Lau <mkl48 at nau.edu>
License: GPL (≥ 3)
NeedsCompilation: no
CRAN checks: DiversitySampler results
Reference manual: DiversitySampler.pdf
Package source: DiversitySampler_2.1.tar.gz
MacOS X binary: DiversitySampler_2.1.tgz
Windows binary: DiversitySampler_2.1.zip
Old sources: DiversitySampler archive
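The package's idea can be sketched in a few lines. This is an illustrative Python version of the approach described above, not the package's actual R code, and the function names are mine: Shannon's index is H = −Σ pᵢ ln pᵢ, and the Monte-Carlo step averages H over repeated draws of a fixed sample size from the observed community:

```python
import numpy as np

def shannon(counts):
    """Shannon's diversity index H = -sum(p * ln p) over nonzero classes."""
    counts = np.asarray(counts, dtype=float)
    p = counts[counts > 0] / counts.sum()
    return -np.sum(p * np.log(p))

def resampled_shannon(counts, n, reps=200, seed=0):
    """Mean Shannon index over `reps` multinomial draws of size n."""
    rng = np.random.default_rng(seed)
    p = np.asarray(counts, dtype=float) / np.sum(counts)
    draws = rng.multinomial(n, p, size=reps)
    return float(np.mean([shannon(d) for d in draws]))

community = [25, 25, 25, 25]        # four equally abundant species
h_full = shannon(community)         # ln 4 ~ 1.386
h_small = resampled_shannon(community, n=10)
print(h_full, h_small)              # undersampling biases H downward
```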
Can we extract information about how fast a function decay from its Laplace transform?
My question is whether we can extract information about how fast an integrable function converges to zero by looking at the asymptotics of its Laplace transform.
More concrete case, let $f:\mathbb{R} \to \mathbb{R}_+$ be a smooth function in $L^1(\mathbb{R})$. If we know that its Laplace transform exists on the positive real axis and:
$\int_{\mathbb{R}} f(x) e^{sx} {\rm d}x \geq e^{\frac{s^2}{2}}, \quad \forall s > 0$,
can we conclude that the speed that $f$ converge to zero cannot be faster than $e^{-\frac{x^2}{2}}$, say,
$\liminf_{|x| \to \infty} \frac{f(x)}{e^{\frac{-x^2}{2} (1 - \epsilon)}} > 0$
for some small $\epsilon \in (0, 1)$? In a more probabilistic setup, if we know the moment generating function is lower bounded by that of Gaussian, can we conclude that it is "super-Gaussian"? I
know that the other direction seems to be true and is called sub-Gaussian.
If the information on the right-half real axis is not enough, do we need to know more? Will Fourier transform be more helpful? How about the other direction, i.e., lower bound on the Laplace
transform and upper bound on the decay of $f$? Thanks.
You can extract some information, but not as accurate as you suggested. For instance, there is no chance to say anything about your $\liminf$, but if you are happy with $\limsup$ instead, you are fine. To describe everything that is possible and everything that isn't would just take too much time. What exactly are you aiming at?
Yes, I did get the limsup myself: $\limsup \frac{\log\frac{1}{f(x)}}{x^2} \geq 1/2$, something like that. I am aiming at lower-bounding $f$ pointwise, or maybe lower-bounding it except on a set of small Lebesgue measure. – mr.gondolier Feb 19 '10 at 4:07
That is just impossible to do looking at crude growth rates: take $f(s)$ equal to $e^{-s^2/2}$ on $[2n-1,2n]$ and $0$ on $[2n,2n+1]$. You'll see that the gaps don't really matter much for the growth rate, but they occupy half the line. – fedja Feb 19 '10 at 4:37
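fedja's gappy example is easy to see numerically. The sketch below (my own illustration, with the gaps taken as the intervals where ⌊x⌋ is even) compares ∫ f(x) e^{sx} dx for the gappy function against the full Gaussian integral: the ratio stays near 1/2, so the bilateral Laplace transform keeps essentially the same growth rate even though f vanishes on half the line:

```python
import numpy as np

# Fine grid, wide enough that the integrand e^{s x - x^2/2} is negligible
# at the endpoints for s up to 5.
x = np.linspace(-30.0, 40.0, 400_001)
dx = x[1] - x[0]

gauss = np.exp(-0.5 * x**2)
# Gappy version: keep e^{-x^2/2} on intervals [2n-1, 2n), zero elsewhere.
f = np.where(np.floor(x) % 2 != 0, gauss, 0.0)

ratios = []
for s in (1.0, 3.0, 5.0):
    weight = np.exp(s * x)
    full = np.sum(gauss * weight) * dx    # ~ sqrt(2*pi) * e^{s^2/2}
    gappy = np.sum(f * weight) * dx
    ratios.append(gappy / full)

print(ratios)                             # each ratio near 0.5
```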
Can you control the oscillation of f(x) as x increases? If you can show that the ratio of f(x) to your 'simplified' form is 'slowly varying', then your asymptotics will probably work.
A simple example of what you cannot afford is a log-periodic oscillation; this is because the limits of oscillation of the function and of its Laplace transform need not agree. The simplest example of a log-periodic oscillation is a complex exponential:
$\int_{0}^{\infty} e^{-st}\, t^{\alpha + i\beta}\, dt = \left[ \frac{\Gamma\left( \alpha + i\beta + 1 \right)}{\Gamma\left( \alpha + 1 \right)}\, e^{-i\beta \log s} \right] \frac{\Gamma\left( \alpha + 1 \right)}{s}\, s^{-\alpha}$
In a sense you can view the imaginary part of the exponential as a 'wobbly constant' which changes more and more slowly. The point is that the amplitude of the wobble in the transform depends on β, but for the function it does not.
If you do have this sort of problem (it happens all the time in analysis of algorithms and chaotic dynamics), then you can, for example, resort to the 'gamma function method' of de Bruijn. The same thing holds true for the moment question. If you look up a counterexample for the moment problem (e.g. Feller volume II, p. 227), you see the ubiquitous log-periodic oscillation.
Not surprisingly, the log-periodic oscillation also shows up in convergence questions of Fourier series, but there it is not oscillating more and more slowly, but faster and faster.
Visual Math Commercial Edition
The Product Review: Visual Math commercial edition. Visual Math product family 12-in-1 bundle special. It helps school, college, and university teachers and students teach or study mathematics, including algebra, geometry, calculus, statistics, complex variable functions, fractals, curve fitting, probability analysis, optimization, and scientific data visualization.
Visual Math Commercial Edition is a product presented by RegNow, Vendor Software: Home & Education: Mathematics. You can find out more about it at the Visual Math Commercial Edition website.
If there are no reviews yet from users, it is possible that this product is completely new. If you have prior experience with the products available in Vendor Software: Home & Education: Mathematics, you can serve this community by leaving a reputable review.
Visual Math Commercial Edition! Money Back Guarantee
Visual Math Commercial Edition is covered under ClickBank's Refund Policy. It is listed at the RegNow site.
Our refund policy for all RegNow products is as follows:
RegNow will honor the return of any product, or a replacement, within 60 days from the date of purchase. For recurring billing products that charge more than a one-time fee, refunds can be granted, if desired, within the normal 60-day period.
You can try Visual Math Commercial Edition 100% risk-free. If after the purchase you are not satisfied with the content of this product, for whatever reason, you can request a no-questions-asked refund within 60 days of your purchase. There is no risk in trying Visual Math Commercial Edition.
Visual Math Commercial Edition WebSite Preview
Click Here To Get Visual Math Commercial Edition Original Page
Rating: 10 out of 10 (from 90 votes)
This entry was posted by hamdouch on July 3, 2013 at 12:37 pm, and is filed under Last Product Review. Follow any responses to this post through RSS 2.0.Both comments and pings are currently closed. | {"url":"http://curbshop.net/last-product-review/visual-math-commercial-edition","timestamp":"2014-04-20T20:56:07Z","content_type":null,"content_length":"39220","record_id":"<urn:uuid:8c74d952-5a5c-43fd-906c-5c6a3482b513>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00468-ip-10-147-4-33.ec2.internal.warc.gz"} |
st: Re: Extensions to: Creating variables recording properties of the other members of a group
From "Marcela Perticara" <mperticara@uahurtado.cl>
To <statalist@hsphsun2.harvard.edu>
Subject st: Re: Extensions to: Creating variables recording properties of the other members of a group
Date Wed, 28 Aug 2002 19:08:56 -0500
Hello Guillermo,
This code might help. I only tested it with the two cases you sent; it might
need some adjustment.
Here I am creating only one variable for the number of kids, but you can easily
make the variable father-mother specific, and you will get two variables after
you merge.
If your dataset is too big you might want to read the data twice instead of
/*Make datasets for fathers and mothers with their number
of kids*/
use hm.dta, clear /*your dataset*/
foreach var of varlist fatherm motherm {
    qui keep if `var'!=.
    sort hhid `var'
    by hhid `var': gen sibl=_N /*here you can make the name father-mother specific*/
    by hhid `var': keep if _n==1
    keep hhid `var' sibl
    ren `var' member
    sort hhid member
    save `var'.dta, replace
    use hm.dta, clear /*re-read the data before the next pass*/
}
sort hhid member
merge hhid member using fatherm
drop _merge
sort hhid member
merge hhid member using motherm, update
drop _merge
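For comparison, the same count-then-merge idea can be sketched in a few lines of plain Python; this is my illustration of the logic, not code from the thread, using the toy household data from the original question:

```python
from collections import Counter

# (hhid, member, fatherm, motherm) rows from the question's example
rows = [
    (1, 1, None, None), (1, 2, None, None),
    (1, 3, 1, 2), (1, 4, 1, 2), (1, 5, 1, 2),
    (2, 1, None, None), (2, 2, None, 1), (2, 3, None, 2),
]

def own_kids(rows):
    """Count children per (hhid, parent member), then attach the count to
    each person: the analogue of the make-datasets-and-merge Stata code."""
    fathers = Counter((h, f) for h, m, f, mo in rows if f is not None)
    mothers = Counter((h, mo) for h, m, f, mo in rows if mo is not None)
    return {(h, m): fathers[(h, m)] + mothers[(h, m)] for h, m, f, mo in rows}

ownkids = own_kids(rows)
```

This reproduces the desired output in the question: 3 for each parent in household 1, 1 each for the grandma and the mother in household 2, and 0 for the children.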
I didn't put too much effort into writing it short; you might be able to
shorten it.
Hope this helps!
Universidad Alberto Hurtado
Erasmo Escala 1835
Santiago, Chile
Phone: 671-7130 ext. 267
----- Original Message -----
From: <gcruces@worldbank.org>
To: <statalist@hsphsun2.harvard.edu>
Sent: Wednesday, August 28, 2002 4:19 PM
Subject: st: Extensions to: Creating variables recording properties of the
other members of a group
> When working with twin household/individual datasets, this is one of the
> useful FAQs:
> http://www.stata.com/support/faqs/data/members.html
> However, there are a few issues I couldn't solve with the information
> there, or not efficiently at least. I would like to solve my problem and,
> if worthwhile, write an extension of the FAQ. The problem refers to the fact
> that sometimes you may need to record for one individual the properties not of
> the whole group, but of another member of the group in particular.
> In my example, I have a household survey where I don't have direct
> information about the number of kids of each individual, but I have something like
> hhid and member are just the household id and number of member. Variables
> fatherm and motherm tell you the number of the member of the father and
> mother, if in the household:
> hhid member fatherm motherm
> 1 1 - -
> 1 2 - -
> 1 3 1 2
> 1 4 1 2
> 1 5 1 2
> 2 1 - -
> 2 2 - 1
> 2 3 - 2
> ...
> Family one is a couple with three kids. Family two is a grandma, the mother,
> and a grandchild.
> I want to create the variable ownkids that gives me the number of own kids
> living in the house:
> hhid member ownkids
> 1 1 3
> 1 2 3
> 1 3 0
> 1 4 0
> 1 5 0
> 2 1 1
> 2 2 1
> 2 3 0
> My brute force solution, which makes a lot of unnecessary comparisons and is
> very long (because I generate and drop many variables), is of the form:
> maxmem being the number of members of each household (group i, max is the number
> of groups),
> forvalues i = 1/`max' {
> qui sum member if group==`i'
> local maxmem=r(max)
> forvalues j = 1/`maxmem' {
> di "-----------Household number `i', number of members: `maxmem'"
> forvalues k = 1/`maxmem' {
> di "Household `i', member `j', comparing with `k'"
> qui gen a=motherm==`j' if member==`k'&group==`i'
> qui egen b=max(a)
> qui replace mkids=mkids+b if member==`j'&group==`i'
> drop a b
> qui gen a=fatherm==`j' if member==`k'&group==`i'
> qui egen b=max(a)
> qui replace fkids=fkids+b if member==`j'&group==`i'
> drop a b
> }
> }
> }
> This creates two variables, mkids and fkids, which are the number of kids of
> mothers and fathers. For each member of the household, I compare if . The
> replace, drop, takes very long, and even longer if the dataset in memory is
> large (I had to partition the dataset in 25 parts to make this run).
> The main problem (the main awkwardness in this program) is that I gen,
> etc. because I could not just create a scalar that reflects the value of a
> variable for one precise observation, something of the form (which of
> course doesn't work):
> local a=mother==`j' if member==`k'&group==`i' (meaning: mother etc.
> refer to the observation: member==`k'&group==`i')
> I couldn't use something like motherm[_...] because I was not using by: ...
> What I would like to know is if there are more efficient ways of doing this (I'm
> sure there are!).
> Thank you all
> ***************************
> Guillermo Cruces
> *
> * For searches and help try:
> * http://www.stata.com/support/faqs/res/findit.html
> * http://www.stata.com/support/statalist/faq
> * http://www.ats.ucla.edu/stat/stata/
Universidad Alberto Hurtado
* For searches and help try:
* http://www.stata.com/support/faqs/res/findit.html
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/ | {"url":"http://www.stata.com/statalist/archive/2002-08/msg00516.html","timestamp":"2014-04-17T10:07:03Z","content_type":null,"content_length":"10810","record_id":"<urn:uuid:4c664d76-1392-4dd2-bc97-0846fbb5ffa5>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00582-ip-10-147-4-33.ec2.internal.warc.gz"} |
Chicago Ridge Trigonometry Tutor
...I have assisted in Pre-Algebra, Algebra, and Pre-Calculus classes. I have also tutored Geometry and Calculus students. I have a degree in Mathematics from Augustana College.
7 Subjects: including trigonometry, geometry, algebra 1, algebra 2
...Wilcox scholarship). In high school, I scored a 2400 on the SAT, and earned a 5 on the AP Calculus BC exam from self study. I also received a 5 on the AP Statistics exam. I have teaching
experience, as well.
13 Subjects: including trigonometry, calculus, geometry, statistics
I am an experienced, professional educator who has worked with students mainly in the area of mathematics. I have guided students to success from grades pre-k through graduate school. In addition,
I also work with test preparation including district wide tests and will be working with students pre...
76 Subjects: including trigonometry, English, Spanish, reading
...I teach by asking the student prompting questions so the student can practice the thought processes that lead them to determining correct answers on their own. This increases the success rate
on examinations and enhances the critical thinking skills necessary in the 'real world'. I also provide...
13 Subjects: including trigonometry, chemistry, calculus, geometry
...I am a college graduate with a Bachelor of Science degree. Study skills are the most important aspect when teaching young children especially in math. I make the learning experience fun and
tell the student "why" the math question has the answer it has, and by using a graphic or visual method in explaining the problems as simple as addition to advanced algebra.
14 Subjects: including trigonometry, chemistry, calculus, algebra 2 | {"url":"http://www.purplemath.com/Chicago_Ridge_Trigonometry_tutors.php","timestamp":"2014-04-18T00:58:45Z","content_type":null,"content_length":"24193","record_id":"<urn:uuid:d74ce629-2b6a-4061-a914-b23d3a87a80d>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00238-ip-10-147-4-33.ec2.internal.warc.gz"} |
Pooling data for Number Needed to Treat: no problems for apples
BMC Med Res Methodol. 2002; 2: 2.
R Andrew Moore,^1 David J Gavaghan,^2 Jayne E Edwards,^1 Phillip Wiffen,^1 Henry J McQuay^1
^1Pain Research & Nuffield Department of Anaesthetics, University of Oxford, Oxford Radcliffe Hospital, The Churchill, Headington, Oxford, UK
^2Oxford University Computing Laboratory, Wolfson Building, Parks Rd, Oxford OX1 3QD, UK
Corresponding author.
R Andrew Moore: andrew.moore/at/pru.ox.ac.uk; David J Gavaghan: David.Gavaghan/at/comlab.oxford.ac.uk; Jayne E Edwards: jayne.edwards/at/pru.ox.ac.uk; Phillip Wiffen: phil.wiffen/at/pru.ox.ac.uk;
Henry J McQuay: henry.mcquay/at/pru.ox.ac.uk
Received October 24, 2001; Accepted January 25, 2002.
© 2002 Moore et al; licensee BioMed Central Ltd. This is an Open Access article: verbatim copying and redistribution of this article are permitted in all media for any purpose, provided this notice
is preserved along with the article's original URL.
To consider the problem of the calculation of number needed to treat (NNT) derived from risk difference, odds ratio, and raw pooled events shown to give different results using data from a review of
nursing interventions for smoking cessation.
A review of nursing interventions for smoking cessation from the Cochrane Library provided different values for NNT depending on how NNTs were calculated. The Cochrane review was evaluated for
clinical heterogeneity using L'Abbé plot and subsequent analysis by secondary and primary care settings.
Three studies in primary care had low (4%) baseline quit rates, and nursing interventions were without effect. Seven trials in hospital settings with patients after cardiac surgery, or heart attack,
or even with cancer, had high baseline quit rates (25%). Nursing intervention to stop smoking in the hospital setting was effective, with an NNT of 14 (95% confidence interval 9 to 26). The
assumptions involved in using risk difference and odds ratio scales for calculating NNTs are discussed.
Clinical common sense and concentration on raw data helps to detect clinical heterogeneity. Once robust statistical tests have told us that an intervention works, we then need to know how well it
works. The number needed to treat or harm is just one way of showing that, and when used sensibly can be a useful tool.
Cates [1] concentrates on Simpson's paradox, which relates to problems that can arise when there is an imbalance between treatment and placebo arms in controlled trials. This "paradox" is hardly new,
having first been discussed by E.H. Simpson 50 years ago [2], and is now a staple of any undergraduate statistics course. Cates further contends that NNTs should be calculated from weighted risk
differences (or odds ratios) rather than pooled raw events, although this is relevant to Simpson's paradox only if inappropriate statistical methods are being used in inappropriate circumstances.
It all comes down to the old problem of meta-analysis, of whether you are comparing apples with something else, and how you count the apples when you've got them.
The problem
All of this is based on a numerical analysis of a Cochrane review of nursing interventions for smoking cessation [3]. The pooled raw data show that fewer people (14.3%) stop smoking with a nursing
intervention than with control (15.6%): that is, the intervention does not work. Cates wants us to believe that the "real" answer is different, and that 3.7% more patients stop smoking with the
intervention than with control.
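Numerically, the reversal Cates describes is the classic Simpson's paradox pattern: with unbalanced arms, pooled raw events can point the opposite way from every individual trial. A small sketch with invented counts (these are not the review's data):

```python
# (treat_events, treat_n, control_events, control_n) for two hypothetical trials.
# Each trial shows a HIGHER quit rate on treatment, but the arms are unbalanced
# across very different baseline rates.
trials = [
    (30, 100, 25, 100),  # high-baseline trial, balanced arms
    (10, 200,  1,  50),  # low-baseline trial, unbalanced arms
]

# Per-trial risk differences: both positive (treatment better in each trial)
rds = [te / tn - ce / cn for te, tn, ce, cn in trials]

# A weighted average of risk differences is also positive
weights = [tn + cn for _, tn, _, cn in trials]
weighted_rd = sum(w * rd for w, rd in zip(weights, rds)) / sum(weights)

# Naive pooling of raw events points the other way
pooled_treat = sum(t[0] for t in trials) / sum(t[1] for t in trials)
pooled_ctrl = sum(t[2] for t in trials) / sum(t[3] for t in trials)
```

Here every trial favours treatment (risk differences +5% and +3%), as does any weighted average, yet the pooled raw rates are 13.3% versus 17.3% against treatment; which summary is meaningful depends on whether the trials were homogeneous to begin with.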
Clinical heterogeneity
This is, indeed, a paradox. But complicated statistical arguments may not be the best way of dealing with it. When faced with something that looks wrong, the first rule is to look at the raw data. In
this case a simple graphical representation [4] of what happened in each trial helps.
Figure shows a plot of the percentage of quitters for intervention (Y-axis) and control (X-axis) for individual trials. There is a huge variation, from about 2% to 60-70% in each case. Since stopping
smoking is universally judged to be very difficult for most people, trials showing quit rates of up to 55% without any intervention need a second look. When examining the individual trials we find
that three (light blue) were done in primary care populations with no particular desire to stop smoking. We find that seven trials (dark blue) were done in a hospital setting, and included patients
who had heart attacks, cardiac surgery, or even had cancer. It is not surprising that their attitude to stopping smoking was somewhat different.
L'Abbé plot of nursing interventions versus control for smoking cessation at longest follow up. Dark blue symbols indicate studies in a hospital setting and light blue symbols those in a primary care
L'Abbé plots using raw data will almost always show up clinical heterogeneity, whereas Forest plots, in which data have been manipulated to create statistical outputs like odds ratios or risk
differences, will not.
Of course there are many other sources of clinical heterogeneity in these ten trials, apart from populations tested. It was unlikely, for instance, that any two interventions were the same, and we
know that criteria for cessation were different even within studies. Moreover, the problem of trial imbalance comes from combining different interventions as if they were a single intervention [5,6].
The "real" results
If we believe that patients after coronary artery bypass, for instance, are different in their motivation to stop smoking from unselected general practice patients smoking at least one cigarette per
day, and analyse them separately, a more sensible picture emerges (Table ).
Results for nursing interventions versus control analysed by hospital and primary care setting
In hospital patients there was a significant relative benefit from nursing interventions (using both random and fixed effects models), with 7% more quitting smoking, and generating an NNT of 14. That
is, for every 14 patients given a nursing intervention, one more will quit smoking than would have done without the nursing intervention. Many will see this as a useful result, especially as these
patients need advice about other aspects of their lifestyle, like diet and exercise.
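For readers who want the arithmetic: the NNT is the reciprocal of the absolute risk difference, and a confidence interval follows by inverting the interval for the risk difference. A sketch with hypothetical counts chosen to give roughly the 7% difference quoted above (these are not the review's raw numbers):

```python
import math

def nnt_from_counts(treat_events, treat_n, ctrl_events, ctrl_n, z=1.96):
    """NNT = 1 / absolute risk difference, with an approximate 95% CI
    obtained by inverting the normal-approximation CI for the RD.
    Only sensible when the pooled trials are clinically homogeneous."""
    p1 = treat_events / treat_n
    p2 = ctrl_events / ctrl_n
    rd = p1 - p2
    se = math.sqrt(p1 * (1 - p1) / treat_n + p2 * (1 - p2) / ctrl_n)
    lo, hi = rd - z * se, rd + z * se
    # Reciprocals swap the order of the bounds
    return 1 / rd, 1 / hi, 1 / lo

nnt, nnt_lo, nnt_hi = nnt_from_counts(160, 500, 125, 500)  # 32% vs 25% quit
```

With these made-up counts the point estimate is 1/0.07, about 14, matching the order of magnitude in the table; the width of the interval depends entirely on the group sizes.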
In unselected primary care patients there was no benefit from nursing interventions (using both random and fixed effects models). Two of these three trials were unbalanced, but even choosing the most
effective of three interventions in each to lose the imbalance would not affect the result. Nursing interventions in unselected primary care patients are probably not effective.
These are the real results, and they are quite clear, despite some misgivings about the trials.
Different methods of calculating NNTs, using pooled raw event rates, or from odds ratios, relative risk, or risk difference, will generally give much the same answer when pooling information where
the same outcome is measured over the same time for the same intervention in similar patients, when the effect is large and where there is a sufficiency of information. Variation in event rates may
just be a product of size [7], but when large variations exist the presence of clinical heterogeneity should first be sought.
Unfortunately much of the discussion on statistical techniques put forward in Cates' article is confused and misleading. Other authors have discussed these issues cogently and coherently and
interested readers are referred to these articles [8-12]. We will, however, comment on one particular point which recurs throughout the article, on the validity of pooling data, since this is
fundamental to meta-analysis.
Any technique for combining data from a series of studies or trials of a particular treatment or intervention must be based on a set of assumptions about the nature of any positive or negative effect
that results. These assumptions are discussed below.
1 In the risk difference scale, the traditional assumption is that the event rates are fixed in each of the control (control event rate or CER) and treatment groups (experimental event rate or EER).
Any variation in the observed event rates is then attributed to random chance. If the trials being combined are truly clinically homogeneous and have been designed properly (for example, with
balanced arms), which is the situation that will commonly pertain, then in this (and only in this) case it is appropriate to pool raw data to obtain combined measures such as NNTs.
More recently the "random effects" model [10] has been suggested to allow calculation of summary measures when the degree of "statistical" heterogeneity is greater than that occurring by random
chance. This technique is based on what Thompson & Pocock [11] have described as "the peculiar premise that the trials done are representative of some hypothetical population of trials, and on the
unrealistic assumption that the heterogeneity between studies can be represented by a single variance". We agree with other authors [11,12] who contend that where considerable heterogeneity is
observed it is more useful to investigate what may have caused those differences (such as the underlying differences between the inhospital and primary care patients in the nursing intervention
study) than to attempt to overcome them by statistical methods of unproven validity.
2 The assumptions underlying the odds ratio scale are very different. Here we assume that the ratio of the odds of observing an effect (e.g. smoking cessation) in the treatment group to the odds of
observing that effect in the control group are constant between trials. This scale is appropriate where it can be demonstrated that whilst the underlying event rates in both the control and treatment
arms of the trial may vary, the relative odds of those in whom we observe a particular effect remains fixed.
Techniques for combining odds ratios from several studies were developed primarily for case control studies (particularly cancer trials) to overcome problems due to possible confounding factors (such
as age) by stratifying the data into internally homogeneous strata, then testing the hypothesis that the odds ratio remains constant across the strata. The odds ratio has been proposed as an
appropriate technique for meta-analysis since it allows combination of the results from trials with widely differing control event rates, but it is clearly a matter of some contention whether such
trials can be considered to be clinically homogeneous. In particular, it seems to us to be very unwise to use a summary odds ratio to calculate an NNT value (even if the associated CER is quoted)
since the NNT is, by definition, dependent on the assumption of a fixed underlying control event rate, whilst the odds ratio, also by definition, is not. Any such NNT would therefore be of very
questionable value.
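That dependence is easy to exhibit: converting a fixed odds ratio to an NNT requires assuming a control event rate, and the answer moves dramatically with that assumption. A sketch (the odds ratio and baseline rates here are illustrative only, not estimates from the review):

```python
def nnt_from_or(odds_ratio, cer):
    """NNT implied by an odds ratio at an ASSUMED control event rate:
    apply the odds ratio to the control odds to recover the treatment rate."""
    ctrl_odds = cer / (1 - cer)
    treat_odds = odds_ratio * ctrl_odds
    eer = treat_odds / (1 + treat_odds)
    return 1 / (eer - cer)

# The same odds ratio of 2 gives very different NNTs at the two baselines
# seen in the smoking-cessation trials (about 4% vs about 25%):
nnt_primary = nnt_from_or(2.0, 0.04)
nnt_hospital = nnt_from_or(2.0, 0.25)
```

With OR = 2 the implied NNT is about 27 at a 4% baseline but about 7 at a 25% baseline, which is why quoting an NNT derived from a summary odds ratio without its assumed CER is of questionable value.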
Our practice (as reflected in the two articles published in Bandolier that Cates comments on; [14], [15]) of pooling raw events to calculate an NNT has always been predicated on having clinically
homogeneous trials in the first place, and when outcomes, interventions and duration are similar. Only then is an NNT useful, and only then will an NNT calculated in this way be correct.
The lesson is that systematic reviews and meta-analyses have to be done to high quality. Quality comes in different guises, which might include gross imbalance between the size of groups. What is
needed is some clinical common sense and concentration on raw data. Yes, we need robust statistical tests to tell us that an intervention works, but we need also to know how well an intervention
works. The number needed to treat or harm is just one way of showing how well an intervention works, and when used sensibly can be a useful tool. Among GPs in Essex it was the tool they felt most
confident about using [13].
If we have only apples, then counting them should not be a problem.
Competing Interests
None declared.
Editorial note
An additional commentary on the article by Cates [1] is published alongside this [16].
The authors wish to acknowledge the incredible hard work and dedication of those who produce any systematic review and especially a Cochrane Review. Finding better ways forward is part of the
learning process and the fun; we have learned much from our mistakes, and from those who continue to point them out.
• Cates C. Simpson's paradox and calculation of Number needed to treat from meta-analysis. BMC Medical Research Methodology. 2002;2:1. doi: 10.1186/1471-2288-2-1. [PMC free article] [PubMed] [Cross Ref]
• Simpson EH. The Interpretation of Interaction in Contingency Tables. Journal of the Royal Statistical Society, Ser B. 1951;13:238–241.
• Rice VH, Stead LF. The Cochrane Library, Oxford: Update Software.; 2001. Nursing interventions for smoking cessation (Cochrane Review). [PubMed]
• L'Abbé KA, Detsky AS, O'Rourke K. Meta-analysis in clinical research. Ann Intern Med. 1987;107:224–33. [PubMed]
• Hollis JF, Lichstenstein E, Vogt TM, Stevens VJ, Biglan A. Nurse-assisted counseling for smokers in primary care. Ann Intern Med. 1993;118:521–5. [PubMed]
• Rice VH, Fox DH, Lepczyk M, Siegreen M, Mullin M, Jarosz P, Templin T. A comparison of nursing interventions for smoking cessation in adults with cardiovascular health problems. Heart Lung. 1994;
23:473–86. [PubMed]
• Moore RA, Gavaghan D, Tramèr MR, Collins SL, McQuay HJ. Size is everything – large amounts of information are needed to overcome random effects in estimating direction and magnitude of treatment
effects. Pain. 1998;78:209–16. doi: 10.1016/S0304-3959(98)00140-7. [PubMed] [Cross Ref]
• Altman DG, Deeks JJ, Sackett DL. Odds ratios should be avoided when events are common. BMJ. 1998;317:1318. [PMC free article] [PubMed]
• Senn S. Rare distinction and common fallacy. BMJ electronic letters. 2001. http://www.bmj.com/cgi/eletters/317/7168/1318#EL3
• DerSimonian R, Laird N. Meta-analysis of clinical trials. Control Clin Trial. 1986;7:177–88. doi: 10.1016/0197-2456(86)90046-2. [PubMed] [Cross Ref]
• Thompson SG, Pocock SJ. Can Meta-analyses be trusted. Lancet. 1991;338:1127–1130. doi: 10.1016/0140-6736(91)91975-Z. [PubMed] [Cross Ref]
• Fleiss JL. Wiley, New York, Second 1991. Statistical Methods for rates and Proportions.
• McColl A, Smith H, White P, Field J. General practitioners' perceptions of the route to evidence based medicine: a questionnaire survey. BMJ. 1998;316:361–365. [PMC free article] [PubMed]
• Number needed to treat (NNT). Bandolier. 1999;59:1–4. http://www.jr2.ox.ac.uk/bandolier/band59/NNT1.html
• Nicotine replacment and smoking cessation. Bandolier. 2001;86:5–6. http://www.jr2.ox.ac.uk/bandolier/band86/b86-2.html
• Altman DG, Deeks JJ. Meta analysis, Simpson's paradox, and the number needed to treat. BMC Medical Research Methodology. 2002;2:3. doi: 10.1186/1471-2288-2-3. [PMC free article] [PubMed] [Cross Ref] | {"url":"http://www.ncbi.nlm.nih.gov/pmc/articles/PMC65633/?tool=pubmed","timestamp":"2014-04-17T13:43:48Z","content_type":null,"content_length":"66694","record_id":"<urn:uuid:06a9179e-08fe-49ef-b23d-3cc79f920a0c>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00053-ip-10-147-4-33.ec2.internal.warc.gz"}
Speaker: Prof. David Gleich
Dept. of Computer Science, Purdue University
The Power and Arnoldi Methods in an Algebra of Circulants
Circulant matrices play a central role in a recently proposed formulation of three-way data computations. In this setting, a three-way table corresponds to a matrix where each "scalar" is a vector
of parameters defining a circulant. This interpretation provides many generalizations of results from matrix or vector-space algebra. We derive the power and Arnoldi methods in this algebra. In the
course of our derivation, we define inner products, norms, and other notions. These extensions are straightforward in an algebraic sense, but the implications are dramatically different from the
standard matrix case. For example, a matrix of circulants has a polynomial number of eigenvalues in its dimension; although, these can all be represented by a carefully chosen canonical set of
eigenvalues and vectors. These results and algorithms are closely related to standard decoupling techniques on block-circulant matrices using the fast Fourier transform. | {"url":"http://icme.stanford.edu/events/cme510-linear-algebra-and-optimization-seminar-2","timestamp":"2014-04-19T22:05:40Z","content_type":null,"content_length":"20924","record_id":"<urn:uuid:09097470-53f1-4fe8-823e-7edf55a45cb3>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00264-ip-10-147-4-33.ec2.internal.warc.gz"} |
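The decoupling mentioned in the last sentence can be illustrated directly: a circulant "scalar" is determined by its parameter vector, and the product of two circulants corresponds to circular convolution of those vectors, i.e. a pointwise product in the Fourier domain. A small numpy sketch (my illustration, not material from the talk):

```python
import numpy as np

def circulant(c):
    """Dense circulant matrix whose first column is c."""
    n = len(c)
    return np.array([[c[(i - j) % n] for j in range(n)] for i in range(n)])

def scalar_mul(a, b):
    """Multiply two circulant 'scalars' via the FFT: circulants are
    diagonalized by the DFT, so the product is pointwise in that basis."""
    return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

a = np.array([1.0, 2.0, 3.0])
b = np.array([4.0, 5.0, 6.0])
```

The FFT route agrees with explicit matrix multiplication, and because all circulants share the DFT eigenvectors, the product is again circulant; this is the kind of structure iterative methods over such an algebra can exploit.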
Yeah, you would think I would have something better to put here after such a long absence. Still, it's something.
This is nearly identical to the code for the Credit Card Checksum posted earlier. This code does the validation of the checksum for an Ontario health card. I needed it for a project with the
Ministry, found my own code on the Internet (!) and hacked it to fit.
I don't know the exact quote, but someone well known in open source said something to the effect that real men don't do backups, they just publish to the Internet. Here's my backup. Cheers,
'Health Card Validation Routine (Visual Basic)
'Copyright (c) Bob Trower, 2001, 2010
'This source code may be used as you wish, provided
'that these comments remain in the code.
'Returns True if the card number is valid, False otherwise
'Algorithm is the one specified by the Ontario Ministry
'of Health (in turn similar to the 'LUHN' formula
'specified in ANSI X4.13)
'Last Digit of Card Number is the 'check digit'.
'From right to left, take value of check digit and add:
' double every second number and sum resulting digits.
' add the non-doubled digits.
'If the Health Card Number is valid, the final sum
'will be an even multiple of ten ((x mod 10) = 0 ).
Function IsValidHCNumber(HCNumber As String) As Integer
    Dim CheckSum As Integer
    Dim ThisDigit As Integer
    Dim i As Integer
    ' Note: Order of Steps is important...
    IsValidHCNumber = False ' Redundant in VB, but good practice
    If Len(HCNumber) = 10 Then ' Known Card numbers are 10 digits long
        CheckSum = Val(Mid(HCNumber, Len(HCNumber), 1)) ' Last Digit is CheckSum Digit
        For i = Len(HCNumber) - 1 To 1 Step -2
            ThisDigit = Val(Mid(HCNumber, i, 1)) * 2 ' Double these digits
            CheckSum = CheckSum + (ThisDigit \ 10) + (ThisDigit Mod 10) ' Add the sum of the resulting digits
        Next i
        For i = Len(HCNumber) - 2 To 1 Step -2
            CheckSum = CheckSum + Val(Mid(HCNumber, i, 1)) ' Add these digits
        Next i
        If CheckSum Mod 10 = 0 Then ' Resulting number should be an even multiple of 10
            IsValidHCNumber = True
        End If
    End If
End Function | {"url":"http://advogato.org/person/DeepNorth/","timestamp":"2014-04-21T09:42:21Z","content_type":null,"content_length":"24171","record_id":"<urn:uuid:dd0c6354-a37a-4f63-a0f1-b4b48c144a6d>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00113-ip-10-147-4-33.ec2.internal.warc.gz"}
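In the spirit of "publishing to the Internet as backup", here is a rough Python port of the same checksum (my translation; the original post contains only the Visual Basic version):

```python
def is_valid_hc_number(number: str) -> bool:
    """Ontario health card check digit: a Luhn-style mod-10 checksum
    over a 10-digit number whose last digit is the check digit."""
    if len(number) != 10 or not number.isdigit():
        return False
    checksum = int(number[-1])            # start from the check digit
    for i, ch in enumerate(number[:-1]):  # the first nine digits, left to right
        d = int(ch)
        if i % 2 == 0:                    # every second digit from the right is doubled
            d *= 2
            checksum += d // 10 + d % 10  # sum the digits of the doubled value
        else:
            checksum += d
    return checksum % 10 == 0
```

For a 10-digit input the doubled positions here (0, 2, 4, 6, 8 from the left, excluding the check digit) are exactly the ones the VB loop `For i = Len(HCNumber) - 1 To 1 Step -2` visits.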
Euclid (yōˈklĭd), fl. 300 B.C., Greek mathematician. Little is known of his life other than the fact that he taught at Alexandria, being associated with the school that grew up there in the
late 4th cent. B.C. He is famous for his Elements, a presentation in thirteen books of the geometry and other mathematics known in his day. The first six books cover elementary plane geometry and
have served since as the basis for most beginning courses on this subject. The other books of the Elements treat the theory of numbers and certain problems in arithmetic (on a geometric basis) and
solid geometry, including the five regular polyhedra, or Platonic solids. A few modern historians have questioned Euclid's authorship of the Elements, but he is definitely known to have written other
works, most notably the Optics.
The great contribution of Euclid was his use of a deductive system for the presentation of mathematics. Primary terms, such as point and line, are defined; unproved assumptions, or postulates,
regarding these terms are stated; and a series of statements are then deduced logically from the definitions and postulates. Although Euclid's system no longer satisfies modern requirements of
logical rigor, its importance in influencing the direction and method of the development of mathematics is undisputed.
One consequence of the critical examination of Euclid's system was the discovery in the early 19th cent. that his fifth postulate, equivalent to the statement that one and only one line parallel to a
given line can be drawn through a point external to the line, can not be proved from the other postulates; on the contrary, by substituting a different postulate for this parallel postulate two
different self-consistent forms of non-Euclidean geometry were deduced, one by Nikolai I. Lobachevsky (1826) and independently by János Bolyai (1832) and another by Bernhard Riemann (1854).
See D. Berlinski, The King of Infinite Space: Euclid and His Elements (2013).
The Columbia Electronic Encyclopedia, 6th ed. Copyright © 2012, Columbia University Press. All rights reserved.
{"url":"http://www.infoplease.com/encyclopedia/people/euclid-greek-mathematician.html","timestamp":"2014-04-16T14:14:24Z","content_type":null,"content_length":"29678","record_id":"<urn:uuid:b632ddf1-11f0-44f3-a22d-0e71cf533a8c>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00457-ip-10-147-4-33.ec2.internal.warc.gz"}
MathGroup Archive: November 1998 [00198]
[Date Index] [Thread Index] [Author Index]
vector fields on spheres
• To: mathgroup at smc.vnet.net
• Subject: [mg14809] vector fields on spheres
• From: wself at viking.emcmt.edu (Will Self)
• Date: Wed, 18 Nov 1998 01:29:11 -0500
• Sender: owner-wri-mathgroup at wolfram.com
David Farrelly and Naum Phleger recently discussed plotting vector
fields on spheres.
Below is an approach to creating vector field plots on spheres. I start
with an icosahedron and subdivide it (repeatedly, if desired) to make a
Buckminster-Fuller-dome-like approximation to a sphere. I show a
vector of the vector field at each vertex of the approximate sphere.
An advantage of this approach, over, say, using lines of longitude and
latitude, is that the points at which the vectors are drawn are
distributed evenly about the sphere.
My homely picture of a vector is a line segment together with an
enlarged point at its tail. It's missing its head. If you really
wanted heads on your vectors, you could use a different function in
place of the function vec below. You can of course adjust the
PointSize[.015] to suit your taste.
You may need to adjust the length of all the vectors to get a good
picture. In the function g below, the factor .4 works pretty well if
you are using a 2-stage subdivision, as shown, but it is somewhat large
if you use a 3-stage subdivision; the vectors crowd into each other.
(By the way, the function g gives a tangent vector field on the sphere,
which does indeed vanish at at least one point (two, really), as
required by the Hairy Ball Theorem.)
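As an aside (this check is mine, not part of the original post), both claims about g, that it is tangent to the sphere and that it vanishes at the poles, are easy to verify numerically. A Python sketch mirroring the Mathematica definition:

```python
import math

def g(x, y, z):
    """The post's example field: .4*(Sqrt[1-z^2]*y, -Sqrt[1-z^2]*x, 0)."""
    s = math.sqrt(max(0.0, 1 - z * z))
    return (0.4 * s * y, -0.4 * s * x, 0.0)

# Tangency: g(p) is orthogonal to p for a point p on the unit sphere.
p = (0.6, 0.8, 0.0)
v = g(*p)
print(abs(sum(a * b for a, b in zip(p, v))) < 1e-12)  # True

# The field vanishes at the poles, as the Hairy Ball Theorem requires
# of any continuous tangent field on the sphere.
print(all(c == 0 for c in g(0.0, 0.0, 1.0)))  # True
```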
You can show just the vector field, or just the sphere, or both
together, as below.
I assume that the sphere is of radius 1. The basic icosahedron is taken
from the Polyhedra package, but normalized so as to be inscribed in a
sphere of radius 1.
Write me if you want any details explained.
Will Self
vec[v_, w_] := (* a vector from point v to point w *)
  {{PointSize[.015], Point[v]}, Line[{v, w}]}

normalize[{x_, y_, z_}] := {x, y, z}/Sqrt[x^2 + y^2 + z^2];

ubary[Polygon[{p_, q_, r_}]] :=
  Module[{pq = normalize[(p + q)/2],
      pr = normalize[(p + r)/2],
      qr = normalize[(r + q)/2]},
    {Polygon[{p, pq, pr}], Polygon[{q, pq, qr}],
     Polygon[{r, pr, qr}], Polygon[{pr, pq, qr}]}];
SetAttributes[ubary, Listable];

Unprotect[Times];
Times[x_, Polygon[y_]] := Polygon[x y];
Protect[Times];

basicIcos = Icosahedron[]/Icosahedron[][[1, 1, 1, 3]];
sphereApprox[n_] := Nest[Flatten[ubary[#]]&, basicIcos, n];

(* EXAMPLE *)
g[{x_, y_, z_}] := .4 {Sqrt[1-z^2] y, -Sqrt[1-z^2] x, 0};
sph2 = sphereApprox[2]; (* the 2-stage subdivision described above *)
verts2 = Union[Flatten[List@@#& /@ sph2, 2]];
(* extracting the set of vertices *)
field2 = Graphics3D[vec[#, #+g[#]]& /@ verts2];
(* generating the list of headless arrows *)
Show[field2, Graphics3D[EdgeForm[], sph2], Boxed -> False]
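The heart of the construction (split each triangle at its edge midpoints, then push the midpoints back out to the unit sphere) carries over to other languages directly. Here is a Python sketch of one subdivision step; the function names are my own:

```python
import math

def normalize(p):
    """Project a point onto the unit sphere."""
    n = math.sqrt(sum(c * c for c in p))
    return tuple(c / n for c in p)

def subdivide(tri):
    """Split one spherical triangle into four, as ubary does above."""
    p, q, r = tri
    pq = normalize(tuple((a + b) / 2 for a, b in zip(p, q)))
    pr = normalize(tuple((a + b) / 2 for a, b in zip(p, r)))
    qr = normalize(tuple((a + b) / 2 for a, b in zip(q, r)))
    return [(p, pq, pr), (q, pq, qr), (r, pr, qr), (pr, pq, qr)]

tri = (normalize((1.0, 0.0, 0.0)),
       normalize((0.0, 1.0, 0.0)),
       normalize((0.0, 0.0, 1.0)))
print(len(subdivide(tri)))  # 4: each pass multiplies the face count by four
```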
Ratios & Proportions Examples Page 1
At a local book store, for every one hundred books sold, 68 of them are fiction. What is the proportion of nonfiction books sold?
Since 68 of every 100 sold are fiction, it's a safe bet that the remaining books are nonfiction. So, the proportion of nonfiction books sold is 32 out of 100, or 8/25.
The Ellipse
03-20-2013, 09:35 PM
The Ellipse
Just a quick and simple question. Let's say you had a rectangle with a moving ellipse inside. If the ellipse hits any of the walls you can get it to 'bounce' just by testing boundaries (if the
top of the ellipse comes into contact with the top of the rectangle, the ellipse with move in the opposite direction). That's simple enough. But what if the situation was a big ellipse with a
small ellipse inside? How do you check boundaries between two circular objects? I know Ellipse2D is declared with a few variables for x position, y position, width, and height, but there's
nothing said about circumference or any of that. Some help or guidance would be much appreciated. I hope this issue was stated clearly.
03-20-2013, 10:15 PM
Re: The Ellipse
This will get pretty "mathy", but on Stackoverflow they have an article that might help you on your way: java - How to detect overlapping circles and fill color accordingly? - Stack Overflow
I know it's not overlapping, but the method of distance and radii looks like a place to start.
03-20-2013, 10:23 PM
Re: The Ellipse
I don't think there is anything within the API that you can use to determine this directly. But mathematically, you have the center of both ellipses, so first find the distance between both
centers (call it l2). Next, find the points on the surface of each ellipse that lie on this line (you can use the ellipse equations and polar coordinates to do so). The distance between the
larger ellipse center and its surface point should be larger than l2 + 2x the distance between the smaller ellipse center and its surface point if the inner ellipse is inside.
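For the simpler warm-up case of two circles, the distance-and-radii idea reduces to a one-line test: the inner circle lies entirely inside the outer one exactly when the distance between centers plus the inner radius is at most the outer radius. A Python sketch (illustrative, not from the thread):

```python
import math

def circle_inside(outer_center, outer_r, inner_center, inner_r):
    """True if the inner circle lies entirely within the outer circle."""
    dx = inner_center[0] - outer_center[0]
    dy = inner_center[1] - outer_center[1]
    return math.hypot(dx, dy) + inner_r <= outer_r

print(circle_inside((0, 0), 10, (3, 0), 2))  # True: 3 + 2 <= 10
print(circle_inside((0, 0), 10, (9, 0), 2))  # False: 9 + 2 > 10, it pokes out
```

A "bounce" would then be triggered the moment this test fails. True ellipses are harder because the boundary distance varies with direction, which is what the rest of the thread discusses.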
03-20-2013, 10:47 PM
Re: The Ellipse
I was thinking about the general case. Consider the case where the smaller ellipse's major axis is just a bit shorter in length than the larger ellipses' minor axis. The ellipse centers don't
change relative to one another with respect to each of the individual ellipses rotational orientation. So as the smaller ellipse moves inside the larger one, depending on its orientation I
believe some more calculations will be required. Or am I missing something here?
03-20-2013, 10:54 PM
Re: The Ellipse
This will get pretty "mathy", but on Stackoverflow they have an article that might help you on your way: java - How to detect overlapping circles and fill color accordingly? - Stack Overflow
I know it's not overlapping, but the method of distance and radii looks like a place to start.
That's a very detailed article. I'll have to give that a very thorough read. Thanks for the guidance!
I don't think there is anything within the API that you can use to determine this directly. But mathematically, you have the center of both ellipses, so first find the distance between both
centers (call it l2). Next, find the points on the surface of each ellipse that lie on this line (you can use the ellipse equations and polar coordinates to do so). The distance between the
larger ellipse center and its surface point should be larger than l2 + 2x the distance between the smaller ellipse center and its surface point if the inner ellipse is inside.
That seems to be on the right track. I'll give this some consideration along with SurfMan's suggestion.
I was thinking about the general case. Consider the case where the smaller ellipse's major axis is just a bit shorter in length than the larger ellipses' minor axis. The ellipse centers don't
change relative to one another with respect to each of the individual ellipses rotational orientation. So as the smaller ellipse moves inside the larger one, depending on its orientation I
believe some more calculations will be required. Or am I missing something here?
It sounds like you're on the right track. I agree, a lot of mathematical calculations are going to have to come into play and I was seeing what you all would think about this. Thank you kindly for all the replies, everyone. Right at the moment there's no project I've started that has this problem, but I'll be dealing with cases like these very soon. I'll take this to heart!
Math Forum Discussions
Topic: the math in classify.m
Replies: 7 Last Post: Dec 3, 2012 7:34 PM
Re: the math in classify.m
Posted: Dec 3, 2012 7:34 PM
"Aniket" wrote in message <k9ja1j$1gg$1@newscl01ah.mathworks.com>...
> Hi
> This is a little old thread but I hope it's active!
> Even I am using classify.m to classify my data into two groups. To get a better insight I tried to understand the math involved in the 'linear' case of classify.m. However, I could not really make a whole lot of sense out of it.
> I referred the multivariate analysis book by W.J. Krzanowski and found one of the criteria for classification is the closeness of an observation to the mean of a class. Is this what is used in classify.m or is it different?
It is more general. The measure of closeness is the log of the Gaussian probability distribution, which contains the (squared) Mahalanobis distance and the log of the a priori probability.
> I understand my question is more statistics oriented than matlab based, but I hope to find some fundamental answers.
Hope this helps.
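To make that concrete, here is a small Python sketch (mine, not from the thread) of the 'linear' discriminant score: with a covariance matrix shared by all classes, each class scores -0.5 times the squared Mahalanobis distance to its mean, plus the log of its prior, and the observation goes to the class with the highest score:

```python
import math

def lda_score(x, mean, inv_cov, prior):
    """Linear-discriminant score: -0.5 * Mahalanobis^2 + log(prior)."""
    d = [xi - mi for xi, mi in zip(x, mean)]
    # squared Mahalanobis distance: d' * inv(Sigma) * d
    maha2 = sum(d[i] * inv_cov[i][j] * d[j]
                for i in range(len(d)) for j in range(len(d)))
    return -0.5 * maha2 + math.log(prior)

# Two classes, identity covariance, equal priors.
inv_cov = [[1.0, 0.0], [0.0, 1.0]]
s0 = lda_score([0.5, 0.2], [0.0, 0.0], inv_cov, 0.5)
s1 = lda_score([0.5, 0.2], [3.0, 3.0], inv_cov, 0.5)
print(s0 > s1)  # True: the observation is assigned to class 0
```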
Kendall's notation
In queueing theory, Kendall's notation (or sometimes Kendall notation) is the standard system used to describe and classify the queueing model that a queueing system corresponds to. First suggested by D. G. Kendall as a 3-factor notation system for characterising queues, it has since been extended to include up to 6 different factors. The notation now appears in most standard reference works about queueing theory.
A queue is described in shorthand notation by A/B/C/K/N/D or the more concise A/B/C. In this concise version, it is assumed K = ∞, N = ∞ and D = FIFO.
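As a quick illustration (not part of the original entry), here is a small Python sketch that splits a Kendall string into its six factors, applying the conventional defaults for the omitted ones; the function name and dictionary keys are my own choices:

```python
def parse_kendall(notation):
    """Split an A/B/C[/K/N/D] string into its factors.

    Omitted factors take the conventional defaults: K = inf, N = inf, D = FIFO.
    """
    defaults = [None, None, None, float('inf'), float('inf'), 'FIFO']
    parts = notation.split('/')
    if not 3 <= len(parts) <= 6:
        raise ValueError('expected between 3 and 6 factors')
    factors = parts + defaults[len(parts):]
    return dict(zip(['A', 'B', 'C', 'K', 'N', 'D'], factors))

print(parse_kendall('M/M/1'))
# {'A': 'M', 'B': 'M', 'C': '1', 'K': inf, 'N': inf, 'D': 'FIFO'}
```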
A: The arrival process
A code describing the arrival process. The codes used are:
│ Symbol │ Name │ Description │
│M │Markovian │Poisson process (or random) arrival process. │
│M^X │batch Markov │Poisson process with a random variable X for the number of arrivals at one time. │
│MAP │Markovian arrival process │Generalisation of the Poisson process. │
│BMAP │Batch Markovian arrival process │Generalisation of the MAP with multiple arrivals. │
│MMPP │Markov modulated Poisson process│Poisson process where arrivals are in "clusters". │
│D │Degenerate distribution │A deterministic or fixed inter-arrival time. │
│Ek │Erlang distribution │An Erlang distribution with k as the shape parameter. │
│G │General distribution │Although G usually refers to independent arrivals, some authors prefer to use GI to be explicit. │
│PH │Phase-type distribution │Some of the above distributions are special cases of the phase-type, often used in place of a general distribution.│
B: The service time distribution
This gives the distribution of time of the service of a customer. Some common notations are:
│Symbol│ Name │ Description │
│M │Markovian │Exponential service time. │
│D │Degenerate distribution│A deterministic or fixed service time. │
│Ek │Erlang distribution │An Erlang distribution with k as the shape parameter. │
│G │General distribution │Although G usually refers to independent service times, some authors prefer to use GI to be explicit. │
│PH │Phase-type distribution│Some of the above distributions are special cases of the phase-type, often used in place of a general distribution. │
C: The number of servers
The number of service channels (or servers).
K: The number of places in the system
The capacity of the system, or the maximum number of customers allowed in the system including those in service. When the number is at this maximum, further arrivals are turned away. If this number
is omitted, the capacity is assumed to be unlimited, or infinite.
Note: This is sometimes denoted C+k where k is the buffer size, the number of places in the queue above the number of servers C.
N: The calling population
The size of calling source. The size of the population from which the customers come. A small population will significantly affect the
effective arrival rate
, because, as more jobs queue up, there are fewer left available to arrive into the system. If this number is omitted, the population is assumed to be unlimited, or infinite.
D: The queue's discipline
The Service Discipline or Priority order that jobs in the queue, or waiting line, are served:
│ Symbol │ Name │ Description │
│FIFO/FCFS│First In First Out/First Come First Served│The customers are served in the order they arrived in. │
│LIFO/LCFS│Last in First Out/Last Come First Served │The customers are served in the reverse order to the order they arrived in. │
│SIRO │Service In Random Order │The customers are served in a random order with no regard to arrival order. │
│PNPN │Priority service │Priority service, including preemptive and non-preemptive. (see Priority queue) │
│PS │Processor Sharing │ │
Note: An alternative notation practice is to record the queue discipline before the population and system capacity, with or without enclosing parenthesis. This does not normally cause confusion
because the notation is different.
Logic - clarification needed about implication
Another reason for those definitions is so that logic "works" the way it should, for every combination of "true" and "false".
For example, "P implies Q" means the same (in ordinary English) as "if P is true, then Q is true", which means the same as "if Q is false, then P is false".
So the truth table for P→Q must be the same as for (not Q)→(not P). That means P→Q must be defined as true when P and Q are both false.
You can create a similar argument to show how P→Q must be defined when P is false and Q is true.
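The argument can also be checked mechanically. Here is a short Python sketch (not from the original thread) that brute-forces all four truth assignments to confirm that P→Q and (not Q)→(not P) have identical truth tables:

```python
def implies(p, q):
    # Material implication: false only when p is true and q is false.
    return (not p) or q

# Exhaustively check that P->Q and (not Q)->(not P) agree everywhere.
for p in (False, True):
    for q in (False, True):
        assert implies(p, q) == implies(not q, not p)

print(implies(False, False))  # True: both false makes the implication true
```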
Roger Bagula
Five Petaled with different colored leaves:
12*(E^(I Sqrt[5] \[Pi]) nz^2 (-36+nz^5) (-1+Abs[z]^2))/((-1+36 nz^5) (1+Abs[z]^2))
has an hyperbolic disk tree structure with pretty quad-ovals made from Blaschke disks:
6*(E^(I Sqrt[5] \[Pi]) nz^2 (-36+nz^4) (-1+Abs[z]^2))/((-1+36 nz^4) (1+Abs[z]^2))
But the Cassini ring is just isolated at the center
when I hoped to get better structures inside.
I got the disks to separate:
A kind of nice spiral small disk effect by moving the Blaschke disk in closer in the
Changing the Gauss function from a Julia to a Mandelbrot gives a whirlpool effect
( a pregnant Dragon?):
which seems to confirm the the larger disk is associated with the Gauss function.
Threefold symmetry in a cubic based
6*(E^(I Sqrt[5] \[Pi]) nz^2 (-36+nz^3) (-1+Abs[z]^2))/((-1+36 nz^3) (1+Abs[z]^2))
The article seems to be talking mostly about artists who draw realistic representations in a photographic-like way?
Another experiment:
By trying to make Blaschke type Gauss disks I got this
I added the term:
An uneven dipole like
I removed the balancing Gauss function and linked the two disks again with a slow down constant of 1.4.
Loren on the Art of MATLAB
Last week I touched on the difference between using 1:N and [1:N] as the expression in a for statement. I have gotten enough more questions recently on how for works in MATLAB that it seems time for
its own post.
"Typical Usage"
The most common use of for is when we want to do something a certain known number of times and we decide to not vectorize the code. In this context, we often know the total number of times, N and can
write code like this:
N = 3;
for ind = 1:N
    % do the work for iteration ind
end
for Expression Can Be a Matrix
The for loop uses each column of the expression as the temporary loop variable. When MATLAB was designed, Cleve tells me, he chose : to create row vectors since 1:N is a natural expression and he
expected : to be involved frequently in setting up loops.
From the early days of MATLAB, we could also use a matrix (and these days an N-dimensional array) as the loop expression. In these cases, MATLAB iterates on the columns (collapsing all dimensions >1
to be virtual columns).
A = reshape(1:6,3,2);
count = 0;
% display loop counter and transpose of loop expression
for ind = A
    count = count+1;
    disp([count, ind'])
end
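For readers coming from other languages, the same column-by-column behavior can be sketched in Python, where transposing the row-major matrix yields its columns (an illustrative analogue, not MATLAB code):

```python
A = [[1, 4], [2, 5], [3, 6]]  # the same 3x2 matrix, stored as rows

# zip(*A) transposes the rows, yielding one column per iteration.
for count, col in enumerate(zip(*A), start=1):
    print(count, *col)
# 1 1 2 3
# 2 4 5 6
```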
for Expression can go to Infinity!
For the benefit of those who didn't read last week's post, I repeat an interesting MATLAB tidbit. You can have your for loop go to Infinity in MATLAB if you don't insist on precalculating the vector
to iterate over.
for ind = 1:Inf
    if ind >= 2
        disp('Stopping now')
        break
    end
end
ind

Stopping now

ind =
     2
for Scope
The scope of the loop counter in MATLAB's for loop is not like that in other languages, as far as I know. You can reassign values to the counter inside the loop and yet the loop will, in certain ways, proceed
as if that reassignment hadn't happened. Let's look at some examples.
for ind = 1:3
end
ind

ind =
     3
In this case, we don't try any funny business and we see we can use the final value of ind past the end of the for loop. This was true as well for the case in which we had the loop counter go to Inf.
MATLAB will continue marching through the loop even if, inside, the loop variable gets disturbed, unless you do something like above and break out when a condition is met (or, of course, if there's
an error). Watch this example:
for ind = 1:3
    ind
    ind = 10
end
ind

ind =
     1
ind =
    10
ind =
     2
ind =
    10
ind =
     3
ind =
    10
ind =
    10
I reset ind inside the loop, and yet when the loop continues on its next pass, it reverts to using the loop expression to determine whether or not the for loop is complete.
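For comparison, Python's for loop behaves similarly in this respect: the loop is driven by the iterator, so reassigning the loop variable inside the body does not disturb the next pass. A small sketch (my own, not from the post):

```python
seen = []
for ind in range(1, 4):
    seen.append(ind)
    ind = 10  # reassigning the loop variable inside the body...

print(seen)  # [1, 2, 3]: ...does not disturb the iteration itself
print(ind)   # 10: but the last assignment is visible after the loop
```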
Uses of for
With the introduction of the JIT/Accelerator in MATLAB 6.5, for loops often do not exact a large performance penalty as they had in the past. I can think of situations in which using for makes lots
of sense, even though MATLAB is a vector-oriented language.
• code can't be vectorized
• takes too much memory if vectorized
• code with loop is clearer and more maintainable
for Gotcha
The main blunder people make using for loops is assigning output to an array that is not preallocated before the loop begins. This results in MATLAB constantly needing to reallocate memory for an
array that is typically one element larger each time through the loop. If you're not careful, this memory allocation time can overwhelm the time of the calculation. One pattern I have seen that
doesn't flagrantly preallocate the output (but does intrinsically) is when you have the loop go in reverse.
N = 7;
for ind = N:-1:1
    B(ind) = ind;
    if ind >= N-1
        B
    end
end

B =
     0     0     0     0     0     0     7
B =
     0     0     0     0     0     6     7
When, how, and why do you use for loops? Do you know of other perils to watch out for? Post your thoughts here.
Published with MATLAB® 7.2
18 Comments (Oldest to Newest)
An excellent discussion of the “for” loop. Encourage the writers of MatLab documentation to include examples that span this range in the discussion of “for”.
Regarding when to use “for,” vs vectorizing, one common place to use the “for” is when performing a mathematical operation on the elements of a pair of vectors, e.g.,
a(1) * b(1)
a(2) * b(2)
...
a(n) * b(n)
This could also be done with:
a .* b
Since the ".*" and its associated operators seem to be unique to MATLAB, many lay users won't be familiar with them. Thus, even though the "for" loop may be slower, it is preferred because it is easier
for many lay users to understand.
Writers of code frequently have to make style choices which may impact the readability and / or speed of the code. Generally, I find that it is better to choose readability over speed.
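The same contrast can be sketched in Python (an illustrative analogue; a list comprehension plays the role of the vectorized `.*`):

```python
a = [1, 2, 3]
b = [4, 5, 6]

# Explicit loop, the analogue of the element-by-element version above.
c = [0] * len(a)
for i in range(len(a)):
    c[i] = a[i] * b[i]

# One-line comprehension, playing the role of MATLAB's a .* b.
d = [x * y for x, y in zip(a, b)]

print(c == d, c)  # True [4, 10, 18]
```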
just to add a bit to the amazing handling of the scope of the loop-counter:
if you run this (slightly contrived) code a few times, either as a script or a function, w/o feature accel on|off, you will see how counters are re-stored into memory from a – yet unknown – place…
some miraculous stuff must be happening under the hood…
format debug;
clear i;
rex=@(str) regexp(str,...
'(?< = = )|(\\s)[-]*\\w+',...
for i=1:5
% let's peek at the addresses
disp(sprintf('+ outer %5d %9s %9s %6s',...
disp(sprintf('* outer %5d %9s %9s %6s',...
clear i; % !
for i=-500:-499
disp(sprintf('- inner %5d %9s %9s %6s',...
end % inner
end % outer
And this is the output:
% this is a typical output (note format may be off!)
+ outer 1 192df560 241bce8 1
* outer 10 192df560 241bce8 10 % = outer counter
- inner -500 192df560 241a988 -500
- inner -499 192df560 241a988 -499
+ outer 2 192df560 241a988 2 % = inner counter
* outer 10 192df560 241a988 10
- inner -500 192df560 2419298 -500
- inner -499 192df560 2419298 -499
+ outer 3 192df560 2419298 3
* outer 10 192df560 2419298 10
- inner -500 192df560 241bce8 -500
- inner -499 192df560 241bce8 -499
+ outer 4 192df560 241bce8 4
* outer 10 192df560 241bce8 10
- inner -500 192df560 241a988 -500
- inner -499 192df560 241a988 -499
+ outer 5 192df560 241a988 5
* outer 10 192df560 241a988 10
- inner -500 192df560 2419298 -500
- inner -499 192df560 2419298 -499
it really is AMAZING
May be it is slightly off topic, but since you have mentioned assigning array output in a loop, can you please explain which one of the following two array assignments should be preferred.
Case 1:
for ind = 1:N
    a = ……….;
    b(ind) = a;
end
Case 2:
b = [];
for ind = 1:N
    a = ……….;
    b = [b a];
end
I understand that pre-allocation is faster, but the vector length is not always known before the loop is executed (for example, the loop may be terminated with a conditional break statement).
Both methods have the same issue with respect to memory allocation. They will allocate each time through the loop. If you know that you will be adding at most one element each time, you can still
preallocate, and then just chop off the end afterwards, perhaps. That might be temporarily wasteful of memory, but will cause less thrashing, on average.
b = zeros(1,N);
for ind = 1:N
    b(ind) = something;
    % possibly break
end
b = b(1:ind);
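The same preallocate-then-trim pattern carries over to other languages; here is a Python sketch (the squaring and the stopping condition are made-up stand-ins for the real work):

```python
N = 100
b = [0] * N            # preallocate to the known maximum size
for ind in range(N):
    b[ind] = ind * ind
    if b[ind] > 50:    # a stand-in early-exit condition
        break
b = b[:ind + 1]        # chop off the unused tail afterwards

print(b)  # [0, 1, 4, 9, 16, 25, 36, 49, 64]
```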
for Structure fields:
(let me know if there is a shorter way to
do the same ?)
for k={‘n’ ‘money’}
%rank is an intensive value
I’m not sure how much you might need to generalize this. Ironically, you’d use one line less in this case if you do the formulae explicitly instead of the for loop:
but the runtime shouldn’t be significantly different (although not having to create and dereference cells for the for index might be a win here).
Hi Loren,
the first output of B should look like:
B =
     0     0     0     0     0     0     7
Thanks. I fixed it.
A for loop won’t actually go to infinity; on my computer it won’t go past 2.1475e+009. Take the following script:
for x = 1:inf
a = x;
for x = 1:a*10
b = x;
x = 1;
while x<a*10
x = x + 1;
c = x;
x = 1;
while x<inf
x = x + 1;
I let it run for a long period of time, and then did a CTRL-C. It was still running on the last while loop and the variable values where:
a = 2.1475e+009
b = 2.1475e+009
c = 2.1475e+010
x = 9.1499e+010
It seems as though there is some constraint on the number of iterations on a for loop, but not a while loop. I can’t explain exactly where the constraint comes from or why it’s there, but I certainly
can’t find a way to beat it.
Interestingly, I discovered this because I asserted to someone that, in MATLAB, there are some things that a while loop could do, but not a for loop. Of course, in most languages, both are equivalent
though optimized for different uses. Then someone reminded me of the ability to go to infinity. When I tested it though, it didn’t work.
Anyone know why?
Sorry, about the weird formatting of the script above. It should be:
for x = 1:inf
    a = x;
end

for x = 1:a*10
    b = x;
end

x = 1;
while x < a*10
    x = x + 1;
end
c = x;

x = 1;
while x < inf
    x = x + 1;
end
I get the same limit for FOR loops. While I’m no expert, I hypothesize that the difference occurs between For and While because of variable declaration sizes. Consider these two loops:
for x=0:inf
x = 0
while 1
x = x +1
The for loop terminates quite early with x equal to the 2.15e9 constant you speak of. However, the while loop can march on much past it.
My guesstimation is that MATLAB allocates all of the FOR loop values (the vector 1:inf) at once. The size of which reaches the MATLAB limit for number of elements in a vector. So I assume MATLAB,
truncates the size of the vector to this limitation.
Meanwhile, the while loop only deals with a 1×1 double and will only error out when it reaches the maximum value for a double.
Further proof of this is demonstrated when you try to create a vector 1:2.15e9. This fails because it exceeds the maximum number of elements in a vector. However, 1:2.15e8 does not (though it may crash your computer!). This indicates to me that MATLAB is approaching the limit quantity of elements in a vector when trying to run the 1:inf FOR loop. Thus MATLAB truncates it prematurely (rather than run the expected "infinite" for loop).
Whereas, the while loop can continue on until it overflows the double.
Hmm, very interesting. I think you’re mostly right, but slightly off. See Loren’s post from last week: http://blogs.mathworks.com/loren/2006/07/12/what-are-you-really-measuring/,
She explains the difference between 1:N and [1:N] in a for loop. In short, 1:N is expanded as needed, while [1:N] is expanded at start so that MATLAB can complete the called for concatenate
operation. For example:
for x = [1:inf]
will immediately produce an error because MATLAB can’t store an infinite length array. This:
for x = 1:inf
works though because MATLAB expands it as needed.
However, apparently MATLAB actually does save the entire vector 1:inf somewhere despite the fact that all of the previous elements will never be used. To verify this I ran:
[c, maxsize] = computer
which revealed:
maxsize = 2.1475e+009
which is exactly where my for loops stops.
This seems to confirm our suspicions. It also indicates that the for loop could be further optimized to throw out previous elements when given a vector that could be expanded as needed since there is
no way to retrieve the previous elements anyway.
I looked into this a little, and it seems to me that the for loop uses a signed integer as the index. This is why it couldn't go past 2.15e9 (about 2^31). In the current version of MATLAB, the limit for the loop is 2^63 (they've updated to 64-bit integers). It actually gives a warning when the loop index passes this value (after interrupting with Ctrl-C):
Warning: FOR loop index is too large. Truncating to 9223372036854775807.
This totally makes sense, of course. Someone needs to count to keep track of how many iterations have been done. “for ii=1:inf” is not the same as “while 1″!
Thank you for the informative help!
To extend this, how would you “redo” an iteration in a for loop? I have a particularly long for loop repeatedly asking questions to my user, and I want to be able to redo a number if necessary in the
middle of the loop.
for i = 1:100
    % When i = 50, I realize that I want to do 49 again (in the middle of the loop)
end
Any ideas? If there’s a way that I can do it without using for loops, that is also welcomed.
I can’t think of a way to do what you ask with a for loop. You could more easily do it with a while loop however, since in that case, you can adjust the control parameter for the while loop inside
the loop. It also depends on whether or not the calculations later in the loop depend on previous iterations. If they do not, then you might be able to vectorize the calculation, avoid the loop
totally, then check the conditions afterwards and just recalculate the few elements that need to be adjusted.
Nevermind! You can do it with a while loop and then a count variable that you increment (or decrement) based on your preferences.
count = 1;
while (endloop == 0)
    if (goforward == 1)
        count = count + 1;
    end
    if (backup == 1)
        count = count - 1;
    end
end
Hi Loren,
I have a question regarding the scope of for. You gave an example where you try to change the for loop variable:
for ind = 1:3
    ind = 10
end
and yet the for loop continues as if you haven't. Is there a way past this? I am checking something with an "if" inside the loop, and in some cases I would like to make the loop run once more, so I would like to do something like this: ind = ind - 1
but I would like this to have the desired effect of making to loop run with the same ind value again.
Thanks for your help!
You can create a secondary counter that you increment or not at the relevant places and make your decision based on that derived value. You might also consider the break or continue commands in MATLAB for more control over the loop.
Calculating Search Engine Visibility Percentage
March 9, 2011
Love it or hate it Excel is one of the most useful applications for SEO reporting. An essential function that we utilise it for at Bruce Clay Australia are for calculating “Search Engine Visibility
Percentage” for client and competitor websites.
Search Engine Visibility Percentage
Search Engine Visibility Percentage (SEV%) is a very simple calculation that you can perform to see what percentage of your keywords are ranking in the top 10, or 20 or any other position you’d like
to keep tabs on. We use SEV% as one mechanism for quickly visualising overall ranking movements. In its simplest form, it can be calculated like this:
Number of ranking keywords/Total number of keywords monitored
Unfortunately calculating SEV% like that doesn’t really give you any useful information. What would be more useful to know would be the SEV% for the number 1 rankings, number of rankings in the top 3
or number of rankings on the first page of Google as well as that of your competitors. Enter Excel and the “countifs” formula.
Calculating Simple SEV% for Top 10 Ranking Keywords
Let’s start out with one website to begin with. We extract our ranking data from our own SEO Tools; however, ranking data comes in many forms, so we’ll use a simple example to start. Here we have a
list of 20 keywords and their respective rankings in Google.com.au. This example and all the others are included in the spreadsheet linked to below, so if you understand the concepts of SEV% calculation
you may just want to skip to the chase. Excel workbooks (for 2003 and 2007) are included at the end of the post.
To calculate the Top 10 SEV% of this keyword list in Excel we need to count the number of rankings in the 2^nd column that are between 1 and 10 (inclusive) and divide by the total number of keywords
(20). Here’s how we do it:
Assuming your keywords are in Column A and your rankings are in Column B, add the following formula into Column D (you can put it anywhere you like, just not in Column B as you will create a circular reference):
=COUNTIFS($B:$B,">=1",$B:$B,"<=10")/20
The countifs formula will look at the whole of Column B and count the number of cells that have a number greater than or equal to 1 and less than or equal to 10, then it will divide that number by
20, which is the total number of keywords. The result of this formula is a SEV% of 55% – fifty-five percent of those keywords are in the top 10.
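The same top-10 calculation is easy to express in a few lines of Python. The sample rankings below are invented to reproduce the 55% figure (11 of 20 keywords in the top 10), with `None` standing for a keyword that doesn't rank at all:

```python
def sev_percent(rankings, top_n=10):
    """Share of monitored keywords ranking between position 1 and top_n inclusive."""
    in_bracket = sum(1 for r in rankings if r is not None and 1 <= r <= top_n)
    return in_bracket / len(rankings)

# Made-up rankings for 20 monitored keywords (None = not ranking at all).
rankings = [1, 3, 5, 12, 25, None, 2, 8, 9, 40,
            4, 7, 15, 1, 6, 11, 30, 33, 10, 50]

print(round(sev_percent(rankings, top_n=10) * 100))  # 55
```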
Getting more ranking brackets from the same data
We can easily change this formula to expand or reduce the SEV% brackets, here’s what top 3, 5, 20 and 30 would look like:
Top 3
=COUNTIFS($B:$B,">=1",$B:$B,"<=3")/20
Top 5
=COUNTIFS($B:$B,">=1",$B:$B,"<=5")/20
Top 20
=COUNTIFS($B:$B,">=1",$B:$B,"<=20")/20
Top 30
=COUNTIFS($B:$B,">=1",$B:$B,"<=30")/20
Top 1 is actually much easier, because we don’t have to look for numbers occurring between other values. Since there is only one condition to search for (numbers that equal 1), we can use a countif
formula instead of countifs.
Top 1
=COUNTIF($B:$B,1)/20
Put it all together and you get something like this. If you’ve downloaded the example spreadsheet, refer to the Simple SEV% tab.
Here it is again in formula view.
Nice, so there you go, you’ve calculated top 1, top 3, top 5, top 10, top 20 and top 30 search engine visibility percentages for your website. Now you can repeat the process for your competitors…or
do it all at once.
Repeating this process for your website and your competitor websites is a bit annoying, not to mention time consuming, especially if you need to do it on a regular basis so let’s delve a little
deeper by adding some more conditions to the data – dates and competitors.
Calculating Competitor SEV%
We’ll use the same data as before but I’m going to reduce the number to 10 keywords so I can illustrate the process in screenshots effectively. You can use as many as you want in your version, there
are no limits imposed by the formulas we’re going to use (only by Microsoft Excel).
Add a couple of columns in front of Column A so we can add dates and website names. We’ll also need to build another small table off to the right to calculate competitor’s SEV% as well as our own.
We’ll need to check the SEV% for ranking data for this month and last month, so add dates to that new table too.
We’re going to be pasting our ranking data and our competitor’s ranking data in the same columns, so we’ll need to make sure each value we paste in has an indication of who owns that ranking, that’s
why we have a website name column. Each ranking will also need a date associated with it so that we can calculate rankings from a specific website on specific date. Now let’s add some data:
I’ve colour coded the competitor data here so we can see the differences easily. Notice that the entries I have used in Column B (Website Name) are exactly the same as the headings used in Column F.
This is important as the formulas we’re going to use will require these to be the same. The date row also matters so make sure they match.
OK, time to add the formulas. Once again, Top 1 is easier to calculate so let’s start there. In cell G5, add the following formula to give you Top 1 SEV% for MySite:
=COUNTIFS($A:$A,G$4,$B:$B,$F$4,$D:$D,1)/COUNTIFS($A:$A,G$4,$B:$B,$F$4)
You’ll notice that this formula is a good deal more complex than what we have used previously. The reason for this is that now we are dealing with multiple dates as well as competitor data being mixed
with our own. To explain this formula a bit: it looks in Column A and counts the number of cells that match the date in cell G4 (there are 20), then looks in Column B and counts the number of cells
containing the text in cell F4 (this excludes the competitor values previously counted in the first part of the formula), next it looks in Column D and counts all cells with a value of 1.
The next part of the formula is basically the equivalent of the /20 in the simple SEV% calculations. In this example the number of keywords has been reduced to 10 per website, so we cannot divide by
20; we need to divide by 10. Since it’s logical to assume that the number of keywords we’re monitoring will change, we can avoid having to manually update the formula each time the number of
keywords increases or decreases by using this last half of the formula. So if we check 10 keywords this month and 20 keywords next month the SEV% calculation will be accurate, and if we need to drop a
keyword or two we can just delete them from the list and the formula will adjust.
Now let’s tackle top 3, which is a similar formula but with the added conditions of greater than or equal to 1 and less than or equal to 3.
Top 3
=COUNTIFS($A:$A,G$4,$B:$B,$F$4,$D:$D,">=1",$D:$D, "<=3")/COUNTIFS($A:$A,G$4,$B:$B,$F$4)
Now copy that formula down to your top 30 and modify the “<=3” to fit the ranking brackets we need:
Top 5
=COUNTIFS($A:$A,G$4,$B:$B,$F$4,$D:$D,">=1",$D:$D, "<=5")/COUNTIFS($A:$A,G$4,$B:$B,$F$4)
Top 10
=COUNTIFS($A:$A,G$4,$B:$B,$F$4,$D:$D,">=1",$D:$D, "<=10")/COUNTIFS($A:$A,G$4,$B:$B,$F$4)
Top 20
=COUNTIFS($A:$A,G$4,$B:$B,$F$4,$D:$D,">=1",$D:$D, "<=20")/COUNTIFS($A:$A,G$4,$B:$B,$F$4)
Top 30
=COUNTIFS($A:$A,G$4,$B:$B,$F$4,$D:$D,">=1",$D:$D, "<=30")/COUNTIFS($A:$A,G$4,$B:$B,$F$4)
Here’s what it will look like in formula view.
What we can do now is reuse these formulas for the competitor data. So, select the range from G5 to G10, copy, and paste it at G14. At this stage the data is still “MySite” data because we are
looking up on the cell $F$4. The quickest way to change this to competitor data is to find and replace $F$4 (MySite) with the cell where the competitor name is, $F$13.
Now that we have the correct data for the competitors, we can simply drag the top 1 to top 30 values to the right to get the next month’s SEV%.
And here’s the finished product.
And that’s about it, as long as you make sure your website names and dates match up like they should you can add as many keywords and competitors as you need to.
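For readers who would rather script this step than maintain the COUNTIFS table, here is a pure-Python equivalent of the date-and-website grouping. The rows and site names below are invented for illustration:

```python
from collections import defaultdict

# (date, site, rank) rows - a stand-in for the pasted ranking export.
rows = [
    ("2011-02", "MySite", 1), ("2011-02", "MySite", 4), ("2011-02", "MySite", 12),
    ("2011-02", "TheirSite", 2), ("2011-02", "TheirSite", 15),
]

def sev_table(rows, top_n):
    """SEV% per (date, site): the same ratio the COUNTIFS formulas compute."""
    hits, totals = defaultdict(int), defaultdict(int)
    for date, site, rank in rows:
        key = (date, site)
        totals[key] += 1
        if 1 <= rank <= top_n:
            hits[key] += 1
    return {key: hits[key] / totals[key] for key in totals}

for key, share in sorted(sev_table(rows, top_n=10).items()):
    print(key, f"{share:.0%}")
```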
Now that you have produced the raw SEV% data, you can use Excel’s charting functions to visualise and compare your website’s SEV% against your competitors’.
This is obviously just a start to how you can cut and visualise your ranking data, and it can be combined with other metrics to provide richer reports and additional insight into your website’s SEO
performance.
Search Engine Visibility Calculations – Excel Version 2003
One response to “Calculating Search Engine Visibility Percentage”
1. Siddharth writes:
Another good post Isriel!
@ May 20th, 2011 at 18:45
Generating random numbers with given properties
Author Generating random numbers with given properties
Hi all,
Not sure if this is an appropriate forum, but here goes...
I want to write a function that is something like this:
int getRandom (int idealNum, int max, int min)
the idea is that over time, calls to getRandom() should average idealNum, and be in the bounds (min, max). It would be fine to also pass in an extra parameter, if helpful, for example:
int getRandom (int idealNum, int averageSoFar, int max, int min)
The actual distribution of numbers isn't important.
Does anyone have a trick for this? It seems like it should be easy, but the bounds (in particular) make it more difficult.
On a side note, this function will be used for generating test data for load testing a database. I want to say stuff like "give each person a random number of kids between 0 and 5".
Usual randomizers will return numbers between LOW and HIGH and an average of (LOW+HIGH)/2.
This is known as an equal (uniform) distribution, in contrast to a normal distribution.
For instance the number of children in a culture may be 2.
And the minimum 0.
But the maximum will of course be greater than 4.
To describe a normal distribution, you don't only need the average, but the standard deviation too.
If you throw a dice, you will expect an equal distribution for 1, 2, ... 6.
But if you throw two dices, you get something like a normal distribution for the sum of those dices between 2, 3, ... 12, with an average of 7.
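The two-dice effect is easy to check empirically; a quick Python sketch (the thread's code is Java, but the idea is language-independent):

```python
import random
from collections import Counter

random.seed(0)  # reproducible demo
totals = Counter(random.randint(1, 6) + random.randint(1, 6)
                 for _ in range(100_000))

# One die is uniform; the sum of two piles up in the middle, around 7.
for value in range(2, 13):
    print(value, "#" * (totals[value] // 500))
```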
I like... Perhaps this may help in some way.
If you only need few distributions, like age, you could of course try something like this:
A generally bad idea is to give an averageSoFar.
In real life, if you throw a coin and get an 'eagle' 3 times - how probable is it that the next event will be a 'number'?
If it is a fair coin, it will be 50%.
There is no memory in the coin telling it what happened before, and the events are independent of each other.
There is no need to correct the past.
If the next 1000 throws lead to 500 eagles and 500 numbers, the sum will be 503 to 500, which is pretty close to the ideal value.
Now we take an integer-coin. One side is '0' and one side is '1'.
For a fair coin we expect an average of '0.5'.
If you throw the coin 1 time, and get a '0'.
Now you call your method
Your method will not know how often it will be called in the future, to 'correct' the randomness. If it will be called infinitely often, it can ignore the 'averageSoFar' value. So we need an
additional parameter:
if you now call it with commingCalls=1, shall it return 1?
And if you call it twice, shall it throw a 'AssertedCommingCallsViolatedException'?
Perhaps you will find a OpenSource randomizer- or statistic- library.
I don't know any.
But I'm sure you will not find something with 'avgSoFar'.
Hand Stefan,
Thanks for the detailed reply - that was great!
Joined: I ended up implementing it in the way you said I shouldn't - with an average so far. You're right about the standard distribution - I know the statistics aren't *too* complicated, but I
Mar 15, didn't want to spend too long on this one.
2004 Out of interest, the code I used was this. It basically gives me what I want, though it isn't "Monte Carlo" random.
539 And some example generated numbers...ideal 2, min 0, max 4:
Thanks again,
Hand Tim,
Are you looking for something where idealNum is not necessarily in the middle of min and max? If not, a simple solution (no need for idealNum) would be:
Mar 24, If idealNum is not the average of min and max, your extra parameter averageSoFar could be used as follows:
Posts: 83 so any time your average strayed too high, you'd get a number BELOW idealNum, and any time it went too low, you'd get a number ABOVE idealNum. Over many iterations you end up tending
toward idealNum as your average.
Both of these assume that min < idealNum < max. I'd put in a validation of that, maybe throw an InvalidArgumentException if the arguments don't make sense.... except that you said this was
just for generating test data...
Thanks for the diversion, hope this helps.
- Jon Egan
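Jon's steering idea can be sketched like this; illustrative Python, not the original snippets, and (as discussed later in the thread) it is steering rather than true randomness:

```python
import random

def get_random(ideal, avg_so_far, lo, hi):
    # If the running average has drifted above the ideal, draw at or below it;
    # otherwise draw at or above it. Over many calls the average homes in on `ideal`.
    if avg_so_far > ideal:
        return random.randint(lo, ideal)
    return random.randint(ideal, hi)

random.seed(1)
total = count = 0
for _ in range(20_000):
    x = get_random(2, total / count if count else 2, 0, 4)
    total += x
    count += 1

print(round(total / count, 1))  # hovers around the ideal of 2
```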
TOOOOOO funny This is what happens when I compose my post, get distracted, get a phone call, do other things, and come back and hit submit. DOH! :roll:
Joined: Turns out I had essentially the same algorithm.... my code, for your viewing pleasure:
Mar 24,
2004 -- Jon "Next time I'll reload the thread before submitting" Egan
Posts: 83
Hand Jon,
Thanks for the input...Yeah I was assuming min <= idealNum <= max...and yup I threw a RuntimeException...though InvalidArgumentException would be better.
Joined: I basically did what you did in your latter suggestion but complicated the solution for no good reason.
Mar 15, If avgSoFar < ideal, my 'solution' will allow for the possibility of this number also being < ideal, but it makes it no more than half as likely as the number being greater.
2004 This gives no advantage (since distribution is irrelevant) and your code is considerably simpler - I thought about doing it that way but ruled it out for a reason I've since forgotten.
Posts: Hmm...
539 Thanks again,
Haha, and this is what you do when you monitor forums too keenly - I posted my second reply between your first and second replies...
Mar 15, Hmm, sounds like we need exponentially increasing backoff times...I won't post again for a while ;-)
Posts: -Tim
Hand I don't have a solution, but I have new problems
I tested the code 'RandomRange' but modified it, to return ints as intended in the very first post.
Joined: Then I counted the occurences of the numbers.
Jun 02, I tested with numbers from 0 to 4 inclusive with an avg. of 2.
2003 The result is well equal distributed - so what is the improvement?
Posts: Only the 'randomness' is pretty lost. You don't get two Zeros or Fours in sequence.
1923 Ok - you may specify numbers from 0 to 9 with an avg of 2.
You will get an average of 2, but not randomness.
I like... You get values below 2 in a nearly equal distribution and values above 2 in a nearly equal distribution.
Is this what you want?
I would put a big fat warning to the code, that it isn't really creating randomness. Avoid usage in systems, which really need randomness (like simulating, testing).
If you only look for [012] with an average of 1, you get long sequences of '2 0 2 0' and '1 1 1 '. You will never get '2 2' or '0 0'.
Try to create a random function, which does not know the avgSoFar.
This is the enemy of all randomness!
Hand Not to pick nits, but
die=a single, 6-sided cube (often used in gambling and or/gaming)
Joined: dice=plural of die
Sep 29, so, you roll the dice. If you only roll one, you roll the die.
2003 http://www.yourdictionary.com/ahd/d/d0211000.html
299 Brian
My Java Freeware:<br />MACCC - <a href="http://maccc.pipasoft.com" target="_blank" rel="nofollow">http://maccc.pipasoft.com</a><br />Nator - <a href="http://nator.pipasoft.com" target=
"_blank" rel="nofollow">http://nator.pipasoft.com</a>
Hand Stefan,
You're absolutely right on all that. I had a warning, as you say...I beefed it up on your recommendation.
Joined: The numbers my code generates are also fairly dodgy. In the situation with range (1, 10), "ideal average" 2, the sequences goes something like
Mar 15, 5, 1, 1, 1, 1, 1, 4, 1, 1, 1, 1 etc...
2004 That is, 'n' where n > 2 followed by n 1s.
Posts: It still is good enough for what I want - as long as the "average" is right, the distribution and randomness properties are essentially irrelevant. Of course, in 99% of cases this wouldn't
539 be the case, so my code shouldn't be used.
I guess in my original post I was hoping someone else would have an easy solution that wasn't susceptible to these issues...I had a suspicion any solution I wrote would be adequate but
only just. The problem just isn't important enough to warrant it.
Still, in general I guess it would be useful to have a library that generates random numbers according to some given properties / distribution (binomial, normal, others?!)... a pet project
for some statistician out there maybe
Thanks for all your thoughts - it has been interesting!
Jun 02, Yes. To me too.
Alea jacta sunt. (The dice had been thrown? Thanks to Papa Brien. http://dict.leo.org/ - I see you're right.)
Posts: [ March 31, 2004: Message edited by: Stefan Wagner ]
I like...
Actually it's "Alea acta est." But what if you work with just Math.random and if/else statements (or "for" loops) for range of numbers 1-6.
Feb 24, This compiles, but when run, sometimes as a result I get... nothing! How's that? How would you randomly select 1-6 without anything else but this code (strings not allowed tambien)
2004 Muchas gracias
Posts: 85
Hand alea acta est, alea iacta est, alea jacta est?
However - singular form.
Joined: alea acta sunt, alea iacta sunt, alea jacta sunt.
Jun 02, Plural form.
2003 I learned latin only from asterix, a french comic.
Posts: Since i learned java much deeper I will concentrate on this.
I like... This compiles, but when run, sometimes as a result I get... nothing!
No. It doesn't compile.
Codeblocks don't make much sense, if you don't indent.
Indent with tabs in an editor, save, compile, test, and cut'n'paste.
You miss an opening brace in the very first line, and a semicolon at 'first-=limit'.
Since you only print, if not (first>limit), you often get no output.
And your comment says 'choosing from 1-10'.
It doesn't.
It's choosing from 0-9.
Actually it's you who has to get up much earlier to tell me about errors.
And: I fear, you missed the whole intention of the thread.
veni, vidi, vici
(edited for typo)
[ April 01, 2004: Message edited by: Stefan Wagner ]
Hand O gustas non disputanum- indentation or not...
About the code that I gave, it is true that I missed some important parts as I didn't use copy/paste operation but re-typed it on the run... I apologize for the confusion... As Stefan
Joined: pointed out, the problem was in the if-else statement and the number by which Math.random was multiplied... ("6" instead of "10" adding 1 to the product).
Feb 24, However, I think you cannot put a semicolon after closing parentheses like Stefan pointed out:
2004 "and a semicolon at 'first-=limit'."
Posts: 85 Thread is about randomization, right?
[ April 01, 2004: Message edited by: Gjorgi Var ]
Joined: I didn't mention 'after parantheses'.
Jun 02,
2003 And yes - about randomization, - but:
with special interest in an avg not necessarily being in the center of [min - max],
and not with equal distribution, but normal distribution.
I like...
So I found a solution for a real normal distribution of whole numbers from 0 to MAX (inclusive).
For numbers [MIN MAX] | MIN != 0 it needs a fix.
Joined: And it doesn't work for asymetric distributions.
Jun 02, Here is wonderful output for 23 numbers from 0-22,
2003 by 1 000 000 generated numbers:
Occurences of i = 0:1
Occurences of i = 1:4
Occurences of i = 2:47
Occurences of i = 3:377
Occurences of i = 4:1811
Occurences of i = 5:6295
Occurences of i = 6:18062
Occurences of i = 7:40995
Occurences of i = 8:75955
Occurences of i = 9:118417
Occurences of i = 10:154873
Occurences of i = 11:167258
Occurences of i = 12:154351
Occurences of i = 13:118470
Occurences of i = 14:75863
Occurences of i = 15:41055
Occurences of i = 16:17755
Occurences of i = 17:6283
Occurences of i = 18:1701
Occurences of i = 19:375
Occurences of i = 20:46
Occurences of i = 21:6
Occurences of i = 22:0
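A distribution with this shape comes from summing MAX fair coin flips; here is an illustrative Python reconstruction of that construction (not the original Java, which is not shown in the thread):

```python
import random
from collections import Counter

# Summing MAX fair coin flips gives a binomial(MAX, 1/2) - the bell shape above.
# (100 000 draws here to keep the demo quick; the thread used 1 000 000.)
random.seed(0)
MAX = 22
counts = Counter(sum(random.getrandbits(1) for _ in range(MAX))
                 for _ in range(100_000))

for i in range(MAX + 1):
    print(f"Occurrences of i = {i}: {counts[i]}")
```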
Sheriff [Stefan]:
Usual randomizers will return numbers between LOW and HIGH and an average of (LOW+HIGH)/2
Joined: This is known as equal distributions, in contrast to normal distribution.
Jan 30, For instance the number of children in a culture may be 2.
2000 And the minimum 0.
Posts: But the maximum will of course be greater than 4.
Mmmm... this seems a poor example of a normal (Gaussian) distribution, given that it's inherently asymmetric (much like the original problem statement in this thread). A true normal
distribution is symmetric about the mean, and extends to infinity in either direction. Also it would cover all real numbers, not just integers. In practice of course we're often dealing
with a discrete approximation to a normal distribution, and we needn't usually worry about it extending all the way to positive and negative infinity since the probability becomes
negligible (though still nonzero) at some point.
The example of the distribution of the number of children per household might better fit a Poisson distribution. At least, it would if the chance of children remained constant during a
couple's lifetime, and there weren't any additional social or economic forces at work. But that's another topic...
[Stefan]: So I found a solution for a real normaldistribution of whole numbers from 0 to MAX (inclusive).
Actually this is a binomial distribution with p = .5 (making it symmetric). For large N (or rather, large MAX) it will approach a normal distribution.
It's possible to take this same approach, but skew the probabilities so as to skew the mean:
This again is a binomial distribution, but not (necessarily) symmetric as p can be anything from 0 to 1. It should satisfy all Tim's requirements, with a nice bell-shaped distribution (as
much as possible, depending how skewed you make it). The time to execute is proportional to (max - min), so for large ranges this may be too slow. In this case you can probably approximate
using a normal distribution instead, something like:
[ April 04, 2004: Message edited by: Jim Yingst ]
"I'm not back." - Bill Harding, Twister
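The two methods described above — the exact binomial draw and the normal approximation — can be sketched as follows. This is illustrative Python, not the thread's Java; note the standard deviation used is the square root of the binomial variance n·p·(1−p):

```python
import math
import random

def next_binomial_int(lo, hi, mean, rng=random):
    """Integer in [lo, hi] whose long-run average is `mean`: count n = hi - lo
    biased coin flips with p = (mean - lo) / n. Time is proportional to n."""
    n = hi - lo
    p = (mean - lo) / n
    return lo + sum(1 for _ in range(n) if rng.random() < p)

def next_gaussian_int(lo, hi, mean, rng=random):
    """Normal approximation for large ranges; redraw anything out of bounds."""
    n = hi - lo
    p = (mean - lo) / n
    sd = math.sqrt(n * p * (1 - p))  # sqrt of the binomial variance
    while True:
        x = round(rng.gauss(mean, sd))
        if lo <= x <= hi:
            return x

random.seed(3)
samples = [next_binomial_int(0, 20, 5) for _ in range(20_000)]
print(min(samples), max(samples), round(sum(samples) / len(samples), 1))
```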
Hand Thanks Jim.
Of course it was a big mistake to mix the question of equal- and normal-distribution with symmetric and asymmetric distribution, as I could have avoided it by more concentration.
My learning of statistics is too long ago for remembering the differences between Gaussian and binomial distribution, so thanks for making this a bit clearer and giving the links.
Jun 02, I will play around with the code a bit and keep it for the case I need it - you don't claim copyright on it?
2003 Can I ask, whether you took it from a repertoire, or developed it just in time?
I thought about time of execution of my code too, and had the idea, for large intervals, to only generate one random number and iterate over its bits, deciding to increment or decrement
my value.
But that would lead to less readable code and get a bit complex for widths > 64.
I like... Thanks again for your valuable post.
Joined: Can I ask, whether you took it from a repertoire, or developed it just in time?
Jan 30, I just wrote it. For the randomGaussianInt() method, I got the formula sd = n * p * (1 - p) from the page on binomial distributions which I cited; I didn't remember what the formula was. I
2000 didn't test either method though, so you might not want to trust them too much yet.
Jun 02, Yes.
Since the thread-opener stopped writing in this thread, and since I'm only interested in this thread by - well - interest, it's not necessary to make it work, but a hint in this thread might
help the millions of silent readers googling for 'random numbers' or reading this forum every day.
I like...
Hand Rest assured that the thread-opener is still reading the thread with interest. :-)
Unfortunately the project I'm working on has just reached the (very) pointy end and so I haven't played with the code that's been posted ... yet! The code I wrote very early on ended up
Joined: sufficing; time constraints prevented me from improving it.
Mar 15, Still, the discussion has been useful and interesting to me, and hopefully others out there.
2004 Cheers,
539 --Tim
Joined: Mmmm, I see I wrote "n = max - mean" when I meant "n = max - min", in both methods. Also there's no reason the mean needs to be an int. Here are corrected versions:
Jan 30,
2000 My results for nextGaussianInt() still seem a little off, maybe, but they're much better now. The nextBinomialInt() method is pretty solid now though.
Hand Yes - a little bit off.
Mean 10, min 0, max 20, 10000 executions:
I get very different results when only creating one java.util.Random object and reusing it, calling 'nextGaussian'. (There is a hint about 'nextGaussian' behaving differently every second
call: each call generates two gaussians, and the second value is returned by the next call. Creating new objects all the time gives you a 'first' value every time. This might
have led to the suspicious results above.) The new values look much better, if you ignore the '0' and '20' values:
I thought this may be an effect of:
I like...
and changed it to:
calling the method recursively, and the output looks OK for '20:' and not that bad for '0:', but still wrong for '0:', since it's bigger than the value for '1' (tested for calls = 1000000).
(I kept everything except p and sd as integers).
Shall I post my testprogram?
[ April 08, 2004: Message edited by: Stefan Wagner ]
subject: Generating random numbers with given properties
Roots of cubic equation
October 10th 2009, 05:25 PM
Kindly help me with this roots question...please
If the roots of the equation 4x³ + 7x² - 5x - 1 = 0 are α, β and γ, find the equation whose roots are
i. α + 1, β + 1 and γ + 1
ii. α², β², γ²
October 11th 2009, 01:05 AM
We have $x_1=\alpha, \ x_2=\beta, \ x_3=\gamma$. Then
1) $y_1=x_1+1, \ y_2=x_2+1, \ y_3=x_3+1$
In this case the easiest way is to let $y=x+1$. Then $x=y-1$ and replace x in the equation:
Now continue.
2) Let $S_1=y_1+y_2+y_3=x_1^2+x_2^2+x_3^2=(x_1+x_2+x_3)^2-2(x_1x_2+x_1x_3+x_2x_3)$
$S_2=y_1y_2+y_1y_3+y_2y_3=x_1^2x_2^2+x_1^2x_3^2+x_2^2x_3^2=(x_1x_2+x_1x_3+x_2x_3)^2-2x_1x_2x_3(x_1+x_2+x_3)$
and $S_3=y_1y_2y_3=(x_1x_2x_3)^2$.
Then the equation is $y^3-S_1y^2+S_2y-S_3=0$
October 11th 2009, 04:18 AM
We have $x_1=\alpha, \ x_2=\beta, \ x_3=\gamma$. Then
1) $y_1=x_1+1, \ y_2=x_2+1, \ y_3=x_3+1$
In this case the easiest way is to let $y=x+1$. Then $x=y-1$ and replace x in the equation:
Now continue.
2) Let $S_1=y_1+y_2+y_3=x_1^2+x_2^2+x_3^2=(x_1+x_2+x_3)^2-2(x_1x_2+x_1x_3+x_2x_3)$
$S_2=y_1y_2+y_1y_3+y_2y_3=x_1^2x_2^2+x_1^2x_3^2+x_2^2x_3^2=(x_1x_2+x_1x_3+x_2x_3)^2-2x_1x_2x_3(x_1+x_2+x_3)$
and $S_3=y_1y_2y_3=(x_1x_2x_3)^2$.
Then the equation is $y^3-S_1y^2+S_2y-S_3=0$
I'm having trouble with similar problems. How do you get:
October 11th 2009, 05:35 AM
If $x_1, \ x_2, \ x_3$ are the roots of the equation $ax^3+bx^2+cx+d=0$ then
$x_1+x_2+x_3=-\frac{b}{a}, \qquad x_1x_2+x_1x_3+x_2x_3=\frac{c}{a}, \qquad x_1x_2x_3=-\frac{d}{a}$
These are Vieta's relations.
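Both parts of the original exercise can be checked mechanically with standard-library Python; the `taylor_shift` helper is an illustrative name, not from the thread:

```python
from fractions import Fraction as F
from math import comb

def taylor_shift(coeffs, h):
    """Coefficients (highest degree first) of p(y + h), given those of p(x)."""
    n = len(coeffs) - 1
    out = [F(0)] * (n + 1)
    for i, c in enumerate(coeffs):      # c multiplies x**(n - i)
        d = n - i
        for k in range(d + 1):          # expand (y + h)**d binomially
            out[n - k] += F(c) * comb(d, k) * F(h) ** (d - k)
    return out

p = [4, 7, -5, -1]                      # 4x^3 + 7x^2 - 5x - 1

# (i) roots shifted by +1 means substituting x = y - 1:
print([int(c) for c in taylor_shift(p, -1)])  # [4, -5, -7, 7], i.e. 4y^3 - 5y^2 - 7y + 7 = 0

# (ii) squared roots via Vieta: for the original, e1 = -7/4, e2 = -5/4, e3 = 1/4.
e1, e2, e3 = F(-7, 4), F(-5, 4), F(1, 4)
S1 = e1 * e1 - 2 * e2                   # sum of the squared roots
S2 = e2 * e2 - 2 * e3 * e1              # pairwise products of the squared roots
S3 = e3 * e3                            # product of the squared roots
print(S1, S2, S3)                       # 89/16 39/16 1/16 -> 16y^3 - 89y^2 + 39y - 1 = 0
```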
Math Forum Discussions - mathedu
Discussion: mathedu
An unmoderated distribution list discussing teaching and learning of post-calculus mathematics. For more information, consult the documentation about the mathedu.
Department of Mathematics, U.Va.
Nicholas Kuhn
Algebraic Topology
Localization of Andre-Quillen-Goodwillie towers, and the periodic homology of infinite loopspaces, Advances in Math. 201 (2006), 318-378.
Primitives and central detection numbers in group cohomology, Advances in Math. 216 (2007), 387-442.
Research Interests
My research is centered around algebraic topology and homotopy theory. Over the years, my research interests have broadened to include algebraic K-theory and group representation theory. My work in
topology in recent years has concerned the development of a character theory for complex oriented cohomology theories, the stable homotopy groups of spheres, the foundations of H-space theory,
iterated loopspace theory, topological realization questions, and the application of Goodwillie polynomial functor theory to classical homotopy. My algebraic work in recent years has been on the
topics of modern Steenrod algebra technology over all finite fields, generic representation theory of the finite general linear groups, rational cohomology, and homological stability questions.
• Bachelor of Arts (BA), Princeton University
• Master of Science (MS), University of Chicago
• Doctor of Philosophy (PhD), University of Chicago
Research Projects
• FRG: Collaborative Research: The Calculus of Functors & the Theory of Operads
□ Project sponsored by U.S. NSF - Directorate Math. & Physical Sciences
□ 06/01/2010 - 05/31/2013
absolute value question..
November 10th 2009, 05:13 AM #1
MHF Contributor
Nov 2008
Why do they remove the absolute value?
And the norm by definition equals the square root of the inner product integral -
why do they put a second power on both of them?
What exactly is your question? I looked at the .gif and everything seemed to be correct (except for the last part with the $\neq$).
They must have forgotten to write it, but it should be there.
And the norm by definition equals the square root of the inner product integral
this is only true when you actually have an inner product defined, and it's not hard to see that in $L^1( \Omega )$ or $C^0(\Omega )$ the norm you are given does not come from an inner product
(actually, precisely because the norm doesn't satisfy the parallelogram law).
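A concrete instance of the parallelogram-law failure (a sketch, not from the thread): take $f$ and $g$ to be the characteristic functions of $[0,1/2]$ and $[1/2,1]$ in $L^1([0,1])$. Then

```latex
\|f\|_1 = \|g\|_1 = \tfrac{1}{2}, \qquad \|f+g\|_1 = \|f-g\|_1 = 1,
\quad\text{so}\quad
\|f+g\|_1^2 + \|f-g\|_1^2 = 2 \neq 1 = 2\bigl(\|f\|_1^2 + \|g\|_1^2\bigr),
```

and no inner product can induce $\|\cdot\|_1$.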
Is every finite-dimensional Lie algebra the Lie algebra of an algebraic group?
Harold Williams, Pablo Solis, and I were chatting and the following question came up.
In Lie group land (where you're doing differential geometry), given a finite-dimensional Lie algebra g, you can find a faithful representation g → End(V) by Ado's theorem. Then you can take the group
generated by the exponentiation of the image to get a Lie group G⊆GL(V) whose Lie algebra is g. I think this is correct, but please do tell me if there's a mistake.
This argument relies on the exponential map, which we don't have in the algebraic setting. Is there some other argument to show that any finite-dimensional Lie algebra g is the Lie algebra of
algebraic group (a closed subgroup of GL(V) cut out by polynomials)?
lie-algebras lie-groups algebraic-groups
A very nice fact over fields $k$ of char. 0: for any linear algebraic $k$-group G and Lie $k$-subalgebra h in g = Lie(G), [h,h] = Lie(G') for a (unique) connected closed $k$-subgroup $G'$ in $G$.
2 In particular, if h is a semisimple Lie $k$-subalgebra of g (so h = [h,h]) then it is the Lie algebra of a connected closed $k$-subgroup of $G$. See 7.9 in Borel's book on linear algebraic groups
(and 7.7 for a necessary and sufficient condition in general, in char. 0). So as always, it's the commutative/solvable stuff that creates all the headaches. – BCnrd May 5 '10 at 16:47
A Lie subalgebra of gl(n,k) which is the Lie algebra of an algebraic subgroup of GL(n,k) is called an algebraic subalgebra. Apparently there are Lie subalgebras which are not algebraic, even in characteristic zero. If g is the Lie algebra of an affine algebraic group then it must be ad-algebraic, i.e. its image in End(g) under the adjoint representation must be an algebraic subalgebra. An example of a non-ad-algebraic Lie algebra is given on p. 385 of Lie Algebras and Algebraic Groups, by Tauvel and Yu.
If a Lie subalgebra of gl(V) is the Lie algebra of an algebraic group, then it contains the semisimple and nilpotent factors of any element. There is a five-dimensional Lie algebra for which this fails, which you can find in Bourbaki (or on p. 153 of my notes, www.jmilne.org/math/CourseNotes/ala.html).
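To make the semisimple/nilpotent criterion concrete, here is the standard textbook example (my own addition, not part of any answer in this thread): the line through a Jordan block is not an algebraic subalgebra of gl_2, even though abstractly, as Victor Protsak points out further down, every abelian Lie algebra is the Lie algebra of some algebraic group.

```latex
% Requires amsmath. A one-dimensional subalgebra of gl_2 that is not
% algebraic, because it fails to contain the semisimple and nilpotent
% parts of its elements:
\[
X = \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix}
  = \underbrace{\begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}}_{X_s}
  + \underbrace{\begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}}_{X_n},
\qquad
\mathfrak{h} = k\,X \subset \mathfrak{gl}_2 .
\]
% Any algebraic subalgebra containing X must contain X_s and X_n, but
% neither lies on the line kX; the smallest algebraic subgroup whose Lie
% algebra contains X is two-dimensional (scalars times unipotents).
```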
Sorry to tune in so late to this conversation, but I think it's worth pointing out some of the paper trail on the question (supplementing BCnrd's comment). This goes back to Chevalley's
initial work in the 1950s, especially in volume II (in French) of his projected six volume work Theorie des groupes de Lie. The first volume (in English) was published by Princeton Press,
then II and III followed but no more; his 1956-58 Paris seminar changed the whole approach to linear algebraic groups and largely ignored the Lie algebras. In Section 14 of II, working over
an arbitrary field of characteristic 0, Chevalley asks which Lie subalgebras $\mathfrak{g} \subset \mathfrak{gl}(V)$ (with $\dim V < \infty$) can be Lie algebras of closed subgroups of the
general linear group. He worked out a number of nice features of the unique smallest algebraic subalgebra containing $\mathfrak{g}$: it has the same derived algebra as $\mathfrak{g}$, for
instance. In fact, the derived algebra of any $\mathfrak{g}$ is algebraic.
Some of these ideas were written down by Borel (Section 7) and by me (Chap. V) in our Springer graduate texts, working over an algebraically closed field of characteristic 0 (my treatment came from the earlier Bass/Borel notes). These sources include further references to papers by Hochschild and others, along with the more scheme-theoretic treatment in the 1970 book by Demazure and Gabriel: II, Section 6, no. 2. They assume $k$ is a field and $\mathfrak{G}$ is a "$k$-groupe localement algébrique" with an appropriate Lie algebra attached, then study possible algebraic subalgebras.
In prime characteristic the notion of algebraic Lie algebra becomes much more problematic. See Seligman's 1965 book Modular Lie Algebras, VI.2, for some discussion. For example, the Lie
algebra $\mathfrak{sl}(p,k)$ is usually simple modulo its one-dimensional center, but the quotient algebra can't be algebraic since then it would be the Lie algebra of a known simple
algebraic group. Even more extreme are the extra simple Lie algebras of "Cartan type" (and others for small primes), for which there are no corresponding groups.
I recommend http://eom.springer.de/l/l058380.htm .
My suspicion is yes, at least over C, and that the thing to do is take the Zariski closure in GL(V) of the exponentials of the Lie algebra elements. Of course, over random fields, one doesn't have this trick.
Might a trick like looking at the subgroup of GL(V) fixing all invariant polynomials for the Lie algebra work?
I'll have to think about it more, but I really like the trick you're proposing. – Anton Geraschenko Oct 10 '09 at 20:12
There is a very interesting way in which this statement can fail. If K is a number field and G is an algebraic group over K having good reduction away from N, then a Lie subalgebra h of Lie(G) is going to be algebraic if and only if the reduction mod P, for primes P not dividing N, is closed under p-th powers.
For example, take G = G_m x G_m over a number field K, and consider the subalgebra of Lie(G) defined by the graph of multiplication by a in K. Then this is going to be algebraic if and only if a is in Q.
The above result is proven in Bost's paper "Algebraic Leaves of Algebraic Foliations over Number Fields".
This isn't right: any abelian Lie algebra is surely algebraic! – Victor Protsak May 5 '10 at 7:47
Yes, but it won't be of the form Lie(H) for a K-algebraic subgroup H of G unless it satisfies the criterion. – Nicolás May 5 '10 at 8:35
The word some (algebraic group) in the formulation is crucial. – Victor Protsak May 6 '10 at 0:19
Let ${\mathbb R}$ act on ${\mathbb R}^2$ by rotation at unit speed, and on a second ${\mathbb R}^2$ by rotation at some irrational speed. Let $G$ be the (solvable) semidirect product ${\mathbb R} \ltimes {\mathbb R}^4$. Then the coadjoint orbits of $G$ may not be locally closed. I don't think that should happen in the algebraic case.
Hmmmm.......why not? – Dr Shello Jun 30 '11 at 3:06
The orbit through $x\in X$ is the image of the composite map $G \to G \times \{x\} \to G \times X \to X$. Images of algebraic maps are constructible sets. Constructible sets (are nasty but) have an open set on which they are locally closed. So $G\cdot x$ has an open dense set on which it's locally closed. And $G\cdot x$ is homogeneous, so it's locally closed everywhere. – Allen Knutson Jul 4 '11 at 15:12
Over a field that's not C (at least in non-zero characteristic), the problem is deeper than Ben suggests: the proof of Ado's theorem that I know requires characteristic zero. I think if the theorem were true in non-zero characteristic Mark Haiman would have said so; he seemed to suggest in his class that it was not.
Incidentally, you don't really need Ado's theorem, which includes data about the action of the nilpotency ideal, just to find a faithful action. Levi's theorem splits any Lie algebra as semisimple semi-direct solvable, and this is enough to find a faithful finite-dimensional representation.
Also, even with Ado's theorem, there's a warning. The Zariski closure, and indeed even the analytic closure, of the image of the exponential might have higher dimension, e.g. the irrational line in the torus.
Wikipedia (en.wikipedia.org/wiki/Ado%27s_theorem) says Ado is true in all characteristics. – Ben Webster♦ Oct 7 '09 at 21:32
same link as Ben posted: en.wikipedia.org/wiki/Ado%27s_theorem – Anton Geraschenko Oct 8 '09 at 5:14
Ah, great. Except it wasn't Ado who proved it, but Iwasawa (1948) and Harish-Chandra (1949), according to Wikipedia. We skipped these theorems in Mark's class, which is why I don't know them. I wonder why Mark didn't mention them... – Theo Johnson-Freyd Oct 8 '09 at 6:07
Well, so, a glance at the paper shows that Harish-Chandra considers only the case when the characteristic is 0, but emphasizes that his proof is entirely algebraic. Iwasawa's paper isn't available online via MathSciNet. – Theo Johnson-Freyd Oct 8 '09 at 6:24
[Numpy-discussion] reduce array by computing min/max every n samples
Warren Weckesser warren.weckesser@enthought....
Sat Jun 19 15:45:23 CDT 2010
Benjamin Root wrote:
> Brad, I think you are doing it the right way, but I think what is
> happening is that the reshape() call on the sliced array is forcing a
> copy to be made first. The fact that the copy has to be made twice
> just worsens the issue. I would save a copy of the reshape result (it
> is usually a view of the original data, unless a copy is forced), and
> then perform a min/max call on that with the appropriate axis.
> On that note, would it be a bad idea to have a function that returns a
> min/max tuple?
+1. More than once I've wanted exactly such a function.
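For the original question, Ben's suggestion amounts to something like this (a sketch; the names are mine):

```python
import numpy as np

def chunked_minmax(a, n):
    """Per-chunk (min, max) of 1-D array `a`, chunk length n."""
    offset = a.size % n                   # drop leading samples, as in
    chunks = a[offset:].reshape(-1, n)    # Brad's code; reshape once
    return chunks.min(axis=1), chunks.max(axis=1)   # reuse for both stats

a = np.array([3, 1, 4, 1, 5, 9, 2, 6, 5, 3])
mins, maxs = chunked_minmax(a, 5)
print(mins, maxs)   # [1 2] [5 9]
```

The point is that the slice-and-reshape (and whatever copy the slice forces) happens once and is reused for both reductions, instead of being repeated for min and for max.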
> Performing two iterations to gather the min and the max information
> versus a single iteration to gather both at the same time would be
> useful. I should note that there is a numpy.ptp() function that
> returns the difference between the min and the max, but I don't see
> anything that returns the actual values.
> Ben Root
> On Thu, Jun 17, 2010 at 4:50 PM, Brad Buran <bburan@cns.nyu.edu
> <mailto:bburan@cns.nyu.edu>> wrote:
> I have a 1D array with >100k samples that I would like to reduce by
> computing the min/max of each "chunk" of n samples. Right now, my
> code is as follows:
> n = 100
> offset = array.size % n
> array_min = array[offset:].reshape((-1, n)).min(-1)
> array_max = array[offset:].reshape((-1, n)).max(-1)
> However, this appears to be running pretty slowly. The array is data
> streamed in real-time from external hardware devices and I need to
> downsample this and compute the min/max for plotting. I'd like to
> speed this up so that I can plot updates to the data as quickly as new
> data comes in.
> Are there recommendations for faster ways to perform the downsampling?
> Thanks,
> Brad
> _______________________________________________
> NumPy-Discussion mailing list
> NumPy-Discussion@scipy.org <mailto:NumPy-Discussion@scipy.org>
> http://mail.scipy.org/mailman/listinfo/numpy-discussion
Summary: Department of Mathematics & Statistics
Speaker: Hongyun Dong
Title: Boundaries of reduced free group C*-algebras
Date: Friday, April 25, 2008
Time: 1.30 pm
Location: Classroom Building 251
A C*-dynamical system (A, G, α) is a triple consisting of a C*-algebra A, a locally compact group G and a homomorphism α of G into the automorphism group Aut(A) of A. We denote the automorphism corresponding to s in G by α_s. The crossed product is a C*-algebra built out of a dynamical system. Similar to group C*-algebras, when the group G is amenable, the reduced crossed product equals the full crossed product.
This crossed product has the universal property: if (π, U) is any covariant representation of (A, G, α), then there is a representation of A ⋊ G into C*(π(A), U(G)).
Let Λ denote the free group of rank 2 ≤ r ≤ ∞. From the last seminar I gave, we know that Λ is not amenable. However, Λ is hyperbolic, and the action of a hyperbolic group on its boundary is amenable. Also, C*_r(Λ) is exact.
Let L^∞(∂Λ) ⋊ Λ be the crossed product von Neumann algebra, where ∂Λ is the boundary of Λ.
Properties of radicals exponents
January 27th 2009, 09:06 AM #1
How did you simplify the letters in this expression? Did you compute a^(3/3) to get a^1 and b^(6/3) to get b^2, or did you take the cube root of the exponent 3 to get 1 and the cube root of the exponent 6 to get 2?
Please answer this question.
Thank you
Hi mj,
This question sounds like it's part of another post somewhere, but I'll try to answer it for you. And I think you meant to say "rational exponents" instead of "radical exponents".
$\sqrt[3]{-27a^3b^6}=((-3)^3)^{\frac{1}{3}}(a^3)^{\frac{1}{3}}(b^6)^{\frac{1}{3}}=(-3)^{\frac{3}{3}}a^{\frac{3}{3}}b^{\frac{6}{3}}=-3ab^2$
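A quick numeric sanity check of that simplification (my own addition, not part of the original reply):

```python
import numpy as np

# cbrt(-27 a^3 b^6) should equal -3 a b^2 for any real a, b.
a, b = 2.0, 1.5
lhs = np.cbrt(-27 * a**3 * b**6)   # np.cbrt takes the real cube root
rhs = -3 * a * b**2
print(np.isclose(lhs, rhs))   # True
```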
January 27th 2009, 09:47 AM #2
SlideSort: all pairs similarity search for short reads
Bioinformatics. Feb 15, 2011; 27(4): 464–470.
Motivation: Recent progress in DNA sequencing technologies calls for fast and accurate algorithms that can evaluate sequence similarity for a huge amount of short reads. Searching similar pairs from
a string pool is a fundamental process of de novo genome assembly, genome-wide alignment and other important analyses.
Results: In this study, we designed and implemented an exact algorithm SlideSort that finds all similar pairs from a string pool in terms of edit distance. Using an efficient pattern growth
algorithm, SlideSort discovers chains of common k-mers to narrow down the search. Compared to existing methods based on single k-mers, our method is more effective in reducing the number of edit
distance calculations. In comparison to backtracking methods such as BWA, our method is much faster in finding remote matches, scaling easily to tens of millions of sequences. Our software has an
additional function of single link clustering, which is useful in summarizing short reads for further processing.
Availability: Executable binary files and C++ libraries are available at http://www.cbrc.jp/~shimizu/slidesort/ for Linux and Windows.
Contact: slidesort/at/m.aist.go.jp; shimizu-kana/at/aist.go.jp
Supplementary information: Supplementary data are available at Bioinformatics online.
Due to the dramatic improvement of DNA sequencing, it is necessary to evaluate sequence similarities among a huge number of fragment sequences such as short reads. We address the problem of enumerating all neighbor pairs in a large string pool in terms of edit distance, where the cost of insertion, deletion and substitution is one. Namely, given a set of n sequences of equal length, s[1],…, s[n], the task is to find all pairs (i, j) with i < j such that EditDist(s[i], s[j]) ≤ d.
It is conventionally called all pairs similarity search.
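For concreteness, the problem statement corresponds to the following brute-force reference (an illustration of the definition only, not SlideSort's algorithm):

```python
from itertools import combinations

def edit_distance(s, t):
    """Standard DP with unit costs for insertion, deletion, substitution."""
    prev = list(range(len(t) + 1))
    for i, a in enumerate(s, 1):
        cur = [i]
        for j, b in enumerate(t, 1):
            cur.append(min(prev[j] + 1,              # deletion
                           cur[-1] + 1,              # insertion
                           prev[j - 1] + (a != b)))  # substitution/match
        prev = cur
    return prev[-1]

def all_neighbor_pairs(seqs, d):
    """All (i, j), i < j, with EditDist <= d -- quadratic in n."""
    return [(i, j) for i, j in combinations(range(len(seqs)), 2)
            if edit_distance(seqs[i], seqs[j]) <= d]

print(all_neighbor_pairs(["AATT", "ATAT", "GGGG"], 2))   # [(0, 1)]
```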
All pairs search appears in important biological tasks. For example, it is required in finding seed matches in all pairs alignment necessary in sequence clustering (Abouelhoda et al., 2004). Such
alignments can then be used to detect and correct errors in short reads (Qu et al., 2009). In the first step of de novo genome assembly (Simpson et al., 2009; Zerbino and Birney, 2008), short reads
are decomposed to k-mers, and suffix–prefix matches of length k − 1 are detected. In most cases, exact matches are employed due to time constraint. Using approximate matches, the length of contigs
can be extended, which leads to final assembly of better quality. This problem reduces to all pairs similarity search by collecting all k − 1 prefixes and suffixes into a sequence pool. From the
output, only prefix–suffix pairs are reported.
Basically, most popular methods solve the search problem by either of the following two approaches or a combination of them. (i) Finding a common k-mer and verify the match (Lipman and Pearson, 1985;
Simpson et al., 2009; Warren et al., 2007; Weese et al., 2009; Zerbino and Birney, 2008). (ii) Backtracking in an index structure (i.e. suffix array and FM-index) (Langmead et al., 2009; Li and
Durbin, 2009; Li et al., 2009; Rajasekaran et al., 2005; Sagot, 1998; Trapnell et al., 2009). The first type finds common k-mers in strings (i.e. seed match) and verify if two strings sharing the k
-mer are neighbors indeed by extending the match with dynamic programming. It works perfectly well when the string is long enough. However, when strings are short and the threshold d is large, the
length of shared k-mers falls so short that too many candidate pairs have to be verified. The second type stores the strings into an index structure, most commonly a suffix array. Then, similar
strings are found by traversing nodes of the corresponding suffix tree. This approach works fine if d is small, e.g. d ≤ 2, and employed in state-of-the-art short read mapping tools such as BWA (Li
and Durbin, 2009), bowtie (Langmead et al., 2009) and SOAP2 (Li et al., 2009). However, it becomes rapidly infeasible as d grows larger, mainly because the complexity is exponential to d and no
effective pruning is known. ELAND and SeqMap (Jiang and Wong, 2008) decompose sequences into blocks and use multiple indices to store all k-concatenations of blocks. Obviously, it requires much more
memory compared with BWA, which would be problematic in many scenarios. Multisorting (Uno, 2008) uses multiple substring matches to narrow down the search effectively, but it can find neighbors in
terms of Hamming distance only.
Our method termed SlideSort finds a chain of common substrings by an efficient pattern growth algorithm, which has been successfully applied in data mining tasks such as itemset mining (Han et al.,
2004). A pattern corresponds to a sequence of substrings. The space of all patterns is organized as a tree and systematically traversed. Our method does not rely on any index structure to avoid
storage overhead. Instead, radix sort is employed to find equivalent strings during pattern growth. To demonstrate the correctness of our algorithm, the existence of a common substring chain in any
neighbor pair is proved first. In addition, we deliberately avoid reporting the same pair multiple times by duplication checking. As a result, our method scales easily to 10 million sequences and is
much faster than seed matching methods and suffix arrays for short sequences and large radius.
The rest of this article is organized as follows. Section 2 introduces our algorithm. In Section 3, results of computational experiments are presented. Section 4 concludes the article.
2 METHOD
Two similar strings share common substrings in series. Therefore, we can detect similar strings by detecting chains of common substrings systematically. Before proceeding to the algorithm, let us describe fundamental properties first. Let ℓ denote the common sequence length. Divide the interval 1,…, ℓ into b blocks of arbitrary lengths w[1],…, w[b] with ∑_{i=1}^b w[i] = ℓ, and let the starting position of the i-th block be q[i] = 1 + ∑_{j=1}^{i−1} w[j]. The alphabet is denoted as Σ. We assume that each string in the database {s[i]}_{i=1}^n satisfies s[i] ∈ Σ^ℓ. Given two strings s, t, s = t holds if all letters are identical. The substring from positions i to j is denoted s[i, j].
A pattern of length k is defined as a sequence of strings and block indices,
X = ((x[1], y[1]), …, (x[k], y[k])),
where x[i] ∈ Σ^{w[y[i]]} and 1 ≤ y[1] < y[2] < … < y[k] ≤ b. Pattern X matches string s[i] with offset p = (p[1],…, p[k]) if
s[i][q[y[j]] + p[j], q[y[j]] + p[j] + w[y[j]] − 1] = x[j] for j = 1,…, k.
All occurrences of X in the database are denoted as C(X), the set of (sequence, offset) pairs matching X.
For convenience, an index set I(X) is defined as the set of sequences appearing in C(X). The number of sequences in I(X) is denoted |I(X)|.
Theorem 1. —
If s[i] and s[j] of equal length are neighbors, i.e. EditDist(s[i], s[j]) ≤ d, i < j, then there exists a pattern X of length b − d such that X matches s[i] with zero offset (p[1] = p[2] = … = p[b−d] = 0) and matches s[j] with bounded offsets, −⌊d/2⌋ ≤ p[k] ≤ ⌊d/2⌋ for k = 1,…, b − d.
Proof. —
There are multiple possible alignments of s[i] and s[j]. An alignment is characterized by the number of matches m, the number of mismatches f, the number of gaps g[i] in s[i] and the number of gaps g[j] in s[j]. The length of s[i] is equal to m + f + g[j] and that of s[j] is m + f + g[i], because any letter in s[i] is aligned to either a letter of s[j] or a gap symbol in s[j], and vice versa. Since the two lengths are equal and f + g[i] + g[j] ≤ d, we obtain g[i] = g[j] ≤ ⌊d/2⌋. Therefore, the aligned position of any letter is shifted by at most ⌊d/2⌋.
Let us divide s[i] into b blocks of lengths w[1],…, w[b]. Since the number of edited positions is at most d, at least b − d blocks match exactly with their counterparts in s[j] in any alignment. Also, since aligned positions are shifted by at most ⌊d/2⌋, each such block of s[i] can be found in s[j] at an offset between −⌊d/2⌋ and ⌊d/2⌋. ∎
Figure 1 illustrates an example of patterns with b = 5, d = 3. This theorem implies that any neighbor pair has a chain of b − d common blocks and the corresponding blocks lie close to each other. It
serves as a foundation of our algorithm presented later on.
An example pattern for block size 5 and edit-distance threshold 3. s[i] matches X with zero offset in the first block and the third block. s[j] matches X with zero offset in the first block but with offset −1 in the third block.
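The theorem can be checked on the example pair used later in Figure 4, s[i] = AATT and s[j] = ATAT with EditDist = 2 (a toy script of mine, not the paper's code):

```python
def block_matches(si, sj, widths, d):
    """Blocks of si found in sj at an offset of at most d // 2.

    Theorem 1 guarantees at least b - d hits when EditDist(si, sj) <= d.
    Positions are 1-based, matching the paper's q[i] notation.
    """
    starts, q = [], 1
    for w in widths:
        starts.append(q)
        q += w
    hits = []
    for blk, (q0, w) in enumerate(zip(starts, widths)):
        block = si[q0 - 1 : q0 - 1 + w]
        for p in range(-(d // 2), d // 2 + 1):   # smallest offset first
            j0 = q0 - 1 + p
            if 0 <= j0 and j0 + w <= len(sj) and sj[j0 : j0 + w] == block:
                hits.append((blk, p))
                break
    return hits

# b = 4 unit-width blocks, d = 2: at least b - d = 2 blocks must match.
print(block_matches("AATT", "ATAT", [1, 1, 1, 1], 2))
# [(0, 0), (1, -1), (2, -1), (3, 0)]
```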
2.1 Pattern growth
In our algorithm, all patterns X with |I(X)| ≥ 2 are enumerated by a recursive pattern growth algorithm. In a pattern growth algorithm, a pattern tree is constructed, where each node corresponds to a pattern (Fig. 2); nodes at depth k contain patterns of length k.
Pattern growth and pruning process of the proposed method. Patterns are enumerated by traversing the tree in depth-first manner. In each node, new elements are generated by sorting substrings in
sequence pool (‘ATA’, ‘TAT’, ...
At first, patterns of length 1 are generated as follows. For each block y[1] = 1,…, d + 1, a string pool is made by collecting the substrings of {s[i]}_{i=1}^n starting at positions q[y[1]] − ⌊d/2⌋,…, q[y[1]] + ⌊d/2⌋, and repeated strings in the pool are identified by radix sort (Fig. 3). Each pattern of length 1, denoted X[1], is constructed as the combination of a repeated string x[1] and the block index y[1]:
X[1] = ((x[1], y[1])).
At the same time, all occurrences C(X[1]) are recorded. If s[i] matches the same pattern X[1] with several different offsets, only the smallest offset is recorded. These patterns form the nodes at depth 1 of the pattern tree.
Discovery of equivalent strings by radix sort.
Given a pattern X[t] of length t, its children in the pattern tree are generated similarly, as follows. For each y[t+1] = y[t] + 1,…, d + t + 1, a string pool is made by collecting the substrings of the sequences in I(X[t]) starting at positions q[y[t+1]] − ⌊d/2⌋,…, q[y[t+1]] + ⌊d/2⌋. Each repeated string x[t+1] is identified, and the pattern is extended as
X[t+1] = ((x[1], y[1]), …, (x[t], y[t]), (x[t+1], y[t+1])),
and the occurrences C(X[t]) are updated to C(X[t+1]) as well.
To avoid generating useless patterns, the pattern tree is pruned as soon as the support falls below 2. The tree is also pruned if there is no string in I(X) that matches X with zero offset. As pattern growth proceeds in a depth-first manner, memory is reserved as a pattern is extended and immediately released as the pattern is contracted to visit another branch. This dynamic memory management keeps the peak memory usage rather small.
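A heavily simplified sketch of this recursion (zero offsets only, i.e. effectively the Hamming case, with dictionary buckets standing in for radix sort; all names are mine):

```python
from collections import defaultdict

def grow(pool, seqs, widths, starts, next_block, depth, k, out):
    """Enumerate groups of sequences sharing k common blocks."""
    if depth == k:                        # pattern long enough: report group
        out.append(sorted(pool))
        return
    b = len(widths)
    for y in range(next_block, b - (k - depth) + 1):
        buckets = defaultdict(list)       # group equal substrings per block
        for i in pool:
            q, w = starts[y], widths[y]
            buckets[seqs[i][q : q + w]].append(i)
        for members in buckets.values():
            if len(members) >= 2:         # prune: support must stay >= 2
                grow(members, seqs, widths, starts, y + 1, depth + 1, k, out)

seqs = ["ACGT", "ACGA", "TCGT"]
widths, starts = [2, 2], [0, 2]           # two blocks of width 2 (0-based)
out = []
grow(list(range(len(seqs))), seqs, widths, starts, 0, 0, 1, out)
print(out)   # [[0, 1], [0, 2]]: groups sharing the first or second block
```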
2.2 From patterns to pairs
As implied in Theorem 1 (Fig. 1), every neighbor pair appears in the index set I(X) of at least one pattern. Since one of the pair must match with zero offset, the set of eligible pairs is
P[X] = {(i, j) : i, j ∈ I(X), i ≠ j, and i or j matches X with zero offset}.
Since not all members of P[X] correspond to neighbors, we have to verify whether they are neighbors by an actual edit-distance calculation.
A problem here is that the same pair (i, j) can appear in the index sets of many different patterns. It is also possible that a pair (i, j) in the same index set is derived from different offsets. In most applications, it is desirable to ensure that no pair is reported twice. The straightforward solution is to check whether a new pair has previously been reported by storing all pairs, which requires a huge amount of memory. We propose an alternative solution that rejects non-canonical pairs without using any extra memory, as follows.
A match of s[i] and s[j] can occur in various ways, each of which can be described as a tuple (y, p), where y = (y[1],…, y[b−d]) gives the blocks of the pattern and p is the offset with which the pattern matches s[j]. We define the canonical match as the one with the lexicographically smallest y and p, where priority is given to y. For example, consider the case s[i] = AATT, s[j] = ATAT, d = 2 and all block widths set to 1. There are 10 different (y, p) pairs, as shown in Figure 4, where matching residues are shown in red squares. In this case, (1) is canonical. Among them, the matches with overlapping squares do not have a correct alignment; we do not exclude such pairs, to avoid an extra run of dynamic programming.
All (y, p) of s[i] = AATT, s[j] = ATAT. Matching residues are shown in red squares. Since the red squares overlap in (6) and (10), they do not correspond to correct alignment.
To judge whether a given match (y, p) is canonical, it suffices to check whether there exists another match that is lexicographically smaller. More precisely, the match represented by (y, p) is not canonical iff there exist a block 1 ≤ z ≤ max(y) with z ∉ y and an offset −⌊d/2⌋ ≤ r ≤ ⌊d/2⌋ such that block z of s[i] matches s[j] at offset r. This canonicity check can be done in O(db) time.
Pseudocode of the overall algorithm is shown in Algorithm 1. In line 18, it suffices to compute the diagonal stripe of width 2d + 1 of the DP matrix; thus the distance calculation for a candidate pair takes time proportional to d times the sequence length.
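The banded verification step can be sketched as follows (a standard technique, not the paper's implementation; equal-length strings are assumed, and entries farther than d are capped):

```python
def banded_edit_distance(s, t, d):
    """Edit distance if <= d, else None; fills only a stripe of width 2d+1."""
    assert len(s) == len(t)               # equal lengths, as in the paper
    INF = d + 1                           # any value > d means 'too far'
    n = len(s)
    prev = [j if j <= d else INF for j in range(n + 1)]
    for i in range(1, n + 1):
        cur = [INF] * (n + 1)
        lo, hi = max(0, i - d), min(n, i + d)
        if lo == 0:
            cur[0] = i if i <= d else INF
        for j in range(max(1, lo), hi + 1):
            cost = 0 if s[i - 1] == t[j - 1] else 1
            cur[j] = min(prev[j - 1] + cost,                      # diagonal
                         prev[j] + 1 if prev[j] < INF else INF,   # deletion
                         cur[j - 1] + 1,                          # insertion
                         INF)                                     # cap
        prev = cur
    return prev[n] if prev[n] <= d else None

print(banded_edit_distance("AATT", "ATAT", 2))   # 2
print(banded_edit_distance("AAAA", "TTTT", 2))   # None
```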
2.3 Remarks
With a small modification, SlideSort can deal with gap-opening and gap-extension penalties. Define the gap-open and gap-extension costs as γ[o] and γ[e], respectively, and denote the number of mismatches, gap opens and gap extensions as f, g[o] and g[e], respectively. Then our all pairs similarity search problem is reformulated as finding pairs such that f + g[o]γ[o] + g[e]γ[e] ≤ d. Denote the number of gaps in each sequence as g[i] and g[j]. Then g[e] = g[i] + g[j], and by definition g[o] ≥ 2 if g[e] ≠ 0, while g[o] = 0 if g[e] = 0. When g[e] ≠ 0, we have (g[i] + g[j]) γ[e] ≤ d − 2γ[o]. Since the lengths of the two sequences are equal, the number of gaps is also equal, g[i] = g[j], leading to the inequality
g[i] = g[j] ≤ (d − 2γ[o]) / (2γ[e]).
Therefore, the offsets p[k], for k = 1,…, b − d, are bounded by
|p[k]| ≤ (d − 2γ[o]) / (2γ[e]).   (3)
When g[e] = 0, we can find all pairs with zero offsets, hence the offset range (3) covers this case as well. Notice that the block size b must be larger than max(d, d/(γ[o] + γ[e])), which is relevant when γ[o] and γ[e] are larger than the substitution cost.
It is worthwhile to notice that SlideSort can handle sequences of slightly different lengths without any essential modification. See a Supplementary Method in Supplementary file 1 for details.
2.4 Complexity
Denote by σ = |Σ| the alphabet size and by ℓ the sequence length. The space complexity of SlideSort is O((b − d)dn log n + nℓ). Denote by m the number of all pairs included in the index sets I(X). The time complexity of SlideSort is O(b^(d−1) d^(b−d) n + mdℓ). The worst case of the latter part becomes O(n^2 dℓ), but m is expected to scale much better than O(n^2).
The all pairs similarity search problem can also be solved by finding approximate non-tandem repeats in the concatenated text of length nℓ, in O(ℓ^(d+1) σ^d n) time and O(n log n + nℓ) space (Abouelhoda et al., 2004). This time complexity is essentially achieved by producing all variants within distance d of all sequences and finding identical pairs. The difference is that the time complexity of the suffix-array approach depends on the alphabet size, whereas that of SlideSort does not. Thus, SlideSort can be applied to large alphabets (i.e. proteins) as well.
From NCBI Sequence Read Archive (http://www.ncbi.nlm.nih.gov/sra/), we obtained two datasets: paired-end sequencing of Human HapMap (ERR001081) and whole genome shotgun bisulfite sequencing of the
IMR90 cell line (SRR020262). They will be referred to as dataset 1 and 2, respectively. Sequence length of dataset 1 is 51 and that of dataset 2 is 87. Both datasets were generated by Illumina Genome
Analyzer II. Reads that do not include ‘N’ were selected from the top of the original fastq files. Our algorithm was implemented by C++ and compiled by g++. All the experiments were done on a Linux
PC with Intel Xeon X5570 (2.93 GHz) and 32 GB RAM. Only a single core is used in all experiments.
As competitors, BWA (Li and Durbin, 2009) and SeqMap (Jiang and Wong, 2008) are selected among many alternatives, because the two methods represent two totally different methodologies, backtracking
and block combination. BWA is among the best methods using index backtracking, while SeqMap takes an ELAND-like methodology of using multiple indexes for all block combinations. SlideSort is also
compared to the naive approach that calculates edit distances of all pairs. BWA and SeqMap are applied to all pairs similarity search by creating an index from all short reads and querying it with
the same set of reads.
Notice that both BWA and SeqMap are not originally designed for all pairs similarity search but for read mapping, which requires a larger search space. Although fair comparison is difficult between
tools of different purposes, we used mapping tools as competitors, because no tool is widely available for all pairs similarity search, to our knowledge.
For our method, the number b of blocks has to be determined. In the following experiments, we set b relative to the distance threshold d as b = d + k. Here, k corresponds to the pattern size. In the
following experiments, we tried k = 1,…, 5 and reported the best result.
The numbers of neighbor pairs for both datasets are shown in Supplementary Figure S1. We confirmed that both SlideSort and the naive approach reported exactly the same number of neighbor pairs, which
ensures correctness of our implementation of SlideSort.
3.1 Computation time and memory usage
Figure 5 plots computation time against the distance threshold d. SlideSort is consistently faster in all configurations. As the number of sequences grows and the distance threshold is increased, the
difference from BWA and SeqMap becomes increasingly evident. Not all results are obtained, because of the 30 GB memory limit and 300 000 s time limit. Figure 6 compares peak memory usage of
SlideSort, SeqMap and BWA. We separately measured the memory usage of the indexing step and searching step for BWA, because BWA is designed to execute those steps separately. The peak memory of BWA
for the search step is the smallest in most of the configurations, while that of SlideSort is comparable or slightly better than BWA's peak indexing memory. Detailed results for 100 000 short reads
are shown in Table 1.
Computation time on the two short read datasets. Among the four methods, ‘naive’ represents the exhaustive distance calculation.
Memory usage on the two short read datasets. BWA's memory usage is separately evaluated for the indexing step (index) and the search step (search).
Computation time on 100 000 short reads
BWA is the most space-efficient, because its index size does not depend on the distance threshold. Instead, BWA's time complexity deteriorates rapidly as the edit-distance threshold grows,
due to an explosion in the number of nodes traversed during backtracking. In contrast, SeqMap indexes and hashes all combinations of key blocks, which leads to huge memory usage. SlideSort is similar to
SeqMap in that it considers all block combinations, but it is much more memory efficient. The difference is that SlideSort is an indexing-free method that dynamically generates the pattern tree by
depth-first traversal, which allows it to keep only the necessary parts of the tree in memory.
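The memory behaviour of depth-first pattern generation can be illustrated with a toy sketch (not SlideSort's actual code): a recursive generator enumerates patterns lazily, so only the current root-to-leaf path is alive in memory rather than the whole tree of patterns.

```python
def dfs_patterns(alphabet, depth, prefix=""):
    """Lazily enumerate all length-`depth` patterns by depth-first
    traversal; memory use is proportional to the depth, not to the
    number of patterns (a toy analogue of a dynamically generated
    pattern tree)."""
    if depth == 0:
        yield prefix
        return
    for c in alphabet:
        # only the current root-to-leaf path exists at any moment
        yield from dfs_patterns(alphabet, depth - 1, prefix + c)

patterns = list(dfs_patterns("ACGT", 2))
# 4^2 = 16 dinucleotide patterns, produced one at a time
```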
3.2 Effect of pattern size
Figure 7 investigates the influence of the pattern size k on efficiency. Except for d = 1, the best setting was around k = 2–4. Our method with k = 1 roughly corresponds to the single-seed approach, so
this result suggests the effectiveness of using chains. Overall, the computation time was not very sensitive to the choice of k.
Comparison of performance of the proposed method with different k evaluated on 1 000 000 short reads.
3.3 Comparison to single seed
Our algorithm employs a chain of common substrings to narrow down the search. Compared with the single-seed approach, which uses a single k-mer to derive candidate pairs, the total length of the chain's
substrings can be much larger than a k-mer without losing any neighbors. This yields higher specificity and hence a smaller number of candidate pairs. Instead of a chain, one can detect multiple k-mers and verify
those pairs containing multiple matches (Burkhardt and Kärkkäinen, 2002). However, this approach has lower specificity than a chain of the same total length, because the matching
positions of each element of the chain are strictly localized by Theorem 1.
Figure 8 compares the number of candidate pairs generated by our method and by single seed ('k-mer' in the plot); this corresponds to the number of edit-distance calculations. We used two variations of the
single-seed method: 'k-mer/nonredundant' stores previously reported pairs in memory and excludes them from the candidates, while 'k-mer/redundant' uses no additional memory but
counts the same pair multiple times. Here we set the length of the k-mer to d so that no neighbors are lost. In the plot, one can see a significant reduction in the number of candidate pairs for our
algorithm; note that the number of candidate pairs is shown on a log scale. In our method, the number of candidates is minimized at the largest pattern size, because the total length of the substrings is
maximized and specificity becomes optimal. However, since the search space of patterns is then larger, the total computation time is not optimal in this case.
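The single-seed baseline being compared against can be sketched in a few lines: bucket reads by every k-mer, take pairs sharing a bucket as candidates, then verify each candidate with an exact distance check. The sketch below keys seeds by position and verifies with Hamming rather than edit distance for brevity; these are illustrative simplifications, not SlideSort's implementation.

```python
from itertools import combinations

def hamming(a, b):
    """Hamming distance of two equal-length strings."""
    return sum(x != y for x, y in zip(a, b))

def single_seed_pairs(reads, k, d):
    """Toy single-seed filter: reads sharing a k-mer at the same
    position become candidate pairs, each verified against the
    (Hamming) distance threshold d. Assumes equal-length reads."""
    buckets = {}
    for i, r in enumerate(reads):
        for pos in range(len(r) - k + 1):
            buckets.setdefault((pos, r[pos:pos + k]), []).append(i)
    candidates = set()
    for ids in buckets.values():
        candidates.update(combinations(sorted(set(ids)), 2))
    return sorted(p for p in candidates
                  if hamming(reads[p[0]], reads[p[1]]) <= d)
```

A chain method demands several localized matches at once, which shrinks the candidate set before any distance computation is spent on it.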
Comparison of number of candidate pairs. The evaluations were done on 100 000 short reads. The proposed method was examined with k = 1,…, 5. ‘Neighbor pairs’ represent the actual number of neighbor
pairs in data. ‘k-mer/nonredundant’ ...
3.4 Clustering analysis of short reads
A main application of SlideSort is hierarchical sequence clustering, which can be used, for example, in correcting errors in short reads and in preprocessing for metagenome mapping. SlideSort provides
an undirected graph G, where vertices represent short reads and weighted edges represent the edit distances of neighbor pairs. Among hierarchical clustering algorithms, single link is the most scalable (
Manning et al., 2008). Since the dendrogram of single-link clustering is isomorphic to the minimum spanning tree (Gower and Ross, 1969), one can perform single-link clustering via minimum spanning
tree (MST) construction using the Kruskal or Prim algorithm (Kruskal, 1956; Prim, 1957).
Since storing all edges can require a prohibitive amount of memory, we used a well-known online algorithm for building MSTs (Tarjan, 1983). It builds MSTs from a stream of edges, discarding
unnecessary edges along the way: it maintains all cycle-free connected components and, whenever a new edge creates a cycle, it removes the heaviest edge from that cycle. In our experiment,
the additional computation time for finding MSTs was trivially small compared with that of SlideSort finding similar pairs (Table 2). Figure 9 visualizes the largest MSTs found in 10 000 000 short reads
of dataset 2 with edit-distance threshold 3, using the 3D visualization tool Walrus (http://www.caida.org/tools/visualization/walrus/).
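The online MST idea described above can be sketched as follows. This is an illustrative toy, not Tarjan's actual data structures: the tree-path search here is a naive DFS chosen for clarity, where a real implementation would use more efficient machinery.

```python
def online_mst(edge_stream):
    """Maintain a minimum spanning forest over a stream of weighted
    edges (u, v, w); whenever a new edge closes a cycle, the heaviest
    edge on that cycle is discarded."""
    adj = {}  # node -> {neighbor: weight} over the current forest

    def path(u, v, seen):
        # naive DFS returning the edges on the u..v tree path, or None
        if u == v:
            return []
        seen.add(u)
        for n, w in adj.get(u, {}).items():
            if n in seen:
                continue
            rest = path(n, v, seen)
            if rest is not None:
                return [(u, n, w)] + rest
        return None

    def add(u, v, w):
        adj.setdefault(u, {})[v] = w
        adj.setdefault(v, {})[u] = w

    for u, v, w in edge_stream:
        cycle = path(u, v, set()) if u in adj and v in adj else None
        if cycle is None:
            add(u, v, w)                 # connects two components
        else:
            hu, hv, hw = max(cycle, key=lambda e: e[2])
            if w < hw:                   # new edge replaces the heaviest
                del adj[hu][hv]
                del adj[hv][hu]
                add(u, v, w)
    return sorted((u, v, adj[u][v]) for u in adj for v in adj[u] if u < v)
```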
Visualization of large MSTs from a neighbour graph of 10 000 000 short reads with edit-distance threshold 3. The left graph shows 360 MSTs of 112 995 nodes, each of which consists of more than 100
nodes. The right graph focuses on the largest MST that ...
Comparison of computation time of searching pairs and finding MSTs for two types of short read datasets with edit-distance threshold 3
In this study, we developed a novel method that enumerates all similar pairs, in terms of edit distance, from a string pool. The proposed method is based on a pattern growth algorithm that can
effectively narrow down the search by finding chains of common k-mers. Using deliberate duplication checks, the number of edit-distance calculations is reduced as much as possible. SlideSort was
evaluated on large datasets of short reads and was about 10–3000 times faster than other index-based methods. All these results demonstrate the practical merits of SlideSort.
One naturally arising question is whether SlideSort can be used for mapping. In fact, it is possible by storing the pattern tree (Fig. 2) in memory and using it as an index structure. However, the index
would cost too much memory for genome-scale data. What we learned from this study is that all-pairs similarity search is essentially different from mapping in that one can employ pruning and dynamic
memory management. Thus, all-pairs similarity search is not a subproblem of mapping and deserves separate treatment.
In future work, we would like to implement SlideSort with parallel computation techniques. Recent progress in hardware technology enables end users to exploit many types of parallel computing schemes,
such as SSE and GPGPU. SlideSort could be further improved by using these technologies.
Supplementary Material
Supplementary Data:
The authors thank Kiyoshi Asai, Hisanori Kiryu, Takeaki Uno, Tetsuo Shibuya, Yasuo Tabei and Martin Frith for their fruitful discussions.
Funding: Grant-in-Aid for Young Scientists (22700319, 21680025) by JSPS; FIRST program of the Japan Society for the Promotion of Science in part.
Conflict of Interest: none declared.
• Abouelhoda M., et al. Replacing suffix trees with enhanced suffix arrays. J. Discrete Algorithms. 2004;2:53–86.
• Burkhardt S., Kärkkäinen J. One-gapped q-gram filters for Levenshtein distance. In: Proceedings of the 13th Symposium on Combinatorial Pattern Matching (CPM'02). Vol. 2373 of Lecture Notes in Computer Science. Berlin, Germany: Springer; 2002. pp. 225–234.
• Gower J., Ross G. Minimum spanning trees and single-linkage cluster analysis. Appl. Stat. 1969;18:54–64.
• Han J., et al. Mining frequent patterns without candidate generation. Data Min. Knowl. Discov. 2004;8:53–87.
• Jiang H., Wong W.H. SeqMap: mapping massive amount of oligonucleotides to the genome. Bioinformatics. 2008;24:2395–2396.
• Kruskal J.B. On the shortest spanning subtree of a graph and the traveling salesman problem. Proc. Am. Math. Soc. 1956;7:48–50.
• Langmead B., et al. Ultrafast and memory-efficient alignment of short DNA sequences to the human genome. Genome Biol. 2009;10:R25.
• Li H., Durbin R. Fast and accurate short read alignment with Burrows-Wheeler transform. Bioinformatics. 2009;25:1754–1760.
• Li R., et al. SOAP2: an improved ultrafast tool for short read alignment. Bioinformatics. 2009;25:1966–1967.
• Lipman D.J., Pearson W.R. Rapid and sensitive protein similarity searches. Science. 1985;227:1435–1441.
• Manning C., et al. Introduction to Information Retrieval. Cambridge, UK: Cambridge University Press; 2008.
• Prim R. Shortest connection networks and some generalizations. Bell Syst. Tech. J. 1957;26:1389–1401.
• Qu W., et al. Efficient frequency-based de novo short-read clustering for error trimming in next-generation sequencing. Genome Res. 2009;19:1309–1315.
• Rajasekaran S., et al. High-performance exact algorithms for motif search. J. Clin. Monit. Comput. 2005;19:319–328.
• Sagot M.-F. Spelling approximate repeated or common motifs using a suffix tree. In: Lucchesi C.L., Moura A.V., editors. LATIN '98: Theoretical Informatics, Third Latin American Symposium. Vol. 1380 of Lecture Notes in Computer Science. Berlin, Germany: Springer; 1998. pp. 374–390.
• Simpson J.T., et al. ABySS: a parallel assembler for short read sequence data. Genome Res. 2009;19:1117–1123.
• Tarjan R. Data Structures and Network Algorithms. Philadelphia, USA: Society for Industrial and Applied Mathematics (SIAM); 1983.
• Trapnell C., et al. TopHat: discovering splice junctions with RNA-Seq. Bioinformatics. 2009;25:1105–1111.
• Uno T. An efficient algorithm for finding similar short substrings from large scale string data. In: Proceedings of the 12th Pacific-Asia Conference on Advances in Knowledge Discovery and Data Mining (PAKDD'08). Vol. 5012 of Lecture Notes in Computer Science. Berlin, Germany: Springer; 2008. pp. 345–356.
• Warren R.L., et al. Assembling millions of short DNA sequences using SSAKE. Bioinformatics. 2007;23:500–501.
• Weese D., et al. RazerS: fast read mapping with sensitivity control. Genome Res. 2009;19:1646–1654.
• Zerbino D.R., Birney E. Velvet: algorithms for de novo short read assembly using de Bruijn graphs. Genome Res. 2008;18:821–829.
Articles from Bioinformatics are provided here courtesy of Oxford University Press
See more... | {"url":"http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3035798/?tool=pubmed","timestamp":"2014-04-21T10:18:06Z","content_type":null,"content_length":"97331","record_id":"<urn:uuid:0b6822ac-310d-42d5-bb68-1b8685e07e81>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00245-ip-10-147-4-33.ec2.internal.warc.gz"} |
Fort Worth Precalculus Tutor
Find a Fort Worth Precalculus Tutor
My name is Jose and I'm from Fort Worth. I am a sophomore at Texas Tech University in Lubbock. I am a Cell and Molecular Biology major.
7 Subjects: including precalculus, geometry, biology, algebra 1
I graduated from the University of Texas at Arlington with a B.S. degree in Biology (emphasis in Evolution and Microbiology), and minor in Psychology (emphasis on Neuropsychology). I have been a
microbiologist for two years now, and have my own website promoting science education and advocacy. I ha...
33 Subjects: including precalculus, chemistry, calculus, physics
...I am a state certified teacher in 4-8th AND 8-12th grade math. I have tutored every math subject, including statistics, pre-calculus, and college algebra. I am currently an Algebra 2 & Pre-AP Algebra 2 teacher, but I have also taught Algebra 1 for many years.
5 Subjects: including precalculus, algebra 1, ACT Math, algebra 2
I have been helping students overcome their fears and doubts about mathematics for over ten years. I have a Chemical Engineering degree from The University of Texas at Austin and I am Texas
certified in Mathematics grades 4-8 and 8-12. I have experience teaching grades 5, 6, 7, 8, Algebra I, Algebra II, Geometry, Pre-Calculus and Calculus.
9 Subjects: including precalculus, chemistry, calculus, geometry
I am a recent graduate of Trinity University in San Antonio, Texas. Before transferring to Trinity, I attended the United States Naval Academy in Annapolis, Maryland for three years. I was an
applied math major at the academy, and finished my math degree at Trinity University.
14 Subjects: including precalculus, chemistry, geometry, Microsoft Word | {"url":"http://www.purplemath.com/Fort_Worth_Precalculus_tutors.php","timestamp":"2014-04-17T01:05:27Z","content_type":null,"content_length":"24180","record_id":"<urn:uuid:f8800b25-93fa-4c7d-bbd8-1e8c4921e39f>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00246-ip-10-147-4-33.ec2.internal.warc.gz"} |
Eigen MAGMA backend implementation project
I have "created a fork" of Eigen 3.2 and incorporated some (small) progress I have made preparing a MAGMA backend to best exploit the GPU and CPU. This is an alternative to the MKL backend, although it
still depends on MKL indirectly, because MAGMA itself uses MKL under the hood. I have been testing it against MAGMA 1.4.0-beta2 and so far all my project tests pass without having to change our Eigen-based code base, which is great!
Anyone who wants to contribute please contact me to bravegag@hotmail.com or via the GitHub account below.
The code base is available here:
I have the first port working, corresponding to GeneralMatrixMatrix_MAGMA.h, which uses the MAGMA API but in reality invokes CUBLAS, which is slightly faster:
https://github.com/bravegag/eigen-magma ... ix_MAGMA.h
Another, partial implementation (currently a bit of a work in progress) is ColPivHouseholderQR_MAGMA.h; it is still missing the macro being enabled for float and complex types:
https://github.com/bravegag/eigen-magma ... QR_MAGMA.h
I have been adding implementations prioritizing the functions we use as part of our project.
The remaining *_MAGMA.h implementations are simply mock copies of the MKL counterparts with some basic pre-processing changes i.e. MKL -> MAGMA replacement.
Best regards,
Re: Eigen MAGMA backend implementation project
I have created a simple benchmark project to check the implementation:
I have added documentation including the Gflops results for DGEMM and DGEQP3 so far implemented.
Re: Eigen MAGMA backend implementation project
Quick update, I have additionally implemented:
- dgemv (matrix vector multiplication)
- dtrsm (triangular matrix solver)
- dpotrf (Cholesky decomposition)
The results are very disappointing: unless I have bugs (e.g. copying more memory than needed between Host and Device), MAGMA underperforms overall in these three cases once the memory transfers are
taken into account, see:
The Cholesky decomposition result is the most surprising, because Eigen beats both MKL and MAGMA (see the Gflops):
https://raw.github.com/bravegag/eigen-m ... gflops.png
If anyone would be willing to donate a code review I will be more than happy ;)
Best regards,
PS: I think I will include the MAGMA mgpu version of Cholesky to see how it compares running on 2x nVidia GTX Titan. Note that unlike the MAGMA testing implementations, the Gflop/s figures in my
benchmark include the memory transfer times, which is what effectively matters for my purposes (getting the problems solved as fast as possible).
Re: Eigen MAGMA backend implementation project
First, on this page
the images appear broken for me. I have to click on each one to see the image in a separate window.
For Cholesky, where does your matrix reside, on the CPU or the GPU, and which magma routine do you call to do the factorization? The performance there (250 Gflop/s) seems very low. We easily achieve
600 Gflop/s on a Kepler K20c (705 MHz). What performance do you get using the magma testers, testing_dpotrf and testing_dpotrf_gpu?
Yes, increasing the MKL num threads should improve the MAGMA performance. I usually use one socket, say MKL_NUM_THREADS=8. Also, for multi-threaded code, using numactl --interleave=all can have a
huge impact on MKL performance. But surely Cholesky was run multi-threaded, to achieve 400 Gflop/s?
For the dgemm, dgemv, dtrsm, are you calling cublas routines, or magmablas routines? Currently, I generally recommend calling cublas routines, as particularly their gemm is optimized for newer
generation Nvidia cards (Keplers, etc.). In which case the graphs should be labelled "cublas" instead of "magma". You can of course use the magma wrappers; magma_dgemm is a wrapper around
cublasDgemm, while magmablas_dgemm is our own kernel.
Counting the data transfer time is unfair for BLAS operations (gemm, gemv, trsm, etc.) -- one should not transfer matrices to the GPU to do only a single BLAS operation and then transfer the results
back. That is generally a losing strategy. Also, often data transfers can be done asynchronously while other computation is done on the GPU. Perhaps it would be best to plot performance both with and
without data transfer time, to emphasize that data transfers should be avoided or overlapped as much as possible.
Can you be a bit more specific about the matrix sizes? Are these all square (n x n) matrices? For trsm, how many RHS are you solving?
Re: Eigen MAGMA backend implementation project
Hi Mark,
Thank you very much for your response! Please find my comments below:
mgates3 wrote:First, on this page
the images appear broken for me. I have to click on each one to see the image in a separate window.
Thank you. I have corrected that. Now all the images are embedded.
mgates3 wrote:For Cholesky, where does your matrix reside, on the CPU or the GPU, and which magma routine do you call to do the factorization? The performance there (250 Gflop/s) seems very low.
We easily achieve 600 Gflop/s on a Kepler K20c (705 MHz). What performance do you get using the magma testers, testing_dpotrf and testing_dpotrf_gpu?
First the code is here for reference:
https://github.com/bravegag/eigen-magma ... LT_MAGMA.h
The matrix is passed to the method from Eigen and resides on the Host, in unpinned Host memory. There I use the function magma_?potrf_gpu, so the matrix is copied from Host to Device
and the resulting L matrix is copied back from Device to Host. The copying times are accounted for in the benchmark. The benchmark was obtained using MKL_NUM_THREADS=1, but increasing this doesn't
make a big difference. With N=10k I get about 200 Gflop/s, and testing with two GPU cards I reach 300 Gflop/s, but note that I have two nVidia GTX Titans, not a Tesla card. I can afford 3x GTX Titan cards
for the price of a Tesla, but maybe I will switch to Teslas later. Also, please note that I account for the memory transfer times whereas the MAGMA testing benchmarks don't; for me it is important to
know the overall performance and to answer whether it makes sense to use MAGMA in each case.
I get the following for N=10k:
testing_dpotrf 177.82 Gflop/s
testing_dpotrf_gpu 190.83 Gflop/s
and testing_dpotrf_gpu with --ngpu 2 I get 300 Gflop/s
I have two Xeon E5-2690 and MKL with MKL_NUM_THREADS=16 which is the total number of cores in this machine reaches 300 Gflop/s
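For reference, the Gflop/s numbers quoted above follow the usual approximate operation counts (about n^3/3 flops for an n-by-n Cholesky factorization, and 2n^3 for a gemm), so converting from wall time is straightforward. A tiny sketch using those standard counts, nothing measured from MAGMA itself:

```python
def cholesky_gflops(n, seconds):
    """Approximate Gflop/s for an n-by-n Cholesky factorization,
    using the standard ~n**3/3 flop count."""
    return (n ** 3 / 3.0) / seconds / 1e9

def gemm_gflops(n, seconds):
    """Approximate Gflop/s for an n-by-n matrix multiply (~2*n**3 flops)."""
    return (2.0 * n ** 3) / seconds / 1e9

# e.g. a 10000x10000 dpotrf finishing in about 1.75 s corresponds to
# roughly 190 Gflop/s, in line with the testing_dpotrf_gpu figure above
```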
mgates3 wrote:Yes, increasing the MKL num threads should improve the MAGMA performance. I usually use one socket, say MKL_NUM_THREADS=8. Also, for multi-threaded code, using numactl --interleave=
all can have a huge impact on MKL performance. But surely Cholesky was run multi-threaded, to achieve 400 Gflop/s?
I could not find any --interleave=all option anywhere in the MAGMA installation. Where do you specify that in the testing program arguments? [UPDATE: I found out how to use numactl --interleave=all in the Intel
knowledge base, thank you. I will check whether the benchmarks improve.] I ran my benchmarks using MKL_NUM_THREADS=1 because I am more interested in exploiting intra-parallelism at the GPU level, leaving
the cores free for inter-parallelism, e.g. a MapReduce algorithm that invokes my processes, which in turn use Eigen MAGMA. If I use the same cores for inter- and intra-parallelism, performance will not be
great. For very big problem sizes it may make sense to put all available resources toward solving a single problem. However, I am now facing a huge grid search (+2 million evaluations) over parameters for
an ML problem and need the cores for the inter-parallelism with MapReduce.
mgates3 wrote:For the dgemm, dgemv, dtrsm, are you calling cublas routines, or magmablas routines? Currently, I generally recommend calling cublas routines, as particularly their gemm is
optimized for newer generation Nvidia cards (Keplers, etc.). In which case the graphs should be labelled "cublas" instead of "magma". You can of course use the magma wrappers; magma_dgemm is a
wrapper around cublasDgemm, while magmablas_dgemm is our own kernel.
Yes, you are right, I use CUBLAS in these cases, but through the MAGMA API. I will correct the labels.
mgates3 wrote:Counting the data transfer time is unfair for BLAS operations (gemm, gemv, trsm, etc.) -- one should not transfer matrices to the GPU to do only a single BLAS operation and then
transfer the results back. That is generally a losing strategy. Also, often data transfers can be done asynchronously while other computation is done on the GPU. Perhaps it would be best to plot
performance both with and without data transfer time, to emphasize that data transfers should be avoided or overlapped as much as possible.
I can do that, plotting both with and without accounting for the transfers. I hear you about the losing strategy, but this is a simple way to take advantage of MAGMA and speed up Eigen code easily,
which works well for ?gemm or ?geqp3. A better strategy would be to integrate MAGMA more deeply into the Eigen framework and cache matrices on the Device for multiple uses; given that Eigen uses
expression templates, I believe it would be possible to do so. A relatively cheap improvement would be to allocate all Host memory in Eigen as pinned memory, which would speed up transfers and improve the
results. Of course, changing to a more powerful Tesla K20 card may also improve the results drastically. I have been playing with my two nVidia GTX Titans so far.
mgates3 wrote:Can you be a bit more specific about the matrix sizes? Are these all square (n x n) matrices? For trsm, how many RHS are you solving?
For ?trsm I use an RHS matrix of size N too.
Many thanks again for your help!
Best regards,
Re: Eigen MAGMA backend implementation project
A bug fix related to the Cholesky decomposition: the benchmark input matrix A was not SPD. This has been fixed and now the results are correct. Now MAGMA shines, reaching over 120 Gflop/s:
Re: Eigen MAGMA backend implementation project
After integrating the magma_?geqrf3_gpu implementation I get very disappointing benchmark results. This was very surprising to me given the excellent result of magma_?geqp3_gpu.
Eigen MAGMA integration:
https://github.com/bravegag/eigen-magma ... QR_MAGMA.h
Benchmark result:
https://raw.github.com/bravegag/eigen-m ... gflops.png
The MAGMA testing_dgeqrf_gpu, which invokes magma_dgeqrf3_gpu, produced very good results, topping out at 193.88 Gflop/s for N=10k, so the memory costs may really be slowing this one down.
Re: Eigen MAGMA backend implementation project
You could also try magma_dpotrf and magma_dgeqrf, that is, the CPU interfaces, since your matrix is on the CPU. However, your matrix is not pinned, so I don't know if that will improve results or
not. Pinning memory can also be a slow operation.
Re: Eigen MAGMA backend implementation project
Hi Mark,
Thank you very much for your feedback and help.
I tried all the versions already, and the GPU versions performed best in my benchmarks even while having to copy unpinned memory between Host and Device. It would be great to have the same benchmarks
executed using a Tesla K20 card before asking my department to buy them :)
One question. I have 2x Xeon E5-2690 with a total of 16 cores available, and the gain I get from increasing MKL_NUM_THREADS from 1 to 16 is very marginal for the MAGMA Host versions; e.g. testing_dgeqrf
improves by a small margin, and the same goes for testing_dgesvd and so on. MKL itself shows a significant performance increase when enabling multiple threads and using "numactl --interleave=all".
Best regards,
Giovanni Azua
Re: Eigen MAGMA backend implementation project
We use MKL for factoring the panel on the CPU. I'm not sure how parallel MKL has made this. Most of MKL's performance increase with multiple threads is in updating the trailing matrix, which we do on
the GPU, so we benefit much less from multiple threads. Also, once the panel is faster than the trailing matrix update, making it any faster won't help -- the CPU is waiting for the GPU.
The big exception is eigenvalue and SVD routines, where we do the Hessenberg, tri- or bi-diagonal reduction with the GPU, but then call LAPACK's eigensolver directly, so MAGMA potentially benefits
much more from MKL's multithreading. I haven't benchmarked them with different MKL number of threads, though. | {"url":"http://icl.cs.utk.edu/magma/forum/viewtopic.php?p=2494","timestamp":"2014-04-19T02:17:12Z","content_type":null,"content_length":"38011","record_id":"<urn:uuid:d5545eb9-e6c5-4704-b80d-08d0f31daebe>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00015-ip-10-147-4-33.ec2.internal.warc.gz"} |
Math Forum Discussions
Topic: positive exponents
Replies: 7 Last Post: May 13, 1995 1:20 PM
Re: positive exponents
Posted: May 10, 1995 12:34 PM
What do you mean "eliminate negative exponents?" There's nothing wrong
with writing x^-1 instead of 1/x. Sometimes it's better left that way.
For example, it's a lot easier to multiply x^2 + x^-1 times x^-2 + x^3 than
to multiply (x^3 + 1)/x times (x^5 + 1)/x^2.
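That multiplication is easy to check mechanically if such expressions are represented as exponent-to-coefficient maps, negative exponents included. The representation below is just a convenience for the check, not standard notation:

```python
from collections import defaultdict

def multiply(p, q):
    """Multiply two 'polynomials' allowed to have negative exponents,
    each represented as an {exponent: coefficient} dict."""
    out = defaultdict(int)
    for e1, c1 in p.items():
        for e2, c2 in q.items():
            out[e1 + e2] += c1 * c2
    return {e: c for e, c in out.items() if c != 0}

# (x^2 + x^-1) * (x^-2 + x^3) = 1 + x^5 + x^-3 + x^2
product = multiply({2: 1, -1: 1}, {-2: 1, 3: 1})
```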
On the other hand, if you want to set a rational function equal to zero, it
helps to get it in a quotient form with only positive exponents, since you
can then go ahead and (almost) ignore the denominator. (Of course at the
end you have to check that your solutions don't make the denominator equal
to zero).
So it really depends on the situation.
Judy Roitman, Mathematics Department
Univ. of Kansas, Lawrence, KS 66049 | {"url":"http://mathforum.org/kb/message.jspa?messageID=1475015","timestamp":"2014-04-19T21:06:48Z","content_type":null,"content_length":"25296","record_id":"<urn:uuid:0887d735-384c-4786-840f-52551cefdd1e>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00169-ip-10-147-4-33.ec2.internal.warc.gz"} |
MathGroup Archive: October 1997 [00276]
Re: Help with findroot
• To: mathgroup at smc.vnet.net
• Subject: [mg9222] Re: [mg9169] Help with findroot
• From: Daniel Lichtblau <danl>
• Date: Fri, 24 Oct 1997 01:00:35 -0400
• Sender: owner-wri-mathgroup at wolfram.com
At 09:38 +0200 97.10.16, Karl Kevala wrote:
> I'm having a problem using findroot to solve an equation. Perhaps
> someone could shed some light on what's wrong.
> FindRoot[Sqrt[x/(1.2 10^-4)]==-0.1*Tan[Sqrt[x/(1.2*10^-4)]],{x,0.1,0.1,2}]
> Mathematica 3 returns a value of -0.07 for x, which is not anywhere
> close to correct.
> Further, I've tried several different starting values and min/max
> limits, but a negative answer is always returned. Eventually I'd like
> to compile a list of all the roots of the equation up to, say, x=1000,
> but I can't even get one right now.
> Thanks,
> Karl Kevala
Let me add to the several reasonable responses already posted. One
useful method might be to remove the source of singularities, that is,
kill the denominator.
In[45]:= ee = Cos[y]*y + .1*Sin[y] /. y->Sqrt[x/(1.2*10^(-4))]
Out[45]= 91.2871 Sqrt[x] Cos[91.2871 Sqrt[x]] + 0.1 Sin[91.2871 Sqrt[x]]
In[46]:= rutes = x /. Table[FindRoot[ee == 0, {x,j}], {j,0,.02,.001}];
Newton's method failed to converge to the prescribed accuracy after
In[47]:= rutes = Union[rutes, SameTest->(Chop[Abs[#1-#2],10^-5]==0&)]
Out[47]= {0., 0.000319609, 0.00268874, 0.00742618, 0.0145323, 0.0240071,
> 0.0358507, 0.130599}
So apparently FindRoot had minor trouble on one attempt, but still found
a bunch of roots in the region sampled. The point is that Newton/secant
methods will behave much better in the absence of poles.
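The pole-removal trick also makes the problem easy for a plain bisection method, since the cleared form y*cos(y) + 0.1*sin(y) (with the substitution y = Sqrt[x/(1.2*10^-4)]) is continuous everywhere. As an illustrative cross-check in Python rather than Mathematica:

```python
import math

def f(y):
    # numerator of the original equation after clearing Cos[y]:
    # a root y satisfies y*cos(y) + 0.1*sin(y) == 0
    return y * math.cos(y) + 0.1 * math.sin(y)

def bisect(lo, hi, tol=1e-12):
    """Simple bisection; assumes f(lo) and f(hi) have opposite signs."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# the first positive root lies between pi/2 (where f > 0) and pi (f < 0)
y = bisect(math.pi / 2, math.pi)
x = 1.2e-4 * y * y      # undo the substitution
# x is approximately 0.000319609, matching FindRoot's value above
```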
Alternatively (as several others noted) if you know the location of the
poles and roughly where to proceed toward the zeroes, you can give
sufficiantly good initial values that FindRoot will have a chance to
Closer investigation reveals that the troublesome starting point was
.001. Looking at the plot makes it clear what goes awry: a nearly
horizontal slope sends Newton's method to the negative axis, where the
function is complex-valued (and discontinuous) due to presence of Sqrt.
In general it might be better (when possible) to make a substitution
that removes such multi-valued inverses, then solve in two stages.
Others made this point so I'll skip the details.
By the way, you can do the denominator removal more-or-less
automatically using Together and related commands. It typically takes a
few minutes playing around to get the right use of Trig->True and that
sort of thing.
In[69]:= origee = Sqrt[x/(1.2 10^(-4))] + 0.1*Tan[Sqrt[x/(1.2 10^(-4))]];
In[70]:= ee2 = Together[origee, Trig->True];
In[71]:= ee3 = Chop[Numerator[ee2, Trig->True]]
Out[71]= 91.2871 Sqrt[x] Cos[91.2871 Sqrt[x]] + 0.1 Sin[91.2871 Sqrt[x]]
Daniel Lichtblau
Wolfram Research
danl at wolfram.com | {"url":"http://forums.wolfram.com/mathgroup/archive/1997/Oct/msg00276.html","timestamp":"2014-04-21T14:50:24Z","content_type":null,"content_length":"36634","record_id":"<urn:uuid:9feed143-0493-4970-9320-fda4dacf3653>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00569-ip-10-147-4-33.ec2.internal.warc.gz"} |
Chapter 6 Review
6.5: Chapter 6 Review
Created by: Heather Haney
In this chapter, we have learned that when working with bivariate, numerical data it is important to first identify whether there is an explanatory and response relationship between the two
variables. Often one of the variables, the explanatory (independent) variable, can be identified as having an impact on the value of the other variable, the response (dependent) variable. The
explanatory variable should be placed on the horizontal axis, and the response variable should be placed on the vertical axis. Next we learned how to construct a visual representation, in the form of
a scatterplot, so that we can see what the association looks like. A scatterplot helps us see what, if any, association there is between the two variables. If there is an association between the two
variables, it can be identified as being strong if the points form a very distinct form or pattern, or weak if the points appear more randomly scattered. If the values of the response variable
generally increase as the values of the explanatory variable also increase, the data has a positive association. If the response variable generally decreases as the explanatory variable increases,
the data has a negative association. We also are able to see the form of the pattern, if any, in the graph.
When the data looks reasonably linear, we learned how to use technology to calculate the least-squares regression line and the correlation coefficient. The least-squares regression line is often
useful for making predictions for linear data. However, we now know to beware of extrapolating beyond the range of our actual data. Correlation is a measure of the linear relationship between two
variables – it does not necessarily state that one variable is caused by another. For example, a third variable or a combination of other things may be causing the two correlated variables to relate
as they do. We learned how to interpret the linear correlation coefficient and that it can be greatly affected by outliers and influential points. Also, just because two variables have a high
correlation, does not mean that they have a cause-and-effect relationship. Correlation ≠ Causation!
Beyond constructing graphs and calculating statistics, we learned how to describe the relationship between the two variables in context. The acronym we learned to help us remember what to include in
our descriptions is S.C.O.F.D. This tells us to describe the strength of the association, to be sure that our description is in context, to mention any outliers or influential points that we observe,
and to describe the form and the direction of the relationship. We also learned how to interpret the slope and y-intercept of the least-squares regression line in context. Even though we are doing
easy calculations, statistics is never about meaningless arithmetic and we should always be thinking about what a particular statistical measure means in the real context of the data.
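The quantities reviewed above, the correlation coefficient r and the least-squares slope and intercept, can be computed directly from their textbook formulas (b = r*(sy/sx), a = ybar - b*xbar). A small sketch, assuming no statistics library:

```python
import math

def corr_and_lsrl(xs, ys):
    """Return (r, intercept a, slope b) for the least-squares
    regression line y-hat = a + b*x, using b = r * sy/sx."""
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    sx = math.sqrt(sum((x - xbar) ** 2 for x in xs) / (n - 1))
    sy = math.sqrt(sum((y - ybar) ** 2 for y in ys) / (n - 1))
    r = sum((x - xbar) * (y - ybar)
            for x, y in zip(xs, ys)) / ((n - 1) * sx * sy)
    b = r * sy / sx
    a = ybar - b * xbar
    return r, a, b

# perfectly linear data: r = 1 and the LSRL is y-hat = 0 + 2x
r, a, b = corr_and_lsrl([1, 2, 3, 4], [2, 4, 6, 8])
```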
Chapter 6 Review Exercises
Answer the following as TRUE or FALSE.
1) A negative relationship between two variables means that for the most part, as the x variable increases, the y variable increases.
2) A correlation of -1 implies a perfect linear relationship between the variables.
3) The equation of the regression line used in statistics is $\hat{y}= a + bx$
4) When the correlation is high, one can assume that x causes y.
Complete the following statements with the best answer.
5) The symbol for the Correlation coefficient is _____
6) A statistical graph of two variables is called a(n) ______________________.
7) The ____________________ variable is plotted along the x-axis.
8) The range of r is from _____ to ______.
9) The sign of r and ___________ will always be the same.
10) LSRL stands for _______________________________________________________.
11) If all the points fall on a straight line, the value of r will be ________ or _________.
12) If r = -0.86, then r^2 = _______.
13) If r^2 = 0.77, then r = ______ or ______.
14) Using an LSRL to make predictions outside the range of our original data is called ________________.
15) Using an LSRL to make predictions within the range of our original data is called ________________.
16) When describing the relationship visible in a scatterplot, the acronym S.C.O.F.D. stands for _____________________________________________________________________.
17) Suppose that a scatterplot shows a strong, linear, positive relationship, and the correlation coefficient is very high. However, both of the variables are actually increasing due to some outside
lurking variable. This relationship suffers from ____________________.
18) Suggest possible lurking variables to explain the high correlations between the following variables. Consider whether common response, confounding, or coincidence may be involved.
a) The number of cell phones being made has been increasing over the past 15 years. So has the number of starving children. Do cell phones cause starvation?
b) The stress level of all of the employees at a certain company has been going up consistently over the past year. During this time, they have received three pay bumps. Does this mean that
higher pay is causing the stress?
c) Suppose that a study shows that the number of hours of sleep a person gets is negatively correlated with the number of cigarettes a person smokes. Does this mean that not sleeping causes a
person to smoke more cigarettes?
19) Some researchers wanted to determine how well the number of beers consumed can predict what a person's blood alcohol content will be after a given length of time. They set up an experiment in
which several volunteers each drank a randomly selected number of beers during a given time period. The volunteers were between 21 and 25 years of age, but varied in gender and weight. Exactly
three hours after they began to drink the beers, their BAC level was measured three times. The three measurements were averaged and the results are given in the following table. (This is fictitious
data, but is based on calculations from the BAC calculator at: http://www.dot.wisconsin.gov)
a) Identify the explanatory and response variables and construct a scatter-plot (be neat & label your axes).
b) Calculate the LSRL and correlation. Report the equation and add it to your scatter-plot. Identify your variables (report what x and y stand for).
c) Identify and interpret the slope in context.
d) Identify and interpret the y-intercept in context.
e) If a person drinks 6 beers during this time period, on average what do you predict the person’s BAC will be?
f) If a person drinks 15 beers during this time period, on average what do you predict the person’s BAC will be?
g) Are you confident in both of the previous answers? Why or why not?
20) When investigating car crashes, it is often necessary to try to determine the speed at which a vehicle was traveling at the time of the accident. Investigators are able to do this by measuring
the length of the skid mark left by the vehicle in question. The following table lists several speeds (mph) based on the skid length (feet), according to the Forensic Dynamics website: http://
a) Identify the explanatory and response variables and construct a scatter-plot (be neat & label your axes).
b) Calculate the LSRL and add it to your scatter-plot. Report the equation and identify your variables.
c) Describe the relationship you see in the scatter-plot (S.C.O.F.D.). Be thorough & use complete sentences! Be sure that you explain the relationship in the context of the problem (overall trend
between the two variables).
d) What is the correlation coefficient? Based on your scatterplot and the value of r, how well do you feel that your model fits this data? Explain.
e) What is the predicted speed if the skid mark is 157 feet? If it were 36 feet?
f) Would you expect predictions beyond 250 feet to generally over-estimate or under-estimate the actual speed of the vehicle? Why?
MAT100 - Fund of Mathematics
This 3-credit course was designed to enhance the student’s knowledge, understanding and appreciation of mathematics. Topics are selected from among a variety of areas and fields in mathematics:
problem solving, set theory, logic, numeration systems, elementary number theory, Euclidean geometry, probability and statistics. The student will examine the language, notation and applications
relative to each area of mathematics. The prerequisite for this course is passing Part A of the University’s math placement exam (11 or higher) or SAT-Math scores of 440 or higher. [Offered in-class/
Web, fall and spring; Web only, summer]. (3 crs.)
MAT104 - Tentative Math
This course is to be used by Student Retention for incoming freshmen and transfer students who need a Mathematics course on their schedule.
MAT110 - Applications of Math
This course will provide the student with an application-oriented mathematics curriculum. Students will use cooperative learning to solve real-world problems using technology and multimedia
resources. The course will be taught from a student discovery and investigative standpoint incorporating the use of the National Council of Teachers of Mathematics Principles and Standards for School
Mathematics. The topics covered include statistics, circuits, probability, linear programming and dynamic programming. Prerequisite: Must pass Part A of the University math placement test (11 or
higher) or SAT- Math 440 or higher. [Offered in-class/Web, fall and spring; Web only, summer]. (3 crs.)
MAT120 - Elementary Topics in Math I
This is the first course of a sequence of two mathematics content courses specifically designed for PreK - Grade 8 teacher education candidates by providing an overview of fundamental mathematical
concepts. The content covered includes basic algebraic work with equations and inequalities in one unknown, systems of equations, problem-solving, sets, concepts of logic, binary operations, systems
of numeration, number theory, rational numbers, real numbers, measurement, and use of calculators and computers. Prerequisite: DMA 092 for education majors; pass Part A of the University math
placement test for non majors (11 or higher). [Offered in-class/Web, fall; in-class only, spring; Web only, summer]. (3 crs.)
MAT130 - Elementary Topics in Math II
This is the second course of a sequence of two mathematics content courses specifically designed for PreK - Grade 8 teacher education candidates by providing an overview of fundamental mathematical
concepts. The content covered includes metric and non-metric geometry, coordinate geometry, introduction of statistics and probability, problem-solving, and computer use. Prerequisite: DMA 092 for
education majors; pass Part A of the University math placement test for non-majors (11 or higher). [Offered in-class only, fall; in-class/Web spring; Web only, summer]. (3 crs.)
MAT181 - College Algebra
Fundamental operations; factoring and fractions; exponents and radicals; functions and graphs; equations and inequalities; properties of graphs; systems of linear equations; synthetic division; and
rational zeros of polynomials. Prerequisite: DMA 092 or pass Part B of the University math placement test (12 or higher) or SAT-Math 520 or higher. [Offered in-class/Web, fall and spring; Web only,
summer]. (3 crs.)
MAT191 - College Trig
This course is a thorough development of trigonometry. This course includes both circular and right-triangle geometry, evaluation of trigonometric functions, graphing trigonometric and inverse
trigonometric functions, analyses of trigonometric graphs, verifying trigonometric identities, solutions of trigonometric equations, and applications of trigonometry. Prerequisite: MAT 181 or pass
Part C of the University mathematics placement test (10 or higher) or SAT-math 580 or higher. [Offered in-class only, fall; in-class/Web spring; Web only, summer]. (3 crs.)
MAT195 - Discrete Mathematical Structures for Computer Science
An introduction to the theories and structures of mathematics that are relevant in computer science. Topics include set theory, formal
logic, mathematical induction, Boolean algebra, number theory, matrix algebra, combinatorics, probability, algorithmic analysis, complexity and graph theory. Prerequisite: MAT 181, pass Part C of the
University mathematics placement test (10 or higher), or SAT-Math 580 or higher. [Offered in-class, spring]. (3 crs.)
MAT199 - Pre-Calculus
An overview of the essential concepts and techniques of algebra and trigonometry required for the study of calculus. Topics include slope; lines; equations; analyses of graphs and graphing;
logarithms; trigonometric identities; and algebraic, exponential, logarithmic and trigonometric functions. Functions, their graphs and related applications are emphasized. Prerequisite: MAT 181 or
SAT-Math 640 or higher. [Offered in-class, fall and spring; Web only, summer]. (3 crs.)
MAT205 - Statistics for the Health & Social Sciences
For health and social science majors only; not counted toward a mathematics major. This course is intended to provide just-in-time algebra reviews necessary to complete statistical analysis for
various health and social sciences related problems. The following topics will be covered: frequency distribution, percentiles, measures of central tendency and variability, normal distribution and
curve, populations, samples, sampling distribution of means, sampling distributions of proportion, null and alternative hypotheses, type I and type II errors, tests of means, confidence intervals,
decision procedures, correlation, chi-square, simple analysis of variance, and design of experiments. (3 crs.)
MAT215 - Statistics
For non-mathematics majors; not counted toward a mathematics major. Frequency distribution, percentiles, measures of central tendency and variability, normal distribution, populations, samples,
sampling distribution of means, sampling distribution of proportions, type I and II errors, hypothesis testing of means, confidence intervals, decision procedures, correlation, chi-square, simple
analysis of variance, and design of experiments. Appropriate technology will be used. Prerequisite: MAT 181 or MAT 182; or pass Part A of the University math placement test (11 or higher). [Offered
in-class/Web, fall and spring; Web only, summer]. (3 crs.)
MAT225 - Business Statistics
Statistical techniques relevant to business applications. Primary emphasis is placed upon identification of appropriate statistical methods to use, proper interpretation and appropriate presentation
of results. Topics include descriptive statistics, probability concepts, the normal probability distribution, estimation techniques, tests of hypotheses, simple and multiple linear regression.
Statistical software is used to implement many of the statistical methods. Prerequisite: MAT 181 or pass Part C of the University math placement test (10 or higher). [Offered in-class/Web, fall and
spring; Web only, summer]. (3 crs.)
MAT272 - Discrete Mathematics
Introduction to theories and methods of mathematics relative to computer science but taught from a mathematics perspective. Topics include logic, set theory, elementary number theory, methods of
proofs and proof writing (direct, indirect and math induction), combinatorics, probability, relations and functions, and graph theory. Prerequisite: MAT 181, pass Part C of the University mathematics
placement test (10 or higher), or SAT-math 580 or higher. [Offered in-class, fall]. (3 crs.)
MAT273 - Applied Calculus
The techniques of differentiation and integration are covered without the theory of limits and continuity. Applications in business and biological science are considered. Prerequisite: MAT 181 or MAT
199. [Offered in-class, spring]. (3 crs.)
MAT281 - Calculus I
A study of modeling, functions, limits and continuity; the derivative; applications of the derivative. Prerequisite: MAT 181 and MAT 191 or MAT 181 and MAT 199 or appropriate score on placement test.
[Offered in-class, fall; Web only, summer]. (3 crs.)
MAT282 - Calculus II
The integral; fundamental theorem of calculus; applications of the integral; inverse functions; logarithmic functions; hyperbolic functions; techniques of integration. Prerequisite: MAT 281 (3 crs.)
MAT290 - Technology for Mathematics
This course, designed for both mathematics and science majors, and for prospective and practicing educators, details the use of technological tools in the study of mathematics and explores the
effective and appropriate use of technology in the teaching, learning, and application of mathematics. The course is composed of three components: using graphing calculators; using calculator-based
laboratories; using mathematical software. The course will be taught from a laboratory-based perspective. Prerequisite: MAT 281. [Offered in-class, spring]. (3 crs.)
MAT303 - Geometry
This course is an analysis of axiomatic systems, axiomatic development of elementary Euclidean geometry and non-Euclidean geometry. Prerequisites: MAT 272. [Offered in-class, fall]. (3 crs.)
MAT304 - History of Mathematics
This course is a historical summary of the development of mathematics. Emphasis is placed on relating mathematics to the development of world culture and its relationship with all aspects of our
culture. The lives and discoveries of many mathematicians are discussed. Methods of incorporating the history of mathematics into high school mathematics courses are a major focus of the course. This
is a writing-intensive course. Prerequisites: MAT 303 and MAT 282. [Offered in-class, spring] (3 crs.)
MAT305 - Theory of Equations
This course deals with the development of the theory involved in solving algebraic equations. It includes complex numbers as an algebraic system, polynomials in one variable, cubic and biquadratic
equations, limits of roots and rational roots, isolation and separation of roots, and the approximate evaluation of roots. Prerequisite: MAT 281. [Offered in-class, spring]. (3 crs.)
MAT341 - Linear Algebra I
This course covers systems of linear equations and matrices, determinants, vectors in n-space, vector spaces, linear transformations, eigenvalues, eigenvectors, and applications. Prerequisite: MAT
272. [Offered in-class, fall and spring]. (3 crs.)
MAT345 - Cryptography I
This course provides an introduction to Cryptography with the relevant Number Theory integrated. The following topics will be covered: Modular Arithmetic, Classical Cryptography, Public Key
Cryptography, and Introduction to Complexity. Prerequisite: MAT 195 or MAT 272 and MAT 282. (3 crs.)
MAT351 - Abstract Algebra I
Fundamental concepts of logic; natural numbers, well-ordering property, induction, elementary concepts of number theory; groups, cosets, Lagrange's theorem, normal subgroups, factor groups;
homomorphism, isomorphism, and related topics, including Cayley's theorem. Prerequisite: MAT 272. [Offered in-class, spring]. (3 crs.)
MAT361 - Nonparametric Statistics
This course provides an introduction to nonparametric statistics. It includes the introduction of nonparametric inference testing including the Wilcoxon Test, the Mann-Whitney test, the
Ansari-Bradley test, the Kruskal-Wallis test, the Kendall test and the Theil test along with their associated estimators. Students will also learn how to run analyses within a statistical software
program. Prerequisite: MAT 215 (3 crs.)
MAT381 - Calculus III
Indeterminate forms, L'Hôpital's rule, improper integrals, arc length, area of a surface of revolution, parametric equations (tangents, arc length, and areas), vectors, dot products,
cross products, vector-valued functions, vector equations of lines and planes in three space, polar coordinates (graphs, areas, and arc length), conics, sequences, bounded sequences, series, integral
test and estimates of sums, comparison tests, alternating series, absolute convergence (ratio and root tests), power series, Taylor series, and the Taylor remainder theorem. Prerequisite: MAT 282.
[Offered in-class, fall and spring]. (3 crs.)
MAT382 - Calculus IV
Vector analysis in two and three dimensions. Topics include theory of curves and surfaces; partial derivatives; multiple integrals; and Greens, Stokes and the Divergence theorems. Prerequisite: MAT
381. [Offered in-class, spring]. (3 crs.)
MAT400 - Mathematical Modeling
This course provides an introduction to mathematical modeling. Students will be presented with real-world problems from a variety of fields, such as physics, biology, earth science, meteorology,
engineering, economics, etc. Students will learn how to select appropriate mathematical models to model the real-world situation, use the model to solve a real-world problem, interpret the results of
the solution(s), and communicate their work orally and in written format. This course serves as a capstone course for students in mathematics. This is a writing-intensive course. Prerequisites: MAT
215, MAT 341 and MAT 381. [Offered in-class, fall]. (3 crs.)
MAT406 - Differential Equations
Ordinary differential equations and their solutions; the existence and uniqueness of solutions; various types of differential equations and the techniques for obtaining their solutions. Some basic
applications, including numerical and computer solution techniques, are discussed. Prerequisite: MAT 381. [Offered in-class, fall]. (3 crs.)
MAT419 - Math Internship
This course is designed for the BA in Mathematics majors who are seeking work experience in the Mathematics area. This intern experience will enable students to apply their knowledge of Mathematics
in the real workplace. The internship will provide students with valuable experience in the applications of Mathematics that should enhance their job opportunities upon graduation. Prerequisite:
Students should have completed 64 credits with a good GPA plus have sufficient background to meet the needs of the particular internship in which they will be participating. (3 crs.)
MAT441 - Linear Algebra II
Extends the concepts learned in Linear Algebra I. The content is not fixed, but usually includes the following topics: linear transformations, change-of-bases matrices, representation matrices,
inner-product spaces, eigenvalues and eigenvectors, diagonalization, and orthogonality. Prerequisite: MAT 341. [Offered in-class, spring-even yrs. only]. (3 crs.)
MAT451 - Abstract Algebra II
Study of rings, ideals, quotient rings, integral domains and fields; ring homomorphisms; polynomial rings, division algorithms, factorization of polynomials, unique factorization, extensions,
fundamental theorem; finite fields. Prerequisite: MAT 351. (3 crs.)
MAT461 - Statistical Analysis I
Basic concepts of both discrete and continuous probability theory. The study of random variables, probability distributions, mathematical expectation and a number of significant probability models.
Introduction to statistical estimation and hypothesis testing. Prerequisite: MAT 282 and MAT 215 or MAT 225. [Offered in-class, fall]. (3 crs.)
MAT462 - Statistical Analysis II
Continuation of MAT 461. Statistical theory and application of statistical estimation techniques and hypothesis testing methods. Simple linear and multiple linear regression models. Statistical
techniques are implemented with microcomputer statistical software. Prerequisite: MAT 461. [Offered in-class, spring-odd yrs. only].(3 crs.)
MAT468 - Field Experience in Math
Prerequisites: Mathematics major, completed 64 credits or permission of Dept. Chairman or course instructor. The class is not scheduled to run every semester and will be run approximately once every
two years. It gives the student an opportunity to delve into a topic of special interest to him/her. It also affords him/her an opportunity to experience research procedures in the field. The
selection of the topic or topics to be examined will vary according to the research interests of faculty and students. The course is an online course that includes a required field trip related to
the topic of the course. Examples of possible topics are “The Mathematics of Egypt,” “The Mathematicians of Europe,” and “The Mathematics of Wall Street.”
MAT474 - Complex Analysis
The course introduces the essential concepts of complex analysis: complex numbers, functions of a complex variable, their limits, continuity, derivatives, integrals, and the Cauchy integral
formula. It shows students the importance of complex analysis theory in pure mathematics, applied mathematics and engineering applications, and develops the elements of complex-variable functions in a
rigorous and self-contained manner. Prerequisites: MAT 382 or the permission of the department chairperson or the course instructor. (3 crs.)
MAT481 - Real Analysis I
This course covers logic and techniques of proof; relations, functions, cardinality and naive set theory; development of real numbers from natural numbers through topology of the line; and
convergence and related ideas dealing with functions (sequences and series), including continuity. Prerequisites: MAT 272 and MAT 381. [Offered in-class, fall]. (3 crs.)
MAT482 - Real Analysis II
This course covers logic and techniques of proof; relations, functions, cardinality and naive set theory; development of real numbers from natural numbers through topology of the line; and
convergence and related ideas dealing with functions (sequences and series), including continuity. Prerequisite: MAT 481 . [Offered in-class, fall]. (3 crs.)
MAT496 - Senior Research Project
This course, which should be taken near the end of the student's bachelor's degree program, involves an in-depth investigation of a mathematical or computer science topic (theoretical computer
science being mathematical in nature). The investigation will culminate in the presentation of a senior paper. Prerequisite: Permission of mathematics and computer science departments. (3 crs.)
Course Hero has millions of student submitted documents similar to the one below including study guides, practice problems, reference materials, practice exams, textbook help and tutor support.
Find millions of documents on Course Hero - Study Guides, Lecture Notes, Reference Materials, Practice Exams and more. Course Hero has millions of course specific materials providing students with
the best way to expand their education.
Below is a small sample set of documents:
ASU - EEE - 230
CSE/EEE 230: Computer Organization and Assembly ProgrammingAssignment 3Date assigned: October 4th, 2012Due date: 4.30 pm, October 17th, 2012 (at the start of the lecture)1. [40 points] This exercise
deals with recursive procedure calls. For the follow
ASU - EEE - 230
CSE/EEE 230: Computer Organization and Assembly ProgrammingAssignment 4Date assigned: October 26th, 2012Due date: 4.30 pm, November 5th, 2012 (at the start of the lecture)1. [40 points] The table
below contains the link-level details of two different
ASU - EEE - 230
1a)P121.00 E 93.5GHz 6s1point11pointP230.00 E 92GHz 15s1point11pointP390.00 E 95GHz 9s1point21pointif all are wrong, 0 point; else base points are 2.b)15s6s4points2GHz 5GHz2pointsc)9s15s4points30.00
E 9 18.00 E 92points
ASU - EEE - 230
CSE/EEE 230: Computer Organization and Assembly ProgrammingProject 1Date assigned: September 24th, 2012Due date: 4.30 pm, October 8th, 2012 (at the start of the lecture)1. [20 points] Modify demo
program and run it.i.Follow the demo instruction to r
ASU - EEE - 230
CSE/EEE 230: Computer Organization and Assembly ProgrammingProject 2Date assigned: October 10th, 2012Due date: 4.30 pm, October 24th, 2012 (at the start of the lecture)1. [100 points] Translate C
code to MIPS assembly.Task 1 Translate the following C
ASU - EEE - 230
CSE/EEE 230: Computer Organization and Assembly ProgrammingProject 3Date assigned: October 26th, 2012Due date: 4.30 pm, November 7th, 2012 (at the start of the lecture)Task 1 [20 points] Define
function DOT_PRODUCT, and use this function to calculate
ASU - EEE - 230
1.i.The five classic components of a computer are input, output, memory, datapath, and control, with thelast two sometimes combined and called the processor.ii.The datapath performs the arithmetic
operations, and control tells the datapath, memory, a
ASU - EEE - 230
CSE/EEE 230: Computer Organization and Assembly ProgrammingSolution 2Date assigned: September 24th, 2012Due date: 4.30 pm, October 3rd, 2012 (at the start of the lecture)1. [40 points] The following
problems deal with translating from C to MIPS. Assum
ASU - EEE - 230
CSE/EEE 230: Computer Organization and Assembly ProgrammingSolution 3Date assigned: October 4th, 2012Due date: 4.30 pm, October 17th, 2012 (at the start of the lecture)1. [40 points] This exercise
deals with recursive procedure calls. For the followin
ASU - ENG - 102
Due: Tuesday, August 24 (by class time)Worth: Ungraded but requiredIntroductory ExerciseThis exercise helps students familiarize themselves with uploading assignments to Blackboardand introduces each
students background and educational goals to the in
ASU - ENG - 102
Due: Tuesday, August 30 (by class time)Worth: Ungraded but requiredIntroductory EssayPlease write a short essay (1-2 pages) in which you evaluate the quality of Edward Kochsargument Death and Justice
(posted in the Content area of Blackboard under Rea
ASU - ENG - 102
JournalsDue: Thursday, December 2 (In class)Worth: 10%For this assignment students will keep a journal throughout the semester in which theyrespond to prompts from the instructor and classmates
Current Events presentations. Each entrywill be a freewr
ASU - ENG - 102
ASU - ENG - 102
Optional FinalDue:Thursday, December 9 in LL 545BBetween 8:45 and 10:30 AM (NO LATE PAPERS)(you may make arrangements to turn the final in early, but they must be turned inin-person)Worth: The full
points for the original assignmentIn a two-pocket
ASU - ENG - 102
The Research PaperTopic Selection: Thursday October 28, 2010Research Plan and Annotated Bibliography: November 4 or 9, 2010Peer Review:Claims and Reasons: Tuesday, November 16 2010Full Drafts:
Thursday, November 18Final Drafts: Tuesday, November 23,
ASU - ENG - 102
Due: Tuesday, September 7, 2010(peer review Thursday, September 2)Worth: 10%Length: 2-3 pages (double-spaced)Writing Assignment 1Often no progress in the argumentative process can occur because the
sides differ onwhat is at issue, share no common gr
ASU - ENG - 102
Due: October 21, 2010Peer Editing: October 19, 2010Worth: 20%Writing Assignment 3Eng 102This writing assignment asks students to evaluate a claim made by Al Gore in AnInconvenient Truth. Students may
pick any cause and effect claim except the one di
ASU - PHY - 121
Calculus 101As I write this I am looking at my copy Calculus, by Michael Spivak. It is a Spanishtranslation that I purchased more than 30 years ago for my Calculus class in college. Iwas so
fascinated by the topic and by Spivak's presentation that I re
ASU - PHY - 121
Chapter 11Dynamics of many-particle systems11.1 Many-particle systemsIn Chapter 1 we introduced the particle model. By assuming that an object can berepresented by a geometrical point, with the
entire object's mass concentrated at thispoint, we were
ASU - PHY - 121
Question 1"Projectile Motion" implies that air resistance can be ignored. See page 79.Question 2If the acceleration is constant but not zero, the trajectory is either a straight line or
aparabola. The first case occurs when the initial velocity is par
ASU - PHY - 121
Question 1The graph of velocity versus time is a straight, non-horizontal line as in Fig. 2.17.Question 2For constant non-zero acceleration the graph of x versus t is a parabola. See page 49.Question
3Free fall is motion in vacuum ( the effects of ai
ASU - PHY - 121
Question 1This is the Test of Understanding of Section 2.6 Question. Then answer is "concave up".If the acceleration is increasing with time, the velocity is increasing at an increasing
rate,and this means that the curve bends upward.Question 2The ve
ASU - PHY - 121
Question 1In uniform circular motion the speed and the magnitude of the acceleration are constant.See CAUTION statement on page 89.Question 2The period T in circular motion is the time needed for one
revolution. See page 89.Question 3In non-uniform
ASU - PHY - 121
Question 11,000,000 N is close to the weight of a blue whale. See Table 4.1Question 2If the net force disappears the acceleration also disappears because of Newton's SecondLaw. This means that
velocity cannot change any more. In other words, the objec
ASU - PHY - 121
Question 1A body is in equilibrium when it is moving at constant velocity or at rest. See page 136.In other words, the acceleration must be zero, and if the acceleration is zero the net forcemust be
zero too.Question 2The normal force can be equal, l
ASU - PHY - 121
Question 1The force that pushes the car up the hill is the friction force excerted by the road on thewheels.Question 2The only activity that does not require friction is jumping into the air.Question
3The coefficient of friction between rubber and c
ASU - PHY - 121
Question 1The normal force acquires a component in the direction of the centripetal acceleration.See Example 5.23Question 2The centrifugal force does not exist. See CAUTION section on page
159.Question 3The force needed is proportional to the accele
ASU - PHY - 121
Question 1Since the object is starting from rest, Eq. 6.6 implies.12W = 2Now if the work is W = 2W, the new speed v must satisfy12W = mv212W = mv22Now if we divide Eq. (2) by Eq. (1) we get22 2= 2 =
and this impliesv = 2 vQuestion 2T
ASU - PHY - 121
Question 1: If power is constant, we have P = F v, where P is a constant. This means that a = F/m = P/(m v). Since the velocity is changing, the acceleration is also changing. If you integrate the equations of motion (solutions are in Problem 6.92), you'll find
ASU - PHY - 121
Question 1Both students assign the same value to the change in gravitational potential energybetween roof and ground. Read the CAUTION statement on page 216.Question 2Both rocks reach the ground with
the same speed (This you know from Chapter 2, buty
ASU - PHY - 121
Question 1: If the balls have the same momentum, the same force is needed to stop them. But if they have the same kinetic energy, the bowling ball's momentum is a factor (m_bowling / m_baseball)^(1/2) larger than the baseball's momentum. Thus a force larger by th
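The quoted factor follows from p = √(2 m K) for a body with kinetic energy K; a short Python sketch (the masses and energy below are illustrative values, not from the source):

```python
import math

def momentum_from_ke(m, K):
    # K = p^2 / (2 m)  =>  p = sqrt(2 m K)
    return math.sqrt(2 * m * K)

K = 50.0                              # J, same kinetic energy for both balls
m_bowling, m_baseball = 6.0, 0.145    # kg, illustrative masses
ratio = momentum_from_ke(m_bowling, K) / momentum_from_ke(m_baseball, K)
print(ratio, math.sqrt(m_bowling / m_baseball))  # the two numbers agree
```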
ASU - PHY - 121
Question 1This is a perfectly inelastic collision. Only momentum is conserved.Question 2The moving body A stops dead. You can see this from Eq. 8.24 for the special case mA =mB.Question 3In an
elastic collision the relative velocity of the objects h
ASU - PHY - 121
Question 1The expression is only valid for constant angular acceleration. See Section 9.2Question 2The tangential component of acceleration is zero if the angular acceleration is zero. SeeEq.
9.14Question 3The centripetal component of acceleration i
ASU - PHY - 121
Question 1The gravitational potential energy of an extended object of mass M is the same as thepotential energy of a point mass M located at the position of the center of mass of theextended object.
The derivation of this is on page 301.Question 2The
ASU - PHY - 122
Data Analysis1. IntroductionScientists need very precise and accurate measurements for many reasons. We may needto test (distinguish) rival theories which make almost the same prediction (e.g. for
thetrack of an asteroid headed for earth, or the effec
ASU - PHY - 122
Data Fits. Introduction: Most experiments involve several variables that are interdependent, such as force (F) and displacement (x) for a spring. A range of F will produce a range of x, with proportionality constant k, as described by Hooke's law F(x) = kx.
ASU - PHY - 122
Friction: Determination of s and kIntroductionIn this lab, we will take measurements for which the random errors are relatively large(roughly 10 % of the measured values). We will use a statistical
analysis to determine afinal measured value with prop
ASU - PHY - 122
Error Propagation: VolumesIntroductionIn this lab you will practice error analysis by measuring volumes of a few regular shapes.You need to understand the Data Analysis document to successfully
complete thisexercise. The aim is to understand how error
ASU - PHY - 122
Vectors and Statics. Introduction: Statics is concerned with the application of Newton's laws to things which do not accelerate. Examples include the design of bridges, elasticity (forces within deformed materials), and the forces which act at the junctions
ASU - PHY - 122
Springs and OscillatorsIntroductionIn this lab you will measure the static behavior (stretch vs. force) of simple springs,practice linear fits to find the static spring constant, then make an
oscillator and test therelationship between frequency and m
ASU - PHY - 122
Density (Linearized Plots)IntroductionIn this lab you will determine the density of clay by weighing a set of hand-made spheres.You will fit a model function for mass vs. diameter using a linearized
plot and a log-logplot. The aim is to understand how
ASU - PHY - 122
Newton's Second Law of Motion. Introduction and Theory: Newton's second law of motion can be summarized by the following equation: F = ma (1), where F represents the net external force acting on an object, m is the mass of the object moving under the influenc
ASU - PHY - 122
Energy Conservation. Introduction and Theory: The energy that an object has due to its motion is called kinetic energy and is defined as K = (1/2) m v^2 (1), where m is the object's mass and v is its speed. An object will also have gravitational potential energy
ASU - PHY - 122
Projectile Motion. Introduction and Theory: Consider the projectile motion of a ball as shown in figure 1. At t = 0 the ball is released at the position specified by coordinates (0, y0) with horizontal velocity vx. Figure 1. The system of coo
ASU - PHY - 122
Conservation of Linear Momentum and (Sometimes) EnergyIntroduction and TheoryCollisions are an important way of studying how objects interact. Conservation laws havebeen developed that allow one to
say quite a bit about what is happening without knowin
ASU - PHY - 122
Collisions in Two DimensionsIntroduction and TheoryThis experiment will build from the collisions in one-dimension activity that has alreadybeen done. In two dimensions, as in one, the total linear
momentum of the system ofcolliding bodies will be con
ASU - PHY - 122
Rotational MotionIntroductionIn this lab a constant torque will create a constant angular acceleration for a rigid body rotatingabout its center of mass. You will see that the moment of inertia
depends on the rotation axis fora given object and finall
ASU - PHY - 122
Damped OscillationsIntroductionIn an earlier lab you looked at a mass and linear spring system as an oscillator, whichproduced simple harmonic motion (SHM). A pendulum is also an oscillator, where
themotion takes place along a curved path. In this lab
ASU - PHY - 122
Lab 3: Vectors and StaticsSupplementWhat is below extends the discussion on pages 7 and 8 of the Data Analysis (DA) packet.In the Vectors and Statics exercise you will need to understand how errors
in measuredangles propagate thorough the sine and cos
ASU - PHY - 131
PHY 131ShumwayEXAM 1 (5 pages + equation sheet)9:0010:15 am, Feb. 17, 20111. (60 points) Consider a uniform line of charge extending from x = a to x = 0 with total charge Q, anda charge q located at
point x = d, a distance D from the end of the line.
ASU - PHY - 131
ASU - PHY - 131
PHY 131 Shumway Exam 2 (5 pages + eq. sheet) Mar. 24, 2011. 1. (60 points) An infinitely long insulating cylinder of radius R has non-uniform charge density ρ(r) = ρ0 r^2 / R^2. In this problem you will have to use calculus on cylindrical shells and cylindrical Gau
ASU - PHY - 131
ASU - PHY - 131
ASU - PHY - 131
PHY 131ShumwayFINAL EXAM (7 pages + eq. sheet)7:30-9:20am, May 6, 20101. (90 points) A capacitor is made from two concentric spherical shells with inner radius a and outerradius b, with an insulating
dielectric medium with dielectric constant K betwee
ASU - PHY - 131
ASU - PHY - 131
PHY 131ShumwaySAMPLE FINAL EXAM (7 pages + eq. sheet + notes) 7:30-9:20am, May 6, 20101. (90 points) To save space, someone designs a capacitor with three parallel plates, each with an area A,and
separated by distances d. The middle plate holds a charg
ASU - PHY - 131
ASU - PHY - 131
PHY 131ShumwaySAMPLE EXAM 1 (5 pages + eq. sheet)for the Feb. 25, 2010 exam1. (60 points) Consider a disk of radius a with total charge Q uniformly distributed over its area, and acharge q located a
perpendicular distance x from the center of the disk
ASU - PHY - 131
PHY 131 Shumway SAMPLE EXAM 1 (5 pages + eq. sheet) for the Feb. 25, 2010 exam. Equation Sheet for Exam 1. Chapter 21: F = (1 / (4 π ε0)) |q1 q2| / r^2, and F = q E. The torque and energy for a dipole p = q d in an electric field are τ = p × E and U = −p · E. Chapter 22: Φ_E = ∮ E · dA, Φ_E = Q_enc / ε0. P
ASU - PHY - 131
PHY 131 Shumway SAMPLE EXAM 2 (5 pages + eq. sheet + notes) for the Apr. 1, 2010 exam. 1. (60 points) A cylindrical metal wire with length L and radius r carries current according to Drude's law. (a) Express the current I in terms of the density of electrons
Results 1 - 10 of 36
, 1997
"... {hustadt, schmidt}@mpi-sb.mpg.de This paper investigates the evaluation method of decision procedures for multi-modal logic proposed by Giunchiglia and Sebastiani as an adaptation from the
evaluation method of Mitchell et al of decision procedures for propositional logic. We compare three different ..."
Cited by 52 (7 self)
Add to MetaCart
{hustadt, schmidt}@mpi-sb.mpg.de This paper investigates the evaluation method of decision procedures for multi-modal logic proposed by Giunchiglia and Sebastiani as an adaptation from the
evaluation method of Mitchell et al of decision procedures for propositional logic. We compare three different theorem proving approaches, namely the Davis-Putnam-based procedure KSAT, the
tableaux-based system KRIS and a translation approach combined with first-order resolution. Our results do not support the claims of Giunchiglia and Sebastiani concerning the computational
superiority of KSAT over KRIS, and an easy-hard-easy pattern for randomly generated modal formulae. 1
"... This paper reports on an empirical performance analysis of four modal theorem provers on benchmark suites of randomly generated formulae. The theorem provers tested are the Davis-Putnam-based
procedure Ksat, the tableaux-based system KRIS, the sequent-based Logics Workbench, and a translation appro ..."
Cited by 23 (10 self)
Add to MetaCart
This paper reports on an empirical performance analysis of four modal theorem provers on benchmark suites of randomly generated formulae. The theorem provers tested are the Davis-Putnam-based
procedure Ksat, the tableaux-based system KRIS, the sequent-based Logics Workbench, and a translation approach combined with the first-order theorem prover SPASS.
- IN PROC. TABLEAUX 2007, AIX EN PROVENCE , 2007
"... The description logic SHI extends the basic description logic ALC with transitive roles, role hierarchies and inverse roles. The known tableau-based decision procedure [9] for SHI exhibit (at
least) NEXP-TIME behaviour even though SHI is known to be EXPTIME-complete. The automata-based algorithms fo ..."
Cited by 21 (11 self)
Add to MetaCart
The description logic SHI extends the basic description logic ALC with transitive roles, role hierarchies and inverse roles. The known tableau-based decision procedure [9] for SHI exhibit (at least)
NEXP-TIME behaviour even though SHI is known to be EXPTIME-complete. The automata-based algorithms for SHI often yield optimal worst-case complexity results, but do not behave well in practice since
good optimisations for them have yet to be found. We extend our method for global caching in ALC to SHI by adding analytic cut rules, thereby giving the first EXPTIME tableau-based decision procedure
for SHI, and showing one way to incorporate global caching and inverse roles.
"... We present sound, (weakly) complete and cut-free tableau systems for the propositional normal modal logics S4:3, S4:3:1 and S4:14. When the modality 2 is given a temporal interpretation, these
logics respectively model time as a linear dense sequence of points; as a linear discrete sequence of po ..."
Cited by 20 (3 self)
Add to MetaCart
We present sound, (weakly) complete and cut-free tableau systems for the propositional normal modal logics S4:3, S4:3:1 and S4:14. When the modality 2 is given a temporal interpretation, these logics
respectively model time as a linear dense sequence of points; as a linear discrete sequence of points; and as a branching tree where each branch is a linear discrete sequence of points. Although
cut-free, the last two systems do not possess the subformula property. But for any given finite set of formulae X the "superformulae" involved are always bounded by a finite set of formulae X^L
depending only on X and the logic L. Thus each system gives a nondeterministic decision procedure for the logic in question. The completeness proofs yield deterministic decision procedures for each
logic because each proof is constructive. Each tableau system has a cut-free sequent analogue proving that Gentzen's cut-elimination theorem holds for these latter systems. The techniques are due to
- In Proceedings of the 2nd International Conference on Language Resources and Evaluation , 2007
"... Abstract. We show that global caching can be used with propagation of both satisfiability and unsatisfiability in a sound manner to give an EXPTIME algorithm for checking satisfiability w.r.t. a
TBox in the basic description logic ALC. Our algorithm is based on a simple traditional tableau calculus ..."
Cited by 19 (12 self)
Add to MetaCart
Abstract. We show that global caching can be used with propagation of both satisfiability and unsatisfiability in a sound manner to give an EXPTIME algorithm for checking satisfiability w.r.t. a TBox
in the basic description logic ALC. Our algorithm is based on a simple traditional tableau calculus which builds an and-or graph where no two nodes of the graph contain the same formula set. When a
duplicate node is about to be created, we use the pre-existing node as a proxy, even if the proxy is from a different branch of the tableau, thereby building global caching into the algorithm from
the start. Doing so is important since it allows us to reason explicitly about the correctness of global caching. We then show that propagating both satisfiability and unsatisfiability via the and-or
structure of the graph remains sound. In the longer paper, by combining global caching, propagation and cutoffs, our framework reduces the search space more significantly than the framework of [1].
Also, the freedom to use arbitrary search heuristics significantly increases its application potential. A longer version with all optimisations is currently under review for a journal. An extension
for SHI will appear in TABLEAUX 2007.
, 2000
"... We give algorithms to construct the least L-model for a given positive modal logic program P, where L can be one of the modal logics KD, T, KDB, B, KD4, S4, KD5, KD45, and S5. If L ∈ {KD5, KD45, S5}, or L ∈ {KD, T, KDB, B} and the modal depth of P is finitely bounded, then the least L-model of P can ..."
Cited by 18 (16 self)
Add to MetaCart
We give algorithms to construct the least L-model for a given positive modal logic program P, where L can be one of the modal logics KD, T, KDB, B, KD4, S4, KD5, KD45, and S5. If L ∈ {KD5, KD45, S5}, or L ∈ {KD, T, KDB, B} and the modal depth of P is finitely bounded, then the least L-model of P can be constructed in PTIME and coded in polynomial space. We also show that if P has no flat models then it has the least models in KB, K5, K45, and KB5. As a consequence, the problem of checking the satisfiability of a set of modal Horn formulae with finitely bounded modal depth in KD, T, KB, KDB, or B is decidable in PTIME. The known result that the problem of checking the satisfiability of a set of Horn formulae in K5, KD5, K45, KD45, KB5, or S5 is decidable in PTIME is also studied in this work via a different method. 1.
, 1997
"... The paper shows satisfiability in many propositional modal systems can be decided by ordinary resolution procedures. This follows from a general result that resolution and condensing is a
decision procedure for the satisfiability problem of formulae in so-called path logics. Path logics arise from p ..."
Cited by 14 (4 self)
Add to MetaCart
The paper shows satisfiability in many propositional modal systems can be decided by ordinary resolution procedures. This follows from a general result that resolution and condensing is a decision
procedure for the satisfiability problem of formulae in so-called path logics. Path logics arise from propositional and normal uni- and multi-modal logics by the optimised functional translation
method. The decision result provides an alternative decision proof for the relevant modal logics (including K, KD, KT and KB, their combinations ...
- KB, KDB, K5, KD5. Studia Logica
"... Abstract. We give complete sequent-like tableau systems for the modal logics KB, KDB, K 5, and KD5. Analytic cut rules are used to obtain the completeness. Our systems have the analytic
superformula property and can thus give a decision procedure. Using the systems, we prove the Craig interpolation ..."
Cited by 12 (10 self)
Add to MetaCart
Abstract. We give complete sequent-like tableau systems for the modal logics KB, KDB, K 5, and KD5. Analytic cut rules are used to obtain the completeness. Our systems have the analytic superformula
property and can thus give a decision procedure. Using the systems, we prove the Craig interpolation lemma for the mentioned logics. 1
- Advances in Modal Logic - Volume 5 , 2005
"... abstract. We study and give a summary of the complexity of 15 basic normal monomodal logics under the restriction to the Horn fragment and/or bounded modal depth. As new results, we show that:
a) the satisfiability problem of sets of Horn modal clauses with modal depth bounded by k ≥ 2 in the modal ..."
Cited by 9 (2 self)
Add to MetaCart
abstract. We study and give a summary of the complexity of 15 basic normal monomodal logics under the restriction to the Horn fragment and/or bounded modal depth. As new results, we show that: a) the
satisfiability problem of sets of Horn modal clauses with modal depth bounded by k ≥ 2 in the modal logics K 4 and KD4 is PSPACE-complete, in K is NP-complete; b) the satisfiability problem of modal
formulas with modal depth bounded by 1 in K 4, KD4, and S4 is NP-complete; c) the satisfiability problem of sets of Horn modal clauses with modal depth bounded by 1 in K, K 4, KD4, and S4 is
PTIME-complete. In this work, we also study the complexity of the multimodal logics Ln under the mentioned restrictions, where L is one of the 15 basic monomodal logics. We show that, for n ≥ 2: a)
the satisfiability problem of sets of Horn modal clauses in K5n, KD5n, K45n, and KD45n is PSPACE-complete; b) the satisfiability problem of sets of Horn modal clauses with modal depth bounded by k ≥
2 in Kn, KBn, K5n, K45n, KB5n is NP-complete, and in KDn, Tn, KDBn, Bn,
- CMCS , 2008
"... We study sequent calculi for propositional modal logics, interpreted over coalgebras, with admissibility of cut being the main result. As applications we present a new proof of the (already
known) interpolation property for coalition logic and establish the interpolation property for the conditional ..."
Cited by 8 (7 self)
Add to MetaCart
We study sequent calculi for propositional modal logics, interpreted over coalgebras, with admissibility of cut being the main result. As applications we present a new proof of the (already known)
interpolation property for coalition logic and establish the interpolation property for the conditional logics CK and CK Id.
MATHEMATICA BOHEMICA, Vol. 127, No. 2, pp. 203-209 (2002)
Some recent results on the existence of global-in-time weak solutions to the Navier-Stokes equations of a general barotropic fluid
Eduard Feireisl
Eduard Feireisl, Mathematical Institute AV CR, Zitna 25, 115 67 Praha 1, Czech Republic, e-mail: feireisl@math.cas.cz
Abstract: This is a survey of some recent results on the existence of globally defined weak solutions to the Navier-Stokes equations of a viscous compressible fluid with a general barotropic
pressure-density relation.
Keywords: compressible Navier-Stokes equations, global existence, weak solutions
Classification (MSC2000): 35Q30, 35A05
© 2005 ELibM and FIZ Karlsruhe / Zentralblatt MATH for the EMIS Electronic Edition
Math Topics - Algebra, Geometry, Calculus, Statistics
Math is considered one of those subjects where some may need more help then others. Now you don’t have to worry! Our math tutoring including Arithmetic, Pre-Algebra, Algebra 1 & 2, Pre-calculus,
Trigonometry, Calculus (Thru A.P.), Geometry, Integrated Math, Math Analysis, Statistics (Thru A.P.) and many more will have you completing your work timely and accurately.
Our professional math tutors are presented to best accompany the needs of the student. Whether you learn better in a classroom atmosphere or your availability limits your options, we are here to make
sure you are learning to the best of your ability and are overly satisfied with our service.
We are positive that our Math tutors will be able to boost your grade point average in the classroom! For more information on Math tutoring, please don’t hesitate to call.
Generating stress scenarios: null correlation is not enough
December 28, 2010
By arthur charpentier
In a recent post (here, by @teramonagi), Teramonagi mentioned the use of PCA to model the yield curve, i.e. to obtain the three factors, "parallel shift", "twist" and "butterfly". As in Nelson & Siegel, if m is maturity and y(m) is the yield of the curve at maturity m, assume that
y(m) = β[0] + β[1]·(1 − e^(−m/τ))/(m/τ) + β[2]·((1 − e^(−m/τ))/(m/τ) − e^(−m/τ)),
where β[0], β[1], β[2] and τ are parameters to be fitted
• β[0] is interpreted as the long run levels of interest rates (the loading is 1, it is a constant that does not decay)
• β[1] is the short-term component (it starts at 1, and decays monotonically and quickly to 0);
• β[2] is the medium-term component (it starts at 0, increases, then decays to zero);
• τ is the decay factor: small values produce slow decay and can better fit the curve at long maturities, while large values produce fast decay and can better fit the curve at short maturities; τ
also governs where β[2] achieves its maximum.
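As an aside (not in the original post), the three loading functions described in the bullets can be evaluated directly; this Python sketch uses an arbitrary decay factor τ = 1:

```python
import math

def loadings(m, tau=1.0):
    # Nelson-Siegel factor loadings at maturity m
    slope = (1 - math.exp(-m / tau)) / (m / tau)   # short-term component
    curv = slope - math.exp(-m / tau)              # medium-term component
    return 1.0, slope, curv                        # level loading is constant

for m in (0.01, 1, 2, 5, 30):
    level, slope, curv = loadings(m)
    print(f"m={m:>5}: level={level:.2f} slope={slope:.3f} curv={curv:.3f}")
# slope starts near 1 and decays to 0; curv starts near 0, humps, then decays
```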
(see e.g. here). Those factors can be obtained using PCA,
term.structure = read.csv("C:\\tmp\\FRB_H15.csv")  # further read.csv() arguments were lost in extraction
term.structure = tail(term.structure, 1000)
term.structure = term.structure[, -1]
label.term = c("1M","3M","6M","1Y","2Y","3Y","5Y")  # the label vector was truncated in the source; longer maturities followed
colnames(term.structure) = label.term
term.structure = subset(term.structure, term.structure$'1M' != "ND")
term.structure = apply(term.structure, 2, as.numeric)
term.structure.diff = diff(term.structure)
term.structure.princomp = princomp(term.structure.diff)
factor.loadings = term.structure.princomp$loadings[, 1:3]
legend.loadings = c("First principal component",
"Second principal component", "Third principal component")
matplot(factor.loadings, type = "l",  # call reconstructed: the opening line was lost in extraction
lwd = 3, lty = 1, xlab = "Term", ylab = "Factor loadings")
> summary(term.structure.princomp)
Importance of components:
Comp.1 Comp.2 Comp.3 Comp.4 Comp.5 Comp.6 Comp.7 Comp.8
Standard deviation 0.2028719 0.1381839 0.06938957 0.05234510 0.03430404 0.022611518 0.016081738 0.013068448
Proportion of Variance 0.5862010 0.2719681 0.06857903 0.03902608 0.01676075 0.007282195 0.003683570 0.002432489
Cumulative Proportion 0.5862010 0.8581690 0.92674803 0.96577411 0.98253486 0.989817052 0.993500621 0.995933111
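For readers without R: the "Proportion of Variance" rows reported by summary() are just the normalized eigenvalues of the sample covariance matrix of the yield changes. A minimal pure-Python sketch on synthetic, made-up two-series data (not the FRB data above):

```python
import math, random

random.seed(0)
f = [random.gauss(0, 1) for _ in range(5000)]          # one common driving factor
x = [v + 0.3 * random.gauss(0, 1) for v in f]          # two series sharing the factor
y = [v + 0.3 * random.gauss(0, 1) for v in f]

n = len(x)
mx, my = sum(x) / n, sum(y) / n
sxx = sum((a - mx) ** 2 for a in x) / (n - 1)
syy = sum((b - my) ** 2 for b in y) / (n - 1)
sxy = sum((a - mx) * (b - my) for a, b in zip(x, y)) / (n - 1)

# eigenvalues of the 2x2 covariance matrix, in closed form
tr, det = sxx + syy, sxx * syy - sxy ** 2
lam1 = tr / 2 + math.sqrt((tr / 2) ** 2 - det)
lam2 = tr - lam1
prop = [lam1 / tr, lam2 / tr]
print(prop)  # analogue of the "Proportion of Variance" row in summary()
```

With a strong common factor, the first component carries most of the variance, exactly as in the yield-curve output above.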
Using Teramonagi's R code, we can extract those factors (or components) and then aggregate them (using the expression given above). With Principal Component Analysis (PCA) we get uncorrelated components, while with Independent Component Analysis (ICA) we get independent components. And independence and null correlation can be rather different. We recently discussed that idea in a paper with Christophe Villa (available soon).
Consider the following sample
I = (Y < .25) * (Y < 3*X) * (Y > X/3)  # indicator of a triangular region; the continuation of this line (and the lines defining X and Y) were lost in extraction
op <- par(mfrow = c(1, 2))
plot(FACT1[1:2000], FACT2[1:2000], main = "Principal component analysis",
col = "black", cex = .2, xlab = "", ylab = "", xaxt = "n", yaxt = "n")
plot(PCA$scores, cex = .2, main = "Principal component analysis")  # remaining arguments lost in extraction
The PCA gives the following projections on the two components (drawn in red, below)
> X=PCA$scores[,1];
> Y=PCA$scores[,2];
> n=length(FACT1)
> x=X[sample(1:n,size=n,replace=TRUE)]
> y=Y[sample(1:n,size=n,replace=TRUE)]
> PCA$loadings
Comp.1 Comp.2
FACT1 -0.707 0.707
FACT2 -0.707 -0.707
Comp.1 Comp.2
SS loadings 1.0 1.0
Proportion Var 0.5 0.5
Cumulative Var 0.5 1.0
> F1=0.707*x-.707*y
> F2=0.707*x+.707*y
Hence, with PCA, we have two orthogonal components with a triangular distribution, so if we generate them independently, we obtain
something quite different compared with the original sample. On the other hand, with ICA, we obtain factors that are really independent....
plot(ICA$S, cex = .2, main = "Independent component analysis")  # remaining arguments lost in extraction
see below for the graphs and further details.
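The post's central point (null correlation does not imply independence) can also be seen with a minimal simulation. This Python sketch is an addition, not part of the original R code: Y is a deterministic function of X, yet their Pearson correlation is essentially zero.

```python
import math, random

def pearson(u, v):
    # plain sample Pearson correlation
    n = len(u)
    mu, mv = sum(u) / n, sum(v) / n
    cov = sum((a - mu) * (b - mv) for a, b in zip(u, v))
    su = math.sqrt(sum((a - mu) ** 2 for a in u))
    sv = math.sqrt(sum((b - mv) ** 2 for b in v))
    return cov / (su * sv)

random.seed(42)
x = [random.gauss(0, 1) for _ in range(100_000)]
y = [v * v - 1 for v in x]          # y is completely determined by x

r = pearson(x, y)
print(round(r, 3))                   # near 0: uncorrelated...
r2 = pearson([v * v for v in x], y)
print(round(r2, 3))                  # ...yet y is a linear function of x^2 (correlation exactly 1)
```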
Chuntak F.
My weekly schedule is basically full with one, at most two non-regular openings.
Due to traffic, I could only travel in the neighborhood (Dublin, Pleasanton,
Livermore and San Ramon). Weekdays 11am-3pm remain mostly open. Thanks you very
much for your support.
I am an experienced Math tutor (4 recent years, 50+ students), college instructor and software engineer. I have helped students succeed in everything from the most difficult to the most elementary Math courses. I have an unmatched Math (with technical) background and know many issues of learning Math in general and any particular class. I was admitted to UCB Math PhD program (but chose Purdue PhD for some
reason). Some of my students eventually go to top colleges (UCB (>2), Davis, UOP). Others did learn the basic Math they needed in their jobs and lives. My rates are very reasonable given what I could
do to help. Please read on for more detail.
I have tutored and helped college students since the 80s and substantially in the last 5 years, on all HS Math classes (many Honors). I have a Math PhD (and MS in EE, CS) from Purdue and taught
various courses before and after graduation, and regularly at San Jose State U (Calculus III for Fall 2012) and junior colleges.
Calculus: I have many Calculus students (AP and non-AP) and taught Calculus II, III at SJSU. I know many issues of learning Calculus and have effective solutions. I also wrote 5 sections for a free
Calculus textbook. On top of 4 standard Calculus courses, I have taken several graduate level Analysis courses (real and complex variables, topology, measure theory).
Honors class and Math competition: I have helped many Honors class students with extra practices and drills I set myself. One of my student also got first in MathCount in his city.
Basic: I have taught students (many of them adults) the basics of Math (4 operations, fraction, decimal) systematically so they gained valuable skills in their jobs and lives.
I can help in these areas: pre-Algebra, (Honors) Algebra I & II, Geometry (with proofs), pre-Calculus, (AP, College, 3-D) Calculus, Linear Algebra, Probability, (lower class) Physics*, and Java*.
*I have very intensive training in mechanics including kinematics, Newton's Laws (with friction) and statics. I also have taken graduate courses in electromagnetism and optics. I worked on
applications research and system software (e.g. Open Source) to support applications including the Java (and J2ME) and Bluetooth communication stacks.
Chuntak's subjects
Yonkers ACT Tutor
Find a Yonkers ACT Tutor
...Logic serves as the fundamental building block of all studies in academia: philosophy, math, science, and even computer science all depend on what Aristotle would call a "New and Necessary" way of thinking. As a Mathematics major one becomes very familiar with logic in the following ways: Ma...
16 Subjects: including ACT Math, Spanish, algebra 1, calculus
...My students over the past two years have a median grade of 4 on the AP exam, with more than 50% earning a 5 (highest possible score), more than 75% earning a 4 or above, and more than 90%
earning a 3 or above. I have taught my school's Precalculus course for seven years. Each school has a diffe...
9 Subjects: including ACT Math, calculus, geometry, algebra 1
...I have a Ph.D. in chemical engineering from the California Institute of Technology, and a minor concentration in applied mathematics. I have worked over 20 years in research in the oil,
aerospace, and investment management industries. I also have extensive teaching experience -- both as a mathematics tutor and an adjunct professor.
11 Subjects: including ACT Math, calculus, algebra 2, algebra 1
I have extensive experience tutoring and coaching all ages from elementary school students through college age students. I've done official tutoring work through my college as well as outside
tutoring work. One of the main things I focus on is developing an active interest in their work and building up the confidence that they are capable of doing it.
25 Subjects: including ACT Math, chemistry, algebra 1, algebra 2
...I am patient and encouraging. Physical Science may include the following subjects: Physics-the forces on the matter around us, its energy and how matter moves; Chemistry - the nature of matter
and how atoms interact forming the various molecules in us and around us; Earth Science study of the...
17 Subjects: including ACT Math, chemistry, physics, geometry
Math Forum Discussions
Topic: geometry puzzle
Replies: 6 Last Post: Oct 8, 2013 9:11 PM
geometry puzzle
Posted: Oct 8, 2013 12:37 AM
Given a triangle, with sides A, B, C, and
opposite angles a, b, c.
Prove: if A < (B + C)/2, then a < (b + c)/2
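A quick numerical sanity check of the claim (not a proof): sample random triangles via their angles, recover the sides from the law of sines, and verify the implication. This Python sketch is an addition to the thread:

```python
import math, random

random.seed(1)
hits = 0
for _ in range(10_000):
    w = [random.expovariate(1.0) for _ in range(3)]
    s = sum(w)
    a, b, c = (math.pi * t / s for t in w)            # random angles summing to pi
    A, B, C = math.sin(a), math.sin(b), math.sin(c)   # side lengths, by the law of sines
    if A < (B + C) / 2:
        hits += 1
        assert a < (b + c) / 2                        # the claimed implication
print(hits, "triangles satisfied the hypothesis; no counterexample found")
```

A hint toward the actual proof: since sin is concave on (0, π), (sin b + sin c)/2 ≤ sin((b + c)/2), so A < (B + C)/2 forces sin a < sin((b + c)/2), from which a < (b + c)/2 follows by a monotonicity argument.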
Algorithms Flowcharts And Pseudocode
Algorithms Flowcharts And Pseudocode PPT
ALGORITHMS AND FLOWCHARTS. A typical programming task can be divided into two phases: the problem solving phase produces an ordered sequence of steps that describe the solution of the problem (this sequence of steps is called an algorithm); the implementation phase implements the program in ...
ALGORITHMS AND FLOWCHARTS. A typical programming task can be divided into two phases: Problem solving phase. produce an ordered sequence of steps that describe solution of problem
ALGORITHMS AND FLOWCHARTS ... (one can use pseudocode) Refine the algorithm successively to get step by step detailed algorithm that is very close to a computer language. Pseudocode is an artificial
and informal language that helps programmers develop algorithms.
ALGORITHMS AND FLOWCHARTS ALGORITHMS AND FLOWCHARTS A programming task divided into two phases: ... Pseudocode: An artificial and informal language that helps programmers develop algorithms.
Pseudocode is very similar to everyday English.
ALGORITHMS AND FLOWCHARTS Computer programming can be divided into two phases: ... Pseudocode is an artificial and informal language that helps programmers develop algorithms. Pseudocode may be an
informal english, combinations of computer languages and spoken language.
ALGORITHMS AND FLOWCHARTS ALGORITHMS AND FLOWCHARTS A typical programming task can be divided into two phases: ... Pseudocode is an artificial and informal language that helps programmers develop
algorithms. Pseudocode is very similar to everyday English.
Algorithms (Flowcharts and Pseudocode) Introduction Objectives What is an algorithm? What components does an algorithm have? What is modularity? What is an Algorithm?
ALGORITHMS AND FLOWCHARTS Examples The Flowchart (Dictionary) A schematic representation of a sequence of operations, as in a manufacturing process or computer program.
Pseudocode is an artificial and informal language that helps programmers develop algorithms. ... ALGORITHMS AND FLOWCHARTS Author: Mustafa Uyguroglu Last modified by: syousaf Created Date: 11/8/2004
9:34:17 AM Document presentation format:
Pseudocode Flowcharts Pseudocode Pseudocode is an artificial and informal language that helps programmers develop algorithms. Pseudocode is a "text-based" detail (algorithmic) design tool. ...
ALGORITHMS AND FLOWCHARTS FIRST YEAR Author: Pharaonic Last modified by: Dr.Eng. H. M. Mousa Created Date: 3/9 ... THIRD YEAR Graph Algorithms Depth-first search Depth-first search Depth-first search
Depth-first search Pseudocode of Depth-first search Slide 8 Example What is the running time ...
Pseudocode is an artificial and informal language that helps programmers develop algorithms. ... ALGORITHMS AND FLOWCHARTS Author: Mustafa Uyguroglu Last modified by: USER Created Date: 11/8/2004
9:34:17 AM Document presentation format:
Pseudocode is an artificial and informal ... Roman Flow 1_Flow 2_Flow 3_Flow 4_Flow 5_Flow 6_Flow Microsoft Visio Drawing MathType 5.0 Equation Slide 1 ALGORITHMS AND FLOWCHARTS Steps in Problem
Solving Pseudocode & Algorithm Pseudocode & Algorithm Pseudocode & Algorithm The ...
Problem Solving Methods Learning Objectives Define an algorithm Know what is meant by "decomposition" of a problem Learn how to write algorithms using flowcharts and pseudocodes Know what is meant by
"top-down" design method Problem Solving Process General Problem Solving Method Define and ...
1.4 Programming Tools Flowcharts Pseudocode Hierarchy Chart Direction of Numbered NYC Streets Algorithm Class Average Algorithm * * * * * * * * * * * * * * We need a loop to read and then add
(accumulate) the grades for each student in the class.
Algorithms can be described in various ways … Pseudocode. Flowcharts. Pseudocode. Pseudocode is an artificial and informal language that helps programmers develop algorithms. Pseudocode is a
"text-based" detail (algorithmic) design tool.
Pseudocode and Flowcharts Expressing Algorithms to Solve Programming Challenges Program Development Define the problem Outline the solution Develop the outline into an algorithm Test the algorithm
(desk check) Code the algorithm Run the program and debug it Document and maintain the program ...
6. DESIGN II: DETAILED DESIGN Software Engineering Roadmap: Chapter 6 Focus Chapter Learning Goals Understand how design patterns describe some detailed designs Specify classes and functions
completely Specify algorithms use flowcharts use pseudocode 1.
Objectives: Understand general properties of algorithms Get familiar with pseudocode and flowcharts Learn about iterations and ... for finding the greatest common factor (circa 300 BC) Binary Search
(guess-the-number game) Tools for Describing Algorithms Pseudocode A sequence of ...
Title: Designing computer algorithms with flowcharts and pseudo-code Author: James Tam Last modified by: tamj Created Date: 8/18/1995 10:27:02 AM Document presentation format
Figure that each line of pseudocode requires a constant amount of time. One line may take a different amount of time than another, but each execution of line . i . ... ALGORITHMS AND FLOWCHARTS FIRST
YEAR Last modified by: root Company:
... Flowcharts Flowchart - a graphical way of writing algorithms Flowcharts Symbols: Calculations Flowcharts ... Pseudocode Pseudocode Example Common Pseudocode Keywords Writing Pseudocode Algorithms
Pseudocode Sequential Algorithm Example (1/2) Pseudocode Sequential Algorithm ...
Chapter 2 - Algorithms and Design print Statement input Statement and Variables Assignment Statement if Statement Flowcharts Flow of Control Looping with Flowcharts Looping with Pseudocode Tracing
Pseudocode Syntax Summary Loop Termination Counter loop
... Security Pirates of Silicon Valley Writing Lab Reports Binary Mathematics Flowcharts and Algorithms Today’s Topics Algorithms Flowcharts Pseudo code Problem Solving For the rest of the semester,
we will practice problem-solving.
Flowcharting Shapes Pseudocode Pseudocode is structured english that is used as an alternative method to flowcharts for planning structured programs. ... Tools of Algorithms Flowcharts Flowcharting
Shapes Flowcharting Shapes Flowcharting Shapes Pseudocode References ...
... INIT Add one: INCREMENT, BUMP Decisions: TEST, IF/THEN/ELSE, WHILE/DO Writing Pseudocode Algorithms Be certain the task is completely specified! Questions to ask: What data is known before the
program runs? ... Flowcharts Flowchart ...
... Flowcharts are subjective Pseudocode Pseudocode is an artificial and informal language that helps programmers develop algorithms. Pseudocode ... -Down Design Structure Charts Top-Down Design A
General Example Flowcharts and Pseudocode Flowchart Pseudocode Flowcharts & Pseudocode are ...
1 Chapter 2 - Algorithms and Design Example Algorithm and Variables print Statement input Statement Assignment Statement if Statement Flowcharts Flow of Control
School of Business Eastern Illinois University Program Flowchart, Pseudocode & Algorithm development Week 2: Wednesday 1/22/2003 Friday 1/24/2003
... Analytic vs Numerical Solution Software Packages Objectives Computer Implementation of Numerical Methods Introduce Algorithms Discuss Flowcharts Discuss Pseudocodes Programming Tools and Styles
Computer Implementation Algorithm Flowchart Pseudocode Computer Program Algorithms & Flowcharts ...
Algorithms Recipes for disaster? Topics Covered Programming vs. problem solving What is an algorithm? Algorithm development common constructs (loops, conditionals, etc) Representing algorithms
flowcharts pseudocode Programming vs. Problem Solving Two very distinct tasks!
Algorithms, Flowcharts, and Pseudocode Flowchart Symbols Fundamental Control Structures 1. Sequence 2. Selection 3. Repetition *Any program can be constructed using only these three operations
Selection Repetition Types of Error Human Error Data Error Example: How tall is he/she? As accuracy ...
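The three fundamental control structures named in these slides (sequence, selection, repetition) can be made concrete with a short Python sketch; this "class average" example is my own, loosely modeled on the algorithm the slides mention, not taken from any of the listed presentations:

```python
def class_average(grades):
    """Pseudocode made concrete: sequence, selection, repetition."""
    total = 0
    count = 0
    for g in grades:      # repetition: visit each grade in turn
        if g >= 0:        # selection: skip negative sentinel values
            total += g    # sequence: ordered steps executed one after another
            count += 1
    if count == 0:
        return None       # selection again: no valid grades at all
    return total / count

print(class_average([80, 90, -1, 70]))  # -> 80.0
```

Any program, as the slide says, can be built from just these three operations.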
... and are rarely used for complex or technical algorithms Pseudocode & Flowcharts Pseudocode and flowcharts are structured ways to express algorithms that avoid many of the ambiguities common in
natural language statements, ...
ALGORITHMS Algorithms are the steps needed to solve a problem using pseudocode or flowcharts. After creating an algorithm, programmers check its logic. A logic error is a mistake in the way an
algorithm solves a problem.
Design the program Create a detailed description of program Use charts or ordinary language (pseudocode) Identify algorithms needed Algorithm: a step-by-step method to solve a problem or complete a
Algorithms and pseudocode Notes adapted from Marti Hearst at UC Berkeley and David Smith at Georgia Tech Upcoming Beginning Java Reading “Cyberspace as a Human Right”, William McIver
Flowcharts Flowcharts Bohm and Jacopini were not the first to use flowcharts but they were the first to formalize the ... Binary Bypass Binary Choice Pseudocode Sometimes we use more formal language
to describe algorithms - pseudocode. It looks something like the code in a computer programming ...
Flowchart & Pseudocode Flowcharts (pictorial representations) and pseudocode (English-like representations) ... Foundations of Algorithms Author: user Last modified by: piano Created Date: 3/1/2006
9:23:42 PM Document presentation format: On-screen Show
2.2 Programming Tools Flowcharts Pseudocode Hierarchy Chart Examples: Direction of Numbered NYC Streets Algorithm Class Average Algorithm Programming Tools Three tools are used to convert algorithms
into computer programs: Flowchart ...
... IPO charts, pseudocode, flowcharts To desk-check or hand-trace, use pencil, paper, and sample data to walk through algorithm A coded algorithm is called a program Creating Computer Solutions to
Problems ... Fifth Edition * Hints for Writing Algorithms ...
ALGORITHMS Algorithm: ... There are several ways to represent algorithms. Such as pseudocode, flowcharts, … Pseudocode: Individual operations are described briefly and precisely in English standard
statements consists of nouns and verbs Pseudocode: ...
Express an algorithm using flowcharts, pseudo-code or structured English and the standard constructs: Sequence. Assignment. Selection. Repetition. ... Structured English - A restricted part of the
English language used to describe algorithms Pseudocode
FUNDAMENTALS: ALGORITHMS, INTEGERS AND MATRICES BCT 2083 DISCRETE STRUCTURE AND APPLICATIONS SITI ZANARIAH SATARI ... Natural languages – used any language – rarely used for complex or technical
algorithms. Pseudocode & Flowcharts ...
Algorithm development and descriptions of algorithms using flowcharts and pseudocode. ... 1_Default Design Chapter 3 Outline Objectives Algorithm Development Algorithm Development Top-Down Design
Pseudocode Notation and Flowchart Symbols Structured Programming Structured ...
1 Program Design Simple Program Design Third Edition A Step-by-Step Approach Objectives To describe the steps in the program development process To explain structured programming To introduce
algorithms and pseudocode To describe program data Steps in Program Development Programming can be ...
... Compute circle area as pi* r2 Print the value of circle area How do we represent more complex algorithms Pseudocode, flowcharts (will introduce flowcharts later) ...
Representation of Algorithms Pseudocode (Pseudo = not real; false, ... convenient (for example, for trivial operations such as swapping two variables). Flow Charts Another way to represent algorithms
is by using flowcharts. Flowcharts can be thought of as a graphical form of pseudocode. A ...
CIS162AB - C++ Flow Control if, while, do-while Juan Marquez (03_flow_control.ppt) Overview of Topics Pseudocode Control Structures Flowcharts Single and Compound Boolean Expressions Single and
Compound Statements If, if-else, nested ifs While, do-while, nested loops Pseudocode Pseudocode is a ... | {"url":"http://ebookily.org/ppt/algorithms-flowcharts-and-pseudocode","timestamp":"2014-04-23T15:11:23Z","content_type":null,"content_length":"43023","record_id":"<urn:uuid:8577fda9-5859-4ba4-b0ca-b3367f597faa>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00421-ip-10-147-4-33.ec2.internal.warc.gz"} |
Structural and extremal results in graph theory
Files in this item
LeSaulnier_Timothy.pdf (778KB) (no description provided) PDF
Title: Structural and extremal results in graph theory
Author(s): LeSaulnier, Timothy
Advisor(s): West, Douglas B.
Contributor(s): West, Douglas B.; Füredi, Zoltán; Ferrara, Michael J.
Department / Mathematics
Discipline: Mathematics
Granting University of Illinois at Urbana-Champaign
Degree: Ph.D.
Genre: Doctoral
Subject(s): immersion; immersion-closed; rainbow Ramsey theory; rainbow domination; rainbow edge-chromatic number; rainbow decomposition; acquisition; acquisition-extremal
An H-immersion is a model of a graph H in a larger graph G. Vertices of H are represented by distinct "branch" vertices in G, while edges of H are represented by edge-disjoint walks in G
joining branch vertices. By the recently proved Nash-Williams Immersion Conjecture, any immersion-closed family is characterized by forbidding the presence of H-immersions for a finite
number of graphs H. We offer descriptions of some immersion-closed families along with their forbidden immersion characterizations. Our principal results in this area are a
characterization of graphs with no K_{2,3}-immersion, and a characterization of graphs with neither a K_{2,3}-immersion nor a K_4-immersion. We study the maximum number of edges in an
n-vertex graph with no K_t-immersion. For t≤7, we determine this maximum value. When 5≤t≤7, we characterize the graphs with no K_t-immersion having the most edges. Given an edge-colored
graph, a rainbow subgraph is a subgraph whose edges have distinct colors. We show that if the edges of a graph G are colored so that at least k colors appear at each vertex, then G
contains a rainbow matching of size floor(k/2). We consider the rainbow edge-chromatic number of an edge-colored graph, \chi'_r(G), which we define to be the minimum number of rainbow
matchings partitioning the edge set of G. A d-tolerant edge-colored graph is one that contains no monochromatic star with d + 1 edges. We offer examples of d-tolerant n-vertex
edge-colored graphs G for which \chi'_r(G) ≥ (d/2)(n-1) and prove that \chi'_r(G) < d(d+1)n ln n for all such graphs. We study the rainbow domination number of an edge-colored graph,
\widehat{\gamma}(G), which we define to be the minimum number of rainbow stars covering the vertex set of G. We generalize three bounds on the domination number of graphs. In particular, we show
that \widehat{\gamma}(G) ≤d/(d+1)n for all d-tolerant n-vertex edge-colored graphs G and characterize the edge-colored graphs achieving this bound. A total acquisition move in a weighted
graph G moves all weight from a vertex u to a neighboring vertex v, provided that before this move the weight on v is at least the weight on u. The total acquisition number, a_t(G), is
the minimum number of vertices with positive weight that remain in G after a sequence of total acquisition moves, starting with a uniform weighting of the vertices of G. We offer an
independent proof that a_t(G)≤floor((|V(G)|+1)/3) for all graphs with at least two vertices. In addition, we characterize graphs achieving this bound. If a_t(G)=(|V(G)|+1)/3 , then G\in
T\cup{P_2,C_5}, where T is the family of trees that can be constructed from P_5 by iteratively growing paths with three edges from neighbors of leaves.
Issue Date: 2012-02-06
Genre: thesis
URI: http://hdl.handle.net/2142/29742
Rights Some material copyright 2010 Timothy LeSaulnier.
Available in 2012-02-06
Date 2011-12
This item appears in the following Collection(s)
Item Statistics
• Total Downloads: 271
• Downloads this Month: 1
• Downloads Today: 0
My Account
Access Key | {"url":"https://www.ideals.illinois.edu/handle/2142/29742","timestamp":"2014-04-17T00:48:32Z","content_type":null,"content_length":"24438","record_id":"<urn:uuid:5b9f554e-b38f-4210-8245-b478d182c50f>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00336-ip-10-147-4-33.ec2.internal.warc.gz"} |
Tag Clouds: Usability and Math
For the purposes of illustration, I created a dataset of well-known authors in our field, with the number of hits these names score in a Google search. When I use the raw data to create a tag cloud,
I get the result in Figure 2(a). The tag cloud presents most of the names in approximately the same size. Only some names jump out, and some are nearly illegible. The reason is that the weights are
not distributed evenly over the range of the source data. Most of the authors on my bookshelf have (roughly) the same number of Google hits. Only some authors have either very many or very few hits.
It appears you can recognize a normal distribution (or Gaussian distribution) here, of which you can see examples in Figure 3. To get a more evenly distributed range of font sizes in the tag cloud,
it is necessary to "linearize" the original values. You get a better result when you use a linearized representation, as in Figure 2(b). Technically, linearization means that the weights become less
accurate. But because the tags have differing word lengths, there is already no such thing as an accurate reflection of the weights. Here, we are interested in usability, not accuracy.
The Pareto distribution, or "80-20 rule" (see Figure 4) is also frequently encountered. In this distribution, 80 percent of the weights are in the lowest 20 percent of the range, while the other 20
percent fill the remaining 80 percent of the range, or the other way around. Well-known examples of this distribution include wealth among people, popularity of websites, and the frequency of words
from the English language. You need to select the right algorithm for linearization of your dataset. In Figure 2(c), my dataset (which contains a normal distribution) is linearized as if it contained
a Pareto distribution. The result can be weird when you select the wrong distribution model. Strangely enough, I've noticed several authors doing exactly the opposite: they linearized datasets that
contained Pareto distributions assuming (unknowingly, I suppose) that they were normal distributions. Evidently, statistical knowledge itself is not distributed evenly among software developers.
You will need several functions when linearizing multiple types of distributions. Each function only needs one collection of weights as input, and it returns a new (linearized) version of the
collection. I suggest you work with generic interfaces for collections so that you can apply the same functions to different types of data sources. It is necessary to specify explicit upper and lower
boundaries to the desired range of output values. It also seems proper to work with decimal or real numbers, not integers. Rounding the values to integers should be left to the UI code, in my opinion.
Listing Two is my attempt at linearizing a normal distribution, which is partly based on some examples on the Internet. The function calculates the standard deviation (sd) and makes the statistically
correct assumption that nearly all numbers will be in the range -2 * sd to + 2 * sd. For each number, a new weight is calculated on a straight line through that range. Listing Three presents an
algorithm that linearizes a Pareto distribution. This function calculates a new weight for each number using a logarithm, with e as the base number. (Diehards among us will not be satisfied with this
and can determine from their own source data which base number would render the best approximation.) The remainder of the function in this case also plots the new values on a fictitious linear line
between the minimum and maximum values.
Public Shared Function FromBellCurve( _
ByVal weights As ICollection(Of Decimal), _
ByVal minSize As Decimal, ByVal maxSize As Decimal) _
As ICollection(Of Decimal)
'First, calculate the mean weight.
Dim meansum As Decimal = 0
For Each w As Decimal In weights
meansum += w
Next
Dim mean As Double = meansum / weights.Count
'Second, calculate the standard deviation of the weights.
Dim sdsum As Double = 0
For Each w As Decimal In weights
sdsum += (w - mean) ^ 2
Next
Dim sd As Double = ((1 / weights.Count) * sdsum) ^ 0.5
'Now calculate the slope of a straight line from -2*sd to +2*sd.
Dim slope As Double
If sd > 0 Then
slope = (maxSize - minSize) / (4 * sd)
End If
'Get the value in the middle between minSize and maxSize.
Dim middle As Double = (minSize + maxSize) / 2
'Calculate the result for the given deviation from mean.
Dim output As New List(Of Decimal)
For Each w As Decimal In weights
If (sd = 0) Then
'With sd=0 all tags have the same weight.
output.Add(CDec(middle))
Else
'Calculate the distance from mean for this weight.
Dim distance As Double = w - mean
'Calculate the position on the slope for this distance.
Dim result As Double = CDec(slope * distance + middle)
'If the tag turned out too small, set minSize.
If result < minSize Then result = minSize
'If the tag turned out too big, set maxSize.
If result > maxSize Then result = maxSize
output.Add(CDec(result))
End If
Next
Return output
End Function
Public Shared Function FromParetoCurve( _
ByVal weights As ICollection(Of Decimal), _
ByVal minSize As Decimal, ByVal maxSize As Decimal) _
As ICollection(Of Decimal)
'Convert each weight to its log value.
Const BASE As Double = Math.E
Dim logweights As New List(Of Decimal)
For Each w As Decimal In weights
logweights.Add(CDec(Math.Log(w, BASE)))
Next
'First, find the min and max weight.
Dim min As Decimal = Decimal.MaxValue
Dim max As Decimal = Decimal.MinValue
For Each w As Decimal In logweights
If w < min Then min = w
If w > max Then max = w
Next
'Now calculate the slope of a straight line, from min to max.
Dim slope As Double
If max > min Then
slope = (maxSize - minSize) / (max - min)
End If
'Get the value in the middle between minSize and maxSize.
Dim middle As Double = (minSize + maxSize) / 2
'Calculate the result for each of the weights.
Dim output As New List(Of Decimal)
For Each w As Decimal In logweights
If (max <= min) Then
'With max=min all tags have the same weight.
output.Add(CDec(middle))
Else
'Calculate the distance from the minimum for this weight.
Dim distance As Double = w - min
'Calculate the position on the slope for this distance.
Dim result As Double = CDec(slope * distance + minSize)
'If the tag turned out too small, set minSize.
If result < minSize Then result = minSize
'If the tag turned out too big, set maxSize.
If result > maxSize Then result = maxSize
output.Add(CDec(result))
End If
Next
Return output
End Function | {"url":"http://www.drdobbs.com/web-development/tag-clouds-usability-and-math/204800620?pgno=3","timestamp":"2014-04-16T16:10:24Z","content_type":null,"content_length":"101189","record_id":"<urn:uuid:4312bb52-2e3f-4dc3-97a7-cde537ce1e45>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00446-ip-10-147-4-33.ec2.internal.warc.gz"} |
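The two listings translate almost line for line into other languages. Here is a compact Python sketch of the same pair of linearizations; it is my own translation under the assumption that the listings behave as described, the function names are mine, and the Pareto version assumes strictly positive weights since it takes a logarithm:

```python
import math

def from_bell_curve(weights, min_size, max_size):
    """Map normally distributed weights onto [min_size, max_size]
    along a straight line through -2*sd .. +2*sd around the mean."""
    n = len(weights)
    mean = sum(weights) / n
    sd = (sum((w - mean) ** 2 for w in weights) / n) ** 0.5
    slope = (max_size - min_size) / (4 * sd) if sd > 0 else 0.0
    middle = (min_size + max_size) / 2
    out = []
    for w in weights:
        r = middle if sd == 0 else slope * (w - mean) + middle
        out.append(min(max(r, min_size), max_size))  # clamp to the range
    return out

def from_pareto_curve(weights, min_size, max_size):
    """Map Pareto-distributed (positive) weights onto [min_size, max_size]
    by linearizing their natural logarithms."""
    logs = [math.log(w) for w in weights]
    lo, hi = min(logs), max(logs)
    slope = (max_size - min_size) / (hi - lo) if hi > lo else 0.0
    middle = (min_size + max_size) / 2
    out = []
    for w in logs:
        r = middle if hi <= lo else slope * (w - lo) + min_size
        out.append(min(max(r, min_size), max_size))
    return out
```

For example, from_pareto_curve([1, 10, 100], 10, 20) spreads the three weights evenly across the font range, whereas the raw values would crowd the smaller two together.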
On a Stochastic Zeldovich-Burgers model for the condensation of planets out of a protosolar nebula
Seminar Room 1, Newton Institute
Using the correspondence limit of Nelson's Stochastic mechanics for the atomic elliptic state in a Coulomb potential, we find stationary state solutions of a related Burgers equation in which the
Burgers fluid is attracted to a Keplerian elliptical orbit in the infinite time limit. Modelling collisions between the Nelsonian particles making up the fluid and Newtonian planetesimals by
classical mechanics leads to a Burgers-Zeldovich equation with vorticity. Here planetesimals are forced to spin about the axis normal to the plane of motion and their masses obey an arcsine law for
elliptical orbits of small eccentricity. A preliminary study of the Stochastic Burgers-Zeldovich equation will be presented.
Scaling parallel, banded LU decomposition and solve
I have written a simple program to factor and solve a banded matrix in ScaLAPACK using PDGBSV and PDGBTRS. Basically, I invoke PDGBSV, and then use the output from the internal call of PDGBTRF for
additional calls to PDGBTRS. I have run the program with 1, 2, 4, and 8 processes (everything regarding the matrix and rhs kept the same) and find that the solve (PDGBTRS) portion does get faster as
I use more processors (scales approx. as P^-.63), but the factorization step (PDGBTRF called from PDGBSV) scales as P^3.0! I would like to run with more processors, but I am concerned about
the factorization scaling... With one process, for this problem, the PDGBSV call takes about 2.6 seconds, but for 8 processes on the same matrix/rhs, I did the factorization once and it took nearly 40 seconds.
My sample problem has N=40000 unknowns, with BWL and BWU both equal to 200 (this is emulating part of a much larger code, and I am considering PDGBSV to obtain the factorization and then repeatedly
apply PDGBTRS to get an important quantity, on the order of several million times). I can live with a costly factorization b/c I only do that part once, but other parts of the code need to use a
larger number of processors.
My actual question is:
Is the scaling I'm seeing inherent to PDGBTRF or can I fix this by changing how I compile my code, or something else I haven't thought of? If you need more information about the environment I am
using the code in, or the code itself, I'd be happy to provide them. What have others with more experience found when using PDGBTRF?
Thank you,
Re: Scaling parallel, banded LU decomposition and solve
After discussing things with a friend of mine, he suggested I try setting the environment variable OMP_NUM_THREADS to 1. With this in place, the time for the factorization (PDGBTRF) with one processor
increased by about a factor of ten, doubled when run with 2 processors, but then decreased when using more processors all the way up to 80 (scaling as P^(-.5)). However, the matrix I was using as
a test case probably wasn't big enough to see strong benefits from more processors at that point because N/P was approaching the same order as my matrix bandwidth.
With OMP_NUM_THREADS at 1, the solve calls (PDGBTRS) scaled well enough (~ P^-.6 or so), just as before.
In summary, setting OMP_NUM_THREADS completely solved the performance issue I was seeing. If I am able to see a benefit from OMP with different settings, I will be sure to update.
Last edited by sgildea on Wed May 30, 2012 8:51 am, edited 2 times in total.
Re: Scaling parallel, banded LU decomposition and solve
Great! Sorry for not spotting this on our end. It's a classic. Yes you absolutely do need to set the OMP_NUM_THREADS to 1 for ScaLAPACK if you run your application with one MPI process per core.
This is because otherwise the BLAS will (in general) spawn as many threads as there are cores available on a node. So if you have a sixteen-core node, your application spawns sixteen MPI processes (that's you),
each MPI process spawning sixteen threads (that's the BLAS). This is a recipe for performance disaster. (Note: it would make sense to have four MPI processes per node each spawning four threads at
BLAS level. You would do this with the appropriate mpirun -np xxxx and then setting OMP_NUM_THREADS to 4. In general OMP_NUM_THREADS to 1 is close to best. So not worth worrying.) (Note:
OMP_NUM_THREADS takes care of most of the BLAS libraries but not all, some have their own environment variable to control the number of threads they are running on.)
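To make the two launch configurations concrete, a sketch (the executable name, core count, and launcher flags here are placeholders, not taken from this thread, and exact mpirun syntax varies by MPI implementation):

```shell
# One MPI rank per core: keep the BLAS from spawning extra threads.
export OMP_NUM_THREADS=1
mpirun -np 16 ./banded_solver

# Hybrid alternative on a 16-core node: 4 ranks x 4 BLAS threads each.
export OMP_NUM_THREADS=4
mpirun -np 4 ./banded_solver
```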
Cheers, Julien. | {"url":"http://icl.cs.utk.edu/lapack-forum/viewtopic.php?f=6&p=8720","timestamp":"2014-04-20T00:38:18Z","content_type":null,"content_length":"20137","record_id":"<urn:uuid:22c779e5-b2fd-4ace-ae9e-99f9326e10e9>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00485-ip-10-147-4-33.ec2.internal.warc.gz"} |
Summary: A Conjecture of Mukai Relating
Numerical Invariants of Fano Manifolds
Marco Andreatta
Abstract. A complex manifold X of dimension n such that the anticanonical bundle -KX := det TX is ample is called a Fano manifold. Besides the dimension, two other integers play
an essential role in the classification of these manifolds, namely the pseudoindex of X, iX, which is the minimal anticanonical degree of rational curves on X, and the Picard
number ρX, the dimension of N1(X), the vector space generated by irreducible complex curves modulo numerical equivalence. A (generalization of a) conjecture of Mukai says that
ρX(iX - 1) ≤ n. In this paper we present some partial steps towards the conjecture; we show how one can interpret and possibly solve it with the use of families of rational
curves on a uniruled variety, and more generally with the instruments of Mori theory. We also consider other related problems: the description of some Fano manifolds which are
at the border of the Mukai relations, and how the pseudoindex changes via (some) birational
Mathematics Subject Classification (2000). Primary 14J45; Secondary | {"url":"http://www.osti.gov/eprints/topicpages/documents/record/380/1551020.html","timestamp":"2014-04-18T13:36:03Z","content_type":null,"content_length":"8244","record_id":"<urn:uuid:61f618d9-8062-4594-b149-c2447271668a>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00014-ip-10-147-4-33.ec2.internal.warc.gz"} |
Binary Search Tree
Results 1 - 10 of 12
- Journal of Algorithms , 1999
"... We show that, in order to achieve efficient maintenance of a balanced binary search tree, no shape restriction other than a logarithmic height is required. The obtained class of trees, general
balanced trees, may be maintained at a logarithmic amortized cost with no balance information stored in the ..."
Cited by 19 (0 self)
We show that, in order to achieve efficient maintenance of a balanced binary search tree, no shape restriction other than a logarithmic height is required. The obtained class of trees, general
balanced trees, may be maintained at a logarithmic amortized cost with no balance information stored in the nodes. Thus, in the case when amortized bounds are sufficient, there is no need for
sophisticated balance criteria. The maintenance algorithms use partial rebuilding. This is important for certain applications and has previously been used with weight-balanced trees. We show that the
amortized cost incurred by general balanced trees is lower than what has been shown for weight-balanced trees. © 1999 Academic Press.
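The shape-free balancing idea in this abstract is closely related to what later literature calls scapegoat trees: store no per-node balance information, and restore a logarithmic height bound by occasionally rebuilding. A minimal Python sketch of that flavor of maintenance (my own illustration; it rebuilds the whole tree rather than a subtree, so it demonstrates the height invariant, not the paper's amortized-cost analysis):

```python
import math

class Node:
    __slots__ = ("key", "left", "right")
    def __init__(self, key):
        self.key, self.left, self.right = key, None, None

class GBTree:
    """Binary search tree with no stored balance information; a subtree
    rebuild (here: the whole tree) restores balance whenever the
    logarithmic height bound is exceeded."""
    def __init__(self, c=1.5):
        self.root = None
        self.size = 0
        self.c = c  # height bound: c * log2(size + 1)

    def insert(self, key):
        depth, parent = 0, None
        node = self.root
        while node is not None:
            parent = node
            node = node.left if key < node.key else node.right
            depth += 1
        new = Node(key)
        if parent is None:
            self.root = new
        elif key < parent.key:
            parent.left = new
        else:
            parent.right = new
        self.size += 1
        # New node landed too deep: rebuild to a perfectly balanced shape.
        if depth > self.c * math.log2(self.size + 1):
            self.root = self._rebuild(self.root)

    def _rebuild(self, node):
        keys = []
        self._inorder(node, keys)
        return self._build(keys, 0, len(keys))

    def _inorder(self, node, out):
        if node:
            self._inorder(node.left, out)
            out.append(node.key)
            self._inorder(node.right, out)

    def _build(self, keys, lo, hi):
        if lo >= hi:
            return None
        mid = (lo + hi) // 2
        n = Node(keys[mid])
        n.left = self._build(keys, lo, mid)
        n.right = self._build(keys, mid + 1, hi)
        return n

    def height(self, node=None):
        node = node or self.root
        if node is None:
            return 0
        l = self.height(node.left) if node.left else 0
        r = self.height(node.right) if node.right else 0
        return 1 + max(l, r)
```

Even with keys inserted in sorted order, the classic worst case for a plain BST, the height stays within the chosen bound, at the cost of occasional O(n) rebuilds.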
- Software Practice and Experience , 1993
"... Much has been said in praise of... this paper, we compare the performance of three different techniques for self-adjusting trees with that of AVL and random binary search trees. Comparisons are
made for various tree sizes, levels of key-access-frequency skewness and ratios of insertions and deletion ..."
Cited by 17 (1 self)
Much has been said in praise of... this paper, we compare the performance of three different techniques for self-adjusting trees with that of AVL and random binary search trees. Comparisons are made
for various tree sizes, levels of key-access-frequency skewness and ratios of insertions and deletions to searches. The results show that, because of the high cost of maintaining self-adjusting
trees, in almost all cases the AVL tree outperforms all the self-adjusting trees and in many cases even a random binary search tree has better performance, in terms of CPU time, than any of the
self-adjusting trees. Self-adjusting trees seem to perform best in a highly dynamic environment, contrary to intuition.
- In SWAT 90, 2nd Scandinavian Workshop on Algorithm Theory , 1990
Cited by 14 (0 self)
Trees of optimal and near-optimal height may be represented as a pointer-free structure in an array of size O(n). In this way we obtain an array implementation of a dictionary with O(log n) search cost and O(log² n) update cost, allowing interpolation search to improve the expected search time.

1 Introduction

The binary search tree is a fundamental and well studied data structure, commonly used in computer applications to implement the abstract data type dictionary. In a comparison-based model of computation, the lower bound on the three basic operations insert, delete and search is ⌈log(n + 1)⌉ comparisons per operation. This bound may be achieved by storing the set in a binary search tree of optimal height.

Definition 1. A binary tree has optimal height if and only if the height of the tree is ⌈log(n + 1)⌉.

A special case of a tree of optimal height is an optimally balanced tree, as defined below.

Definition 2. A binary tree is optimally balanced if and only if the difference in length between the longest and shortest paths is at most one.
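Definition 1 can be checked directly. Here is a small sketch of my own (not from the paper), representing trees as `(key, left, right)` tuples and counting height in node levels, which is how the definition's ⌈log₂(n + 1)⌉ bound is stated:

```python
import math

def levels(t):
    # Height in levels: an empty tree has 0 levels, a single node has 1,
    # matching "the height of the tree is ceil(log2(n + 1))" in Definition 1.
    return 0 if t is None else 1 + max(levels(t[1]), levels(t[2]))

def has_optimal_height(t, n):
    # n is the number of keys stored in t.
    return levels(t) == math.ceil(math.log2(n + 1))

# A perfectly balanced 3-node tree has optimal height (2 levels = ceil(log2 4));
# a 3-node chain (3 levels) does not.
perfect = (2, (1, None, None), (3, None, None))
chain = (1, None, (2, None, (3, None, None)))
```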
- ACTA INFORMATICA , 1990
Cited by 11 (1 self)
First we present a generalization of symmetric binary B-trees, SBB(k)-trees. The obtained structure has a height of only ⌈(1 + 1/k) log(n + 1)⌉, where k may be chosen to be any positive integer. The maintenance algorithms require only a constant number of rotations per updating operation in the worst case. These properties, together with the fact that the structure is relatively simple to implement, make it a useful alternative to other search trees in practical applications. Then, by using an SBB(k)-tree with a varying k we achieve a structure with a logarithmic amortized cost per update and a height of log n + o(log n). This result is an improvement of the upper bound on the height of a dynamic binary search tree. By maintaining two trees simultaneously the amortized cost is transformed into a worst-case cost. Thus, we have improved the worst-case complexity of the dictionary problem.
, 1995
Cited by 2 (1 self)
Balanced binary search trees are widely used main memory index structures. They provide for logarithmic cost for searching, insertion, deletion, and efficient ordered scanning of keys. Long term
trends in computer technology have emphasized the effect of memory reference locality on algorithm performance. For example, the search performance of large structurally equivalent binary trees can
double if nodes are located optimally in memory relative to each other. Unfortunately the traditional Random Access Memory (RAM) model cannot distinguish algorithms with good memory reference
locality from algorithms with poor memory reference locality. We therefore define a new ...
Cited by 2 (0 self)
When a balanced data structure is updated and searched concurrently, updating and balancing should be decoupled so as to make updating faster. The balancing is done by special maintenance processes
that run concurrently with the search and update tasks. We show that it is not necessary to use a weak balance condition like AVL or red-black condition, since balancing a binary tree perfectly so
that the search paths become as short as possible is not much more expensive, that is, a process must lock only 5 nodes at a time even when perfect balance is desired. In contrast to other algorithms
that perfectly balance a binary search tree, our algorithm keeps the tree (weakly) balanced during the further balancing. This is important if the data structure is used by concurrent search and
update processes.
The binary search tree is a well-suited data structure for data storage and retrieval when the entire tree can be accommodated in primary memory. However, this is true only when the tree is height-balanced: the lesser the height, the faster the search. Despite the wide popularity of binary search trees, there has been a major concern to maintain the tree in proper shape. In the worst case, a binary search tree may degenerate to a linear linked list, reducing search to a sequential scan. Unfortunately, the structure of the tree depends on the nature of the input: if input keys are not in random order, the tree grows higher and higher on one side. In addition, the tree may become unbalanced after a series of operations like insertions and deletions. To maintain the tree in optimal shape, many algorithms have been presented over the years. Most of these algorithms are static in nature, as they take a whole binary search tree as input to create a balanced version of it. In this paper, a few techniques are discussed and analyzed in terms of time and space requirements. Key words:
- International Journal of Computer Mathematics , 1991
The balance criterion defining the class of α-balanced trees states that the ratio between the shortest and longest paths from a node to a leaf be at least α. We show that a straight-forward use of partial rebuilding for maintenance of α-balanced trees requires an amortized cost of Ω(√n) per update. By slight modifications of the maintenance algorithms the cost can be reduced to O(log n) for any value of α, 0 < α < 1.

KEY WORDS: α-balanced trees, partial rebuilding, search trees. CR CATEGORIES: E.1, F.2, I.1.2.

1 Introduction

In his thesis Olivie [9] introduced a class of binary search trees, which he calls α-balanced trees, or αBB-trees. Let h(v) denote the length of the longest path from a node v to a leaf and let s(v) denote the length of the shortest path. We give a formal definition of α-balanced trees below.

Definition 1. A binary tree is α-balanced if the following is true for each node v in the tree: s(v)/h(v) ≥ α (1) ...
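The balance condition above — at every node, the ratio of the shortest to the longest path down to a leaf must be at least α — can be tested with a short recursion. This is my own illustration, not code from the paper; trees are `(key, left, right)` tuples and path lengths are measured in edges down to the external (null) leaves:

```python
def extremes(t):
    # (shortest, longest) path length, in edges, from t down to an external (null) leaf.
    if t is None:
        return 0, 0
    s_left, h_left = extremes(t[1])
    s_right, h_right = extremes(t[2])
    return 1 + min(s_left, s_right), 1 + max(h_left, h_right)

def is_alpha_balanced(t, alpha):
    # Check s(v)/h(v) >= alpha at every node v of the tree.
    if t is None:
        return True
    s, h = extremes(t)
    return s >= alpha * h and is_alpha_balanced(t[1], alpha) and is_alpha_balanced(t[2], alpha)

# A perfect 3-node tree has s(v) = h(v) everywhere, so it is alpha-balanced for any alpha <= 1;
# a 3-node chain has s/h = 1/3 at the root and fails for alpha = 0.5.
perfect = (2, (1, None, None), (3, None, None))
chain = (1, None, (2, None, (3, None, None)))
```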
, 1993
In this paper we propose relaxing the requirement of inserting all nodes on one level before going to the next level. This leads to a new class of binary search trees called ISA[k] trees. We investigated the average locate cost per node, average shift cost per node, total insertion cost, and average successful search cost for ISA[k] trees. We also present an insertion algorithm with associated predecessor and successor functions for ISA[k] trees. For large binary search trees (over 160 nodes) our results suggest the use of ISA[2] or ISA[3] trees for best performance.
, 1996
Two chapters of this thesis analyze expert consulting problems via game theoretic models; the first points out a close connection between the problem of consulting a set of experts and the problem of searching. The last chapter presents a solution to the dictionary problem of supporting update (Insert and Delete) operations on a set of key values. The first chapter shows...
Sx acting on up spin particle confusion
You didn't measure anything. Applied to eigenstates of S_z, the S_x operator acts as a stepping operator (it is equal to ½(S₊ + S₋)). The S₊ term gives zero, and the S₋ term steps you down, so no wonder you got the spin-down state!
What you want to do is find the overlap between the spin-up state and the eigenstates of S_x. This will give you the probability amplitude of measuring each value of S_x. From the matrix you wrote, the eigenstates of S_x are (1/√2)(1, ±1).
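The distinction between applying the operator and measuring it can be made concrete with a few lines of NumPy (my own sketch, working in units of ħ): acting with S_x merely steps |up⟩ to ½|down⟩, while the measurement probabilities come from projecting |up⟩ onto the S_x eigenstates.

```python
import numpy as np

# Spin-1/2 operator S_x in the S_z basis, in units of hbar (S_x = sigma_x / 2).
Sx = 0.5 * np.array([[0.0, 1.0], [1.0, 0.0]])
up = np.array([1.0, 0.0])     # spin-up along z
down = np.array([0.0, 1.0])

# Acting with Sx is NOT a measurement: since Sx = (S+ + S-)/2 and S+ kills |up>,
# it simply steps |up> to (1/2)|down>.
stepped = Sx @ up             # equals 0.5 * down

# A measurement of Sx: project onto the Sx eigenstates (1/sqrt(2))(1, ±1).
vals, vecs = np.linalg.eigh(Sx)            # eigenvalues -1/2 and +1/2
probs = np.abs(vecs.conj().T @ up) ** 2    # |<eigenstate|up>|^2 — each is 1/2
```

So a measurement of S_x on a spin-up particle gives ±ħ/2 with probability ½ each, even though S_x|up⟩ happens to be proportional to |down⟩.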
Note: A number of excellent resources are referred to below, but none of them are ads.
* * *
The question of how to provide our children with a good math education often causes undue anxiety. With a clearer and more relaxed understanding of what it is that we're trying to accomplish, we can
present it as just one more interesting part of life - one that anyone can easily explore and delight in.
Not everyone will become fascinated with math, any more than everyone will become fascinated with painting or creative writing - but anyone can at least become acquainted with the fun to be found in
it, so that they can learn math as a useful and accessible tool rather than as a dull and exhausting set of mysterious processes understood only by people who are "good at math."
Math has always been a natural and necessary part of life. People throughout history have continually come upon wonderful discoveries and innovations for working with relationships, patterns, and
quantities to solve problems or fulfill needs - but, unfortunately, we've managed to narrow our focus to those procedures, removed from the patterns being navigated by them.
Lancelot Hogben was a professor of medical statistics at Britain's University of Birmingham when he wrote a popular book called
Mathematics for the Million: How to Master the Magic of Numbers
, first published in England in 1937 and revised up until 1967. Albert Einstein said of the book, "It makes alive the contents of the elements of mathematics." Hogben once commented:
"The best therapy for emotional blocks to math is the realization that the human race took centuries or millennia to see through the mist of difficulties and paradoxes which instructors now
invite us to solve in a few minutes."
I wasn't even aware of his feelings on that many years ago when reading his wonderful children's book to my young son - The Wonderful World of Mathematics - but it was exactly my reaction at the
time. See the review on it a bit further down this page.
Professor Emeritus Michael Butler, in a foreword to the children's math books, Patterns in Arithmetic, by Suki Glenn and Susan Carpenter, recalls his early exploration of new ways that math can be taught:
"Years ago I was a young professor at UC Irvine, and although I had long been fascinated by the act of learning, this was my first teaching job. Among other things I taught mathematics. The
experience was immensely rewarding but unsettling. I thought of math as beautiful, richly ordered, and fun. Most of my students in those required courses appeared to think of it, at least at
first, as arbitrary, impenetrable, incoherent, and dull; some of them found it scary.
A few students did not - cheerfully pushing and pulling at a formula, for instance, and asking: 'What would happen if this part of the denominator were in the numerator? What would happen if I
reversed this and that part? What would happen if I made this piece very large or very small? What would be a simpler form or a more general form of the expression?' They engaged in this
systematic play for the fun of it, but their reinventing or recasting of the material of mathematics also helped them see why something was the way it was; it helped them understand. In fact, the
students I started listening to each seemed to carry with them a kind of 'understanding kit.' They had an expectation that math would make sense; they knew when a particular expression or idea
did not yet make sense to them, and when it did; and they had developed skills and stamina for getting from the first state to the second, and the habit of doing so. The math they came to know in
this way, they owned.
These happy few were regarded by the others (and by most of my colleagues) as having a peculiar knack. There was no shame in not having it; that was just the luck of the genetic draw. Or did the
attitude of the rest of the class toward math have to do with the way they had been educated? Their reports of their pre-college math study matched what I found when I started visiting schools,
especially elementary schools, and reading texts of that era: my students had been spending most of their time memorizing calculation recipes and learning to run them more or, often, less well.
But that wasn't at all what the kind of people who had discovered the math did. Mathematicians look for and find patterns in formal objects, extend them, seek counter-examples, figure out why the
patterns work, and then, finally, publish an account of one way that they work. The last is the public part, but the rest is what they do. Almost none of my undergraduate students seemed to have
had much experience with that. There was an odd disjunction between what practitioners did and what schools asked students to do, a disjunction that was deeper and odder the more you looked at
it. It was as though we had plucked the fruit "mathematics" for use in schools, peeled it, and fed students the rind instead of the flesh."
Suki Glenn is one of the inspiring authors of the Pattern Press children's math books - a program that uses a multi-sensory approach developed in an experimental education program called The
Farm School at U.C. Irvine. In the Pattern Press web site, she explains her philosophy of teaching children to think like mathematicians:
"What do mathematicians do? They are good at constructing models, looking for number patterns, and using those patterns to develop reliable procedures for solving problems. The procedures we all
use to add, subtract, multiply or divide were created by mathematicians who generalized a physical reality to an abstract mathematical formula. They invent short cuts and easy recording systems
(like place value) and then teach them to all of us.
The skills of a mathematician can be developed in children by allowing them to use manipulatives to build models of the physical reality of addition, subtraction, or whatever subject with which
they find and use patterns, and then create their own procedures for doing arithmetic operations. If children are given the chance to discover the procedures instead of being told how to do it
and then drilling it in, you will find the results superior, the learning more of an adventure. You will have experiences of delight as your young mathematicians surprise you with methods and
models you have never seen."
To begin finding inspiring ideas for learning about the enjoyment of math:
Read the online article A Travel Excursion of the Mind, an excerpt from David Albert's excellent book, Homeschooling and the Voyage of Self-Discovery: A Journey of Original Seeking. He provides an
enlightening and humorous look at effective ways to lead kids into the appreciation of math, and suggests creative ways to make math an important and enjoyable part of life.
Another enlightening article of David Albert's is
Just Do the Math!
- an excerpt from his book,
Have Fun. Learn Stuff, Grow! Homeschooling and the Curriculum of Love
. It explains how math can be mastered in an astoundingly short time when the time and conditions are right.
Read Sue Heavenrich's online article, Crazy for Calculating! Making Math Fun!, a description of how she made math fun for her kids in spite of her own previous math phobia.
Check out Pam Sorooshian's wonderful math blog - Joyful Math! It includes teaching tips, insights, inspiration, fun games and interesting math activities. Her love of math is infectious and can't
help but broaden, enlighten, and lighten anyone's attitude toward math.
One really fun and fascinating way of understanding and appreciating math is to look back at its beginnings - to the reasons and ways in which people have developed math:
The Wonderful World of Mathematics, by Lancelot Hogben is a beautiful children's book that has been around since 1955. I think this observation of Hogben's, also quoted above, is well worth repeating:
"The best therapy for emotional blocks to math is the realization that the human race took centuries or millennia to see through the mist of difficulties and paradoxes which instructors now
invite us to solve in a few minutes."
This is a picture book that tells the stories of how simple systems of math were first developed for practical, everyday reasons. You can even use common materials from around the house to reproduce
the processes that people in early civilizations used. It's pretty amazing to a child to realize that he already knows more about some of these processes than some of the brightest adult minds once
did. This is the kind of thing that makes the imagination soar - and it was none other than Albert Einstein who commented that "Imagination is more important than knowledge." This book is sometimes
hard to find, but it's in quite a few libraries, so ask your librarian if they can get it for you.
These are some very interesting web pages on math history:
• Mark Millmore's Ancient Egypt - This picture is from his lovely site full of fascinating info, free games and printables, some about math on this page.
• Economy Guide, a fascinating web site dealing with the colorful history of money: "Economics is all about math, and learning about ancient and medieval economics is bound to improve students'
math skills as well. You will find lessons here about percentages, weights and measures, and some ideas about devaluing currency, what that means and what the effects are."
• Five Fingers to Infinity - Mathematics and the Liberal Arts - more scholarly, but fascinating, articles on the world history of math.
Helpful online math resources:
Take a look in BestHomeschooling.org's list of annotated links to good math pages: Go Figure! The Fascinating World of Mathematics. Here are just a couple of examples:
Mathematics Lessons that Are Fun! Fun! Fun! - Interactive math activities for kids from Cynthia Lanius of Rice University's math department - activities like "Let's Graph," "Who Wants a Pizza?"
(fractions), "Mathematics of Cartography," "Calendar Fun: It's Algebra," "A Fractal Geometry Unit," "Pattern Block Geometry," and many more!
Count on! - An extraordinary, unusually inventive and colorful site of math adventures, games, tricks, mathszines, resources, links, and more for all ages. It's well worth taking the time to explore
every nook and cranny of this site.
A few of the many good books for learning math while discovering the fun in it:
In FUN-Books catalog's Mathematics page, you can find a delightful and amazing assortment of books for making math fun and interesting - everything from books on math & music, math patterns,
games, the math adventures of Sir Cumference, grocery cart math, hands-on projects, understanding algebra, and others too numerous to list.
You might be thinking "This is all well and good, but what can I do to teach my kids math? I'm the sort of person who needs a structured and methodical plan, and I don't know where to begin! What
about math texts and workbooks?"
You might be surprised how quickly this anxiety starts to lift as you begin to explore some of these resources and activities with your kids. Becoming comfortable with math might take a little bit of
time - but it will be time well spent. Kids simply cannot "get behind" as fast as you might imagine - and the quality of their learning will be tremendously affected by your taking time to explore
some of the creative resources and advice that are available.
There are some nice math texts around - it's just good to be aware that texts and workbooks are not the Be-all or End-all in learning math. Most seasoned homeschoolers will advise you to proceed very
slowly and carefully in picking out materials that are a good fit for your child. A textbook or workbook that doesn't fit with your child's learning style or interest, can be deadening - especially
if used before you and your child have a good grasp of what math really is and the enjoyment that can be found in it. Here are a few good sources to consider for some of the best materials:
• Pattern Press - A pleasant and thoughtfully designed set of materials mentioned above; developed from an experimental program at U.C. Irvine in which teachers worked with children to find the
best ways of leading them to think like mathematicians. You use common materials found around the home to do the activities - things like buttons, Goldfish crackers, pennies, crayons, dice,
fruit, beans, as well as pattern blocks. At this point, the program is designed for only younger children, but more books are being planned for older students.
• FUN Books - They review and provide some of the best educational materials around. Check out some of the popular books they describe in their math section - there are good descriptions. They even
carry Harold Jacobs' excellent and well loved texts for older students!
• Creative Publications - A math catalog with hundreds of colorful products, and an emphasis on problem-solving skills development.
Your child might really enjoy using colored pencils for math. I knew an occupational therapist who did sensory integration in public schools, and she commented that one of the things she really liked
about Waldorf schools is their use of color. The colored pencils provide a certain kind of multi-sensory delight that aids in the learning process, and greatly enhances the kinesthetic and visual
pleasure of writing. The large, waxy, Lyra ones are wonderful. They can now be found on a number of web sites - try Waldorf Supplies.
Learning and the all-important ~joy~ of learning happen in a much deeper way when learning is not thought of as a series of lessons so much as an ongoing part of life. The joy of learning is what will
create a lifelong learner who has the curiosity, ability, and confidence to easily learn about anything life has to offer - and in the shorter term, real learning simply works a lot better when the
learner is engaged from the inside rather than from external pressures or coaxing.
Albert Einstein, someone who certainly had a very special interest in learning, said:
"It is, in fact, nothing short of a miracle that the modern methods of instruction have not yet entirely strangled the holy curiosity of inquiry; for this delicate little plant, aside from
stimulation, stands mainly in need of freedom; without this it goes to rack and ruin without fail. It is a very grave mistake to think that the enjoyment of seeing and searching can be promoted
by means of coercion and a sense of duty."
Cynics who have been indoctrinated into traditional attitudes about education (we've all been there) will often say that "Learning doesn't have to be 'fun' - kids just need to be made to do the work
to learn what they need to know." This is simply not true - learning is not about "work." Take a look at some of the things these widely respected researchers and educational theory authors have to
say on the subject:
Frank Smith: In the Amazon.com description of his book, The Book of Learning and Forgetting, you can click on the words "Look inside this book" under the picture of the book cover, and get a sense of
what you can learn from this amazing book, considered life-changing by a number of people who have read it. Another of his books, The Glass Wall: Why Mathematics Can Seem Difficult, is a fascinating
read for discovering how math really works and how it can be made accessible.
Alfie Kohn: In his online article, Students Don't "Work" - They Learn!, Kohn offers thought provoking arguments against prevailing educational assumptions. More good articles and descriptions of his
popular books can be found in Alfie Kohn's web site.
Dr. Thomas Armstrong: Dr. Armstrong has written a number of popular books on the subject of learning, and has some wonderful articles in his web site: Articles about learning, by Dr. Thomas Armstrong
These are just a few of the ways in which a child can learn a number of mathematical concepts in a natural way through everyday activities:
Cooking and baking call for the use of measurements, counting, timing, estimating, dividing or multiplying a recipe, cutting or dividing the cooked goods, portioning, etc., all requiring the obvious
use of math.
Crafts and building projects use mathematical thinking in measurement, calculations and estimations, and arithmetic. Take a look in the juvenile non-fiction section of the library for books on all
the fun projects you can make with kids.
Observing and participating with the use of money in the course of shopping, both in grocery stores and other places, offers a valuable opportunity to happily engage kids in comparing prices, using
coupons, estimating, weighing, and using mental arithmetic to estimate the cost of multiple items, and so forth. Teachers in schools often have to refer to expensive specialized educational materials
in order to recreate the activities you can do with your kids in real life.
Everyday activities around the house - doing repairs, taking measurements, hanging pictures or shelves, setting the table, and so forth, often call for simple mathematical processes.
Sewing projects and quilting: There are lots of patterns today for fun and useful crafts and decorating ideas - and these activities require ongoing mathematical thinking about relationships. You
might be surprised at the patterns and ideas to be found in the fabric/craft store.
Using maps: Kids love maps and can help keep track of mileage and comparisons of routes, as well as help estimate fuel and other costs on trips, etc. Again, these activities are most successful when
suggested casually and positively in the natural course of events rather than presented as lessons.
In all sorts of hobbies and other special interests - math turns up everywhere in many forms.
Try "playing store" - You can use milk, cereal, and other food item containers to make a supply of play goods for a fun little store - and there are great toy cash registers to add to the fun.
You are also using mathematical thinking when you play around with any of these:
Pattern Blocks
Cuisenaire Rods
Real U.S. currency, both coins and paper - a fun and practical way to learn about arithmetic, percents, fractions, and more...
Legos are a natural way of noticing math patterns - but it would be a mistake to try to make Lego play into structured "lessons."
Educational software can be a lot of fun and reinforce mathematical thinking:
• There are a lot of free interactive sites that offer fun math activities and games - see the Go Figure! The Fascinating World of Mathematics section of The Homeschooling Gateway to the Internet
in this web site for an annotated list of links to some really interesting interactive sites.
• There are also some delightful pieces of educational software that give kids a chance to think mathematically. Some of the good ones are hard to find, but are still available on Amazon.com. Just
a few of these are:
The Incredible Machine
Return of the Incredible Machine - Contraptions
The Incredible Machine - Even More Contraptions
The Logical Journey of the Zoombinis
Zoombinis Mountain Rescue
The Number Devil
You can waste a lot of money on mediocre educational software - so it's wise to read reviews and ask around. These are a few good places to read about educational software:
Superkids Educational Software Reviews
MultiMedia & Internet@Schools Magazine
A few sources for buying educational software:
Games! - As casual and ongoing fun activities, games can be a very effective way for learning to take place.
The use of mathematical thinking is a common feature of many card and board games. This article in the Ask Dr. Math website explains more: Math in Card Games.
Try incorporating some of these popular games into your routine. You can type any of the names into Ask Jeeves or Google for more details on any of them that you might not be familiar with - but
be sure to also type in the word "game" after a game's name to narrow the search down!
• War - a children's card game played with only a standard 52 card pack. Played in many parts of the world, it involves no strategy - simply the ability to recognize which of two cards is higher in
rank, and to follow the procedure of the game. See the procedures on this web page: "War".
• Check out the Card Games web site - This site has information about a wide variety of card and tile games from all over the world.
Some good games for practicing math skills:
• 1-2-3-Oy!
• Chess
• Chutes and Ladders
• Clue
• Countdown!
• Dice Games
• Dominoes
• Five Crowns
• Life
• Mancala
• Math 24
• Monopoly
• Prime Pak
• Risk
• Set
• Sorry
• Uno
• Yahtzee
With all this to explore, there's not much time to worry about math anxieties, because you'll be so busy having fun and exploring math with your children!
Copyright 2005 Lillian Jones
Mr. Andrade's AP Physics Class (comment feed; blog author: David Andrade)

Comment by Kevin I., Hazel M., and Yoshika W. (2009-11-05):

Circular Motion 1

Problem 1: A conical pendulum consists of a 380-g bob attached to a string 35 cm long that makes an angle of 15 degrees with the vertical. What is the centripetal force acting on the bob? Answer: 0.998 N

Problem 2: A ball of mass 0.1 kg moves in a circle of radius 0.2 m with a constant speed of 4 m/s. What is its acceleration? Answer: 80 m/s^2

Problem 3: A ball moves in a circle of radius 1 m at a constant speed. The period of motion is 2 seconds. What is the speed of the ball? Answer: π m/s

Problem 4: The ride in question has a 44.5 N chair which hangs freely from a 9.14 m long chain attached to a pivot on the top of a tall tower. When a child enters the ride, the chain is hanging straight down. The child is then attached to the chair with a seat belt and shoulder harness. When the ride starts up, the chain rotates about the tower. Soon the chain reaches its maximum speed and remains rotating at that speed. It rotates about the tower once every 3.0 seconds. When you ask the operator, he says that the ride is perfectly safe. He demonstrates this by sitting in the stationary chair. The chain creaks but holds, and he weighs 890 N. Has the operator shown that this ride is safe for a 222.5 N child? Answer: No, he has not shown that it is safe, and the child should not ride.
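The numerical answers to problems 1, 2 and 3 above can be verified in a few lines; a quick sketch (assuming g = 9.8 m/s^2; note the string length is not actually needed for the centripetal force in problem 1):

```python
import math

g = 9.8  # m/s^2 (assumed value of gravitational acceleration)

# Problem 1: conical pendulum, m = 0.380 kg, theta = 15 degrees.
# The horizontal (centripetal) force on the bob is F_c = m * g * tan(theta).
F_c = 0.380 * g * math.tan(math.radians(15))
print(round(F_c, 3))   # -> 0.998 (newtons)

# Problem 2: centripetal acceleration, a = v^2 / r.
a = 4.0 ** 2 / 0.2
print(a)               # -> 80.0 (m/s^2)

# Problem 3: speed from radius and period, v = 2*pi*r / T.
v = 2 * math.pi * 1.0 / 2.0
print(v)               # -> 3.14159..., i.e. pi m/s
```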
Circular Motion 2

Problem 1: A wheel is set into motion so that it experiences a constant angular acceleration of 0.2 rad/s^2. If it starts from rest, what is its angular displacement after 5 seconds? Answer: 143.2 degrees

Problem 2: A ball moves in a circle with a radius of 15 cm. Its angular displacement is given by θ = 2t^2. Assume that it starts from rest. What is its angular acceleration? Answer: 4 rad/s^2

Problem 3: A solid rotating ball has an angular velocity about an axis through its center of 8 rad/s when it is acted upon by a force. This force gives the ball an angular acceleration of 1 rad/s^2 for 4 seconds. By what factor is its kinetic energy increased? Answer: 2.25

[Two later comments, one anonymous (2009-09-14) and one by Abagale (2009-09-05), were removed by a blog administrator.]

Comment (2009-08-26): Thanks for the resources!

Anonymous comment (2009-08-16): You started planning your course well in advance. http://apphysicsresources.blogspot.com and http://hyperphysics.phy-astr.gsu.edu/Hbase/hph.html will be useful sites for your students. | {"url":"http://mrandradesapphysics.blogspot.com/feeds/comments/default","timestamp":"2014-04-17T01:25:17Z","content_type":null,"content_length":"11841","record_id":"<urn:uuid:03ded8c0-32c7-40d2-bc37-8b23608fe236>","cc-path":"CC-MAIN-2014-15/segments/1398223203235.2/warc/CC-MAIN-20140423032003-00544-ip-10-147-4-33.ec2.internal.warc.gz"}
A new audio amplifier topology - Part 4: Noise in folded cascode stages | EE Times
Design How-To
A new audio amplifier topology - Part 4: Noise in folded cascode stages
This article originally appeared in Linear Audio Volume 2, September 2011. Linear Audio, a book-size printed tech audio resource, is published half-yearly by Jan Didden.
[Part 1 introduces an audio amplifier topology which uses a novel push-pull transimpedance stage that offers a substantial improvement in power supply rejection over standard amplifier
configurations. Part 2 discusses the amplifier's biasing, stability and AC performance. Part 3 compares the performance of the new topology with that of a standard amplifier.]
Appendix: Noise in Folded Cascode Stages
The noise contribution of folded cascodes is a major consideration for the newly introduced amplifier topology. In this appendix I will thus present a brief analysis of the major noise sources in
folded cascodes. For an exact analysis the mathematical expressions quickly become rather involved; I will hence apply several simplifications. The result nevertheless remains valid at least to
the extent that it leads to the correct conclusions and design guidelines in typical implementations.
The basic folded cascode consists of three fundamental circuit elements: a common-base transistor, an associated emitter resistor and a voltage reference, which is connected to the base of the
cascode transistor. The input of such a stage is in the form of a current, which is applied to the emitter of the common-base transistor. The output is also in the form of a current, available at the
collector of the common-base transistor.
In the following we will consider the three fundamental circuit elements to be noise-free (denoted by the addition of an asterisk to the corresponding designator), and model their actual noise
contribution by the addition of explicit voltage and current noise sources. In figure A1, Q* embodies the cascode transistor; its voltage and current noise generators are combined and referred to the
input by E[nQ] and I[nQ]. R* forms the emitter resistor, and its noise contribution is represented by the series voltage source E[nR]. Finally, the voltage reference is shown as V*, with associated
noise generator E[nV]. The incremental impedance of the voltage reference is of some importance as well, and is represented as RV*.
Figure A1: Folded cascode noise generators.
We will now analyse each of the four noise generators independently, and derive their contribution at the output of the folded cascode, i.e. the contributions to the collector current of Q*. The
total of these contributions may then be derived by the usual root-mean-square summation, which needs to be applied for uncorrelated sources.
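As an aside, the root-mean-square summation for uncorrelated sources is simply a root-sum-of-squares; a minimal sketch with made-up numbers (none of these values come from the article):

```python
import math

def rms_total(contributions):
    """Combine uncorrelated noise contributions (all in the same units,
    e.g. amps rms at the cascode output) as sqrt(sum of squares)."""
    return math.sqrt(sum(c * c for c in contributions))

# Hypothetical contributions from four independent generators (A rms):
print(rms_total([3.0e-9, 4.0e-9]))   # ~5.0e-9 (the 3-4-5 check)
print(rms_total([1.0e-9] * 4))       # ~2.0e-9
```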
For the analysis we will make the following assumptions: the h[FE] of Q* is much larger than unity such that base current losses are negligible, the reciprocal of Q* transconductance is much smaller
than R*, and h[FE] • R* is much larger than RV*. All assumptions are valid for typical implementations.
The voltage noise sources of Q* and V* (E[nQ] and E[nV]) effectively appear as an input signal to an emitter-degenerated common-emitter stage. Their contribution at the cascode output is then given by:
Similarly, the emitter resistor's noise generator E[nR] appears in the folded cascode output current as:
The current noise generator of Q* (I[nQ]) has two different contribution paths. First, it appears directly in the collector current. This is seen by considering that the sum of the Q* emitter
current and I[nQ] is constant (as set by V*, R* and the Q* base-emitter voltage); hence I[nQ] must modulate the emitter current of Q*.
As the collector current is equal to the emitter current, the emitter current modulation also appears at the collector of Q*. However, I[nQ] also flows through the voltage reference. There RV*
converts the noise current to a corresponding voltage, which again drives an emitter-degenerated common-emitter stage. Note that this mechanism is fully correlated with the first contribution path, and
hence the two terms must be added linearly:
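The equation images labelled (1) through (4) did not survive text extraction. Under the stated assumptions (hFE much greater than unity and 1/gm much smaller than R*, so the degenerated stage's transconductance is roughly 1/R*), the four contributions plausibly take the following shape. This is a reconstruction inferred from the surrounding prose, not the author's original typography; E_nR here denotes the emitter resistor's noise voltage.

```latex
% Each voltage noise source drives an emitter-degenerated stage
% whose effective transconductance is approximately 1/R*:
I_{out,E_{nQ}} \approx \frac{E_{nQ}}{R^*} \qquad (1)
\qquad
I_{out,E_{nV}} \approx \frac{E_{nV}}{R^*} \qquad (2)

I_{out,E_{nR}} \approx \frac{E_{nR}}{R^*} \qquad (3)

% I_{nQ} appears directly in the collector current, plus (fully
% correlated) via the voltage it develops across R_V^*:
I_{out,I_{nQ}} \approx I_{nQ}\left(1 + \frac{R_V^*}{R^*}\right) \qquad (4)
```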
Contemplation of (1) through (4) reveals that, everything else being equal, the contribution of any of the four noise generators is reduced by increasing R*. Increasing R*, however, also increases
E[nQ] (as the transistor is then operated at a lower collector current, which increases its voltage noise) and E[nR] (higher resistance values imply higher voltage noise); yet this increase is
typically proportional only to the square root of R*. Thus, overall, a net improvement of about √2 (or 3 dB) is gained for each doubling of R*. I[nQ] is reduced as well at lower quiescent currents
(lower base current implies lower base current noise).
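The 3 dB-per-doubling figure follows from thermal noise scaling: the resistor's noise voltage E_nR = sqrt(4*k*T*R*Δf) grows only as sqrt(R), while the conversion to output current divides by R. A numeric sketch (the temperature and bandwidth are assumed values, not taken from the article):

```python
import math

k_B = 1.380649e-23  # Boltzmann constant, J/K
T = 300.0           # K (assumed)
bw = 20e3           # Hz, audio bandwidth (assumed)

def output_noise_from_resistor(R):
    """Thermal noise of the emitter resistor referred to the cascode
    output current: e_n / R, since the degenerated stage's
    transconductance is roughly 1/R when 1/gm << R."""
    e_n = math.sqrt(4 * k_B * T * R * bw)  # V rms
    return e_n / R                          # A rms

ratio = output_noise_from_resistor(1e3) / output_noise_from_resistor(2e3)
print(20 * math.log10(ratio))  # ~3.01 dB improvement per doubling of R
```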
From this discussion it follows that, as a first means to reduce the noise contribution of a folded cascode, the emitter resistor value should be chosen as large as possible. This corresponds to the
choice of a low quiescent collector current, and a voltage reference with large DC value. There is usually a lower limit on quiescent current, dictated by distortion concerns. Further noise
improvements beyond this point must hence be achieved solely by the increase in reference voltage. | {"url":"http://www.eetimes.com/document.asp?doc_id=1280168","timestamp":"2014-04-16T11:17:48Z","content_type":null,"content_length":"134244","record_id":"<urn:uuid:29d4ddcf-9d3c-4e20-bce2-678daf638eff>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00645-ip-10-147-4-33.ec2.internal.warc.gz"} |
Fairlawn, NJ Algebra Tutor
Find a Fairlawn, NJ Algebra Tutor
...Also, I have good problem solving skills that I can teach to you and I can classify the different test questions into categories so that you can focus on the areas where you need the most
help. Finally, I have already tutored several students in math SAT prep this year and so am very familiar wi...
8 Subjects: including algebra 1, algebra 2, chemistry, geometry
...From Algebraic expressions, Functions and Relations, Composition and Inverses of Functions, Exponential and Logarithmic Functions, Trigonometric Functions and their graphs, and more. I provide
all or most of the practice materials for each and every subject. I can assist with Global History, Earth Science, Physics and English, in which I have a 100% pass rate.
47 Subjects: including algebra 1, chemistry, reading, writing
...If you have good intuition about the topic at hand then a briefer explanation may be warranted in order to discuss further implications of the idea, in which case, desires for a more advanced
approach to the relationships among these key concepts may be satisfied. Huge thing I do not do: JUDGE!!...
28 Subjects: including algebra 1, algebra 2, chemistry, physics
...I am permanently certified in K-8 in NJ & NY. I received a BA in elementary education & an MA in Science Education. I taught study skills in summer school and at my high school for the past 8
16 Subjects: including algebra 1, reading, geometry, biology
...I have 20+ years financial and business management experiences, mostly with global Fortune 100 and Fortune 500 companies. I'd be glad to work with you for your financial and operation
management needs. I have 20 years financial and business experiences, mostly with global Fortune 100 and Fortune 500 companies.
14 Subjects: including algebra 1, English, geometry, accounting
Related Fairlawn, NJ Tutors
Fairlawn, NJ Accounting Tutors
Fairlawn, NJ ACT Tutors
Fairlawn, NJ Algebra Tutors
Fairlawn, NJ Algebra 2 Tutors
Fairlawn, NJ Calculus Tutors
Fairlawn, NJ Geometry Tutors
Fairlawn, NJ Math Tutors
Fairlawn, NJ Prealgebra Tutors
Fairlawn, NJ Precalculus Tutors
Fairlawn, NJ SAT Tutors
Fairlawn, NJ SAT Math Tutors
Fairlawn, NJ Science Tutors
Fairlawn, NJ Statistics Tutors
Fairlawn, NJ Trigonometry Tutors
Nearby Cities With algebra Tutor
Elmwood Park, NJ algebra Tutors
Fair Lawn algebra Tutors
Garfield, NJ algebra Tutors
Glen Rock, NJ algebra Tutors
Hackensack, NJ algebra Tutors
Hawthorne, NJ algebra Tutors
Lodi, NJ algebra Tutors
North Haledon, NJ algebra Tutors
Paramus algebra Tutors
Paterson, NJ algebra Tutors
Radburn, NJ algebra Tutors
Ridgewood, NJ algebra Tutors
Saddle Brook algebra Tutors
Woodcliff, NJ algebra Tutors
Woodland Park, NJ algebra Tutors | {"url":"http://www.purplemath.com/Fairlawn_NJ_Algebra_tutors.php","timestamp":"2014-04-17T04:19:20Z","content_type":null,"content_length":"24012","record_id":"<urn:uuid:fb4ca003-1362-4ea9-ad23-eeecd0891d3a>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00530-ip-10-147-4-33.ec2.internal.warc.gz"} |
Math Forum Discussions - question on set theory
Date: Jan 2, 2013 12:37 PM
Author: matmzc%hofstra.edu@gtempaccount.com
Subject: question on set theory
Happy New Year everybody!
For sets X and Y I will write X < Y to indicate that the cardinality of X is strictly smaller than the cardinality of Y, and P(X) for the power set of X. Consider the proposition:
Prop: If P(X) < P(Y) then X < Y.
Is it possible to prove this in ZFC (without the continuum hypothesis)? If not, is it perhaps the case that the Prop is equivalent to the generalized continuum hypothesis? It is trivial to show that the generalized continuum hypothesis implies the proposition, but what about the other direction? | {"url":"http://mathforum.org/kb/plaintext.jspa?messageID=7948743","timestamp":"2014-04-16T20:48:17Z","content_type":null,"content_length":"1559","record_id":"<urn:uuid:b16e4c97-f0c3-4e01-bc28-02c3b2e5302e>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00534-ip-10-147-4-33.ec2.internal.warc.gz"}
Audubon, NJ Algebra 2 Tutor
Find an Audubon, NJ Algebra 2 Tutor
Hi, my name is Jem. I hold a B.S. in Mathematics from Rensselaer Polytechnic Institute (RPI), and I offer tutoring in all math levels as well as chemistry and physics. My credentials include over
10 years tutoring experience and over 4 years professional teaching experience.
58 Subjects: including algebra 2, reading, chemistry, calculus
...In addition, I've spent time as an SAT tutor where I taught students vocabulary and tips for memorizing vocabulary. While studying Latin, I've developed a greater understanding of English
grammar. I like reading grammar books and I'm a fairly strong writer.
10 Subjects: including algebra 2, algebra 1, Latin, SAT math
...The balance of technical teaching and conceptual guidance will depend on the student's age, prior knowledge, and goals. I'm a lifelong musician. In addition to having a BM in composition, I
play multiple instruments, and I have experience doing recording and digital production.
8 Subjects: including algebra 2, algebra 1, Java, prealgebra
...This includes two semesters of elementary calculus, vector and multi-variable calculus, courses in linear algebra, differential equations, analysis, complex variables, number theory, and
non-euclidean geometry. I taught Precalculus with a national tutoring chain for five years. I have taught Precalculus as a private tutor since 2001.
12 Subjects: including algebra 2, calculus, writing, geometry
I specialize in tutoring Math, Spanish and Russian. I have experience tutoring math at the levels of pre-algebra through calculus, and would also be able to tutor probability, statistics, and
actuarial math. I graduated with a degree in Russian Language, and spent a full year living in St.
14 Subjects: including algebra 2, Spanish, calculus, geometry | {"url":"http://www.purplemath.com/Audubon_NJ_Algebra_2_tutors.php","timestamp":"2014-04-18T18:43:32Z","content_type":null,"content_length":"24206","record_id":"<urn:uuid:6991057e-2e6b-49a8-8147-64db3c085c60>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00411-ip-10-147-4-33.ec2.internal.warc.gz"} |
Rinton Press - Publisher in Science and Technology
Why study relativity? It certainly is useful in a wide variety of contexts, but that is not all. It is a prime ingredient of theoretical physics, and its study improves the mind. One obtains an
understanding of time, space, and matter through the mind-honing precision of its mathematical formulation.
The development of the special and general theories is self-contained. Vectors, tensors, geodesics, and differential geometry are described in simple terms. Applications are made to synchrotron and
Cherenkov radiation, astrophysics, accelerator physics, optics, and statistical mechanics.
As any other subject in physics, relativity can only be learned by thinking, writing, and worrying. The book contains numerous problems and detailed solutions.
undergraduate students, graduate students, teachers, researchers interested in modern physics.
1. Special Relativity
   Prologue
   Lorentz Transformation
   Experimental Consequences
   Geometry of Space-time
   Tachyons
   Exercises, with Solutions
2. Relativistic Dynamics
   Rockets
   Lagrangian Formulation
   Particle in Electromagnetic Field
   Uniform Electric and Magnetic Fields
   Bounded Trajectories in Coulomb Field
   Unbounded Coulomb Trajectories
   Exercises, with Solutions
3. The Electromagnetic Field
   Electromagnetic Field Dynamics
   Electromagnetic Field Energy
   Electromagnetic Fields of a Point Charge
   Exercises, with Solutions
4. Electromagnetic Radiation
   Introduction
   Synchrotron Radiation
   Cherenkov Radiation
   Exercises, with Solutions
5. Physics in Curved Spacetime
   Geometry of Curved Spacetime
   Rindler Space
   Tensor Analysis
   Killing Vectors
   Exercises, with Solutions
6. Dynamics of the Gravitational Field
   Relativistic Hydrodynamics
   Riemannian Geometry and Physics
   Gravitational Field Equations
   Schwarzschild Metric: Point Mass
   Kruskal Extension
   Exercises, with Solutions
7. Solar and Stellar Systems
   Gravitational Redshift
   Relativity in Planetary Orbits
   Deflection of Light Near the Sun
   Time Delay of Reflected Signals Passing near the Sun
   Rotational Dragging
   Exercises, with Solutions
8. Large Scale Gravitation
   Propagation and Detection of Gravitational Waves
   Brans-Dicke Theory
   Epilogue
   Exercises, with Solutions
Appendix
   Frequently Used Symbols
   Vector Analysis
   Chronology
   Bibliography | {"url":"http://www.rintonpress.com/books/0541.html","timestamp":"2014-04-16T04:44:10Z","content_type":null,"content_length":"10731","record_id":"<urn:uuid:7fc8e83f-5322-43af-a95d-ccda18899740>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00365-ip-10-147-4-33.ec2.internal.warc.gz"}
improvements beyond this point must hence be achieved solely by the increase in reference voltage. | {"url":"http://www.eetimes.com/document.asp?doc_id=1280168","timestamp":"2014-04-16T11:17:48Z","content_type":null,"content_length":"134244","record_id":"<urn:uuid:29d4ddcf-9d3c-4e20-bce2-678daf638eff>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00645-ip-10-147-4-33.ec2.internal.warc.gz"} |
What primes divide the discriminant of a polynomial?
Given a monic polynomial $p(t) = t^n + ... + c_1 t + c_0$ with integer (or rational) coefficients and with roots $a_1, \dots a_n$, we can compute its discriminant, which is defined to be $\prod_{i<j}(a_i - a_j)^2$.
In my case, I have a polynomial which is the characteristic polynomial of some invertible matrix $T$. It is palindromic -- i.e., $c_{n-i} = c_i$ for all $0 \leq i \leq n$ -- so the roots come in
inverse pairs $a$ and $\frac{1}{a}$. There are no repeated roots, so the discriminant is non-zero.
My question is: is there any way of knowing which primes divide this discriminant, i.e. from the coefficients of the polynomial or from the matrix $T$?
nt.number-theory polynomials prime-numbers
Does the matrix have integer coefficients? – Qiaochu Yuan Jan 15 '10 at 16:47
Did you mean to remove the requirement that T be symmetric? – Qiaochu Yuan Jan 15 '10 at 20:45
@Qiaochu: Yes, the matrix is not supposed to be symmetric: see JCollins' answer below. He didn't say that it was, but the language that he used might suggest that on a first reading. So I clarified
it. – Pete L. Clark Jan 15 '10 at 23:54
@Pete. From a post in meta, it seems that JCollins has "she" as the appropriate third person pronoun, and not "he". – Anweshi Jan 16 '10 at 0:57
@Anweshi -- OK. I normally use s/he for someone whose name does not give a reasonable guess at his/her gender; looks like I slipped up here. (Probably someone who does not include their first name
does not have strong feelings about the matter. Probably.) – Pete L. Clark Jan 16 '10 at 4:16
4 Answers
I disagree with the definition of the discriminant as the resultant of $P$ and $P'$. When $P$ is a polynomial with integer coefficients, then a prime $q$ should divide the discriminant of $P$ if and only if the reduction of $P$ modulo $q$ has a multiple root (possibly at infinity, when the degree decreases by at least 2 under reduction). But now consider $P=2X^2+3X+1$. The resultant of $P$ and $P'$ is $-2$, and the reduction of $P$ modulo 2 has no multiple root. In this case, the well known discriminant $b^2-4ac$ is actually 1. The correct relation between the discriminant and the resultant for a polynomial $P(t)=a_nt^n+\cdots+a_1t+a_0$ is $\mathrm{disc}(P)=(-1)^{n(n-1)/2}\mathrm{res}(P,P')/a_n$.
While you are correct regarding the relationship between the discriminant and the resultant, the OP assumed that the polynomial was monic (hence a_n=1) and that he was only interested in
the discriminant up to sign. – Ben Linowitz Jan 15 '10 at 21:14
I refer to the answer of Pete L. Clark, who is not assuming the polynomial to be monic. Moreover, if the polynomial has rational coefficients, then it is sensible to multiply it by an
integer in order to obtain a polynomial with integer coefficients and content 1 before asking the question of which primes divide the discriminant. – Michel Coste Jan 15 '10 at 21:25
Point taken. I have edited my answer accordingly. – Pete L. Clark Jan 16 '10 at 0:53
First, when you say "it is symmetric", you probably mean that the polynomial $P(t) = a_n t^n + ... + a_1 t + a_0$ satisfies $a_{n-i} = a_i$ for all $0 \leq i \leq n$, not that the matrix
is symmetric, since it is the former condition which implies that the set of roots is invariant under taking reciprocals (and also that $0$ is not a root). Such a polynomial is more
commonly called palindromic.
The connection with the matrix seems unhelpful, because every polynomial is the characteristic polynomial of some matrix, e.g. its companion matrix. (It could possibly become helpful if
you had some additional information about the matrix.)
You ask whether one can tell which primes divide the discriminant from the coefficients of the polynomial. The answer is a resounding yes, although perhaps not in a way which will be satisfying to you: you can compute the discriminant directly from the coefficients of the polynomial and then you can factor it! The formula you gave is actually not very good for computing the discriminant: for that it is better to use
$\operatorname{disc}(P) = (-1)^{\frac{(n)(n-1)}{2}} \frac{\operatorname{Res}(P,P')}{a_n}$,
where $P'(t)$ is the derivative and $\operatorname{Res}$ is the resultant, computed using its interpretation as the determinant of the Sylvester matrix.
[Thanks to Michel Coste for pointing out that the discriminant is not quite equal to the resultant of $P$ and $P'$ when $P$ is not monic.]
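The resultant in this formula is exactly the determinant of the Sylvester matrix, so the whole computation takes only a few lines. The following is an illustrative sketch (not from the thread) in plain Python with exact rational arithmetic; it reproduces Michel Coste's example $P=2X^2+3X+1$, where the resultant is $-2$ but the discriminant is $1$.

```python
from fractions import Fraction

def sylvester_det(p, q):
    """res(p, q) as the determinant of the Sylvester matrix.
    p and q are coefficient lists, highest degree first."""
    n, m = len(p) - 1, len(q) - 1
    size = n + m
    M = [[Fraction(0)] * size for _ in range(size)]
    for i in range(m):                       # m shifted rows of p
        for j, c in enumerate(p):
            M[i][i + j] = Fraction(c)
    for i in range(n):                       # n shifted rows of q
        for j, c in enumerate(q):
            M[m + i][i + j] = Fraction(c)
    det = Fraction(1)                        # Gaussian elimination over Q
    for col in range(size):
        piv = next((r for r in range(col, size) if M[r][col] != 0), None)
        if piv is None:
            return Fraction(0)
        if piv != col:
            M[col], M[piv] = M[piv], M[col]
            det = -det
        det *= M[col][col]
        for r in range(col + 1, size):
            f = M[r][col] / M[col][col]
            for c in range(col, size):
                M[r][c] -= f * M[col][c]
    return det

def discriminant(p):
    """disc(p) = (-1)^(n(n-1)/2) * res(p, p') / lc(p)."""
    n = len(p) - 1
    dp = [c * (n - i) for i, c in enumerate(p[:-1])]   # derivative p'
    return (-1) ** (n * (n - 1) // 2) * sylvester_det(p, dp) / p[0]

# P = 2x^2 + 3x + 1: res(P, P') = -2, disc(P) = b^2 - 4ac = 1
```

For a palindromic example of the kind in the question, `discriminant([1, 1, 1, 1, 1])` returns 125, the discriminant of the fifth cyclotomic polynomial.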
Actually, complexity-wise, for a polynomial of degree d, approximating the roots and then calculating $Res(P,P')=\prod P'(\alpha_i)$ directly, would be a soft-oh of d, rather than $d^3$
for the Sylvester. – Dror Speiser Jan 15 '10 at 20:31
I see that Pete beat me to the Resultant response, so I'll give a slightly different answer. For more on this, see page 21 of Ribenboim's Classical Theory of Algebraic Numbers.
Let $p(x)=x^n+a_1x^{n-1}+\cdots + a_n$. We'd like to find the discriminant $D(p)$ of $p(x)$ using the coefficients only (i.e. without knowing the roots).
Set $p_k=\alpha_1^k + \cdots + \alpha_n^k$, where the $\alpha_i$ are the roots of $p(x)$ and $k=0,1,2,\ldots$. Now for the amazing part. We can find all of the $p_i$ without actually computing any of the $\alpha_i$!
Explicitly, $p_0=n$, $p_1=-a_1$ and $p_i$ for $i>1$ can be computed recursively using the Newton Formulas.
$D(p)=\displaystyle\det\begin{bmatrix} p_0 & p_1 & \cdots & p_{n-1} \newline p_1 & p_2 & \cdots & p_n \newline \vdots &\vdots & &\vdots\newline p_{n-1}&p_n &\cdots &p_{2n-2} \end{bmatrix}$
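The recursion and the Hankel determinant need very little machinery; here is an illustrative Python sketch (not from the answer) using exact rational arithmetic. It assumes the monic normalization $p(x)=x^n+a_1x^{n-1}+\cdots+a_n$ used above.

```python
from fractions import Fraction

def power_sums(a, m):
    """p_k = sum of k-th powers of the roots of
    x^n + a[0]*x^(n-1) + ... + a[n-1], via Newton's identities:
    p_k + a_1 p_{k-1} + ... + a_{k-1} p_1 + k a_k = 0  (k <= n),
    p_k + a_1 p_{k-1} + ... + a_n p_{k-n} = 0          (k > n)."""
    n = len(a)
    p = [Fraction(n)]                         # p_0 = n
    for k in range(1, m + 1):
        s = Fraction(0)
        for i in range(1, min(k - 1, n) + 1):
            s += Fraction(a[i - 1]) * p[k - i]
        if k <= n:
            s += k * Fraction(a[k - 1])
        p.append(-s)
    return p

def det(M):
    """Determinant by exact Gaussian elimination over the rationals."""
    M = [row[:] for row in M]
    size = len(M)
    d = Fraction(1)
    for col in range(size):
        piv = next((r for r in range(col, size) if M[r][col] != 0), None)
        if piv is None:
            return Fraction(0)
        if piv != col:
            M[col], M[piv] = M[piv], M[col]
            d = -d
        d *= M[col][col]
        for r in range(col + 1, size):
            f = M[r][col] / M[col][col]
            for c in range(col, size):
                M[r][c] -= f * M[col][c]
    return d

def disc_from_power_sums(a):
    """D(p) as the Hankel determinant det[p_{i+j}], 0 <= i, j <= n-1."""
    n = len(a)
    p = power_sums(a, 2 * n - 2)
    return det([[p[i + j] for j in range(n)] for i in range(n)])

# x^2 + 3x + 2 = (x+1)(x+2): power sums 2, -3, 5; discriminant (2-1)^2 = 1
```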
@Ben. What is a good source for learning all these "classical algebra" topics? I see a little bit in Lang, but this is not enough. – Anweshi Jan 16 '10 at 1:11
I'm far from being an expert myself, but I've found Volume II of Jacobson's Basic Algebra to be full of neat results that most texts don't cover. You'd also be wise to bookmark the
expository section of Keith Conrad's website: math.uconn.edu/~kconrad/blurbs – Ben Linowitz Jan 16 '10 at 1:23
You are always able to answer when some such algebra topic comes up. So you are indeed a kind of expert. That is why I asked you. Yes, Jacobson is also useful. Still, there are many
things I keep wondering about. – Anweshi Jan 16 '10 at 1:27
Thanks a lot for the link to Keith Conrad webpage. It is very useful. – Anweshi Jan 16 '10 at 1:36
(I haven't worked out how to leave comments on here rather than answers - help!)
The matrix has rational coefficients but is usually not symmetric. It is indeed only the polynomial which is symmetric in its coefficients, as you say.
Not every integer/rational polynomial is the characteristic polynomial of a rational matrix (see here), so I hope that this extra information may be of some help.
I appreciate that I can simply compute the discriminant in particular cases and see what happens, but I was hoping that there would be a general theorem to say something about the prime
divisors. So far I can't seem to figure anything out, even using the resultant.
@JCollins: "Not every integer/rational polynomial is the characteristic polynomial of a rational matrix". Yes, it is! Please follow the link I gave you to see what the companion matrix
is and look more carefully at the link you provided to see why it does not assert what you said it does. – Pete L. Clark Jan 15 '10 at 18:24
P.S.: You need at least 50 rep to leave comments. Until you get there, it's fine to leave comments in an answer like this. – Pete L. Clark Jan 15 '10 at 18:27
Sorry, you are right about the companion matrix. Thanks for the link. Thank you also to Ben for the Newton Identities and the determinant formula. Looks tricky to compute but may turn
out to help. (I am having issues with Mathoverflow so that the 'me' who asked the question is not the 'me' who is commenting here; thus I have no reputation and cannot comment on my own
question! Argh.) – JCollins Jan 15 '10 at 18:37
@Pete L.Clark: Thanks for fixing my question, but if you are going to use $a_i$ for the coefficients of my polynomial, please call the roots something else, like $\alpha_i$. – JCollins
Jan 15 '10 at 18:45
@JCollins: Done. – Pete L. Clark Jan 15 '10 at 20:23
A new audio amplifier topology - Part 4: Noise in folded cascode stages | EE Times
This article originally appeared in Linear Audio Volume 2, September 2011. Linear Audio, a book-size printed tech audio resource, is published half-yearly by Jan Didden.
[Part 1 introduces an audio amplifier topology which uses a novel push-pull transimpedance stage that offers a substantial improvement in power supply rejection over standard amplifier
configurations. Part 2 discusses the amplifier's biasing, stability and AC performance. Part 3 compares the performance of the new topology with that of a standard amplifier.]
Appendix: Noise in Folded Cascode Stages
The noise contribution of folded cascodes is a major consideration for the newly introduced amplifier topology. In this appendix I will thus present a brief analysis of the major noise sources in
folded cascodes. For an exact analysis the mathematical expressions quickly become rather involved. I will hence apply several simplifications; however it is ensured that the result is still valid at
least to the extent that it leads to the correct conclusions and design guidelines in typical implementations.
The basic folded cascode consists of three fundamental circuit elements: a common-base transistor, an associated emitter resistor and a voltage reference, which is connected to the base of the
cascode transistor. The input of such a stage is in the form of a current, which is applied to the emitter of the common-base transistor. The output is also in the form of a current, available at the
collector of the common-base transistor.
In the following we will consider the three fundamental circuit elements to be noise-free (which is denoted by the addition of an asterisk to the corresponding designator), and model their actual noise contribution by the addition of explicit voltage and current noise sources. In figure A1, Q* embodies the cascode transistor; its voltage and current noise generators are combined and referred to the input by E[nQ] and I[nQ]. R* forms the emitter resistor, and its noise contribution is represented by the series voltage source E[nR]. Finally, the voltage reference is shown as V*, with associated
noise generator E[nV]. The incremental impedance of the voltage reference is of some importance as well, and represented as RV*.
Figure A1: Folded cascode noise generators.
We will now independently analyse each of the four noise generators, and derive their contribution at the output of the folded cascode, i.e. the contributions to the collector current of Q*. The
total of these contributions may then be derived by the usual root-mean-square summation, which needs to be applied for uncorrelated sources.
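The root-sum-square summation itself is a one-liner; the sketch below (units and numbers are placeholders, not from the article) shows how the four output-referred contributions would be combined:

```python
import math

def total_output_noise(contributions):
    """Root-sum-square combination of uncorrelated noise contributions,
    each an output-referred current noise density (e.g. in A/sqrt(Hz))."""
    return math.sqrt(sum(c * c for c in contributions))

# Placeholder values for the four output-referred contributions of
# E[nQ], E[nR], E[nV] and I[nQ] -- illustrative numbers only:
total = total_output_noise([3e-12, 3e-12, 3e-12, 3e-12])
```

Four equal uncorrelated contributions thus raise the total by only a factor of two, not four.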
For the analysis we will make the following assumptions: the h[FE] of Q* is much larger than unity such that base current losses are negligible, the reciprocal of Q* transconductance is much smaller
than R*, and h[FE] • R* is much larger than RV*. All assumptions are valid for typical implementations.
The voltage noise sources of Q* and V* (E[nQ] and E[nV]) effectively appear as input signal to an emitter degenerated common-emitter stage. Their contribution at the cascode output is then given by:
Similarly the noise generator E[nR] appears in the folded cascode output current as:
The current noise generator of Q* (I[nQ]) has two different contribution paths. First of all, it appears directly in the collector current. That is seen by considering that the sum of the Q* emitter
current and I[nQ] is constant (as set by V*, R* and Q* base-emitter voltage); hence I[nQ] must modulate the emitter current of Q*.
As the collector current is equal to the emitter current, the emitter current modulation also appears at the collector of Q*. However, I[nQ] also flows through the voltage reference. There RV* converts the noise current to a corresponding voltage, which again drives an emitter degenerated common-emitter stage. Note that this mechanism is fully correlated to the first contribution path, and
hence the two terms must be linearly added:
Contemplation of (1) through (4) reveals that, everything else equal, the contribution of any of the four noise generators is reduced by increasing R*. Increasing R* however also increases E[nQ] (as this transistor is then operated at a lower collector current, which increases its voltage noise) and E[nR] (higher resistance values imply higher voltage noise); yet this increase is typically proportional to the square root of R* only. Thus overall a net improvement of about √2 (or 3 dB) is gained for doubling R*. I[nQ] is reduced as well at lower quiescent currents (lower base current implies lower base current noise).
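The 3 dB-per-doubling behaviour for the resistor term can be checked directly: the Johnson noise voltage grows like the square root of R*, while the stage's effective transconductance falls like 1/R*, so the output contribution falls like 1/√R*. A quick numeric sketch (component values are arbitrary placeholders):

```python
import math

def resistor_noise_contribution(R, T=300.0):
    """Output noise current density from the emitter resistor alone:
    Johnson noise sqrt(4*k*T*R) divided by R, since the degenerated
    stage's effective transconductance is roughly 1/R*. In A/sqrt(Hz)."""
    k = 1.380649e-23                          # Boltzmann constant, J/K
    return math.sqrt(4.0 * k * T * R) / R

ratio = resistor_noise_contribution(1e3) / resistor_noise_contribution(2e3)
improvement_dB = 20.0 * math.log10(ratio)     # about 3 dB per doubling of R*
```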
From this discussion it follows that, as a first means to reduce the noise contribution of a folded cascode, the emitter resistor value should be chosen as large as possible. This corresponds to the
choice of a low quiescent collector current, and a voltage reference with large DC value. There is usually a lower limit on quiescent current, dictated by distortion concerns. Further noise
improvements beyond this point must hence be achieved solely by the increase in reference voltage.
OR/NOR Gate
A NOR gate, also known as a negated OR gate, performs the function of the OR gate and then negates the result: if the OR result is one, the output is zero, and vice versa. A NOR gate also has two or more inputs but only a single output. The negation is indicated by adding a small circle to the output of the OR symbol. As you can see to the right, the symbol is identical to the OR gate but with a little circle.
The Boolean equation of the logic gate is:
*The line above the A+B represents a "not" or a negation
Based on this equation we can find all possible outputs for the different inputs. This gate can have two or more inputs. Below is the truth table that shows the outputs for all the different input combinations for a two-input NOR gate.
The truth table and Venn diagram below represent the different inputs and outputs of the gate. For more information on truth tables and Venn diagrams, click here.
│ Truth Table: NOR Gate - 2 Inputs │
│ Input A │ Input B │ Output │
│    0    │    0    │   1    │
│    0    │    1    │   0    │
│    1    │    0    │   0    │
│    1    │    1    │   0    │
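The truth table can be reproduced in a couple of lines of code; a small Python sketch (purely illustrative, since the page discusses hardware gates, not software):

```python
def nor(*inputs):
    """NOR gate: output is 1 only when every input is 0."""
    return int(not any(inputs))

table = [(a, b, nor(a, b)) for a in (0, 1) for b in (0, 1)]
# table == [(0, 0, 1), (0, 1, 0), (1, 0, 0), (1, 1, 0)]
```

The same function handles the three-or-more-input case: `nor(0, 0, 0)` is 1, and a 1 on any input forces the output to 0.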
Below, we have provided GIF animations to represent the function of the gate.
NOR Gate: Input = 00 NOR Gate: Input = 01
NOR Gate: Input = 10 NOR Gate: Input = 11
A NOR gate can be used in a variety of different applications. One example of a NOR gate in use is in a mixing machine. Sensors placed on the machine sense whether there are any ingredients left in the machine or not. If any of the ingredient containers is empty and its sensor is activated (1), then the output will be 0 and the mixing will stop.
Pala Math Tutor
...My philosophy is that in order to help a student understand their problem areas, you must first understand how the student learns and gear the lesson/tutoring to the student's style in order for them to comprehend the information that is being presented to them. My main goal when tutoring is to d...
35 Subjects: including algebra 2, Spanish, algebra 1, prealgebra
...Taught high school math for seven years and I love it! Students need to be confident in mathematics and I know that some of them have slipped through the cracks in our educational system. I am
here to help students succeed. "Confidence is the companion of success."I have taken the CBEST in 2011 and passed it.
28 Subjects: including differential equations, ACT Math, probability, SAT math
...I have taken many courses that have trained me with the necessary skills to teach and tutor a student whose first language is not English. I have taken courses from Structure of American
English to English as a Foreign Language to Communication in Writing. I also volunteered at my college's writing lab assisting ESL students with their essays and papers.
24 Subjects: including algebra 1, algebra 2, vocabulary, grammar
...Although I have a 24 hour cancellation policy, I will do my best to schedule make-up sessions. My goal is to help you understand and become comfortable with the language of Math. I look forward
to helping you with your Math needs. I am very excited to have the opportunity to tutor Algebra 1.
3 Subjects: including algebra 1, prealgebra, elementary math
...I have taught all of my students study skills to increase positive results outside of our tutoring sessions. Studying skills are very important - each student must find the right method that
works for them because not everyone learns in the same way. I help my students find that method that works for them - whether they be visual learners, auditory learners, or kinesthetic learners.
15 Subjects: including prealgebra, algebra 1, algebra 2, English
The Six Card Trick - an advanced variation on Monty Hall
The following should be solvable by everyone, but will be of particular interest to those familiar with Monty Hall.
There are six cards face down - three are Kings, three are Queens. There is a host present who knows what each card is. Naturally, you don’t.
1. You choose one of the cards.
2. From the five remaining cards, four cards must now be revealed and eliminated. The rule is that three of a kind must not be shown. No game can ever be voided.
3. You have three choices as to how this is done.
a) You can instruct the host to reveal all four cards. He will use his knowledge to reveal 2 Kings and 2 Queens in any order you wish.
b) You can turn over the first two cards yourself, and the host will turn over the opposite.
c) You can turn over the first and third card yourself and the host will turn over the opposite.
Whichever method is used, exactly two Kings and two Queens are always removed. You are left with a King and a Queen. Your chosen card is one of them.
Which is your card more likely to be? Are the odds 50/50?
Does it make a difference who revealed what or is the probability constant?
Consider the following three games that were played using each method.
In game a), you instructed the host to remove two Kings and then two Queens.
In game b) you turned over two Kings and the host showed two Queens
In game c) you alternated with the host in a sequence of King, Queen, King, Queen.
In each game there is a King and Queen left. Your chosen card could be either. Let's say you want the Queen.
Should you stick or swap? Does it make no difference? Do the odds vary, depending on the method of elimination?
Have fun!
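For readers who want to check their answer numerically, here is a Monte Carlo sketch for method (b) only, under the assumption that your own two flips are uniformly random among the five remaining cards (the function name, seed, and trial count are arbitrary):

```python
import random

def simulate_method_b(trials=200000, seed=1):
    """Monte Carlo for game (b): you flip two of the five remaining
    cards at random; we keep only games matching the one described
    (both of your flips were Kings) and ask how often your chosen
    card is a Queen. The host's flips are then forced by the rules,
    so they add no further information."""
    rng = random.Random(seed)
    queens = matching = 0
    for _ in range(trials):
        deck = ['K'] * 3 + ['Q'] * 3
        rng.shuffle(deck)
        chosen, rest = deck[0], deck[1:]
        flips = rng.sample(rest, 2)
        if flips == ['K', 'K']:
            matching += 1
            queens += chosen == 'Q'
    return queens / matching
```

Under this model the estimate settles near 3/4, not 1/2: having flipped two Kings yourself makes your own card more likely a Queen, so in game b) you should stick. Method a), where the host does all the revealing, can be simulated the same way and comes out at 1/2, since his deliberate choices carry no information about your card.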
Last edited by Simon (2007-01-27 23:29:22)
Determinant of sum of positive definite matrices
Say $A$ and $B$ are symmetric, positive definite matrices.
I've proved that $\det(A+B) \ge \det(A) + \det(B)$ in the case that $A$ and $B$ are two dimensional.
Is this true in general for $n$-dimensional matrices?
Is it even true that $\det(A+B) \ge \det(A)$ [as this would also be enough..]
linear-algebra ra.rings-and-algebras inequalities
5 Answers
The inequality $$\det(A+B)\geq \det A +\det B$$ is implied by the Minkowski determinant theorem $$(\det(A+B))^{1/n}\geq (\det A)^{1/n}+(\det B)^{1/n},$$ which holds true for any non-negative $n\times n$ Hermitian matrices $A$ and $B$. The latter inequality is equivalent to the fact that the function $A\mapsto(\det A)^{1/n}$ is concave on the set of $n\times n$ non-negative Hermitian matrices (see e.g. A Survey of Matrix Theory and Matrix Inequalities by Marcus and Minc, Dover, 1992, p. 115, and also the previous MO thread).
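As a quick numerical sanity check of both inequalities, here is a sketch in pure Python (the helper functions and the `G^T G + I` construction of positive definite matrices are my own illustrative choices, not part of the answer):

```python
import random

def det3(M):
    # Determinant of a 3x3 matrix by cofactor expansion along the first row.
    (a, b, c), (d, e, f), (g, h, i) = M
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

def mat_add(A, B):
    return [[A[i][j] + B[i][j] for j in range(3)] for i in range(3)]

def random_spd():
    # G^T G is positive semidefinite for any real G; adding the identity
    # makes the result symmetric positive definite.
    G = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(3)]
    M = [[sum(G[k][i] * G[k][j] for k in range(3)) for j in range(3)]
         for i in range(3)]
    for i in range(3):
        M[i][i] += 1.0
    return M

def count_violations(trials=1000, seed=0):
    # Count cases violating either det(A+B) >= det A + det B or the stronger
    # Minkowski inequality det(A+B)^(1/3) >= det(A)^(1/3) + det(B)^(1/3).
    random.seed(seed)
    bad = 0
    for _ in range(trials):
        A, B = random_spd(), random_spd()
        dA, dB, dS = det3(A), det3(B), det3(mat_add(A, B))
        minkowski = dS ** (1 / 3) + 1e-9 >= dA ** (1 / 3) + dB ** (1 / 3)
        additive = dS + 1e-9 >= dA + dB
        if not (minkowski and additive):
            bad += 1
    return bad

print(count_violations())  # → 0
```

No violations appear, as the theorem guarantees; the small tolerance only absorbs floating-point rounding.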
looks nice, but isn't it somewhat overkill to invoke the Minkowski theorem here? – Suvrit May 19 '11 at 17:05
Thanks very much! – user15221 May 19 '11 at 17:44
The determinant of a positive definite matrix $G$ is proportional to $(1/\hbox{Volume}(\mathcal B(G)))^2$, where $\mathcal B(G)$ denotes the unit ball with respect to the metric defined by $G$. If $A$ and $B$ are positive definite, then the volume of $\mathcal B(A+B)$ is smaller than the volume of $\mathcal B(A)$ or $\mathcal B(B)$.
It's worth noting that this is secretly the same as Suvrit's answer. – Mark Meckes May 20 '11 at 14:20
Not really: you don't need exponentials for proving that $\det(G)$ is proportional to $1/\hbox{Volume}(G)^2$: it is enough to stare at an orthogonal basis formed of eigenvectors for $G$. In this sense this proof is more elementary. – Roland Bacher May 25 '11 at 7:18
Fair enough. – Mark Meckes May 29 '11 at 0:49
Yet another way to see this is to note that $A = \overline{Q}^{t}Q$ for some invertible matrix $Q$. Then ${\rm det}(A+B) = |{\rm det}(Q)|^{2}\,{\rm det}(I + (\overline{Q}^{-1})^{t}BQ^{-1})$. Now $(\overline{Q}^{-1})^{t}BQ^{-1}$ is Hermitian and positive definite. It suffices to prove that if $X$ is positive definite and Hermitian, then ${\rm det}(I+X) \geq 1 + {\rm det}X$. We may conjugate $X$ by a unitary matrix $U$ and assume that $X$ is diagonal. Let the eigenvalues of $X$ be $\lambda_{1},\ldots, \lambda_{n}$ (allowing repetitions). Then ${\rm det}(I+X) = \prod_{i=1}^{n}(1 + \lambda_{i}) \geq 1 + \prod_{i=1}^{n} \lambda_{i} = 1 + {\rm det}X.$ Such an argument appears in some proofs by R. Brauer, though I do not know whether it originates with him.

Later edit: Incidentally, I think that with the arithmetic-geometric mean inequality and a slightly more careful analysis, you can see by this approach that for $X$ as above, you do have ${\rm det}(I+X) \geq (1 +({\rm det}X)^{1/n})^{n}$ (a special case of the inequality of Minkowski mentioned in the accepted answer, but enough to prove the general case by an argument similar to that above). To see this, set $d = {\rm det}X$. Let $s_{m}(\lambda_{1},\ldots ,\lambda_{n})$ denote the $m$-th elementary symmetric function evaluated at the eigenvalues. Using the arithmetic-geometric mean inequality yields that $s_{m}(\lambda_{1},\ldots ,\lambda_{n}) \geq \binom{n}{m}d^{m/n}$, so we obtain ${\rm det}(I+X) = \sum_{m=0}^{n} s_{m}(\lambda_{1},\ldots ,\lambda_{n}) \geq \sum_{m=0}^{n}\binom{n}{m}d^{m/n} = (1 + d^{1/n})^{n}.$
Here is yet another overkill, but hopefully not too bad a way to prove this inequality. We have the following proof sketch.

$$\begin{eqnarray} x^T(A+B)x &\ge& x^TAx\quad\forall x\\ -x^T(A+B)x &\le& -x^TAx\\ \exp(-x^T(A+B)x) &\le& \exp(-x^TAx)\\ \int\exp(-x^T(A+B)x)dx &\le& \int\exp(-x^TAx)dx\\ \frac{1}{\sqrt{\det(A+B)}} &\le& \frac{1}{\sqrt{\det(A)}}\\ \det(A+B) &\ge& \det(A) \end{eqnarray}$$

The only fancy thing that happened is in the second last line, where I used the formula for the Gaussian integral (see the multivariate section).
We have $((A+B)x,x)\ge (Ax,x)$. It then follows from the variational characterization of eigenvalues (min-max theorem) that the eigenvalues of $A+B$ are greater than or equal to those of $A$. This implies $\det(A+B)\ge \det(A)$.
Very cool online graphing calculator that allows direct printing and sharing of graphs and visuals.
jault_kghs on 17 Mar 13
Welcome! Real World Math is a collection of free math activities for Google Earth designed for students and educators. Mathematics is much more than a set of problems in a textbook. In the
virtual world of Google Earth, concepts and challenges can be presented in a meaningful way that portray the usefulness of the ideas.
jault_kghs on 30 May 11
SnagLearning features carefully selected films from SnagFilms' award-winning library of over 1,600 documentaries that are appropriate for students from middle school and up. Our titles cover
nearly every classroom subject and many are produced by well-known educational sources, including PBS and National Geographic. The goal of this site is to highlight documentaries that make for
engaging educational tools. We will also feature guest teacher bloggers as well as special programming stunts like Q&As with the filmmakers. Teachers can submit and share their own lesson plans,
quizzes and homework ideas with fellow educators. The commenting area on each film page functions as public forum to share and discuss.
jault_kghs on 22 Mar 11
"What started out as Sal making a few algebra videos for his cousins has grown to over 2,100 videos and 100 self-paced exercises and assessments covering everything from arithmetic to physics,
finance, and history."
jault_kghs on 07 Dec 10
DesignBlocks is a drag-and-drop programming language based on Scratch, TurtleArt and Processing.
It is being developed by the Lifelong Kindergarten group at the MIT Media Lab.
DesignBlocks is currently in beta and we are actively seeking feedback. The current system is based on ActionScript3, and we are currently experimenting with HTML5 as well.
jault_kghs on 13 May 10
"WhoWhatWhen is a database of key people and events from 1000 A.D. to the present. Create graphic timelines of periods in history and of the lives of individuals. "
jault_kghs on 12 May 10
BBC Learning plays a central part in meeting the BBC's purpose of promoting education and learning. We do this by offering a portfolio of resources and activities for children, teachers, parents
and adult learners. These make use of the best of the BBC - its content, its channels of communication, its talent and creative skills.
This site has been designed to "assist you in your pursuit of increased mathematical understanding," or whatever sounds good to you. The subjects covered range from Pre-Algebra to Calculus.
"Math Teacher Link is a web-based professional development program for mathematics teachers at the 9 - 14 grade levels. It provides short courses on the use of technology in teaching mathematics
in the areas of Algebra, Geometry, Calculus, and Statistics"
The Math Forum Is...
... the leading online resource for improving math learning, teaching, and communication since 1992.
We are teachers, mathematicians, researchers, students, and parents using the power of the Web to learn math and improve math education.
We offer a wealth of problems and puzzles; online mentoring; research; team problem solving; collaborations; and professional development. Students have fun and learn a lot. Educators share ideas and acquire new skills.
"The Virginia Algebra Resource Center (ARC) is a web-site designed for Algebra I teachers and students across the Commonwealth. ARC is intended to provide easy access to resources that address
Virginia's SOL. Our emphasis is on locating and cataloging web-based information that can be used to enrich and reinforce the curriculum, while also helping to make algebra fun and relevant.
Check out the Weekly Highlights each week to see what's new at ARC. If you have questions or comments please contact us, or use the Online Forum to initiate discussion with other algebra teachers
across the state."
"Math.com is dedicated to providing revolutionary ways for students, parents, teachers, and everyone to learn math. Combining educationally sound principles with proprietary technology, Math.com
offers a unique experience that quickly guides the user to the solutions they need and the products they want. These solutions include assessment, on-demand modular courses that target key math
concepts, 24/7 live online tutoring, and expert answers to math questions. In addition to solutions, Math.com offers exploratory and recreational introductions to the world of math that will lead
to deeper understanding and enjoyment. The range of services, products and solutions offered makes Math.com the single source for all math needs. Math.com is a division of Leap of Faith Financial
Services Inc. " | {"url":"https://groups.diigo.com/group/kgschools/content/tag/education%20%20math","timestamp":"2014-04-17T13:13:17Z","content_type":null,"content_length":"162078","record_id":"<urn:uuid:332b3332-4c37-4627-9ce5-ccc32219276e>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00655-ip-10-147-4-33.ec2.internal.warc.gz"} |
Logical Validity & Counterexamples
I have already described formal logic and explained why it’s important for proper reasoning. One of the best ways to learn formal logic is to take a logic class. However, we don’t have to take a
class just to learn the basics and improve our intuitive grasp of formal logic. What I want to do here is explain how to use counterexamples to prove an argument to be logically invalid. This can
help improve our understanding of logic and help us prove arguments to be logically invalid.
What are formal counterexamples?
Whenever someone asserts something false, we can attempt to give a counterexample. For example, someone who claims that all animals are mammals can be proven wrong when we give an example of an
animal that’s a reptile rather than a mammal, such as a lizard. Formal counterexamples prove that an argument is logically invalid rather than that beliefs are false.
An argument is logically valid if it’s impossible for the argument structure to have true premises and a false conclusion at the same time. Any argument that’s not logically valid is invalid—the
argument structure can have true premises and a false conclusion at the same time. A counterexample proves that a logical form is invalid because it can have true premises and a false conclusion at
the same time.
Consider the following argument:
1. All dogs are mammals.
2. All cats are animals.
3. Therefore, all dogs are animals.
This argument has true premises and a true conclusion, but it’s logically invalid. The argument form is the following:
1. All A are B.
2. All C are D.
3. Therefore, all A are D.
A counterexample is the following:
1. All dogs are mammals.
2. All lizards are reptiles.
3. Therefore, all dogs are reptiles.
We kept the same argument form, replaced C and D, and the result is that both premises are still true, but the conclusion is false.
How do we create formal counterexamples?
To create a counterexample, you should (a) find the argument structure, and (b) find content for the argument form that will have true premises and a false conclusion by replacing the variables.
For example, consider the following invalid argument:
1. If a human fetus is a person, then it’s wrong to have an abortion.
2. It’s wrong to have an abortion.
3. Therefore, a human fetus is a person.
The argument form can be revealed when we remove all the content until we are left with logical connectives and variables. In the case of this argument the content of the argument are statements
—various truth claims. In this case the argument form is the following:
1. If A, then B.
2. B.
3. Therefore, A.
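The invalidity of this form — affirming the consequent — can also be checked mechanically by enumerating truth assignments. Here is a small sketch (the helper names are mine, not the author's):

```python
from itertools import product

def implies(p, q):
    return (not p) or q

def valid(premises, conclusion):
    # An argument form is valid iff no truth assignment makes every
    # premise true while the conclusion is false.
    for A, B in product([True, False], repeat=2):
        if all(prem(A, B) for prem in premises) and not conclusion(A, B):
            return False  # found a counterexample assignment
    return True

# Affirming the consequent: "If A then B; B; therefore A" — invalid.
affirming = valid([lambda A, B: implies(A, B), lambda A, B: B],
                  lambda A, B: A)

# Modus ponens: "If A then B; A; therefore B" — valid.
modus_ponens = valid([lambda A, B: implies(A, B), lambda A, B: A],
                     lambda A, B: B)

print(affirming, modus_ponens)  # → False True
```

The counterexample assignment the loop finds (A false, B true) plays exactly the role the dog/reptile example plays below: premises true, conclusion false.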
We can then replace these variables with new content (statements). A counterexample could use the following schema (content for the variables):
A: Dogs are reptiles.
B: Dogs are animals.
The counterexample using this schema is the following:
1. If dogs are reptiles, then dogs are animals.
2. Dogs are animals.
3. Therefore, dogs are reptiles.
The first two premises are true, but the conclusion is false. The first premise is true because all reptiles are animals, even though dogs are not in fact reptiles.
Understanding logical form and validity is important for proper argumentation. Although we have an intuitive grasp of logical form and validity, we can learn more about it and improve our
understanding. Learning formal counterexamples not only helps us to improve our understanding of logical form, but it also helps us learn how to prove that certain arguments are logically invalid.
A new audio amplifier topology - Part 4: Noise in folded cascode stages | EE Times
This article originally appeared in Linear Audio Volume 2, September 2011. Linear Audio, a book-size printed tech audio resource, is published half-yearly by Jan Didden.
[Part 1 introduces an audio amplifier topology which uses a novel push-pull transimpedance stage that offers a substantial improvement in power supply rejection over standard amplifier
configurations. Part 2 discusses the amplifier's biasing, stability and AC performance. Part 3 compares the performance of the new topology with that of a standard amplifier.]
Appendix: Noise in Folded Cascode Stages
The noise contribution of folded cascodes is a major consideration for the newly introduced amplifier topology. In this appendix I will thus present a brief analysis of the major noise sources in folded cascodes. For an exact analysis the mathematical expressions quickly become rather involved. I will hence apply several simplifications; however, it is ensured that the result is still valid at least to the extent that it leads to the correct conclusions and design guidelines in typical implementations.
The basic folded cascode consists of three fundamental circuit elements: a common-base transistor, an associated emitter resistor and a voltage reference, which is connected to the base of the cascode transistor. The input of such a stage is in the form of a current, which is applied to the emitter of the common-base transistor. The output is also in the form of a current, available at the collector of the common-base transistor.
In the following we will consider the three fundamental circuit elements to be noise-free (which is denoted by the addition of an asterisk to the corresponding designator), and model their actual noise contribution by the addition of explicit voltage and current noise sources. In figure A1, Q* embodies the cascode transistor; its voltage and current noise generators are combined and referred to the input by E[nQ] and I[nQ]. R* forms the emitter resistor, and its noise contribution is represented by the series voltage source E[nR]. Finally, the voltage reference is shown as V*, with associated noise generator E[nV]. The incremental impedance of the voltage reference is of some importance as well, and is represented as RV*.
Figure A1: Folded cascode noise generators.
We will now independently analyse each of the four noise generators and derive their contributions at the output of the folded cascode, i.e. the contributions to the collector current of Q*. The total of these contributions may then be derived by the usual root-mean-square summation, which needs to be applied for uncorrelated sources.
For the analysis we will make the following assumptions: the h[FE] of Q* is much larger than unity such that base current losses are negligible, the reciprocal of Q* transconductance is much smaller
than R*, and h[FE] • R* is much larger than RV*. All assumptions are valid for typical implementations.
The voltage noise sources of Q* and V* (E[nQ] and E[nV]) effectively appear as input signals to an emitter degenerated common-emitter stage. Their contributions at the cascode output are then given by:
Similarly, the noise generator E[nR] appears in the folded cascode output current as:
The current noise generator of Q* (I[nQ]) has two different contribution paths. First of all, it appears directly in the collector current. That is seen by considering that the sum of the Q* emitter current and I[nQ] is constant (as set by V*, R* and the Q* base-emitter voltage); hence I[nQ] must modulate the emitter current of Q*.
As the collector current is equal to the emitter current, the emitter current modulation also appears at the collector of Q*. However, I[nQ] also flows through the voltage reference. There RV* converts the noise current to a corresponding voltage, which again drives an emitter degenerated common-emitter stage. Note that this mechanism is fully correlated to the first contribution path, and hence the two terms must be linearly added:
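The four equations referenced in the text as (1) through (4) did not survive this transcription. Under the stated assumptions (1/gm of Q* much smaller than R*, hFE much larger than unity, hFE·R* much larger than RV*), they take approximately the following form — a reconstruction from the surrounding text, not the original figures:

```latex
% Emitter-degenerated common-emitter stage: effective transconductance ~ 1/R*
I_{\mathrm{out},E_{nQ}} \approx \frac{E_{nQ}}{R^{*}} \qquad (1)

I_{\mathrm{out},E_{nV}} \approx \frac{E_{nV}}{R^{*}} \qquad (2)

I_{\mathrm{out},E_{nR}} \approx \frac{E_{nR}}{R^{*}} \qquad (3)

I_{\mathrm{out},I_{nQ}} \approx I_{nQ}\left(1 + \frac{R_{V}^{*}}{R^{*}}\right) \qquad (4)
```

Each contribution scales inversely with R* except the direct term of I[nQ], consistent with the discussion that follows.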
Contemplation of (1) through (4) reveals that, everything else equal, the contribution of any of the four noise generators is reduced by increasing R*. Increasing R* however also increases E[nQ] (as this transistor is then operated at lower collector current, which increases its voltage noise) and E[nR] (higher resistance values imply higher voltage noise); yet this increase is typically proportional to the square root of R* only. Thus overall a net improvement of about √2 (or 3 dB) for doubling R* is gained. I[nQ] is also reduced at lower quiescent currents (lower base current implies lower base current noise).
From this discussion it follows that, as a first means to reduce the noise contribution of a folded cascode, the emitter resistor value should be chosen as large as possible. This corresponds to the
choice of a low quiescent collector current, and a voltage reference with large DC value. There is usually a lower limit on quiescent current, dictated by distortion concerns. Further noise
improvements beyond this point must hence be achieved solely by the increase in reference voltage.
Math Forum Discussions - Re: Matheology § 222 Back to the roots
Date: Feb 28, 2013 6:06 AM
Author: mueckenh@rz.fh-augsburg.de
Subject: Re: Matheology § 222 Back to the roots
On 27 Feb., 21:50, Virgil <vir...@ligriv.com> wrote:
> > Note, there is no "set |N" in potential infinity.
> If one can distinguish natural numbers from things which are not natural
> numbers WHY is there no set of all natural numbers
Because natural numbers like all numbers are names created by thinking
individuals. What has not been created is not there or somewhere else.
> Note that there is such a set everywhere else!
Note that people have believed in witches with the same faithfulness.
> > The
> > "set |N" is an actual, i.e., completed infinity. But this assumption
> > is contradicted (see below).
> Not outside of WMYTHEOLOGY !
In the list
1, 2
1, 2, 3
there is nothing unless it is in one single line.
>> Your LIST of lines is constructed so that there is no line containing
> all the elements in all the lines,
There is no "all" but there is only "every". And there is every element
up to line n in line n. Your error is to fix a line and to look at the
elements in the lines beyond without looking at the lines beyond too.
Regards, WM
Wolfram Demonstrations Project
Rovibronic Spectrum of a Parallel Band of a Symmetric Rotor
This Demonstration shows the rotationally resolved infrared spectrum of a parallel band of a symmetric rotor, with transitions occurring between nondegenerate vibrational levels. Symmetric rotors include ammonia (NH3), benzene (C6H6), and the methyl halides (CH3X, where X can be F, Cl, Br, or I). Symmetric rotors have two equal moments of inertia; the axis with the unique moment of inertia is termed the principal axis. For a symmetric rotor, the direction of the change in dipole moment, or transition moment, determines the appearance of the spectrum because different selection rules must be satisfied. If the transition moment is parallel to the principal axis, as in this Demonstration, a parallel band spectrum results. On the other hand, if the transition moment is perpendicular to the principal axis, a perpendicular band spectrum results. Hybrid bands can result if the transition moment has a component both parallel and perpendicular to the principal axis. For a parallel band, the vibrational selection rule is Δv = ±1 and the rotational selection rules are ΔK = 0, with ΔJ = ±1 if K = 0 and ΔJ = 0, ±1 if K ≠ 0, with the restriction that J ≥ K. If the complete band spectrum is deconstructed, it appears as a superposition of sub-bands.
Each sub-band corresponds to a value of K and consists of a P branch (ΔJ = −1, smaller wavenumbers), a Q branch (ΔJ = 0, middle branch, which appears when K ≠ 0), and an R branch (ΔJ = +1, larger wavenumbers). Due to the restriction that J ≥ K, with increasing K a decreasing number of lines are observed within each sub-band. The observed line intensities reflect a dependence on the thermal population of the ground state energy levels (determined by the Boltzmann factor) and the quantum numbers J and K. With this Demonstration you can view the full spectrum and each branch individually. When viewing each branch individually, you can deconstruct the complete band by choosing which sub-band to highlight and by labeling individual lines/transitions within the sub-band. By using the axis lower and upper limit controls, you can zoom in on any region of the spectrum.
In this Demonstration, ground state constants and quantum numbers are denoted by a double prime (″) superscript and excited state constants and quantum numbers are denoted by a prime (′) superscript.
The mathematical expressions for simulating the spectrum assume that the centrifugal distortion constants and the anharmonicity constant are negligible. The interaction of rotation and vibration is taken into account, since the ground and excited state values for the rotational constants are not equal. However, only a small difference between the ground and excited state values for these constants is used to simulate the spectrum. The spectrum is simulated at a temperature of 30 Kelvin.
The spectral line corresponding to the chosen transition of interest is indicated by an arrow and labeled as follows.
ΔK: transitions with ΔK = 0 are designated with a superscript Q, and all transitions in a parallel band fall into this category.
ΔJ: the branch letter P, Q, or R depends on which branch the line/transition resides in.
K″: the value of K in the ground state is indicated by the subscript.
J″: the value of J in the ground state is indicated in the parentheses.
For example, a label of the form QPK″(J″) (with Q as the superscript and K″ as the subscript) indicates the line in the P branch within the K″ sub-band originating from the ground state level J″.
Snapshots 1, 2, 3, and 4: full spectrum, P branch, Q branch, and R branch views, respectively. The text indicates which sub-band is highlighted and describes the energy levels associated with the chosen transition. The spectral line corresponding to the chosen transition is indicated by an arrow and labeled with the proper transition nomenclature described in this section.
Snapshot 5: with increasing K, a decreasing number of lines are observed in the beginning of each branch of a sub-band. For example, within a sub-band with K″ > 0 there are no transitions from levels with J″ < K″, due to the restriction that J ≥ K.
MathGroup Archive: January 2007 [00190]
Re: Limit and Root Objects
• To: mathgroup at smc.vnet.net
• Subject: [mg72620] Re: Limit and Root Objects
• From: Paul Abbott <paul at physics.uwa.edu.au>
• Date: Fri, 12 Jan 2007 06:05:49 -0500 (EST)
• Organization: The University of Western Australia
• References: <em606c$2o1$1@smc.vnet.net> <4586C045.2020805@metrohm.ch> <em8jfr$pfv$1@smc.vnet.net>
In article <em8jfr$pfv$1 at smc.vnet.net>,
Andrzej Kozlowski <andrzej at akikoz.net> wrote:
> What you describe, including the fact that the numbering of roots
> changes, is inevitable, and none of it is a bug. There cannot exist
> an ordering of complex roots that does not suffer from this problem.
> What happens is this.
> Real root objects are ordered in the natural way. A cubic can have
> either three real roots or one real root and two conjugate complex
> ones. Let's assume we have the latter situation. Then the real root
> will be counted as being earlier than the complex ones. Now suppose
> you start changing the coefficients continuously. The roots will
> start "moving in the complex plane", with the real root remaining on
> the real line and the two complex roots always remaining conjugate
> (symmetric with respect to the real axis). Eventually they may
> collide and form a double real root. If this double real root is now
> smaller than the "original real root" (actually than the root to
> which the original real root moved due to the changing of the
> parameter), there will be a jump in the ordering; the former root
> number 1 becoming number 3.
> This is completely unavoidable, not any kind of bug, and I am not
> complaining about it. It takes only elementary topology of
> configuration spaces to prove that this must always be so.
But is there a continuous root numbering if the roots are not ordered?
What I mean is that if you compute the roots of a polynomial, which is a
function of a parameter, then if you assign a number to each root, can
you follow that root continuously as the parameter changes? Two examples
are presented below.
Here is some code to animate numbered roots using the standard root
ordering, displaying the root numbering:
rootplot[r_] := Table[ListPlot[
Transpose[{Re[x /. r[a]], Im[x /. r[a]]}],
PlotStyle -> AbsolutePointSize[10],
PlotRange -> {{-3, 3}, {-3, 3}},
AspectRatio -> Automatic,
PlotLabel -> StringJoin["a=", ToString[PaddedForm[Chop[a], {2, 1}]]],
Epilog -> {GrayLevel[1],
MapIndexed[Text[#2[[1]], {Re[#1], Im[#1]}] & , x /. r[a]]}],
{a, -6, 10, 0.5}]
First, we have a polynomial with real coefficients:
r1[a_] = Solve[x^5 - a x - 1 == 0, x]
Animating the trajectories of the roots using rootplot[r1],
we observe that, as you mention above, when the complex conjugate roots
2 and 3 coalesce, they become real roots 1 and 2 and root 1 becomes root
3. But, ignoring root ordering, why isn't it possible for these roots to
maintain their identity? (I realise that at coalescence, there is an ambiguity.)
Second, we have a polynomial with a complex coefficient:
r2[a_] = Solve[x^5 + (1+I) x^4 - a x - 1 == 0, x]
Animating the trajectories of the roots using rootplot[r2],
we observe that, even though the trajectories of the roots are
continuous, the numbering switches:
2 -> 3 -> 4
5 -> 4 -> 3
3 -> 4 -> 5
4 -> 3 -> 2
and only root 1 remains invariant. Again, ignoring root ordering, why
isn't it possible for all these roots to maintain their identity and so
be continuous functions of the parameter? And wouldn't such continuity
be nicer than enforcing root ordering?
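One practical way to obtain the continuity asked about above is to drop ordering altogether and track roots numerically, warm-starting each solve from the previous parameter's roots. A sketch (in Python rather than Mathematica, purely for illustration; the Durand–Kerner iteration, grid, and step count are my own choices):

```python
def p(z, a):
    # The first example polynomial from the post: x^5 - a*x - 1 (monic).
    return z ** 5 - a * z - 1

def wprod(zs, i):
    # Product of (z_i - z_j) over j != i, used by the Weierstrass update.
    out = 1 + 0j
    for j, zj in enumerate(zs):
        if j != i:
            out *= zs[i] - zj
    return out

def durand_kerner(a, guesses, iters=300):
    # Simultaneous Durand-Kerner (Weierstrass) iteration; warm-started
    # guesses keep each iterate in the basin of "its" root.
    zs = list(guesses)
    for _ in range(iters):
        zs = [zi - p(zi, a) / wprod(zs, i) for i, zi in enumerate(zs)]
    return zs

# Solve once from the classic starting points, then sweep the parameter,
# warm-starting each solve from the previous roots.
roots = durand_kerner(-3.0, [(0.4 + 0.9j) ** k for k in range(5)])
max_step = 0.0
for k in range(1, 121):  # a = -2.95, -2.90, ..., 3.00
    a = -3.0 + 0.05 * k
    new = durand_kerner(a, roots)
    max_step = max(max_step, max(abs(n - o) for n, o in zip(new, roots)))
    roots = new
```

Even through the real-axis collision of this quintic (near a ≈ 1.65), the warm-started paths move in small steps, unlike a canonical ordering, which must relabel the roots there.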
Paul Abbott Phone: 61 8 6488 2734
School of Physics, M013 Fax: +61 8 6488 1014
The University of Western Australia (CRICOS Provider No 00126G)
AUSTRALIA http://physics.uwa.edu.au/~paul
In the case of the Ricci flow, the symmetries of the flow are scalings and diffeomorphisms
Can anyone help me and prove that in the case of the Ricci flow, the symmetries of the flow are scalings and diffeomorphisms?
Thanks for your time.
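For the easy direction, that scalings and (time-independent) diffeomorphisms do map solutions to solutions, a minimal computation suffices; it uses only the scale invariance $\operatorname{Ric}(\lambda g)=\operatorname{Ric}(g)$ and the naturality $\operatorname{Ric}(\varphi^{*}g)=\varphi^{*}\operatorname{Ric}(g)$ of the Ricci tensor. The converse, as the comments below stress, is the substantive part of the question.

```latex
\text{If } \partial_t g = -2\operatorname{Ric}(g), \text{ set }
\tilde g(t) := \lambda\,\varphi^{*} g(t/\lambda)
\text{ for } \lambda>0 \text{ and a diffeomorphism } \varphi. \text{ Then}
\partial_t \tilde g(t)
 = \varphi^{*}\,\partial_s g(s)\big|_{s=t/\lambda}
 = -2\,\varphi^{*}\operatorname{Ric}\big(g(t/\lambda)\big)
 = -2\operatorname{Ric}\big(\lambda\,\varphi^{*} g(t/\lambda)\big)
 = -2\operatorname{Ric}(\tilde g(t)).
```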
ricci-flow riemannian-geometry parabolic-pde ap.analysis-of-pdes
crosspost: math.stackexchange.com/questions/401943/… – Sepideh Bakhoda May 26 '13 at 14:56
any textbook will have this proof, see for example amazon.com/gp/… – Carlo Beenakker May 26 '13 at 15:31
@Carlo: Most textbooks on Ricci flow will indeed show that scalings and diffeomorphisms are symmetries of the PDE, but I am not so sure that there are many books that show the converse, namely,
that any symmetry of the PDE system (once this has been properly defined) is necessarily a combination of scalings and diffeomorphisms. – Robert Bryant May 26 '13 at 15:53
Could someone say what the definition of a symmetry of the flow is? My reaction to the question was: isn't that the definition of a symmetry? – Deane Yang May 26 '13 at 18:22
@Deane: I took 'the Ricci flow' to mean the PDE system itself which one can think of as a submanifold of an appropriate jet bundle $J$ over $M\times\mathbb{R}$. Then a 'symmetry' would be a
self-diffeomorphism of $J$ that carries solutions to solutions (thought of via their $k$-jet graphs). @Carlo: You probably want to be more precise about what 'operations' you allow. Let $\mathcal
{S}$ be the set of all Ricci-flat metrics on $\mathbb{R}^n$ and let $\sigma:\mathcal{S}\to\mathcal{S}$ be any mapping whatsoever. This is an 'operation that transforms one solution to another
solution', no? – Robert Bryant May 27 '13 at 0:43
1 Answer
I apologise first for submitting this comment as an answer (not enough MO-cred to comment) and second for submitting a question rather than a comment:
What I can gather from the comments above is that the symmetries of the Ricci flow are completely understood: Every `symmetry' of the Ricci flow is a spatial diffeomorphism combined with
scaling (and every such combination is a symmetry). My question is whether the problem is also solved for the mean curvature flow (of, say, hypersurfaces of Euclidean space)?
Not the answer you're looking for? Browse other questions tagged ricci-flow riemannian-geometry parabolic-pde ap.analysis-of-pdes or ask your own question. | {"url":"http://mathoverflow.net/questions/131914/in-the-case-of-the-ricci-flow-the-symmetries-of-the-flow-are-scalings-and-diffe","timestamp":"2014-04-18T19:02:28Z","content_type":null,"content_length":"57717","record_id":"<urn:uuid:d0137c09-44dc-4e1e-b3f3-c6a62cb249dd>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00304-ip-10-147-4-33.ec2.internal.warc.gz"} |
A new audio amplifier topology - Part 4: Noise in folded cascode stages | EE Times
Design How-To
A new audio amplifier topology - Part 4: Noise in folded cascode stages
This article originally appeared in Linear Audio Volume 2, September 2011. Linear Audio, a book-size printed tech audio resource, is published half-yearly by Jan Didden.
[Part 1 introduces an audio amplifier topology which uses a novel push-pull transimpedance stage that offers a substantial improvement in power supply rejection over standard amplifier
configurations. Part 2 discusses the amplifier's biasing, stability and AC performance. Part 3 compares the performance of the new topology with that of a standard amplifier.]
Appendix: Noise in Folded Cascode Stages
The noise contribution of folded cascodes is a major consideration for the newly introduced amplifier topology. In this appendix I will thus present a brief analysis of the major noise sources in
folded cascodes. For an exact analysis the mathematical expressions quickly become rather involved. I will hence apply several simplifications; however it is ensured that the result is still valid at
least to the extent that it leads to the correct conclusions and design guidelines in typical implementations.
The basic folded cascode consists of three fundamental circuit elements: a common-base transistor, an associated emitter resistor and a voltage reference, which is connected to the base of the
cascode transistor. The input of such a stage is in the form of a current, which is applied to the emitter of the common-base transistor. The output is also in the form of a current, available at the
collector of the common-base transistor.
In the following we will consider the three fundamental circuit elements to be noise-free (which is denoted by the addition of an asterisk to the corresponding designator), and model their actual
noise contribution by the addition of explicit voltage and current noise sources. In figure A1, Q* embodies the cascode transistor; its voltage and current noise generators are combined and referred to the
input by E[nQ] and I[nQ]. R* forms the emitter resistor, and its noise contribution is represented by the series voltage source E[nR]. Finally, the voltage reference is shown as V*, with associated
noise generator E[nV]. The incremental impedance of the voltage reference is of some importance as well, and represented as RV*.
Figure A1: Folded cascode noise generators.
We will now independently analyse each of the four noise generators, and derive their contribution at the output of the folded cascode, i.e. the contributions to the collector current of Q*. The
total of these contributions may then be derived by the usual root-mean-square summation, which needs to be applied for uncorrelated sources.
For the analysis we will make the following assumptions: the h[FE] of Q* is much larger than unity such that base current losses are negligible, the reciprocal of Q* transconductance is much smaller
than R*, and h[FE] • R* is much larger than RV*. All assumptions are valid for typical implementations.
The voltage noise sources of Q* and V* (E[nQ] and E[nV]) effectively appear as input signal to an emitter degenerated common-emitter stage. Their contribution at the cascode output is then given by:
Similarly, the noise generator E[nR] appears in the folded cascode output current as:
The current noise generator of Q* (I[nQ]) has two different contribution paths. First of all, it appears directly in the collector current. That is seen by considering that the sum of the Q* emitter
current and I[nQ] is constant (as set by V*, R* and Q* base-emitter voltage); hence I[nQ] must modulate the emitter current of Q*.
As the collector current is equal to the emitter current, the emitter current modulation also appears at the collector of Q*. However, I[nQ] also flows through the voltage reference. There RV*
converts the noise current to a corresponding voltage, which again drives an emitter degenerated common-emitter stage. Note that this mechanism is fully correlated to the first contribution path, and
hence the two terms must be linearly added:
Contemplation of (1) through (4) reveals that, everything else equal, the contribution of any of the four noise generators is reduced by increasing R*. Increasing R* however also increases E[nQ] (as
this transistor is then operated at lower collector current, which increases its voltage noise) and E[nR] (higher resistance values imply higher voltage noise); yet this increase is typically
proportional to the square root of R* only. Thus overall a net improvement of about √2 (or 3 dB) for doubling R* is gained. I[nQ] reduces itself as well at lower quiescent currents (lower base
current implies lower base current noise).
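The way the four generators combine can be illustrated numerically. This is a sketch only: equations (1) through (4) are not reproduced above, so the 1/R* conversion gain of the voltage generators and the (1 + RV*/R*) factor for I[nQ] are assumptions inferred from the surrounding description, and all quantities are treated as spectral densities.

```python
import math

def cascode_output_noise(EnQ, EnV, EnR, InQ, R, RV):
    """Total output-referred current noise of the folded cascode.

    Each voltage generator (V/sqrt(Hz)) is converted to an output current
    by the emitter-degenerated stage, taken here as a gain of 1/R (assumed).
    The two paths of InQ (direct, plus the path converted by the reference
    impedance RV) are fully correlated, so they add linearly; the four
    resulting contributions are uncorrelated and combine root-mean-square.
    """
    i_EnQ = EnQ / R
    i_EnV = EnV / R
    i_EnR = EnR / R
    i_InQ = InQ * (1.0 + RV / R)      # correlated paths: linear sum
    return math.sqrt(i_EnQ**2 + i_EnV**2 + i_EnR**2 + i_InQ**2)

# illustrative (made-up) spectral densities: doubling R reduces the total
hi = cascode_output_noise(4e-9, 2e-9, 5e-9, 1e-12, R=1000.0, RV=100.0)
lo = cascode_output_noise(4e-9, 2e-9, 5e-9, 1e-12, R=2000.0, RV=100.0)
```

With these illustrative numbers, doubling R from 1 kΩ to 2 kΩ roughly halves the voltage-generator terms, consistent with the ~3 dB-per-doubling estimate in the text once the accompanying rise of E[nQ] and E[nR] with R is folded in.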
From this discussion it follows that, as a first means to reduce the noise contribution of a folded cascode, the emitter resistor value should be chosen as large as possible. This corresponds to the
choice of a low quiescent collector current, and a voltage reference with large DC value. There is usually a lower limit on quiescent current, dictated by distortion concerns. Further noise
improvements beyond this point must hence be achieved solely by the increase in reference voltage. | {"url":"http://www.eetimes.com/document.asp?doc_id=1280168","timestamp":"2014-04-16T11:17:48Z","content_type":null,"content_length":"134244","record_id":"<urn:uuid:29d4ddcf-9d3c-4e20-bce2-678daf638eff>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00645-ip-10-147-4-33.ec2.internal.warc.gz"} |
Finite nilpotent orbits: GL(n,q)-conjugacy classes and a partial order on partitions
I have a question regarding a partial order $<$ on the set ${\rm Part}(n)$ of partitions of $n$. Given $\lambda=(\lambda_1,\lambda_2,\ldots)\in{\rm Part}(n)$ with $\sum_{i\geq1} \lambda_i=n$ and $\
lambda_1\geq\lambda_2\geq\cdots\geq0$, let $J_\lambda$ denote the $n\times n$ block diagonal matrix $\bigoplus_{i\geq1}J_{\lambda_i}$. For example, $J_{(3,2,1)}=\left(\begin{smallmatrix}0&1&0&&&\\0&0
&1&&&\\0&0&0&&&\\&&&0&1&\\&&&0&0&\\&&&&&0\end{smallmatrix}\right)$. Consider the ${\rm GL}(n,F)$-conjugacy classes of the set ${\rm M}(n,F)$ of all $n\times n$ matrices over a field $F$. A nilpotent
matrix $X\in{\rm M}(n,F)$ lies in a conjugacy classes $\mathcal{O}_\lambda:=J_\lambda^{{\rm GL}(n,F)}$ for a unique $\lambda\in{\rm Part}(n)$. (Nilpotent means $X^n=0$.)
If $F=\mathbb{F}_q$ is a finite field, then set $n_\lambda:=|J_\lambda^{{\rm GL}(n,q)}|$. A formula for $n_\lambda$ is given in Fulman, Cycle indices for finite classical groups. It turns out that
$n_\lambda=n_\lambda(q)$ is a polynomial in $q$ with integer coefficients. Define a partial order $<$ on ${\rm Part}(n)$ as follows: $\lambda<\mu$ if and only if $n_\lambda(q)$ divides $n_\mu(q)$. I
call this the divisibility partial order.
When $F$ is the complex field $\mathbb{C}$, define $\lambda\triangleleft\mu$ if $\overline{\mathcal{O}_\lambda}\subset\overline{\mathcal{O}_\mu}$ where $\overline{\mathcal{O}_\lambda}$ denotes the
Zariski closure of $\mathcal{O}_\lambda$. It is shown in Collingwood and McGovern, Nilpotent orbits of semisimple Lie algebras, pp 93--95, that $\triangleleft$ is the dominance partial order on ${\rm
Part}(n)$. That is, $\lambda\triangleleft\mu$ if and only if $\sum_{i=1}^{k-1}\lambda_i=\sum_{i=1}^{k-1}\mu_i$ and $\lambda_k<\mu_k$ for some $k\geq1$.
If $n\leq5$, then the partial orders $<$ and $\triangleleft$ are identical and are total orders. However, when $n=6$ the partition $(3,2,1)$ of 6 has three partitions divisibility larger, and has
five partitions dominance larger.
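The dominance side of that count is easy to check by brute force (sketch in Python; dominance is taken here in the standard partial-sum form, every partial sum of $\mu$ at least the corresponding partial sum of $\lambda$, which on ${\rm Part}(6)$ reproduces the five partitions strictly above $(3,2,1)$):

```python
def partitions(n, max_part=None):
    """All partitions of n as non-increasing tuples."""
    if max_part is None:
        max_part = n
    if n == 0:
        yield ()
        return
    for k in range(min(n, max_part), 0, -1):
        for rest in partitions(n - k, k):
            yield (k,) + rest

def dominates(mu, lam):
    """mu >= lam in the dominance order: every partial sum of mu
    is at least the corresponding partial sum of lam (zero-padded)."""
    m = max(len(mu), len(lam))
    mu = mu + (0,) * (m - len(mu))
    lam = lam + (0,) * (m - len(lam))
    s_mu = s_lam = 0
    for a, b in zip(mu, lam):
        s_mu += a
        s_lam += b
        if s_mu < s_lam:
            return False
    return True

above = [p for p in partitions(6) if p != (3, 2, 1) and dominates(p, (3, 2, 1))]
print(sorted(above))  # the partitions of 6 strictly dominance-above (3,2,1)
```

The analogous count for the divisibility order would require factoring the polynomials $n_\lambda(q)$ and is not attempted here.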
Does anyone have any insight into divisibility partial order? or know of its appearance in the literature? (I have not found a reference to $<$ in Roger Carter's book Finite groups of Lie type:
conjugacy classes and complex characters, but $\triangleleft$ appears in 5.5 and 5.11.) For specific $\lambda$, I can (theoretically) factor $n_\lambda(q)$ and so can determined whether $\lambda<\mu$
for specific $\lambda$ and $\mu$, but I have few global results.
rt.representation-theory gr.group-theory
1 Answer
I think the right way is to factor $n_\lambda(q)$ in general. In particular, it is obviously a quotient of the order of $GL_n(q)$. The formula for the order of $GL_n(q)$ does not have
very many prime factors: just $q$ and the first $n$ cyclotomic polynomials.
One could consider an alternate question, the order of the centralizer of $J$ in $GL(n,\mathbb F_q)$. This gives the reverse of the partial order, since the divisibility relation is
reversed. The centralizer is the automorphism group of the corresponding $\mathbb F_q[x]/x^n$-module, $M$. The subgroup that fixes $M/x$ is a $q$-group, since it is unipotent. Its
quotient is a product of copies of $GL(k,\mathbb F_q)$: one for each type of block, with $k$ equal to the number of that block that appears.
The number of times that the $k$th elementary cyclotomic polynomial appears in the size of the centralizer is just the sum over all sizes $n$ of the floor of $a_n/k$, where $a_n$ is the
number of blocks of size $n$.
This function behaves very erratically, so we can conclude that the partial order behaves erratically as well.
Not the answer you're looking for? Browse other questions tagged rt.representation-theory gr.group-theory or ask your own question. | {"url":"http://mathoverflow.net/questions/97439/finite-nilpotent-orbits-gln-q-conjugacy-classes-and-a-partial-order-on-partit?sort=oldest","timestamp":"2014-04-21T09:50:25Z","content_type":null,"content_length":"53949","record_id":"<urn:uuid:d9fdb57a-bdf1-404c-b9b4-115e38289002>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00325-ip-10-147-4-33.ec2.internal.warc.gz"} |
Rowland Heights Math Tutor
One of my favorite activities is tutoring students and helping them achieve academic success. I’ve tutored students for more than 20 years in many subjects ranging from mathematics to SAT / ACT /
GRE and other standardized test preparation. I help students •hone their critical reading comprehensi...
41 Subjects: including precalculus, trigonometry, GRE, algebra 1
I have taught math for over 5 years! Many of my students are from grade 2 to 12, some are from college. I also have a math tutor certificate for college students from Pasadena City College. I
graduated in 2012 from UCLA.
7 Subjects: including algebra 1, algebra 2, geometry, prealgebra
...I also tutor school students and my youngest so far has been in 4th Grade. My aim is to ensure that my students develop a good basic understanding of the subject so that their self confidence
increases and with it their enjoyment of learning. Their success is reflected in their smiles and improving grades.Algebra is one of the pillars of Math.
12 Subjects: including algebra 1, trigonometry, SAT math, precalculus
...I would love to coach cross-country and track, and pass my love of running and fitness on to you! I have also always had a great passion for nutrition and fitness in general. With all my
experience as a competitive athlete and being a health buff, I know I’ll be a great coach!
62 Subjects: including geometry, algebra 2, algebra 1, precalculus
...My strengths are Algebra and Calculus at all levels. I have also tutored Geometry and Trigonometry. I have tutored 6th to 12th grade Mathematics for over a decade, so I know how to convey the
subject to students.My favorite method is to make the topic come alive by using familiar situations to explain concepts.
8 Subjects: including algebra 1, algebra 2, geometry, prealgebra | {"url":"http://www.purplemath.com/rowland_heights_math_tutors.php","timestamp":"2014-04-20T21:01:32Z","content_type":null,"content_length":"23998","record_id":"<urn:uuid:e5621dd2-df12-41b6-a8b9-432c73e5dc5c>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00065-ip-10-147-4-33.ec2.internal.warc.gz"} |
[Haskell-cafe] A tale of Project Euler
Andrew Coppin andrewcoppin at btinternet.com
Tue Nov 27 14:34:56 EST 2007
Hi guys.
Somebody just introduced me to a thing called "Project Euler". I gather
it's well known around here...
Anyway, I was a little bit concerned about problem #7. The problem is
basically "figure out what the 10,001st prime number is". Consider the
following two programs for solving this:
First, somebody else wrote this in C:
int n, primesFound = 0;
for( n = 0; n < MAX_NUMBERS; n++ )
    if( prime[n] )                 /* prime[] filled by a sieve beforehand */
        if( ++primesFound == 10001 )
        {
            cout << n << " is the 10001st prime." << endl;
            break;
        }
Second, my implementation in Haskell:
primes :: [Integer]
primes = seive [2..]
seive (p:xs) = p : seive (filter (\x -> x `mod` p > 0) xs)
main = print (primes !! 10000)
Well, we know who's winning the beauty contest. But here's the worrying part:
C program: 0.016 seconds
Haskell program: 41 seconds
Eeeps! o_O That's Not Good(tm).
Replacing "primes :: [Integer]" with "primes :: [Word32]" speeds the
Haskell version up to 18 seconds, which is much faster but still a joke
compared to C.
Running the Haskell code on a 2.2 GHz AMD Athlon64 X2 instead of a 1.5
GHz AMD Athlon drops the execution time down to 3 seconds. (So clearly
the box I was using in my lunch break at work is *seriously* slow...
presumably the PC running the C code isn't.)
So, now I have a Haskell version that's "only" several hundred times
slower. Neither program is especially optimised, yet the C version is
drastically faster. This makes me sad. :-(
By the way... I tried using Data.List.Stream instead. This made the
program about 40% slower. I also tried both -fasm and -fvia-c. The
difference was statistically insignificant. (Hey guys, nice work on the
native codegen!) The difference in compilation speed was fairly drastic
though, needless to say. (The machine is kinda low on RAM, so trying to
call GCC makes it thrash like hell. So does linking, actually...)
Also, I'm stuck with problem #10. (Find the sum of all primes less than
1 million.) I've let the program run for well over 15 minutes, and still
no answer is forthcomming. It's implemented using the same primes
function as above, with a simple filter and sum. (The type has to be
changed to [Word64] to avoid overflowing the result though.) The other
guy claims to have a C solution that takes 12 ms. There's a hell of a
difference between 12 ms and over half an hour...(!) Clearly something
is horribly wrong here. Uh... help?
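For reference, both quantities under discussion can be cross-checked with a proper array-based sieve (sketch in Python rather than Haskell or C, purely to pin down the expected numbers; the function name sieve is mine). The key difference from the filter-based "seive" above is that marking multiples in an array does O(n log log n) work, while repeated lazy filtering redoes trial divisions for every surviving candidate.

```python
def sieve(limit):
    """Boolean-array sieve of Eratosthenes: all primes below `limit`."""
    is_prime = [True] * limit
    is_prime[0] = is_prime[1] = False
    for p in range(2, int(limit ** 0.5) + 1):
        if is_prime[p]:
            for m in range(p * p, limit, p):
                is_prime[m] = False
    return [n for n in range(limit) if is_prime[n]]

primes = sieve(1_000_000)
print(primes[10000])   # the 10001st prime (problem #7)
print(sum(primes))     # sum of all primes below one million (problem #10)
```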
More information about the Haskell-Cafe mailing list | {"url":"http://www.haskell.org/pipermail/haskell-cafe/2007-November/035356.html","timestamp":"2014-04-17T04:19:34Z","content_type":null,"content_length":"5140","record_id":"<urn:uuid:ce6b5b5c-755d-4e91-9d7a-d59b535ed58d>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00546-ip-10-147-4-33.ec2.internal.warc.gz"} |
Particle Production in Ultrarelativistic Heavy-Ion Collisions: A Statistical-Thermal Model Review
Advances in High Energy Physics
Volume 2013 (2013), Article ID 805413, 27 pages
Review Article
Department of Physics, Banaras Hindu University, Varanasi 221005, India
Received 13 June 2013; Accepted 23 August 2013
Academic Editor: Bhartendu K. Singh
Copyright © 2013 S. K. Tiwari and C. P. Singh. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and
reproduction in any medium, provided the original work is properly cited.
The current status of various thermal and statistical descriptions of particle production in the ultrarelativistic heavy-ion collisions experiments is presented in detail. We discuss the formulation
of various types of thermal models of a hot and dense hadron gas (HG) and the methods incorporated in the implementing of the interactions between hadrons. It includes our new excluded-volume model
which is thermodynamically consistent. The results of the above models together with the experimental results for various ratios of the produced hadrons are compared. We derive some new universal
conditions emerging at the chemical freeze-out of HG fireball showing independence with respect to the energy as well as the structure of the nuclei used in the collision. Further, we calculate
various transport properties of HG such as the ratio of shear viscosity-to-entropy using our thermal model and compare with the results of other models. We also show the rapidity as well as
transverse mass spectra of various hadrons in the thermal HG model in order to outline the presence of flow in the fluid formed in the collision. The purpose of this review article is to organize and
summarize the experimental data obtained in various experiments with heavy-ion collisions and then to examine and analyze them using thermal models so that a firm conclusion regarding the formation
of quark-gluon plasma (QGP) can be obtained.
1. Introduction
One of the main purposes of various heavy-ion collision programmes running at various places such as relativistic heavy-ion collider (RHIC) at Brookhaven National Laboratory (BNL) and large hadron
collider (LHC) at CERN is to understand the properties of strongly interacting matter and to study the possibility of a phase transition from a confined hot, dense hadron gas (HG) phase to a
deconfined and/or chiral symmetric phase of quark matter called quark-gluon plasma (QGP) [1–9]. By colliding heavy-ions at ultrarelativistic energies, such a phase transition is expected to
materialize and QGP can be formed in the laboratory. Unfortunately, the detection of the QGP phase is still regarded as an uphill task. However, the existence of a new form of a matter called
strongly interacting QGP (sQGP) has been demonstrated experimentally [10]. There is much compelling evidence, for example, elliptic flow, high energy densities, and very low viscosity [11]. However,
we do not have supportive evidence that this fluid is associated with the properties quark deconfinement and/or chiral symmetry restoration which are considered as direct indications for QGP
formation [11]. Although various experimental probes have been devised, but a clean unambiguous signal has not yet been outlined in the laboratory. So our prime need at present is to propose some
signals to be used for the detection of QGP. However for this purpose, understanding the behaviour and the properties of the background HG is quite essential because if QGP is not formed, matter will
continue to exist in the hot and dense HG phase. In the ultrarelativistic nucleus-nucleus collisions, a hot and dense matter is formed over an extended region for a very brief time, and it is often
called a “fireball”. The quark matter in the fireball after subsequent expansion and cooling will be finally converted into HG phase. Recently, the studies of the transverse momentum spectra of
dileptons [12–21] and hadrons [22–27] are used to deduce valuable information regarding temperature and energy density of the fireball. The schematic diagram for the conjectured space-time evolution
of the fireball formed in the heavy-ion collisions is shown in Figure 1 [28]. The space-time evolution consists of four different stages as follows. (i) In the initial stage of collisions, labeled as
“Preequilibrium” in Figure 1, processes of parton-parton hard scatterings may predominantly occur in the overlap region of two colliding nuclei, thus depositing a large amount of energy in the
medium. The matter is still not in thermal equilibrium, and perturbative QCD models can describe the underlying dynamics as a cascade of freely colliding partons. The time of the preequilibrium state
is predicted to about 1fm/c or less. (ii) After the short preequilibrium stage, the QGP phase would be formed, in which parton-parton and/or string-string interactions quickly contribute to attain
thermal equilibrium in the medium. The energy density of this state is expected to reach above 3–5GeV/fm^3, equivalent to the temperature of 200–300MeV. The volume then rapidly expands, and matter
cools down. (iii) If the first order phase transition is assumed, the “mixed phase” is expected to exist between the QGP and hadron phases, in which quarks and gluons are again confined into hadrons
at the critical temperature . In the mixed phase, the entropy density is transferred into lower degrees of freedom, and therefore the system is prevented from a fast expansion. This leads to a
maximum value in the lifetime of the mixed phase which is expected to last for a relatively long time (fm/c). (iv) In the hadronic phase, the system keeps collective expansion via hadron-hadron
interactions, decreasing its temperature. Then, the hadronic interactions freeze after the system reaches a certain size and temperature, and hadrons that freely stream out from the medium are to be
detected. There are two types of freeze-out stages. When inelastic collisions between constituents of the fireball do not occur any longer, we call this a chemical freeze-out stage. Later when the
elastic collisions also cease to happen in the fireball, this stage specifies the thermal freeze-out.
Since many experiments running at various places measure the multiplicity, ratios, and so forth of various hadrons, it is necessary to know to which extent the measured hadron yields indicate
equilibration. The level of equilibration of particles produced in these experiments is tested by analyzing the particle abundances [22, 23, 29–57] or their momentum spectra [22–27, 37, 38, 46, 47,
58] using thermal models. Now, in the first case, one establishes the chemical composition of the system, while in the second case, additional information on dynamical evolution and collective flow
can be extracted. Furthermore, study of the variations of multiplicity of produced particles with respect to collision energy, the momentum spectra of particles, and ratios of various particles have
led to perhaps one of the most remarkable results corresponding to high energy strong interaction physics [6].
Recently various approaches have been proposed for the precise formulation of a proper equation of state (EOS) for hot and dense HG. Lacking lattice QCD results for the EOS at finite baryon density $\mu_B$,
a common approach is to construct a phenomenological EOS for both phases of strongly interacting matter. Among those approaches, thermal models are widely used and indeed are very successful in
describing various features of the HG. These models are based on the assumption of thermal equilibrium reached in HG. A simple form of the thermal model of hadrons is the ideal hadron gas (IHG) model
in which hadrons and resonances are treated as pointlike and noninteracting particles. The introduction of resonances in the system is expected to account for the existence of attractive interactions
among hadrons [59]. But in order to account for the realistic behaviour of HG, a short range repulsion must also be introduced. The importance of such correction is more obvious when we calculate the
phase transition using IHG picture which shows the reappearance of hadronic phase as a stable configuration in the simple Gibbs construction of the phase equilibrium between the HG and QGP phases at
very high baryon densities or chemical potentials. This anomalous situation [60–62] cannot be explained because we know that the stable phase at any given $(T,\mu_B)$ is the phase which has a larger pressure.
Once the system makes a transition to the QGP phase, it is expected to remain in that phase even at extremely large $T$ and $\mu_B$ due to the property of asymptotic freedom of QCD. Moreover, it is expected that
the hadronic interactions become significant when hadrons are densely packed in a hot and dense hadron gas. One significant way to handle the repulsive interaction between hadrons is based on a
macroscopic description in which the hadrons are given a geometrical size, and hence they will experience a hardcore repulsive interaction when they touch each other, and consequently a van-der Waals
excluded-volume effect is visible. As a result, the particle yields are essentially reduced in comparison to that of IHG model, and also the anomalous situation in the phase transition mentioned
above disappears. Recently, many phenomenological models incorporating excluded-volume effect have been widely used to account for the properties of hot and dense HG [63–76]. However, these
descriptions usually suffer from some serious shortcomings. First, mostly, the descriptions are thermodynamically inconsistent because one does not have a well-defined partition function or
thermodynamical potential ($\Omega$) from which other thermodynamical quantities can be derived, for example, the baryon density $n_B$. Secondly, for the dense hadron gas, the excluded-volume model violates
causality (i.e., velocity of sound in the medium is greater than the velocity of light). So, although some of the models explain the data very well, such shortcomings make the models mostly
unattractive. Sun et al. [76] have incorporated the effect of relativistic correction in the formulation of an EOS for HG. However, such effect is expected to be very small because the masses of
almost all the hadrons present in the hot and dense HG are larger than the temperature of the system; so they are usually treated as nonrelativistic particles except pion whose mass is comparable to
the temperature, but most of the pions come from resonances whose masses are again larger than the temperature of HG [77]. In [78], two-source thermal model of an ideal hadron gas is used to analyze
the experimental data on hadron yields and ratios. In this model, the two sources, a central core and a surrounding halo, are in local equilibrium at chemical freeze-out. It has been claimed that the
excluded-volume effect becomes less important in the formulation of EOS for hadron gas in the two-source model.
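The size of the excluded-volume suppression can be illustrated numerically. This is a sketch only: Boltzmann statistics, a single nucleon-like species, and the simplest van-der Waals suppression factor $n = n^{id}/(1 + v\,n^{id})$ are assumed here; as stressed above, a thermodynamically consistent treatment must instead derive the density from a well-defined partition function, which also shifts the chemical potential.

```python
import math

HBARC = 0.19733  # GeV*fm, converts densities in GeV^3 to fm^-3

def n_ideal(T, mu, m, g=4):
    """Pointlike Boltzmann number density (fm^-3) of one hadron species:
    n = g/(2 pi^2) * Integral p^2 exp(-(E - mu)/T) dp,  E = sqrt(p^2 + m^2),
    with T, mu, m in GeV; simple Riemann sum for the momentum integral."""
    pmax, steps = m + 25.0 * T, 4000
    dp = pmax / steps
    total = 0.0
    for i in range(1, steps + 1):
        p = i * dp
        E = math.sqrt(p * p + m * m)
        total += p * p * math.exp(-(E - mu) / T) * dp
    return g / (2.0 * math.pi ** 2) * total / HBARC ** 3

def n_excluded(T, mu, m, v, g=4):
    """Naive eigenvolume correction with hardcore volume v (fm^3):
    n = n_id / (1 + v * n_id), so the density can never exceed 1/v."""
    nid = n_ideal(T, mu, m, g)
    return nid / (1.0 + v * nid)
```

For example, n_excluded(0.150, 0.5, 0.938, v=1.0) for a nucleon-like gas at T = 150 MeV stays strictly below the pointlike value and, by construction, below 1/v, which is precisely the saturation mechanism that removes the anomalous reappearance of the hadronic phase at large densities.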
Another important approach used in the formulation of an EOS for the HG phase is mean-field theoretical models [79–82] and their phenomenological generalizations [83–85]. These models use the local
renormalizable Lagrangian densities with baryonic and mesonic degrees of freedom for the description of HG phase. These models rigorously respect causality. Most importantly they also reproduce the
ground state properties of the nuclear matter in the low-density limit. The short-range repulsive interaction in these models arises mainly due to $\omega$-exchange between a pair of baryons. It leads to the
Yukawa potential, which further gives a mean potential energy $U_B$ proportional to the net baryon density $n_B$. Thus $U_B$ vanishes in the $n_B \to 0$ limit. In the baryonless limit, hadrons (mesons) can
still approach pointlike behaviour due to the vanishing of the repulsive interactions between them. It means that, in principle, one can excite a large number of hadronic resonances at large $T$. This
will again make the pressure in the HG phase larger than the pressure in the QGP phase, and the hadronic phase would again become stable at sufficiently high temperature, and the Gibbs construction
can again yield the HG phase at large $T$. In some recent approaches this problem has been cured by considering another temperature-dependent mean-field potential, where the relevant density $n$ is the sum of particle and antiparticle
number densities. This mean-field represents the van-der Waals hardcore repulsive interaction between two particles and depends on the total number density $n$, which is nonzero even when the net baryon density is
the large temperature limit [72, 73, 75]. However, in the high-density limit, the presence of a large number of hyperons and their unknown couplings to the mesonic fields generates a large amount of
uncertainty in the EOS of HG in the mean-field approach. Moreover, the assumption of how many particles and their resonances should be incorporated in the system is a crucial one in the formulation
of EOS in this approach. The mean-field models can usually handle very few resonances only in the description of HG and hence are not as such reliable ones [75].
In this review, we discuss the formulation of various thermal models existing in the literature quite in detail and their applications to the analysis of particle production in ultrarelativistic
heavy-ion collisions. We show that it is important to incorporate the interactions between hadrons in a consistent manner while formulating the EOS for hot, dense HG. For repulsive interactions,
van-der Waals type of excluded-volume effect is often used in thermal models, while resonances are included in the system to account for the attractive interactions. We precisely demand that such
interactions must be incorporated in the models in a thermodynamically consistent way. There are still some thermal models in the literature which lack thermodynamical self-consistency. We have
proposed a new excluded-volume model where an equal hardcore size is assigned to each type of baryons in the HG, while the mesons are treated as pointlike particles. We have successfully used this
model in calculating various properties of HG such as number density and energy density. We have compared our results with those of the other models. Further, we have extracted chemical freeze-out
parameters in various thermal models by analyzing the particle ratios over a broad energy range and parameterized them with the center-of-mass energy. We use these parameterizations to calculate the
particle ratios at various center-of-mass energies and compare them with the experimental data. We further provide a proposal in the form of freeze-out conditions for a unified description of
chemical freeze-out of hadrons in various thermal models. An extension of the thermal model for the study of the various transport properties of hadrons will also be discussed. We also analyze the
rapidity as well as transverse mass spectra of hadrons using our thermal model and examine the role of any flow existing in the medium by matching the theoretical results with the experimental data.
Thus the thermal approach indeed provides a very satisfactory description of various features of HG, reproducing a large number of experimental results covering a wide energy range from alternating gradient synchrotron (AGS) to large hadron collider (LHC) energies.
2. Formulation of Thermal Models
Various types of thermal models for HG using an excluded-volume correction based on a van-der Waals type effect have been proposed. Thermal models have often used the grand canonical ensemble description to write the partition function for the system because it suits systems with a large number of produced hadrons [30] and/or large volume. However, for nonrelativistic statistical mechanics, the use of a grand canonical ensemble is usually just a matter of convenience [86]. Furthermore, the canonical ensemble can be used in the case of small systems (e.g., peripheral nucleus-nucleus collisions) and for low energies in the case of strangeness production [87, 88] due to the canonical suppression of the phase space. Similarly, some descriptions also employ an isobaric partition function in the derivation of their HG model. We succinctly summarize the features of some of these models as follows.
2.1. Hagedorn Model
In the Hagedorn model [67, 68], it is assumed that the excluded-volume correction is proportional to the pointlike energy density . It is also assumed that the density of states of the finite-size
particles in total volume can be taken as precisely the same as that of pointlike particles in the available volume where and is the eigen volume of the particle in the HG. Thus, the grand canonical
partition function satisfies the relation: The sum of eigen volumes is given by the ratio of the invariant cluster mass to the total energy density, and is the fugacity, that is, . Hence , and the
energy density . is the energy density when particles are treated as pointlike. Now, using the expression for , one finally gets When , which is obviously the upper limit for since it gives the
energy density existing inside a nucleon and is usually regarded as the latent heat density required for the phase transition. Here represents the bag constant. The expressions of number density and
pressure can similarly be written as follows: Here, and are the number density and pressure of pointlike particles, respectively.
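The Hagedorn-model correction described above can be illustrated numerically. The sketch below assumes the standard Hagedorn relation in which the corrected energy density is the pointlike one suppressed by a factor 1/(1 + ε_pt/4B), so that it saturates at the latent-heat density 4B; the function name and the value chosen for 4B are illustrative, not taken from the text.

```python
def hagedorn_energy_density(eps_pt, four_b=2.0):
    """Excluded-volume-corrected energy density in the Hagedorn model (a sketch).

    Assumes the standard relation eps = eps_pt / (1 + eps_pt / (4B)), which
    saturates at the latent-heat density 4B as eps_pt -> infinity.
    eps_pt : pointlike energy density (GeV/fm^3)
    four_b : 4B, the saturation (latent-heat) density (GeV/fm^3, assumed value)
    """
    return eps_pt / (1.0 + eps_pt / four_b)

# The correction is negligible for a dilute gas and saturates at 4B:
for eps_pt in (0.01, 1.0, 100.0, 1e6):
    print(eps_pt, hagedorn_energy_density(eps_pt))
```

For small pointlike densities the correction is negligible, while the output approaches 4B however large the pointlike density becomes, which is the upper limit discussed above.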
2.2. Cleymans-Suhonen Model
In order to include van-der Waals type of repulsion between baryons, Cleymans and Suhonen [63] assigned an equal hardcore radius to each baryon. Consequently, the available volume for baryons is ;
here is the eigen volume of baryon, and is the total number. As a result, the net excluded number density, pressure, and the energy density of a multicomponent HG are given as follows: where , , and
are net baryon density, pressure, and energy density of pointlike particles, respectively, and is the fraction of occupied volume. Kouno and Takagi [66] modified these expressions by considering the existence of a repulsive interaction either between a pair of baryons or between a pair of antibaryons only. Therefore, the expressions (4), (5), and (6) take the following forms: where and are the number density of the pointlike baryons and antibaryons, respectively, and and are the corresponding energy density and pressure. Similarly, , , and are the number density, pressure, and energy
density of pointlike mesons, respectively.
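The available-volume argument of the Cleymans-Suhonen model can be sketched as follows, assuming the relation in which each pointlike species density is suppressed by the common factor 1/(1 + Σ_j n_j^pt V_j^0); the function name is illustrative.

```python
def excluded_number_densities(n_pt, v0):
    """Cleymans-Suhonen-type excluded-volume correction (a sketch).

    Assumes each baryon species i, with pointlike density n_pt[i] and
    eigenvolume v0[i], is suppressed by the common factor
    1 / (1 + sum_j n_pt[j] * v0[j]) that follows from the
    available-volume argument in the text.
    n_pt : pointlike number densities (fm^-3); v0 : eigenvolumes (fm^3)
    """
    denom = 1.0 + sum(n * v for n, v in zip(n_pt, v0))
    return [n / denom for n in n_pt]

# Two illustrative species; every density is suppressed by the same factor:
print(excluded_number_densities([0.5, 0.2], [1.0, 1.0]))
```
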
2.3. Rischke-Gorenstein-Stocker-Greiner (RGSG) Model
The above discussed models possess a shortcoming that they are thermodynamically inconsistent because the thermodynamical variables like cannot be derived from a partition function or thermodynamical
potential (), for example, . Several proposals have come to correct such types of inconsistencies. Rischke et al. [69] have attempted to obtain a thermodynamically consistent formulation. In this
model, the grand canonical partition function for pointlike baryons can be written in terms of canonical partition function as follows: They further modified the canonical partition function by
introducing a step function in the volume so as to incorporate excluded-volume correction into the formalism. Therefore, the grand canonical partition function (8) finally takes the following form:
The above ansatz is motivated by considering particles with eigen-volume in a volume as pointlike particles in the available volume [69]. But, the problem in the calculation of (9) is the dependence
of the available volume on the varying number of particles [69]. To overcome this difficulty, one should use the Laplace transformation of (9). Using the Laplace transform, one gets the isobaric
partition function as follows: or where and . Finally, we get a transcendental type of equation as follows: where, The expressions for the number density, entropy density, and energy density in this
model can thus take a familiar form like These equations resemble (4) and (6) as given in Cleymans-Suhonen model [63] with being replaced by . The above model can be extended for a hadron gas
involving several baryonic species as follows: where, with . Particle number density for the species can be calculated from following equation: Unfortunately, the above model involves cumbersome,
transcendental expressions which are usually not easy to calculate. Furthermore, this model fails in the limit of because becomes negative in this limit.
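The transcendental structure of the RGSG model can be made concrete with a toy example. In that model the excluded-volume pressure satisfies p(T, μ) = p_pt(T, μ − V0·p), which can be solved by fixed-point iteration. The pointlike pressure used below (a massless Boltzmann gas) is purely illustrative; any p_pt(T, μ) can be plugged in.

```python
import math

def p_pointlike(T, mu, g=4.0):
    """Toy pointlike pressure: massless Boltzmann gas, p = (g/pi^2) T^4 e^{mu/T}.
    (Illustrative stand-in; not the full hadron-gas expression.)"""
    return (g / math.pi**2) * T**4 * math.exp(mu / T)

def p_rgsg(T, mu, v0, tol=1e-12, max_iter=200):
    """Solve the RGSG-type transcendental equation p = p_pt(T, mu - v0*p)
    by fixed-point iteration (a sketch of the scheme, not the paper's code)."""
    p = p_pointlike(T, mu)              # start from the pointlike value
    for _ in range(max_iter):
        p_new = p_pointlike(T, mu - v0 * p)
        if abs(p_new - p) < tol:
            return p_new
        p = p_new
    return p

# The excluded volume suppresses the pressure relative to the pointlike gas:
T, mu, v0 = 0.16, 0.4, 1.0   # GeV, GeV, fm^3 (with p in GeV/fm^3)
print(p_rgsg(T, mu, v0) < p_pointlike(T, mu))  # True
```

Since p_pt increases with μ and the shifted chemical potential μ − V0·p is always below μ, the iteration converges quickly whenever V0·p ≪ T, mirroring the suppression of thermodynamic quantities discussed above.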
2.4. New Excluded-Volume Model
Singh et al. [70] have proposed a thermodynamically consistent excluded-volume model in which the grand canonical partition function using Boltzmann approximation can be written as follows: where and
are the degeneracy factor and the fugacity of species of baryons, respectively, is the magnitude of the momentum of baryons, and is the eigenvolume assigned to each baryon of species; hence becomes
the total occupied volume where represents the total number of baryons of species. We can rewrite (18) as follows: where integral is Thus we have obtained the grand canonical partition function as
given by (19) by incorporating the excluded-volume effect explicitly in the partition function. The number density of baryons after excluded-volume correction () can be obtained as
So our prescription is thermodynamically consistent, and it leads to a transcendental equation: Here is the fractional occupied volume. It is clear that if we put the factor and consider only one
type of baryons in the system, then (22) can be reduced to the thermodynamically inconsistent expression (4). The presence of in (22) thus removes the thermodynamical inconsistency. For
single-component HG, the solution of (22) can be taken as follows: For a multicomponent hadron gas, (22) takes the following form: Using the method of parametric space [89], we write where is the
parameter and gives the space. We finally get the solution of (24) as follows: where is a parameter such that If ’s and are known, one can determine . The quantity is fixed by setting , and one
obtains ; here the subscript 1 denotes the nucleon degree of freedom, and is the total number of baryonic species. Hence by using and one can calculate . It is obvious that the above solution is not
unique, since it contains some parameters such as , one of which has been fixed to zero arbitrarily. Alternatively, one can assume that [70] Here an assumption is made that the number density of
baryon will only depend on the fugacity of the same baryon. Then (24) reduces to a simple form as The solution of (29) can be obtained in a straightforward manner as follows [70]: where Now, can be
obtained by using the following relation: where Here is the ratio of the occupied volume to the available volume. Finally, can be written as The solution obtained in this model is very simple and
easy. There is no arbitrary parameter in this formalism, so it can be regarded as a unique solution. However, this theory still depends crucially on the assumption that the number density of species
is a function of the alone, and it is independent of the fugacities of other kinds of baryons. As the interactions between different species become significant in hot and dense HG, this assumption is
no longer valid. Moreover, one serious problem crops up, since we cannot do calculation in this model for MeV (and MeV). This particular limiting value of temperature and baryon chemical potential
depends significantly on the masses and the degeneracy factors of the baryonic resonances considered in the calculation.
In order to remove above discrepancies, Mishra and Singh [32, 33] have proposed a thermodynamically consistent EOS for a hot and dense HG using Boltzmann's statistics. They have proposed an
alternative way to solve the transcendental equation (22). We have extended this model by using quantum statistics in the grand canonical partition function, so that our model works even for the extreme values of temperature and baryon chemical potential. Thus (20) can be rewritten as follows [31]: and (22) takes the following form after using the quantum statistics in the partition
function: where is the partial derivative of with respect to . We can write in an operator equation form as follows [32, 33, 90, 91]: where with ; is the density of pointlike baryons of th species
and the operator has the following form: Using the Neumann iteration method and retaining the series up to the term, we get After solving (39), we finally get the expression for the total pressure [70] of the
hadron gas as follows: Here is the pressure due to th type of meson.
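The Neumann iteration used to solve the operator equation above can be illustrated with a scalar toy problem. The actual operator in the model acts on functions of the fugacities; here a linear operator Ω(x) = c·x is used purely for illustration, so that the exact answer a/(1 − c) is known and the truncation error of the iteration is visible.

```python
def neumann_solve(a, omega, order=2):
    """Schematic Neumann iteration for an operator equation x = a + Omega(x).

    Retains terms up to the given order, mirroring the truncation used for
    the density equation in the text (purely illustrative; the model's
    operator acts on functions of the fugacities, not on scalars)."""
    x = a
    for _ in range(order):
        x = a + omega(x)
    return x

# Linear toy operator Omega(x) = c*x with exact solution a/(1 - c):
a, c = 1.0, 0.1
approx = neumann_solve(a, lambda x: c * x, order=2)
exact = a / (1 - c)
print(approx, exact)   # second-order truncation error is O(c^3)
```
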
Here we emphasize that we consider the repulsion arising only between a pair of baryons and/or antibaryons because we assign each of them exclusively a hardcore volume. In order to make the
calculation simple, we have taken an equal volume for each type of baryons with a hardcore radius fm. We have considered in our calculation all baryons and mesons and their resonances having masses
up to a cutoff value of 2 GeV/c² and lying in the HG spectrum. Here only those resonances which possess well-defined masses and widths have been incorporated in the calculations. Branching ratios for sequential decays have been suitably accounted for, and in the presence of several decay channels, only the dominant mode is included. We have also imposed the condition of strangeness neutrality
strictly by putting , where is the strangeness quantum number of the th hadron and is the strange (antistrange) hadron density, respectively. Using this constraint equation, we get the value of
strange chemical potential in terms of . Having done all these things, we proceed to calculate the energy density of each baryon species by using the following formula: Similarly, entropy density is
It is evident that this approach is simpler in comparison to other thermodynamically consistent excluded-volume approaches, which often involve transcendental final expressions [69, 86]. Our approach does not involve any arbitrary parameter in the calculation. Moreover, since we do not use Boltzmann's approximation, this approach can be used for extremely low as well as extremely large values of and , where all other approaches fail to give satisfactory results.
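The pointlike quantum-statistics densities that enter the formulas above are standard momentum integrals. The sketch below evaluates the Fermi-Dirac number density of one baryon species with a simple midpoint rule; the function name, the degeneracy value, and the integration cutoff are illustrative assumptions.

```python
import math

def n_pointlike_fd(T, mu, m, g=2.0, p_max=10.0, steps=4000):
    """Pointlike baryon number density with Fermi-Dirac statistics (a sketch):
        n = g/(2 pi^2) * Int_0^inf p^2 dp / (exp((E - mu)/T) + 1),
    with E = sqrt(p^2 + m^2).  Units: GeV for T, mu, m, p; the density
    comes out in GeV^3 (multiply by (1/0.1973)^3 to convert to fm^-3)."""
    dp = p_max / steps
    total = 0.0
    for i in range(1, steps + 1):        # simple midpoint rule
        p = (i - 0.5) * dp
        E = math.sqrt(p * p + m * m)
        total += p * p / (math.exp((E - mu) / T) + 1.0)
    return g / (2.0 * math.pi**2) * total * dp

# Nucleon-like parameters; the Boltzmann limit is recovered when (E - mu)/T >> 1:
print(n_pointlike_fd(T=0.150, mu=0.0, m=0.938, g=4.0))
```
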
3. Statistical and Thermodynamical Consistency
Recently, the question of statistical and thermodynamical consistency in excluded-volume models for HG has been widely discussed [77]. In this section, we reexamine the issue of thermodynamical and
statistical consistency of the excluded-volume models. In RGSG model [69], the single particle grand canonical partition function (9) can be rewritten as follows: where, Here, in this model, in the
available volume () is independent of . Differentiating (43) with respect to , we get the following equation: Multiplying both sides of (45) by , we get We know that the expressions for statistical
and thermodynamical averages of number of baryons are as follows: respectively. Using (47) in (46), we get [77] Thus, we see that in RGSG model, thermodynamical average of number of baryons is
exactly equal to the statistical average of number of baryons. Similarly in this model, we can show that
Now, we calculate the statistical and thermodynamical averages of number of baryons in our excluded-volume model. The grand canonical partition function in our model (i.e., (18)) can take the
following form: where is given by (44). We use Boltzmann's statistics for the sake of convenience and consider only one species of baryons. In our model, , present in the available volume (), is
dependent. However, for a multicomponent system, one cannot use a “fixed ”, because in this case the van-der Waals approximation is not uniquely defined [92, 93]. So, we use an average in our multicomponent
grand partition function. However, at high temperatures it is not possible to use one component van-der Waals description for a system of various species with different masses [92, 93]. Now
differentiating (50) with respect to , we get Multiplying both sides of (51) by , we get Using the definitions (47), (52) can take the following form: or Here is the thermal average of number density
of baryons, is the number density of pointlike baryons, and The second term in (54) is the redundant one and arises because , present in the available volume (), is a function of . We call this term
“correction term”. In Figure 2, we have shown the variation of thermodynamical average of the number density of baryons and the “correction term” with respect to at MeV. We see that there is an
almost negligible contribution of this “correction term” to thermodynamical average of number density of baryons. Although, due to this “correction term”, the statistical average of the number
density of baryons is not exactly equal to its thermodynamical average, the difference is so small that it can be neglected. Similarly, we can show that such redundant terms appear while calculating
statistical average of energy density of baryons and arise due to the temperature dependence of . Such terms again give negligible contribution to thermodynamical average of the energy density of
baryons. Here, we see that the statistical and thermodynamical averages of physical quantities such as number density and energy density are approximately equal to each other in our model also. Thus,
our excluded-volume model is not exactly thermodynamically consistent, but it can safely be taken as consistent because the correction term in the averaging procedure appears as negligibly small.
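A thermodynamical consistency check of the kind discussed in this section can be performed numerically: for a consistent model the number density obtained as the derivative ∂p/∂μ must agree with the density obtained from the partition function. The sketch below does this for an RGSG-type pressure built on a toy massless Boltzmann gas; all function names and parameter values are illustrative.

```python
import math

def p_pt(T, mu, g=4.0):
    """Toy pointlike pressure (massless Boltzmann gas); note n_pt = p_pt / T."""
    return (g / math.pi**2) * T**4 * math.exp(mu / T)

def p_ev(T, mu, v0):
    """Excluded-volume pressure from p = p_pt(T, mu - v0*p) (fixed point)."""
    p = p_pt(T, mu)
    for _ in range(200):
        p = p_pt(T, mu - v0 * p)
    return p

def n_thermo(T, mu, v0, h=1e-5):
    """Thermodynamical average n = dp/dmu by central finite difference."""
    return (p_ev(T, mu + h, v0) - p_ev(T, mu - h, v0)) / (2 * h)

# Consistency: dp/dmu should equal n_pt(mu~)/(1 + v0*n_pt(mu~)), mu~ = mu - v0*p
T, mu, v0 = 0.16, 0.4, 1.0
p = p_ev(T, mu, v0)
n_pt_val = p_pt(T, mu - v0 * p) / T
print(abs(n_thermo(T, mu, v0) - n_pt_val / (1 + v0 * n_pt_val)))
```

Differentiating p = p_pt(T, μ − V0·p) with respect to μ gives dp/dμ = n_pt/(1 + V0·n_pt) evaluated at the shifted chemical potential, and the finite-difference check confirms this to numerical precision, which is the statement of exact consistency made for the RGSG model above.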
4. Comparisons between Model Results and Experimental Data
In this section, we review various features of hadron gas and present our comparisons between the results of various HG models and the experimental data.
4.1. Chemical Freeze-Out Criteria
Thermal models provide a systematic study of many important properties of hot and dense HG at chemical freezeout (where inelastic interactions cease). To establish a relation between chemical
freezeout parameters () and , a common method is used to fit the experimental hadron ratios. Many papers [30, 49, 50, 94–96] have appeared in which and are extracted in terms of . In [30], various
hadron ratios are analyzed from GeV to 200GeV, and chemical freeze-out parameters are parameterized in terms of by using following expressions: Here, , , , , , and (the limiting temperature) are
fitting parameters. Various authors [94, 95] have included the strangeness suppression factor () in their model while extracting the chemical freeze-out parameters. In the thermal model, is used to
account for the partial equilibration of the strange particles. Such a situation may arise in elementary p-p collisions and/or peripheral A-A collisions, and mostly the use of is warranted in such
cases [94, 97]. Moreover, has been found in the central collisions at RHIC [98, 99]. We do not require in our model as an additional fitting parameter because we assume that strangeness is also fully
equilibrated in the HG. Also, it has been pointed out that inclusion of in thermal model analysis does not affect the values of fitting parameters and much [30]. Dumitru et al. [96] have used
inhomogeneous freeze-out scenario for the extraction of and at various . In a recent paper [100], condition of vanishing value of or equivalently is used to describe the chemical freezeout line where
, , , and are the kurtosis, the standard deviation, the fourth-order moment, and the susceptibility, respectively. In [101], experimental data on and (here is the skewness) have been compared for the first time with the
lattice QCD calculations and hadron resonance gas model to determine the critical temperature () for the QCD phase transition. Recently, it is shown that the freeze-out parameters in heavy-ion
collisions can be determined by comparing the lattice QCD results for the first three cumulants of net electric charge fluctuations with the experimental data [102]. In Figure 3, we have shown the
energy dependence of the thermal parameters and extracted by various authors. In all the studies, similar behaviour is found except in that of Letessier and Rafelski [95], which may be due to the usage of many
additional free parameters such as light quark occupancy factor () and an isospin fugacity. We have also extracted freeze-out parameters by fitting the experimental particle-ratios from the lowest
SIS energy to the highest RHIC energy using our model [31]. For comparison, we have shown the values obtained in other models, for example, IHG model, Cleymans-Suhonen model, and RGSG model, in Table
1. We then parameterize the variables and in terms of as follows [103]: where the parameters , , , , and have been determined from the best fits: GeV, GeV^−1, GeV, GeV^−1, and GeV^−3. The
systematic error of the fits can be estimated via quadratic deviation [30] defined as follows: where and are the experimental data and thermal model result of either the hadron yield or the ratio of
hadron yields, respectively.
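The freeze-out parameterization and the quadratic-deviation measure described above can be sketched as follows. The parameter values used here are the widely quoted ones of Cleymans et al. [103] (μ_B = d/(1 + e·√s_NN), T = a − b·μ_B² − c·μ_B⁴); they stand in for the blanks in the text and may differ from the best-fit values of this model. The denominator convention in the deviation measure also varies between analyses.

```python
def mu_b(sqrt_s, d=1.308, e=0.273):
    """Freeze-out baryon chemical potential (GeV) vs sqrt(s_NN) (GeV).
    d (GeV) and e (GeV^-1) are the standard Cleymans et al. values,
    used here as illustrative stand-ins."""
    return d / (1.0 + e * sqrt_s)

def t_ch(mu, a=0.166, b=0.139, c=0.053):
    """Freeze-out temperature (GeV) along T = a - b*mu^2 - c*mu^4,
    with a (GeV), b (GeV^-1), c (GeV^-3) as illustrative stand-ins."""
    return a - b * mu**2 - c * mu**4

def quadratic_deviation(r_exp, r_model):
    """delta^2 = sum_i (R_i^exp - R_i^model)^2 / (R_i^exp)^2
    (denominator conventions vary between analyses)."""
    return sum((re - rm) ** 2 / re**2 for re, rm in zip(r_exp, r_model))

# mu_B falls and T rises toward its limiting value with increasing energy:
for s in (2.7, 17.3, 200.0):   # SIS/SPS/RHIC-scale energies in GeV
    m = mu_b(s)
    print(s, round(m, 3), round(t_ch(m), 3))
```
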
In this analysis, we have used full phase space () data at all center-of-mass energies except at RHIC energies where only midrapidity data are available for all the ratios. Moreover, the midrapidity
and full phase space data at these energies differ only slightly, as pointed out by Alt and the NA49 Collaboration for the and ratios [104]. In Figure 4, we have shown the parametrization of the freeze-out
values of baryon chemical potential with respect to , and similarly in Figure 5, we have shown the chemical freeze-out curve between temperature and baryon chemical potential [31].
4.2. Hadron Ratios
In experimental measurements of various particle ratios at various centre-of-mass energies [105–111], it is found that there is an unusually sharp variation in the ratio, increasing up to a peak value. This strong variation of the ratio with energy indicates the critical temperature of the QCD phase transition [112] between HG and QGP [113, 114], and nontrivial information about the critical
temperature MeV has been extracted [114]. Figure 6 shows the variation of with . We compare the experimental data with various thermal models [31, 63, 69] and find that our model calculation gives
much better fit to the experimental data in comparison to other models. We get a sharp peak around centre-of-mass energy of 5GeV, and our results thus almost reproduce all the features of the
experimental data.
In Figure 7, we have shown the variation of the ratio with . The yields in the thermal models are often much higher in comparison to the data. We notice that no thermal model can suitably account for the multiplicity ratio of the multistrange particle, since is a hidden-strange quark combination. However, the quark coalescence model assuming QGP formation has been claimed to explain the results [115, 116] successfully. In the thermal models, the results for the multistrange particles raise doubt over the degree of chemical equilibration for strangeness reached in the HG fireball. One can use an arbitrary parameter as is done in several models. The failures of thermal models in these cases may indicate the presence of QGP formation, but it is still not clear. In Figure 8, we have shown the
energy dependence of antiparticle to particle ratios; for example, , , , and . These ratios increase sharply with respect to and then almost saturate at higher energies reaching the value equal to at
LHC energy. On comparison with the experimental data we find that almost all the thermal models describe these data successfully at all the center-of-mass energies. However, RGSG model [69] fails to
describe the data at SPS and RHIC energies in comparison to other models [31].
4.3. Thermodynamical Properties
We present the thermal model calculations of various thermodynamical properties of HG such as entropy per baryon () and energy density and compare the results with the predictions of a microscopic
model URASiMA event generator developed by Sasaki [122]. URASiMA (ultrarelativistic AA collision simulator based on multiple scattering algorithm) is a microscopic model which includes the realistic
interactions between hadrons. In the URASiMA event generator, molecular-dynamical simulations for a system of HG are performed. URASiMA includes multibody absorptions, which are the reverse processes of multiparticle production and are not included in any other model. Although URASiMA gives a realistic EOS for hot and dense HG, it does not include antibaryons and strange particles in its simulation, which is very crucial. In Figure 9, we have plotted the variation of with respect to temperature () at fixed net baryon density (). calculated in our model shows better agreement with the results of Sasaki [122] than the other excluded-volume models. It is found that the thermal model approach, which incorporates macroscopic geometrical features, gives results close to those of the simulation involving microscopic interactions between hadrons. Various parameters, such as the coupling constants of hadrons, appear in the URASiMA model due to the interactions between hadrons. It is certainly encouraging to find an excellent agreement between the results obtained with two widely different approaches.
Figure 10 represents the variation of the energy density of HG with respect to at constant . Again our model calculation is closer to the result of URASiMA than the other excluded-volume models. The energy density increases very slowly with the temperature initially and then rises rapidly at higher temperatures.
4.4. Causality
One of the deficiencies of excluded-volume models is the violation of causality in the hot and dense hadron gas; that is, the sound velocity is larger than the velocity of light in the medium. In
other words, , in units of , means that the medium transmits information at a speed faster than [74]. Since in this review we are discussing the results of various excluded-volume models, it would be interesting to see whether these models respect causality or not. In Figure 11, we have plotted the variation of the total hadronic pressure as a function of the energy density of the HG at a fixed entropy per particle using our model calculation [31]. We find for a fixed that the pressure varies linearly with respect to the energy density. In Figure 12, we have shown the variation of ( at fixed ) with respect to . We find that in our model with interacting particles. We get (i.e., ) for an ideal gas consisting of ultrarelativistic particles. This feature endorses our viewpoint that our model is not only thermodynamically consistent but also does not involve any violation of causality even at large density. Similarly, in the RGSG model [69], we do not notice the value of exceeding , as shown in Figure 12. It should be mentioned that we are using full quantum statistics in all the models taken for comparison here. However, we find that the values in the RGSG model cannot be extracted when the temperature of the HG exceeds 250 MeV. No such restriction applies for our model.
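The causality check discussed above amounts to evaluating the squared speed of sound c_s² = dp/dε and verifying that it stays below 1 (in units of c). The sketch below does this for a single-species Boltzmann gas at zero chemical potential via finite differences in T; the ultrarelativistic limit must recover c_s² = 1/3. The integration cutoff and degeneracy are illustrative assumptions.

```python
import math

def _integrate(f, p_max=8.0, steps=4000):
    """Midpoint-rule integral of f(p) over [0, p_max] (illustrative helper)."""
    dp = p_max / steps
    return sum(f((i - 0.5) * dp) for i in range(1, steps + 1)) * dp

def pressure(T, m, g=3.0):
    """Boltzmann-gas pressure at mu = 0: p = g/(6 pi^2) Int p^4/E e^{-E/T} dp."""
    return g / (6 * math.pi**2) * _integrate(
        lambda p: p**4 / math.sqrt(p*p + m*m) * math.exp(-math.sqrt(p*p + m*m) / T))

def energy_density(T, m, g=3.0):
    """Boltzmann-gas energy density: eps = g/(2 pi^2) Int p^2 E e^{-E/T} dp."""
    return g / (2 * math.pi**2) * _integrate(
        lambda p: p*p * math.sqrt(p*p + m*m) * math.exp(-math.sqrt(p*p + m*m) / T))

def cs2(T, m, h=1e-4):
    """c_s^2 = dp/deps along mu = 0, by finite differences in T."""
    dp = pressure(T + h, m) - pressure(T - h, m)
    de = energy_density(T + h, m) - energy_density(T - h, m)
    return dp / de

# The massless limit recovers c_s^2 = 1/3; a pion-mass gas stays causal:
print(cs2(0.15, 1e-6))   # ~ 1/3
print(cs2(0.15, 0.14))   # below 1/3, well below 1
```
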
4.5. Universal Freeze-Out Criteria
One of the most remarkable successes of thermal models is in explaining the multiplicities and the particle ratios of various particles produced in heavy-ion experiments from the lowest SIS energy to
the maximum LHC energy. Some properties of the thermal fireball are found to be common to all collision energies, which gives universal freeze-out conditions in heavy-ion collisions. Now, we review the
applicability of thermal models in deriving some useful chemical freeze-out criteria for the fireball. Recent studies [39, 48, 103, 123–125] predict that the following empirical conditions are to be
valid on the entire freeze-out hypersurface of the fireball: (i) energy per hadron always has a fixed value at 1.08GeV; (ii) sum of baryon and anti-baryon densities is ; (iii) normalized entropy
density is . Further, Cleymans et al. [103] have found that all the above conditions separately give a satisfactory description of the chemical freeze-out parameters and in an IHG picture only.
Moreover, it was also found that these conditions are independent of collision energy and the geometry of colliding nuclei. Furthermore, Cleymans et al. [103] have hinted that incorporation of
excluded-volume correction leads to wild as well as disastrous effects on these conditions. The purpose in this section is to reinvestigate the validity of these freeze-out criteria in
excluded-volume models. Along with these conditions, a condition, formulated by using percolation theory, is also proposed as a chemical freeze-out condition [124]. An assumption is made that in the
baryonless region the hadronic matter freezes out due to hadron resonances and vacuum percolation, while in the baryon rich region the freeze-out takes place due to baryon percolation. Thus, the
condition which describes the chemical freeze-out line is formulated by the following equation [124]: where is the volume of a hadron. The numbers 1.24 and 0.34 are obtained within percolation theory.
In Figure 13, we have shown the variation of with respect to at the chemical freeze-out point of the fireball. The ratio shows a constant value of 1.0 in our model, and it shows also a remarkable
energy independence. Similarly the curve in IHG model shows that the value for is slightly larger than the one as reported in [103]. However, results support the finding that is almost independent of
energy and also of the geometry of the nuclei. Most importantly, we notice that the inclusion of the excluded-volume correction does not change the result much which is contrary to the claim of
Cleymans et al. [103]. The condition GeV was successfully used in the literature to make predictions [45] of freeze-out parameters at SPS energies of 40 and 80A GeV for Pb-Pb collisions long before
the data were taken [31]. Moreover, we have also shown, in Figure 13, the curves in the Cleymans-Suhonen model [63] and the RGSG model [69], and we notice a small variation with particularly at lower
energies. In Figure 14, we study a possible new freeze-out criterion which was not proposed earlier. We show that the entropy per particle, that is, , yields a remarkable energy independence in our model calculation. The quantity describes the chemical freeze-out criterion and is almost independent of the centre-of-mass energy in our model calculation. However, the results below GeV do not give promising support to our criterion and also reveal some energy dependence. This thus indicates that the use of excluded-volume models and thermal descriptions for the HG at very low energies is possibly not valid. Similar results were obtained in the RGSG, Cleymans-Suhonen, and IHG models also [31]. The conditions, that is, GeV and , at the chemical freeze-out form a constant hypersurface from where all the particles freeze out and all kinds of inelastic collisions cease simultaneously, and the particles then fly towards the detectors. Thus all particles attain thermal equilibrium at the line of chemical freeze-out, and when they come out from the fireball, they have an almost constant energy per particle (≈1.0) and entropy per particle (≈7.0). Moreover, these values are independent of the initial collision energy as well as the geometry of the colliding nuclei.
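The energy-per-hadron criterion can be probed with a toy calculation. The sketch below computes the mean energy ⟨E⟩ of a single Boltzmann species; the E/N ≈ 1.08 GeV criterion refers to the full resonance-rich hadron gas, so a one- or two-species toy model only shows the trend (light mesons pull E/N down, baryons pull it up). Masses and temperature are illustrative.

```python
import math

def energy_per_particle(T, m, p_max=10.0, steps=5000):
    """<E> = Int p^2 E e^{-E/T} dp / Int p^2 e^{-E/T} dp for one Boltzmann
    species (GeV).  A toy ingredient of the E/N freeze-out criterion; the
    1.08 GeV value in the text refers to the full hadron-resonance gas."""
    dp = p_max / steps
    num = den = 0.0
    for i in range(1, steps + 1):
        p = (i - 0.5) * dp
        E = math.sqrt(p * p + m * m)
        w = p * p * math.exp(-E / T)
        num += w * E
        den += w
    return num / den

# Pions alone give E/N well below 1 GeV; adding baryons pulls E/N up:
print(energy_per_particle(0.160, 0.140))   # pion-like species
print(energy_per_particle(0.160, 0.938))   # nucleon-like species
```
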
Our finding lends support to the crucial assumption of HG fireball achieving chemical equilibrium in the heavy-ion collisions from the lowest SIS to RHIC energy, and the EOS of the HG developed by us
indeed gives a proper description of the hot and dense fireball and its subsequent expansion. However, we still do not get any information regarding QGP formation from these studies. The chemical
equilibrium once attained by the hot and dense HG removes any memory regarding QGP existing in the fireball before HG phase. Furthermore, in a heavy-ion collision, a large amount of kinetic energy
becomes available, and part of it is always lost during the collision due to dissipative processes. In thermal description of the fireball, we ignore the effect of such processes, and we assume that
all available kinetic energy (or momentum) is globally thermalized at the freeze-out density. The experimental observation of collective flow developed in the hot, dense matter reveals the unsatisfactory nature of this assumption.
4.6. Transport Properties of HG
Transport coefficients are very important tools in quantifying the properties of strongly interacting relativistic fluid and its critical phenomena, that is, phase transition and critical point [127–
129]. Fluctuations cause the system to depart from equilibrium, and a nonequilibrated system is created for a brief time. The response of the system to such fluctuations is essentially
described by the transport coefficients, for example, shear viscosity, bulk viscosity, speed of sound, and so forth. Recently the data for the collective flow obtained from RHIC and LHC experiments
indicate that the system created in these experiments behaves as a strongly interacting perfect fluid [130, 131], whereas we expected that the QGP created in these experiments should behave like a perfect gas. The perfect fluid created after the phase transition indicates a very low value of the shear viscosity to entropy ratio, so that dissipative effects are negligible and the collective flow is large, as obtained in heavy-ion collision experiments [10, 132, 133]. There have been several analytic calculations of and for simple hadronic systems [134–140], along with some sophisticated
microscopic transport model calculations [141–143] in the literature. Furthermore, some calculations predict that the minimum of shear viscosity to entropy density is related with the QCD phase
transition [144–149]. Similarly sound velocity is an important property of the matter created in heavy-ion collision experiments because the hydrodynamic evolution of this matter strongly depends on
it. A minimum occurring in the sound velocity has also been interpreted in terms of a phase transition [29, 145, 150–155], and further, the presence of a shallow minimum corresponds to a cross-over
transition [156]. In view of the above, it is worthwhile to study in detail the transport properties of the HG in order to fully comprehend the nature of the matter created in the heavy-ion
collisions as well as the involved phase transition phenomenon. In this section, we have used thermal models to calculate the transport properties of HG such as shear viscosity to entropy ratio [31].
We calculate the shear viscosity in our thermal model as was done previously by Gorenstein et al. [157] using the RGSG model. According to molecular kinetic theory, the shear viscosity behaves as [158]
η ≈ (1/3) n ⟨p⟩ λ,
where n is the particle density, λ is the mean free path, and ⟨p⟩ is the average thermal momentum of the baryons or antibaryons. For a mixture of particle species with different masses but the same hardcore radius r, the shear viscosity can be calculated by the following equation [157]:
η = (5 / (64√8 π r²)) Σ_i ⟨|p_i|⟩ (n_i / n),
where n_i is the number density of the ith species of baryons (antibaryons) and n is the total baryon density. In Figure 15, we show the variation of η/s with respect to temperature as obtained in our model for a HG with a fixed baryonic hardcore radius, and compare our results with those of Gorenstein et al. [157]. We find that near the expected QCD phase transition temperature (T_c = 170–180 MeV) η/s takes a lower value in our HG model than in the other models. In fact, η/s in our model comes close to the lower bound (η/s = 1/4π) suggested by AdS/QCD theories [159, 160]. Recently, measurements in Pb-Pb collisions at the Large Hadron Collider (LHC) support this value of η/s when compared with viscous hydrodynamic calculations of the flow [161].
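To make the kinetic-theory estimate concrete, here is a minimal Python sketch (our own illustration, not code from [157] or [158]): it evaluates η ≈ n⟨p⟩λ/3 for a Boltzmann pion gas with an assumed hard-core radius of 0.5 fm and compares the resulting η/s with the 1/4π bound. All parameter values and function names are illustrative assumptions.

```python
import math

# Illustrative kinetic-theory estimate for a Boltzmann pion gas with a
# hard-core radius. Natural units: GeV, hbar = c = 1; 1 fm = 5.068 GeV^-1.
M_PI = 0.140          # pion mass (GeV)
G_PI = 3.0            # pion degeneracy
FM_TO_GEV = 5.068     # 1 fm in GeV^-1

def _integrate(f, a, b, n=4000):
    """Simple trapezoidal quadrature (stdlib only)."""
    h = (b - a) / n
    s = 0.5 * (f(a) + f(b))
    for i in range(1, n):
        s += f(a + i * h)
    return s * h

def pion_gas(T, r_fm=0.5):
    """Return (n, <p>, eta, s) for a Boltzmann pion gas at temperature T (GeV)."""
    r = r_fm * FM_TO_GEV
    sigma = 4.0 * math.pi * r * r              # hard-sphere cross section, pi (2r)^2
    E = lambda p: math.sqrt(p * p + M_PI * M_PI)
    w = lambda p: p * p * math.exp(-E(p) / T)  # Boltzmann weight times phase space
    norm = _integrate(w, 0.0, 5.0)
    n = G_PI / (2.0 * math.pi ** 2) * norm
    p_avg = _integrate(lambda p: p * w(p), 0.0, 5.0) / norm
    lam = 1.0 / (math.sqrt(2.0) * n * sigma)   # mean free path
    eta = n * p_avg * lam / 3.0                # eta ~ n <p> lambda / 3
    eps = G_PI / (2.0 * math.pi ** 2) * _integrate(lambda p: E(p) * w(p), 0.0, 5.0)
    s = (eps + n * T) / T                      # entropy density at mu = 0
    return n, p_avg, eta, s

n, p_avg, eta, s = pion_gas(0.170)
print("eta/s =", eta / s, "  KSS bound 1/(4 pi) =", 1.0 / (4.0 * math.pi))
```

For a full HG, the single-species average would be replaced by the sum over species weighted by n_i/n, as in the mixture formula of [157].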
In Figure 16, we show the variation of η/s with respect to μ_B at a very low temperature (≈10 MeV) [31]. Here we find that η/s is constant as μ_B increases up to 700 MeV and then decreases sharply. This kind of valley-like structure at low temperature and at μ_B around 950 MeV was also obtained by Chen et al. [137] and Itakura et al. [139]. They have related this structure to the liquid-gas phase transition of nuclear matter. As we increase the temperature further, this valley-like structure disappears. They further suspect that the observation of a discontinuity at the bottom of the valley may correspond to the location of the critical point. Our HG model yields a curve in complete agreement with these results. Figure 17 shows the variation of η and η/s with respect to temperature at a fixed μ_B (= 300 MeV) for a HG with a fixed baryonic hardcore radius. We compare our result with the result obtained in [139]. Here we find that η increases with temperature in our HG model as well as in the simple phenomenological calculation [139], but it decreases with increasing temperature in the low-temperature effective field theory (EFT) calculations [137, 139]. However, η/s decreases with increasing temperature in all three calculations, and our model gives the lowest value of η/s at all temperatures in comparison to the other models. In Figure 18, we show a comparison between η/s calculated in our HG model and the results obtained in a microscopic pion gas model used in [141]. Our model results show fair agreement with the microscopic model results for temperatures higher than 160 MeV, while at lower temperatures the microscopic calculation predicts lower values of η/s in comparison to our results. The most probable reason is that the microscopic calculation has been done for a pion gas only, while at low temperatures the inclusion of baryons in the HG is very important in order to extract a correct value of the shear viscosity [31]. Figure 19 shows the variation of η/s with respect to the center-of-mass energy in our model calculation. We compare our results with those calculated in [157]. The results are similar at lower energies, while they differ significantly at higher energies. However, both calculations show that η/s decreases with increasing center-of-mass energy.
The study of the transport properties of nonequilibrium systems which are not far from an equilibrium state has yielded valuable results in the recent past. The large elliptic flow observed at RHIC indicates that the matter in the fireball behaves as a nearly perfect liquid with a small value of the η/s ratio. After evaluating η/s in strongly coupled theories using the AdS/CFT duality conjecture, a lower bound η/s ≥ 1/4π was reported. We surprisingly notice that the fireball with hot, dense HG as described in our model gives transport coefficients which agree with those obtained in quite different approaches. The temperature and baryon chemical potential dependence of η/s is analyzed and compared with the results obtained in other models. Our results lend support to the claim that knowledge of the EOS and the transport coefficients of the HG is essential for a better understanding of the dynamics of the medium formed in heavy-ion collisions [31].
4.7. Rapidity and Transverse Mass Spectra
In order to suggest any unambiguous signal of QGP, the dynamics of the collisions should be understood properly. Such information can be obtained by analyzing the properties of the various particles emitted from the various stages of the collisions. Hadrons are produced at the end of the hot and dense QGP phase, but they subsequently scatter in the confined hadronic phase prior to decoupling (or “freeze-out”) from the collision system; finally, a collective evolution of the hot and dense matter occurs in the form of transverse, radial, or elliptic flow, which is instrumental in shaping the important features of the particle spectra. The global properties and dynamics of freeze-out can best be studied via hadronic observables such as rapidity distributions and transverse mass spectra [162]. There are various approaches for the study of the rapidity as well as the transverse mass spectra of the HG [28, 37, 38, 163–185]. Hadronic spectra from purely thermal models usually reveal an isotropic distribution of particles [186], and hence the rapidity spectra obtained with purely thermal models do not reproduce the features of the experimental data satisfactorily. Similarly, the transverse mass spectra from thermal models reveal a steeper curve than that observed experimentally. These comparisons illustrate that the fireball formed in heavy-ion collisions does not expand isotropically; there is a prominent input of collective flow in the longitudinal and transverse directions which finally causes anisotropy in the rapidity and transverse mass distributions of the hadrons after freeze-out. Here we mention some of the thermal and collective-flow models used in the literature. Hydrodynamical properties of the expanding fireball were initially discussed by Bjorken and Landau for the central-rapidity and stopping regimes, respectively [28, 163]. However, collisions even at RHIC energies reveal that the colliding nuclei are neither fully stopped nor fully transparent. As the collision energy increases, the longitudinal flow grows stronger and leads to a cylindrical geometry, as postulated in [37, 38, 164, 165]. These works assume that the fireballs are distributed uniformly in the longitudinal direction and demonstrate that the available data can consistently be described in a thermal model with inputs of chemical equilibrium and flow, although they use the experimental data for small systems only. They use two simple parameters in their models: the transverse flow velocity and the temperature. In [166, 167], a nonuniform flow model is used to analyze the spectra, especially to reproduce the dip at midrapidity in the rapidity spectra of baryons, by assuming that the fireballs are distributed nonuniformly in the longitudinal phase space. In [175–179], a rapidity-dependent baryon chemical potential has been invoked to study the rapidity spectra of hadrons. In certain hydrodynamical models [180], the measured transverse momentum (p_T) distributions in Au-Au collisions at RHIC [181–183] have been described successfully by incorporating a radial flow. In [184], the rapidity spectra of mesons have been studied using viscous relativistic hydrodynamics in 1+1 dimensions, assuming a non-boost-invariant Bjorken flow in the longitudinal direction. The authors have also analyzed the effect of the shear viscosity on the longitudinal expansion of the matter: shear viscosity counteracts the gradients of the velocity field and, as a consequence, slows down the longitudinal expansion. Ivanov [185] has employed the 3FD model [187] for the study of the rapidity distributions of hadrons in the energy range from 2.7 GeV to 62.4 GeV. In the 3FD model, three different EOS are used: (i) a purely hadronic EOS, (ii) an EOS involving a first-order phase transition from a hot, dense HG to QGP, and (iii) an EOS with a smooth crossover transition. Within all three scenarios the data are reproduced to almost the same extent. In [188], the rapidity distributions of various hadrons in central nucleus-nucleus collisions have been studied in Landau's and Bjorken's hydrodynamical models. The effect of the speed of sound (c_s) on the hadronic spectra and the correlation of c_s with the freeze-out parameters are indicated.
In this section, we study the rapidity and transverse mass spectra of hadrons using the thermal approach. We can rewrite (36) accordingly [58], which gives the invariant distributions [37, 38, 164]. If we use Boltzmann's approximation, (63) differs from the one used in the paper of Schnedermann et al. [164] by the presence of a prefactor. However, we determine all these quantities precisely at the chemical freeze-out using our model, and hence we do not require any normalizing factor, as is needed in [164]. We use the following expression to calculate the rapidity distributions of baryons in the thermal model [58], where y is the rapidity variable and m_T = √(m² + p_T²) is the transverse mass; V is the total volume of the fireball formed at chemical freeze-out, and N_i is the total number of ith baryons. We assume that the freeze-out volume of the fireball at the time of the homogeneous emission of hadrons remains the same for all types of hadrons. It can be mentioned here that no free parameter occurs in the above equation, because all the quantities involved are determined within the model. However, (64) describes the experimental data only at midrapidity, while it fails at forward and backward rapidities, so we need to modify it by incorporating a flow factor in the longitudinal direction. The resulting rapidity spectrum of the ith hadron is obtained by folding the thermal distribution of (64) with the longitudinal flow [37, 38, 58, 164]. The average longitudinal velocity involves a free parameter η_max which provides the upper rapidity limit for the longitudinal flow velocity at a particular center-of-mass energy, and its value is determined by the best fit to the experimental data [166, 167, 189]. The value of η_max increases with increasing center-of-mass energy, and hence the longitudinal flow velocity also increases. Cleymans et al. [179] have extended the thermal model [175, 176], in which the chemical freeze-out parameters are rapidity dependent, to calculate the rapidity spectra of hadrons: they fold the thermal rapidity distribution of particles, calculated by using (64), with a Gaussian distribution of fireballs centered at zero rapidity. Similarly, we calculate the transverse mass spectra of hadrons [58]; for a stationary thermal source the spectrum involves the modified Bessel function K_1. A stationary thermal source alone is not capable of describing the experimental data successfully, so we incorporate flow velocities in both the longitudinal and the transverse direction in (69) in order to describe the experimental data satisfactorily. After defining the flow velocity field, we can calculate the invariant momentum spectrum by using the formula of [164, 190]; while deriving (71), we assume that the local fluid velocity gives a boost to an isotropic thermal distribution of hadrons. The final expression for the transverse mass spectra of hadrons after the incorporation of flow velocities in our model [58] involves the modified Bessel functions K_1 and I_0, with the transverse rapidity ρ = tanh⁻¹ β_r and the velocity profile chosen as β_r = β_s (r/R)ⁿ [37, 38, 164]; β_s is the maximum surface velocity and is treated as a free parameter. The average transverse velocity can be evaluated as [182]
⟨β_r⟩ = ∫₀^R β_s (r/R)ⁿ r dr / ∫₀^R r dr = (2/(2+n)) β_s.
In our calculation, we use a linear velocity profile (n = 1), and R is the maximum radius of the expanding source at freeze-out [182].
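The flow-boosted transverse-mass spectrum described above can be sketched numerically. The following Python illustration (our own sketch under assumed parameter values, not the fitted model) evaluates a blast-wave-type spectrum, dN/(m_T dm_T) ∝ m_T ∫₀^R r dr K_1(m_T cosh ρ / T) I_0(p_T sinh ρ / T), using integral representations of the modified Bessel functions, and checks the profile average ⟨β_r⟩ = 2β_s/(2 + n).

```python
import math

def bessel_I0(z, steps=200):
    """I0(z) = (1/pi) * Int_0^pi exp(z cos t) dt (integral representation)."""
    h = math.pi / steps
    s = 0.5 * (math.exp(z) + math.exp(-z))
    for i in range(1, steps):
        s += math.exp(z * math.cos(i * h))
    return s * h / math.pi

def bessel_K1(z, tmax=30.0, steps=2000):
    """K1(z) = Int_0^inf exp(-z cosh t) cosh t dt (integral representation)."""
    h = tmax / steps
    s = 0.5 * (math.exp(-z) + math.exp(-z * math.cosh(tmax)) * math.cosh(tmax))
    for i in range(1, steps):
        t = i * h
        s += math.exp(-z * math.cosh(t)) * math.cosh(t)
    return s * h

def blast_wave(m_T, m, T=0.160, beta_s=0.8, n=1.0, steps=100):
    """Unnormalized dN/(m_T dm_T) for a hadron of mass m (GeV); r in units of R."""
    p_T = math.sqrt(max(m_T * m_T - m * m, 0.0))
    h = 1.0 / steps
    total = 0.0
    for i in range(steps):
        x = (i + 0.5) * h                  # x = r/R, midpoint rule
        rho = math.atanh(beta_s * x ** n)  # transverse rapidity of the flow
        total += x * h * bessel_K1(m_T * math.cosh(rho) / T) \
                       * bessel_I0(p_T * math.sinh(rho) / T)
    return m_T * total

def avg_beta(beta_s, n):
    """Average transverse velocity for the power-law profile: 2 beta_s / (2 + n)."""
    return 2.0 * beta_s / (2.0 + n)
```

With a linear profile (n = 1) this gives ⟨β_r⟩ = (2/3) β_s, and the spectrum falls off with m_T, as expected.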
In Figure 20, we have shown the variation of the freeze-out volume with the center-of-mass energy as calculated in our excluded-volume model, compared with the results of various thermal models. We also show the total freeze-out volume calculated in our model using Boltzmann's statistics; we see that there is a significant difference between the results arising from quantum statistics and from Boltzmann's statistics [58]. We also show the total freeze-out volume in the IHG model calculation by a dash-dotted line, and we clearly notice a remarkable difference between the results of our excluded-volume model and those of the IHG model as well. We have also compared the predictions of our model with the data obtained from pion interferometry (HBT) [191], which in fact reveal the thermal (kinetic) freeze-out volumes. The results of the thermal models support the finding that the decoupling of strange mesons from the fireball takes place earlier than that of the π-mesons. Moreover, a flat minimum occurs in the curves around the center-of-mass energy ≈8 GeV, and this feature is well supported by the HBT data. In Figure 21, we present the rapidity distribution for central Au+Au collisions over the full rapidity range. The dotted line shows the distribution due to the purely thermal model. The solid line shows the rapidity distribution after the incorporation of longitudinal flow in our thermal model, and the results give good agreement with the experimental data [192]. In fitting the experimental data, we fix the value of η_max and hence the longitudinal flow velocity at this energy. For comparison, and to test the appropriateness of this parameter, we also show the rapidity distribution at a different value of η_max by a dashed line in the figure. We find that the results differ only slightly, which shows a small dependence on η_max [58]. Figure 22 shows the rapidity distributions of pions at various center-of-mass energies calculated by using (67) [179]. There is good agreement between the model results and the experimental data at all energies.
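The Gaussian-fireball construction of Cleymans et al. quoted above can be sketched as follows. This Python illustration convolves a static Boltzmann thermal rapidity distribution with a Gaussian distribution of fireball rapidities centered at zero; the mass, temperature, and width values are illustrative assumptions, not fitted parameters.

```python
import math

def thermal_dndy(y, m=0.140, T=0.160):
    """Unnormalized Boltzmann dN/dy of a static source (analytic m_T integral)."""
    a = math.cosh(y) / T
    # Int_m^inf m_T^2 exp(-a m_T) dm_T = exp(-a m) (m^2/a + 2 m/a^2 + 2/a^3)
    return math.cosh(y) * math.exp(-a * m) * (m * m / a + 2.0 * m / a ** 2 + 2.0 / a ** 3)

def smeared_dndy(y, sigma=1.0, ymax=5.0, steps=400):
    """Fold the static thermal dN/dy with a Gaussian of fireball rapidities."""
    h = 2.0 * ymax / steps
    total = 0.0
    for i in range(steps):
        yf = -ymax + (i + 0.5) * h   # fireball center rapidity, midpoint rule
        gauss = math.exp(-yf * yf / (2.0 * sigma * sigma)) \
                / (sigma * math.sqrt(2.0 * math.pi))
        total += gauss * thermal_dndy(y - yf) * h
    return total
```

The folded distribution stays symmetric about midrapidity but is broader than the purely thermal one, which is exactly the effect needed to describe the forward and backward rapidity tails.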
In Figure 23, we show the transverse mass spectra for the most central collisions of Au+Au. We have neglected the contributions from resonance decays in our calculations, since these contributions affect the transverse mass spectra only towards the lower transverse mass side. After incorporating the flow velocities in the purely thermal model, we find good agreement between our calculations and the experimental results except at the lowest transverse masses. This again shows the importance of collective flow in the description of the experimental data [193]. At this energy, the values of η_max and of the transverse flow velocity β_s are fixed by the fit; this transverse flow velocity is able to reproduce the transverse mass spectra of almost all the hadrons at this energy. We notice that the transverse flow velocity increases slowly with increasing center-of-mass energy. If we take a different value of β_s, we find that the results deviate from the data, as shown in Figure 23. In Figure 24, we show the transverse momentum (p_T) spectra for the most central collisions of Au-Au. Our model calculations reveal close agreement with the experimental data [193]. In Figure 25, we show the spectra for Pb-Pb collisions at the LHC. Our calculations again give a good fit to the experimental results [194]. We also compare our results with the hydrodynamical model of Shen et al. [195], which successfully explains some of the spectra but fails strongly for one of them [58]; in comparison, our model results show closer agreement with the experimental data. Shen et al. [195] have employed (2+1)-dimensional viscous hydrodynamics with a lattice-QCD-based EOS. They use the Cooper-Frye prescription to implement kinetic freeze-out in converting the hydrodynamic output into particle spectra. Due to the lack of proper theoretical and phenomenological knowledge, they use for Pb-Pb collisions at LHC energy the same parameters that were used for Au-Au collisions at RHIC. Furthermore, they use a temperature-independent η/s ratio in their calculation. After fitting the experimental data, we find a larger transverse flow velocity at this energy, which indicates that the collective flow becomes stronger at LHC energy than that observed at RHIC energies. In this plot, we also show how the spectra change at a slightly different value of the flow parameter [58].
5. Summary and Conclusions
The main aim of this paper is to emphasize the use of the thermal approach in describing the yields of the different particle species that have been measured in the various experiments. We have discussed various types of thermal approaches for the formulation of an EOS for the HG. We have argued that the incorporation of interactions between hadrons in a thermodynamically consistent way is important for a realistic formulation of the HG, from both a qualitative and a quantitative point of view. We have presented a systematic study of particle production in heavy-ion collisions from AGS to LHC energies. We have observed from this analysis that the production of particles seems to occur according to the principle of equilibrium. The yields of hadrons and their ratios measured in heavy-ion collisions match the predictions of thermal models, which supports the thermalization of the collision fireball formed in heavy-ion collisions. Furthermore, various experimental observables such as the transverse momentum spectra and the elliptic flow indicate the presence of the thermodynamic pressure developed in the early stage, and of correlations which are expected in a thermalized medium.
We have discussed a detailed formulation of various excluded-volume models and their shortcomings. Some excluded-volume models are not thermodynamically consistent, because they do not possess a well-defined partition function from which the various thermodynamical quantities, such as the number density, can be calculated. Others are thermodynamically consistent but suffer from unphysical situations cropping up in the calculations. We have proposed a new, approximately thermodynamically consistent excluded-volume model for a hot and dense HG. We have used quantum statistics in the grand canonical partition function of our model, so that it works even at extreme values of temperature and baryon chemical potential where all other models fail. Moreover, our model respects causality. We have presented calculations of various thermodynamical quantities, such as the entropy per baryon and the energy density, in various excluded-volume models and compared the results with those of the microscopic approach URASiMA. We find that our model results are in close agreement with those of the entirely different URASiMA approach. We have calculated various particle ratios at various center-of-mass energies, confronted the results of various thermal models with the experimental data, and found that they are indeed successful in describing the particle ratios; our model results are closer to the experimental data than those of the other excluded-volume models. We have evaluated certain conditions at the chemical freeze-out points and attempted to test whether these conditions are independent of the energy as well as of the structure of the nuclei involved in the collisions; we find two robust freeze-out criteria which show such independence. Moreover, the calculations of the transport properties in our model match well with the results obtained in other, widely different approaches. Further, we have presented an analysis of the rapidity distributions and transverse mass spectra of hadrons in central nucleus-nucleus collisions at various center-of-mass energies using our EOS for the HG. We see that a stationary thermal source alone cannot describe the experimental data fully unless we incorporate flow velocities in the longitudinal as well as the transverse direction; as a result, our modified model predictions show good agreement with the experimental data. Our analysis shows that a collective flow develops at each energy and increases further with increasing center-of-mass energy. The description of the rapidity distributions and transverse mass spectra of hadrons matches the experimental data very well at each energy. Thus, we emphasize that thermal models are indeed an important tool for describing the various features of the hot and dense HG. Although these models are not capable of telling whether a QGP was formed before the HG phase, they can give an indirect indication of it by revealing any anomalous feature observed in the experimental data.
In conclusion, the net outcome of this review is indeed a surprising one. The excluded-volume HG models are really successful in describing all kinds of features of the HG formed in ultrarelativistic heavy-ion collisions. The most important property indicated by such a description is the chemical equilibrium reached in these collisions. However, the description is still a geometrical one and does not involve any microscopic picture of the interactions. Moreover, its relativistic and field-theoretic generalizations are still needed in order to make the picture a more realistic description. But it is amazing to find that these models still work much better than expected. Notwithstanding these remarks, we should add that lattice QCD results are now available for the pressure, entropy density, energy density, and so forth over a wide temperature range whose low-temperature phase is the HG, and recently our excluded-volume model has reproduced these properties in good agreement with the lattice results [196]. We have also used these calculations in the precise determination of the QCD critical end point [197, 198]. Thus we conclude that the excluded-volume models are successful in reproducing the numerical results obtained in various experiments, and therefore further research is required to show how these descriptions are connected with the microscopic interactions.
S. K. Tiwari is grateful to the Council of Scientific and Industrial Research (CSIR), New Delhi, for providing a research grant.
1. C. P. Singh, “Signals of quark-gluon plasma,” Physics Reports, vol. 236, no. 3, pp. 147–224, 1993.
2. C. P. Singh, “Current status of properties and signals of the quark-gluon plasma,” International Journal of Modern Physics A, vol. 7, p. 7185, 1992.
3. M. J. Tannenbaum, “Recent results in relativistic heavy ion collisions: from “a new state of matter” to “the perfect fluid”,” Reports on Progress in Physics, vol. 69, p. 2005, 2006.
4. B. Müller, “Physics and signatures of the quark-gluon plasma,” Reports on Progress in Physics, vol. 58, p. 611, 1995.
5. H. Satz, “Colour deconfinement in nuclear collisions,” Reports on Progress in Physics, vol. 63, no. 9, pp. 1511–1574, 2000.
6. H. Satz, “Quark matter and nuclear collisions: a brief history of strong interaction thermodynamics,” International Journal of Modern Physics E, vol. 21, Article ID 1230006, 23 pages, 2012.
7. P. Braun-Munzinger, K. Redlich, and J. Stachel, invited review for Quark Gluon Plasma, vol. 3, edited by R. C. Hwa and X.-N. Wang, World Scientific Publishing.
8. P. Braun-Munzinger and J. Wambach, “Colloquium: phase diagram of strongly interacting matter,” Reviews of Modern Physics, vol. 81, no. 3, pp. 1031–1050, 2009.
9. P. Braun-Munzinger and J. Stachel, “The quest for the quark-gluon plasma,” Nature, vol. 448, no. 7151, pp. 302–309, 2007.
10. M. Gyulassy and L. McLerran, “New forms of QCD matter discovered at RHIC,” Nuclear Physics A, vol. 750, no. 1, pp. 30–63, 2005.
11. K. Adcox, S. S. Adler, S. Afanasiev, et al., “Formation of dense partonic matter in relativistic nucleus-nucleus collisions at RHIC: experimental evaluation by the PHENIX Collaboration,” Nuclear Physics A, vol. 757, pp. 184–283, 2005.
12. R. Rapp, “Signatures of thermal dilepton radiation at ultrarelativistic energies,” Physical Review C, vol. 63, Article ID 054907, 13 pages, 2001.
13. V. Ruuskanen, “Transverse hydrodynamics with a first order phase transition in very high-energy nuclear collisions,” Acta Physica Polonica B, vol. 18, p. 551, 1987.
14. I. Krasnikova, C. Gale, and D. K. Srivastava, “Production of intermediate mass dileptons in relativistic heavy ion collisions,” Physical Review C, vol. 65, Article ID 064903, 2002.
15. W. Cassing, E. L. Bratkovskaya, R. Rapp, and J. Wambach, “Probing the ρ spectral function in hot and dense nuclear matter by dileptons,” Physical Review C, vol. 57, no. 2, pp. 916–921, 1998.
16. J. Wambach and R. Rapp, “Theoretical interpretations of low-mass dileptons,” Nuclear Physics A, vol. 638, no. 1-2, p. 171, 1998.
17. K. Gallmeister, B. Kämpfer, and O. P. Pavlenko, “Is there a unique thermal source of dileptons in Pb(158 A · GeV) + Au, Pb reactions?” Physics Letters B, vol. 473, no. 1-2, pp. 20–24, 2000.
18. F. Karsch, K. Redlich, and L. Turko, “Chiral symmetry and dileptons in heavy ion collisions,” Zeitschrift für Physik C, vol. 60, no. 3, pp. 519–525, 1993.
19. K. Kajantie, J. Kapusta, L. McLerran, and A. Mekjian, “Dilepton emission and the QCD phase transition in ultrarelativistic nuclear collisions,” Physical Review D, vol. 34, no. 9, pp. 2746–2754, 1986.
20. T. Hatsuda, “Spectral change of hadrons and chiral symmetry,” Nuclear Physics A, vol. 698, p. 243, 2002.
21. K. Yokokawa, T. Hatsuda, A. Hayashigaki, and T. Kunihiro, “Simultaneous softening of σ and ρ mesons associated with chiral restoration,” Physical Review C, vol. 66, Article ID 022201, 2002.
22. U. Heinz, “The Little Bang: searching for quark-gluon matter in relativistic heavy-ion collisions,” Nuclear Physics A, vol. 685, pp. 414–431, 2001.
23. P. F. Kolb, J. Sollfrank, P. V. Ruuskanen, and U. Heinz, “Hydrodynamic simulation of elliptic flow,” Nuclear Physics A, vol. 661, p. 349, 1999.
24. D. Teaney, J. Lauret, and E. V. Shuryak, “Flow at the SPS and RHIC as a quark-gluon plasma signature,” Physical Review Letters, vol. 86, no. 21, pp. 4783–4786, 2001.
25. U. A. Wiedemann and U. Heinz, “Particle interferometry for relativistic heavy-ion collisions,” Physics Reports, vol. 319, no. 4-5, pp. 145–230, 1999.
26. B. Tomasik, U. A. Wiedemann, and U. Heinz, “Dynamics and sizes of the fireball at freeze-out,” Nuclear Physics A, vol. 663, pp. 753–756, 2000.
27. J. R. Nix, “Low freeze-out temperature and high collective velocities in relativistic heavy-ion collisions,” Physical Review C, vol. 58, no. 4, pp. 2303–2310, 1998.
28. J. D. Bjorken, “Highly relativistic nucleus-nucleus collisions: the central rapidity region,” Physical Review D, vol. 27, no. 1, pp. 140–151, 1983.
29. P. Braun-Munzinger and J. Stachel, “Probing the phase boundary between hadronic matter and the quark-gluon plasma in relativistic heavy-ion collisions,” Nuclear Physics A, vol. 606, no. 1-2, pp. 320–328, 1996.
30. A. Andronic, P. Braun-Munzinger, and J. Stachel, “Hadron production in central nucleus-nucleus collisions at chemical freeze-out,” Nuclear Physics A, vol. 772, no. 3-4, pp. 167–199, 2006.
31. S. K. Tiwari, P. K. Srivastava, and C. P. Singh, “Description of hot and dense hadron-gas properties in a new excluded-volume model,” Physical Review C, vol. 85, no. 1, Article ID 014908, 2012.
32. M. Mishra and C. P. Singh, “Effect of geometrical size of the particles in a hot and dense hadron gas,” Physical Review C, vol. 76, Article ID 024908, 9 pages, 2007.
33. M. Mishra and C. P. Singh, “Particle multiplicities and ratios in excluded volume models,” Physical Review C, vol. 78, Article ID 024910, 9 pages, 2008.
34. J. Cleymans and K. Redlich, “Unified description of freeze-out parameters in relativistic heavy ion collisions,” Physical Review Letters, vol. 81, no. 24, pp. 5284–5286, 1998.
35. P. Braun-Munzinger, D. Magestro, K. Redlich, and J. Stachel, “Hadron production in Au-Au collisions at RHIC,” Physics Letters B, vol. 518, no. 1-2, pp. 41–46, 2001.
36. D. Magestro, “Evidence for chemical equilibration at RHIC,” Journal of Physics G, vol. 28, no. 7, pp. 1745–1752, 2002.
37. P. Braun-Munzinger, J. Stachel, J. P. Wessels, and N. Xu, “Thermal equilibration and expansion in nucleus-nucleus collisions at the AGS,” Physics Letters B, vol. 344, no. 1–4, pp. 43–48, 1995.
38. P. Braun-Munzinger, J. Stachel, J. P. Wessels, and N. Xu, “Thermal and hadrochemical equilibration in nucleus-nucleus collisions at the SPS,” Physics Letters B, vol. 365, pp. 1–6, 1996.
39. P. Braun-Munzinger, I. Heppe, and J. Stachel, “Chemical equilibration in Pb+Pb collisions at the SPS,” Physics Letters B, vol. 465, no. 1–4, pp. 15–20, 1999.
40. S. V. Akkelin, P. Braun-Munzinger, and Y. M. Sinyukov, “Reconstruction of hadronization stage in Pb+Pb collisions at 158 A GeV/c,” Nuclear Physics A, vol. 710, no. 3-4, pp. 439–465, 2002.
41. F. Becattini, J. Cleymans, A. Keränen, E. Suhonen, and K. Redlich, “Features of particle multiplicities and strangeness production in central heavy ion collisions between 1.7A and 158A GeV/c,” Physical Review C, vol. 64, no. 2, Article ID 024901, 2001.
42. A. Keranen and F. Becattini, “The canonical effect in statistical models for relativistic heavy ion collisions,” Journal of Physics G, vol. 28, p. 2041, 2002.
43. A. Keranen and F. Becattini, “Chemical factors in canonical statistical models for relativistic heavy ion collisions,” Physical Review C, vol. 65, Article ID 044901, 7 pages, 2002.
44. J. Rafelski, J. Letessier, and A. Tounsi, “Strange particles from dense hadronic matter,” Acta Physica Polonica B, vol. 27, pp. 1037–1140, 1996.
45. K. Redlich and A. Tounsi, “Strangeness enhancement and energy dependence in heavy ion collisions,” European Physical Journal C, vol. 24, no. 4, pp. 589–594, 2002.
46. J. Sollfrank, P. Huovinen, M. Kataja, P. V. Ruuskanen, M. Prakash, and R. Venugopalan, “Hydrodynamical description of 200A GeV/c S+Au collisions: hadron and electromagnetic spectra,” Physical Review C, vol. 55, no. 1, pp. 392–410, 1997.
47. P. Huovinen, P. F. Kolb, U. Heinz, P. V. Ruuskanen, and S. A. Voloshin, “Radial and elliptic flow at RHIC: further predictions,” Physics Letters B, vol. 503, pp. 58–64, 2001.
48. J. Cleymans and K. Redlich, “Chemical and thermal freeze-out parameters from 1A to 200A GeV,” Physical Review C, vol. 60, Article ID 054908, 9 pages, 1999.
49. J. Cleymans, H. Oeschler, and K. Redlich, “Influence of impact parameter on thermal description of relativistic heavy ion collisions at (1-2)A GeV,” Physical Review C, vol. 59, no. 3, pp. 1663–1673, 1999.
50. J. Cleymans, D. Elliott, A. Keränen, and E. Suhonen, “Thermal model analysis of particle ratios in Ni+Ni experiments using exact strangeness conservation,” Physical Review C, vol. 57, no. 6, pp. 3319–3323, 1998.
51. J. Letessier and J. Rafelski, “Observing quark-gluon plasma with strange hadrons,” International Journal of Modern Physics E, vol. 9, no. 2, pp. 107–147, 2000.
52. P. Braun-Munzinger, J. Cleymans, H. Oeschler, and K. Redlich, “Maximum relative strangeness content in heavy-ion collisions around 30A GeV,” Nuclear Physics A, vol. 697, no. 3-4, pp. 902–912, 2002.
53. W. Broniowski and W. Florkowski, “Description of the RHIC p⊥ spectra in a thermal model with expansion,” Physical Review Letters, vol. 87, no. 27, Article ID 272302, 2001.
54. K. Redlich, “Strangeness production in heavy ion collisions,” Nuclear Physics A, vol. 698, no. 1–4, p. 94, 2002.
55. J. Cleymans and H. Satz, “Thermal hadron production in high energy heavy ion collisions,” Zeitschrift für Physik C, vol. 57, no. 1, pp. 135–147, 1993.
56. P. Braun-Munzinger and J. Stachel, “(Non) thermal aspects of charmonium production and a new look at J/ψ suppression,” Physics Letters B, vol. 490, no. 3-4, pp. 196–202, 2000.
57. P. Braun-Munzinger and J. Stachel, “On charm production near the phase boundary,” Nuclear Physics A, vol. 690, no. 1–3, p. 119, 2001.
58. S. K. Tiwari, P. K. Srivastava, and C. P. Singh, “The effect of flow on hadronic spectra in an excluded-volume model,” Journal of Physics G, vol. 40, Article ID 045102, 2013.
View at Google Scholar
59. G. M. Welke, R. Venugopalan, and M. Prakash, “The speed of sound in an interacting pion gas,” Physics Letters B, vol. 245, no. 2, pp. 137–141, 1990. View at Scopus
60. V. V. Dixit and E. Suhonen, “Bare-bones model for phase transition in hadronic matter,” Zeitschrift für Physik C, vol. 18, no. 4, pp. 355–360, 1983. View at Publisher · View at Google Scholar ·
View at Scopus
61. J. Cleymans, K. Redlich, H. Satz, and E. Suhonen, “On the phenomenology of deconfinement and chiral symmetry restoration,” Zeitschrift für Physik C, vol. 33, no. 1, pp. 151–156, 1986. View at
Publisher · View at Google Scholar · View at Scopus
62. J. Cleymans, R. V. Gavai, and E. Suhonen, “Quarks and gluons at high temperatures and densities,” Physics Reports, vol. 130, no. 4, pp. 217–292, 1986. View at Scopus
63. J. Cleymans and E. Suhonen, “Influence of hadronic hard core radius on detonations and deflagrations in quark matter,” Zeitschrift für Physik C, vol. 37, no. 1, pp. 51–56, 1987. View at Publisher
· View at Google Scholar · View at Scopus
64. J. Cleymans and D. W. Von Oertzen, “Bose-Einstein condensation of kaons in dense nuclear matter,” Physics Letters B, vol. 249, no. 3-4, pp. 511–513, 1990. View at Scopus
65. N. J. Davidson, H. G. Miller, R. M. Quick, and J. Cleymans, “Chemical equilibration in heavy-ion collisions,” Physics Letters B, vol. 255, no. 1, pp. 105–109, 1991. View at Scopus
66. H. Kuono and F. Takagi, “Excluded volume, bag constant and hadron-quark phase transition,” Zeitschrift für Physik C, vol. 42, pp. 209–213, 1989. View at Publisher · View at Google Scholar
67. R. Hagedorn and J. Rafelski, “Hot hadronic matter and nuclear collisions,” Physics Letters B, vol. 97, p. 136, 1980. View at Publisher · View at Google Scholar
68. R. Hagedorn, “The pressure ensemble as a tool for describing the hadron-quark phase transition,” Zeitschrift für Physik C, vol. 17, pp. 265–281, 1983. View at Publisher · View at Google Scholar
69. D. H. Rischke, M. I. Gorenstein, H. Stöcker, and W. Greiner, “Excluded volume effect for the nuclear matter equation of state,” Zeitschrift für Physik C, vol. 51, no. 3, pp. 485–489, 1991. View
at Publisher · View at Google Scholar · View at Scopus
70. C. P. Singh, B. K. Patra, and K. K. Singh, “Thermodynamically consistent EOS for hot dense hadron gas,” Physics Letters B, vol. 387, no. 4, pp. 680–684, 1996. View at Publisher · View at Google
Scholar · View at Scopus
71. P. K. Panda, M. E. Bracco, M. Chiapparini, E. Conte, and G. Krein, “Excluded volume effects in the quark meson coupling model,” Physical Review C, vol. 65, no. 6, Article ID 065206, 2002. View at
72. D. Anchishkin and E. Suhonen, “Generalization of mean-field models to account for effects of excluded volume,” Nuclear Physics A, vol. 586, no. 4, pp. 734–754, 1995. View at Scopus
73. D. Anchishkin, “Particle finite size effects as mean field approximation,” Soviet Physics JETP, vol. 75, p. 195, 1992.
74. N. Prasad, K. K. Singh, and C. P. Singh, “Causality in an excluded volume model,” Physical Review C, vol. 62, no. 3, Article ID 037903, 2000. View at Scopus
75. V. K. Tiwari, K. K. Singh, N. Prasad, and C. P. Singh, “Mean field model with excluded volume correction for a multicomponent hadron gas,” Nuclear Physics A, vol. 637, no. 1, pp. 159–172, 1998.
View at Scopus
76. B. X. Sun, X. F. Lu, and E. G. Zhao, “Perturbative calculation of the excluded volume effect for nuclear matter in a relativistic mean-field approximation,” Physical Review C, vol. 65, Article ID
054301, 2002. View at Publisher · View at Google Scholar
77. M. I. Gorenstein, “Examination of the thermodynamical consistency of excluded-volume hadron gas models,” Physical Review C, vol. 86, Article ID 044907, 3 pages, 2012. View at Publisher · View at
Google Scholar
78. Z.-D. Lu, A. Faessler, C. Fuchs, and E. E. Zabrodin, “Analysis of particle production in ultrarelativistic heavy-ion collisions within a two-source statistical model,” Physical Review C, vol. 66,
Article ID 044905, 7 pages, 2002. View at Publisher · View at Google Scholar
79. J. I. Kapusta, A. P. Vischer, and R. Venugopalan, “Nucleation of quark-gluon plasma from hadronic matter,” Physical Review C, vol. 51, pp. 901–910, 1995. View at Publisher · View at Google
80. J. I. Kapusta and K. A. Olive, “Thermodynamics of hadrons: delimiting the temperature,” Nuclear Physics A, vol. 408, no. 3, pp. 478–494, 1983. View at Scopus
81. J. D. Walecka, “A theory of highly condensed matter,” Annals of Physics, vol. 83, no. 2, pp. 491–529, 1974. View at Scopus
82. B. D. Serot and J. D. Walecka, “Properties of finite nuclei in a relativistic quantum field theory,” Physics Letters B, vol. 87, no. 3, pp. 172–176, 1979. View at Scopus
83. D. H. Rischke, B. L. Friman, H. Stocker, and W. Greiner, “Phase transition from hadron gas to quark-gluon plasma: influence of the stiffness of the nuclear equation of state,” Journal of Physics
G, vol. 14, no. 2, pp. 191–203, 1988. View at Publisher · View at Google Scholar · View at Scopus
84. J. Zimányi, B. Lukács, P. Lévai, J. P. Bondorf, and N. L. Balazs, “An interpretable family of equations of state for dense hadronic matter,” Nuclear Physics A, vol. 484, no. 3-4, pp. 647–660,
1988. View at Scopus
85. K. A. Bugaev and M. I. Gorenstein, “Thermodynamically self-consistent class of nuclear matter EOS and compression shocks in relativistic nuclear collisions,” Zeitschrift für Physik C, vol. 43,
no. 2, pp. 261–265, 1989. View at Publisher · View at Google Scholar · View at Scopus
86. G. D. Yen, M. I. Gorenstein, W. Greiner, and S. N. Yang, “Excluded volume hadron gas model for particle number ratios in a+a collisions,” Physical Review C, vol. 56, no. 4, pp. 2210–2218, 1997.
View at Scopus
87. F. Becattini, “A Thermodynamical approach to hadron production in e+ e- collisions,” Zeitschrift für Physik C, vol. 69, pp. 485–492, 1996. View at Publisher · View at Google Scholar
88. F. Becattini and U. Heinz, “Thermal hadron production in pp and pp̄ collisions,” Zeitschrift für Physik C, vol. 76, no. 2, pp. 269–286, 1997. View at Scopus
89. S. Uddin and C. P. Singh, “Equation of state of finite-size hadrons: thermodynamical consistency,” Zeitschrift für Physik C, vol. 63, no. 1, pp. 147–150, 1994. View at Publisher · View at Google
Scholar · View at Scopus
90. C. P. Singh, P. K. Srivastava, and S. K. Tiwari, “QCD phase boundary and critical point in a bag model calculation,” Physical Review D, vol. 80, Article ID 114508, 2011.
91. C. P. Singh, P. K. Srivastava, and S. K. Tiwari, “Erratum in QCD phase boundary and critical point in a bag model calculation,” Physical Review D, vol. 83, Article ID 039904, 1 pages, 2011. View
at Publisher · View at Google Scholar · View at Scopus
92. G. Zeeb, K. A. Bugaev, P. T. Reuter, and H. Stöcker, “Equation of state for the two-component van der waals gas with relativistic excluded volumes,” Ukrainian Journal of Physics, vol. 53, no. 3,
pp. 279–295, 2008. View at Scopus
93. K. A. Bugaev, Ph.D. thesis, ch 3.
94. F. Becattini, M. Gazdzicki, A. Keranen, J. Manninen, and R. Stock, “Chemical equilibrium study in nucleus-nucleus collisions at relativistic energies,” Physical Review C, vol. 69, Article ID
024905, 19 pages, 2004.
95. J. Letessier and J. Rafelski, “Hadron production and phase changes in relativistic heavy-ion collisions,” The European Physical Journal A, vol. 35, pp. 221–242, 2008.
96. A. Dumitru, L. Portugal, and D. Zschiesche, “Inhomogeneous freeze-out in relativistic heavy-ion collisions,” Physical Review C, vol. 73, no. 2, Article ID 024902, 2006. View at Publisher · View
at Google Scholar · View at Scopus
97. F. Becattini and G. Passaleva, “Statistical hadronization model and transverse momentum spectra of hadrons in high energy collisions,” European Physical Journal C, vol. 23, no. 3, pp. 551–583,
2002. View at Publisher · View at Google Scholar · View at Scopus
98. M. Kaneta and N. Xu, “Centrality dependence of chemical freeze-out in Au+Au collisions at RHIC,” http://arxiv.org/abs/nucl-th/0405068.
99. J. Adams and STAR Collaboration, “Experimental and theoretical challenges in the search for the quark gluon plasma. The STAR Collaboration's Critical Assessment of the Evidence from RHIC
Collisions,” Nuclear Physics A, vol. 757, p. 102, 2005.
100. A. Tawfik, “Chemical freeze-out and higher order multiplicity moments,” http://arxiv.org/abs/1306.1025.
101. S. Gupta, X. Luo, B. Mohanty, H. G. Ritter, and N. Xu, “Scale for the phase diagram of quantum chromodynamics,” Science, vol. 332, no. 6037, pp. 1525–1528, 2011. View at Publisher · View at
Google Scholar · View at Scopus
102. A. Bazavov, H.-T. Ding, and P. Hegde, “Freeze-out conditions in heavy ion collisions from QCD thermodynamics,” Physical Review Letters, vol. 109, Article ID 192302, 5 pages, 2012.
103. J. Cleymans, H. Oeschler, K. Redlich, and S. Wheaton, “Comparison of chemical freeze-out criteria in heavy-ion collisions,” Physical Review C, vol. 73, no. 3, Article ID 034905, 2006. View at
Publisher · View at Google Scholar · View at Scopus
104. C. Alt and NA49 Collaboration, “Pion and kaon production in central Pb+Pb collisions at 20A and 30A GeV: evidence for the onset of deconfinement,” Physical Review C, vol. 77, Article ID 024903,
10 pages, 2008.
105. M. Gazdzicki and the NA49 Collaboration, “Report from NA49,” Journal of Physics G, vol. 30, p. 701, 2004. View at Publisher · View at Google Scholar
106. S. V. Afanasiev, T. Anticic, and D. Barna, “Energy dependence of pion and kaon production in central Pb+Pb collisions,” Physical Review C, vol. 66, Article ID 054902, 2002. View at Publisher ·
View at Google Scholar
107. T. Anticic, B. Baatar, D. Barna, et al., “$\mathrm{\Lambda }$ and $\overline{\mathrm{\Lambda }}$ production in central Pb-Pb collisions at 40, 80, and 158AGeV,” Physical Review Letters, vol.
93, Article ID 022302, 2004. View at Publisher · View at Google Scholar
108. L. Ahle, L. Ahle, Y. Akiba, and K. Ashktorab, “Particle production at high baryon density in central Au+Au reactions at 11.6A GeV/c,” Physical Review C, vol. 57, pp. R466–R470, 1998. View at
Publisher · View at Google Scholar
109. L. Ahle, Y. Akiba, K. Ashktorab, et al., “Centrality dependence of kaon yields in Si+A and Au+Au collisions at the AGS,” Physical Review C, vol. 60, Article ID 044904, 1999.
110. S. Albergo, R. Bellwied, and M. Bennett, “$\mathrm{\Lambda }$ spectra in 11.6A GeV/c Au-Au collisions,” Physical Review Lett, vol. 88, Article ID 062301, 2002. View at Publisher · View at Google
111. J. L. Klay, N. N. Ajitanand, J. M. Alexander, et al., “Charged pion production in 2A to 8A GeV central Au+Au collisions,” Physical Review C, vol. 68, Article ID 054905, 2003. View at Publisher ·
View at Google Scholar
112. S. Chatterjee, R. M. Godbole, and S. Gupta, “Stabilizing hadron resonance gas models,” Physical Review C, vol. 81, no. 4, Article ID 044907, 2010. View at Publisher · View at Google Scholar ·
View at Scopus
113. R. Stock, “Relativistic nucleus-nucleus collisions: from the BEVALAC to RHIC,” Journal of Physics G, vol. 30, no. 8, pp. S633–S648, 2004. View at Publisher · View at Google Scholar · View at
114. J. Noronha-Hostler, H. Ahmad, J. Noronha, and C. Greiner, “Particle ratios as a probe of the QCD critical temperature,” Physical Review C, vol. 82, no. 2, Article ID 024913, 2010. View at
Publisher · View at Google Scholar · View at Scopus
115. D. Molnar and S. A. Voloshin, “Elliptic flow at large transverse momenta from quark coalescence,” Physical Review Letters, vol. 91, Article ID 092301, 2003. View at Publisher · View at Google
116. D. Molnar, “Particle correlations at RHIC from parton coalescence dynamics—first results,” Journal of Physics G, vol. 30, p. S1239, 2004. View at Publisher · View at Google Scholar
117. C. Alt, T. Anticic, and B. Baatar, “Energy dependence of Λ and ≡ production in central Pb+Pb collisions at 20A, 30A, 40A, 80A, and 158A GeV measured at the CERN Super Proton Synchrotron,”
Physical Review C, vol. 78, Article ID 034918, 2008.
118. C. Blume and The NA49 collaboration, “Review of results from the NA49 Collaboration,” Journal of Physics G, vol. 31, no. 6, pp. S685–S691, 2005. View at Publisher · View at Google Scholar · View
at Scopus
119. M. M. Aggarwal and The STAR Collaboration, “Strange and multi-strange particle production in Au+Au collisions at $\sqrt{\text{sNN}}$ = 62.4 GeV,” Physical Review C, vol. 83, Article ID 024901,
120. A. Andronic, D. Blaschke, P. Braun-Munzinger et al., “Hadron production in ultra-relativistic nuclear collisions: quarkyonic matter and a triple point in the phase diagram of QCD,” Nuclear
Physics A, vol. 837, no. 1-2, pp. 65–86, 2010. View at Publisher · View at Google Scholar · View at Scopus
121. B. Abelev, N. Arbor, G. Conesa Balbastre, et al., “Pion, Kaon, and Proton production in central Pb–Pb Collisions at $\sqrt{{s}_{NN}}$ = 2.76 TeV,” Physical Review Letters, vol. 109, Article ID
252301, 2012.
122. N. Sasaki, “A study of the thermodynamic properties of a hot, dense hadron gas using an event generator: hadro-molecular-dynamical calculation,” Progress of Theoretical Physics, vol. 10, no. 4,
pp. 783–805, 2001. View at Scopus
123. P. Braun-Munzinger and J. Stachel, “Particle ratios, equilibration and the QCD phase boundary,” Journal of Physics G, vol. 28, no. 7, pp. 1971–1976, 2002. View at Publisher · View at Google
Scholar · View at Scopus
124. V. Magas and H. Satz, “Conditions for confinement and freeze-out,” European Physical Journal C, vol. 32, no. 1, pp. 115–119, 2003. View at Publisher · View at Google Scholar · View at Scopus
125. A. Tawfik, “Influence of strange quarks on the QCD phase diagram and chemical freeze-out,” Journal of Physics G, vol. 31, no. 6, pp. S1105–S1110, 2005. View at Publisher · View at Google Scholar
· View at Scopus
126. M. B. Isichenko, “Percolation, statistical topography, and transport in random media,” Reviews of Modern Physics, vol. 64, no. 4, pp. 961–1043, 1992. View at Publisher · View at Google Scholar ·
View at Scopus
127. J. Kapusta, “Viscous heating of expanding fireballs,” Physical Review C, vol. 24, no. 6, pp. 2545–2551, 1981. View at Publisher · View at Google Scholar · View at Scopus
128. A. M. Polyakov, Journal of Experimental and Theoretical Physics, vol. 57, p. 2144, 1969.
129. F. Karsch, D. Kharzeev, and K. Tuchin, “Universal properties of bulk viscosity near the QCD phase transition,” Physics Letters B, vol. 663, no. 3, pp. 217–221, 2008. View at Publisher · View at
Google Scholar · View at Scopus
130. U. Heinz and P. Kolb, “Early thermalization at RHIC,” Nuclear Physics A, vol. 702, no. 1–4, p. 269, 2002. View at Scopus
131. A. Adare and PHENIX Collaboration, “Energy loss and flow of heavy quarks in Au+Au collisions at $\sqrt{\text{sNN}}$ = 200 GeV,” Physical Review Letters, vol. 98, Article ID 172301, 2007.
132. E. Shuryak, “What RHIC experiments and theory tell us about properties of quark-gluon plasma?” Nuclear Physics A, vol. 750, no. 1, pp. 64–83, 2005. View at Publisher · View at Google Scholar ·
View at Scopus
133. P. Romatschke and U. Romatschke, “Viscosity information from relativistic nuclear collisions: how perfect is the fluid observed at RHIC?” Physical Review Letters, vol. 99, no. 17, Article ID
172301, 2007. View at Publisher · View at Google Scholar · View at Scopus
134. S. Gavin, “Transport coefficients in ultra-relativistic heavy-ion collisions,” Nuclear Physics A, vol. 435, no. 3-4, pp. 826–843, 1985. View at Scopus
135. A. Dobado and S. N. Santalla, “Pion gas viscosity at low temperature and density,” Physical Review D, vol. 65, no. 9, Article ID 096011, 2002. View at Scopus
136. M. Prakash, M. Prakash, R. Venugopalan, and G. Welke, “Non-equilibrium properties of hadronic mixtures,” Physics Report, vol. 227, no. 6, pp. 321–366, 1993. View at Scopus
137. J.-W. Chen, Y.-H. Li, Y.-F. Liu, and E. Nakano, “QCD viscosity to entropy density ratio in the hadronic phase,” Physical Review D, vol. 76, Article ID 114011, 8 pages, 2007.
138. J.-W. Chen and E. Nakano, “Shear viscosity to entropy density ratio of QCD below the deconfinement temperature,” Physics Letters B, vol. 647, pp. 371–375, 2007.
139. K. Itakura, O. Morimatsu, and H. Otomo, “Shear viscosity of a hadronic gas mixture,” Physical Review D, vol. 77, no. 1, Article ID 014014, 2008. View at Publisher · View at Google Scholar · View
at Scopus
140. A. S. Khvorostukhin, V. D. Toneev, and D. N. Voskresensky, “Viscosity coefficients for hadron and quark-gluon phases,” Nuclear Physics A, vol. 845, no. 1–4, pp. 106–146, 2010. View at Publisher
· View at Google Scholar · View at Scopus
141. A. Muronga, “Shear viscosity coefficient from microscopic models,” Physical Review C, vol. 69, no. 4, Article ID 044901, 2004. View at Publisher · View at Google Scholar · View at Scopus
142. S. Muroya and N. Sasaki, “A calculation of the viscosity to entropy ratio of a hadronic gas,” Progress of Theoretical Physics, vol. 113, no. 2, pp. 457–462, 2005. View at Publisher · View at
Google Scholar · View at Scopus
143. N. Demir and S. A. Bass, “Shear-viscosity to entropy density ratio of a relativistic hadron gas,” Physical Review Letters, vol. 102, Article ID 172302, 2009.
144. L. P. Csernai, J. I. Kapusta, and L. D. McLerran, “Strongly interacting low-viscosity matter created in relativistic nuclear collisions,” Physical Review Letters, vol. 97, no. 15, Article ID
152303, 2006. View at Publisher · View at Google Scholar · View at Scopus
145. J. Noronha-Hostler, J. Noronha, and C. Greiner, “Transport coefficients of hadronic matter near Tc,” Physical Review Letters, vol. 103, no. 17, Article ID 172302, 2009. View at Publisher · View
at Google Scholar · View at Scopus
146. R. A. Lacey, N. N. Ajitanand, J. M. Alexander, et al., “Has the QCD critical point been signaled by observations at the BNL relativistic heavy ion collider?” Physical Review Letters, vol. 98,
Article ID 092301, 2007.
147. A. Dobado, F. J. Llanes-Estrada, and J. M. Torres-Rincon, “η/s and phase transitions,” Physical Review D, vol. 79, no. 1, Article ID 014002, 2009. View at Publisher · View at Google Scholar ·
View at Scopus
148. A. Dobado, F. Llanes-Estrada, and J. M. Torres-Rincon, “eta/s and phase transitions,” Physical Review D, vol. 79, Article ID 014002, 2009.
149. S. Pal, “Shear viscosity to entropy density ratio of a relativistic Hagedorn resonance gas,” Physics Letters B, vol. 684, pp. 211–215, 2010.
150. P. Castorina, J. Cleymans, D. E. Miller, and H. Satz, “The speed of sound in hadronic matter,” European Physical Journal C, vol. 66, no. 1, pp. 207–213, 2010. View at Publisher · View at Google
Scholar · View at Scopus
151. J. Cleymans and D. Worku, “The Hagedorn temperature revisited,” Modern Physics Letters A, vol. 26, no. 16, pp. 1197–1209, 2011. View at Publisher · View at Google Scholar · View at Scopus
152. R. V. Gavai and A. Gocksch, “Velocity of sound in SU(3) lattice gauge theory,” Physical Review D, vol. 33, no. 2, pp. 614–616, 1986. View at Publisher · View at Google Scholar · View at Scopus
153. K. Redlich and H. Satz, “Critical behavior near deconfinement,” Physical Review D, vol. 33, no. 12, pp. 3747–3752, 1986. View at Publisher · View at Google Scholar · View at Scopus
154. F. Karsch, “Recent lattice results on finite temperature and density QCD—part I,” Proceedings of Science, CPOD07, 026, 2007.
155. D. Prorok and L. Turko, “$J/\psi$ absorption in a multicomponent hadron gas,” in Proceedings of the AIP Conference, vol. 1038, p. 10, 2008.
156. M. Chojnacki and W. Florkowski, “Temperature dependence of sound velocity and hydrodynamics of ultra-relativistic heavy-ion collisions,” Acta Physica Polonica B, vol. 38, pp. 3249–3262, 2007.
157. M. I. Gorenstein, M. Hauer, and O. N. Moroz, “Viscosity in the excluded volume hadron gas model,” Physical Review C, vol. 77, no. 2, Article ID 024911, 2008. View at Publisher · View at Google
Scholar · View at Scopus
158. E. M. Lifschitz and L. P. Pitaevski, Physical Kinetics, Pergamon Press, Oxford, UK, 2nd edition, 1981.
159. G. Policastro, D. T. Son, and A. O. Starinets, “Shear viscosity of strongly coupled N = 4 supersymmetric Yang-Mills plasma,” Physical Review Letters, vol. 87, no. 8, Article ID 081601, 2001.
View at Scopus
160. J. M. Maldacena, “The large N limit of superconformal field theories and supergravity,” Advances in Theoretical and Mathematical Physics, vol. 2, pp. 231–252, 1998. View at Zentralblatt MATH
161. Z. Qiu, C. Shen, and U. Heinz, “Hydrodynamic elliptic and triangular flow in Pb-Pb collisions at $\sqrt{{s}_{NN}}$ = 2.76A TeV,” Physics Letters B, vol. 707, no. 1, pp. 151–155, 2012. View at
Publisher · View at Google Scholar · View at Scopus
162. J. Letessier and J. Rafelski, Hadrons and Quark-Gluon Plasma, Cambridge University Press, Cambridge, UK, 2004.
163. L. D. Landau, Izvestiya Akademii Nauk, vol. 17, p. 51, 1953.
164. E. Schnedermann, J. Sollfrank, and U. Heinz, “Thermal phenomenology of hadrons from 200A GeV S+S collisions,” Physical Review C, vol. 48, no. 5, pp. 2462–2475, 1993. View at Publisher · View at
Google Scholar · View at Scopus
165. E. Schnedermann and U. Heinz, “Circumstantial evidence for transverse flow in 200A GeV S+S collisions,” Physical Review Letters, vol. 69, no. 20, pp. 2908–2911, 1992. View at Publisher · View at
Google Scholar · View at Scopus
166. S. Q. Feng and Y. Zhong, “Baryon production and collective flow in relativistic heavy-ion collisions in the AGS, SPS, RHIC, and LHC energy regions ($\sqrt{{s}_{NN}}\le 5$ GeV to 5.5 TeV),”
Physical Review C, vol. 83, Article ID 034908, 2011.
167. S. Q. Feng and X. B. Yuan, “The feature study on the pion and proton rapidity distributions at AGS, SPS and RHIC,” Science in China G, vol. 52, p. 198, 2009.
168. S. Uddin, J. S. Ahmad, W. Bashir, and R. A. Bhat, “A unified approach towards describing rapidity and transverse momentum distributions in a thermal freeze-out model,” Journal of Physics G, vol.
39, Article ID 015012, 2012.
169. T. Hirano, K. Morita, S. Muroya, and C. Nonaka, “Hydrodynamical analysis of hadronic spectra in the 130 GeV/nucleon Au+Au collisions,” Physical Review C, vol. 65, no. 6, Article ID 061902, 2002.
170. K. Morita, S. Muroya, C. Nonaka, and T. Hirano, “Comparison of space-time evolutions of hot, dense matter in $\sqrt{{s}_{NN}}$ = 17 and 130 GeV relativistic heavy ion collisions based on a
hydrodynamical model,” Physical Review C, vol. 66, Article ID 054904, 2002.
171. J. Manninen, E. L. Bratkovskaya, W. Cassing, and O. Linnyk, “Dilepton production in p+p, Cu+Cu and Au+Au collisions at $\sqrt{\text{s}}=200$ GeV,” European Physical Journal C, vol. 71, article
1615, 2011.
172. S. A. Bass, M. Belkacem, and M. Bleicher, “Microscopic models for ultrarelativistic heavy ion collisions,” Progress in Particle and Nuclear Physics, vol. 41, pp. 255–369, 1998.
173. U. Mayer and U. Heinz, “Global hydrodynamics with continuous freeze-out,” Physical Review C, vol. 56, no. 1, pp. 439–452, 1997. View at Scopus
174. U. Heinz, “Primordial hadrosynthesis in the little bang,” Nuclear Physics A, vol. 661, pp. 140–149, 1999.
175. F. Becattini and J. Cleymans, “Chemical equilibrium in heavy ion collisions: rapidity dependence,” Journal of Physics G, vol. 34, no. 8, pp. S959–S963, 2007. View at Publisher · View at Google
Scholar · View at Scopus
176. F. Becattini, J. Cleymans, and J. Strumpfer, “Rapidity variation of thermal parameters at SPS and RHIC,” Proceeding of Science, CPOD07, 012, 2007.
177. B. Biedron and W. Broniowski, “Rapidity-dependent spectra from a single-freeze-out model of relativistic heavy-ion collisions,” Physical Review C, vol. 75, Article ID 054905, 2007.
178. W. Broniowski and B. Biedron, “Rapidity-dependent chemical potentials in a statistical approach,” Journal of Physics G, vol. 35, Article ID 044018, 2008.
179. J. Cleymans, J. Strümpfer, and L. Turko, “Extended longitudinal scaling and the thermal model,” Physical Review C, vol. 78, no. 1, Article ID 017901, 2008. View at Publisher · View at Google
Scholar · View at Scopus
180. W. Broniowski and W. Florkowski, “Geometric relation between centrality and the impact parameter in relativistic heavy-ion collisions,” Physical Review C, vol. 65, no. 2, Article ID 064905,
181. C. Adler, Z. Ahammed, and C. Allgower, “Measurement of inclusive antiprotons from Au+Au collisions at $\sqrt{{s}_{NN}}=130$ GeV,” Physical Review Letters, vol. 87, Article ID 262302, 2001.
182. K. Adcox, S. S. Adler, N. N. Ajitanand, et al., “Single identified hadron spectra from $\sqrt{{s}_{NN}}=130$ GeV Au+Au collisions,” Physical Review C, vol. 69, Article ID 024904, 2004.
183. A. Adare, C. Aidala, and N. N. Ajitanand, “Cold-nuclear-matter effects on heavy-quark production in d+Au collisions at $\sqrt{{s}_{NN}}=200$ GeV,” Physical Review Letters, vol. 109, Article ID
242301, 7 pages, 2012.
184. P. Bozek, “Viscous evolution of the rapidity distribution of matter created in relativistic heavy-ion collisions,” Physical Review C, vol. 77, Article ID 034911, 2008. View at Publisher · View
at Google Scholar
185. Y. B. Ivanov, Physical Review C, vol. 87, Article ID 064905, 2013.
186. P. Huovinen, P. F. Kolb, and U. Heinz, “Is there elliptic flow without transverse flow?” Nuclear Physics A, vol. 698, no. 1–4, p. 475, 2002. View at Scopus
187. Y. B. Ivanov, V. N. Russkikh, and V. D. Toneev, “Relativistic heavy-ion collisions within three-fluid hydrodynamics: hadronic scenario,” Physical Review C, vol. 73, no. 4, Article ID 044904,
2006. View at Publisher · View at Google Scholar · View at Scopus
188. B. Mohanty and J. Alam, “Velocity of sound in relativistic heavy-ion collisions,” Physical Review C, vol. 68, Article ID 064903, 2003.
189. P. K. Netrakanti and B. Mohanty, “Width of the rapidity distribution in heavy-ion collisions,” Physical Review C, vol. 71, no. 4, Article ID 047901, 2005. View at Publisher · View at Google
Scholar · View at Scopus
190. F. Cooper and G. Frye, “Single-particle distribution in the hydrodynamic and statistical thermodynamic models of multiparticle production,” Physical Review D, vol. 10, no. 1, pp. 186–189, 1974.
View at Publisher · View at Google Scholar · View at Scopus
191. D. Adamová, G. Agakichiev, H. Appelshäuser, et al., “Universal pion freeze-out in heavy-ion collisions,” Physical Review Letters, vol. 90, Article ID 022301, 2003. View at Publisher · View at
Google Scholar
192. I. G. Bearden, “Charged meson rapidity distributions in central Au+Au collisions at $\sqrt{{s}_{NN}}=200$ GeV,” Physical Review Letters, vol. 94, Article ID 162301, 2005. View at Publisher ·
View at Google Scholar
193. S. S. Adler, S. Afanasiev, and C. Aidala, “Identified charged particle spectra and yields in Au+Au collisions at $\sqrt{{s}_{NN}}=200$ GeV,” Physical Review C, vol. 69, Article ID 034909, 2004.
194. M. Floris and ALICE collaboration, “Identified particles in pp and Pb-Pb collisions at LHC energies with the ALICE detector,” Journal of Physics G, vol. 38, Article ID 124025, 2011.
195. C. Shen, U. Heinz, P. Huovinen, and H. Song, “Radial and elliptic flow in Pb+Pb collisions at energies available at the CERN Large Hadron Collider from viscous hydrodynamics,” Physical Review C,
vol. 84, no. 4, Article ID 044903, 2011. View at Publisher · View at Google Scholar · View at Scopus
196. P. K. Srivastava and C. P. Singh, “A hybrid model for QCD deconfining phase boundary,” Physical Review D, vol. 85, Article ID 114016, 2012.
197. P. K. Srivastava, S. K. Tiwari, and C. P. Singh, “QCD critical point in a quasiparticle model,” Physical Review D, vol. 82, no. 1, Article ID 014023, 2010. View at Publisher · View at Google
Scholar · View at Scopus
198. P. K. Srivastava, S. K. Tiwari, and C. P. Singh, “On locating the critical end point in QCD phase diagram,” Nuclear Physics A, vol. 862-863, pp. 424–426, 2011. | {"url":"http://www.hindawi.com/journals/ahep/2013/805413/","timestamp":"2014-04-17T05:11:18Z","content_type":null,"content_length":"882065","record_id":"<urn:uuid:9255aa4f-f65b-4863-90d0-5f9c141eaab6>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00297-ip-10-147-4-33.ec2.internal.warc.gz"} |
Automorphisms and Conjugation
November 25th 2011, 06:53 PM #1
Automorphisms and Conjugation
Exercise 1 in Dummit and Foote section 4.4 Automorphisms reads as follows:
If $\sigma \in$ Aut(G) and $\phi_g$ is conjugation by g, prove that $\sigma \phi_g \sigma^{-1} = \phi_{\sigma(g)}$.
Deduce that Inn(G) is a normal subgroup of Aut(G).
Can anyone help me get started on this problem?
(1) I have attached Dummit and Foote section 4.4 in case readers need to view the terminology and notation used. This upload includes the text of the exercise.
(2) I am assuming that the mapping ${\phi}_g$ is h $\longrightarrow$ gh $g^{-1}$
(3) I am not sure what Dummit and Foote mean by $\sigma(g)$ - unless they mean the set of automorphisms that map g into g???
Re: Automorphisms and Conjugation
Exercise 1 in Dummit and Foote section 4.4 Automorphisms reads as follows:
If $\sigma \in$ Aut(G) and $\phi_g$ is conjugation by g, prove that $\sigma \phi_g \sigma^{-1} = \phi_{\sigma(g)}$.
Deduce that Inn(G) is a normal subgroup of Aut(G).
Can anyone help me get started on this problem?
(1) I have attached Dummit and Foote section 4.4 in case readers need to view the terminology and notation used. This upload includes the text of the exercise.
(2) I am assuming that the mapping ${\phi}_g$ is h $\longrightarrow$ gh $g^{-1}$
(3) I am not sure what Dummit and Foote mean by $\sigma(g)$ - unless they mean the set of automorphisms that map g into g???
What they mean is this. There is a natural map $\Phi:G\to \text{Aut}(G)$ defined by the rule $(\Phi(g))(x)=gxg^{-1}$ (where $x\in G$--note that since $\Phi(g)$ is supposed to be a map that this
makes sense). They then define $\text{im }\Phi$ to be $\text{Inn}(G)$. They then ask you to show that $\sigma\circ\Phi(g)\circ\sigma^{-1}=\Phi(\sigma(g))$ (where, note again, that $\sigma(g)\in
G$) and so one can deduce that $\text{Inn}(G)\unlhd \text{Aut}(G)$.
Re: Automorphisms and Conjugation
I am still struggling with both the meaning and the difference between $\phi_g$ and $\phi_{\sigma{(g)}}$?
Can anyone clarify this for me and go a bit further with the proof?
Re: Automorphisms and Conjugation
Think about this. Take $G=\mathbb{Z}$, $\sigma(x)=2x$, and $g=3$. Then, they're trying to have you prove that $\sigma\circ\Phi(3)\circ\sigma^{-1}=\Phi(\sigma(3))=\Phi(6)$.
Re: Automorphisms and Conjugation
The example was very helpful in getting an understanding of the problem.
Basically the equivalence between your notation and D&F's is as follows:
$\phi_g$$\equiv$$\Phi (g)$
$\phi_{\sigma (g)}$$\equiv$$\Phi (\sigma (g))$ where $\sigma (g)$ (as is usual) is the map of g under $\sigma$
I am still struggling with the proof of $\sigma \phi_g \sigma^{-1} = \phi_{\sigma(g)}$ and would appreciate some further help.
Regarding Inn(G) $\unlhd$ Aut(G) it appears to me that $\phi_g$ is essentially conjugation of G by g and hence $\phi_g$ is an inner automorphism of G and hence $\phi_g$$\in$ Inn(G).
But where to from here to show Inn(G) $\unlhd$ Aut(G)
Re: Automorphisms and Conjugation
two maps are equal if they take on equal values for every element in their domain.
let's say we have any old automorphism of G, which we call $\sigma$. this is a bijective homomorphism, and let's say $\sigma:x \to y$. clearly, $\sigma^{-1}:y \to x$.
now by $\sigma_g$, we mean the particular automorphism that is conjugation by g: $\sigma_g:x \to gxg^{-1}$ for any x. so:
$\sigma \circ \sigma_g \circ \sigma^{-1}(y) = \sigma(\sigma_g(\sigma^{-1}(y)))$
$= \sigma(\sigma_g(x)) = \sigma(gxg^{-1})$
$= \sigma(g)\sigma(x)\sigma(g^{-1})$ (since $\sigma$ is a homomorphism)
$= \sigma(g)y(\sigma(g))^{-1}$ (again, since $\sigma$ is a homomorphism, and by our definition of y)
$= \sigma_{\sigma(g)}(y)$.
so when we conjugate the inner automorphism corresponding to conjugation by g, by another automorphism $\sigma$, we get the same map as conjugation by the element $\sigma(g)$.
this shows that for any automorphism $\sigma$, that $\sigma\mathrm{Inn}(G)\sigma^{-1} \subseteq \mathrm{Inn}(G)$
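For anyone who wants a quick computational sanity check of this identity (this is not from the thread itself; the encoding of $S_3$ as permutation tuples is purely illustrative, and we exploit the fact that every automorphism of $S_3$ is inner, so $\sigma$ can be taken to be conjugation by some element s), a short Python sketch:

```python
from itertools import permutations

def compose(p, q):
    # (p o q)(i) = p[q[i]]; permutations of {0, 1, 2} stored as tuples
    return tuple(p[q[i]] for i in range(len(p)))

def inverse(p):
    inv = [0] * len(p)
    for i, v in enumerate(p):
        inv[v] = i
    return tuple(inv)

S3 = list(permutations(range(3)))

def phi(g):
    # inner automorphism: conjugation by g, x -> g x g^{-1}
    return lambda x: compose(compose(g, x), inverse(g))

# take sigma itself to be conjugation by s (inner), for every s
for s in S3:
    sigma = phi(s)
    sigma_inv = phi(inverse(s))  # conjugation by s^{-1} inverts sigma
    for g in S3:
        rhs = phi(sigma(g))      # conjugation by sigma(g)
        assert all(sigma(phi(g)(sigma_inv(x))) == rhs(x) for x in S3)
print("identity verified on S3")
```

Running it checks $\sigma \circ \phi_g \circ \sigma^{-1} = \phi_{\sigma(g)}$ for all 36 choices of $(s, g)$, exactly the calculation in the proof above.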
Re: Automorphisms and Conjugation
Thanks for that really helpful post
I will now work through the proof
Re: Automorphisms and Conjugation
note: i just noticed i use $\sigma_g$ where you use $\phi_g$, which is probably preferable to avoid confusion.
Re: Automorphisms and Conjugation
Thanks for that helpful clarification
November 25th 2011, 07:51 PM #2
November 25th 2011, 09:34 PM #3
November 25th 2011, 09:52 PM #4
November 26th 2011, 12:13 AM #5
November 26th 2011, 05:35 AM #6
MHF Contributor
Mar 2011
November 26th 2011, 02:40 PM #7
November 26th 2011, 02:53 PM #8
MHF Contributor
Mar 2011
November 26th 2011, 03:18 PM #9 | {"url":"http://mathhelpforum.com/advanced-algebra/192690-automorphisms-conjugation.html","timestamp":"2014-04-17T11:05:26Z","content_type":null,"content_length":"72020","record_id":"<urn:uuid:3c7f138b-294e-423d-b15e-cc086e497dd3>","cc-path":"CC-MAIN-2014-15/segments/1397609527423.39/warc/CC-MAIN-20140416005207-00327-ip-10-147-4-33.ec2.internal.warc.gz"} |
trouble with the bounds of integration
October 21st 2008, 11:16 AM
trouble with the bounds of integration
verify that this is a valid jdf.
f(y1, y2) = 6y1^2 y2 for 0 <= y1 <= y2 and y1 + y2 <= 2
0 elsewhere
I was trying to integrate y1 from 0 to y2
and y2 from 0 to 2
also i tried y1 from 0 to y2
and y2 from 0 to 1
all times 2
but still not confident with my answers. any help would be great
October 21st 2008, 03:23 PM
mr fantastic
verify that this is a valid jdf.
f(y1, y2) = 6y1^2 y2 for 0 <= y1 <= y2 and y1 + y2 <= 2
0 elsewhere
I was trying to integrate y1 from 0 to y2
and y2 from 0 to 2
also i tried y1 from 0 to y2
and y2 from 0 to 1
all times 2
but still not confident with my answers. any help would be great
A simple sketch graph will show you that the double integral you need to calculate is
$\int_{y_1 = 0}^{y_1 = 1} \int^{y_2 = - y_1 + 2}_{y_2 = y_1} 6 y_1^2 y_2 \, dy_2 \, dy_1$,
where the integral terminals should be evident (especially in hindsight) from a sketch graph of the region over which the joint pdf is non-zero (the triangle bounded by the vertical $y_2$-axis
and the lines $y_2 = y_1$ and $y_2 = -y_1 + 2$. Note that the two lines intersect at (1, 1)). | {"url":"http://mathhelpforum.com/advanced-statistics/54943-trouble-bounds-integration-print.html","timestamp":"2014-04-19T09:58:01Z","content_type":null,"content_length":"5793","record_id":"<urn:uuid:eb2cdea0-e21c-4659-8a7c-47ee7dd7a5e2>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00485-ip-10-147-4-33.ec2.internal.warc.gz"} |
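That double integral does evaluate to 1, which can be verified with a short stand-alone Python sketch (the closed-form inner integral is worked by hand; the crude midpoint sum is only a numeric cross-check):

```python
from fractions import Fraction

# f(y1, y2) = 6*y1^2*y2 on the triangle 0 <= y1 <= y2, y1 + y2 <= 2.
# Inner integral in y2 from y1 to 2 - y1:
#   int 6*y1^2*y2 dy2 = 3*y1^2*((2 - y1)^2 - y1^2) = 12*y1^2 - 12*y1^3
# Outer integral in y1 from 0 to 1:
total = Fraction(12, 3) - Fraction(12, 4)  # 4 - 3
print(total)  # 1, so f is a valid joint pdf

# crude midpoint-rule cross-check over the same region
n = 400
h = 1.0 / n
acc = 0.0
for i in range(n):
    y1 = (i + 0.5) * h
    w = (2 - 2 * y1) / n          # strip width in y2, from y1 to 2 - y1
    for j in range(n):
        y2 = y1 + (j + 0.5) * w
        acc += 6 * y1**2 * y2 * h * w
print(round(acc, 3))  # close to 1.0
```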
Loyola University Chicago - Mathematics and Statistics
An undergraduate lecture series cosponsored by the College of Arts and Sciences. In this lecture series, speakers will connect mathematics to a variety of disciplines within the college and beyond.
Tea and Cookies beforehand.
Organizers: Rafal Goebel & Aaron Lauve
(We are still finalizing the location for this semester. Last year, talks were in Cuneo Hall.)
Cuneo Hall is located at southwest corner of the central quad on Loyola's lakeshore campus. (map)
Public parking available on-campus in the Parking Garage (building P1 on the campus map). To get to the Parking Garage, enter campus at the corner of West Sheridan Road and North Kenmore Avenue.
Titles and abstracts of selected talks from the past
Jerry Bona, Mathematics, Statistics, and Computer Science, University of Illinois at Chicago (fall 2011)
Large Waves on Deep Water: Rogue Waves and Tsunamis
Large scale ocean waves have been of interest for centuries. Investigated here are some of the underpinnings of our attempts to understand such phenomena. The journey will take us through some 19th
century history and culminate in more recent developments. We will be in a position to cast light on the genesis of such waves and understand how they can transmit coherent energy across thousands of
miles of ocean.
Andrew Odlyzko, Digital Technology Center, University of Minnesota (spring 2011)
Cybersecurity, Mathematics, and Limits on Technology
Mathematics has contributed immensely to the development of secure cryptosystems and protocols. Yet our networks are terribly insecure, and we are constantly threatened with the prospect of imminent
doom. Furthermore, even though such warnings have been common for the last two decades, the situation has not gotten any better. On the other hand, there have not been any great disasters either. To
understand this paradox, we need to consider not just the technology, but also the economics, sociology, and psychology of security. Any technology that requires care from millions of people, most
very unsophisticated in technical issues, will be limited in its effectiveness by what those people are willing and able to do. This imposes strong limits on what formal mathematical methods can
accomplish, and suggests that we will have to put up with the equivalent of baling wire and chewing gum, and to live on the edge of intolerable frustration.
Emily Peters, Mathematics, Northwestern University (fall 2012)
Knots, the four-color theorem, and proof by pictures.
Your friend hands you two knots and asks you if they're the same. 'Well, they're both knots!', you reply. 'What do you mean, the same?' Your friend convinces you that some things that look like knots
really aren't knotted (like a necklace that's gotten tangled, but can be combed out with patience), while some other things really and truly are (like my bike frame and a bike rack and my bike lock,
I hope). But now you want to know: how can you be sure? Knot invariants, which I'll introduce, can sometimes answer this.The rest of this talk will be variations on the theme of "proof by pictures"
which emerges from talking about knot invariants. I'll try to convince you of the general usefulness of this point of view in math, by explaining how it can be used to prove the four-color theorem (a
famous, simple-to-state-and-hard-to-prove fact about coloring maps). If time permits I'll also talk about some instances of "proof by pictures" in the field of operator algebras (ie, linear algebra
in infinite dimensions).
Brian Adams, Sandia National Laboratories (spring 2012)
Applied Mathematical Sciences at Sandia
Through this presentation I will relate my six year experience working in a mathematics and computer science research group at Sandia, a national security laboratory. The broad mission areas of the
lab foster research in disciplines including engineering, materials, bioscience, energy and water, infrastructure security, scalable scientific computation, and beyond. Computational scientists
support them with contributions ranging from theory and hardware to algorithms and software to solve application problems of national importance.
I will survey a number of application problems whose solution relies on mathematics, statistics, disciplinary science, and high-performance parallel computing. These are used in creating
computational models (simulations) that scientists and engineers use for insight and decision making. I will also introduce optimization and uncertainty quantification algorithms and discuss their
application to nuclear reactor performance assessment, water network security, micro-electro-mechanical system (MEMS) design, and disease spread modeling. I will touch on challenges of simulation
credibility, or knowing that computer models are appropriate in the context in which they are used.
Aaron Greicius, Mathematics and Statistics, Loyola University Chicago (fall 2011)
Music is Applied Mathematics
Well, at least according to the Pythagoreans, who related consonant intervals to ratios of small integers and subsumed music into their mathematical quadrivium along with arithmetic, geometry and
astronomy. Since then the strong connection between music and mathematics has been much trumpeted; however, explanations seldom go beyond this first point of contact. We hope to improve on this by
taking a deeper look at the musical parameters of pitch, harmony, rhythm and timbre. One-upping Pythagoras, we will see how pitch-space is naturally represented by a circle, and how the space of
dyads (two-note chords) can be considered as a Möbius strip. Regarded in this light, a musical composition comes to resemble exactly the sort of structure that abstract mathematics is concerned with.
As a result, relatively sophisticated mathematics can be applied both to the understanding of these structures (musical analysis), and to their construction (composition). Musical examples will
abound. At the end the speaker will raise the philosophical question of whether we have really come any further in establishing the supposed deep link between these two disciplines, and then quickly
duck out of the room.
Jon Bougie, Physics, Loyola University Chicago (spring 2011)
Pattern Formation in Granular Materials: Nonlinear Dynamics in a Sandbox
Systems composed of grains are all around us, but there is much still to be learned about their behavior. Granular materials are found in many forms, including sand on a beach, wheat in a hopper,
boulders in an avalanche, and pills at a pharmaceutical factory. While the behavior of individual grains can be quite simple, interactions between many grains can lead to very complex and interesting
behavior. Therefore, despite the fact that granular materials are common in nature, agriculture, and industrial applications, the basic equations governing granular systems are not yet known. In this
talk, I will illustrate these ideas by examining spontaneous pattern formation in vertically shaken granular layers. When a layer of grains is placed on a vertically oscillating plate, the layer
leaves the plate at some point in the oscillation if the maximum acceleration of the plate is greater than that of gravity. If the acceleration increases above a critical value, standing waves form
in the layer. These waves spontaneously order themselves to create patterns such as stripes, squares, and hexagons. I will discuss the phenomenon of pattern formation in these systems as an example
of pattern formation in nonlinear systems. I will also compare these patterns to those found in shaken fluid layers, and discuss whether this analogy with fluid dynamics can lead us to a set of
“granular hydrodynamic” equations governing granular flow.
Cary Huffman, Mathematics and Statistics, Loyola University Chicago (fall 2011)
From CDs to Deep Space
Why can you still play a CD even after it is scratched? How does NASA get perfect pictures from Saturn? This talk will examine how error-correcting codes are used in CD/DVD recording and deep space
communications. Really cool pictures will be shown.
Karen Smilowitz, Industrial Engineering and Management Sciences, Northwestern University (fall 2012)
Operations research for non-profit settings
This talk will discuss opportunities and challenges related to the development and application of operations research techniques to transportation and logistics problems in non-profit settings. Much
research has been conducted on transportation and logistics problems in commercial settings where the goal is either to maximize profit or to minimize cost. Significantly less work has been conducted
for non-profit applications. In such settings, the objectives are often more difficult to quantify since issues such as equity and sustainability must be considered, yet efficient operations are
still crucial. This talk will present several research projects that introduce new approaches tailored to the objectives and constraints unique to non-profit agencies, which are often concerned with
obtaining equitable solutions given limited, and often uncertain, budgets, rather than with maximizing profits. This talk will assess the potential of operations research to address the problems
faced by non-profit agencies and attempt to understand why these problems have been understudied within the operations research community. To do so, we will ask the following questions: Are
non-profit operations problems rich enough for academic study? Are solutions to non-profit operations problems applicable to real communities? | {"url":"http://www.luc.edu/math/ucms/","timestamp":"2014-04-16T13:09:27Z","content_type":null,"content_length":"37119","record_id":"<urn:uuid:541c4571-0904-40f0-9903-a01309825bb9>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00137-ip-10-147-4-33.ec2.internal.warc.gz"} |
Niman's 2PairWind Review (2)
Sep 13th, 2011
Mid Stakes
Teaching Method:
Session Video 3+ Table
6 Max
3408 Views
Niman continues with part 2 from his 2PairWind review played at midstakes NL.
• Niman's 2PairWind Review (2)
Discussion for Niman's 2PairWind Review (2).
• Re: Niman's 2PairWind Review (2)
Hi Niman, great video as always. I have a couple questions. 52:30 Table1: Hero flats A5dd and the flop is Kh3hJd. You talk about it being a good spot to checkraise and continue betting on the
turn with the backdoor equity you have but how would you play an Ace on the turn if you checkraise and get called?
• Re: Niman's 2PairWind Review (2)
Oh and would you consider bluffing on a heart turn as well?
• Re: Niman's 2PairWind Review (2)
Great questions doncamatic. A couple things. First of all, if I did decide to check raise this flop, I would give up on the turn unless I pick up some of my backdoor outs. If called, I would
usually not keep betting unless a diamond, deuce, or four came up. A heart also can work well as one of your fake outs. On the other hand, an ace on the turn is a little tricky. I would probably
check and evaluate against most opponents, but if my opponent was aggressive and well balanced, I might just go ahead and keep betting - especially if he is a sheriffy type of player. ... As far
as your second question, yes, a heart is a very good bluffing card against the type of player that would (a) get it in earlier with a flush draw himself (or do something other than call a check
raise on the flop if he had a flush draw) and/or (b) would put a flush draw as a big part of your check raising range. I'm going to be traveling now and won't have much access to the internet for
a few days. I'll check back and answer more questions early next week.
• Re: Niman's 2PairWind Review (2)
this could be a level to get action, considering this is 3/6 and 5/10.
• Re: Niman's 2PairWind Review (2)
In all fairness Tom, the member reached out to us asking if we were taking member submissions and that he plays games up to 5/10. I said yes, if he wanted to record a session, please "record some
2/4 - 5/10 footage", as I figured members would like some midstakes play as well as those being his regular games.
• Re: Niman's 2PairWind Review (2)
sorry for my rude post, just getting paranoid with all the cheating accusations happening right now and in the last weeks + bad day. i can not delete it.
• Re: Niman's 2PairWind Review (2)
Heh, understood. I can delete it if you would like!
• Re: Niman's 2PairWind Review (2)
I played this session. First a comment on my aggressive 3betting. Those I chose to 3bet fold a lot to 3bets (and 4bets), like over 70%. So I chose to 3bet any hand in my opening range against these, to get immediate value from my 3bet rather than postflop value. The recreational player on table 3 folds 100% to 3bets over a small sample. My plan was to 3bet him until he adjusted and would be in a loose mode with me. The shortie on table 4 folds 72% to 3bets, so I chose the strategy of 3betting wide. Also he opens a ton of small blinds, like 50%. Then with SQUIRT at the end: I can see on my HUD that he raises the flop over 30% of the time. That is huge, very unusual. He also fires a second barrel more than 50% of the time, which is also above average. His VPIP is not that low either. So I opt to solve this problem with aggression. SQUIRT also 3bets over 40% in heads up. He likes to carry on with his bet on the turn. Then the first hand, where I 3bet the recreational player: I think folding the flop is better. If I chose to 3bet, then clicking it back is better I think. No need to make it big. I also do not think he will be able to fold his Jx by the river. He is recreational with a VPIP of over 40. They just can not fold top pair. On the turn I was trying to get him to fold a pocket pair, but I do not like this bet. Against a regular I would have liked to see myself shove the river. But not against a recreational player.
• Re: Niman's 2PairWind Review (2)
My previous comment was supposed to have been for part 1, and I have posted it there as well. I am unable to delete or edit my first post. I believe Niman gives very good comments. Especially
his ability to go deep and visualize different scenarios I like a lot. I do not feel like Niman is after me. I feel he has a different style than me. I think this is due to me having a lot of
stats from my HUD available. I appreciate a lot Niman's comments.
• Re: Niman's 2PairWind Review (2)
On table 4 at 21:00, I am usually betting this turn against most standard players. I think we're good the vast majority of the time (I think villain often has hit a Q) and feel like we're more
likely to get called on the turn than on the river, and there are a lot of rivers that we're just gonna have to check back, like on a K or J, as well as villain being very unlikely to bluff the
Robburgundy, if you want me to pipe in with my thoughts and analysis of the hand you are talking about, please provide the details of the hand. Thanks!
• Re: Niman's 2PairWind Review (2)
Ah ok, it's btn vs bb, we get donked into on A7Thh, then villain checks a Q turn bringing the second flush draw. I obviously wasn't paying enough attention; when I posted I thought we had been c/r on
the flop. I still think I prefer betting the turn though.
• Re: Re: Niman's 2PairWind Review (2)
robburgundy wrote:
Ah ok, it's btn vs bb, we get donked into on A7Thh, then villain checks a Q turn bringing second flush draw, I obviously wasn't paying enough attention, when I posted I thought we had been c/
r on the flop. I still think I prefer betting the turn though.
You still haven't mentioned what the hero's cards were.
• Re: Niman's 2PairWind Review (2)
Haha, sorry I'm not very good at this. Hero has A3ss (no FD), as well as how the hand actually played out. Had we been c/r on the flop, do you bet the turn?
• Re: Re: Niman's 2PairWind Review (2)
robburgundy wrote:
Haha, sorry I'm not very good at this. Hero has A3ss (no FD), as well as how the hand actually played out. Had we been c/r on the flop, do you bet the turn?
Definitely not a spot to bet if we just called a donk on the flop, as villain's continuing range to our bet should be well ahead of our range. If we were check raised on the flop instead, we can
bet the turn if we know enough about our villain's tendencies such that we can feel comfortable with our decision (whether that is to go with our hand or to fold) based on our opponent's range and
tendencies - if we get check raised again.
• Re: Niman's 2PairWind Review (2)
So what sort of hands do you think villain is checking the turn with? I feel like he mainly has weak/medium strength hands (like QJ, which we have a better chance of getting value from on the
turn than river) or bluffs which he's giving up on anyway, so we might as well bet turn. | {"url":"http://www.bluefirepoker.com/videos/niman-s-2pairwind-review-2/","timestamp":"2014-04-20T21:05:03Z","content_type":null,"content_length":"44130","record_id":"<urn:uuid:f60540eb-0278-471a-94b2-8074a216d4c7>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00149-ip-10-147-4-33.ec2.internal.warc.gz"} |
Physics 505
12/14 Final exam statistics (Ave/Out of [SD]):
Question 1: 13.2/30 [6.5]; (Max 22, Min 0)
Question 2: 30.1/40 [7.3]; (Max 38, Min 11)
Question 3: 20.9/30 [6.9]; (Max 29, Min 2)
TOTAL: 64.2/100 [16.7]; (Max 85, Min 13)
As you can see from the results, the first question was found to be the most difficult. Many students tried to solve using the Euler equations, which turns out to be more complicated than simply
using L-dot=torque. For completeness, and in case anyone is interested, a solution using the Euler equations is posted here. I also include a note on the correspondence between the angles used in the
problem and the canonical Euler angles.
My solutions for the last problem had one error---the shape of the trajectories depends on the amplitude. For a purely quartic potential they are squashed horizontally at large amplitudes but then switch over to being squashed vertically once the amplitude drops below about unity.
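The amplitude dependence is easy to check numerically. This sketch assumes the dimensionless Hamiltonian H = p^2/2 + x^4/4 (my assumed normalization, not stated in the announcement); each energy contour's horizontal and vertical half-widths then tell which way the orbit is squashed:

```python
import math

def extents(E):
    """Half-widths of the phase-space orbit p**2/2 + x**4/4 = E."""
    x_max = (4 * E) ** 0.25   # turning point, where p = 0
    p_max = math.sqrt(2 * E)  # maximum momentum, at x = 0
    return x_max, p_max

x_hi, p_hi = extents(100.0)  # large amplitude
x_lo, p_lo = extents(0.01)   # small amplitude

assert p_hi > x_hi  # tall and narrow: squashed horizontally
assert x_lo > p_lo  # short and wide: squashed vertically
# the crossover x_max == p_max occurs at E = 1, i.e. x_max = 2**0.5,
# an amplitude of about unity
```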
I will put final exams into mailboxes tomorrow (for those with mailboxes). For others, I will have them in my office. Have a good break! | {"url":"http://www.phys.washington.edu/~sharpe/505/course.html","timestamp":"2014-04-17T06:48:06Z","content_type":null,"content_length":"14040","record_id":"<urn:uuid:8f31c6b9-9e9d-4776-a8f7-b547d6aec936>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00352-ip-10-147-4-33.ec2.internal.warc.gz"} |
Rowland Heights Math Tutor
One of my favorite activities is tutoring students and helping them achieve academic success. I’ve tutored students for more than 20 years in many subjects ranging from mathematics to SAT / ACT /
GRE and other standardized test preparation. I help students •hone their critical reading comprehensi...
41 Subjects: including precalculus, trigonometry, GRE, algebra 1
I have taught math for over 5 years! Many of my students are from grade 2 to 12, some are from college. I also have a math tutor certificate for college students from Pasadena City College. I
graduated in 2012 from UCLA.
7 Subjects: including algebra 1, algebra 2, geometry, prealgebra
...I also tutor school students and my youngest so far has been in 4th Grade. My aim is to ensure that my students develop a good basic understanding of the subject so that their self confidence
increases and with it their enjoyment of learning. Their success is reflected in their smiles and improving grades.Algebra is one of the pillars of Math.
12 Subjects: including algebra 1, trigonometry, SAT math, precalculus
...I would love to coach cross-country and track, and pass my love of running and fitness on to you! I have also always had a great passion for nutrition and fitness in general. With all my
experience as a competitive athlete and being a health buff, I know I’ll be a great coach!
62 Subjects: including geometry, algebra 2, algebra 1, precalculus
...My strengths are Algebra and Calculus at all levels. I have also tutored Geometry and Trigonometry. I have tutored 6th to 12th grade Mathematics for over a decade, so I know how to convey the
subject to students.My favorite method is to make the topic come alive by using familiar situations to explain concepts.
8 Subjects: including algebra 1, algebra 2, geometry, prealgebra | {"url":"http://www.purplemath.com/rowland_heights_math_tutors.php","timestamp":"2014-04-20T21:01:32Z","content_type":null,"content_length":"23998","record_id":"<urn:uuid:e5621dd2-df12-41b6-a8b9-432c73e5dc5c>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00065-ip-10-147-4-33.ec2.internal.warc.gz"} |
How many 2D graph combinations can 498 different datasets provide?
August 4th 2011, 12:27 AM
How many 2D graph combinations can 498 different datasets provide?
I did a course in discrete maths last year, but for the life of me cannot remember how to do the simplest of calculations anymore, even though I managed to get a first for it.
I want to know how many different 2D graphs (i.e. x-y graphs) I can make from a total of 498 different datasets. EDIT: wait the set = {x, y} and each element can take 498 values, therefore the
answer should be 498^2. Is this correct?
Thanks guys, cheers.
August 6th 2011, 01:39 PM
Re: How many 2D graph combinations can 498 different datasets provide?
If I understand you correctly you have a 2-tuple such that each element belongs to a set of cardinality 498.
Then you're correct, the number of different combinations would be 498^2. | {"url":"http://mathhelpforum.com/discrete-math/185572-how-many-2d-graph-combinations-can-498-different-datasets-provide-print.html","timestamp":"2014-04-18T13:43:21Z","content_type":null,"content_length":"4118","record_id":"<urn:uuid:b7a871c5-2ba4-4426-9034-fbb6214721a4>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00274-ip-10-147-4-33.ec2.internal.warc.gz"} |
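The count asked about above (ordered (x, y) axis choices with repetition allowed) can be brute-force checked for a small n and then applied to n = 498; a quick sketch:

```python
from itertools import product

def graph_count(n):
    # ordered (x, y) axis choices drawn from n datasets, repeats allowed
    return sum(1 for _ in product(range(n), repeat=2))

assert graph_count(5) == 5**2  # brute force matches n**2
print(graph_count(498))  # 248004
```

Note that if plots with x = y are excluded the count drops to 498 * 497, and to 498 * 497 / 2 if swapping the two axes is considered the same graph.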
A new audio amplifier topology - Part 4: Noise in folded cascode stages | EE Times
Design How-To
This article originally appeared in Linear Audio Volume 2, September 2011. Linear Audio, a book-size printed tech audio resource, is published half-yearly by Jan Didden.
[Part 1 introduces an audio amplifier topology which uses a novel push-pull transimpedance stage that offers a substantial improvement in power supply rejection over standard amplifier
configurations. Part 2 discusses the amplifier's biasing, stability and AC performance. Part 3 compares the performance of the new topology with that of a standard amplifier.]
Appendix: Noise in Folded Cascode Stages
The noise contribution of folded cascodes is a major consideration for the newly introduced amplifier topology. In this appendix I will thus present a brief analysis of the major noise sources in
folded cascodes. For an exact analysis the mathematical expressions quickly become rather involved. I will hence apply several simplifications; however it is ensured that the result is still valid at
least to the extent that it leads to the correct conclusions and design guide lines in typical implementations.
The basic folded cascode consists of three fundamental circuit elements: a common-base transistor, an associated emitter resistor and a voltage reference, which is connected to the base of the
cascode transistor. The input of such a stage is in the form of a current, which is applied to the emitter of the common-base transistor. The output is also in the form of a current, available at the
collector of the common-base transistor.
In the following we will consider the three fundamental circuit elements to be noise-free (which is denoted by the addition of an asterisk to the according denominator), and model their actual noise
contribution by the addition of explicit voltage and current noise sources. In figure A1, Q* embodies the cascode transistor; its voltage and current noise generators are combined and referred to the
input by E[nQ] and I[nQ]. R* forms the emitter resistor, and its noise contribution is represented by the series voltage source E[nR]. Finally, the voltage reference is shown as V*, with associated
noise generator E[nV]. The incremental impedance of the voltage reference is of some importance as well, and represented as RV*.
Figure A1: Folded cascode noise generators.
We will now independently analyse each of the four noise generators, and derive their contribution at the output of the folded cascode, i.e. the contributions to the collector current of Q*. The
total of these contributions may then be derived by the usual root-mean-square summation, which needs to be applied for uncorrelated sources.
For the analysis we will make the following assumptions: the h[FE] of Q* is much larger than unity such that base current losses are negligible, the reciprocal of Q* transconductance is much smaller
than R*, and h[FE] • R* is much larger than RV*. All assumptions are valid for typical implementations.
The voltage noise sources of Q* and V* (E[nQ] and E[nV]) effectively appear as input signal to an emitter degenerated common-emitter stage. Their contribution at the cascode output is then given by:
Similarly the noise generator E[nQ] appears in the folded cascode output current as:
The current noise generator of Q* (I[nQ]) has two different contribution paths. First of all, it appears directly in the collector current. That is seen by considering that the sum of the Q* emitter
current and I[nQ] is constant (as set by V*, R* and Q* base-emitter voltage); hence I[nQ] must modulate the emitter current of Q*.
As the collector current is equal to the emitter current, the emitter current modulation also appears at the collector of Q*. However, I[nQ] also flows trough the voltage reference. There RV*
converts the noise current to a corresponding voltage,which again drives an emitter degenerated common- emitter stage. Note that this mechanism is fully correlated to the first contribution path, and
hence the two terms must be linearly added:
Contemplation of (1) trough (4) reveals that, everything else equal, the contribution of any of the four noise generators is reduced by increasing R*. Increasing R* however also increases E[nQ] (as
this transistor is then operated at lower collector current, which increases its voltage noise) and E[nQ] (higher resistance values imply higher voltage noise); yet this increase is typically
proportional to the square root of R* only. Thus overall a net improvement of about v2 (or 3 dB) for doubling R* is gained. I[nQ] reduces itself as well at lower quiescent currents (lower base
current implies lower base current noise).
From this discussion it follows that, as a first means to reduce the noise contribution of a folded cascode, the emitter resistor value should be chosen as large as possible. This corresponds to the
choice of a low quiescent collector current, and a voltage reference with large DC value. There is usually a lower limit on quiescent current, dictated by distortion concerns. Further noise
improvements beyond this point must hence be achieved solely by the increase in reference voltage. | {"url":"http://www.eetimes.com/document.asp?doc_id=1280168","timestamp":"2014-04-16T11:17:48Z","content_type":null,"content_length":"134244","record_id":"<urn:uuid:29d4ddcf-9d3c-4e20-bce2-678daf638eff>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00645-ip-10-147-4-33.ec2.internal.warc.gz"} |
A new audio amplifier topology - Part 4: Noise in folded cascode stages | EE Times
This article originally appeared in Linear Audio Volume 2, September 2011. Linear Audio, a book-size printed tech audio resource, is published half-yearly by Jan Didden.
[Part 1 introduces an audio amplifier topology which uses a novel push-pull transimpedance stage that offers a substantial improvement in power supply rejection over standard amplifier
configurations. Part 2 discusses the amplifier's biasing, stability and AC performance. Part 3 compares the performance of the new topology with that of a standard amplifier.]
Appendix: Noise in Folded Cascode Stages
The noise contribution of folded cascodes is a major consideration for the newly introduced amplifier topology. In this appendix I will thus present a brief analysis of the major noise sources in
folded cascodes. For an exact analysis the mathematical expressions quickly become rather involved. I will hence apply several simplifications; it is ensured, however, that the result remains valid at least to the extent that it leads to the correct conclusions and design guidelines for typical implementations.
The basic folded cascode consists of three fundamental circuit elements: a common-base transistor, an associated emitter resistor and a voltage reference, which is connected to the base of the
cascode transistor. The input of such a stage is in the form of a current, which is applied to the emitter of the common-base transistor. The output is also in the form of a current, available at the
collector of the common-base transistor.
In the following we will consider the three fundamental circuit elements to be noise-free (denoted by an asterisk appended to the corresponding designator), and model their actual noise
contribution by the addition of explicit voltage and current noise sources. In figure A1, Q* embodies the cascode transistor; its voltage and current noise generators are combined and referred to the
input by E[nQ] and I[nQ]. R* forms the emitter resistor, and its noise contribution is represented by the series voltage source E[nR]. Finally, the voltage reference is shown as V*, with associated
noise generator E[nV]. The incremental impedance of the voltage reference is of some importance as well, and represented as RV*.
Figure A1: Folded cascode noise generators.
We will now analyse each of the four noise generators independently and derive its contribution at the output of the folded cascode, i.e. to the collector current of Q*. The
total of these contributions may then be derived by the usual root-mean-square summation, which needs to be applied for uncorrelated sources.
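As a purely numerical illustration of that root-mean-square summation, the sketch below combines four uncorrelated output-referred contributions; the densities are hypothetical placeholder values, not taken from the circuit above.

```python
import math

# Hypothetical output-referred noise current densities in A/sqrt(Hz),
# one per generator path; illustrative values only.
contributions = [2.0e-12, 1.0e-12, 1.5e-12, 0.5e-12]

# Uncorrelated sources add in power, so the total is the root-sum-square.
total = math.sqrt(sum(c ** 2 for c in contributions))
print(total)  # slightly above the largest single contribution
```

Note that the sum is dominated by the largest generator, which is why attacking the dominant contribution first pays off most.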
For the analysis we will make the following assumptions: the h[FE] of Q* is much larger than unity such that base current losses are negligible, the reciprocal of Q* transconductance is much smaller
than R*, and h[FE] • R* is much larger than RV*. All assumptions are valid for typical implementations.
The voltage noise sources of Q* and V* (E[nQ] and E[nV]) effectively appear as input signals to an emitter-degenerated common-emitter stage. Their contribution at the cascode output is then given by:
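With the emitter degeneration setting the stage's effective transconductance to roughly 1/R* (per the assumptions above), these two contributions take the approximate form below; this is a reconstruction under those assumptions, neglecting factors close to unity:

```latex
I_{n,\text{out}} \approx \frac{E_{nQ}}{R^*} \quad (1)
\qquad\qquad
I_{n,\text{out}} \approx \frac{E_{nV}}{R^*} \quad (2)
```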
Similarly, the noise generator of the emitter resistor, E[nR], appears in the folded cascode output current as:
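Writing the emitter resistor's voltage noise as E[nR], to keep it distinct from the transistor's E[nQ], the same approximate 1/R* effective transconductance gives:

```latex
I_{n,\text{out}} \approx \frac{E_{nR}}{R^*} \quad (3)
```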
The current noise generator of Q* (I[nQ]) has two different contribution paths. First of all, it appears directly in the collector current. That is seen by considering that the sum of the Q* emitter
current and I[nQ] is constant (as set by V*, R* and Q* base-emitter voltage); hence I[nQ] must modulate the emitter current of Q*.
As the collector current is equal to the emitter current, the emitter current modulation also appears at the collector of Q*. However, I[nQ] also flows through the voltage reference. There RV*
converts the noise current to a corresponding voltage, which again drives an emitter-degenerated common-emitter stage. Note that this mechanism is fully correlated to the first contribution path, and
hence the two terms must be linearly added:
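Under the same approximations the direct path contributes I[nQ] itself, while the reference-impedance path contributes I[nQ] · RV*/R*; their linear (correlated) sum is then approximately:

```latex
I_{n,\text{out}} \approx I_{nQ}\left(1 + \frac{R_V^*}{R^*}\right) \quad (4)
```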
Contemplation of (1) through (4) reveals that, everything else equal, the contribution of any of the four noise generators is reduced by increasing R*. Increasing R* however also increases E[nQ] (the
transistor is then operated at a lower collector current, which increases its voltage noise) and E[nR] (higher resistance values imply higher voltage noise); yet this increase is typically
proportional only to the square root of R*. Thus overall a net improvement of about √2 (or 3 dB) is gained for every doubling of R*. I[nQ] likewise decreases at lower quiescent currents (lower base
current implies lower base current noise).
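The "about 3 dB per doubling of R*" figure can be checked numerically. The sketch below considers only the emitter resistor's own contribution, assuming its voltage noise is plain Johnson noise √(4kTR) and that the degenerated stage's gain is 1/R*; all component values are hypothetical.

```python
import math

k_B, T = 1.380649e-23, 300.0  # Boltzmann constant (J/K), temperature (K)

def resistor_contribution(R):
    """Output current noise density of R*: Johnson voltage noise
    sqrt(4*k*T*R) driving a degenerated stage with gain ~1/R*."""
    return math.sqrt(4.0 * k_B * T * R) / R  # simplifies to sqrt(4kT/R)

i_1k = resistor_contribution(1e3)  # hypothetical R* = 1 kOhm
i_2k = resistor_contribution(2e3)  # R* doubled to 2 kOhm

improvement_db = 20.0 * math.log10(i_1k / i_2k)
print(round(improvement_db, 2))  # about 3.01 dB gained per doubling
```

The noise voltage grows only as √R* while the conversion gain falls as 1/R*, so the net contribution falls as 1/√R*: 10·log10(2) ≈ 3 dB per doubling, matching the text.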
From this discussion it follows that, as a first means of reducing the noise contribution of a folded cascode, the emitter resistor value should be chosen as large as possible. This corresponds to the
choice of a low quiescent collector current and a voltage reference with a large DC value. There is usually a lower limit on the quiescent current, dictated by distortion concerns. Further noise
improvements beyond this point must hence be achieved solely by increasing the reference voltage.
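To make the last guideline concrete, a small hypothetical sketch: for a chosen quiescent current, the usable emitter resistor is bounded by the DC voltage the reference can place across it (one base-emitter drop below the reference). The 0.65 V drop and all other values are assumptions for illustration only.

```python
def max_emitter_resistor(v_ref, i_q, v_be=0.65):
    """Approximate largest R* that still supports quiescent current i_q
    when the reference places (v_ref - v_be) across the resistor."""
    return (v_ref - v_be) / i_q

# Raising the reference voltage at fixed quiescent current enlarges R*,
# and with it the achievable noise improvement:
print(max_emitter_resistor(2.0, 1e-3))  # about 1.35 kOhm
print(max_emitter_resistor(4.0, 1e-3))  # about 3.35 kOhm
```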