3 Green's Theorem. 3.1 History of Green's Theorem. Sometime around 1793, George Green was born [9]. If $u$ is harmonic in $\Omega$ and $u = g$ on $\partial\Omega$, then $u(x) = -\int_{\partial\Omega} g(y)\,\frac{\partial G}{\partial\nu}(x,y)\,dS(y)$. 4.2 Finding Green's Functions. Finding a Green's function is difficult. Vector fields, line integrals, and Green's Theorem. Green's Theorem, solution to exercise in lecture: in the lecture, Green's Theorem is used to evaluate a line integral … Green's theorem in the plane. By direct calculation, the right-hand side of Green's Theorem … For functions $P(x,y)$ and $Q(x,y)$ defined in $\mathbb{R}^2$, we have $\oint_C (P\,dx + Q\,dy) = \iint_A \left(\frac{\partial Q}{\partial x} - \frac{\partial P}{\partial y}\right) dx\,dy$, where $C$ is a simple closed curve bounding the region $A$. Vector Calculus is a "methods" course, in which we apply … Green's theorem converts the line integral to … Goal: describe the relation between the way a fluid flows along or across the boundary of a plane region and the way fluid moves around inside the region. Green's theorem for flux. Examples of using Green's theorem to calculate line integrals. The fact that the integral of a (two-dimensional) conservative field over a closed path is zero is a special case of Green's theorem. $\oint_C \mathbf{F}\cdot d\mathbf{r}$ is either $0$ or $-2\pi$; that is, no matter how crazy the curve $C$ is, the line integral of $\mathbf{F}$ along $C$ can have only one of two possible values. $\oint_{\partial D} \mathbf{F}\cdot\mathbf{n}\,ds = \iint_D \nabla\cdot\mathbf{F}\,dA$: it says that the integral around the boundary $\partial D$ of the normal component of the vector field $\mathbf{F}$ equals the double integral over the region $D$ of the divergence of $\mathbf{F}$. Proof of Green's theorem. Applications of Green's Theorem: let us suppose that we are starting with a path $C$ and a vector-valued function $\mathbf{F}$ in the plane. He would later go to school during the years 1801 and 1802 [9]. Green's theorem (articles): Green's theorem; 2D divergence theorem. In a similar way, the flux form of Green's Theorem follows from the circulation form. Circulation or flow integral: assume $\mathbf{F}(x,y)$ is the velocity vector field of a fluid flow.
Circulation Form of Green's Theorem. https://patreon.com/vcubingx This video aims to introduce Green's theorem, which relates a line integral with a double integral. This form of the theorem relates the vector line integral over a simple closed plane curve $C$ to a double integral over the region enclosed by $C$. Therefore, the circulation of a vector field along a simple closed curve can be transformed into a double integral and vice versa. It's actually really beautiful. There are three special vector fields, among many, where this equation holds. The example above showed that if $N_x - M_y = 1$ then the line integral gives the area of the enclosed region. ii) We'll only do $M\,dx$ ($N\,dy$ is similar). Green's theorem relates the double integral of the curl to a certain line integral. Green's theorem gives a relationship between the line integral of a two-dimensional vector field over a closed path in the plane and the double integral over the region it encloses. We'll show why Green's theorem is true for elementary regions $D$. Divergence Theorem. Theorems such as this can be thought of as two-dimensional extensions of integration by parts. Then as we traverse along $C$ there are two important (unit) vectors, namely $\mathbf{T}$, the unit tangent vector $\langle \frac{dx}{ds}, \frac{dy}{ds} \rangle$, and $\mathbf{n}$, the unit normal vector $\langle \frac{dy}{ds}, -\frac{dx}{ds} \rangle$. Answer: Green's theorem tells us that if $\mathbf{F} = (M, N)$ and $C$ is a positively oriented simple … Green's Theorem in two dimensions relates double integrals over domains $D$ to line integrals around their boundaries $\partial D$. The first form of Green's theorem that we examine is the circulation form. (b) $C$ is the ellipse $x^2 + \frac{y^2}{4} = 1$. For $x \in \Omega$, where $G(x,y)$ is the Green's function for $\Omega$. Green's theorem is itself a special case of the much more general Stokes' theorem. Solution.
Later we'll use a lot of rectangles to approximate an arbitrary region. Corollary 4. If $C$ is an open curve, please don't even think about using Green's theorem. So we can consider the following integrals. Stokes' theorem. Theorem (Green's theorem): let $D$ be a closed, bounded region in $\mathbb{R}^2$ with boundary $C = \partial D$. If $\mathbf{F} = M\mathbf{i} + N\mathbf{j}$ is a $C^1$ vector field on $D$ then $\oint_C M\,dx + N\,dy = \iint_D \left(\frac{\partial N}{\partial x} - \frac{\partial M}{\partial y}\right) dx\,dy$. Notice that $\left(\frac{\partial N}{\partial x} - \frac{\partial M}{\partial y}\right)\mathbf{k} = \nabla \times \mathbf{F}$. Theorem (Stokes' theorem): let $S$ be a smooth, bounded, oriented surface in $\mathbb{R}^3$ and … Green published this theorem in 1828, but it was known earlier to Lagrange and Gauss. This meant he only received four semesters of formal schooling at Robert Goodacre's school in Nottingham [9]. The next theorem asserts that $\int_C \nabla f \cdot d\mathbf{r} = f(B) - f(A)$, where $f$ is a function of two or three variables and $C$ is … Example 1. Green's Theorem, sketch of proof: $\oint_C M\,dx + N\,dy = \iint (N_x - M_y)\,dA$. Note that $P = \frac{y}{x^2+y^2}$ and $Q = \frac{x}{x^2+y^2}$, and so $P$ and $Q$ are not differentiable at $(0,0)$, hence not differentiable everywhere inside the region enclosed by $C$. (The Fundamental Theorem of Line Integrals has already done this in one way, but in that case we were still dealing with an essentially one-dimensional integral.) Green's Theorem in Normal Form. We state the following theorem, which you should easily be able to prove using Green's Theorem. Green's theorem, Example 1. However, for certain domains $\Omega$ with special geometries, it is possible to find Green's functions. The basic theorem relating the fundamental theorem of calculus to multidimensional integration will still be that of Green.
1 Green's Theorem. Green's theorem states that a line integral around the boundary of a plane region $D$ can be computed as a double integral over $D$. More precisely, if $D$ is a "nice" region in the plane and $C$ is the boundary of $D$ with $C$ oriented so that $D$ is always on the left-hand side as one goes around $C$ (this is the positive orientation of $C$), then … The positive orientation of a simple closed curve is the counterclockwise orientation. B. Green's Theorem in an Operator-Theoretic Setting: basic to the operator viewpoint on Green's theorem is an inner product defined on the space of interest. Let's say we have a path in the $xy$-plane. We consider two cases: the case when $C$ encompasses the origin and the case when $C$ does not encompass the origin. Green's Theorem, Joseph Breen. Introduction: one of the most important theorems in vector calculus is Green's Theorem. In this chapter, as well as the next one, we shall see how to generalize this result in two directions. Consider the integral $\int_C \frac{y}{x^2+y^2}\,dx + \frac{x}{x^2+y^2}\,dy$. Evaluate it when (a) $C$ is the circle $x^2 + y^2 = 1$; (b) $C$ is the ellipse $x^2 + \frac{y^2}{4} = 1$. Proof: i) first we'll work on a rectangle. David Guichard, 16.4: Green's Theorem (CC-BY-NC-SA): we now come to the first of three important theorems that extend the Fundamental Theorem of Calculus to higher dimensions. Compute $\oint_C y^2\,dx + 3xy\,dy$ where $C$ is the CCW-oriented boundary of … First, Green's theorem works only for the case where $C$ is a simple closed curve. Green's theorem implies the divergence theorem in the plane. The operator Green's theorem has a close relationship with the radiation integral and Huygens' principle, reciprocity, energy conservation, lossless conditions, and uniqueness. Chapter 18, The Theorems of Green, Stokes, and Gauss: Gradient Fields Are Conservative. The fundamental theorem of calculus asserts that $\int_a^b f'(x)\,dx = f(b) - f(a)$. V4.
DIVERGENCE THEOREM, STOKES' THEOREM, GREEN'S THEOREM AND RELATED INTEGRAL THEOREMS. Let $S$ be a closed surface in space enclosing a region $V$ and let $\mathbf{A}(x,y,z)$ be a vector point function, continuous and with continuous derivatives over the region; here $\mathbf{n}$ is the positive (outward-drawn) normal to $S$. Green's theorem examples. That's my $y$-axis, that is my $x$-axis, and my path will look like this. Problems: Green's Theorem. Calculate $\oint_C x^2 y\,dx + x y^2\,dy$, where $C$ is the circle of radius 2 centered on the origin. (a) We did this in class. Let $\mathbf{F} = M\mathbf{i} + N\mathbf{j}$ represent a two-dimensional flow field, and $C$ a simple closed curve, positively oriented, with interior $R$. According to the previous section, (1) flux of $\mathbf{F}$ across $C$ $= \oint_C M\,dy - N\,dx$. Lecture 27: Green's Theorem. Definition: a simple closed curve in $\mathbb{R}^n$ is a curve which is closed and does not intersect itself. Green's Theorem and Area. Green's Theorem, Calculus III (MATH 2203), S. F. Ellermeyer, November 2, 2013: Green's Theorem gives an equality between the line integral of a vector field (either a flow integral or a flux integral) around a simple closed curve and the double integral of a function over the region enclosed by the curve. If you think of the idea of Green's theorem in terms of circulation, you won't make this mistake.
2015-04-16
# Manhattan Sort
Yet another sorting problem! In this one, you’re given a sequence S of N distinct integers and are asked to sort it with minimum cost using only one operation:
The Manhattan swap!
Let Si and Sj be two elements of the sequence at positions i and j respectively, applying the Manhattan swap operation to Si and Sj swaps both elements with a cost of |i-j|. For example, given the sequence {9,5,3}, we can sort the sequence with a single Manhattan swap operation by swapping the first and last elements for a total cost of 2 (absolute difference between positions of 9 and 3).
The first line of input contains an integer T, the number of test cases. Each test case consists of 2 lines. The first line consists of a single integer (1 <= N <= 30), the length of the sequence S. The second line contains N space separated integers representing the elements of S. All sequence elements are distinct and fit in 32 bit signed integer.
2
3
9 5 3
6
6 5 4 3 2 1
Case #1: 2
Case #2: 9
#include <cstdio>
#include <cstdlib>
#include <algorithm>
using namespace std;

struct node {
    int id;  // original 1-based position in the sequence
    int v;   // element value
} p[33];

bool cmp(node a, node b) {
    return a.v < b.v;
}

int main() {
    int t;
    scanf("%d", &t);
    int ca = 1;
    while (t--) {
        int n;
        scanf("%d", &n);
        int ans = 0;
        for (int i = 1; i <= n; i++) {
            scanf("%d", &p[i].v);
            p[i].id = i;
        }
        sort(p + 1, p + 1 + n, cmp);
        // Total displacement; each swap of cost |i-j| accounts for 2|i-j| of it.
        for (int i = 1; i <= n; i++)
            ans += abs(p[i].id - i);
        printf("Case #%d: %d\n", ca++, ans / 2);
    }
    return 0;
}
Looking for the solution of the first-order non-linear differential equation $y'+y^{2}=f(x)$ without knowing a particular solution.
I have been working on the Riccati equation. I have tried many different methods to find a closed form for the solution of the first-order non-linear differential equation $y'+y^{2}=f(x)$ without knowing a particular solution. My aim is to open a topic, collect all known methods, and make progress toward finding the general solution of the Riccati equation without knowing a particular solution (if possible). Perhaps it can be proved that the solution cannot be expressed in closed form. Actually, I am looking for a closed form similar to that of the linear differential equation $y'+y=f(x)$, whose solution, as is well known, is $y=e^{-x}\int{f(x)e^{x}}\,dx$.
Do you know any method to obtain a closed solution form of $y'+y^{2}=f(x)$ without knowing a particular solution? Whether you say that it is not possible to find such a closed form, or that it is possible, please prove it.
I know how to find a particular solution via an endless variable transform, an endless integral, endless derivatives, or a power series. You can find the Wikipedia link about the subject here:
http://en.wikipedia.org/wiki/Riccati_equation This equation is also related to a second-order linear differential equation: if we put $y=u'/u$, it turns into $u''(x)-f(x)u(x)=0$. If we find the general solution of $y'+y^{2}=f(x)$, it means that $u''(x)-f(x)u(x)=0$ is solved as well. As we know, many functions, such as the Bessel functions, the Hermite polynomials, and many other special functions, are related to second-order linear differential equations.
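For completeness, here is the one-line computation behind this substitution (a standard calculation I am adding, not taken from the post):

```latex
% With y = u'/u:
y' = \frac{u''}{u} - \left(\frac{u'}{u}\right)^{2}
\quad\Longrightarrow\quad
y' + y^{2} = \frac{u''}{u} = f(x)
\quad\Longrightarrow\quad
u'' - f(x)\,u = 0 .
```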
I added some solution methods below and showed how we can find a solution of $y'+y^{2}=f(x)$. The methods find a particular solution and the general solution (1. endless transform, 2. endless integral, 3. endless derivatives, 4. power series). Perhaps a closed form of the general solution can be a combination of the methods below, or the problem needs another kind of approach.
1-Endless Transform
$y'+y^{2}=f(x)$
$y=\frac{1}{Z}$
$y'=\frac{-Z'}{Z^{2}}$
$\frac{-Z'}{Z^{2}}+\frac{1}{Z^{2}}=f(x)$
$Z'+Z^{2}f(x)=1$
$Z=P.Q$
$P'Q+PQ'+P^{2}Q^{2}f(x)=1$
$P'+P\frac{Q'}{Q}+P^{2}Qf(x)=\frac{1}{Q}$
$Q=\frac{1}{f(x)}$
$P'+P\frac{-f'(x)}{f(x)}+P^{2}=f(x)$
$P=T+\frac{f'(x)}{2f(x)}$
$T '+T^{2}=f(x)+(\frac{-f'(x)}{2f(x)})^{2}+(\frac{-f'(x)}{2f(x)})'$
$y=\frac{1}{Z}=\frac{1}{PQ}=\frac{f(x)}{P}=\frac{f(x)}{\frac{f'(x)}{2f(x)}+T}$
If we define $f_{n+1}(x)=f_n(x)+(\frac{-f_n'(x)}{2f_n(x)})^{2}+(\frac{-f_n'(x)}{2f_n(x)})'$,
$f_0(x)=f(x)$
$y_n(x)=\frac{f_n(x)}{\frac{f_n'(x)}{2f_n(x)}+y_{n+1}}$
$y_0(x)=y_p(x)$ is our particular solution
Once a particular solution $y_p(x)$ is known, the general solution follows from the substitution $y=y_p+\frac{1}{H}$:
$y_p'+(\frac{-H'}{H^{2}})+y_p^{2}+\frac{2y_p}{H}+\frac{1}{H^{2}}=f(x)$
$\frac{-H'}{H^{2}}+\frac{2y_p}{H}+\frac{1}{H^{2}}=0$
$H'-2y_p.H=1$
$H(x)=e^{2\int{y_p}dx}\int{e^{-2\int{y_p}dx}}dx$
$y(x)=y_p(x)+\frac{e^{-2\int{y_p(x)}dx}}{\int{e^{-2\int{y_p(x)}dx}}dx}$ (This is general solution)
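As a consistency check of this general-solution formula, here is a worked example of mine (not in the original post) using the standard case $f\equiv 1$, whose particular solution is $y_p=\tanh x$:

```latex
% With f(x) = 1 and y_p = tanh x we have \int y_p dx = \ln\cosh x, so:
e^{-2\int y_p\,dx} = \operatorname{sech}^2 x,
\qquad
\int e^{-2\int y_p\,dx}\,dx = \tanh x + C .
% Substituting into the formula above:
y = \tanh x + \frac{\operatorname{sech}^2 x}{\tanh x + C}
  = \frac{C \tanh x + 1}{\tanh x + C},
% which, writing C = \coth c, is exactly \tanh(x + c):
% the known general solution of y' + y^2 = 1.
```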
2-Endless Integral
$y'+y^{2}=f(x)$
$y'=f(x)-y^{2}$
$y(x)=\int{(f(x)-y^{2})} dx=\int{(f(x)-(\int{[f(x)-y^{2}]}dx)^{2})} dx=..$
The result is an endlessly nested integral; we need iteration to find the solution.
$y_{n+1}=\int{(f(x)-y_n^{2})} dx$ if we start with $y_0(x)=g(x)$
$y_p(x)=y_{\infty}(x)$
$y_p(x)$ is a particular solution
3-Endless Derivatives
$y'+y^{2}=f(x)$
$y^{2}=f(x)-y'$
$y=\sqrt{f(x)-y'}$
$y=\sqrt{f(x)-\left(\sqrt{f(x)-y'}\right)'} = \cdots$
$y_{n+1}=\sqrt{f(x)-y_n'}$ if we start with $y_0(x)=g(x)$
$y_p(x)=y_{\infty}(x)$
$y_p(x)$ is a particular solution
The result is an endlessly nested expression of derivatives; we need iteration to find the solution.
4-Power series method
$y'+y^{2}=f(x)=f(0)+f'(0)x+\frac{f''(0)x^{2}}{2!}+\frac{f'''(0)x^{3}}{3!}+...$
$y_p(x)$ is a particular solution if $a_0$ is chosen to be any fixed number; if $a_0$ is taken to be an arbitrary constant $c$, the general solution $y(x)$ can be found as a function of $x$ and $c$.
$y(x)=a_0+a_1x+\frac{a_2x^{2}}{2!}+\frac{a_3x^{3}}{3!}+...$
$y'(x)=a_1+a_2x+\frac{a_3x^{2}}{2!}+\frac{a_4x^{3}}{3!}+...$
$y^{2}(x)=a_0^{2}+(2a_0a_1)x+(2a_0\frac{a_2}{2!}+a_1^{2})x^{2}+...$
$y'+y^{2}=f(x)$
$a_0=c$
$a_0^{2}+a_1=f(0)$
$a_1=f(0)-c^{2}$
$a_2+2a_0a_1=f'(0)$
$a_2=f'(0)-2c(f(0)-c^{2})=f'(0)-2cf(0)+2c^{3}$
(All $a_n$ can be found by this method and depend on $c$.)
$y(x)=c+(f(0)-c^{2})x+\frac{(f'(0)-2cf(0)+2c^{3})x^{2}}{2!}+....$ (This is general solution)
Note: I asked the same question on math.stackexchange.com, and I noticed that theory questions can also be asked here, so I decided to open a topic here too. You can see the link (http://math.stackexchange.com/questions/99850/how-can-i-solve-the-differential-equation-yy2-fx).
A simple fact about Riccati equations : if $u, v$ solve a linear system of the first order $u'(t)=\alpha(t) u(t) + \beta(t) v(t)$ , $v(t)'=\gamma(t) u(t) + \delta(t) v(t)$, then the ratio $u(t)/v(t)$ solves a Riccati equation easily written. And any Riccati equation can be related this way to such a system (even with some freedom of choice). – Pietro Majer Jul 21 '14 at 13:33
You are asking about a very classical problem. The Picard-Vessiot theory was developed to show that, in a certain well-defined sense, there is no 'closed form' solution to problems of this kind. You should take a look at the books by Kolchin and Ritt on differential algebra. For a start on the basic ideas, have a look at this paper by Hubbard and Lundell:
http://www.math.cornell.edu/~hubbard/diffalg1.pdf
Thank you for the great references. I got new ideas for approaching the problem. According to diffalg1.pdf, it was proved that $y'+y^{2}=x$ has no solutions that can be written using elementary functions, or anti-derivatives of elementary functions, or exponentials of such anti-derivatives, or anti-derivatives of those, etc., but it can be solved using power series, integrals which depend on a parameter, or Bessel functions of order 1/3. Does this mean that if we add a new function to the group of elementary functions, or integrals which depend on a parameter, we can find a closed-form solution of $y'+y^{2}=f(x)$? – Mathlover Feb 2 '12 at 21:56
@Mathlover, one can always make an equation soluble in closed form in this way by adjoining to the collection of elementary functions a new function $F(x_0, y_0, x)$ that is the solution to the differential equation passing through $(x_0, y_0)$. – L Spice Sep 10 '15 at 17:49
@LSpice I especially wonder whether adding only one function (such as $J_{\frac{1}{3}}(x)$) to the elementary functions would be enough to solve the general equation $y'+y^2=f(x)$, or whether it would still not be enough. How can one define solvability for $y'+y^2=f(x)$? I have been looking for such a special function, or a limited number of functions, to add to the elementary functions that would be enough to solve $y'+y^2=f(x)$. Maybe adding a function to the elementary functions would never be enough to solve the general equation $y'+y^2=f(x)$. Thanks for your comments – Mathlover Sep 21 '15 at 13:18
A Riccati equation can be turned into $u''-f(x)u=0$ by the transformation $y=\frac{u'}{u}$; using the WKB ansatz we have the asymptotic solution for $u$:
$u(x)\sim C\,f(x)^{-1/4}\exp\left(-\int \sqrt{f(x)}\,dx\right)$
Very nice references. Thank you very much. I found many nice details about the subject. en.wikipedia.org/wiki/WKB_approximation – Mathlover Apr 12 '12 at 11:39
## PCSR Civil Service Exam Review Guide 9
PART I: MATH
A. Youtube Videos (All videos are in Taglish)
How to Solve Number Problems Mentally
How to Solve Word Number Problems
How to Solve Consecutive Number Problems
B. Articles. You can read the articles below.
How to Solve Number Word Problems
How to Solve Consecutive Number Problems
Part II ENGLISH
A. Vocabulary
Civil Service Exam Vocabulary Review 8
B. Grammar and Correct Usage
The strategy in solving algebraic problems is to take a specific case. For instance, in the problem above, if one number is, say, $5$, then the larger number is $5 + 3$ because it is $3$ greater than the first number. Since we do not know the numbers yet, we can represent the smaller number by $x$ and the larger number by $x + 3$.
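To make the strategy concrete, here is a small made-up instance (the numbers are my own illustration, not from the linked problem): suppose one number is $3$ greater than another and their sum is $23$.

```latex
x + (x + 3) = 23
\;\Longrightarrow\; 2x + 3 = 23
\;\Longrightarrow\; x = 10,
% so the smaller number is 10 and the larger is 10 + 3 = 13.
```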
# Articles
• ### The Syntax Error That Killed Our Build for a Week.
December 12, 2015
Back in the day when I wrote in C and we used X11 with Motif, I was creating a new module based on an existing one, so I just copied and pasted the file from one xterm to another:
• ### Zero Downtime Migrations
August 15, 2015
For any sufficiently large application, you want to minimize interruptions to service while deploying new code. This is especially challenging for migrations on Heroku and other Platforms-as-a-Service, where you have a Catch-22 problem: you want to run your migrations first, but you can’t run them until you’ve deployed the migrations to production. Usually, at this point, your new migration-dependent features are in the deploy, too. So when your application restarts and the migration hasn’t run yet, your poor application will trip some exceptions and, perhaps, create problems for your users. Of course, you can put the app into maintenance mode, but that creates more downtime for your users.
• ### Unintended consequences (and a disaster averted)
January 6, 2015
I want to tell you a scary-as-shit story, mostly so that my friends that have CCW permits might read it and not get killed by good intentions, because I love you all.
• ### Timecop time and database time
June 8, 2014
Timecop is a great addition to your testing toolbox, but if you’ve ever tried to use Timecop to interact with a database, and then got the most mysterious of existential errors:
June 1, 2014
Once a week, we run a query on our production system that takes a very long time to complete. Invariably, it times out because the default settings on Heroku are (quite reasonably) for web / API interactions; anything longer than 30 seconds is too long and the connection should be closed to let other requests through. How do you get around this, then? You could set up your database.yml file so that you connect to your production database from a developer workstation. This is actually quite useful, but opens up a hellacious security hole. Let’s not get started about the consequences of accidentally running rake db:drop:all! Moreover, you want your system to be automated as much as possible, and that means running your tasks on a Heroku instance. Fortunately, with a little hackery, you can modify the connection timeout settings within your application. Do this carefully, and make sure that it can not leak into your running web application dynos.
• ### When you need a rabbit, spec it
February 7, 2014
On my current project, we're using RabbitMQ. It's a bit of infrastructure that has to be present, and if it isn't, our integration tests will fail with mysterious error messages. We want our tests to be informative, so let's write a test that asserts that we have the requisite infrastructure in place.
• ### Speed up your integration tests with a jig
February 7, 2014
If you've written enough integration tests (with Capybara et al.), you must have noticed how much time your tests spend just logging into your web app. Even if it takes 1 second each time, it starts to add up. Here's a solution that I've written several times, now. I create a test "jig" that allows me to authenticate into my application with a single visit.
• ### Stop thinking like an employee
November 5, 2013
A few weeks ago, some of the managers got together for drinks after work. I started to reminisce about an old employer who had, what I thought, was an incredibly unique and reasonable way to handle the “vacation liability” problem. The old firm was a consulting firm, mostly military contracts, in the DC area. We were paid for every hour we billed, and we accrued vacation hours for every hour we were paid. It created an environment where the top performers could rack up so much vacation that they could not use it all. Many companies handle this with the familiar “use it or lose it” policy, which caps the accrued vacation hours to some limit. Instead, the firm let vacation accrue forever, with one modification: When you got a raise, your vacation hours were reduced by exactly the same percentage.
• ### Why Pivotal Labs
May 26, 2013
It is a time-honored tradition for Pivots to blog about their first few months at Pivotal. A typical day at Pivotal is strong work. It’s different from any previous job. It’s exhausting. After six weeks or so, however, new Pivots find their rhythm. I’m not going to write any more about that; I’ll include some links below, so you can read them for yourself. I’m here to write about the two-years-later, when most developers are itching to move on to the next big thing. I’m still happily learning new stuff every day.
• ### Sencha Touch BDD - Part 5 - Controller Testing
May 18, 2013
Part 4 introduced PhantomJS as an easier and faster alternative to headful Jasmine testing. Part 3 added jasmine-ajax so we can verify that stores and models react properly to back-end data. We also learned how to use stores to test views, without depending on a back-end server. In Part 2 I showed you how to unit test Sencha model classes in Jasmine. In Part 1 I showed you how to set up your Sencha Touch development environment to use the Jasmine JavaScript test framework.
• ### Sencha Touch BDD - Part 4 - PhantomJS
May 10, 2013
Part 3 added jasmine-ajax so we can verify that stores and models react properly to back-end data. We also learned how to use stores to test views, without depending on a back-end server. In Part 2 I showed you how to unit test Sencha model classes in Jasmine. In Part 1 I showed you how to set up your Sencha Touch development environment to use the Jasmine JavaScript test framework.
• ### Sencha Touch BDD - Part 3 - Testing Views and Mocking Stores
May 5, 2013
In Part 1 I showed you how to set up your Sencha Touch development environment to use the Jasmine JavaScript test framework. In Part 2 I showed you how to unit test Sencha model classes in Jasmine.
###I don’t normally test views, but when I do
There’s an old MVC mantra: “Fat models, skinny controllers and stupid views.” We don’t want complex logic in our views; that makes them hard to maintain. We don’t want any business logic in our controllers, either. That’s why we have models. Views should be so simple that they don’t require tests. There is a gray area with Sencha, however, where we found testing to be useful in our design process. It has to do with how views interact with stores.
Stores are essentially collections of models. They also encapsulate the persistence layer logic, separate from the business logic of the model. DataViews are a special class of Views in Sencha Touch that will consume a collection to generate a list-type view via a template.
Here are two goals for testing views and stores:
• Does the view consume the right fields from the store, without hitting the back-end for data?
• Does the store organize the data from the back-end, i.e. create the interface that is used by the view? Again, without hitting the back-end for data.
###Small steps and iterate
Let’s define a very simple view via tests; we’ll make it work using an in-line store, then we’ll refactor the store into its own class. Next, we’ll refactor the store class to use a remote back-end.
Ext.require('SenchaBdd.view.MyView');
describe('SenchaBdd.view.MyView', function () {
it("has a list of colors", function () {
var view = Ext.create('SenchaBdd.view.MyView', {
renderTo: 'jasmine_content',
store: {
fields: ['color'],
data: [
{color: 'red'},
{color: 'green'},
{color: 'blue'}
]
}
});
expect(Ext.DomQuery.select('.favorite-color').map(function (el) {
return el.textContent
}).join(', ')).toEqual('red, green, blue');
});
});
Sencha does not come bundled with jQuery, so if you were expecting a DOM query like $(".favorite-color"), you might be surprised by the expectation. Ext has its own flavor of querying the DOM, using Ext.DomQuery.select. Try implementing the view on your own to make the test pass. It should look remarkably similar to this:

Ext.define('SenchaBdd.view.MyView', {
    extend: 'Ext.dataview.DataView',
    xtype: 'myview',
    config: {
        itemTpl: '<div class="favorite-color">{color}</div>'
    }
});

In your application, you probably won’t hardwire a store into the view. In fact, you will probably embed this little view inside a larger container, like so (this adds another tab to the sample app):

--- a/app/view/Main.js
+++ b/app/view/Main.js
@@ -10,6 +10,11 @@ Ext.define('SenchaBdd.view.Main', {
     items: [
+        {
+            title: 'Favorites',
+            iconCls: 'star',
+            xtype: 'myview',
+            store: 'mystore',
+            styleHtmlContent: true
+        },
         {
             title: 'Welcome',
             iconCls: 'home',

Let’s create a store so our view will show us something:

Ext.define('SenchaBdd.store.MyStore', {
    extend: 'Ext.data.Store',
    config: {
        storeId: 'mystore',
        fields: ['color'],
        data: [
            {color: 'red'},
            {color: 'green'},
            {color: 'blue'}
        ]
    }
});

In order to see this view, we’ll need to add it to our app.js file:

--- a/app.js
+++ b/app.js
@@ -31,7 +31,11 @@ Ext.application({
     ],
     views: [
-        'Main'
+        'Main', 'MyView'
     ],
+    stores: [
+        'MyStore'
+    ],
     icon: {

Now, let’s go back and refactor our test so it uses MyStore instead of one that was hard-wired.
--- a/spec/javascripts/view/MyViewSpec.js
+++ b/spec/javascripts/view/MyViewSpec.js
@@ -2,17 +2,16 @@ Ext.require('SenchaBdd.view.MyView');
 describe('SenchaBdd.view.MyView', function () {
     it("has a list of colors", function () {
+        var store = Ext.create('SenchaBdd.store.MyStore', {
+            data: [
+                {color: 'red'},
+                {color: 'green'},
+                {color: 'blue'}
+            ]
+        });
         var view = Ext.create('SenchaBdd.view.MyView', {
             renderTo: 'jasmine_content',
-            store: {
-                fields: ['color'],
-                data: [
-                    {color: 'red'},
-                    {color: 'green'},
-                    {color: 'blue'}
-                ]
-            }
-
+            store: store
         });
         expect(Ext.DomQuery.select('.favorite-color').map(function (el) {
             return el.textContent

All of our tests should remain green, but we’ve removed the “fake” store from our test.
###Let’s get dynamic
Static data stores are boring. The fun starts when you start talking to a back-end API. Let’s say that we have a server that responds to an end point of ‘/colors.json’ with a list of favorite colors. We can even “fake” it by placing a file in the appropriate place. Even so, we don’t want our tests to make network calls to the back-end. That’s not appropriate for unit testing. We’ll use Jasmine’s AJAX mocking helper, jasmine-ajax. At the time of this writing, the 2.0 branch had not been merged into the main line, and we need the 2.0 branch in order to work with Ext.

cd spec/javascripts/helpers
curl -O 'https://raw.github.com/pivotal/jasmine-ajax/2_0/lib/mock-ajax.js'
git add ./mock-ajax.js

And we’ll create our first store spec in spec/javascripts/store/MyStoreSpec.js:

describe('SenchaBdd.store.MyStore', function () {
    var store;
    beforeEach(function () {
        jasmine.Ajax.useMock();
        clearAjaxRequests();
        store = Ext.create('SenchaBdd.store.MyStore');
    });
    it('calls out to the proper url', function () {
        store.load();
        var request = mostRecentAjaxRequest();
        expect(request.url).toEqual('/colors.json');
    });
});

Notice that I call jasmine.Ajax.useMock() and clearAjaxRequests() in the set up block.
This is because I want to wait until the very last moment to turn on ajax mocking. The Ext class loader might still be trying to load a class (via xhr), and the mocker would prevent that from happening. I also clear all previous requests (in case there were any left over) to prevent test pollution. When you run Jasmine, you’ll get a test failure, “TypeError: Cannot read property ‘url’ of null”, because we haven’t set up the proxy in the store yet. Let’s do that.
$ cat app/store/MyStore.js
Ext.define('SenchaBdd.store.MyStore', {
extend: 'Ext.data.Store',
config: {
storeId: 'mystore',
fields: ['color'],
proxy: {
type: 'ajax',
url: '/colors.json'
}
}
});
Now, when you run the test suite, you’ll get a different error! This is because the default settings for the ajax proxy enable caching, paging, and other things we don’t need, so we have to turn them off:
proxy: {
type: 'ajax',
url: '/colors.json',
noCache: false,
pageParam: false,
startParam: false,
limitParam: false
}
Now our API test is green! You can even test this in the application by dropping a file, ‘colors.json’, into the public directory.
Let’s add one more test to finish things off.
it('populates the collection', function () {
var mockedRequest = mostRecentAjaxRequest();
mockedRequest.response({
status: 200,
responseText: [
{color: 'red'},
{color: 'green'},
{color: 'blue'}
]
});
expect(store.getCount()).toEqual(3);
expect(store.getAt(0).get('color')).toEqual('red');
expect(store.getAt(1).get('color')).toEqual('green');
expect(store.getAt(2).get('color')).toEqual('blue');
});
Every mocked ajax request has a response() method that you can use to inject your own response, synchronously, into your tests. This confirms that MyStore properly parses and arranges the received data so that it can be presented by the view.
We now have 2 tests that depend on a back-end to respond in a specified way. How do we keep these tests from drifting out of sync? Since we’ve mocked the store in our view test, we might never know if the back-end API changes!
You can wrap Jasmine expectations in functions so that they are reusable. Then we can mix this helper into our view test to confirm that the store we use there is the same.
Let’s add this function to our SpecHelper.js file:
function myStoreDataIsValid(store) {
expect(store.getCount()).toEqual(3);
expect(store.getAt(0).get('color')).toEqual('red');
expect(store.getAt(1).get('color')).toEqual('green');
expect(store.getAt(2).get('color')).toEqual('blue');
}
And we’ll replace our existing tests with a single line:
myStoreDataIsValid(store);
We’ll also modify the MyViewSpec.js:
--- a/spec/javascripts/view/MyViewSpec.js
+++ b/spec/javascripts/view/MyViewSpec.js
@@ -9,6 +9,7 @@ describe('SenchaBdd.view.MyView', function () {
{color: 'blue'}
]
});
+ myStoreDataIsValid(store);
var view = Ext.create('SenchaBdd.view.MyView', {
renderTo: 'jasmine_content',
store: store
Similarly, we can refactor our colors array into a var:

--- a/spec/javascripts/helpers/SpecHelper.js
+++ b/spec/javascripts/helpers/SpecHelper.js
@@ -16,6 +16,15 @@ afterEach(function () {
     domEl.setAttribute('style', 'display:none;');
 });
+var colorsJSON;
+beforeEach(function() {
+    colorsJSON = [
+        {color: 'red'},
+        {color: 'green'},
+        {color: 'blue'}
+    ];
+});
And then use colorsJSON in our tests.
--- a/spec/javascripts/store/MyStoreSpec.js
+++ b/spec/javascripts/store/MyStoreSpec.js
@@ -18,11 +18,7 @@ describe('SenchaBdd.store.MyStore', function () {
request.response({
status: 200,
- responseText: [
- {color: 'red'},
- {color: 'green'},
- {color: 'blue'}
- ]
+ responseText: colorsJSON
});
myStoreDataIsValid(store);
});
and
--- a/spec/javascripts/view/MyViewSpec.js
+++ b/spec/javascripts/view/MyViewSpec.js
@@ -3,12 +3,9 @@ Ext.require('SenchaBdd.view.MyView');
describe('SenchaBdd.view.MyView', function () {
it("has a list of colors", function () {
var store = Ext.create('SenchaBdd.store.MyStore', {
- data: [
- {color: 'red'},
- {color: 'green'},
- {color: 'blue'}
- ]
+ data: colorsJSON
});
+ myStoreDataIsValid(store);
var view = Ext.create('SenchaBdd.view.MyView', {
renderTo: 'jasmine_content',
store: store
### What we’ve managed, so far
The toy app is coming along. We’ve test-driven a DataView that consumes a back-end API. We’ve added the mock-ajax library so we can unit test our stores in isolation. We’ve even seen a few techniques for keeping our mocks from getting out of sync (although, if you’ve been paying attention, I’ve still left a gaping hole that needs to be plugged).
• ### Sencha Touch BDD Part 2 -- Unit Testing Models
April 26, 2013
In Part 1 I showed you how to set up your Sencha Touch development environment to use the Jasmine JavaScript test framework. We’re going to take a bit of a breather from all the hard work we did last week. In this blog, I’m going to show you how to test simple models.
• ### Sencha Touch BDD Part 1
April 17, 2013
(tl;dr) A multi-part series of articles on how to test Sencha Touch applications. It uses Jasmine for unit testing and Siesta for integration testing.
• ### method_missing hazardous to your module?
February 22, 2013
We built an(other) object factory module for our current project and it looks a lot like all the others:
After a while, we noticed that the create methods were all exactly the same. Time for dynamic refactoring!
Unfortunately, this leads to rather bizarre error messages during testing:
Failures:
• ### It's The Volatility That Will Kill You
February 19, 2013
Volatility is what Pivotal Tracker uses to measure the consistency of your team’s work output. You can use that number to help you estimate the first approximation to answer the eternal question, “Will I make the deadline?”
• ### From customer requirements to releasable gem
May 12, 2012
One of the many pleasures of working at Pivotal Labs is that we are encouraged to release some of our work as open source. Often during the course of our engagements, we write code that might have wide-spread use. Due to the nature of our contracts, we cannot unilaterally release such code. Those rights belong to the client. And rightly so. So, it is an even greater pleasure when one of our clients believes in "giving back" to the community, as well.
One such example is this modest gem, attribute_access_controllable which allows you to set read-only access at the attribute level, on a per-instance basis. For example, let's say that you have a model Person with an attribute birthday, which, for security purposes, cannot be changed once this attribute is set (except, perhaps, by an administrator with extraordinary privileges). Any future attempts to change this attribute will result in a validation error.
e.g.
• ### Deploy Strategies
May 12, 2012
# Deploy Strategies
If you look at the network graphs of heroku_san on github, you'll see a number of branches where the only change is the deletion of the following line from the deploy task:
• ### TDD Action Caching in Rails 3
March 28, 2012
On my current project, we needed to prove that an action cache was working as expected. Alas, the blogosphere had either out-of-date or unhelpful information. So, after many experiments, we came up with an RSpec test that does what we want. It seems ugly to me, and I hope there's a better way. The names have been changed to protect the guilty. Any resemblances to actual classes and methods are purely coincidental.
We needed to confirm that a certain action was cached. This action is preview in the brands controller. Using the usual Rails url helpers, we construct some fixture data.
Then we wrote our first test:
This won't work at all, however; because, in the test environment, caching is turned off.
So, we need an around block to temporarily turn caching on:
That's great, but the default cache store is the :null store, which, as its name implies, does nothing.
Better. But our tests still won't run, because while ActionController uses the cache_store, Observers and Sweepers use Rails.cache, and that is only updated at boot time.
Did I mention that Rails.cache is an accessor for the global constant, RAILS_CACHE? Ugh.
So, now, we can implement our method
But that is still not enough. caches_action has an interesting performance enhancement: it doesn't actually set up the action caching unless caching is enabled at class load time. Since we're not turning caching on until test time, the caches_action method call in the controller class does nothing. We need to re-add it in our test spec.
This is ugly; it doesn't test very much (except the underlying caching module, and why bother testing the framework). At least it proves to ourselves that the action is cached and the cache key is what we expect.
Now that we've got caching under control, let's check cache expiration (using a Sweeper).
First, I create a cached object, in this case, just the string 'CACHED ACTION' and then I invoke the action, and then, I hope, the cache will be expired.
It doesn't really matter what happens in the #update method of the BrandsController as long as it updates a Brand object. A sweeper in Rails is a mix of Observer & controller filters, so all you need to do is "declare" it in the controller
Awesome sauce! Now our tests are red and I'm ready to implement the sweeper
And voilà! We have greenness.
So what have we learned from this? The Rails source is still your best friend when exploring a sticky problem. Caching is hard, and testing caching is even harder.
• ### Dry DevOps with HerokuSan
March 25, 2012
## Quiz time!
• ### DropBox + Git => Designer <3
April 5, 2011
One of the thornier problems in our workflow is knowing when assets are delivered from the designer and keeping them in sync with our application as they change. We used to use e-mail, Skype or sticky notes. The trouble is that the designer's file naming and directory structure were never quite the same as the application's /public/images directory, so direct comparisons were impossible and we ended up with a lot of bookkeeping to make sure that we didn't lose any changes. Our solution is to clone the project's git repository into a folder inside a shared Dropbox folder.
• ### When to do User Acceptance Testing?
November 18, 2010
"What does Scrum say about User Acceptance Testing? I am wondering if it should best be done within 24 hours of delivery, or at the end of a sprint..."
I'll dodge scrum, but answer how we do it here by framing this in terms of risk and velocity:
We prefer to get acceptance as soon as a story can be delivered to the acceptance/demo server. Intra-day is the best, but certainly within the same sprint when the work is done so as not to muddy velocity measures (a story isn't done until it is accepted, so there's incentive on all sides).
The longer a story sits out there (waiting to be accepted), the bigger the risk that it is "wrong" and needs re-work. Since the code base continues to evolve, fixes are harder the further they are away (in terms of commits) from the original change-set. This can slow down velocity. We want to reduce risk and maintain a steady velocity. Thus, the sooner a story is accepted by the stake holder, the better.
• ### The Elevator Speech
February 24, 2007
# The Elevator Speech
The purpose of the "speech" is to condense the project definition into something that would fit into the time it takes to ride an elevator. This is where all good ideas are pitched, apparently; you stalk an unwary VC until he or she is trapped inside the elevator and has no choice but to listen to your prattling. If your idea has any merit, pray that the VC doesn't call on someone else to launch the project and leave you out in the cold.
• ### Using CVS for Web Server Farms
September 19, 2000
## Using CVS for Web Server Farms
by Ken Mayer (kmayer@worldpages.com)
• ### Crack Keys in Your Spare Time
September 19, 2000
or 'Hey buddy, got any spare CPU cycles?'
• ### A Modest Housing Proposal
September 19, 2000
I've been thinking about the next logical step in Silicon Valley human resource management. Already there are campuses where you never have to leave for (almost) all of your personal needs: the dry-cleaner will pick up and deliver; cafeterias, webvan.com and take-out-taxi/ meals on wheels keep you fed; health clubs, masseuse and the roving chiropractor keep your body functional; showers and locker rooms keep personal hygiene up to acceptable standards. So what's next in this reinvention of the company town?
• ### Go West, Young Man, Go West
August 16, 1999
## Maybe wax in your ears isn't such a bad thing after all
• ### The First Full Year of Wishful Thinking
December 24, 1998
It's hard to believe that it's been over a year since we left the house in Los Gatos and moved aboard the S/V Wishful Thinking. We made trips to Angel Island, South Beach Harbor, Richmond Marina, Pier 39, Vallejo YC, Golden Gate YC... covering something over 800 miles this year.
• ### The Smartest Boat in the Marina
November 10, 1998
Day 1: Moved in to my new digitally-maxed out Hunter 450 at last. Finally, I live on the Smartest Boat in the Marina. Everything's networked. The cable TV is connected to our phone, which is connected to my personal computer, which is connected to the shore-power lines, all the appliances and the security system. Everything runs off a universal remote with the friendliest interface I've ever used. Programming is a snap.
• ### A Travelogue of a Dive Vacation to the Bahamas, Hurricane and Shipwreck Included at No Extra Charge
November 6, 1998
• ### Fleet Week at Treasure Island
October 11, 1998
An eventful weekend, that's what we had. It started Friday evening about 6 PM as that was our window of opportunity to escape from the shallows of the marina entrance in the South bay. The winds had been blowing 25+ out of the north since early afternoon so it was going to be wet and splashy, but we really wanted to stay at Treasure Island Marina that night and on to Angel Island early the next morning. So, it was now or never. Wishful Thinking was actually the last of three to leave. Gypsy, a Union 38, left about 5 PM and Dragon Lady, a Cheoy Lee 40, left about 5:30. We gave them a call once we were underway. "Hope you guys are reefed" came their answer. "Reefed" I said, "hell we're motoring!" "We're making fine progress just quartering these seas a bit and heading into the 25+ knots".
• ### Treasure Island CyberCruise
June 1, 1998
Home port, Peninsula Marina, Redwood City, US
37N29.55' 122W15.98'
• ### Renaming Ceremony
April 26, 1998
• ### Projects List
April 23, 1998
Home port, Peninsula Marina, Redwood City, US
37N29.55' 122W15.98'
• ### Letter Home
April 10, 1998
Home port, Peninsula Marina, Redwood City, US
37N29.55' 122W15.98'
• ### Three Bridges
March 16, 1998
Berkeley Yacht Club, California, US
• ### The Birthday Month
February 27, 1998
Home port, Peninsula Marina, Redwood City, US
37N29.55' 122W15.98'
• ### On a Lighter Note
November 28, 1997
Home port, Peninsula Marina, Redwood City, US
37N29.55' 122W15.98'
• ### Musing on a Boating Life
November 27, 1997
Home port, Peninsula Marina, Redwood City, US
37N29.55' 122W15.98'
• ### Top 10 Tools for Lonely Sys Admins
October 15, 1995
# Computing covariance matrix with historical data
I have been reading Active Portfolio Management by Grinold and Kahn. In the chapter about risk, they mention:
"The third elementary model relies on historical variances and covariances. This procedure is neither robust nor reasonable. Historical models rely on data from $$T$$ periods to estimate the $$N \times N$$ covariance matrix. If $$T$$ is less than or equal to $$N$$, we can find active positions that will appear riskless! So the historical approach requires $$T > N$$. For a monthly historical covariance matrix of S&P 500 stocks, this would require more than 40 years of data."
When forming the mean-variance optimal portfolio, we need to invert the covariance matrix; hence, we require a full-rank covariance matrix. In this case, using historical returns is not robust.
However, if the main intention is to compute an estimate of the portfolio variance $$w'\Sigma w$$, where $$w$$ is the vector of weights or holdings, then we can estimate $$\Sigma$$ even with $$T \le N$$, right? As we are not inverting the covariance matrix, the concern about the matrix not being full rank is less of an issue, right?
Any help is very much appreciated!
• I don't see how you can estimate $\Sigma$ with $T < N$. Can you explain that ? You have an $N$ by $N$ matrix, $\Sigma$, and $T$ periods of data so it's not possible as far as I can tell. May 7 '20 at 12:34
• I think it's more difficult than that because if they say $T < N$ means that positions will appear riskless, they mean that your covariance matrix is going to have elements that are zero. So, although I'm not clear on why, I'd be careful, because those two guys know what they're talking about. It may have something to do with the estimation of the covariance matrix being more complex than plugging in estimated correlations (maybe a risk model?). If you think your way is okay, go for it. Or maybe look closer at the book to see what they mean by that statement. May 7 '20 at 20:09
• It may have something to do with independence. If stock X has T observations of returns, you probably don't want to use the same T observations for estimation of the correlations of X with the other N-1 stocks. If you do that, you'll probably end up with a matrix that's not positive definite, because the resulting estimates won't be independent because of $T < N$. I'm no whiz at covariance matrix estimation, but it probably has something to do with the rank of the resulting estimated matrix not being N. May 7 '20 at 20:19
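The rank problem that the question and comments circle around can be seen directly in a small numpy sketch (purely illustrative, with simulated returns; none of these numbers come from the book):

```python
import numpy as np

rng = np.random.default_rng(0)
T, N = 24, 50  # 24 monthly observations of 50 assets, so T < N

# Simulated historical returns: T rows (periods) by N columns (assets)
returns = rng.normal(0.01, 0.05, size=(T, N))

# Sample covariance matrix estimated from the historical data (N x N)
sigma_hat = np.cov(returns, rowvar=False)

# With T <= N the estimate cannot have full rank (at most T - 1 after demeaning)
rank = np.linalg.matrix_rank(sigma_hat)
print(rank, "out of", N)

# Any vector in the null space is an "active position" that appears riskless
_, _, vt = np.linalg.svd(sigma_hat)
w_null = vt[-1]  # direction with a (numerically) zero eigenvalue
print(w_null @ sigma_hat @ w_null)  # essentially zero: a spurious riskless bet

# Yet w' Sigma w is still perfectly computable for an ordinary portfolio
w = np.full(N, 1.0 / N)  # equal-weighted holdings
print(w @ sigma_hat @ w)  # a finite, positive variance estimate
```

So the quadratic form is computable even when $$T \le N$$, but the estimate is degenerate in some directions, which is exactly why the authors call the historical approach neither robust nor reasonable for optimization.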
# Deriving the risk-aversion coefficient
By considering the parametrised formulation of the mean-variance criterion by Markowitz, the risk aversion coefficient $$\lambda$$ can be derived as follows.
1. As suggested by Arrow and Pratt, given the utility function of the investor $$U(x)$$, $$\lambda$$ for a specific level of initial wealth $$x$$ can be approximated by resorting to the absolute ($$A_a$$) and relative ($$A_r$$) Arrow-Pratt risk aversion measures.
$$A_a(x)=-\frac{U''(x)}{U'(x)}$$
$$A_r(x)=-x\frac{U''(x)}{U'(x)}$$
1. Deriving the entire efficient frontier, it is possible to obtain $$\lambda$$ implicitly. It would be the one that leads to the preferred level of risk.
I was wondering if there are other approaches to computing such a coefficient without the need to identify the utility function. I have been able to find one that computes $$\lambda$$ as follows, but I do not understand the idea behind it, except that it vaguely resembles the Safety First ratio or the Sharpe ratio with $$r_f=0$$. Specifically, if $$\mu_B$$ and $$\sigma^2_B$$ are respectively the expected return and variance of a benchmark $$B$$, then
$$\lambda=\frac{\mu_B}{2\sigma^2_B}.$$
It is peculiar that, for the same level of $$\mu_B$$, when $$\sigma^2_B\to +\infty$$ the coefficient $$\lambda\to0$$. Is this result compliant with the theory of choice? It appears to be more closely related to prospect theory, or, since there are no other parameters, this formula seems to imply only risk-seeking behaviour.
• Might not be relevant, but I have seen $\mu/\sigma^2$ called the Fano ratio, as here. – steveo'america Mar 24 at 23:13
• That is interesting. Never heard of it before. Thank-you for the link. – Nipper Mar 25 at 0:20
I understand that you want to derive some form of risk preference parameter from portfolios that you can observe 'in the wild', and I will discuss that accordingly. As a side note, there is a whole thread in the literature that discusses elicitability of risk preferences (and of the form of the utility function) using cleverly designed choice experiments. The link is one random example.
The AP measures are defined locally and can be used (in theory) to compare risk aversion across agents. They require a functional form of the utility function, and require its first and second derivatives to be calculated.
In practice, you would thus need a way to compute a second derivative (numerically: at least three data points).
If you assume a functional form in the first place, you can find its risk preference parameter under some additional restrictions, I think. Below, I will discuss two cases: one where you can obtain the parameter, and another where this is not possible (I think).
Assumptions
Our agent is risk averse with CARA utility function $$u(x)=1-e^{-\gamma x}$$ with risk aversion parameter $$\gamma>0$$. The agent invests in some portfolio weights $$w$$ and, for simplicity, we assume that the log returns are multivariate normally distributed, $$x\sim N(\mu,\Sigma)$$. As the agent wants to maximize expected utility, we thus have:
\begin{align} \max_{w}\mathrm{E(u(w))}&=\max_{w}\left(1-\mathrm{E}(e^{-\gamma w^Tx})\right)\\ &=\max_{w}\left(1-e^{-\gamma w^T\mu+\frac{1}{2}\gamma^2w^T\Sigma w}\right)\\ &\propto\max_{w}\left(w^T\mu-\frac{1}{2}\gamma w^T\Sigma w\right)\\ \end{align} subject to $$\sum_i w_i=1$$, i.e. $$w^Te=1$$ with $$e$$ a vector of ones.
1. Efficient portfolio
In our first example, the agent faces not only the risky investment set $$x$$ but also a risk free rate $$r_f$$. Their portfolio optimization decision is hence
$$\max_{w}\quad w^T\mu-\frac{1}{2}\gamma w^T\Sigma w+\left(1-w^Te\right)r_f$$
with optimality condition
$$\gamma \Sigma w=\mu-er_f$$
Clearly, once we observe the optimal risky portfolio $$w^*$$, we can rewrite the optimality condition and find
$$\gamma (w^*)^T\Sigma w^*=(w^*)^T(\mu-er_f) \Rightarrow \gamma = \frac{(w^*)^T(\mu-er_f)}{(\sigma^*)^2}$$
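To illustrate the identity above, here is a small numpy sketch (all inputs invented) that constructs the optimal risky portfolio for a known $$\gamma$$ and then recovers it from the observed weights:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 4
mu = np.array([0.06, 0.08, 0.10, 0.07])  # assumed expected returns
A = rng.normal(size=(N, N))
Sigma = A @ A.T / N + 0.05 * np.eye(N)   # an invented positive-definite covariance
rf = 0.02
e = np.ones(N)

gamma_true = 3.0
# Optimality condition: gamma * Sigma * w = mu - e * rf
w_star = np.linalg.solve(gamma_true * Sigma, mu - e * rf)

# An observer who sees only w_star (plus mu, Sigma, rf) backs gamma out:
gamma_implied = (w_star @ (mu - e * rf)) / (w_star @ Sigma @ w_star)
print(gamma_implied)  # recovers gamma_true = 3.0
```

The recovery is exact because both the numerator and the portfolio variance scale with the same quadratic form $$(\mu - er_f)^T\Sigma^{-1}(\mu - er_f)$$, in powers $$1/\gamma$$ and $$1/\gamma^2$$ respectively.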
2. No risk free investment
If, on the other hand, there is no risk free investment available, the agent maximizes their expected utility under a full investment restriction, resulting in the FOC:
\begin{align} \gamma\Sigma w -\lambda e &= \mu\\ w^Te&=1 \end{align}
Since we will only be able to observe their 'optimal' portfolio $$w^*$$ and not their optimal Lagrange parameter $$\lambda$$, we cannot elicit their risk aversion parameter $$\gamma$$ in this case.
HTH?
To get towards an answer to your other comments / post:
Say you want to measure the risk aversion parameter given a utility form (CARA, as above) and an observed fraction of wealth that is invested in the risky asset. Then we should first note that this is inherently the same as example 1 above, but in a univariate setting without a risk free rate. Nevertheless, let me try to sketch the path:
Everything is assumed as above, and the agent decides on a fraction $$\alpha$$ of wealth $$W$$ (at this point, not restricted between 0% and 100%) that is invested in the risky asset. Let us simplify and set $$W=1$$; then the risky consumption is
$$c=(1-\alpha)+\alpha x$$
and with $$x\sim N(\mu,\sigma^2)$$, expected utility is
$$EU(\alpha)=1-e^{-\lambda (1-\alpha)-\alpha\lambda\mu+\frac{1}{2}\alpha^2\lambda^2\sigma^2}$$
Optimization of the expected utility is akin to maximizing the following
$$\max_{\alpha} \quad 1-\alpha + \alpha\mu-\frac{1}{2}\alpha^2\lambda\sigma^2$$
with FOC
$$\mu-1 =\alpha\lambda\sigma^2$$
and hence you are able to back-out the parameter $$\lambda$$ from an observed investment fraction $$\alpha$$ as
$$\lambda^* = \frac{\mu-1}{\alpha \sigma^2}$$
NB: Don't worry about the $$-1$$ in the nominator, this stems from the way returns and utility are setup. If done more carefully, you'd indeed arrive at $$\lambda^* = \frac{\mu}{\alpha \sigma^2}$$
HTH?
• Thank-you for the detailed and clear answer. Much appreciated. However, we obtain $w^*$ by means of $\lambda$ (or $\gamma$, as you call it). Then it appears to me that there are some limitations to the first example, since it implies ex-ante knowledge of $w^*$. Do you think that the problem can be formulated as in the following answer? – Nipper Mar 25 at 11:44
• Hi, we observe $w^*$ in the market / from an investor. We do not need the risk aversion parameter $\gamma$ in the first place, but we imply it from the observed portfolio choice. – Kermittfrog Mar 25 at 11:46
EDITED Let $$x$$ be the investor's available wealth. Given a benchmark $$B$$, which can be considered a proxy for the market portfolio, let $$x_B$$ be the amount of invested wealth. Let also $$\mu_B$$ and $$\sigma^2_B$$ be the expected return and variance of $$B$$, respectively.
The investor solves a trade-off between investing and not investing by considering the risk-return profile of $$B$$. Personal preferences are expressed by means of $$\lambda$$.
$$\max_{x_B}\mu_Bx_B-\lambda x_B^2\sigma_B^2$$
Then the first order condition is
$$\frac{\partial f}{\partial x_B}=\mu_B-2\lambda x_B\sigma_B^2=0$$ which leads to
$$\lambda=\frac{\mu_B}{2\sigma^2_Bx_B}$$
• What you try to derive here, IMO, is how to derive the representative agent's risk aversion coefficient (under some assumption for the utility function, as did I). Furthermore, your utility function seems odd as the wealth should enter quadratically in the risk, no? So all in all, I think I cannot fully follow your line of thought. – Kermittfrog Mar 25 at 12:11
• Specifically, what I'd like to understand is if there are other ways to derive $\lambda$ in order to obtain the optimal portfolio. I do not mean derive the risk aversion coefficient given an optimal portfolio. – Nipper Mar 25 at 12:35
• Sorry, my bad. I just fixed the answer. Does it make sense now? The investor only specifies the amount of wealth that they want to invest in the market. – Nipper Mar 25 at 12:44
• Understood. See my update, maybe that helps? – Kermittfrog Mar 25 at 15:58
• Those models are commonly derived under the assumption of an agent who has a consumption and investment decision to make. Say they are 'endowed' with wealth $W$ and must decide which fraction to invest risky and what to consume without risk. That's where the residual comes from. – Kermittfrog Mar 28 at 15:48
|
{}
|
# How to normalize Euclidean distance over two vectors?
Why I want to normalize Euclidean distance
Currently, I am designing a ranking system that weights Euclidean distance against several other distances.
Euclidean distance is unbounded, that is, it can output any $value > 0$, while the other metrics lie within the range $[0, 1]$. I have to come up with a function to squash the Euclidean distance to a value between 0 and 1.
What does my data look like
Euclidean distance is computed by sklearn, specifically, pairwise_distances.
This function takes two inputs: v1 and v2, where $v_1, v_2 \in \mathbb{R}^{1200}$ and $||v_1|| = 1 , ||v_2||=1$ (L2-norm).
My simple method:
Derive the bounds of the Euclidean distance:
\begin{align*} \|v_1 - v_2\|^2 &= v_1^T v_1 - 2v_1^T v_2 + v_2^Tv_2\\ &=2-2v_1^T v_2 \\ &=2-2\cos \theta \end{align*}
thus, the Euclidean distance is a $value \in [0, 2]$.
To normalize, simply apply $new_{eucl} = euclidean/2$. Would that be a valid transformation?
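As a sanity check of the derivation (a Python sketch with random unit vectors; the dimension 1200 matches my data):

```python
import math
import random

random.seed(1)
dim = 1200  # matches the vectors described above

def unit_vector():
    # random direction, then L2-normalize
    v = [random.gauss(0, 1) for _ in range(dim)]
    n = math.sqrt(sum(x * x for x in v))
    return [x / n for x in v]

v1, v2 = unit_vector(), unit_vector()
dot = sum(a * b for a, b in zip(v1, v2))
d = math.sqrt(sum((a - b) ** 2 for a, b in zip(v1, v2)))

assert abs(d ** 2 - (2 - 2 * dot)) < 1e-9  # the identity derived above
d_norm = d / 2                             # squashed into [0, 1]
print(0 <= d_norm <= 1)  # True
```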
Suggestions from other people
Some people suggested I try a Gaussian, but I am not sure what they mean; more precisely, I am not sure how to compute the variance (the data is too big, taking over 80 GB of storage space, so computing the actual variance is too costly). More importantly, I am confused about why a Gaussian is needed here.
Edited
As an extension, suppose the vectors are not normalized to have norm equal to 1. What do we do to normalize the Euclidean distance?
• Euclidean distance on L2-normalized vectors is called chord distance. It is a chord in the unit-radius circumference. Its maximum is 2, the diameter. Dividing euclidean distance by a positive constant is valid, it doesn't change its properties. – ttnphns Apr 23 '18 at 6:54
• The question is whether you really want Euclidean distance, why not Manhattan? Have a look on Gower similarity (search the site). – ttnphns Apr 23 '18 at 6:56
|
{}
|
# Fitting ideals of modules
In my previous post, I presented a proof of the existence portion of the structure theorem for finitely generated modules over a PID based on the Smith Normal Form of a matrix. In this post, I’d like to explain how the uniqueness portion of that theorem is actually a special case of a more general result, called Fitting’s Lemma, which holds for arbitrary commutative rings.
We begin by proving that one can characterize the diagonal entries in the Smith Normal Form of a matrix $A$ over a PID in an intrinsic way by relating them to the GCD of the $k \times k$ minors of $A$ for all $k$. Actually, since the GCD isn’t defined for general rings, we will instead consider the ideal generated by the $k \times k$ minors (which makes sense for any ring, and is the ideal generated by the GCD in the case of a PID).
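To illustrate (a hypothetical Python sketch over $\mathbb{Z}$): for the matrix below, whose Smith Normal Form is ${\rm diag}(2,6,12)$, the $k$-th diagonal entry equals the ratio $g_k/g_{k-1}$, where $g_k$ denotes the GCD of the $k \times k$ minors.

```python
from math import gcd
from itertools import combinations

def det(M):
    # cofactor expansion along the first row; fine for these tiny matrices
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(len(M)))

def minors_gcd(A, k):
    # GCD of all k x k minors of A (it generates the corresponding ideal over Z)
    g = 0
    for rows in combinations(range(len(A)), k):
        for cols in combinations(range(len(A[0])), k):
            g = gcd(g, det([[A[r][c] for c in cols] for r in rows]))
    return g

A = [[2, 4, 4], [-6, 6, 12], [10, -4, -16]]   # Smith Normal Form: diag(2, 6, 12)
g1, g2, g3 = (minors_gcd(A, k) for k in (1, 2, 3))
print(g1, g2 // g1, g3 // g2)  # the diagonal entries d1, d2, d3
```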
# Finitely generated modules over a P.I.D. and the Smith Normal Form
I’m teaching Graduate Algebra this semester, and I wanted to record here the proof I gave in class of the (existence part of the) structure theorem for finitely generated modules over a PID. It’s a standard argument, based on the existence of the Smith Normal Form for a matrix with entries in a PID, but it’s surprisingly hard to find a concise and accessible reference.
We assume familiarity with basic definitions in the theory of modules over a (commutative) ring. Our goal is to prove the following:
Theorem: Let $R$ be a principal ideal domain and let $M$ be a finitely generated $R$-module. Then $M$ is isomorphic to a direct sum of cyclic $R$-modules. More precisely, there are a non-negative integer $r$ (called the rank of $M$) and elements $d_1,\ldots,d_n \in R$ (called the invariant factors of $M$) such that $d_i \mid d_{i+1}$ for all $i=1,\ldots,n-1$ and $M \cong R^r \oplus R/(d_1) \oplus R/(d_2) \oplus \cdots \oplus R/(d_n)$.
# A Fields Medal for June Huh
Congratulations to all of the winners of the 2022 Fields Medal! The only one I know personally, and whose work I have studied in detail, is June Huh.
I’m happy both for June himself and for the field of combinatorics more broadly, which at one point was not taken seriously enough by the mathematics community to merit Fields Medal level consideration. I’m particularly interested in connections between combinatorics and algebraic geometry, and that is certainly something that June’s work has taken to new heights.
I thought it might be useful for me to post links to my previous blog posts about June’s work here, along with some related links.
# Counting with martingales
In this post I will provide a gentle introduction to the theory of martingales (also called “fair games”) by way of a beautiful proof, due to Johan Wästlund, that there are precisely $n^{n-2}$ labeled trees on $n$ vertices.
Aperitif: a true story
In my early twenties, I appeared on the TV show Jeopardy! That's not what this story is about, but it's the reason I found myself in the Resorts Casino in Atlantic City, where the Jeopardy! tryouts were held (Merv Griffin owned both the TV show and the casino). At the time, I had a deep ambivalence (which I still feel) toward gambling: I enjoyed the thrill of betting, but also believed the world would be better off without casinos preying on the poor and vulnerable in our society. I didn't want to give any money to the casino, but I did want to play a little blackjack, and I wanted to be able to tell my friends that I had won money in Atlantic City. So I hatched what seemed like a failsafe strategy: I would bet $1 at the blackjack table, and if I won I'd collect $1 and leave a winner. If I lost the dollar I had bet in the first round, I'd double my bet to $2, and if I won I'd stop playing and once again leave with a net profit of $1. If I lost, I'd double my bet once again and continue playing. Since I knew the game of blackjack reasonably well, my odds of winning any given hand were pretty close to 50%, and my strategy seemed guaranteed to eventually result in walking home with a net profit of $1, which is all I wanted to accomplish. I figured that most people didn't have the self-discipline to stick with such a strategy, but I was determined. Here's what happened: I lost the first hand and doubled my bet to $2. Then I lost again and doubled my bet to $4. Then I lost again. And again. And again. In fact, 7 times in a row, I lost. In my pursuit of a $1 payoff, I had just lost $127. And the problem was, I didn't have $128 in my wallet to double my bet once again, and my ATM card had a daily limit which I had already just about reached. And frankly, I was unnerved by the extreme unlikeliness of what had just happened. So I tucked my tail between my legs and sheepishly left the casino with a big loss and nothing to show for it except this story. You're welcome, Merv.
Martingales
Unbeknownst to me, the doubling strategy I employed (which I thought at the time was my own clever invention) has a long history. It has been known for hundreds of years as the “martingale” strategy; it is mentioned, for example, in Giacomo Casanova‘s memoirs, published in 1754 (“I went [to the casino of Venice], taking all the gold I could get, and by means of what in gambling is called the martingale I won three or four times a day during the rest of the carnival.”) Clearly not everyone was as lucky as Casanova, however (in more ways than one). In his 1849 “Mille et un fantômes”, Alexandre Dumas writes, “An old man, who had spent his life looking for a winning formula (martingale), spent the last days of his life putting it into practice, and his last pennies to see it fail. The martingale is as elusive as the soul.” And in his 1853 book “Newcomes: Memoirs of a Most Respectable Family”, William Makepeace Thackeray writes, “You have not played as yet? Do not do so; above all avoid a martingale if you do.” (For the still somewhat murky origins of the word ‘martingale’, see this paper by Roger Mansuy.)
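The finite-bankroll version of my strategy is easy to simulate (a Python sketch; the $127 bankroll and the fair-coin assumption are simplifications): the strategy wins $1 with probability 127/128 and loses $127 otherwise, so its expected profit is exactly zero.

```python
import random

def play(bankroll=127, p_win=0.5):
    """One run of the doubling strategy: bet 1, double after each loss,
    stop at the first win or when the next bet would exceed the bankroll."""
    bet, lost = 1, 0
    while lost + bet <= bankroll:
        if random.random() < p_win:
            return 1      # with doubling, the winning bet exceeds past losses by exactly 1
        lost += bet
        bet *= 2
    return -lost          # busted: lost every hand, as in the story

random.seed(0)
n = 200_000
avg = sum(play() for _ in range(n)) / n
print(abs(avg) < 0.2)  # the sample mean is near the true expected value, 0
```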
# The Eudoxus reals
Let’s call a function $f : {\mathbb Z} \to {\mathbb Z}$ a near-endomorphism of $\mathbb Z$ if there is a constant $C>0$ such that $|f(a+b)-f(a)-f(b)| \leq C$ for all $a,b \in \mathbb Z$. The set of near-endomorphisms of $\mathbb Z$ will be denoted by $N$. We put an equivalence relation $\sim$ on $N$ by declaring that $f \sim g$ iff the function $f-g$ is bounded, and let ${\mathbb E}$ denote the set of equivalence classes.
It’s not difficult to show that defining $f+g$ in terms of pointwise addition and $f \cdot g$ in terms of composition of functions turns ${\mathbb E}$ into a commutative ring. And it turns out that this ring has a more familiar name… Before reading further, can you guess what it is?
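For intuition, here is a sketch (Python; the choice $\alpha = \sqrt{2}$ is just an illustration) of the prototypical near-endomorphism $n \mapsto \lfloor n\alpha \rfloor$, whose additivity defect never exceeds 1:

```python
import math

alpha = math.sqrt(2)  # illustrative; any real number works
f = lambda n: math.floor(n * alpha)

# defect of additivity |f(a+b) - f(a) - f(b)| over a sample of pairs
defect = max(abs(f(a + b) - f(a) - f(b))
             for a in range(-50, 51) for b in range(-50, 51))
print(defect)  # never exceeds 1, so f is a near-endomorphism with C = 1
```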
# Calendar Calculations with Cards
As readers of this previous post will know, I’m rather fond of mental calendar calculations. My friend Al Stanger, with whom I share a passion for recreational mathematics, came up with a remarkable procedure for finding the day of the week corresponding to any date in history using just a handful of playing cards. What’s particularly noteworthy about Al’s algorithm is that it involves no calculations whatsoever, and the information which needs to be looked up can be cleanly displayed on one of the cards.
When you work through Al’s procedure, it will feel like you’re performing a card trick on yourself – you will be amazed, surprised, and will likely have no idea how it works. I’ve never seen anything quite like this before, and I’m grateful to Al for allowing me to share his discovery with the public for the first time here on this blog.
# Firing games and greedoid languages
In an earlier post, I described the dollar game played on a finite graph $G$, and mentioned (for the “borrowing binge variant”) that the total number of borrowing moves required to win the game is independent of which borrowing moves you do in which order. A similar phenomenon occurs for the pentagon game described in this post.
In this post, I’ll first present a simple general theorem due to Mikkel Thorup which implies both of these facts (and also applies to many other ‘chip firing games’ in the literature). Then, following Anders Björner, Laszlo Lovasz, and Peter Shor, I’ll explain how to place such results into the context of greedoid languages, which have interesting connections to matroids, Coxeter groups, and other much-studied mathematical objects.
# Quadratic Reciprocity via Lucas Polynomials
In this post, I’d like to explain a proof of the Law of Quadratic Reciprocity based on properties of Lucas polynomials. (There are over 300 known proofs of the Law of Quadratic Reciprocity in the literature, but to the best of my knowledge this one is new!)
In order to keep this post as self-contained as possible, at the end I will provide proofs of the two main results (Euler’s criterion and the Fundamental Theorem of Symmetric Polynomials) which are used without proof in the body of the post.
Lucas polynomials
The sequence of Lucas polynomials is defined by $L_0(x)=2$, $L_1(x)=x$, and $L_n(x)=xL_{n-1}(x)+L_{n-2}(x)$ for $n \geq 2.$
The next few terms in the sequence are $L_2(x)=x^2+2, L_3(x)=x^3+3x, L_4(x)=x^4 + 4x^2 + 2$, and $L_5(x)=x^5+5x^3+5x$.
By induction, the degree of $L_n(x)$ is equal to $n$. The integers $L_n(1)$ are the Lucas numbers, which are natural “companions” to the Fibonacci numbers (see, e.g., this blog post).
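The recurrence is easy to run mechanically; a short Python sketch with coefficient lists (index = power of $x$) reproduces the polynomials above:

```python
def lucas_poly(n):
    # coefficient lists, index = power of x; L_0 = 2, L_1 = x
    L = [[2], [0, 1]]
    for _ in range(2, n + 1):
        prev, last = L[-2], L[-1]
        nxt = [0] + last              # x * L_{n-1}
        for i, c in enumerate(prev):  # ... + L_{n-2}
            nxt[i] += c
        L.append(nxt)
    return L[n]

print(lucas_poly(4))       # [2, 0, 4, 0, 1], i.e. x^4 + 4x^2 + 2
print(sum(lucas_poly(5)))  # L_5(1) = 11, a Lucas number
```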
The polynomials $H_n(x)$
It’s easy to see that for $n$ odd, $L_n(x)$ is divisible by $x$ and $L_n(x)/x$ has only even-power terms. Thus $L_n(x)/x = H_n(x^2)$ for some monic integer polynomial $H_n(x)$ of degree $(n-1)/2$. We will be particularly interested in the polynomials $H_p(x)$ for $p$ prime.
If $n$ is even (resp. odd), a simple induction shows that the constant term (resp. the coefficient of $x$) in $L_n(x)$ is equal to $n$. In particular, for $n$ odd we have $H_n(0)=n$.
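Extracting $H_n(x)$ from $L_n(x)$ for odd $n$ is just a reindexing of coefficients; a sketch in Python for $n = 5$, using $L_5(x) = x^5 + 5x^3 + 5x$ from above:

```python
# L_5(x) = x^5 + 5x^3 + 5x as a coefficient list, index = power of x
L5 = [0, 5, 0, 5, 0, 1]

# dividing by x shifts indices down; keeping the even powers of the result
# gives the coefficients of H_5, since L_5(x)/x = H_5(x^2)
H5 = L5[1:][0::2]
print(H5)     # [5, 5, 1], i.e. H_5(x) = x^2 + 5x + 5
print(H5[0])  # H_5(0) = 5 = n, as claimed for odd n
```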
# Generalizations of Fermat’s Little Theorem and combinatorial zeta functions
Everyone who studies elementary number theory learns two different versions of Fermat’s Little Theorem:
Fermat’s Little Theorem, Version 1: If $p$ is prime and $a$ is an integer not divisible by $p$, then $a^{p-1} \equiv 1 \pmod{p}$.
Fermat’s Little Theorem, Version 2: If $p$ is prime and $a$ is any integer, then $a^{p} \equiv a \pmod{p}$.
as well as the following extension of Version 1 of Fermat’s Little Theorem to arbitrary positive integers $n$:
Euler’s Theorem: If $n$ is a positive integer and $(a,n)=1$, then $a^{\phi(n)} \equiv 1 \pmod{n}$, where $\phi$ is Euler’s totient function.
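Before generalizing, these congruences are easy to spot-check numerically (a Python sketch with a naive computation of $\phi$):

```python
from math import gcd

def phi(n):
    # naive totient: count the integers in [1, n] coprime to n
    return sum(1 for k in range(1, n + 1) if gcd(k, n) == 1)

# spot-check Euler's theorem for every coprime pair (a, n) with n < 30
checks = all(pow(a, phi(n), n) == 1
             for n in range(2, 30)
             for a in range(1, n) if gcd(a, n) == 1)
print(checks)  # True
```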
My first goal in this post is to explain a generalization of Version 2 of Fermat’s Little Theorem to arbitrary $n$. I’ll then explain an extension of this result to $m \times m$ integer matrices, along with a slick proof of all of these results (and more) via “combinatorial zeta functions”.
# An April Fools’ Day to Remember
Today is the 10th anniversary of the death of Martin Gardner. His books on mathematics had a huge influence on me as a teenager, and I’m a fan of his writing on magic as well, but it was only last year that I branched out into reading some of his essays on philosophy, economics, religion, literature, etc. In this vein, I highly recommend “The Night Is Large”, a book of collected essays which showcases the astonishingly broad range of topics about which Martin had something interesting to say. It’s out of print, but it’s easy to find an inexpensive used copy if you search online.
Thinking back on my favorite Martin Gardner columns, my all-time favorite has to be the April 1975 issue of Scientific American. In that issue, Martin wrote an article about the six most sensational discoveries of 1974. The whole article was an April Fools’ Day prank: among the discoveries he reported were a counterexample to the four-color problem and an artificial-intelligence computer chess program that determined, with a high degree of probability, that P-KR4 is always a winning move for white. The article also contained the following:
# A Very Meta Monday
Usually my blog posts are rather tightly focused, but today I’d just like to post a few stream-of-consciousness thoughts.
(1) My blog was recently featured in the AMS Blog on Math Blogs. Perhaps by mentioning this here I can create some sort of infinite recursion which crashes the internet and forces a reboot of the year 2020.
# Mental Math and Calendar Calculations
In this previous post, I recalled a discussion I once had with John Conway about the pros and cons of different systems for mentally calculating the day of the week for any given date. In this post, I'll present two of the most popular systems for doing this, the "Apocryphal Method" [Note added 5/3/20: In a previous version of this post I called this the Gauss-Zeller algorithm, but its roots go back even further than Gauss] and Conway's Doomsday Method. I personally use a modified version of the apocryphal method. I'll present both systems in a way which allows for a direct comparison of their relative merits and let you, dear reader, decide for yourself which one to learn.
# Colorings and embeddings of graphs
My previous post was about the mathematician John Conway, who died recently from COVID-19. This post is a tribute to my Georgia Tech School of Mathematics colleague Robin Thomas, who passed away on March 26th at the age of 57 following a long struggle with ALS. Robin was a good friend, an invaluable member of the Georgia Tech community, and a celebrated mathematician. After some brief personal remarks, I’ll discuss two of Robin’s most famous theorems (both joint with Robertson and Seymour) and describe the interplay between these results and two of the theorems I mentioned in my post about John Conway.
# Some Mathematical Gems from John Conway
John Horton Conway died on April 11, 2020, at the age of 82, from complications related to COVID-19. See this obituary from Princeton University for an overview of Conway’s life and contributions to mathematics. Many readers of this blog will already be familiar with the Game of Life, surreal numbers, the Doomsday algorithm, monstrous moonshine, Sprouts, and the 15 theorem, to name just a few of Conway’s contributions to mathematics. In any case, much has already been written about all of these topics and I cannot do justice to them in a short blog post like this. So instead, I’ll focus on describing a handful of Conway’s somewhat lesser-known mathematical gems.
|
{}
|
# Intersecting multiple data frames
I have to intersect multiple data frames to find common elements. For visualization purposes I can use an UpSet plot, but it doesn't return an object containing the intersection. I tried jvenn, but it can intersect at most six sets.
I am able to do the intersection, but I would like to know if I can do it more efficiently; I am not sure how. This is my code as of now: I read each file as a data frame, convert them to lists, and pass them to the intersection.
NewList <- split(GO0000122, f = seq(nrow(GO0000122)))
NewList1 <- split(GO0006351,f=seq(nrow(GO0006351)))
NewList2 <- split(GO0006355,f=seq(nrow(GO0006355)))
NewList3 <- split(GO0006357,f=seq(nrow(GO0006357)))
NewList4 <- split(GO0006366,f=seq(nrow(GO0006366)))
NewList5 <- split(GO0030154,f=seq(nrow(GO0030154)))
NewList6 <- split(GO0045892,f=seq(nrow(GO0045892)))
NewList8 <- split(GO0045893,f=seq(nrow(GO0045893)))
NewList7 <- split(GO0045944,f=seq(nrow(GO0045944)))
a <- intersect_all(NewList,NewList1,NewList2,NewList3,NewList4,NewList5,NewList6,NewList7,NewList8)
b <- do.call(rbind.data.frame, a)
Can I read the files, pass them into a list, and do the same? That would save me from doing a lot of things manually.
I can read the files from the directory where I have my files, but I am not sure how to pass them to make a list.
filenames <- gsub("\\.csv$", "", list.files(pattern = "\\.csv$"))
for(i in filenames){
}
The object filenames is a character vector here; it just returns the list of files.
So, to make my question a bit clearer: I would like to read the list of files from the directory and then pass these to perform the intersection. Any suggestion or help would be really appreciated.
• I've been cleaning some of your posts for punctuation, but this one I am not sure how to change. Could you please edit and correct the commas, change "i" to "I", and fix other punctuation errors? Also, if you could clarify what you mean by intersection here; I think that your problem might be easier to solve if you explain what your goal is with this (there might be an out-of-the-box solution) – llrs Jul 31 '19 at 7:48
• Oh sorry, typing partly from my phone adds to the problem. My goal is to get the set of common elements from multiple data frames, where there is only a single column consisting of genes. So as of now I read each file, then pass those as a list, then I do the intersect, and then turn it into a data frame. So each time I have to read the files: if I have 12 files, I have to read 12 times. Is there a way to put it in a loop and do the same as what I have done above? – krushnach Chandra Jul 31 '19 at 8:59
• I think it would be easier to work with character vectors than lists (it will at least make it easier to see them). But what do the files have to do with the intersection, if anything? First you read and then you intersect, not the other way around, or am I missing something here? – llrs Jul 31 '19 at 9:03
• It would be easier if I add the files which I'm using: drive.google.com/open?id=1laieKec1sr4rWa4SIhUvHcLSerrWf4OC – krushnach Chandra Jul 31 '19 at 9:49
• This question is out of the scope of this site. You can find your solution at StackOverflow, specifically this question – llrs Jul 31 '19 at 10:36
First set the working directory to the folder that holds all of your data frames:
wd <- "C:/all_your_dataframes_in_one_folder"
setwd(wd)
Then you get the list of files you want from your folder
files <- list.files()
Put the data frames into a list with a loop (assuming each file is a CSV that read.csv can parse):
dflist <- list()
for(i in 1:length(files)){
  dflist[[i]] <- read.csv(files[i])
}
library(tidyverse)
map(dflist, ~ .$GeneID) %>%  # creates a list of just the column of interest
  reduce(intersect)          # applies the intersect function cumulatively to the list
• If you need a package it would be better if you state which ones. Also with the readr package you can already read a bunch of csv files in one single call – llrs Jul 31 '19 at 15:59
• I mentioned tidyverse, but I just added library(tidyverse) in the example to be extra clear. Also, I felt as though a loop example would be better for explaining how to construct them. – h3ab74 Jul 31 '19 at 17:25
• "A loop example would be better for explaining how to construct them" is what I'm looking for, even though I don't use loops much, because my concepts about them are pretty unclear. – krushnach Chandra Jul 31 '19 at 17:57
• @h3ab74 I ran your code map(dflist, ~.$GeneID) %>% reduce(intersect) and I get NULL. Am I doing something wrong? – krushnach Chandra Jul 31 '19 at 18:05
|
{}
|
Latex not rendering sometimes
• #657169
shahabfaruqi
Member
I recently set up a blog on wordpress.com and want to insert latex codes in my posts. Sometimes it renders perfectly and sometimes not. Can someone help me? Take a look at my post here http://mathrant.wordpress.com/2011/07/19/eulers-famous-formula/ for example. Thanks.
The blog I need help with is mathrant.wordpress.com.
#657304
dlager
Member
Hi,
you will have fewer problems if you do not use spaces:
$latex e^{x}=1+x+\frac{x^2}{2!}+\frac{x^3}{3!}+\cdots$
will render, while the LaTeX in your post with the spaces will not.
#657497
shahabfaruqi
Member
Thank you. That was the problem.
The topic ‘Latex not rendering sometimes’ is closed to new replies.
|
{}
|
# Power series help
How do I multiply power series?
## Homework Statement
Find the power series:
$$e^x arctan(x)$$
## Homework Equations
$$e^x = 1 + x + \frac{x^2}{2!} + \frac{x^3}{3!} + \cdots$$
$$\arctan(x) = 0 + x - \frac{x^3}{3} + \frac{x^5}{5} - \frac{x^7}{7} + \cdots$$
## The Attempt at a Solution
So do I multiply 1 by 0, x by x and so forth? Or do I go 1 by 0, 1 by x? Or is there another way?
Last edited:
You have to multiply 1 by the whole arctan series, x by the whole arctan series, and so on. There might be a way to simplify it, though. Wikipedia has this under "power series":
$$f(x)g(x) = \left(\sum_{n=0}^\infty a_n (x-c)^n\right)\left(\sum_{n=0}^\infty b_n (x-c)^n\right)$$
$$= \sum_{i=0}^\infty \sum_{j=0}^\infty a_i b_j (x-c)^{i+j}$$
$$= \sum_{n=0}^\infty \left(\sum_{i=0}^n a_i b_{n-i}\right) (x-c)^n$$
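Concretely, the last line says the coefficient of x^n in the product is the finite sum a_0 b_n + a_1 b_{n-1} + ... + a_n b_0. A quick Python sketch for e^x arctan(x):

```python
from math import factorial

N = 6  # track coefficients up to x^5
a = [1 / factorial(n) for n in range(N)]                  # e^x
b = [0 if n % 2 == 0 else (-1) ** ((n - 1) // 2) / n      # arctan(x)
     for n in range(N)]

# Cauchy product: c_n = a_0*b_n + a_1*b_(n-1) + ... + a_n*b_0
c = [sum(a[i] * b[n - i] for i in range(n + 1)) for n in range(N)]
print(c)  # the series x + x^2 + x^3/6 - x^4/6 + 3x^5/40 + ...
```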
So I kind of treat it like F.O.I.L.?
quantumdude
Staff Emeritus
Awesome, I think I got it, I only had to take it out to the $$x^5$$ term.
|
{}
|
Measurement of prompt and nonprompt J/ψ production in pp and pPb collisions at √sNN=5.02TeV
Sirunyan, A M; Tumasyan, A; et al; Aarrestad, T K; Amsler, C; Caminada, L; Canelli, M F; De Cosa, A; Galloni, C; Hinzmann, A; Hreus, T; Kilminster, B; Ngadiuba, J; Pinna, D; Rauco, G; Robmann, P; Salerno, D; Seitz, C; Yang, Y; Zucchetta, A; et al (2017). Measurement of prompt and nonprompt J/ψ production in pp and pPb collisions at √sNN=5.02TeV. European Physical Journal C - Particles and Fields, 77(4):269.
Abstract
This paper reports the measurement of J/ψ meson production in proton–proton (pp) and proton–lead (pPb) collisions at a center-of-mass energy per nucleon pair of 5.02TeV by the CMS experiment at the LHC. The data samples used in the analysis correspond to integrated luminosities of 28pb−1 and 35nb−1 for pp and pPb collisions, respectively. Prompt and nonprompt J/ψ mesons, the latter produced in the decay of B hadrons, are measured in their dimuon decay channels. Differential cross sections are measured in the transverse momentum range of 2<pT<30GeV/c, and center-of-mass rapidity ranges of |yCM|<2.4 (pp) and −2.87<yCM<1.93 (pPb). The nuclear modification factor, RpPb, is measured as a function of both pT and yCM. Small modifications to the J/ψ cross sections are observed in pPb relative to pp collisions. The ratio of J/ψ production cross sections in p-going and Pb-going directions, RFB, studied as functions of pT and yCM, shows a significant decrease for increasing transverse energy deposited at large pseudorapidities. These results, which cover a wide kinematic range, provide new insight on the role of cold nuclear matter effects on prompt and nonprompt J/ψ production.
|
{}
|
# How do you find the vertical, horizontal and slant asymptotes of: f(x)=(x^2 + 1)/ (x-2)^2?
Nov 28, 2017
$V A : x = 2$
$H A : y = 1$
#### Explanation:
$f \left(x\right) = \frac{{x}^{2} + 1}{x - 2} ^ 2$
$f \left(x\right) = \frac{{x}^{2} + 1}{\left(x - 2\right) \left(x - 2\right)}$
(Reference: https://socratic.org/precalculus/functions-defined-and-notation/asymptotes)
$V A : x = 2$, since the denominator $\left(x - 2\right)^2$ vanishes at $x = 2$ while the numerator does not.
$H A : y = \frac{1}{1} = 1$, the ratio of the leading coefficients, since the numerator and denominator have the same degree.
There is no slant asymptote: that would require the degree of the numerator to exceed the degree of the denominator by exactly one.
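A quick numerical check of these asymptotes (a Python sketch):

```python
# f(x) = (x^2 + 1)/(x - 2)^2
f = lambda x: (x**2 + 1) / (x - 2)**2

print(f(2 + 1e-6) > 1e9)       # True: blows up near x = 2 (vertical asymptote)
print(abs(f(1e8) - 1) < 1e-6)  # True: approaches y = 1 (horizontal asymptote)
```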
|
{}
|
# Contents
## Idea
What is called the Myers effect (Myers 99) in string theory is the claimed phenomenon that given $N$ D0-branes in a constant background RR field $F_4$ (the field strength associated with D2-brane charge) with, crucially, nonabelian effects included, then these D0-branes expand into a fuzzy 2-sphere which represents a spherical D0-D2 brane bound state of a D2-brane and $N$ D0-branes (Myers 99, section 6, see p. 22, Myers 03, section 4).
## References
The effect now known as the “Myers effect” in D-brane theory was first described in:
Review includes
|
{}
|
# Notation
• $$\mathbf{1}_A$$ is the indicator function on the set $$A$$. It is defined by $\mathbf{1}_A(x) = \begin{cases} 1 & \text{if } x \in A\\ 0 & \text{if } x \not\in A \end{cases}$
|
{}
|
Population ecology is a sub-field of ecology that deals with the dynamics of species populations and how these populations interact with the environment, through processes such as birth and death rates, immigration, and emigration. The discipline is important in conservation biology, especially in the development of population viability analysis, which makes it possible to predict the long-term probability of a species persisting in a given patch of habitat. Although population ecology is a subfield of biology, it provides interesting problems for mathematicians and statisticians who work in population dynamics.
# History
In the 1940s ecology was divided into autecology (the study of individual species in relation to the environment) and synecology (the study of groups of species in relation to the environment). The term autecology (from Ancient Greek: αὐτο, ''aúto'', "self"; οίκος, ''oíkos'', "household"; and λόγος, ''lógos'', "knowledge") refers to roughly the same field of study as concepts such as life cycles and behaviour as adaptations to the environment by individual organisms. Eugene Odum, writing in 1953, considered that synecology should be divided into population ecology, community ecology and ecosystem ecology, renaming autecology as 'species ecology' (Odum regarded "autecology" as an archaic term), so that there were four subdivisions of ecology.
# Terminology
A population is defined as a group of interacting organisms of the same species. Populations are often quantified by their demographic structure. The total number of individuals in a population is its population size, and the number of individuals per unit area is its population density. A population also has a geographic range, whose limits are set by the conditions (such as temperature) that the species can tolerate. Population size can be influenced by the per capita population growth rate (the rate at which the population size changes per individual in the population). Birth, death, emigration, and immigration rates all play a significant role in the growth rate. The maximum per capita growth rate for a population is known as the intrinsic rate of increase. The carrying capacity is the maximum population size of the species that the environment can sustain, and it is determined by the resources available. In many classic population models, ''r'' represents the intrinsic growth rate, ''K'' is the carrying capacity, and ''N''0 is the initial population size.
# Population dynamics
The development of population ecology owes much to the mathematical models known as population dynamics, which were originally formulae derived from demography at the end of the 18th and beginning of the 19th century. The beginning of population dynamics is widely regarded as the work of Malthus (''An Essay on the Principle of Population'', 1798), formulated as the Malthusian growth model. According to Malthus, assuming that the conditions (the environment) remain constant (''ceteris paribus''), a population will grow (or decline) exponentially. This principle provided the basis for subsequent predictive theories, such as the demographic studies of Benjamin Gompertz and Pierre François Verhulst in the early 19th century, who refined and adjusted the Malthusian demographic model. A more general model formulation was proposed by F. J. Richards in 1959, further expanded by Simon Hopkins, in which the models of Gompertz, Verhulst and also Ludwig von Bertalanffy are covered as special cases of the general formulation. The Lotka–Volterra predator–prey equations are another famous example, as well as the alternative Arditi–Ginzburg equations.
## Exponential vs. Logistic Growth
When describing growth models, two types of model can be used: exponential and logistic. When the per capita rate of increase takes the same positive value regardless of population size, the population shows exponential growth. When the per capita rate of increase decreases as the population grows towards a maximum limit, the population shows logistic growth.
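The difference can be illustrated numerically. The sketch below (the parameter values are illustrative assumptions, not taken from the text) iterates a discrete-time version of each model: the exponential model's per capita increase stays fixed at ''r'', while the logistic model's per capita increase shrinks by the factor (1 − N/K):

```python
# Discrete-time sketch of exponential vs. logistic growth.
# r = per capita rate of increase, K = carrying capacity, N0 = initial size
# (all values here are illustrative assumptions).

def exponential_step(n, r):
    return n + r * n                  # per capita increase is constant

def logistic_step(n, r, k):
    return n + r * n * (1 - n / k)    # per capita increase falls as n -> k

r, K, N0 = 0.5, 1000.0, 10.0
n_exp = n_log = N0
for _ in range(30):
    n_exp = exponential_step(n_exp, r)
    n_log = logistic_step(n_log, r, K)

print(round(n_exp))   # grows without bound
print(round(n_log))   # levels off near the carrying capacity K
```

After 30 steps the exponential trajectory has exploded, while the logistic trajectory has settled near K.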
# Fisheries and wildlife management
In fisheries and wildlife management, population is affected by three dynamic rate functions.

* Natality or birth rate, often recruitment, which means reaching a certain size or reproductive stage. It usually refers to the age at which a fish can be caught and counted in nets.
* Population growth rate, which measures the growth of individuals in size and length. This is more important in fisheries, where population is often measured in biomass.
* Mortality, which includes harvest mortality and natural mortality. Natural mortality includes non-human predation, disease and old age.

If ''N''1 is the number of individuals at time 1, then $N_1 = N_0 + B - D + I - E$, where ''N''0 is the number of individuals at time 0, ''B'' is the number of individuals born, ''D'' the number that died, ''I'' the number that immigrated, and ''E'' the number that emigrated between time 0 and time 1. If we measure these rates over many time intervals, we can determine how a population's density changes over time. Immigration and emigration are present, but are usually not measured.

All of these are measured to determine the harvestable surplus, which is the number of individuals that can be harvested from a population without affecting long-term population stability or average population size. The harvest within the harvestable surplus is termed "compensatory" mortality, where the harvest deaths are substituted for the deaths that would have occurred naturally. Harvest above that level is termed "additive" mortality, because it adds to the number of deaths that would have occurred naturally. These terms are not necessarily judged as "good" and "bad," respectively, in population management. For example, a fish and game agency might aim to reduce the size of a deer population through additive mortality. Bucks might be targeted to increase buck competition, or does might be targeted to reduce reproduction and thus overall population size.

For the management of many fish and other wildlife populations, the goal is often to achieve the largest possible long-run sustainable harvest, also known as maximum sustainable yield (or MSY). Given a population dynamic model, such as any of the ones above, it is possible to calculate the population size that produces the largest harvestable surplus at equilibrium. While the use of population dynamic models along with statistics and optimization to set harvest limits for fish and game is controversial among some scientists, it has been shown to be more effective than the use of human judgment in computer experiments where both incorrect models and natural resource management students competed to maximize yield in two hypothetical fisheries. To give an example of a non-intuitive result, fisheries produce more fish when there is a nearby refuge from human predation in the form of a nature reserve, resulting in higher catches than if the whole area was open to fishing.
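The bookkeeping identity N1 = N0 + B − D + I − E can be sketched directly in code (the figures below are made-up illustrations, not data from the text):

```python
# Population change over one interval: N1 = N0 + B - D + I - E,
# where B = births, D = deaths, I = immigrants, E = emigrants.
# All figures below are made-up for illustration.

def next_population(n0, births, deaths, immigrants, emigrants):
    return n0 + births - deaths + immigrants - emigrants

n1 = next_population(n0=500, births=120, deaths=80, immigrants=10, emigrants=25)
print(n1)  # 525
```

Applying the same update over many intervals traces how the population's size changes through time.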
# r/K selection
An important concept in population ecology is the r/K selection theory. Whether an animal produces one offspring or many, and whether it puts much or little effort into each offspring, are examples of trade-offs, and the way species resolve these trade-offs leads to a distinction between ''r''-selected and ''K''-selected species. The first variable is ''r'' (the intrinsic rate of natural increase in population size, density independent) and the second variable is ''K'' (the carrying capacity of a population, density dependent). An ''r''-selected species (e.g., many kinds of insects, such as aphids) is one that has high rates of fecundity, low levels of parental investment in the young, and high rates of mortality before individuals reach maturity. Evolution favors productivity in ''r''-selected species. In contrast, a ''K''-selected species (such as humans) has low rates of fecundity, high levels of parental investment in the young, and low rates of mortality as individuals mature. Evolution in ''K''-selected species favors efficiency in the conversion of more resources into fewer offspring. ''K''-selected species generally experience stronger competition, and their populations generally live near carrying capacity. These species have heavy investment in offspring, resulting in longer-lived organisms and a longer period of maturation. Offspring of ''K''-selected species generally have a higher probability of survival, due to heavy parental care and nurturing.
# Top-Down and Bottom-Up Controls
## Top-Down Controls
In some populations, organisms in lower trophic levels are controlled by organisms at the top. This is known as top-down control. For example, the presence of top carnivores keeps herbivore populations in check. If there were no top carnivores in the ecosystem, the herbivore population would increase rapidly and all the plants would eventually be eaten, and the ecosystem would collapse.
## Bottom-Up Controls
Bottom-up controls, on the other hand, are driven by the producers in the ecosystem. If plant populations change, the populations of all the species that depend on them are affected. For example, if plant populations decreased significantly, the herbivore populations would decrease, which would in turn cause the carnivore populations to decrease. Therefore, if all of the plants disappeared, the ecosystem would collapse. Another example: two herbivore populations may compete for the same plant food, and the competition can lead to the eventual removal of one population.
## Do all ecosystems have to be either top-down or bottom-up?
An ecosystem does not have to be exclusively top-down or bottom-up. An ecosystem can be under bottom-up control at some times, as in many marine ecosystems, and then experience periods of top-down control due to fishing.
# Survivorship curves
Survivorship curves show, for a cohort, the proportion of individuals surviving to each age, and they make it possible to compare generations, populations, or even different species. Humans and most other mammals have a Type I survivorship curve, because death occurs mostly in the older years; Type I survivorship generally includes K-selected species. A Type II curve indicates that death is equally probable at any age. A Type III curve indicates that few individuals survive the younger years, but that after a certain age individuals are much more likely to survive; Type III survivorship typically includes r-selected species.
# Metapopulation
Populations are also studied and conceptualized through the "metapopulation" concept. The metapopulation concept was introduced in 1969: "as a population of populations which go extinct locally and recolonize." Metapopulation ecology simplifies the landscape into patches of varying levels of quality. Patches are either occupied or they are not. Migrants moving among the patches structure the metapopulation into sources and sinks. Source patches are productive sites that generate a seasonal supply of migrants to other patch locations. Sink patches are unproductive sites that only receive migrants. In metapopulation terminology there are emigrants (individuals that leave a patch) and immigrants (individuals that move into a patch). Metapopulation models examine patch dynamics over time to answer questions about spatial and demographic ecology. An important concept in metapopulation ecology is the rescue effect, first described by Brown and Kodric-Brown (''Ecology'' 58, 1977), in which small patches of lower quality (i.e., sinks) are maintained by a seasonal influx of new immigrants. Metapopulation structure evolves from year to year: some patches are sinks in dry years and become sources when conditions are more favorable. Ecologists use a mixture of computer models and field studies to explain metapopulation structure.
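The patch-occupancy idea can be made quantitative. The sketch below uses the classic Levins metapopulation model, a standard formalization that is not given in the text above; the colonization and extinction rates are illustrative assumptions. In this model the fraction p of occupied patches settles at 1 − e/c:

```python
# Levins metapopulation model: dp/dt = c*p*(1 - p) - e*p, where
# p = fraction of patches occupied, c = colonization rate, e = extinction rate.
# Parameter values are illustrative assumptions; Euler integration.

def levins_step(p, c, e, dt=0.01):
    return p + dt * (c * p * (1 - p) - e * p)

c, e = 0.5, 0.2
p = 0.1                       # start with 10% of patches occupied
for _ in range(5000):         # integrate to t = 50
    p = levins_step(p, c, e)

print(round(p, 3))  # approaches the equilibrium 1 - e/c = 0.6
```

Because colonization outpaces extinction (c > e), occupancy persists at a stable intermediate fraction rather than all patches being occupied or the metapopulation going extinct.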
# Journals
The first journal publication of the Society of Population Ecology, titled ''Population Ecology'' (originally called ''Researches on Population Ecology''), was released in 1952. Scientific articles on population ecology can also be found in the ''Journal of Animal Ecology'', ''Oikos'' and other journals.
# See also
* Density-dependent inhibition
* Irruptive growth
* Lists of organisms by population
* Overpopulation
* Population density
* Population distribution
* Population dynamics
* Population dynamics of fisheries
* Population genetics
* Population growth
* Theoretical ecology
# Taylor Series Expansion
In financial markets, participants would like to measure the effect of changes in yield on the price of a bond. This enables better risk management of financial assets, because the impact on asset values can be determined in advance.
Recomputing the value of the bond using the changed yield is the obvious solution. In practice, however, a method called the Taylor series expansion is used for this purpose. The expansion approximates the non-linear relationship between the yield and the price of a bond around its initial value.
The English mathematician Brook Taylor published this result in 1715, and its full significance was realized only in 1755, when Euler applied it to differential calculus.
The other way of stating what the Taylor expansion implies is to assume that the price function can be written in polynomial form, i.e.
$f(y_0 + \Delta y) = \alpha_0 + \alpha_1 \Delta y + \alpha_2 \Delta y^2 + \cdots$
where the coefficients are unknown. Setting Δy to 0 gives α0 = f(y0). If the derivative of both sides is taken and Δy is again set to 0, we obtain α1 = f'(y0), the first derivative at y0, and the next step gives 2α2 = f''(y0). In this context, "derivative" refers to the mathematical operation and has nothing to do with the financial product of the same name.
The first derivative measures the duration and the second derivative measures the convexity; there are also situations where more than one variable appears in the equation.
The term f'(.) is the first derivative of the price with respect to the yield, and the term f''(.) is the second derivative of the price with respect to the yield, both evaluated at the initial value of the function f(.).
The Taylor expansion is an infinite expansion in increasing powers of Δy, of which only the first two terms are used by industry practitioners, as they are good indicators of asset price changes relative to the other assumptions made in valuing financial assets. The convexity term is negligible for very small changes in yield.
The Taylor expansion is one of the fundamental methods used in risk management and is used in different ways in financial markets. It is also used to approximate the movement in value of a derivatives contract, e.g., an option on a stock. The equation then becomes
$\Delta V \approx f'(S)\,\Delta S + \tfrac{1}{2} f''(S)\,(\Delta S)^2$
where S is the price of the underlying asset, V is the value of the option, the first derivative f'(S) is called the delta and the second derivative f''(S) is called the gamma.
The expansion is also useful when a number of financial instruments are involved. If there are N different bonds, held in certain numbers of units each, then the first derivative of the portfolio value is the sum, over the bonds, of the number of units of each bond multiplied by the first derivative of the value of that bond.
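As a sketch of the two-term approximation in practice, consider an assumed example (not one from the text): a 10-year zero-coupon bond with continuously compounded yield, so f(y) = 100·e^(−yT), f'(y) = −T·f(y) and f''(y) = T²·f(y). The first two Taylor terms can then be compared against full repricing:

```python
import math

# Assumed example: 10-year zero-coupon bond, face value 100,
# continuously compounded yield, so price f(y) = 100 * exp(-y * T).

T = 10.0

def price(y):
    return 100.0 * math.exp(-y * T)

def taylor_change(y0, dy):
    p0 = price(y0)
    duration_term = -T * p0 * dy               # f'(y0) * dy
    convexity_term = 0.5 * T * T * p0 * dy**2  # (1/2) * f''(y0) * dy^2
    return duration_term + convexity_term

y0, dy = 0.05, 0.01   # yield moves from 5% to 6%
exact = price(y0 + dy) - price(y0)
approx = taylor_change(y0, dy)
print(round(exact, 4), round(approx, 4))
```

The two-term approximation lands within a few cents of the fully repriced change, and the duration term alone would overstate the fall in price, which is exactly the correction the convexity term provides.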
# Array Search Algorithms
Suppose we have a large array and we need to find one of its elements. We need an algorithm to search the array for a particular value, usually called the key. If the elements of the array are not arranged in any particular order, the only way we can be sure to find the key, assuming it is in the array, is to search every element, beginning at the first element, until we find it.
This approach is known as a sequential search, because each element of the array will be examined in sequence until the key is found (or the end of the array is reached). A pseudocode description of this algorithm is as follows:
1. For each element of the array
2. If the element equals the key
3. Return its index
4. Return -1 (to indicate failure)
This algorithm can easily be implemented in a method that searches an integer array, which is passed as the method’s parameter. If the key is found in the array, its location is returned. If it is not found, then −1 is returned to indicate failure.
The Search class provides the Java implementation of the sequentialSearch() method. The method takes two parameters:
1. the array to be searched
2. the key to be searched for.
It uses a for statement to examine each element of the array, checking whether it equals the key or not. If an element that equals the key is found, the method immediately returns that element’s index. Note that the last statement in the method will only be reached if no element matching the key is found.
public class Search {

    public int sequentialSearch(int arr[], int key) {
        for (int k = 0; k < arr.length; k++)
            if (arr[k] == key)
                return k;
        return -1;  // Failure if this is reached
    } // sequentialSearch()
}
If the elements of an array have already been sorted into ascending or descending order, it is not necessary to search sequentially through each element of the array in order to find the key. Instead, the search algorithm can make use of the knowledge that the array is ordered and perform what’s known as a binary search, which is a divide-and-conquer algorithm that divides the array in half on each iteration and limits its search to just the half that could contain the key.
To illustrate the binary search, recall the familiar guessing game in which you try to guess a secret number between 1 and 100, being told “too high” or “too low” or “just right” on each guess. A good first guess should be 50. If this is too high, the next guess should be 25, because if 50 is too high the number must be between 1 and 49. If 50 was too low, the next guess should be 75, and so on. After each wrong guess, a good guesser should pick the midpoint of the sublist that would contain the secret number.
Proceeding in this way, the correct number can be guessed in at most log2(N) guesses, because the base-2 logarithm of N is the number of times N can be divided in half. For a list of 100 items, the search should take no more than seven guesses (2^7 = 128 > 100). For a list of 1,000 items, a binary search would take at most ten guesses (2^10 = 1,024 > 1,000).
So a binary search is a much more efficient way to search, provided the array’s elements are in order. Note that “order” here needn’t be numeric order. We could use binary search to look up a word in a dictionary or a name in a phone book.
A pseudocode representation of the binary search is given as follows:
TO SEARCH AN ARRAY OF N ELEMENTS IN ASCENDING ORDER
1. Assign 0 to low and assign N-1 to high initially
2. As long as low is not greater than high
3. Assign (low + high) / 2 to mid
4. If the element at mid equals the key
5. then return its index
6. Else if the element at mid is less than the key
7. then assign mid + 1 to low
8. Else assign mid - 1 to high
9. If this is reached return -1 to indicate failure
Just as with the sequential search algorithm, this algorithm can easily be implemented in a method that searches an integer array that is passed as the method’s parameter. If the key is found in the array, its location is returned. If it is not found, then −1 is returned to indicate failure. The binarySearch() method takes the same type of parameters as sequentialSearch(). Its local variables, low and high, are used as references (pointers), to the current low and high ends of the array, respectively. Note the loop-entry condition: low <= high. If low ever becomes greater than high, this indicates that key is not contained in the array. In that case, the algorithm returns −1 .
As a binary search progresses, the array is repeatedly cut in half and low and high will be used to point to the low and high index values in that portion of the array that is still being searched. The local variable mid is used to point to the approximate midpoint of the unsearched portion of the array. If the key is determined to be past the midpoint, then low is adjusted to mid+1; if the key occurs before the midpoint, then high is set to mid-1. The updated values of low and high limit the search to the unsearched portion of the original array.
public class Search {

    public int binarySearch(int arr[], int key) {
        int low = 0;                    // Initialize bounds
        int high = arr.length - 1;
        while (low <= high) {           // While not done
            int mid = (low + high) / 2;
            if (arr[mid] == key)
                return mid;             // Success
            else if (arr[mid] < key)
                low = mid + 1;          // Search top half
            else
                high = mid - 1;         // Search bottom half
        } // while
        return -1;  // Post: if low > high search failed
    } // binarySearch()

    public static void main(String[] args) {
        int sortArr[] = {1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20};
        Search s = new Search();
        System.out.println("Search 3 using binary search: " + s.binarySearch(sortArr, 3));
        System.out.println("Search -5 using binary search: " + s.binarySearch(sortArr, -5));
    }
}
Unlike sequential search, binary search does not have to examine every location in the array to determine that the key is not in the array. It searches only that part of the array that could contain the key. For example, suppose we are searching for −5 in the following array:
int sortArr[] = { 1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20};
The −5 is smaller than the smallest array element. Therefore, the algorithm will repeatedly divide the low end of the array in half until the condition low > high becomes true. We can see this by tracing the values that low, mid, and high will take during the search:
Key Iteration Low High Mid
----------------------------------
-5 0 0 19 9
-5 1 0 8 4
-5 2 0 3 1
-5 3 0 0 0
-5 4 0 -1 Failure
As this trace shows, the algorithm examines only four locations to determine that −5 is not in the array. After checking location 0, the new value of high becomes −1, which makes the condition low <= high false, so the search terminates.
The TestSearch class below provides a program that can be used to test two search methods. It creates an integer array, whose values are in ascending order. It then uses the getInput() method to input an integer from the keyboard and then performs both a sequentialSearch() and a binarySearch() for the number.
For the array containing the elements 2, 4, 6, and so on up to 28 in that order, draw a trace showing which elements are examined if you search for 21 using a binary search.
import java.util.Scanner;

public class TestSearch {

    // The original listing used a textbook KeyboardReader helper named kb;
    // a standard Scanner is used here so the class compiles on its own.
    private static Scanner kb = new Scanner(System.in);

    public static int getInput() {
        System.out.println("This program searches for values in an array.");
        System.out.print("Input any positive integer (or any negative to quit): ");
        return kb.nextInt();
    } // getInput()

    public static void main(String args[]) {
        int intArr[] = {2,4,6,8,10,12,14,16,18,20,22,24,26,28};
        Search searcher = new Search();
        int key = 0, keyAt = 0;
        key = getInput();
        while (key >= 0) {
            keyAt = searcher.sequentialSearch(intArr, key);
            if (keyAt != -1)
                System.out.println(" Sequential: " + key + " is at intArr[" + keyAt + "]");
            else
                System.out.println(" Sequential: " + key + " is not contained in intArr[]");
            keyAt = searcher.binarySearch(intArr, key);
            if (keyAt != -1)
                System.out.println(" Binary: " + key + " is at intArr[" + keyAt + "]");
            else
                System.out.println(" Binary: " + key + " is not contained in intArr[]");
            key = getInput();
        } // while
    } // main()
} // TestSearch
# Errors Starting Tomcat - "Unable to Locate the Server localhost:8080" Error
Symptom: an "unable to locate server" error when trying to load a Web application in a browser.
Tomcat can take quite some time before fully loading, so first of all, make sure you've allowed at least 5 minutes for Tomcat to load before continuing troubleshooting. To verify that Tomcat is running, point your browser to http://localhost:8080. When the Tomcat index screen displays, you may continue. If the index screen does not load immediately, wait up to several minutes and then retry. If Tomcat still has not loaded, check the log files, as explained below, for further troubleshooting information.
When Tomcat starts up, it initializes itself and then loads all the Web applications in <JWSDP_HOME>/webapps. When you run Tomcat by calling startup.sh, the server messages are logged to <JWSDP_HOME>/logs/catalina.out. The progress of loading Web applications can be viewed in the file <JWSDP_HOME>/logs/jwsdp_log.<date>.txt
# 2013-07 Maximum number of points
Consider the unit sphere in $$\mathbb{R}^n$$. Find the maximum number of points on the sphere such that the (Euclidean) distance between any two of these points is larger than $$\sqrt 2$$.
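A standard first reduction (a hint, not a full solution): for unit vectors $$x, y$$ on the sphere,

```latex
\lVert x - y \rVert^{2} = \lVert x \rVert^{2} + \lVert y \rVert^{2} - 2\,\langle x, y \rangle = 2 - 2\,\langle x, y \rangle,
```

so the condition $$\lVert x - y \rVert > \sqrt{2}$$ is equivalent to $$\langle x, y \rangle < 0$$, and the problem becomes: how many unit vectors in $$\mathbb{R}^n$$ can have pairwise negative inner products?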
Let $$P_1,P_2,\ldots,P_n$$ be $$n$$ points in the open unit square $$\{(x,y): 0<x<1,\ 0<y<1\}$$ ($$n>1$$). Let $$r_i=\min_{j\neq i} d(P_i,P_j)$$, where $$d(x,y)$$ denotes the distance between two points $$x$$ and $$y$$. Prove that $$r_1^2+r_2^2+\cdots+r_n^2\le 4$$.
# Go First Impressions
I knocked up a little project this weekend to help extract blog entries from a serendipity blog and format them up to work with the Hugo blogging engine. Partly because I’ve gotten sick of battling making python work smoothly on Windows, partly because Hugo is written in Go, and partly because I’ve been looking for an excuse to write something in Go, I decided to write it in, well, Go. And because I love having Opinions, I jotted down my thoughts about Go based on a first acquaintance.
## Things I Liked
### Portability
This is something that has, for me anyway, become more, rather than less important. I split between Windows and Linux systems in my day-to-day (and professional) life. For many languages and runtimes the answer to “how do I use this” basically boils down to “spin up a VM” if you aren’t running, at a minimum, a POSIX userland, and ideally a real Unix.
That, for me, is a huge pain in the arse.
What’s interesting is that small projects of the sort I wrote can be a great way to surface portability problems. This is because if you are both lazy in the right way1 and humble, you will look to achieve your goals by using someone else’s excellent code rather than re-inventing the universe yourself, as so many programmers love to do.
For s9y2hugo I’m pulling data from a DB and writing it to the filesystem: pulling data from a DB is often an excellent test of genuine portability. It’s the sort of thing where you so often discover that while the language and runtime you’re using are in-theory portable, the libraries you’ll need to do anything require native code, at which point you’ll be throwing your hands up in frustration, because getting a working runtime involves so much pain it’s easier to install a VM. Not that running Linux necessarily helps with that2.
For Go, however, my experience was refreshingly straightforward: everything I needed from the standard library Just Worked, and the external pg driver installed and ran just as well on every platform with Go3. That’s huge for me. It’s light years ahead of trying to work in Ruby (just don’t), Perl, and a bit better than Python. And, obviously C/C++/etc. It’s about as easy as Java, except writing Go is nicer than writing Java for me.
### Templates, Because It’s 2016
I was impressed with the standard templating system. It’s pretty simple once you get your head around it, and powerful enough that unlike, say, Python, there isn’t a proliferation of templating options. This is very good, because for glue-type tasks (which makes up a chunk of Go’s systems programming target) being able to quickly and easily format output is a win. And having one canonical choice makes life easy4.
### Memory Management
It’s not just that life is too short for malloc(), it’s that humans are objectively bad at memory management. A significant number of security vulnerabilities, perhaps even the majority down the years, come down to poor memory management. The most recent high-profile browser hacking competition, for example, had something like 80% of the winning entries rely on a double-free that could be exploited; this isn’t particularly unrepresentative.
People are bad at memory management. I’ll say it again because it’s that important. Somewhere north of 99% of programmers shouldn’t be doing memory management. There’s no good excuse for it.
### Decent String Handling
Go is C-like, and developed by some of the Big Giant Brains behind C and Unix, and one of the things they’ve learned from is the disaster that C-style string handling turned out to be. How many critical vulnerabilities in code over the years have been down to the many terrible ways to read and copy strings around in C? Slightly fewer than memory management, perhaps, but plenty nonetheless. And since dealing with text is, it turns out, one of the main tasks programmers end up doing (at least as much as dealing with numbers, maybe more), having some sane, idiot-proof ways of doing those things is kind of important.
### Batteries Generally Included
This is the general case of the string handling and templates I mentioned: Go was developed this century, and it shows in the best ways. Of course there’s a standard way of reading command line input as part of the core. Of course there’s a standard database framework. Of course there’s standard templating for common output formats. Of course there’s a framework for serialising and deserialising data from a variety of formats (even if it’s a bit verbose). The things you’d expect to be built-in based on the 40 years since C was developed are there. We’re not pretending “creating Internet servers” or “outputting HTML” or “interacting with databases” are exotic add-ons.
### Error Hinting
The compiler offers useful hints as to why my code isn’t working, second only to Perl in their usefulness. I appreciate this.
## Things I Didn’t Like
### Sigils
Go eliminates the need for semicolons on every line, but having gotten used to Python saying, “fuck it, formatting matters” and using that to eliminate swathes of pointless brackets and sigils, Go seems kind of bloated. What’s irritating is that the Go community takes a strong stance that code must be passed through go fmt before being contributed to projects. If you’re going to enforce the idea that code must look like the output of go fmt for layout (spacing and so on), why not just go full Python and make it significant?
### Iterating
Go’s iterations collapse down to ‘for’ loops, but honestly, using things like the range construct to implement a foreach equivalent feels like the hobgoblin of a small mind.
## One Last Thing
Thanks to Maire, who wrangled children for most of a day so I could actually get some code written.
1. That it so say, the productive kind of laziness that inspires you to spend a bunch of time coding up some automation for a task, rather than manually doing the same thing over and over again by hand.
2. Thanks to the combination of the big-ball-of-mud model of many Linux distros with the mess of dependencies in many dynamic languages you can find it somewhere between difficult and vein-poppingly infuriating to have multiple projects using nominally the same framework on the same OS install. I abandoned Rails as a couple of different upgrades to software required not merely multiple copies of Rails installed, but multiple copies of the Ruby interpreter. In parallel. None of which were supported by the distro. Life is too short.
3. Except Debian, of course, because Debian sneers at the idea you want versions released any time in the last few years; Debian stable supports go 1.0. This is about as useful as Linux 1.2.
4. Choice is good, except when it isn’t. When people spend more time bikeshedding shades of pink in their templating language, for example, than using it… well, that’s why VB and PHP programmers have built half the Web with the tools available while bitter nerds squabble over which hydra-headed combination of “real” languages, libraries, and pointless re-implementations thereof should actually be popular.
## Monday, November 3, 2014
### Halloween Trick-or-Treaters as a Poisson Process
Usually this time of the year I'm blogging about some Halloween-themed cookie recipe or jack-o-lanterns (and roasted pumpkin seeds yum!). This year I thought it would be fun to discuss the idea of a Poisson process and use Halloween as an example. In this blogpost, I will simulate the number of trick-or-treaters as a Poisson process!
Generally speaking, a Poisson process is a continuous-time process ${N(t), t \geq 0}$ where $N(t)$ counts the number of events that occur in the time interval $[0, t]$; equivalently, it can be described by the inter-arrival times between those events. In our case, we can think of the Poisson process as counting the number of trick-or-treaters in a given time interval. Specifically, a Poisson process is characterized by the following properties:
1. The number of events at time $t$ = 0 is 0 (or $N(0) = 0$)
2. Stationary increments: the probability distribution of $N(t+h) - N(t)$ depends only on $h$ (not $t$). This means the probability of observing a certain number of trick-or-treaters in a given time interval depends only on the length of the time interval (e.g. 1hr).
3. Independent increments: the number of events occurring in disjoint time intervals are independent of each other. You can think of this as the number of trick-or-treaters we see from e.g. 5:30-6:30pm doesn't influence the number of trick-or-treaters we see e.g. 7:30-8:30pm.
4. $N(t)$ is distributed as a Poisson distribution.
Assuming these four properties, we immediately get a free piece of information:
• Inter-arrival times between the events (or "waiting times") are independent and identically distributed as an exponential random variable with a given rate parameter. Therefore to simulate a Poisson process all we have to do is simulate the inter-arrival times between events using an exponential distribution.
Now, there are several types of Poisson processes, but for our purposes I will focus on two: (1) the homogeneous and (2) the inhomogeneous Poisson process. The main difference between the two is the rate at which events occur. In the homogeneous Poisson process, events occur at a constant rate $\lambda$. In the inhomogeneous Poisson process, events occur at a variable rate $\lambda(t)$.
• homogeneous Poisson process:
• The probability of one event in a small interval $h$ is approximately $\lambda h$ where $\lambda$ is a rate parameter. The probability of two events in a small interval is approximately 0.
$$N(t) \sim Poisson(\lambda t)$$
$$P[N(t + s) - N(t) = k] = \frac{e^{-\lambda s} (\lambda s)^{k}}{k!}$$
If we define $S_k$ as the arrival time of the $k^{th}$ events and $X_k = S_k - S_{k-1}$ as the time between the $k^{th}$ and $k-1$ arrival time, then
$$P(X_k > t | S_{k-1} = s) = e^{-\lambda t}$$
• inhomogeneous Poisson process:
• The difference here is that the rate parameter varies over time: $\lambda(t)$. This means we no longer have stationary increments as above, because the number of events observed in a given time interval depends on both the length of the interval AND the time $t$ itself.
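The homogeneous counting probability above is straightforward to evaluate directly. Here is a minimal sketch (in Java; the rate and interval values in the example are arbitrary, not taken from the post):

```java
public class PoissonPmf {
    // P[N(t+s) - N(t) = k] = e^(-lambda*s) * (lambda*s)^k / k!
    static double pmf(double lambda, double s, int k) {
        double mean = lambda * s;
        double p = Math.exp(-mean);
        // Multiply in mean^k / k! incrementally to avoid overflowing k! for large k.
        for (int i = 1; i <= k; i++) p *= mean / i;
        return p;
    }

    public static void main(String[] args) {
        // With 1.5 arrivals per minute, probability of exactly 2 arrivals in one minute:
        System.out.printf("%.4f%n", pmf(1.5, 1.0, 2));
    }
}
```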
Let's try an example: simulating the number of trick-or-treaters using a homogeneous Poisson process with rate parameter $\lambda$. Using this blogpost as an estimate for the number of trick-or-treaters per minute, I estimated there are 1-2 trick-or-treaters per minute. As stated above, to simulate the Poisson process, I will simulate the inter-arrival times of the trick-or-treaters using an exponential distribution. The cumulative distribution function of an exponential random variable $T$ is given by
$$u = F(t) = 1 - e^{-\lambda t}$$
As a little background reading, here are two sets of notes on simulating a Poisson process which are particularly useful: here and here. If the hours for trick-or-treating are around 5:30-8:30pm, the inter-arrival times $X_k$ can be simulated by drawing $u \sim U[0,1]$ and then solving for $t$:
$$t = - \frac{\log(u)}{\lambda}$$
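That inversion step is all a simulator needs. Here is a sketch (in Java; the rate of 1.5 per minute and the 180-minute window are assumptions based on the estimates above):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Random;

public class TrickOrTreatSim {
    // Simulate arrival times (minutes after 5:30pm) of a homogeneous Poisson
    // process with rate lambda, by inverse-transform sampling of Exp(lambda)
    // inter-arrival times: t = -log(u) / lambda with u ~ U(0,1].
    static List<Double> simulate(double lambda, double horizon, long seed) {
        Random rng = new Random(seed);
        List<Double> arrivals = new ArrayList<>();
        double t = 0.0;
        while (true) {
            double u = 1.0 - rng.nextDouble();        // in (0, 1], so log(u) is finite
            t += -Math.log(u) / lambda;               // next inter-arrival gap
            if (t > horizon) break;                   // past the end of trick-or-treating
            arrivals.add(t);
        }
        return arrivals;
    }

    public static void main(String[] args) {
        // Assumed rate: 1.5 trick-or-treaters per minute, 5:30-8:30pm = 180 minutes.
        List<Double> arrivals = simulate(1.5, 180.0, 42L);
        System.out.println(arrivals.size() + " trick-or-treaters simulated");
    }
}
```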
One nice extension of this example would be to an inhomogeneous Poisson process where the rate at which the trick-or-treaters arrive varies across time. I'll leave it to you to try. Hope everyone had a safe and happy Halloween!
• # question_answer A cubical block rests on an inclined plane of coefficient of friction $\mu =1/\sqrt{3}$. What should be the angle of inclination so that the block Just slides down the inclined plane? A) ${{30}^{o}}$ B) ${{60}^{o}}$ C) ${{45}^{o}}$ D) ${{90}^{o}}$
For the block on the point of sliding, $\tan \theta =\mu =\frac{1}{\sqrt{3}}=\tan \,{{30}^{o}}$, so the angle of inclination is $\theta ={{30}^{o}}$ [option A].
### AST_RESOLVE
Resolve a vector into two orthogonal components
#### Description:
This routine resolves a vector into two perpendicular components. The vector from point 1 to point 2 is used as the basis vector. The vector from point 1 to point 3 is resolved into components parallel and perpendicular to this basis vector. The lengths of the two components are returned, together with the position of closest approach of the basis vector to point 3.
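In a simple Cartesian Frame this decomposition reduces to the familiar Euclidean projection (a sketch only; in curved Frames AST works with geodesics, as the Notes below explain). Writing $\mathbf{b} = P_2 - P_1$ for the basis vector and $\mathbf{v} = P_3 - P_1$ for the vector being resolved:

```latex
D_1 = \frac{\mathbf{v}\cdot\mathbf{b}}{\lVert\mathbf{b}\rVert}, \qquad
P_4 = P_1 + \frac{\mathbf{v}\cdot\mathbf{b}}{\lVert\mathbf{b}\rVert^{2}}\,\mathbf{b}, \qquad
D_2 = \lVert P_3 - P_4 \rVert .
```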
#### Invocation
CALL AST_RESOLVE( THIS, POINT1, POINT2, POINT3, POINT4, D1, D2, STATUS )
#### Arguments
##### THIS = INTEGER (Given)
Pointer to the Frame.
##### POINT1( $\ast$ ) = DOUBLE PRECISION (Given)
An array with one element for each Frame axis (Naxes attribute). This marks the start of the basis vector, and of the vector to be resolved.
##### POINT2( $\ast$ ) = DOUBLE PRECISION (Given)
An array with one element for each Frame axis (Naxes attribute). This marks the end of the basis vector.
##### POINT3( $\ast$ ) = DOUBLE PRECISION (Given)
An array with one element for each Frame axis (Naxes attribute). This marks the end of the vector to be resolved.
##### POINT4( $\ast$ ) = DOUBLE PRECISION (Returned)
An array with one element for each Frame axis in which the coordinates of the point of closest approach of the basis vector to point 3 will be returned.
##### D1 = DOUBLE PRECISION (Returned)
The distance from point 1 to point 4 (that is, the length of the component parallel to the basis vector). Positive values are in the same sense as movement from point 1 to point 2.
##### D2 = DOUBLE PRECISION (Returned)
The distance from point 4 to point 3 (that is, the length of the component perpendicular to the basis vector). The value is always positive.
##### STATUS = INTEGER (Given and Returned)
The global status.
#### Notes:
• Each vector used in this routine is the path of shortest distance between two points, as defined by the AST_DISTANCE function.
• This function will return "bad" coordinate values (AST__BAD) if any of the input coordinates has this value, or if the required output values are undefined.
An adjacency matrix can also represent a weighted graph. If a graph has n vertices, we use an n x n matrix adj: for an unweighted graph we mark adj[i][j] as 1 if there is an edge from vertex i to j, while for a weighted graph a common convention is adj[i][j] = w (the weight of the edge), with a sentinel such as positive infinity when no edge exists. For an undirected graph the matrix is symmetric.

Several libraries build graphs directly from such a matrix. In igraph, graph_from_adjacency_matrix operates in two main modes, depending on the weighted argument. If weighted is NULL, an unweighted graph is created and each element of the adjacency matrix gives the number of edges between the corresponding vertices; otherwise the elements are taken as edge weights. The type argument gives how to create the graph from the matrix for undirected graphs (upper: the upper right triangle of the matrix is used; lower: the lower left triangle; both: the whole matrix, which must then be symmetric) and is ignored for directed graphs. In the Wolfram Language, WeightedAdjacencyGraph[wmat] gives the graph with vertices v_i and weighted adjacency matrix wmat. networkx can consume an adjacency matrix as well (see to_numpy_matrix, or networkx.convert.to_dict_of_dicts for a dictionary-of-dictionaries form that can be addressed as a sparse matrix). DGLGraph.adjacency_matrix returns the adjacency matrix representation of a DGL graph; by default a row of the returned matrix represents the destination of an edge and the column represents the source, and for a MultiGraph/MultiDiGraph with parallel edges the weights are summed.

The trade-offs are the usual ones. The representation takes O(V^2) space no matter how many edges exist, which is why adjacency lists are the better choice for the sparse graphs found in most applications; but the matrix is very simple to implement, adding, removing, or checking an edge takes O(1) time, and it gives a good visual sense of the distribution of edge weights.
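As a concrete sketch of the representation (the class and method names here are my own invention, not from any of the libraries mentioned), a weighted undirected graph stored as a V x V matrix with positive infinity as the "no edge" sentinel:

```java
public class WeightedGraph {
    static final double NO_EDGE = Double.POSITIVE_INFINITY;
    private final double[][] adj;   // adj[i][j] = weight of edge i-j, or NO_EDGE

    WeightedGraph(int vertices) {
        adj = new double[vertices][vertices];
        for (double[] row : adj) java.util.Arrays.fill(row, NO_EDGE);
    }

    // O(1) insertion and lookup -- the main advantage of the matrix form.
    void addEdge(int i, int j, double w) { adj[i][j] = w; adj[j][i] = w; } // symmetric: undirected
    boolean hasEdge(int i, int j)        { return adj[i][j] != NO_EDGE; }
    double  weight(int i, int j)         { return adj[i][j]; }

    public static void main(String[] args) {
        WeightedGraph g = new WeightedGraph(4); // O(V^2) space regardless of edge count
        g.addEdge(0, 1, 2.5);
        g.addEdge(1, 3, 4.0);
        System.out.println(g.hasEdge(0, 1) + " " + g.weight(1, 3)); // true 4.0
    }
}
```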
# izumi-lab /electra-small-japanese-fin-generator
# ELECTRA small Japanese finance generator
This is a ELECTRA model pretrained on texts in the Japanese language.
The codes for the pretraining are available at retarfi/language-pretraining.
## Model architecture
The model architecture is the same as ELECTRA small in the original ELECTRA implementation; 12 layers, 256 dimensions of hidden states, and 4 attention heads.
## Training Data
The models are trained on the Japanese version of Wikipedia.
The training corpus is generated from the Japanese version of Wikipedia, using the Wikipedia dump file as of June 1, 2021.
The Wikipedia corpus file is 2.9GB, consisting of approximately 20M sentences.
The financial corpus consists of 2 corpora:
• Summaries of financial results from October 9, 2012, to December 31, 2020
• Securities reports from February 8, 2018, to December 31, 2020
The financial corpus file is 5.2GB, consisting of approximately 27M sentences.
## Tokenization
The texts are first tokenized by MeCab with IPA dictionary and then split into subwords by the WordPiece algorithm.
The vocabulary size is 32768.
## Training
The models are trained with the same configuration as ELECTRA small in the original ELECTRA paper except size; 128 tokens per instance, 128 instances per batch, and 1M training steps.
The size of the generator is the same as that of the discriminator.
## Citation
There will be another paper for this pretrained model. Be sure to check here again when you cite.
@inproceedings{suzuki2021fin-bert-electra,
title={金融文書を用いた事前学習言語モデルの構築と検証},
% title={Construction and Validation of a Pre-Trained Language Model Using Financial Documents},
author={鈴木 雅弘 and 坂地 泰紀 and 平野 正徳 and 和泉 潔},
% author={Masahiro Suzuki and Hiroki Sakaji and Masanori Hirano and Kiyoshi Izumi},
booktitle={人工知能学会第27回金融情報学研究会(SIG-FIN)},
% booktitle={Proceedings of JSAI Special Interest Group on Financial Informatics (SIG-FIN) 27},
pages={5-10},
year={2021}
}
## Acknowledgments
This work was supported by JSPS KAKENHI Grant Number JP21K12010.
Mask token: [MASK]
## MWC '15 #1 P5: Love Guru
View as PDF
Points: 7 (partial)
Time limit: 1.0s
Memory limit: 256M
Author:
Problem type
Allowed languages
Ada, Assembly, Awk, Brain****, C, C#, C++, COBOL, CommonLisp, D, Dart, F#, Forth, Fortran, Go, Groovy, Haskell, Intercal, Java, JS, Kotlin, Lisp, Lua, Nim, ObjC, OCaml, Octave, Pascal, Perl, PHP, Pike, Prolog, Python, Racket, Ruby, Rust, Scala, Scheme, Sed, Swift, TCL, Text, Turing, VB, Zig
Hypnova is a love guru. He has a mathematical way to determine the compatibility of two people. His method is as follows:
1. Take the names of the two people and perform the following process:
• Take the value of each letter (a = 1, b = 2, c = 3, etc.) and put it to the power of its position in the name (starting from 1).
• Sum these values together and take the sum mod 10, mapping a result of 0 to 10 so that each name's score lies in the range [1, 10].
2. Add these two values together to get the compatibility out of 20.
Hypnova needs you to make a program that calculates two people's compatibility based on the criterion described above. The program should not be case sensitive.
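With the mod step interpreted as reducing each name's sum mod 10 into [1, 10] (0 mapping to 10), which reproduces the sample below, a sketch of such a program in Java (using modular exponentiation so long names cannot overflow):

```java
public class LoveGuru {
    // (v^p) mod 10 by repeated multiplication, keeping intermediate values small.
    static int powMod10(int v, int p) {
        int r = 1;
        for (int i = 0; i < p; i++) r = (r * v) % 10;
        return r;
    }

    // Score of one name: sum of letter-value^position, reduced mod 10 into [1, 10].
    static int nameScore(String name) {
        String s = name.toLowerCase();           // the check is not case sensitive
        int sum = 0;
        for (int i = 0; i < s.length(); i++) {
            int value = s.charAt(i) - 'a' + 1;   // a = 1, b = 2, c = 3, ...
            sum = (sum + powMod10(value, i + 1)) % 10;
        }
        return sum == 0 ? 10 : sum;              // map 0 to 10 so the range is [1, 10]
    }

    static int compatibility(String a, String b) {
        return nameScore(a) + nameScore(b);
    }

    public static void main(String[] args) {
        System.out.println(compatibility("Romeo", "Juliet")); // 15, as in the sample
    }
}
```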
#### Input Specification
Given names and , find their compatibility out of 20. The names are guaranteed to consist of only latin letters and letters in length.
#### Output Specification
A number in the range [2, 20].
#### Sample Input
Romeo
Juliet
#### Sample Output
15
#### Explanation for Sample Output
Romeo: 18^1 + 15^2 + 13^3 + 5^4 + 15^5 = 762440; 762440 mod 10 = 0, which maps to 10.
Juliet: 10^1 + 21^2 + 12^3 + 9^4 + 5^5 + 20^6 = 64011865; 64011865 mod 10 = 5.
Compatibility: 10 + 5 = 15
• commented on March 17, 2016, 11:56 p.m.
Hypnova sensei, I'm lonely can you set me up? pls <3 <3
• commented on March 19, 2016, 4:28 p.m.
This comment is hidden due to too much negative feedback. Click here to view it.
• commented on March 19, 2016, 5:52 p.m.
Wrong Ryan!
• commented on March 20, 2016, 11:46 a.m.
So it's Ryan Jiang?
• commented on March 18, 2016, 4:30 p.m.
Me too
• commented on March 16, 2016, 11:01 a.m.
Isn't 10 mod 10 0?
• commented on March 16, 2016, 5:29 p.m. edited
Yes, but the problem statement says:
...mod the number by 10 into the range 1 to 10
• commented on March 14, 2016, 11:27 a.m.
If both names have a compatibility of 1, the minimum output should be 2.
• commented on March 16, 2016, 5:27 p.m.
Right. Problem statement updated.
|
{}
|
# Socket timeout
Have you run into a problem that happens only once in a while, and you never seem to have enough information to figure it out? We all have. This is one such tale: except that it has a happier ending.
### EPIPE
We run Hadoop for its distributed file system, HDFS. We have a program that reads data over the network, and writes it out to HDFS. How exactly does it write to HDFS? This is by connecting to the local HDFS “DataNode” process over a TCP socket, and issuing RPC calls.
Now, this is a newly written program (in Go) and let’s call it Writer. We gave it out for testing, and it would just die seemingly at random. In one instance, it had exited with a message like this:
2016/06/27 19:29:23 Writer.go:102: error writing to hdfs:
write tcp 10.106.76.14:60008->10.106.76.14:50010: write: broken pipe
(I have broken lines that were too long to display.)
Note the message. “Broken pipe” comes from the Unix error code EPIPE, which means we’re trying to write on a connection that has gone away.
On the HDFS DataNode, there was an exception logged as well:
2016-06-27 19:25:32,484 INFO org.apache.hadoop.hdfs.server.datanode.DataNode:
Exception for BP-1801848907-10.106.76.13-1466980424005:blk_1073741857_1088
java.net.SocketTimeoutException: 60000 millis timeout while waiting for channel
local=/10.106.76.14:50010 remote=/10.106.76.14:60008]
at java.io.BufferedInputStream.fill(BufferedInputStream.java:235)
But this message was logged almost 4 minutes before the Writer tried to write. There cannot be any network issues or clock skew, because both processes are running on the same system.
I looked at Hadoop sources and found something interesting. When you ask Hadoop to write to HDFS over RPC, it is handled by a ‘worker’ thread listening on a socket. This thread simply puts itself to sleep waiting for data to come in (‘blocking read’). It also sets a timeout for 60 seconds, and if no data comes in within this period, it will just terminate.
Accordingly, the log continued:
2016-06-27 19:25:32,511 INFO org.apache.hadoop.hdfs.server.datanode.DataNode:
PacketResponder: BP-1801848907-10.106.76.13-1466980424005:blk_1073741857_1088,
type=LAST_IN_PIPELINE, downstreams=0:[] terminating
So, as per design, the thread went away at 19:25. Four minutes later, we tried to write to it and of course, it failed. The question is: what caused this timeout? Why did we stop writing to HDFS and go idle, and what caused us to wake up after four minutes?
Our first theory was that there was a network issue. But both processes were running on the same system. Maybe there was a problem connecting to the HDFS NameNode, which was running on a different system? I found that hard to believe. The log clearly does not indicate any such problem.
Another theory I had in mind was if the thread could not be run because the system was busy. But it’s hard to believe that a thread won’t get CPU time for 60 seconds either. What about that villain that always gets blamed: Garbage Collector? Sixty seconds for the light load we have? No way!
Time to go back to our logs and see if we can dig out anything more.
### Idle or not?
Our Writer logs all writes, and there were successful writes logged up until before the crash. This led me up the wrong alley for a long time, because I thought the Writer was not really idle. After a lot of wasted effort, I looked closer and asked: What if the writes are on other connections? What if this particular connection was idle for a long time? And this is what the log showed:
2016/06/27 19:24:46 Writer.go:116: <connection_id> <page_no>
It checks out: this particular connection received its data last at 19:24:46. It went idle after that for a few minutes. The DataNode thread exited 46 seconds later. Its timeout is set for 60 seconds per the log, so did the timer fire early? No. Our Writer buffers data before it writes to HDFS, and its chunk size is 64 KB. The input streams at about 8 KB every 7 seconds, so it's very likely that it was still waiting for the buffer to fill. In other words, the Writer has logged that it got a call to Write, but it only wrote to its buffer. Its last write to HDFS was a few seconds before that.
So the DataNode thread is gone, and now data starts flowing in again, 4 minutes later. Let me show the log message again:
2016/06/27 19:29:23 Writer.go:102: error writing to hdfs:
write tcp 10.106.76.14:60008->10.106.76.14:50010: write: broken pipe
It seems obvious that we are using a stale socket handle. OK, we can fix that. There is still a mystery on what happened to the data. Four minutes seems like the time it takes to restart a system, and we can verify that from the source system logs.
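The shape of the fix can be sketched in a few lines of Python (the actual Writer is in Go, and these names are illustrative): treat EPIPE or ECONNRESET on a write as a stale handle, redial once, and resend.

```python
import errno
import socket

def write_with_reconnect(sock, data, redial):
    """Send data; if the peer has gone away (EPIPE/ECONNRESET),
    drop the stale socket, redial once, and resend."""
    try:
        sock.sendall(data)
        return sock
    except OSError as e:
        if e.errno not in (errno.EPIPE, errno.ECONNRESET):
            raise
    # Stale handle: the receiver's worker thread timed out long ago.
    sock.close()
    fresh = redial()
    fresh.sendall(data)
    return fresh
```

Resending the whole buffer is only safe if nothing was partially written before the failure; a production version would have to track offsets or rewrite the block from a known boundary.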
### Sit back, relax and restart
This bit was the easiest. After factoring in the time difference, the data source system’s clock was ahead of our system by almost 11 hours:
Our system:
Writer$ date
Wed Jun 29 12:23:53 UTC 2016
Origin system (UTC-4):
Source$ date
Wed Jun 29 20:19:43 AST 2016
So if I had to look at its logs, it would be 10 hours 56 minutes after June 27 7:24 PM, which is somewhere around June 28, 6:20 AM on our system.
This is what I found:
Jun 28 06:24:59 <user.crit> <hostname> reboot: rebooted by root
Jun 28 06:25:02 <syslog.err> <hostname> syslogd: exiting on signal 15
So the system was restarted around the time we expect. To be sure, it is 4 minutes later than what we expect, but again, I'll blame that on clock skew - these systems are not wired to NTP yet. The log also showed that the restart took all of 4 minutes, and once it was back up, data started flowing in.
So there we have it. We did not drop any data, but the source stopped sending it. By the time it came back and started sending again, our receiver had already kicked us out. Our program tried to use this dead connection and got itself shot.
|
{}
|
#### Archived
This topic is now archived and is closed to further replies.
# editbox control/scrolling in winAPI
This topic is 5922 days old which is more than the 365 day threshold we allow for new replies. Please post a new topic.
## Recommended Posts
Kinda n00b-ish at winAPI, but is there any way to simply append strings to an editbox control like you can with a listbox? I couldn't find a way to do so, since the SetDlgItemText() function overwrites the current value with whatever it's sent. So my temporary solution was to push back every new string onto an array of strings, then loop through, add each to one large, cumulative string with newlines etc. added, and send it to the function. (This can't be the best way to do that....) Now, since it just gets a huge string that increases in size, the scrollbar resets to the top each time instead of staying current with the most recently "appended" line of text like I want. Is there a better way, or am I gonna have to try to reset the scroll bar to maximum every time?
##### Share on other sites
Ok - this is a bit strange for a noobie as it requires that you send messages to the edit control:

DWORD start, finish;
char* to_add = " this is added";
HWND hEdit = GetDlgItem(m_hWnd, IDC_EDIT1);
// first find the end of the line
SendMessage(hEdit, EM_SETSEL, 0, -1);
SendMessage(hEdit, EM_GETSEL, (WPARAM)&start, (LPARAM)&finish);
// set the current selection to the end
SendMessage(hEdit, EM_SETSEL, finish, finish);
// insert the text
SendMessage(hEdit, EM_REPLACESEL, 0, (LPARAM)to_add);

Hope this helps; if not, then you really need to look at the Windows messages starting with EM_.
##### Share on other sites
No, it's not strange at all ^_^. I'm familiar with message-sending basics and EM_ messages etc., but looking through those and the explanations offered I became kinda confused. (I was almost there too.) But that works great, thanks a lot.
|
{}
|
## How do I efficiently implement a POVM using a fixed universal gate set and the ability to measure in the standard basis?
2
Let's say I am given a Hamiltonian
$$H = \sum_{i = 1}^{m} H_{i},$$
where $$H$$ acts on $$n$$-qubits, and each $$H_{i}$$ acts non-trivially on at most $$k$$ qubits. The eigenvalues of $$H$$ are between $$0$$ and $$1$$. As can be seen, $$H$$ is a $$k$$-local Hamiltonian. Now, let's say I am given any quantum state $$|\psi\rangle$$ over $$n$$ qubits. I want to implement the POVM
$$\{H_{i}, \mathbb{1} - H_{i}\}.$$
1. How do I implement this POVM using a fixed universal gate set and the ability to measure in the standard basis? What is the unitary that I have to apply before measuring in the standard basis and how much error can I tolerate?
2. What is the guarantee this implementation is efficient?
3. Is there any rule regarding when implementing such POVMs is efficient?
3
What is the guarantee this implementation is efficient? Is there any rule regarding when implementing such POVMs is efficient?
The implementation of such a gate will only depend on the parameter $$k$$ (which I assume you mean to be fixed), not $$n$$. Since efficiency is generally phrased in terms of scaling with $$n$$, and you have no dependence on that, it is efficient.
How do I implement this POVM using a fixed universal gate set and the ability to measure in the standard basis? What is the unitary that I have to apply before measuring in the standard basis
Let $$H_i=UDU^\dagger$$, where $$D$$ is diagonal (with entries between 0 and 1 on the diagonal) and $$U$$ is a unitary. Apply $$U^\dagger$$ to the appropriate set of qubits. This now reduces you to the problem of performing the measurement $$\{D,1-D\}$$.
You'll need to introduce a single ancilla qubit, prepared in the $$|0\rangle$$ state. It is this ancilla that you will measure in the computational basis, with the two outcomes corresponding to the two different measurement operators. But before that, we need to construct a unitary between the original system (S) and the ancilla (A). Let $$D=\sum_id_i|i\rangle\langle i|$$, and let $$V|i\rangle_S|0\rangle_A=\sqrt{d_i}|i\rangle|0\rangle+\sqrt{1-d_i}|i\rangle|1\rangle$$. You can decompose this unitary via standard techniques. Apply $$V$$, and measure the ancilla.
To see that this works, let your input state be $$|\psi\rangle=U\sum_i\alpha_i|i\rangle$$. You should get the first measurement outcome with probability $$\langle\psi|H_i|\psi\rangle=\sum_i|\alpha_i|^2d_i.$$ This is what we need to check that we get. So, our simulation first applies $$U^\dagger$$, so we have $$\sum_i\alpha_i|i\rangle_S|0\rangle_A.$$ We apply $$V$$ to prepare $$|\Psi\rangle=\sum_i\alpha_i|i\rangle_S(\sqrt{d_i}|0\rangle_A+\sqrt{1-d_i}|1\rangle_A).$$ We calculate the probability of the 0 outcome: $$\langle\Psi| 1_S\otimes|0\rangle\langle 0|_A|\Psi\rangle=\sum_i|\alpha_i|^2d_i,$$ as required.
Note that I've not worried about the state after the measurement because you've only specified a POVM, which immediately implies you're only interested in the measurement probability, not the output state.
and how much error can I tolerate?
This depends on what you mean, and is probably an entirely separate question to do justice to.
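To make the construction concrete, here is a small NumPy sketch (mine, not from the answer above) checking numerically that the ancilla dilation reproduces the outcome probability ⟨ψ|H_i|ψ⟩; the function names are illustrative:

```python
import numpy as np

def exact_probability(H_i, psi):
    # Pr(first outcome) for the POVM {H_i, 1 - H_i}: <psi|H_i|psi>
    return float(np.real(psi.conj() @ H_i @ psi))

def dilation_probability(H_i, psi):
    # Diagonalise H_i = U D U^dagger, apply U^dagger to psi, attach an
    # ancilla |0>, apply V|i>|0> = sqrt(d_i)|i>|0> + sqrt(1-d_i)|i>|1>,
    # and return the probability of measuring the ancilla in |0>.
    d, U = np.linalg.eigh(H_i)       # eigenvalues d_i in [0, 1]
    amps = U.conj().T @ psi          # alpha_i in the eigenbasis of H_i
    return float(np.sum(d * np.abs(amps) ** 2))
```

Running both functions on a random Hermitian operator with spectrum in [0, 1] and a random normalised state gives the same number, which is the content of the derivation above.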
Regarding the error, what I meant is, when I am decomposing the unitaries with gates from my universal gate set (comprising H, S, T, and the CNOT gate, let's say), what is the trade-off between the error incurred and the size of the circuit? – BlackHat18 – 2020-10-12T09:56:28.950
Also, what is the cost (in terms of circuit size) of implementing the unitary $U$? Why do we assume it is not high? – BlackHat18 – 2020-10-12T10:09:47.960
Because everything only acts on $k$ qubits. It might be exponentially large in $k$, but if $k$ is fixed as $n$ scales, we don't care about that. – DaftWullie – 2020-10-12T10:14:27.780
Regarding the issue of error, why is this different to any standard "build a unitary" task, where we know the optimal solution and how the error scales (run time is $\log(1/\epsilon)$ for each single qubit gate) – DaftWullie – 2020-10-12T10:15:35.717
Thanks! It's clear now. One thing though, in the analysis, I am not quite sure why you used the state $|\psi\rangle=U\sum_i\alpha_i|i\rangle$ for your analysis. Won't the analysis work for states like $|\psi\rangle=\sum_i\alpha_i|i\rangle$? – BlackHat18 – 2020-10-12T10:35:20.020
Yes, it just made it easier because I knew the first thing I would do was apply $U^\dagger$. But either form is a completely arbitrary state. – DaftWullie – 2020-10-12T13:02:49.380
|
{}
|
#### Volume 16, issue 3 (2012)
All finite groups are involved in the mapping class group
### Gregor Masbaum and Alan W Reid
Geometry & Topology 16 (2012) 1393–1411
##### Abstract
Let ${\Gamma }_{g}$ denote the orientation-preserving mapping class group of the genus $g\ge 1$ closed orientable surface. In this paper we show that for fixed $g$, every finite group occurs as a quotient of a finite index subgroup of ${\Gamma }_{g}$.
##### Keywords
mapping class group, finite quotient, representation, Zariski dense subgroup
Primary: 20F38
Secondary: 57R56
|
{}
|
Bush crickets have long, thin hind legs but jump and kick rapidly. The mechanisms underlying these fast movements were analysed by correlating the activity of femoral muscles in a hind leg with the movements of the legs and body captured in high-speed images.
A female Pholidoptera griseoaptera weighing 600 mg can jump a horizontal distance of 300 mm from a takeoff angle of 34° and at a velocity of 2.1 m s-1, gaining 1350 μJ of kinetic energy. The body is accelerated at up to 114 m s-2, and the tibiae of the hind legs extend fully within 30 ms at maximal rotational velocities of 13 500 deg. s-1. Such performance requires a minimal power output of 40 mW. Ruddering movements of the hind legs may contribute to the stability of the body once the insect is airborne. During kicking, a hind tibia is extended completely within 10 ms with rotational velocities three times higher at 41 800 deg. s-1.
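These figures are internally consistent; a quick back-of-envelope check (mine, not from the paper) of the kinetic energy and minimal mean power from the quoted mass, takeoff velocity and extension time:

```python
mass = 600e-6   # kg (600 mg female Pholidoptera)
v = 2.1         # m/s, takeoff velocity
t = 30e-3       # s, time over which the hind tibiae extend

ke = 0.5 * mass * v ** 2   # kinetic energy gained, J
power = ke / t             # minimal mean power output, W

print(f"{ke * 1e6:.0f} uJ, {power * 1e3:.0f} mW")  # prints: 1323 uJ, 44 mW
```

This lands close to the quoted ~1350 μJ and ~40 mW; the small differences come from rounding in the published values.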
Before a kick, high-speed images show no distortions of the hind femoro-tibial joints or of the small semi-lunar groove in the distal femur. Both kicks and jumps can be generated without full flexion of the hind tibiae. Some kicks involve a brief, 40-90 ms, period of co-contraction between the extensor and flexor tibiae muscles, but others can be generated by contraction of the extensor without a preceding co-contraction with the flexor. In the latter kicks, the initial flexion of the tibia is generated by a burst of flexor spikes, which then stop before spikes occur in the fast extensor tibiae (FETi) motor neuron. The rapid extension of the tibia can follow directly upon these spikes or can be delayed by as long as 40 ms. The velocity of tibial movement is positively correlated with the number of FETi spikes.
The hind legs are 1.5 times longer than the body and more than four times longer than the front legs. The mechanical advantage of the hind leg flexor muscle over the extensor is greater at flexed joint angles and is enhanced by a pad of tissue on its tendon that slides over a protuberance in the ventral wall of the distal femur. The balance of forces in the extensor and flexor muscles, coupled with their changing lever ratio at different joint positions,appears to determine the point of tibial release and to enable rapid movements without an obligatory co-contraction of the two muscles.
Many insects have evolved elaborate mechanisms for displacing their body rapidly away from potential predators or as a means of increasing their forward speed of locomotion. The forces underlying these movements can be generated by different parts of the body. Click beetles jackknife their bodies at the junction between the pro- and mesothorax (Evans, 1972, 1973) whereas some other species of insects use their abdomens. The bristletail Petrobius depresses its tail (median caudal filament and paired cerci) and pivots on the end while contracting the anterior part of the abdomen, thereby raising and propelling the body rapidly forwards (Evans, 1975). Springtails rapidly unfurl a springing organ (the manubrium) at the posterior end of their abdomen to propel themselves forward (Brackenbury and Hunt, 1993). Some stick insects swing their abdomens forwards and then backwards in a jump, supplementing the momentum so generated with thrust produced by extension of the middle and hind pairs of similarly sized legs (Burrows and Morris, 2002). In fleas, the hind legs alone provide the thrust for jumping (Bennet-Clark and Lucey, 1967; Rothschild et al., 1972). Similarly, in orthopteran insects the legs have become the main propulsive mechanism, with the hind pair of legs showing the greatest specializations, most obviously in their large size relative to the others. The main thrust in these insects is provided by a rapid, co-ordinated extension of the two hind tibiae. Furthermore, rapid and powerful movements of an individual hind leg can also be used in defensive kicking.
Locusts are able to jump a horizontal distance of approximately one metre with a takeoff velocity of 3.2 m s-1 (Bennet-Clark, 1975). The 1.5-2 g body is accelerated in 20-30 ms (Brown, 1967), requiring 9-11 mJ of energy. Three specializations of the hind legs enable this performance. First, the mechanical arrangements of the lever arms of the large extensor and smaller flexor tibiae muscles give maximal advantage to the flexor when the tibia is fully flexed and to the extensor when the tibia extends (Heitler, 1974). The mechanical advantage of the flexor is further increased by an invagination, or femoral lump, of the distal ventral wall of the femur over which the tendon of the flexor tibiae muscle slides and becomes locked when the tibia is fully flexed (Heitler, 1974). Second, energy generated by muscular contractions must be stored before the rapid extension of the tibia. About half the energy is stored in distortions of the spring-like semi-lunar processes at the femoro-tibial joint (Bennet-Clark, 1975; Burrows and Morris, 2001) while the remainder is stored in the extensor tendon and cuticle of the femur. Third, a complex motor pattern is needed to generate the appropriate sequence of contractions by the muscles (Burrows, 1995; Godden, 1975; Heitler and Burrows, 1977). The main feature of this pattern is a period of co-contraction between the flexor and extensor muscles that can last several hundred milliseconds, during which the tibia remains fully flexed about the femur. The flexor motor activity is then inhibited, allowing the stored force generated by the large extensor muscle to overcome the lock and the tibia to extend rapidly.
Analyses of other orthopterans reveal variations on these mechanisms and suggest possible ways in which the specializations for jumping might have evolved. Prosarthria teretrirostris, a false stick insect, jumps by using a very similar motor pattern to a locust that also involves a long period of co-contraction (Burrows and Wolf, 2002). It does not, however, use its much reduced semi-lunar processes to store energy but instead stores some energy in bending its curved hind tibiae. The femoral lump is also much reduced so that the mechanics of the joint are dependent on the changing lever ratios of the two muscles as the femoro-tibial joint rotates. House crickets (Acheta domesticus) have a variable motor pattern for kicking in which the period of co-contraction is much reduced (Hustert and Gnatzy, 1995). The pattern consists of a 12-20 ms period of so-called 'dynamic' co-contraction of the flexor and extensor muscles as the tibia is pulled into a flexed position, followed by a 'static' co-contraction of 3-12 ms when a few fast extensor motor spikes occur and during which the tibia does not move. The femoral lump is also small but its effect on the lever of the flexor muscle is enhanced by a soft pad of tissue on the tendon.
Bush crickets (Tettigoniidae), like locusts, kick and jump in defensive actions, in escape and as a means to take off into flight. Their similar body design might suggest that they use similar adaptations and mechanisms to effect these movements. We have, therefore, analysed their jumping and kicking movements with high-speed imaging to correlate the underlying motor activity with the joint movements of the hind legs. We show that the mechanisms are different from those in other orthopteran insects: the rapid tibial extensions in jumping and kicking do not require an initial full flexion; semi-lunar processes are not bent; and kicking movements can be generated without apparent co-contraction of the flexor and extensor tibiae muscles.
Bush crickets were collected locally around Cambridge, UK: Pholidoptera griseoaptera (de Geer), the dark bush cricket (N=20); Leptophyes punctatissima (Bosc), the speckled bush cricket (N=2); Meconema thalassinum (de Geer), the oak bush cricket (N=4); and Conocephalus dorsalis (Latreille), the short-winged cone-head (N=7) (Fig. 1). They are orthopteran insects belonging to the suborder Ensifera (which also contains the true crickets and mole crickets), to the superfamily Tettigonioidea and the family Tettigoniidae. The descriptions and analyses are focused on Pholidoptera but are supplemented with observations on the other species to indicate the generality of the morphological features and physiological mechanisms. The morphology and mechanics of the femoro-tibial joint of the hind legs were analysed from photographs and drawings. Intact legs and legs cleared by boiling in 5% potassium hydroxide were also examined.
Fig. 1.
Photographs of adult females of the four species of bush cricket used in this study: (A) Pholidoptera griseoaptera; (B) Leptophyes punctatissima; (C) Meconema thalassinum and (D) Conocephalus dorsalis. Scale bars, 10 mm (A—C); 15 mm (D).
Images of jumping or kicking movements were captured with a high-speed camera (Redlake Imaging, San Diego, CA, USA) and associated computer at rates of 500-1000 s-1 with exposure times of 0.25-1 ms. In the figures,these images, and the measurements made from them, are aligned from the time when the tibia of a hind leg reached full extension in a kick or the animal first became airborne in a jump (t=0 ms). Selected images were stored as computer files for later analysis with the Motionscope camera software(Redlake Imaging) or with Canvas (Deneba Systems Inc., Miami, FL, USA). Jumping performance was measured in a circular arena with the insects jumping from the centre outwards.
Pairs of 50 μm stainless steel wires, insulated except for their tips, were inserted into the flexor and extensor tibiae muscles of a hind leg to record muscle activity during kicking movements. Crosstalk between the recordings from the extensor and flexor tibiae muscles was common because they are close together within a confined space, but identification of flexor and extensor motor neurons could still be achieved by comparing the relative amplitudes of their potentials at the two recording sites. In recordings from the extensor muscle during kicking and jumping, a prominent motor neurone was recorded that generated potentials very much larger than any others. This motor neurone was not active during slower movements of the tibia. In other acridids, gryllids (Wilson et al., 1982) and phasmids (Bassler and Storrer, 1980), two excitatory motor neurones innervate this muscle, one of which, the fast extensor tibiae (FETi), has the properties we observe here (Hoyle and Burrows, 1973). We have, therefore, tentatively called this motor neurone FETi. The electrical recordings were written directly to a computer with a CED (Cambridge Electronic Design, Cambridge, UK) interface running Spike2 software and sampling each trace at 5 kHz. They were synchronised with the video images on a second computer by pressing a hand switch to generate 1 ms pulses. These pulses were fed to a separate channel of the CED interface and simultaneously triggered light pulses on the images, thereby allowing movements and muscle activity to be correlated with the resolution of one image (1 ms or 2 ms).
All experiments were performed at temperatures of 27-36°C, with the lower temperatures used when recording muscle activity during kicking movements and the higher temperatures used when capturing images of jumping.
### Body structure
The characteristic features of all the bush crickets examined were the very long hind legs, long antennae and long ovipositors in females, which exaggerate the sexual dimorphism in body structure (Fig. 1). Adult female Pholidoptera had a mass of 602±42 mg (mean ± S.E.M., N=8), making them 45% heavier than adult males (mass 415±20 mg, N=12), and, with a body length of 33.2±0.8 mm (including the ovipositor), 53% longer than adult males (21.6±0.6 mm) (Table 1). The femur of a hind leg was more than four times the length of the femur of a front leg and more than three times the length of a middle femur, giving a ratio of femora lengths of 1:1.2:4.2 for the front, middle and hind legs, respectively. In the other bush crickets, a hind femur was at least twice the length of a front femur. In Pholidoptera, the hind femur was longer than the hind tibia, but in Leptophyes the reverse was true. In all bush crickets, except for female Conocephalus, a hind leg (tibia plus femur) was 120-160% of the body length. This means that the hind legs of bush crickets are longer relative to body length than in other jumping insects examined (Table 1). For example, in true crickets (e.g. Gryllus bimaculatus) the hind legs are 59-69% of the length of the body, in locusts (e.g. Schistocerca gregaria) they are about the same length as the body and in a non-jumping stick insect (e.g. Carausius morosus) they are much shorter at only 39% of body length.
Table 1.
Body form in jumping insects
Columns: Insect (N) | Body: mass (mg), length§ (mm) | Hind leg tibia: length (mm) | Hind leg femur: length (mm), max. width (mm), min. width (mm) | Ratio of femur lengths: front, middle, hind | Hind leg length as % of body length
Bush crickets Pholidoptera ♀ (8) 602±42 23.2(+10)±0.8 17.8±0.3 18.7±0.4 3.5 0.8 1.2 4.6 158
Pholidoptera ♂ (12) 415±20 21.6±0.6 15.6±0.2 17.1±0.2 3.2 0.8 1.2 4.2 152
Meconema ♀ (4) 174 18(+10) 11.5±0.5 10±0.5 1.9 0.5 2.1 125
Conocephalus ♀ (3) 250±15 22(+9)±1 11±0.5 10.5±0.5 0.5 2.1 95
Conocephalus ♂ (4) 130±5 17±1.1 10 10 0.5 1.1 3.8 118
Leptophyes ♀ (2) 285 20(+5) 17 15 0.6 1.2 2.7 160
Cricket Gryllus bimaculatus ♀ (4) 889±65 34±1 9±0.5 11±0.5 1.5 2.4 59
Gryllus bimaculatus ♂ (4) 738±24 29±2 9±0.5 11±0.4 1.5 2.4 69
Flea* Spilopsyllus cuniculus 0.45 1.5 0.4 0.45 1.5 57
Locusts Schistocerca gregaria ♀ (5) 2000±40 47.2±1.4 21.8±0.2 21.8±0.2 4.8 1.8 1.2 3.2 93
Schistocerca gregaria ♂ (5) 1600±35 41.4±1.2 20.2±0.4 20.2±0.4 4.6 1.8 1.2 3.2 98
Anacridium ♀ (6) 3550±58 44.3±0.5 18.9±0.4 20.9±0.4 4.6 2.0 1.2 3.4 90
Anacridium ♂ (7) 1840±44 58.4±0.4 24.6±0.9 27.6±0.9 5.6 2.3 1.2 3.9 89
False stick insect Prosarthria teretrirostris ♀ (16) 1540±12 104.4±1.4 34.6±0.4 32.8±0.5 2.1 1.0 2.6 62
Prosarthria teretrirostris ♂ (10) 280±10 67.5±0.8 26.1±0.4 24±0.3 1.6 0.6 0.9 2.1 71
Stick insects Sipyloidea sp. ♀ (18) 924±37.8 92±0.9 21±0.5 21±0.5 1.0 0.6 0.7 46
Sipyloidea sp. ♂ (10) 164±4.6 65±0.5 21±0.3 21±0.3 0.5 0.4 0.7 64
(Non-jumping) Carausius morosus ♀ (10) 1100±4 78±0.15 17 17 0.7 0.9 39
Numbers are means ± S.E.M.
*
Bennet-Clark and Lucey (1967).
Burrows and Wolf (2002).
Burrows and Morris (2002).
§
Numbers in brackets represent length of ovipositor.
Length of ovipositor excluded; values calculated from individuals and then mean taken.
A hind femur of Pholidoptera is broad (3.5 mm) at the proximal end, which contains the main body of the flexor and extensor tibiae muscles, but tapers at 60% of its length to about a quarter of this width (0.8 mm) and continues at this diameter to the femoro-tibial joint. In cross section, the femur is almost oval, with the large extensor muscle occupying a cross-sectional area of approximately 4.4 mm2 in females and the smaller flexor muscle an area of 1.08 mm2. By contrast, the tibia has a uniform tubular construction with a diameter of 0.6 mm in its dorso-ventral axis along its length. On the dorsal surface are two rows, each of 23-26 spines with decreasing spacing distally, and on the ventral surface are two rows, each of 9-11 smaller spines. The tarsi of the hind legs, but not the middle and front legs, have two proximal flanges that point ventrally and may assist the grip on fine twigs or stems.
### Structure of femoro-tibial joint
The tibia of a hind leg of Pholidoptera can be moved approximately 165° about the femur. The femoro-tibial joint itself shows few external specializations compared with the middle and front legs. The cuticle of the distal femur is deeply grooved on both the medial and lateral surfaces, separating the body of the femur from the ventral flanges or coverplates (Fig. 2A). These semi-lunar shaped grooves are not heavily sclerotised and did not bend during kicking or jumping. At their distal extreme, the thickened cuticle forming these grooves turns inwards to form two flat edges that abut against similar edges on the tibia to form the pivot of the joint articulation (Fig. 2B). Each apposed surface is approximately 400 μm wide, and the two surfaces together make up 50% of the width of the femur at this level (Fig. 2B). A single distally protruding spine is present on the medial (posterior or inner) coverplate but not on the lateral coverplate.
Fig. 2.
Anatomy of the femoro-tibial joint of a left hind leg of an adult female Pholidoptera. (A) Photograph of the posterior (= medial) surface. The distal femur has a semi-lunar shaped groove. The flexor tibiae tendon inserts around a V-shaped rim of the ventral tibia. (B) A cleared leg viewed ventrally with the tibia almost flexed about the femur to show the black and opposed, flat regions of the femur and tibia that form the hinge joint. (C) The same leg as in B, viewed as in A to reveal the tendons of the flexor and extensor tibiae muscle and the apodeme of the femoral chordotonal organ (FeCO). The extensor tendon expands distally before inserting on the dorsal rim of the tibia. (D) Tracings from camera lucida drawings of the position of the tibia as it rotates about the femur. Nine positions of the tibia are shown as black lines with an outline of the tibia superimposed on a thicker line in position 2. The insertions of the flexor (red) and extensor (green) tibiae tendons are indicated. The structural reinforcing elements of the distal femur are in blue. Measurements made from this drawing were used to estimate the flexor and extensor lever arms at different joint angles, shown in the inset graph.
The ventral surface of the femur at its distal end is invaginated to form a lump that protrudes dorsally into the femur, representing 130 μm (13.1±1.0%; N=5) of the thickness of the femur at this level (Fig. 2C). The tendon of the flexor tibiae muscle has a pad of soft tissue on its ventral surface (Fig. 2D). The tendon of the extensor tibiae muscle broadens to insert on the U-shaped rim of the dorsal tibia about 250 μm from the joint pivot. The flexor tendon inserts around the V-shaped rim in the ventral tibia about 400 μm from the joint pivot (Fig. 2B-D).
At extended joint angles, the extensor muscle has a larger lever than the flexor muscle because the tendon inserts dorsal to the pivot whereas the line of action of the flexor muscle runs almost through the pivot (Fig. 2D). At flexed joint angles, the flexor has the greater lever, with the line of action of the extensor acting almost through the pivot. At the most flexed positions, the lever arm of the flexor is boosted because the soft pad of tissue on its tendon must ride over the ventral invagination in the femur. Morphological inspection suggests that the flexor lever arm exceeds the extensor lever arm for all joint angles up to 100° (Fig. 2D, inset). The functional lever ratio is, however, made more complex by the flexible and distributed sites of attachments of the two tendons on the tibia, as in locusts (Heitler, 1974) and Prosarthria (Burrows and Wolf, 2002). Comparison of the functional and morphological lever ratio in Prosarthria has shown that the latter tends to underestimate the extensor lever arm at all joint angles and to underestimate the flexor arm at extended tibial positions whilst overestimating it at flexed positions (Burrows and Wolf, 2002). Even when taking this into account, it is clear that the flexor and extensor lever arms balance at much more extended tibial positions in bush crickets (approximately 100°) than in Prosarthria (approximately 55°). Similar properties of the femoro-tibial joint of hind legs were found in the other species of bush cricket examined.
### Kicking
A bush cricket could direct a rapid kick of one hind leg on its own (Fig. 3A) or both hind legs together (Fig. 3B) towards an object from a free-standing posture. The first movement of a hind leg was a forward rotation at the joint between the coxa and the body so that the tarsus was lifted from the ground. The tibia was then flexed about the femur, before being unfurled rapidly to its fully extended position. In some kicks, the initial flexion of the tibia about the femur was complete so that the tibia was fully pressed against the femur along its length (Fig. 3A). Complete flexion of the tibia about the femur was not, however, a prerequisite, and kicks could be generated from an initial femoro-tibial angle of approximately 30° (Fig. 3B).
Fig. 3.
Kicking movements of a female Pholidoptera illustrated by selected frames captured at 1000 frames s-1. (A) A kick by the right hind leg. At -37 ms, this leg was touched by a small paint brush (top left of frame). The right hind leg rotated forwards so that the tarsus was lifted from the ground and the tibia was fully flexed about the femur (-14 ms). From this position, the tibia was extended rapidly while the left hind leg remained in a constant position. Full extension was achieved at 0 ms. (B) A kick by both hind legs. At -46 ms, the ovipositor was touched by the paint brush and both hind legs rotated forwards but the tibiae were not fully flexed (-12 ms). From this position, the tibiae of both legs were then extended rapidly, with the right hind leg reaching full extension at 0 ms.
The time taken for the tibia to extend fully in a kick and the maximum angular velocity of the tibial movements varied considerably in different kicks by the same animal and by different animals of the same species. Some kicks took only 7 ms from the start of tibial extension until maximum extension, while others took 25 ms (Fig. 4A). The mean time for tibial extension in male Pholidoptera was 10.1±0.72 ms (n=11 kicks by five animals) and in females was 11.1±0.98 ms (n=16 kicks by four animals) (Table 2). In Meconema, tibial extension was faster, taking only 6.8±0.86 ms (n=10 kicks by four animals). The maximal rotational velocity of the tibia during extension in Pholidoptera ranged from 8000 deg. s-1 to 65 000 deg. s-1. In males, the mean maximal velocity (26 400±3500 deg. s-1; n=27 kicks) was lower than in females (41 800±3200 deg. s-1, n=4 kicks) although both could achieve comparable maximum velocities. In the fastest kicks by either sex, the inertial forces were sufficient to cause the tibia to over-extend and then rebound in a series of flexion and extension movements of progressively diminishing amplitude. The same angular velocities of tibial movement could be produced from different initial angles of the femoro-tibial joint. For example, in two kicks by an individual Pholidoptera, the same maximal rotational velocity of 65 000 deg. s-1 was first achieved from an initial fully flexed position and then from an initial femoro-tibial angle of 27° (Fig. 4A).
Fig. 4.
Kicks of different velocities and from different femoro-tibial starting angles. (A) Changes of the femoro-tibial angle during four kicks by the same Pholidoptera aligned at the time of maximal extension (0 ms). Kick 2 started from a partly extended femoro-tibial angle but still achieved the same maximal rotational velocity (65 000 deg. s-1) as kick 1, which started from a fully flexed position. The rebound movement at full extension is plotted for kick 1. Kicks 3 (28 000 deg. s-1) and 4 (14 000 deg. s-1) are slower. (B) Movements of the femoro-tibial joint during a kick captured at 1000 frames s-1 and with an exposure of 0.25 ms. The tibia was fully flexed about the femur in the first frame (-34 ms). From the start of the first detectable extension movement (-7 ms) to full extension (0 ms) of the tibia took 7 ms. The crosshairs marking the anterior edge of the semi-lunar groove in the first six frames do not shift in position. (C) The femoro-tibial joint viewed end-on during a kick. There was no distortion of the distal femur either before (-13 ms) or during (-7 ms and -3 ms) the kick. The proximal part of the femur and the bush cricket itself were fixed in Plasticine.
Table 2.

Kicking and jumping performance

Columns from Body mass to Power output refer to jumping; the last two columns refer to kicking.

| Insect (N) | Body mass (g) | Distance (mm) | Extension time (ms) | Takeoff velocity (m s-1) | Takeoff angle (deg.) | Peak acceleration (m s-2) | Max. rotational velocity (deg. s-1) | Energy (μJ) | Power output (mW)** | Kick extension time (ms) | Kick max. rotational velocity (deg. s-1) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Pholidoptera ♀ (4) | 0.6 | 296±14.7 (n=57) | 32.6±0.95 (n=25) | 2.12±0.33 (n=4) | | 143.8±28.8 (n=4) | | 1380 | 40 (67) | 11.1±0.98 (n=16) | 41 800±3 200 (n=4) |
| Pholidoptera ♂ (5) | 0.42 | 302±11.5 (n=129) | 30.6±2.7 (n=7) | 1.51±0.2 (n=5) | 33.8±2.1 (n=17) | 83.4±14.7 (n=5) | 13 500 | 490 | 16 (38) | 10.1±0.72 (n=11) | 26 400±3 500 (n=27) |
| Meconema ♀ (4) | 0.17 | | 22.5±0.63 (n=10) | 1.4 (n=1) | | | | 170 | 7.5 (45) | 6.8±0.86 (n=10) | |
| Conocephalus ♂ (3) | 0.25 | | 21 (n=2) | 1.0 (n=2) | 56.8±12.3 (n=17) | | | 125 | 6 (24) | | |
| Leptophyes ♀ (2) | 0.29 | 162±12.8 (n=6) | 33.9±10.2 (n=5) | | | | | | | | |
| Oecanthus ♀ (1) | 0.07 | 107 (n=3) | | | | | | | | | |
| Prosarthria* ♂ (10) | 0.28 | 660 | 30 | 2.5 | 41 | 165 | 11 500 | 850 | 28 (100) | | 48 000 |
| Schistocerca (10) | 1.5-2 | 1 000 | 25-30 | 3.2 | 45 | 180 | | 9 000-11 000 | 333 (222) | 3§ | 80 000§ |
| Spilopsyllus | 0.00045 | | 0.75 | 1.0 | 50 | 1 330 | | 0.225 | 0.3 (667) | | |
| Sipyloidea (28) | 0.270 | 90 | | 0.6-0.8 | 10-35 | 10 | 4 000 | 96 | 1 (3.7) | | |
Numbers are means ± S.E.M. Means were calculated from data pooled from different animals. N = no. of animals; n = no. of observations.
* Burrows and Wolf (2002).
Bennet-Clark (1975).
Bennet-Clark and Lucey (1967).
§ Burrows and Morris (2001).
Burrows and Morris (2002).
** Numbers in brackets represent power output per gram body mass.
High-speed images of kicks with high angular velocities of tibial movements did not reveal any distortions of the femoro-tibial joint either preceding the release of the tibial movements or during the unfurling movements of the tibia itself (Fig. 4B,C). The lack of distortion evident in the end-on view of the joint is in direct contrast to the considerable distortion seen in the same view of this joint in a locust during a kick (Burrows and Morris, 2001). In the period before the kick when the tibia was flexed about the femur, the muscular contractions were not accompanied by any movement of the semi-lunar grooves in the femur or by any compression of the dorsal distal cuticle of the femur. Similarly, during tibial extension, no images indicated changes in the shape of the femur. This indicates that energy to power the kick is not stored in cuticular distortions at the joint.
### Motor activity during kicking
Kicks of different velocities and from different starting angles of the femoro-tibial joint all involved activity of the flexor and extensor tibiae muscles, but the combinations of their activity were not constant (Fig. 5). In low velocity kicks, the flexor tibiae muscle was activated by a small number and low frequency of motor spikes so that the tibia was pulled into a flexed position (Fig. 5A). The flexor motor spikes then stopped and, after a delay as long as 100 ms, a few spikes at 80-100 Hz occurred in the fast extensor tibiae (FETi) motor neuron, followed by tibial extension. The duration of the flexor motor activity varied fivefold (50-250 ms) in different kicks. In kicks with a longer period of flexor activity there could again be a pause before spikes occurred in FETi, which led to an extension of the tibia (Fig. 5B). In neither of these two patterns could flexor activity be detected during the spikes in the extensor, even with electrodes inserted into different regions of the muscle, suggesting that there was no active co-contraction of the two muscles. During the extensor spikes, some residual tension could have been present in the flexor resulting from preceding spikes in its motor neurons.
Fig. 5.
Variation in the motor pattern for kicking in Pholidoptera. The extracellular recordings are from the flexor and extensor tibiae muscles of restrained animals (A is from one animal, B-D from another). Images (not shown) of the movements of the tibia were captured to enable the peak angular velocity of the tibia (measured in deg. s-1) and the timing of the kick (indicated by vertical arrows) to be determined. (A,B) Kicks involving no apparent co-contraction of flexor and extensor tibiae muscles. In A, a slow kick results from a few spikes in flexor tibiae motor neurones followed, after a delay, by two spikes in fast extensor tibiae motor neurones (FETi). In B, a prolonged flexion followed by three spikes in FETi results in a slightly faster extension of the tibia. (C,D) Kicks resulting from co-contractions. In C, the flexor motor neurones spike first and continue while six spikes of FETi occur. There is then a pause of almost 100 ms without motor activity before a slow tibial extension occurs. In D, a co-contraction of flexor and extensor tibiae motor neurones is followed immediately by a rapid extension of the tibia in a kick. In C and D, a movement artefact occurred at the time of the rapid extension of the tibia.
Brief periods of co-contraction lasting approximately 40-90 ms (n=10) between flexor and extensor muscles occurred during some kicks. For example, a 75 ms-long period of co-contraction followed by a 40 ms period when both muscles were silent led to the tibia being extended at a velocity no greater than in kicks that lacked co-contraction (Fig. 5C). By contrast, a similar period of co-contraction in another kick that was not followed by a period of silence in the two muscles led to a more rapid extension of the tibia (Fig. 5D).
The velocity of tibial extension in a kick was correlated with the number of FETi spikes (Fig. 6) but, because the period of muscle contraction varied only within narrow limits, the larger the number of FETi spikes, the higher was their frequency. As few as one FETi spike could lead to kicks with maximal angular velocities of 1000-30 000 deg. s-1. Progressively more FETi spikes led to progressively faster tibial movements. Kicks with maximal angular velocities as high as 50 000 deg. s-1 could be produced by as few as three FETi spikes. Even the fastest kicks were associated with no more than 6-8 FETi spikes at peak instantaneous frequencies of 130 Hz.
Fig. 6.
Linear relationship between the maximal rotational velocity of the tibia during a kick and the number of spikes in the fast extensor tibiae motor neurone (FETi). The greater the number of spikes, the higher the speed of rotation, but there is variation in speed for a given number of spikes in different kicks. Data from 45 kicks by six Pholidoptera are pooled. The line was fitted by linear regression: y = 0.83 + 8420x (P<0.001, r2=0.78).
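The fitted line can be turned into a rough predictor of kick velocity from spike count. This is a minimal Python sketch using only the constants (0.83 and 8420) from the Fig. 6 legend; individual kicks scatter widely around the line, so these are mean-trend values, not bounds.

```python
# Predicted maximal tibial rotation (deg. s-1) from the number of FETi
# spikes, using the regression y = 0.83 + 8420x fitted in Fig. 6.
def predicted_velocity(n_spikes):
    return 0.83 + 8420 * n_spikes

for n in (1, 3, 6):
    print(n, round(predicted_velocity(n)))
```

The slope implies roughly 8 400 deg. s-1 of additional rotational velocity per extra FETi spike, consistent with even the fastest kicks needing no more than 6-8 spikes.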
The motor activity that led to a kick from a fully flexed tibia was similar to that from a partially flexed starting position and resulted in similar velocities of tibial movements (Fig. 7). For example, one kick from a fully flexed position involved a 40 ms co-contraction with six FETi spikes that began as the tibia was being flexed. These spikes were followed by a 30 ms period when only a few flexor motor spikes were detected but during which the tibia remained fully flexed. The tibia was then suddenly extended, even though no further FETi spikes occurred, reaching a maximal angular velocity of 65 000 deg. s-1 (Fig. 7A). In a second kick by the same animal, the tibia was fully flexed by preceding flexor motor spikes but when the first FETi spike occurred the tibia extended by 27° (Fig. 7B). From this partially flexed and sustained position, FETi spikes continued for 70 ms before the tibia was suddenly extended, reaching the same peak angular velocity as in the first kick.
Fig. 7.
Comparison of the muscular activity and tibial movements underlying two kicks by the same Pholidoptera. (A) A kick from an initial position where the tibia was fully flexed about the femur (kick 1 in Fig. 4A). Spikes in fast extensor tibiae motor neurone (FETi) stop before the tibia extends rapidly (65 000 deg. s-1) in a kick. (B) A second kick by the same animal (kick 2 in Fig. 4A). The tibia is first fully flexed but then extends by 27°. It remains in this partially flexed position while FETi spikes continue, until it suddenly extends rapidly (65 000 deg. s-1) in a kick. The maximal angular velocity of the tibial movements was the same in both kicks. The electrical activity of the extensor tibiae muscle was recorded at the same time as the tibial movements that are plotted from sequences of images captured at 1000 frames s-1.
Adult Pholidoptera jumped an average horizontal distance of 300 mm (males 302±11.5 mm, n=129; females 296±14.7 mm, n=57) (Table 2) or approximately 13-14 times their body length, although the maximum distance achieved by an individual was more than twice this at 660 mm. A jump began with a forward rotation of the hind legs at their coxal joints and a flexion of the tibiae about the femora. The flexion of the tibiae was not always complete so that one (Fig. 8A) or both hind legs could begin their rapid extension movement from this already partially extended position. As the hind tibiae were extended, the body was raised from the ground and the forwardly directed antennae were swung backwards to point over the body (Fig. 8B). When viewed anteriorly, the hind legs could be seen to rotate outwards at their joints with the coxae, and both the middle and front legs could be seen to depress at their coxal joints and extend at their femoro-tibial joints (Fig. 8C). The continuing elevation of the body eventually led to the front and middle legs losing contact with the ground before the hind legs, so that it was the hind legs that provided the thrust for the final 10-12 ms before the insect became airborne. The hind tibiae took 32.6±0.95 ms (n=25) in females and 30.6±2.7 ms (n=7) in males to reach full extension, achieving peak angular rotational velocities of 13 500 deg. s-1. Once the hind tibiae reached their full extension, the insect became airborne at a mean takeoff angle of 33.8±2.1° (n=17, females and males). The mean takeoff velocity for females was 2.12±0.33 m s-1, with an acceleration during the last 5 ms before becoming airborne of 143.8±28.8 m s-2 (n=4), while in males the velocity was lower at 1.51±0.2 m s-1, with an acceleration of 83.4±14.7 m s-2 (n=5; Table 2).
Fig. 8.
Jumping in Pholidoptera. (A) Graphs of the changes in the femoro-tibial angle, body height and velocity of body movement during a jump by a male. The body was accelerated at 112 m s-2 during the last 5 ms before takeoff at a velocity of 1.75 m s-1. (B) Selected frames from the same jump viewed from the side. At the start, the tibia was not fully flexed about the femur but was then extended rapidly to push the body forwards and upwards. (C) A second jump by the same animal viewed head-on. The hind legs were first rotated outwards at their coxal joints with the thorax, and the tibiae extended so that the body was raised from the ground. The front and middle legs left the ground before the hind legs.
These movements launched the insect into a parabolic trajectory (Fig. 9). At takeoff, all of the legs were parallel to each other and trailed below the body. As height was gained, they were all rotated upwards at their coxal joints so that towards the apogee of the jump they projected backwards and above the body. This posture of the legs is similar to that adopted by a tethered flying Meconema when exposed to a current of air (von Buddenbrock and Friedrich, 1932). It may represent a ruddering effect to aid stability of the body and to prevent the impulsive forces exerted by the legs on the ground causing the body to rotate when airborne.
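The parabolic trajectory can be compared with a simple ballistic model. The Python sketch below uses the female takeoff velocity and the pooled takeoff angle quoted above; it neglects air drag and the height gained before takeoff, so it only approximates the measured jump distances of roughly 300 mm.

```python
import math

# Idealized ballistic range R = v^2 * sin(2*theta) / g, neglecting drag
# and the takeoff height; an approximation, not the measured distance.
v = 2.12                    # female takeoff velocity (m s-1)
theta = math.radians(33.8)  # pooled takeoff angle (deg.)
g = 9.81                    # gravitational acceleration (m s-2)

range_m = v**2 * math.sin(2 * theta) / g
print(round(range_m * 1000))  # ~424 mm under these idealized assumptions
```

The idealized figure lies somewhat above the measured female mean of 296 mm, as would be expected when drag on a small body is neglected.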
Fig. 9.
The trajectory of a female Pholidoptera during a jump. The graph plots the upwards and forwards movement of the body, and the selected frames show the propulsive extension of the hind tibiae. After takeoff, the hind legs were swung forward so that they were above the body. The numbers give the time before and after takeoff at 0 ms.
The distance jumped by both sexes was similar (Table 2), implying a greater expenditure of energy by the heavier females that is partially offset by the greater mechanical advantage of their 12% longer hind legs (Table 1). The total energy required for the jump is the sum of the translational kinetic energy (Ek) at takeoff and the potential energy (Ep) due to the gain in height at takeoff:
$E_{\mathrm{k}}=mV^{2}/2,$ (1)
where m is the mass (in kg) and V is the takeoff velocity (in m s-1), and:
$E_{\mathrm{p}}=mgh,$ (2)
where m is the mass (in kg) of the body minus the legs that are still on the ground, g is acceleration due to gravity, and h is the height (in metres) gained before takeoff.
The insect did not spin once airborne, so rotational kinetic energy was assumed to be negligible. The translational kinetic energy of the jump by a female was 1350 μJ (male, 470 μJ), and the potential energy was 30 μJ (male 20 μJ), giving a minimal energy requirement of 1380 μJ (male 490 μJ) (Table 2). The power output by a female was estimated to be approximately 40 mW (male 16 mW) by assuming that energy was expended evenly over the 30 ms duration of the propulsive phase of the jump.
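Equations 1 and 2 can be checked numerically. This is a minimal Python sketch using the female values quoted above (mass 0.6 g, takeoff velocity 2.12 m s-1, potential energy 30 μJ), taking the tibial extension time as the duration of the propulsive phase.

```python
m = 0.6e-3   # female body mass (kg)
v = 2.12     # takeoff velocity (m s-1)
t = 32.6e-3  # tibial extension time (s), taken as the propulsive phase

E_k = 0.5 * m * v**2  # Eqn 1: translational kinetic energy (J)
E_p = 30e-6           # potential energy (J), value quoted in the text
E_total = E_k + E_p
power = E_total / t   # mean power over the propulsive phase (W)

print(round(E_k * 1e6))       # ~1348 microjoules, matching the ~1350 above
print(round(E_total * 1e6))   # ~1378 microjoules (~1380 above)
print(round(power * 1e3, 1))  # ~42 mW, close to the ~40 mW estimate
```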
Jumping by Meconema, Conocephalus (Fig. 10A,B) and Leptophyes had similar characteristics to those described above. The main thrust was provided by rapid extension of the hind tibiae and by depression and extension of the middle and front legs. Again, a hind tibia was not always fully flexed against the femur at the start of the jumping sequence. The hind legs were the last to leave the ground, thereby providing the final thrust before takeoff. The time taken for complete extension of the hind tibia in the most powerful jumps was shorter, and takeoff velocities ranged from 1 m s-1 to 1.4 m s-1 (Table 2). In Conocephalus, the takeoff angle was typically steeper at 56.8±12.3°, but the body remained stable once airborne.
Fig. 10.
Jumping in Meconema and Conocephalus. (A) Two jumps by the same female Meconema. Four selected images of one jump captured at 500 frames s-1 show the hind legs were not fully flexed before they were extended rapidly. The changes in the femoro-tibial angle and speed of body movement are plotted from a second jump with a takeoff velocity of 1.6 m s-1 captured at 1000 frames s-1. (B) Selected images and plots of the femoro-tibial angle and speed of body movement from the same jump by a male Conocephalus taking off at a velocity of 1.0 m s-1 captured at 1000 frames s-1.
Female bush crickets complete the rapid kicks of their hind tibia in 10 ms, generating maximal angular rotational velocities of the femoro-tibial joint of 42 000 deg. s-1. The tibia does not need to be fully flexed about the femur to achieve these rapid movements; slower kicks can be generated with or without co-contracting the flexor and extensor tibiae muscles. The underlying motor pattern in different kicks is variable and any periods of co-contraction tend to be brief. On average, bush crickets can jump a horizontal distance equivalent to 13-14 times their body length, but maximal distances can be twice as large. Females accelerate at 144 m s-2 to generate a takeoff velocity of 2.1 m s-1, requiring a minimal energy expenditure of 1380 μJ and a power output of 40 mW. Males are less powerful, accelerating more slowly to lower takeoff velocities despite a smaller mass. Their poorer performance may be partly explained by their shorter hind legs but also indicates that either the mass or performance of the muscles in their hind legs is lower.
### Mechanisms for jumping and kicking
While our analyses have focused on Pholidoptera, its mechanisms for jumping and kicking appear to be common to the other bush crickets we examined. The specializations that generate these fast movements, as for other orthopterans, involve the design of the legs, the femoro-tibial joints of the hind legs, their associated tendons and muscles, and the motor patterns that drive the muscle contractions.
### Leg length
The hind legs are approximately 1.5 times the length of the body and four times longer than the front legs. They are, therefore, relatively longer than the legs of other jumping insects (Table 1). Long legs allow the accelerating force to act on the substrate over a longer time, producing a higher takeoff velocity and greater jump distance. A long-legged insect, therefore, requires less force (working over a larger distance) to jump the same distance as a short-legged insect of comparable mass. A comparison of leg designs shows that reliance on a single pair of legs to generate the majority of the force is met by an increase in the length of those legs. By contrast, in the stick insect Sipyloidea sp., in which leg and abdominal movements combine to generate a jump, the legs are all of similar size and structure (Burrows and Morris, 2002).
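The force saving from longer legs follows from the work-energy relation: if the extensor force F acts over the distance d through which the body is accelerated, then F·d = mv²/2, so doubling d halves the force needed for the same takeoff velocity. A minimal Python sketch, using the female mass and takeoff velocity from Table 2 but illustrative (not measured) acceleration distances:

```python
# Force needed for a given takeoff velocity when it can act over distance d,
# from the work-energy relation F*d = 0.5*m*v^2.
def force_required(m, v, d):
    return 0.5 * m * v**2 / d

m, v = 0.6e-3, 2.12  # female Pholidoptera: mass (kg), takeoff velocity (m s-1)
print(round(force_required(m, v, 0.02) * 1000, 1))  # ~67.4 mN over 20 mm
print(round(force_required(m, v, 0.04) * 1000, 1))  # ~33.7 mN over 40 mm
```

The 20 mm and 40 mm distances are hypothetical values chosen only to show the halving; real acceleration distances depend on leg geometry and takeoff posture.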
### Muscle lever arms and joint structure
The insertions of the extensor and flexor tendons relative to the pivot of a hind leg result in greater flexor lever arms at flexed joint angles and greater extensor lever arms at more extended angles. The femoral lump further enhances the line of action of the flexor tendon when the tibia is at or near full flexion. In locusts, the lump protrudes into the femur for 40% of its width (Burrows and Morris, 2001; Heitler, 1974), but in bush crickets, crickets (Hustert and Gnatzy, 1995) and Prosarthria (Burrows and Wolf, 2002) the comparable figure is 15-17%. In crickets and bush crickets, the smaller lump is offset by a pad of soft tissue on the flexor tendon that changes the line of action of the flexor tendon as it rides over the lump when the tibia is fully flexed.
In bush crickets, full flexion of the tibia does not have to precede a kick or a jump, and kicks of similar velocity can be generated from a partially flexed or from a fully flexed femoro-tibial angle. The changing lever ratios at different joint angles, combined with the balance of forces in the two muscles, must, therefore, determine the timing and velocity of tibial extension. If, for example, the tibia is fully flexed, the lever ratio of the flexor will be large, allowing a greater force to be generated by the extensor muscle before tibial extension occurs. If flexor tension is high, then the extensor force needed to produce a tibial extension will be proportionately greater, and the resulting extension faster, once the flexor relaxes. The flexor force could result from the residual tension of a preceding flexor contraction or from a co-contraction of the flexor and extensor. A different balance of flexor and extensor forces could therefore result in kicks of similar velocities from different joint angles. By contrast, a locust cannot kick or jump if it is unable to flex its tibia fully, because the locking mechanism of the flexor tendon and the femoral lump can only be engaged in the fully flexed position and only then is the mechanical advantage of the flexor maximal. The small flexor muscle cannot restrain the force developed by the large extensor muscle unless these conditions are met. If the extensor muscle contracts when the tibia is not fully flexed and the lock is not engaged, the tibia will extend, but without the power needed for jumping or kicking.
### Motor pattern
The differing starting angles from which a rapid tibial extension can occur and the wide range of angular velocities of the tibia in kicking are also reflected in the motor patterns. At one extreme, a few spikes occur in the fast extensor tibiae motor neuron after spikes in the flexor have ceased. At the other, a co-activation of extensor and flexor motor neurons leads to a co-contraction lasting 50-250 ms. The number of FETi spikes ranges from 1 to 8, with more spikes generating faster extensions. This brief and variable motor activity contrasts with the lengthy motor pattern with consistent features that generates kicking in locusts (Burrows, 1995; Heitler and Burrows, 1977) and in Prosarthria (Burrows and Wolf, 2002). In these insects, the co-contraction of flexors and extensors can vary 20-fold, from 100 ms to 2000 ms, and can involve 2-50 FETi spikes, resulting in movements from slow and weak to fast and powerful (Burrows, 1995). The motor activity of bush crickets is, therefore, closer in structure to that of true crickets (e.g. Acheta domesticus), in which 1-4 fast extensor spikes are generated in a brief, 3-12 ms 'static' co-contraction (Hustert and Gnatzy, 1995). As in true crickets, the time from the initiating sensory stimulus to full extension of the tibia in a kick is much shorter than in a locust. In crickets, the pattern is completed in 60-100 ms (Hustert and Gnatzy, 1995); in bush crickets it takes 50-250 ms, and in locusts it takes several hundred milliseconds. Consequently, crickets and bush crickets have a faster response time for their escape movements, offset by the shorter horizontal distance jumped or the reduced force in a kick.
### Energy requirements and storage
Can bush crickets kick and jump as the result of direct muscular contractions, or do they have to store energy in advance? Bush crickets take approximately 10 ms at peak angular velocities of 41 800 deg. s⁻¹ to extend a tibia fully in their faster kicks; Prosarthria takes 7 ms at velocities of 48 000 deg. s⁻¹, but locusts take only 3 ms at velocities of 80 000 deg. s⁻¹ (Table 2). These different performances and the different body masses are also reflected in the energy expended in jumping: locusts expend 9-11 mJ, male Prosarthria expend 850 μJ, whereas male bush crickets expend only 490 μJ. In both locusts and Prosarthria, the energy needed to kick rapidly or to jump can only be met by a preceding storage of energy and its rapid release. The power output during the acceleration phase of a jump greatly exceeds the maximum that can be produced by direct muscle action. For example, the peak power output of the locust extensor muscle is 450 mW g⁻¹, and each muscle in a female has a mass of 70 mg (Bennet-Clark, 1975). The combined power output of both extensors should therefore be about 60 mW, but the measured power output during a jump is over five times greater, at around 330 mW (Table 2). Assuming the extensor muscles of bush crickets account for a similar proportion of body mass (5%) and have a similar specific power, we estimate that the hind leg extensor muscles of a 600 mg female Pholidoptera can generate a maximum power of 13.5 mW. This is less than half the 40 mW calculated from the kinematic analysis (Table 2), implying energy storage prior to tibial extension, albeit not on the same scale as in the locust. Bush crickets do not appear to store energy in distortion of semi-lunar grooves (Fig. 4), so it is probable that the extra energy is stored in the extensor apodeme and in elastic elements of the extensor muscle.
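The power-deficit estimate above can be reproduced numerically. This sketch assumes, as the text does, the locust specific power of 450 mW g⁻¹ (Bennet-Clark, 1975) and a 5% extensor-muscle mass fraction for the bush cricket:

```python
# Sketch of the power-deficit estimate in the text.  The specific power
# and the 5% muscle-mass fraction are the assumptions stated above.
specific_power = 450e-3        # W per gram of extensor muscle

# Locust: two extensor muscles of 70 mg each
locust_muscle_g = 2 * 0.070
locust_direct_W = specific_power * locust_muscle_g     # ~0.063 W, "about 60 mW"
print(f"locust direct muscle power: {locust_direct_W*1e3:.0f} mW (measured in a jump: ~330 mW)")

# Bush cricket: extensors assumed to be 5% of a 600 mg female's body mass
cricket_muscle_g = 0.05 * 0.600
cricket_direct_W = specific_power * cricket_muscle_g   # 0.0135 W = 13.5 mW
print(f"bush cricket direct muscle power: {cricket_direct_W*1e3:.1f} mW (kinematic estimate: 40 mW)")
```

In both species the direct muscle power falls well short of the kinematic figure, which is the argument for elastic energy storage before tibial extension.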
### Jumping and kicking: objectives and adaptations
Jumping has two possible objectives: locomotion or escape. For the former, there is sufficient time to generate the force needed for a long jump; for the latter, speed of response may be critical when fleeing from a potential predator. This may be particularly true for nocturnal insects such as bush crickets and true crickets, which rely less on vision for early warning of approaching predators. Similarly, a defensive kick must be generated quickly following the stimulus. An insect, therefore, faces a trade-off between a rapid, relatively weak response that hits the offending object and a delayed, more forceful movement that may miss it altogether. The ability of both bush crickets (this paper) and true crickets (Hustert and Gnatzy, 1995) to produce rapid kicks without a prolonged co-contraction is, therefore, significant. In crickets, a dynamic co-contraction supplements a short static co-contraction phase, enabling some energy storage to occur during the preparatory flexion that precedes the kick. Co-contraction can begin during the preparatory flexion in bush crickets (e.g. Fig. 7A), whereas in locusts the tibia must be fully flexed before the flexor muscle can withstand the force generated by the more powerful extensor, and activation of the extensor muscle is delayed accordingly. In bush crickets, the ratio of the flexor to extensor lever arm appears to be much higher than in locusts or Prosarthria. This adaptation enables jumps or kicks to be produced from a range of starting joint angles and shortens the response time by allowing a build-up of extensor tension without a preparatory full flexion.
This work was supported by a grant from the BBSRC (UK). The data on jumping distance for bush crickets in Table 2 were collected by Mike Forrest. We thank our Cambridge colleagues for their many helpful suggestions during the course of this work and for their comments on the manuscript.
### References
Bassler, U. and Storrer, J. (1980). The neural basis of the femur-tibia-control-system in the stick insect Carausius morosus. I. Motoneurons of the extensor tibiae muscle. Biol. Cyber. 38, 107-114.
Bennet-Clark, H. C. (1975). The energetics of the jump of the locust Schistocerca gregaria. J. Exp. Biol. 63, 53-83.
Bennet-Clark, H. C. and Lucey, E. C. A. (1967). The jump of the flea: a study of the energetics and a model of the mechanism. J. Exp. Biol. 47, 59-76.
Brackenbury, J. and Hunt, H. (1993). Jumping in springtails: mechanism and dynamics. J. Zool. Lond. 229, 217-236.
Brown, R. H. J. (1967). The mechanism of locust jumping. Nature 214, 939.
Burrows, M. (1995). Motor patterns during kicking movements in the locust. J. Comp. Physiol. A 176, 289-305.
Burrows, M. and Morris, G. (2001). The kinematics and neural control of high speed kicking movements in the locust. J. Exp. Biol. 204, 3471-3481.
Burrows, M. and Morris, O. (2002). Jumping in a winged stick insect. J. Exp. Biol. 205, 2399-2412.
Burrows, M. and Wolf, H. (2002). Jumping and kicking in the false stick insect Prosarthria: kinematics and neural control. J. Exp. Biol. 205, 1519-1530.
Evans, M. E. G. (1972). The jump of the click beetle (Coleoptera: Elateridae) — a preliminary study. J. Zool. Lond. 167, 319-336.
Evans, M. E. G. (1973). The jump of the click beetle (Coleoptera, Elateridae) — energetics and mechanics. J. Zool. Lond. 169, 181-194.
Evans, M. E. G. (1975). The jump of Petrobius (Thysanura, Machilidae). J. Zool. Lond. 176, 49-65.
Godden, D. H. (1975). The neural basis for locust jumping. Comp. Biochem. Physiol. A 51, 351-360.
Heitler, W. J. (1974). The locust jump. Specialisations of the metathoracic femoral-tibial joint. J. Comp. Physiol. 89, 93-104.
Heitler, W. J. and Burrows, M. (1977). The locust jump. I. The motor programme. J. Exp. Biol. 66, 203-219.
Hoyle, G. and Burrows, M. (1973). Neural mechanisms underlying behavior in the locust Schistocerca gregaria. I. Physiology of identified motorneurons in the metathoracic ganglion. J. Neurobiol. 4, 3-41.
Hustert, R. and Gnatzy, W. (1995). The motor program for defensive kicking in crickets: performance and neural control. J. Exp. Biol. 198, 1275-1283.
Rothschild, M., Schlein, Y., Parker, K. and Sternberg, S. (1972). Jump of the oriental rat flea Xenopsylla cheopsis (Roths). Nature 239, 45-47.
von Buddenbrock, W. and Friedrich, H. (1932). Über Fallreflexe von Arthropoden. Zool. Jahrb. Physiol. 51, 131-143.
Wilson, J. A., Phillips, C. E., Adams, M. E. and Huber, F. (1982). Structural comparison of a homologous neuron in Gryllid and Acridid insects. J. Neurobiol. 13, 459-467.
# BH fall time = pi*m. What is external time?
1. Sep 19, 2011
### keithdow
Problem 17.3 in the "problem book in relativity and gravitation" states:
"Show that once a rocket ship crosses the gravitational radius (horizon) of a Schwarzschild black hole, it will reach r=0 in a proper time tau less than or equal to pi*m, no matter how the engines are fired."
My question is the proper time is at most pi*m. What time will elapse for an external observer at infinity during the pi*m proper interval? How do I show that?
Thanks.
2. Sep 19, 2011
### pervect
Staff Emeritus
It should be pretty clear that the observer at infinity won't be able to synchronize their clock via the Einstein convention.
Simultaneity is relative; the Einstein clock synchronization convention is the closest thing we have to a standard, and it can't be applied to this case.
So I don't think the question has any definite answer.
3. Sep 19, 2011
### keithdow
Actually the following article does have an approach for some problems.
http://en.wikipedia.org/wiki/Gullstrand–Painlevé_coordinates
The time of travel for light shining inward from the event horizon to the center of the black hole can be obtained by integrating the equation for the velocity of light. The result is 0.77M.
So if they can do it for light, why can't it be done for other objects?
4. Sep 19, 2011
### Passionflower
Not sure if I follow you. What can't be done?
The formula to calculate the proper time for an observer from a given r value starting with zero local velocity is:
$$\tau = \frac{\pi r}{4}\sqrt{\frac{2r}{m}}$$
In case r = 2m (in the limit) then we have indeed:
$$\pi m$$
If the observer is already at escape velocity when at r then the formula becomes:
$$\tau = \frac{\sqrt{2}}{3}\,m\left(\frac{r}{m}\right)^{3/2}$$
In case r = 2m then we get:
$$\frac{4m}{3}$$
In this case the observer has to travel a proper distance of:
$$\pi m$$
Last edited: Sep 19, 2011
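A quick numerical check of the two proper-time expressions in the post above, in geometric units (G = c = 1); this is a sketch added for illustration, not part of the thread:

```python
# Verify that the free-fall proper-time formulas give pi*m and 4m/3 at r = 2m.
import math

m = 1.0
r = 2.0 * m

# Fall from rest at r:
tau_from_rest = 0.25 * math.pi * r * math.sqrt(2.0) * math.sqrt(r / m)
# Fall at escape velocity from r:
tau_escape = (math.sqrt(2.0) / 3.0) * m * (r / m) ** 1.5

print(tau_from_rest / (math.pi * m))   # ratio to pi*m; should be 1 up to rounding
print(tau_escape / m)                  # should be 4/3 up to rounding
```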
5. Sep 19, 2011
### keithdow
Can we agree that the external time will be greater than m*pi? The question then becomes how much more.
6. Sep 20, 2011
### pervect
Staff Emeritus
You seem to think there is some unique notion of "external time". But it's not clear how you would define this notion operationally (i.e. how would you go about measuring it?) or what characteristics it should have.
If you can define how you'd measure "external time", we might be able to shed some light on the question or maybe even answer it. If you can't define how you'd measure it, but can give some idea of why you want to know it and what purpose it serves, we might be able to do something as well.
When I/we say "simultaneity is relative", does that mean anything to you? I suspect from your questions that you don't understand that, and as a result we wind up talking past one another.
Because if you accept that simultaneity is relative, then it should be clear why there isn't necessarily a unique answer to your question, similarly to the reason there isn't a unique answer to the aging of the twins in the twin paradox when they are separated.
7. Sep 20, 2011
### pervect
Staff Emeritus
I'd focus on asking whether that particular approach was unique. As far as I know, there are an infinite number of coordinate systems you could adopt, all of which would give different answers. And I'm not aware of any compelling (or even non-compelling) reason to choose one coordinate system over another.
Coordinate choices are like labels on a map. They're human inventions, without any physical significance. So you can label an event "B4" or "C7"; the physics won't change.
8. Sep 20, 2011
### keithdow
Let me make the question even simpler. I am at rest 10,000 Schwarzschild radii away from the event horizon of a black hole. I have a reference clock. I release an object that falls to just above the event horizon, stops there for an instant, and then falls in. If time starts when I drop the object, at what time does it reach the event horizon? At what time does it reach the singularity? All times are measured on my reference clock.
9. Sep 20, 2011
### PAllen
Do you understand that your reference clock only represents time in its vicinity? Your question cannot be given any unique meaning. I assume you are aware that you would never see the object you dropped cross the event horizon.
Let me give you an example of something that could be calculated, though there is nothing unique about it:
A stationary observer 10,000 Schwarzschild radii away drops clocks at regular intervals. Independent of whether this observer can communicate with the clocks, he defines (arbitrarily) that each dropped clock's reading is simultaneous with the time it was dropped plus its own time interval since dropping. In this arbitrary coordinate scheme, it is then possible to state when each clock reaches the singularity. This would be done purely by calculation. There is no possible measurement consistent with this definition of simultaneity (e.g. across the event horizon).
10. Sep 20, 2011
### Passionflower
Correct, however what we can determine is how much proper time the clock takes to go from the stationary position 10,000 Schwarzschild radii away all the way to the EH and also from the EH to the singularity.
In this case we have, respectively, 1,570,794.756·rs and 1.570796327·rs seconds.
Last edited: Sep 20, 2011
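The two figures quoted above follow from the free-fall proper-time formula given earlier in the thread; this sketch (my own, splitting the total as total minus πm, as the post does) reproduces the arithmetic in units where rs = 2m = 1:

```python
# Reproduce the proper-time figures for a fall from rest at 10,000 rs.
import math

m = 0.5                      # geometric units chosen so that rs = 2m = 1
r = 10000.0                  # starting radius in units of rs

tau_total = 0.25 * math.pi * r * math.sqrt(2.0 * r / m)   # rest at r -> singularity
tau_inside = math.pi * m                                  # the pi*m piece quoted above
tau_outside = tau_total - tau_inside                      # the remainder, r -> EH

print(f"{tau_outside:,.3f} rs")   # 1,570,794.756 rs
print(f"{tau_inside:.9f} rs")     # 1.570796327 rs
```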
11. Sep 22, 2011
### Phrak
I believe I understand what you are trying to say.
When you say "All times are measured from my reference clock," you mean that you will map your local inertial coordinate system over all spacetime, or at least far enough to include the black hole horizon. This is a mapping commonly used by cosmologists.
In this coordinate system, the answer to "how long will it take?" is 'forever', which is why the horizon is called a coordinate singularity in the first place. The relationship between metric intervals between these coordinate maps is infinite.
Also, in any well-behaved coordinate system external to the horizon, the round-trip time is infinite. That is, there is no closed causal relationship between an external object and a black hole unless the black hole has existed forever. This is a time interval somewhat longer than 15 billion years, Earth time.
So if cosmologists want to keep saying things about what this or that black hole is doing, they should either claim black holes have existed longer than the age of the universe, or say they are deviating from their standard adopted clock and using one, instead, in which the age of the universe is infinite.
At this point, objections might be raised about primordial black holes or black holes resulting from 'quantum fluctuations'. Again, it takes infinite time, as measured on Earthly clocks, for them to grow to solar mass.
Proceed with caution! Infinity is a long time, and has been known to make many people very angry in these forums. Never use the synonym 'never' for an infinite time interval when discussing black holes among black hole enthusiasts.
Last edited: Sep 22, 2011
12. Sep 22, 2011
### keithdow
Let me make my question clearer.
I have put this problem on a computer that runs a simulation of two observers. The first observer is at rest far away from the black hole. The second observer is just above the event horizon of the black hole. Computers have to run simulations in lock-step time or else you get unphysical phenomena. At computer time t=0 we release the plunging observer and set his clock to zero. The first observer also has his clock set to zero. When the second observer in the simulation reaches the singularity, his clock reads pi*m. We stop the simulation at that point. What time is on the first observer's clock?
13. Sep 22, 2011
### PAllen
As has been explained several times, the first observer can never detect or be influenced by anything beyond the event horizon. Your computer couldn't be built. If a wire is extended across the event horizon with one end anchored stationary, and assuming perfect ductility so it can stretch forever, it would be pulled thinner and thinner; no current or signal could pass up it across the horizon.
I can see there is a sort of inverse question that can be asked. If the crossing clock follows a path of maximum proper time to the singularity, starting from e.g. 1.5 times the event horizon radius, at which point we say it sees a reading of t=0 on a clock at 1000 radii, what is the last time it sees on the distant clock as it reaches the singularity? This is perfectly computable. I am not in a position to do it now, but maybe someone else will. This is the closest I can get to what you seem to be seeking.
[EDIT: minor correction: from 1.5 radii, there is no maximum proper time path, because you could hover for any amount of proper time before starting to fall. To make this concrete, assume free fall from 1.5 radii, followed by the maximum proper time path from the crossing of the event horizon.]
Last edited: Sep 22, 2011
14. Sep 22, 2011
### keithdow
Computers to simulate black holes have been around for decades. Entering a black hole has been simulated for decades. I can put this problem on a computer and get an answer. Has anyone done it though?
15. Sep 22, 2011
### PAllen
You don't understand what the computers are simulating. This is an issue of understanding GR, not technical issues. In fact, it is not clear to me that you understand SR, because you have a concept of global simultaneity which has been rejected since 1905.
16. Sep 22, 2011
### DrGreg
In general relativity, there is no generally agreed definition of when two events, separated by a distance, occur simultaneously or not. You can choose a particular coordinate system, and then there will be an answer relative to those coordinates, but there is no way of saying one coordinate system is "right" and all the others are wrong. All the answers are equally right.
Having said all that, for a non-spinning black hole, the coordinates often used are Schwarzschild coordinates. If we choose to answer your question in those coordinates, then what happens is this. It takes an infinite time, measured in external Schwarzschild coordinates, for the infalling object to reach the horizon. So the answer is "more than infinity"! This is because external Schwarzschild coordinates don't extend inside the event horizon. Inside, you can use internal Schwarzschild coordinates, but they have no relation to the external coordinates.
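The divergence of external Schwarzschild coordinate time at the horizon can be seen numerically. For an ingoing radial light ray, dt/dr = −1/(1 − rs/r), which integrates to t(r) = (r0 − r) + rs·ln((r0 − rs)/(r − rs)); the sketch below (my own, with an arbitrarily chosen starting radius) shows the logarithmic blow-up as r approaches rs:

```python
# Coordinate time for ingoing radial light in Schwarzschild coordinates;
# t grows without bound as r -> rs, even though the ray's affine distance is finite.
import math

rs = 1.0     # Schwarzschild radius (units chosen so rs = 1)
r0 = 3.0     # starting radius, arbitrary

def coord_time(r):
    """Schwarzschild coordinate time elapsed falling from r0 to r (for light)."""
    return (r0 - r) + rs * math.log((r0 - rs) / (r - rs))

for r in (2.0, 1.1, 1.01, 1.0001):
    print(f"r = {r:7.4f} rs  ->  t = {coord_time(r):9.3f}")
```

Each factor-of-ten approach to the horizon adds roughly the same increment of coordinate time, so the horizon is reached only at t = infinity in these coordinates.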
17. Sep 22, 2011
### pervect
Staff Emeritus
Basically, if you can never receive a light signal from an event, you won't really know that it happened, and you'll have great difficulty assigning any meaningful time to when it happened.
However, this doesn't mean that it didn't happen. The clearest example involves horizons, called Rindler horizons, similar to the black hole horizons, that occur for accelerating observers.
It's possible for someone to accelerate so that they won't see any events later than 1 year after they started on their trip. At that time, the Earth falls behind their "Rindler horizon". But if such an observer were to leave the Earth, it wouldn't mean that time would stop in 1 year on Earth or anything. It would just mean that the accelerating observer wouldn't be able to see anything that occurred here on Earth later than 1 year after their departure.
So, an observer outside the black hole would not ever see an event occur beyond the event horizon. However, said event would be quite real for the observer falling in, and would occur in a finite time for them.
18. Sep 22, 2011
### keithdow
Actually the idea of global simultaneity is fundamental to 3+1 formalism. Let me quote from "Introduction to 3+1 Numerical Relativity". Page 65.
"Let us start by considering a spacetime with metric gαβ. As already mentioned
in Chapter 1, we will always assume that the spacetimes of interest are
globally hyperbolic, that is, they have a Cauchy surface. Any globally hyperbolic
spacetime can be completely foliated (i.e. sliced into three-dimensional cuts) in
such a way that each three-dimensional slice is spacelike (see Figure 2.1). We
can identify the foliation with the level sets of a parameter t which can then be
considered a universal time function (but we should keep in mind that t will not
necessarily coincide with the proper time of any particular observer). Because
of this fact, such a foliation of spacetime into spatial hypersurfaces is often also
called a synchronization."
19. Sep 22, 2011
### keithdow
Thanks. It is obvious that I will have to write the computer program and run the simulation myself to get the answer. I can probably get a paper out of it.
20. Sep 22, 2011
### PAllen
Your quote of this shows your misunderstanding. As noted in the quote itself, this foliation does not represent global simultaneity for any observer, nor does it represent time according to any possible single clock (in general).
I did propose a way you could arbitrarily correlate the distant clock and the infalling clock from the point of view of the infalling clock. This is possible. It is not possible (in any meaningful way) to do it from the point of view of the distant clock. This asymmetry is coupled to the definition of an event horizon.
[EDIT: note that the described foliation is incredibly non-unique with no way to prefer one slicing over another. In any manifold where you can do this slicing, there are generally aleph-2 distinct ways to do it, with no way to prefer one over the other - because (unlike Lorentz simultaneity in flat spacetime) there is no physical basis to prefer one over the other.]
Mathematical Logic for Computer Science by Ben-Ari (Springer, 2012, ISBN 978-1-4471-4128-0) familiarizes computer science students with the concepts and methods of logic. The method of semantic tableaux provides an elegant way to teach logic that is both theoretically sound and easy to understand. In addition to propositional and predicate logic, it has a particularly thorough treatment of temporal logic and model checking. Please send comments and corrections to moti.ben-ari@weizmann.ac.il.

Logic for Computer Science: Foundations of Automatic Theorem Proving, Second Edition, by Jean Gallier (Dover Publications, United States, June 2015) is a republication of the original Wiley edition (1986), issued because of repeated demands from around the world (but mainly from the USA) for copies of it. It is aimed at students of mathematics, computer science, and linguistics, with an emphasis on proof theory and procedures for constructing formal proofs of formulae algorithmically; Chapter 4 is particularly interesting for logic programming. Errata will be collected in a file and incorporated into future printings.

One reader of Logic in Computer Science, 2nd Edition, writes: "I purchased it recently in preparation for an exam I have soon. This book has proven to be very useful; it's full of useful information and exercises to complete. I was amazed when I looked through it for the first time. One caveat I have with the book is that they don't provide completed solutions to the exercises."

The European Association for Computer Science Logic (EACSL), an international professional non-profit organization, was founded on July 14th 1992 by computer scientists and logicians from 14 countries.

Logic plays a fundamental role in computer science. Since the latter half of the twentieth century it has been used for purposes ranging from program specification and verification to theorem proving. Initially its use was restricted to merely specifying programs and reasoning about their implementations, but logic has since obtained a new and important role, and formal methods are beginning to be used routinely in industry. In the words of the Fore Systems Professor of Computer Science at Carnegie Mellon University, Pittsburgh, PA: "Formal methods have finally come of age!" A large amount of information on logic exists scattered across books, lecture notes, journal articles, webpages, etc.; the fragmented nature of these sources is problematic, and logic as a topic benefits from a unified approach.

MIT Press has published a major revision of How Computers Work: Essential Logic for Computer Science, an introduction to applying predicate logic to the testing and verification of software and digital circuits that focuses on applications rather than theory.

Other sources include lecture notes by Andrzej Szalas (College of Economics and Computer Science, Olsztyn, Poland, and Department of Computer Science, University of Linköping, Sweden); Introduction to Mathematical Logic by Alex Pelin (April 1, 2011); the course Logic for Computer Science 2020-2021 at Alexandru Ioan Cuza University (which notes that a conjunction need not use the word "and" explicitly); and CS228 Logic for Computer Science 2020 (Ashutosh Gupta, IIT Bombay), whose treatment of CNF conversion includes Theorem 7.3: "For every formula F there is another formula F′ in CNF s.t. …"
To CS is a platform for academics to share research papers the syntax of propositional symbols: a set {... Pdf free download link or read online here in pdf Proving and logic Programming, and linguistics software digital. Through it for the first time a conjunction need not use explicitly the word.! Descargar ebooks Gratis para llevar y … I purchased logic in Computer -. And reasoning about their implementations it ’ s full of useful information and to. 2012, ISBN 978-1-4471-4128-0 need not use explicitly the word and { \text Prop. Copi, introduction to applying predicate logic, it has a particularly thorough treatment of temporal and. Copi, introduction to mathematical logic for Computer Science students with the and. Of temporal logic and its components ( propositional, first-order, non-classical ) play a key role in Computer c! Easy to understand Gallier a corrected version of this article with your friends colleagues..., I am sharing the digital logic Solved Previous Year Questions for GATE checkers beginning!
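The propositional setup mentioned above — formulas built over a set Prop of symbols and evaluated under truth assignments — can be sketched in a few lines. This is an illustrative sketch, not code from either book; all names are invented for the example.

```python
from itertools import product

# A formula is modelled as a function from an assignment (a dict
# mapping each symbol in Prop to a truth value) to a truth value.
def is_tautology(formula, symbols):
    """True iff the formula holds under every truth assignment."""
    return all(formula(dict(zip(symbols, values)))
               for values in product([False, True], repeat=len(symbols)))

# Material implication a -> b, encoded as (not a) or b.
impl = lambda a, b: (not a) or b

# Check that (p and q) -> p is a tautology over Prop = {p, q}.
formula = lambda v: impl(v["p"] and v["q"], v["p"])
print(is_tautology(formula, ["p", "q"]))  # True
```

Semantic tableaux decide the same question by refutation; the brute-force truth table above is just the simplest decision procedure for propositional logic.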
|
{}
|
# what is “module” in crossed module?
A crossed module consists of groups $$G$$ and $$H$$ with maps $$\alpha:H\rightarrow G$$ and $$\tau:G\rightarrow \text{Aut}(H)$$ satisfying following conditions:
1. $$\tau(\alpha(h))(h')=hh'h^{-1}$$ for all $$h,h'\in H$$.
2. $$\alpha(\tau(g)(h))=g\alpha(h)g^{-1}$$ for $$g\in G, h\in H$$.
Question: Does the word "module" in "crossed module" have anything to do with the standard notion of an $$R$$-module for a commutative ring $$R$$?
In the case of an $$R$$-module $$M$$, we have an action map $$R\times M\rightarrow M$$ satisfying some conditions. Here we have an action map (??) $$G\times H\rightarrow H$$ (along with $$H\rightarrow G$$) satisfying some conditions.
Is this the only relationship between a crossed module and the standard notion of an $$R$$-module? Is there anything more to this?
If $$\alpha:H\to G$$ is a crossed module such that $$\alpha(h)=1_G$$ for all $$h\in H$$, then the Peiffer condition implies that $$hh'h^{-1}=\tau(\alpha(h))(h')=h',$$ thus $$H$$ is abelian, and is thus a $$\mathbb{Z}[G]$$-module. Thus a module over a group is a particular case of a crossed module.
• I do not know how I missed that "abelian" $H$ condition... Yes, a $G$-module is an abelian group $M$ with a homomorphism of groups $G\rightarrow \text{Aut}(M)$... For any one who needs reference, it is defined in page $186$ of Peter J. Hilton, Urs Stammbach - A Course in Homological Algebra (second edition)... – Praphulla Koushik Sep 14 at 2:55
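The two crossed-module axioms can be checked mechanically on a small example. Below is a sketch (not from the thread) verifying the inner-automorphism crossed module, where $H = G$, $\alpha$ is the identity, and $\tau(g)$ is conjugation by $g$, taking $G = S_3$ represented as permutation tuples; all names are illustrative.

```python
from itertools import permutations

# Elements of S3 as tuples: the permutation p sends i to p[i].
G = list(permutations(range(3)))

def compose(p, q):
    """(p ∘ q)(i) = p(q(i))."""
    return tuple(p[q[i]] for i in range(3))

def inverse(p):
    inv = [0] * 3
    for i, pi in enumerate(p):
        inv[pi] = i
    return tuple(inv)

# Inner-automorphism crossed module: H = G, alpha = id,
# tau(g)(h) = g h g^{-1}.  With alpha = id, both axioms reduce
# to identities about conjugation, so the checks must pass.
alpha = lambda h: h
tau = lambda g: (lambda h: compose(compose(g, h), inverse(g)))

# Axiom 1 (Peiffer): tau(alpha(h))(h') = h h' h^{-1}
assert all(tau(alpha(h))(h2) == compose(compose(h, h2), inverse(h))
           for h in G for h2 in G)

# Axiom 2 (equivariance): alpha(tau(g)(h)) = g alpha(h) g^{-1}
assert all(alpha(tau(g)(h)) == compose(compose(g, alpha(h)), inverse(g))
           for g in G for h in G)
print("crossed module axioms hold for S3")
```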
|
{}
|
- [Narrator] Let's now expose ourselves to another test of convergence, and that's the Alternating Series Test. Let's say that I have some series, some infinite series. Let's say it goes from N equals K to infinity of A sub N, and let's say I can rewrite A sub N as negative one to the N times B sub N, or as negative one to the N plus one times B sub N, where B sub N is greater than or equal to zero for all the Ns we care about. If all of these things are true and we know two more things — number one, the limit as N approaches infinity of B sub N is equal to zero; number two, B sub N is a decreasing sequence — then that lets us know that the original infinite series is going to converge.

This might seem a little bit abstract right now, so let's use this with an actual series to make it a little bit more concrete. Let's say that I had the series from N equals one to infinity of negative one to the N plus one over N. We could write it out: when N is equal to one, this is going to be negative one squared over one, which is going to be one. Then when N is two, it's going to be negative one to the third power, which is going to be negative one half. So it's one minus one half plus one third minus one fourth, plus, minus, and it keeps going on and on forever. The negative one to the N plus one is actually explicitly called out here, and that's kind of interesting, because we've already seen that if all of these terms were positive, we would just have the Harmonic Series, and that one didn't converge.

So does this one converge? Our B sub N is equal to one over N. Clearly this is going to be greater than or equal to zero for any positive N. Now what's the limit? The limit of one over N as N approaches infinity is going to be equal to zero. And one over N is a decreasing sequence for the Ns that we care about: as N increases, the denominators are going to increase, and with a larger denominator you're going to have a lower value. So all the constraints are satisfied, and therefore we can say that the original infinite series is going to converge.

Once again, if something does not pass the Alternating Series Test, that does not necessarily mean that it diverges; it just means that you couldn't use the Alternating Series Test to prove that it converges. Remember, the Divergence Test is really only useful if you want to show something diverges: if the terms of an infinite series don't approach zero, the series must diverge.

Related tests of convergence: the "direct" comparison test often compares a series to a known one — if it is less than a convergent series, it converges, and if it is larger than a divergent series, it diverges. The "limit" comparison test finds the limit of the ratio of two sequences; if the result is finite-positive, both series diverge or both converge. The ratio test is a most useful test for series convergence; it carries over intuition from geometric series to more general series. The integral test helps us determine a series' convergence by comparing it to an improper integral, which is something we already know how to find.

https://www.khanacademy.org/.../bc-series-new/bc-10-7/v/alternating-series-test

Khan Academy is a 501(c)(3) nonprofit organization. Our mission is to provide a free, world-class education to anyone, anywhere.
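The alternating harmonic series discussed above can be checked numerically: its partial sums settle toward a finite value (in fact ln 2), and the alternating series remainder is bounded by the first omitted term. A minimal sketch:

```python
import math

# Partial sums of sum_{n=1..N} (-1)^(n+1) / n, which the
# Alternating Series Test says converges (in fact, to ln 2).
def partial_sum(N):
    return sum((-1) ** (n + 1) / n for n in range(1, N + 1))

for N in (10, 1000, 100000):
    print(N, partial_sum(N))

# Alternating series estimation: |S - S_N| <= b_{N+1} = 1/(N+1).
assert abs(partial_sum(100000) - math.log(2)) <= 1 / 100001
```

By contrast, dropping the signs gives the harmonic series, whose partial sums grow without bound — which is why the test's hypotheses on B sub N matter.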
|
{}
|
# Math Help - Lin alg proofs help
1. ## Lin alg proofs help
Hi everyone!
I'm having trouble with these two linear algebra proofs, and help would be greatly appreciated.
1. Let V and W be real vector spaces, and let L: V → W be a linear transformation such that Ker(L) = {0_V} and Im(L) = W; let (v1, v2, v3) be a basis of V. Determine whether or not (L(v1), L(v2), L(v3)) is a basis for W.
I know how to prove they are linearly independent, but how do I prove that they are both linearly independent and a generating set?
2. Let A ∈ Mn(R) be a real n x n matrix (where n is an integer greater than or equal to 2), and assume λ1, λ2 ∈ R are two eigenvalues of A with λ1 ≠ λ2 (i.e., λ1 and λ2 are two distinct eigenvalues of A). Let W1 denote the eigenspace of A corresponding to the eigenvalue λ1, and let W2 denote the eigenspace corresponding to λ2.
Show that W1 ∩ W2 = {0_{R^n}}.
I know that W1 is simply the kernel of the matrix (λ1·I − A), but I don't know how to get past that.
2. ## Re: Lin alg proofs help
Hey Tra003.
For a basis, you need N linearly independent vectors where dim(V) = N. Once you have this, you definitely have a basis for any vector space.
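For problem 1, the point above can be illustrated numerically: a linear map with trivial kernel and full image is invertible, so it carries a basis to a basis. A sketch (the matrix below is an illustrative choice, not from the thread):

```python
import numpy as np

# If L: V -> W has Ker(L) = {0} and Im(L) = W, then L is invertible,
# so it sends any basis of V to a basis of W.
L = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0]])   # illustrative invertible matrix
assert abs(np.linalg.det(L)) > 1e-12   # nonzero det: Ker = {0}, Im = W

basis_V = np.eye(3)                    # v1, v2, v3 = standard basis
images = L @ basis_V                   # columns are L(v1), L(v2), L(v3)
print(np.linalg.matrix_rank(images))   # 3, so the images span W
```

Rank 3 means the three image vectors are linearly independent, and since dim(W) = 3 here, N independent vectors automatically form a basis.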
For the second one you should show that the two spaces are independent. If you are talking about two vectors then showing they are independent (i.e. not scalar multiples) of each other should suffice and you can do this by setting up a two row matrix and row-reducing it.
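For problem 2, the underlying algebra is short: if v ∈ W1 ∩ W2, then λ1·v = Av = λ2·v, so (λ1 − λ2)v = 0, and since λ1 ≠ λ2 this forces v = 0. A numerical sanity check of this fact (numpy; the matrix and eigenvalues are an illustrative choice): the intersection W1 ∩ W2 is the null space of the stacked matrix [A − λ1·I; A − λ2·I], which should be trivial.

```python
import numpy as np

# Example matrix with distinct eigenvalues 1 and 3.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
lam1, lam2 = 1.0, 3.0

# v lies in W1 ∩ W2 iff (A - lam1*I)v = 0 and (A - lam2*I)v = 0,
# i.e. iff v is in the null space of the stacked matrix below.
I = np.eye(2)
stacked = np.vstack([A - lam1 * I, A - lam2 * I])

# Full column rank means the null space is {0}, i.e. W1 ∩ W2 = {0}.
rank = np.linalg.matrix_rank(stacked)
print(rank)  # 2
assert rank == A.shape[0]
```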
3. ## Re: Lin alg proofs help
Hi,
I think even with "easy" problems, it helps to see complete proofs:
|
{}
|
# Is n an integer ? 1. n^2 is an integer. 2. Sqrt (n) is an integer
Intern
Joined: 19 Nov 2008
Posts: 41
Is n an integer ? 1. n^2 is an integer. 2. Sqrt (n) is an [#permalink]
### Show Tags
Updated on: 11 Jan 2009, 11:20
Is n an integer ?
1. n^2 is an integer.
2. Sqrt (n) is an integer
--== Message from GMAT Club Team ==--
This is not a quality discussion. It has been retired.
If you would like to discuss this question please re-post it in the respective forum. Thank you!
Originally posted by GmatEnemy on 10 Jan 2009, 12:03.
Last edited by GmatEnemy on 11 Jan 2009, 11:20, edited 1 time in total.
Senior Manager
Joined: 02 Nov 2008
Posts: 254
### Show Tags
10 Jan 2009, 12:34
GmatEnemy wrote:
Is n an integer ?
1. n^2 is an integer.
2. Sqrt (n) is an integer
I dont understand what the hell is the trap in this problem ?
Its soo simple but i am shocked with the answer.
(√2)^2 = 2, and √2 is not an integer.
2^2 = 4, and 2 is an integer.
So A is insufficient
N will always be an integer if its square root is an integer
So the correct answer is B
Manager
Joined: 04 Jan 2009
Posts: 229
### Show Tags
10 Jan 2009, 14:18
if n^2 is integer, it means nothing because if n^2 is negative, n is non-integer.
Now if sqrt(n) is integer, n is (not only) integer(, but non-negative. So they should have asked if n is non-negative integer. But the) answer to the question is B. because 2 is sufficient and 1 is not.
GmatEnemy wrote:
Is n an integer ?
1. n^2 is an integer.
2. Sqrt (n) is an integer
I dont understand what the hell is the trap in this problem ?
Its soo simple but i am shocked with the answer.
_________________
-----------------------
tusharvk
Manager
Joined: 28 Jul 2004
Posts: 135
Location: Melbourne
Schools: Yale SOM, Tuck, Ross, IESE, HEC, Johnson, Booth
### Show Tags
10 Jan 2009, 14:23
tusharvk wrote:
if n^2 is integer, it means nothing because if n^2 is negative, n is non-integer.
Now if sqrt(n) is integer, n is (not only) integer(, but non-negative. So they should have asked if n is non-negative integer. But the) answer to the question is B. because 2 is sufficient and 1 is not.
GmatEnemy wrote:
Is n an integer ?
1. n^2 is an integer.
2. Sqrt (n) is an integer
I dont understand what the hell is the trap in this problem ?
Its soo simple but i am shocked with the answer.
Tushar, how can n^2 be negative? I did not get that part. Also, negative numbers too are integers.
_________________
kris
Senior Manager
Joined: 02 Nov 2008
Posts: 254
### Show Tags
10 Jan 2009, 14:40
tusharvk wrote:
if n^2 is integer, it means nothing because if n^2 is negative, n is non-integer.
Now if sqrt(n) is integer, n is (not only) integer(, but non-negative. So they should have asked if n is non-negative integer. But the) answer to the question is B. because 2 is sufficient and 1 is not.
GmatEnemy wrote:
Is n an integer ?
1. n^2 is an integer.
2. Sqrt (n) is an integer
I dont understand what the hell is the trap in this problem ?
Its soo simple but i am shocked with the answer.
n^2 can never be negative; the lowest possible value of n^2 is 0.
Also, an integer can be negative.
Manager
Joined: 04 Jan 2009
Posts: 229
### Show Tags
10 Jan 2009, 15:01
For an integer (positive or negative) n, n^2 is always positive. But 1 gives only that n^2 is integer and not non-negative integer; so if n^2 is negative, n is not integer. Thus, 1 is not enough.
Next 2=>sqrt(n)=a=>a^2=n; Thus, n is non-negative i.e. 0 or positive. Thus, 2 is sufficient.
krishan wrote:
tusharvk wrote:
if n^2 is integer, it means nothing because if n^2 is negative, n is non-integer.
Now if sqrt(n) is integer, n is not only integer, but non-negative. So they should have asked if n is non-negative integer. But the answer to the question is B, because 2 is sufficient and 1 is not.
GmatEnemy wrote:
Is n an integer ?
1. n^2 is an integer.
2. Sqrt (n) is an integer
I dont understand what the hell is the trap in this problem ?
Its soo simple but i am shocked with the answer.
Tushar, how can n^2 be negative? I did not get that part. Also, negative numbers too are integers.
_________________
-----------------------
tusharvk
Manager
Joined: 28 Jul 2004
Posts: 135
Location: Melbourne
Schools: Yale SOM, Tuck, Ross, IESE, HEC, Johnson, Booth
10 Jan 2009, 15:16
oh !!! got it .
if n = sqrt(-2) , n^2 = -2.
A does not say whether n is a positive or negative, hence, (A) is not sufficient.
_________________
kris
GMAT Tutor
Joined: 24 Jun 2008
Posts: 1345
12 Jan 2009, 02:42
krishan wrote:
oh !!! got it .
if n = sqrt(-2) , n^2 = -2.
A does not say whether n is a positive or negative, hence, (A) is not sufficient.
No, all numbers on the GMAT are real numbers. You can never take square roots of negatives on the GMAT. If you see $$n^2$$ on the GMAT, it is always de facto true that $$n^2 \geq 0$$; it is never relevant to a GMAT problem to consider the possibility that $$n^2$$ is negative.
_________________
GMAT Tutor in Toronto
If you are looking for online GMAT math tutoring, or if you are interested in buying my advanced Quant books and problem sets, please contact me at ianstewartgmat at gmail.com
Manager
Joined: 04 Jan 2009
Posts: 229
12 Jan 2009, 06:33
If n^2 is always non-negative on the GMAT, does that change the answer?
Your point is well taken; I do not think it changes the answer.
n^2 = 2 is an integer, but n is then a non-integer.
If sqrt(n) is an integer, then n, being the square of an integer, is an integer, and it is also non-negative.
IanStewart wrote:
krishan wrote:
oh !!! got it .
if n = sqrt(-2) , n^2 = -2.
A does not say whether n is a positive or negative, hence, (A) is not sufficient.
No, all numbers on the GMAT are real numbers. You can never take square roots of negatives on the GMAT. If you see $$n^2$$ on the GMAT, it is always de facto true that $$n^2 \geq 0$$; it is never relevant to a GMAT problem to consider the possibility that $$n^2$$ is negative.
_________________
-----------------------
tusharvk
GMAT Tutor
Joined: 24 Jun 2008
Posts: 1345
12 Jan 2009, 12:06
tusharvk wrote:
If n^2 is always non-negative on the GMAT, does that change the answer?
Your point is well taken; I do not think it changes the answer.
n^2 = 2 is an integer, but n is then a non-integer.
If sqrt(n) is an integer, then n, being the square of an integer, is an integer, and it is also non-negative.
The original answer is correct, but not because n^2 might be negative. The original answer is correct because n^2 might be equal to a positive integer which is not a perfect square.
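A quick numeric sketch of this point (my own illustration, not from the thread): statement 1 fails because n^2 can be an integer like 2 without n being an integer, while statement 2 forces n to be a perfect square.

```python
import math

# Statement 1: n^2 being an integer does NOT force n to be an integer.
n = math.sqrt(2)                  # n = 1.414..., not an integer
assert abs(n ** 2 - 2) < 1e-9     # yet n^2 = 2, an integer

# Statement 2: sqrt(n) being an integer forces n = k^2 for some integer k,
# so n is an integer (and non-negative).
for k in range(10):
    n = k ** 2                    # here sqrt(n) = k is an integer
    assert isinstance(n, int) and n >= 0

print("checks pass")
```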
_________________
GMAT Tutor in Toronto
If you are looking for online GMAT math tutoring, or if you are interested in buying my advanced Quant books and problem sets, please contact me at ianstewartgmat at gmail.com
## Question
There are 8 distinct boxes and each box can hold any number of balls. A child having 4 identical balls randomly chooses four boxes and puts one ball in each. Then another child, having four balls identical to those previously mentioned, again puts one ball in each of four arbitrarily chosen boxes. The probability that there are balls in at least 6 boxes is
Sir pls solve
Joshi sir comment
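Not part of the original post, but the event can be enumerated exactly: fixing the first child's four boxes, "balls in at least 6 boxes" means the two selections overlap in at most 2 boxes. A short brute-force sketch (the resulting 53/70 is my own calculation, so please verify):

```python
from fractions import Fraction
from itertools import combinations

# Fix the first child's four boxes (by symmetry this loses no generality)
# and enumerate every choice the second child can make.
first = set(range(4))
favorable = total = 0
for second in combinations(range(8), 4):
    total += 1
    if len(first | set(second)) >= 6:   # balls end up in at least 6 boxes
        favorable += 1

print(Fraction(favorable, total))  # 53/70
```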
• ### Power Consumption Of Vibrating Screen - Panola Mining
Vibrating Screen. 2. Using the eccentric blocks to generate strong vibrating force. 3. Screen cross-beam and box are connected by strong bolts, simple structure, convenient in maintenance. 4. Small amplitude, high frequency, large-inclination structure, high efficiency, large capacity, long service life, low power consumption and low noise.
• ### Vibrating Screen Capacity Calculations – MEKA
The material velocity of a circular vibrating screen can be calculated from the corrected theoretical speed of the product formula written below. Example: Determine the material velocity of a screen vibrated at 900 RPM with a 12 mm
• ### power requirement for vibrating screen
2020-2-10 Check the power supply voltage of the vibrating screen. 5. Check the interlock control device of the vibration motor. 6. Remove the end cap of the motor, and check the relative phase angle of the sector is consistent, tighten the sector screw fastening. 7. The two motors should be installed at the same angle.
• ### Sales of different types of vibrating screens Haiside
2022-5-24 The overall weight of the screen body can be reduced to the greatest extent under the conditions that meet the requirements of all aspects of the vibrating screen. Make the whole vibrating screen have high strength, light weight, large processing capacity, low power consumption, high vibration intensity and high screening efficiency;
• ### Effect of Mode Amplitude on Power Consumption in
2021-2-23 where $$N_{i}$$ and $$N_{f}$$ are initial and final power (kW), respectively; $$N_{\hbox{max} }$$ is maximum power (kW).. The experimental value of power consumption in the loaded vibrating mixer may be considered under two parts. The first one is the power transmitted directly to the load to overcome the resistance caused by the material, and the second one is
• ### Basic concepts of vibrating screens: What they are, what
What are vibrating screens and which are its main applications for use. Also called simply screens, a vibrating screen is formed by a vibrant chassis that supports in its interior one or several surfaces or elements of screening. The screens serve to classify the different particles by size, starting from a bulk product in a continuous process.
• ### Vibrating Screen Working Principle
2015-7-26 Source: This article is a reproduction of an excerpt of “In the Public Domain” documents held in 911Metallurgy Corp’s private library. screening-capacity screen-capacity vibratory-screen-design-vibrating-screen-types-selection Screen Frame Sizes and Scale-Up Problems Major Screen Components. Now, essentially you can break screens down into three
• ### 9 Factors That Affect Efficiency of Vibrating Screen
2020-1-14 Vibrating screen, as one of the common screening equipment, mainly depends on its vibration characteristics to complete the classification, dehydration and desilting operations, which is widely used in the screening operation of mineral processing plant. ... 7 Tips for Energy Saving and Consumption Reduction in Mineral Processing Plant. 2022-02 ...
• ### 5 Performance Parameters Affect Efficiency of Vibrating
2019-11-7 Shaft Angle of Vibrating Screen. ... 7 Tips for Energy Saving and Consumption Reduction in Mineral Processing Plant. 2022-02-24(02:02:02) Tag. Flotation Thickener Ball Mill Screening Equipment Classifier Crushing Magnetic Separator Gold
• ### Importance of Vibrating Screen to the Powder Production
2012-10-7 The vibrating screen is a kind of high-precision powder griddle indispensable for the powder coating production . Its main principle is to use the vibration motor, transfered the three-dimensional four-way exciting force vibration generated to the screen surface,divided the online paint into the material online material and network, so as to achieve the purpose of screening
• ### Types Of Vibrating Screen and How It Works - JXSC Machine
2021-7-27 The utility model has the characteristics of low energy consumption, high efficiency, simple structure, easy maintenance, fully enclosed structure and no dust overflow. The maximum screening mesh is 400 mesh, which can screen out 7 materials with different particle sizes. ... Mining vibrating screen. The power device of this series of linear ...
• ### Types Of Vibrating Screens Introduction And
2022-5-21 The screen has the advantages of simple structure, reliable performance, large screening capacity, and economical use. Its low power consumption and low maintenance cost, it is the simplest and most practical
Charlar en Línea
• ### Shortcomings of Vibrating Screen and Corrective
2019-7-27 The vibrating screen has larger force of vibration and friction which causes the wear of the machine parts. The higher friction of the conventional vibrating screen leads to higher power requirement for instance in industries the motor power requirement is around 7.5 kW to
Charlar en Línea
• ### Construction Working and Maintenance of Vibrators and
2020-5-9 7 Vibrating Screen Installation, Start up and Adjustments 54 8 Operation and Maintenance of Vibrating Screens 57 9 Checking of Stroke Length and Stroke Angle 63 10 Natural Frequency and Resonance 65 ... reduces the power consumption. Scalping thus reduces unnecessary operation and maintenance costs.
• ### Variable elliptical vibrating screen: Particles kinematics and ...
2021-11-1 1. Introduction. Coal is one of the most important fossil energy sources in the world , , .In 2019, coal consumption accounted for 57.7% of the total national energy consumption in China .Although coal has greatly contributed to the development of the world economy and industrial production, smoke and harmful gases formed due to direct combustion of raw coal might
• ### Importance of Vibrating Screen to the Powder Production ...
2012-10-7 The vibrating screen is a kind of high-precision powder griddle indispensable for the powder coating production . Its main principle is to use the vibration motor, transfered the three-dimensional four-way exciting force vibration generated to the screen surface,divided the online paint into the material online material and network, so as to achieve the purpose of screening
• ### Vibrating Screen Power Requirement - bulk-online
2006-12-22 Know-How Vibrating Equipment Consultancy. The following formula can be used in order to to determining minimum electric motor sizes for circular motion vibrating screens. kW = (mR x n x 1.25) / (97,400 x LRT) where. kW = design power. mR = total static moment of counterweight assemblies. i.e.: mass x radius value (kgcm)
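The formula above transcribes directly into code; the numbers in the example below are made up purely for illustration, not taken from any real screen:

```python
def screen_motor_power_kw(m_r_kgcm, rpm, lrt):
    """Minimum motor power for a circular-motion vibrating screen,
    per the bulk-online formula quoted above:
    kW = (mR * n * 1.25) / (97,400 * LRT)."""
    return (m_r_kgcm * rpm * 1.25) / (97400 * lrt)

# Hypothetical inputs: mR = 3000 kgcm, n = 900 RPM, LRT = 0.5
print(round(screen_motor_power_kw(3000, 900, 0.5), 1))  # 69.3
```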
• ### 5 Performance Parameters Affect Efficiency of Vibrating
2019-11-7 Irrespective of the hard factors of material properties and equipment structure, we mainly analyze the influence of various parameter indexes on the efficiency of the vibrating screen, including amplitude, vibration frequency,vibration direction angle, screen tilt angle and the throwing angle, these are the reference for the vibrating screen ...
• ### How Much Power (Watts) Does a Monitor Use? - The
Typically desktop monitors consume between 20 and 100 watts of power. Things that affect the power consumption of a monitor are its screen size, model, and what is displayed on it. If you use a typical 21-inch monitor for 3 hours per day, you will consume up to about 0.3 kWh, which costs only a few cents per day.
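The arithmetic behind these figures is simply power times time; the electricity rate used below is an assumed value, since rates vary by region:

```python
def daily_energy_kwh(watts, hours_per_day):
    # Energy (kWh) = power (W) x time (h) / 1000
    return watts * hours_per_day / 1000

# A 20-100 W monitor used 3 hours per day:
low = daily_energy_kwh(20, 3)    # 0.06 kWh
high = daily_energy_kwh(100, 3)  # 0.3 kWh

# Daily cost at an assumed rate of 12 cents/kWh:
print(low, high, round(high * 12, 2))  # 0.06 0.3 3.6
```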
• ### How to Calculate LED Display Power Consumption?
2020-6-24 To begin with, buyers are required to determine the input voltage and current utilized by the LED screens. Based on a study, the best or ideal input current is around 20 mA, while the voltage for an LED display should be 5 V. But there is some evidence that the current of the LED cannot go up to 20 mA while the power ...
• ### How to Check the Power Consumption of PC - The Tech
2021-11-15 Here’s how to put LocalCooling to work for you: Step 1: Download the app and install it on your device. Step 2: Open the app on your device. Step 3: Select My Power Tab from the Settings menu. Step 4: On the My Power tab, you’ll get a gist of how much electricity each computer component, including Monitor and Hard drive, consumes.
## Cesàro averaging operators.(English)Zbl 1024.47014
K. F. Andersen proved in [Proc. R. Soc. Edinb., Sect. A 126, 617-624 (1996; Zbl 0865.47020)] that the generalized Cesàro operator defined by $C^{\gamma} f(z)= \sum_{n=0}^\infty \bigg({1\over A_n^{\gamma+1}}\sum_{k=0}^n A_{n-k}^{\gamma}a_k\bigg) z^n,$ where $$f(z)=\sum_{n=0}^\infty a_n z^n$$ is an analytic function on the unit disc $$U$$ and $$A_k^\gamma$$ is the $$k$$th coefficient of the series expansion of $$(1-x)^{-1-\gamma}$$, satisfies the inequality $M_p( C^{\gamma} f, r)\leq C_{\gamma, p} M_p( f, r)$ for every $$0<r<1$$ and Re $$\gamma>-1$$, where $$M_p$$ denotes the integral mean in $$L^p$$. In the present paper, the above result is extended to analytic functions defined on the polydisk, where the operator $$C^{\gamma}$$ is replaced by the so-called generalized Cesàro operator $C^{\bar\gamma} f(z)= \sum_{|\alpha|=0}^\infty \Biggl({\sum_{\beta\leq\alpha} (\prod_{j=1}^n A_{\beta_j}^{\gamma_j}) a_{ \alpha-\beta } \over \prod_{j=1}^n A_{\alpha_j}^{\gamma_j+1}} \Biggr) z^\alpha,$ whenever $$f(z)=\sum_{|\alpha|=0}^\infty a_\alpha z^\alpha$$ is an analytic function on $$U^n$$.
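As a numerical aside (my own sketch, not part of the review): restricting to integer $$\gamma \geq 0$$, so that $$A_n^\gamma = \binom{n+\gamma}{n}$$, the coefficients of the one-variable operator $$C^\gamma f$$ can be computed directly from a truncated Taylor series, and $$\gamma = 0$$ recovers the classical Cesàro means:

```python
from math import comb

def A(n, g):
    # A_n^gamma = C(n + gamma, n), the Taylor coefficients of (1-x)^(-1-gamma)
    # (integer gamma >= 0 only, for simplicity)
    return comb(n + g, n)

def cesaro_coeffs(a, g):
    """Taylor coefficients of C^gamma f for a truncated f(z) = sum a_k z^k."""
    return [sum(A(n - k, g) * a[k] for k in range(n + 1)) / A(n, g + 1)
            for n in range(len(a))]

# gamma = 0 reduces to the classical Cesaro means (1/(n+1)) * sum_{k<=n} a_k:
print(cesaro_coeffs([1.0, 2.0, 3.0, 4.0], 0))  # [1.0, 1.5, 2.0, 2.5]
```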
### MSC:
47B38 Linear operators on function spaces (general) 46E15 Banach spaces of continuous, differentiable or analytic functions 30H05 Spaces of bounded analytic functions of one complex variable
Zbl 0865.47020
Full Text:
# Help Me With These Simple Formulae Questions?
## Hi all, first of all I'd like to thank you for your time. Please answer all three of these questions below in as much detail as you can, including detailed steps. Thank you! Question 7 & 8 Question 8 (Review Set)
Question "7" - see below:
#### Explanation:
For the question "7":
$S = \frac{1}{2} a {t}^{2}$
solve for $t$:
$2 S = a {t}^{2}$
${t}^{2} = \frac{2 S}{a}$
$t = \sqrt{\frac{2 S}{a}}$
~~~~~
When $a = 8 , S = 40$
$t = \sqrt{\frac{2 \left(40\right)}{8}} = \sqrt{\frac{80}{8}} = \sqrt{10} \sec$
When $a = 8 , S = 10$
$t = \sqrt{\frac{2 \left(10\right)}{8}} = \sqrt{\frac{20}{8}} = \sqrt{\frac{5}{2}} \sec$
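A quick numeric check of the two values above (my own addition, not part of the original answer):

```python
import math

def t_from(S, a):
    # t = sqrt(2S / a), rearranged from S = (1/2) a t^2
    return math.sqrt(2 * S / a)

print(round(t_from(40, 8), 4))  # 3.1623, i.e. sqrt(10) s
print(round(t_from(10, 8), 4))  # 1.5811, i.e. sqrt(5/2) s
```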
Question "8" - see below:
#### Explanation:
Question "8"
$m = {m}_{o} / \sqrt{1 - {\left(\frac{v}{c}\right)}^{2}}$
Solve for $v$:
$m \sqrt{1 - {\left(\frac{v}{c}\right)}^{2}} = {m}_{o}$
$\sqrt{1 - {\left(\frac{v}{c}\right)}^{2}} = {m}_{o} / m$
$1 - {\left(\frac{v}{c}\right)}^{2} = {\left({m}_{o} / m\right)}^{2}$
${\left(\frac{v}{c}\right)}^{2} = 1 - {\left({m}_{o} / m\right)}^{2}$
$\frac{v}{c} = \sqrt{1 - {\left({m}_{o} / m\right)}^{2}}$
$v = c \sqrt{1 - {\left({m}_{o} / m\right)}^{2}}$
~~~~~
Find $v$ as a fraction of $c$ such that $m = 3 {m}_{o}$
$\frac{v}{c} = \sqrt{1 - {\left({m}_{o} / \left(3 {m}_{o}\right)\right)}^{2}}$
$\frac{v}{c} = \sqrt{1 - {\left(\frac{1}{3}\right)}^{2}}$
$\frac{v}{c} = \sqrt{1 - \left(\frac{1}{9}\right)}$
$\frac{v}{c} = \sqrt{\frac{8}{9}}$
$v = c \frac{\sqrt{8}}{3} \approx 0.9428 c$
~~~~~
Find $v$, with $m = 30 {m}_{o}$ and $c \approx 3 \times {10}^{8}$
$v = \left(3 \times {10}^{8}\right) \sqrt{1 - {\left({m}_{o} / \left(30 {m}_{o}\right)\right)}^{2}}$
$v = \left(3 \times {10}^{8}\right) \sqrt{1 - {\left(\frac{1}{30}\right)}^{2}}$
$v = \left(3 \times {10}^{8}\right) \sqrt{1 - \left(\frac{1}{900}\right)}$
$v = \left(3 \times {10}^{8}\right) \sqrt{\frac{899}{900}} \approx 299,833,287 \text{ m s}^{-1}$
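The two computations above can be checked numerically (my own sketch, not part of the original answer):

```python
import math

def speed_for_mass_ratio(ratio, c=3e8):
    # v = c * sqrt(1 - (m0/m)^2), rearranged from m = m0 / sqrt(1 - (v/c)^2)
    return c * math.sqrt(1 - (1 / ratio) ** 2)

print(round(speed_for_mass_ratio(3, c=1), 4))  # 0.9428 (as a fraction of c)
print(round(speed_for_mass_ratio(30)))         # 299833287 (m/s)
```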
Review Set 8 - see below:
#### Explanation:
Review Set 8:
With the set of consecutive odd integers, we are given:
$1 , 3 , 5 , 7 , \ldots$
The next 3 terms are:
$9 , 11 , 13$
The $n$th term can be found:
$2 n - 1$ where $n$ is a natural number ($n > 0$)
For the pattern of ${T}_{1} , {T}_{2} , {T}_{3} , \ldots$
where ${T}_{1} = \frac{1}{1 \times 3} , {T}_{2} = \frac{1}{3 \times 5} , {T}_{3} = \frac{1}{5 \times 7}$
following the pattern:
${T}_{4} = \frac{1}{7 \times 9} , {T}_{5} = \frac{1}{9 \times 11} , {T}_{6} = \frac{1}{11 \times 13}$
notice that, in the denominator, the smaller of the two numbers is the $n$th odd number, $2 n - 1$. For instance, for ${T}_{6} = \frac{1}{11 \times 13}$, the subscript 6 is $n$, and so the $n$th odd number is $2 n - 1 = 2 \left(6\right) - 1 = 11$.
Hence, for the 20th term, we'll have the lesser term in the denominator as:
$2 n - 1 = 2 \left(20\right) - 1 = 39$
and therefore:
${T}_{20} = \frac{1}{39 \times 41}$
and
${T}_{n} = \frac{1}{\left(2 n - 1\right) \left(2 n + 1\right)}$
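The formula for ${T}_{n}$ is easy to check programmatically (my own addition, not part of the original answer):

```python
from fractions import Fraction

def T(n):
    # T_n = 1 / ((2n - 1)(2n + 1))
    return Fraction(1, (2 * n - 1) * (2 * n + 1))

assert T(1) == Fraction(1, 1 * 3)
assert T(3) == Fraction(1, 5 * 7)
assert T(20) == Fraction(1, 39 * 41)
print(T(20))  # 1/1599
```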
# Derivation from Landau and Lifshitz, vol 6
1. Oct 20, 2015
### Geofleur
I have starting working through section 134 of Landau and Lifshitz, vol 6, and it seems I have entered some kind of twilight zone where all my math/physics skills have left me
The derivation starts with the energy-momentum tensor for an ideal fluid:
$T^{ik} = wu^i u^k - p g^{ik}$,
where the Latin indices range from 0 to 3 (Greek indices would range from 1 to 3), $w$ is the enthalpy, $u^i$ is component i of the four-velocity, $p$ is the pressure, and $g^{ik}$ is the component ik of the Minkowski metric (with signature $g^{00} = 1$). The derivation also employs the equation for conservation of particle number:
$\frac{\partial}{\partial x^i} \left( nu^i \right) = 0$,
where $n$ is the proper number density of the particles. We lower the first upper index of $T^{ik}$ using the metric tensor as
$T_{i}^{\ k} = g_{im}T^{mk} = wg_{im}u^m u^k - p g_{im} g^{mk} = wu_i u^k -p \delta_i^k.$
Now we take the four divergence and set it equal to zero,
$\frac{\partial T_i^{\ k}}{\partial x^k} = \frac{\partial}{\partial x^k} \left[ wu_i u^k \right] - \frac{\partial p}{\partial x^i} = u_i \frac{\partial}{\partial x^k}\left[ w u^k \right] + w u^k \frac{\partial u_i}{\partial x^k} - \frac{\partial p}{\partial x^i} = 0$.
And here is where the trouble starts, because Landafshitz has the above equation with a plus sign next to the pressure term, not a minus. But it gets worse! In the next step, they say that $u_i u^i = -1$. Now I must be really confused, because I thought that $(u^i ) = \gamma (1,\mathbf{v})$, so that
$u_i u^i = u_0 u^0 + u_\alpha u^{\alpha} = \gamma^2 (1 - v^2) = 1$,
where $\gamma$ is the Lorentz factor, and the speed of light has been set to unity.
Can anyone out there help me get this mess straightened out?
Last edited by a moderator: Oct 21, 2015
2. Oct 20, 2015
### Orodruin
Staff Emeritus
What? 5 dimensional space time?
Some authors use different conventions for the metric (+---) or (-+++). The difference is the appearance of some signs here and there. You should check what convention is being adopted in each text you are dealing with.
3. Oct 20, 2015
### Geofleur
Ah yes, thanks, I made the appropriate edit.
I did check this - in a footnote at the beginning of the chapter, the say that the metric has diagonal (1,-1,-1,-1), which is what I'm used to. At any rate, I can't see how the magnitude of the four velocity could end up negative
4. Oct 20, 2015
### Orodruin
Staff Emeritus
In the (-+++) convention, the norm squared of all time-like vectors are negative. Admittedly, I do not like this convention and usually use (+---), but it is good to know about it.
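This is easy to verify numerically (my own sketch, 1+1 dimensions, c = 1): the same four-velocity has norm squared +1 under (+-) and -1 under (-+).

```python
import math

def four_velocity_norm_sq(v, signature):
    """Norm squared of u = (gamma, gamma*v) in 1+1 dimensions, c = 1."""
    gamma = 1 / math.sqrt(1 - v * v)
    u0, u1 = gamma, gamma * v
    if signature == "(+-)":
        return u0 * u0 - u1 * u1
    return -u0 * u0 + u1 * u1      # the "(-+)" convention

print(four_velocity_norm_sq(0.6, "(+-)"))  # 1.0
print(four_velocity_norm_sq(0.6, "(-+)"))  # -1.0
```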
5. Oct 20, 2015
### Geofleur
I see - and when I worked it out just now it did come out that way! I knew that the Minkowski norm is not positive definite, but a negative magnitude of the four velocity is just weird. Also, it isn't consistent with their footnote...
I am not sure how that would help the issue with the pressure derivative having the wrong sign.
Last edited: Oct 20, 2015
6. Oct 20, 2015
### Geofleur
OK, this is unbelievable. Out of all 10 volumes, I happen to also have the version of 6 that is in Russian. When I turn to the same page where the problems occur, the signs all make sense!! In the Russian edition, $u^i u_i = 1$ and there is a negative sign in front of the pressure term. Здорово! ("Great!")
7. Oct 20, 2015
### PAllen
I also prefer the convention (+---), but books are all over the place and you must be very careful about the convention used and the signs. No less that Gerard 't Hooft has argued that only the (-+++) is worthwhile, and that (+---) is idiotic and leads only to sign errors (see, for example, the intro to http://www.staff.science.uu.nl/~hooft101/lectures/genrel_2013.pdf). Of course, this being purely a convention, neither I nor you have to follow t'Hooft's opinion.
8. Oct 21, 2015
### Orodruin
Staff Emeritus
Personally, I would do many more sign errors with the (-+++) convention ... There is just something with the four-momentum squared being minus the mass squared which is unappealing to me.
9. Oct 21, 2015
### martinbn
Sorry for the off topic but since the question is resolved and it seems to be just a typo, I don't feel too guilty. I find it interesting that people here prefer (+---). I thought, for some reason, that relativists in general prefer (-+++) and the other convention is preferred by 'particle theorists who don't understand relativity'.
10. Oct 21, 2015
### Orodruin
Staff Emeritus
Well, in the same sense as the (-+++) convention is preferred by relativists who don't understand particle theory.
11. Oct 21, 2015
### PWiz
I think it also has to do with which convention you're first introduced to. I started off SR with Schutz, and after going through Sean Carroll's notes on GR, I just can't get out of the (-+++) habit. Usually when I see vectors with positive norms in relativity I first think "so we're dealing with spacelike vectors huh", and it often takes me a while to realize that the (+---) convention is being used instead.
12. Oct 21, 2015
### martinbn
:) Yes, but they do relativity, not understanding other areas is fine.
I don't know. I used to like the (+---) but then I realized I was wrong and the (-+++) is the 'right' way to go.
13. Oct 21, 2015
### Orodruin
Staff Emeritus
To throw in a famous quote: This is not even wrong.
For things which are conventions there can be no wrong or right. They are physically indistinguishable and only a matter of philosophical debate and personal taste - much like debating QM interpretations, which I also find utterly repetitive and not bringing any new actual scientific value.
14. Oct 21, 2015
### martinbn
I guess the intended tone was not clear, it was a joke, I should have put some of those silly smilies not just the quotes' '
15. Oct 21, 2015
### dextercioby
Most of the quantum field theory texts use the +--- convention. It has been with us since Schweber and Bjorken/Drell. However, this mostly-minus convention has the disadvantage that you cannot go to 5, 6, ... space-time dimensions. That is why it helps to use +++- throughout.
16. Oct 23, 2015
### Ben Niehoff
The (-+++) convention is infinitely better if you work in a variable number of dimensions. It minimizes the number of $(-1)^d$ you have to write and keep track of.
17. Oct 23, 2015
### Staff: Mentor
In my personal study I tend to write the interval as $ds$ when I am using the (-+++) convention and $d\tau$ when I am using (+---). That helps me keep things straight in my mind. I haven't seen anyone else do that, so there is probably a problem with it.
18. Oct 23, 2015
### PAllen
I do the same thing, and have seen no problems with that convention.
19. Oct 23, 2015
### SlowThinker
I'm pretty sure Susskind does that in his lectures.
20. Oct 23, 2015
### Staff: Mentor
That may be where I picked it up. I have seen those lectures several years ago.
# What are the first shots puppies get?
## How many shots do puppies need before going outside?
When can puppies go out for the first time? In their first 16-18 weeks, puppies typically go through three rounds of vaccinations. After each round of vaccinations, there is a five to seven day waiting period until they are fully effective.
## What shots do puppies need 9 weeks?
Puppy Vaccination Schedule
| Age | Recommended | Optional Vaccinations |
| --- | --- | --- |
| 6 to 8 weeks | Core vaccination | Bordetella (kennel cough) |
| 9 to 11 weeks | Core vaccination | Coronavirus, leptospirosis, Bordetella, Lyme disease |
| 16 weeks | Rabies (varies by state) | |
| 12 to 14 weeks | Core vaccination | Coronavirus, Lyme disease, leptospirosis |
## Do puppies need 3 vaccinations?
Puppy vaccinations
Puppies are particularly vulnerable to serious diseases like parvovirus and canine distemper, so it’s vital they receive their initial course of three vaccinations.
## Can I give my puppy vaccinations myself?
Do-It-Yourself Vaccinations
We sell dog and cat vaccinations that you can administer to your pet on your own at home. These include the Canine Spectra™ 10, Canine Spectra™ 9, Canine Spectra™ 6, Canine Spectra™ 5, Kennel-Jec™ 2, and Feline Focus™ 3 (vaccine drops). Only vaccinate healthy animals.
## When should puppies be vaccinated?
Puppies are typically vaccinated at eight and ten weeks (although they can be vaccinated as early as four-six weeks of age) with the second dose usually being given two to four weeks later. Speak to your vet about the best timings. Your puppy will then require a booster vaccination at 6 or 12 months of age.
## When should I get my puppy vaccinated?
When should I start vaccinating my pet? If you have kittens or puppies, the first round of vaccinations (usually two or three vaccines), are given at around six to eight weeks old. The final vaccine, however, should not be given before your pet turns sixteen weeks.
## When should puppies be wormed for the first time?
Pups should be wormed for the first time at 2 weeks of age, then at 4, 6, 8, 10 and 12 weeks old (fortnightly until 12 weeks of age). After this they can be wormed monthly until they are 12 months old.
## When should you take your puppy to the vet for the first time?
Most puppies go home to their pet parents at around 6 to 8 weeks of age, and this is the perfect time for a first visit to the vet. You can push their first visit to 10 weeks of age if necessary, but the longer you wait, the more you put your pup at risk.
## When can a puppy go outside?
When can I take my puppy outside? Vets tend to recommend not taking your puppy into public places until about a fortnight after your puppy has had its second vaccination, at around 14-16 weeks. This is because they can easily pick up nasty viruses like parvovirus and distemper.
## Can I take my puppy outside to pee before vaccinations?
If you’re wondering when can puppies go outside away from home, the American Veterinary Society of Animal Behavior (AVSAB) recommends that pet guardians begin taking puppies on walks and public outings as early as one week after their first round of vaccinations, at about seven weeks old.
## When can puppies get a bath?
13 Steps to Bathe a Puppy
Unless they get quite grubby, have a close encounter with a skunk, or are dog show prospects that need frequent grooming, most puppies shouldn’t need a bath more than one to four times a year. Puppies shouldn’t be bathed until they are at least four weeks old—six or eight weeks is better.
## Can I carry my puppy outside before vaccinations?
Taking your puppy for their first walk is a huge milestone, but you’ll need to wait until they are fully covered by their vaccinations to be safe. This is usually a few weeks after their second jab in their primary vaccination course, but this does vary from vaccine to vaccine.
# Homework Help: Electromag applied to Capacitor
1. Feb 22, 2010
### likephysics
Electromagnetics applied to Capacitor
1. The problem statement, all variables and given/known data
consider the circular cylindrical parallel plate capacitor shown. The cap has height hc, diameter 2a, and its interior volume is filled with a linear and homogeneous material with permittivity epsilon=e'-je'' and conductivity sigma. A time harmonic source of voltage V0 is applied between the top and bottom perfectly conducting plates of the capacitor such that at t=0, the top plate is at its max voltage. The capacitor can be considered small relative to the wavelength associated with the source frequency and its interior volume v is enclosed by a snug surface S. Assuming V0,hc,2a, e, sigma are known and neglecting all field fringe effects, provide answers to the items below:
a) What is the uniform E field inside the cap
b) what is the displacement current density Jd inside the cap
c) what is the induced current density Jc inside the cap
d) what is the electric current I, flowing on the wire that connects the source to the cap
e) what is the uniform mag field H on the surface(at r=a)
f) starting from E and H, using poynting theorem over the cylindrical surface S enclosing the cap, determine power P provided to the cap
g) from item f, determine the complex impedance z of the cap
h) from result of item g, show that cap is always imperfect; it is actually perfect cap in parallel with a perfect resistor, where
C = epsilon (pi a^2 / hc)
and
R = [1/(sigma + omega e'')] [hc/(pi a^2)]
2. Relevant equations
3. The attempt at a solution
a) E field inside a cap is sigma/epsilon
b) displacement current Jd is I/Area = I/pi a^2 hc
c) induced current Ji = sigma * E field
d) Not sure. I = Jd + Ji, or is it I = C dv/dt?
e) H = I/(2 pi a) (by using ∮ B.dl = mu*I)
f) I did ∮ (ExH).ds and got (sigma/epsilon)*I/(2 pi a)
Not sure if my answer is right.
g) and h) unable to solve. Help!
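Not the poster's worked solution, but the standard quasi-static relations for this geometry (E = V0/hc, Jd = jωεE, Jc = σE, and the parallel-RC model from part h) can be sketched numerically; all the numbers below are made-up placeholders, not values from the problem:

```python
# Quasi-static parallel-plate capacitor relations (a sketch; example values only).
import math

# Assumed example values (NOT from the problem statement)
V0, hc, a = 10.0, 1e-3, 1e-2            # volts, plate gap (m), radius (m)
eps_p, eps_pp = 2e-11, 1e-12            # e' and e'' (F/m)
sigma, omega = 1e-6, 2 * math.pi * 1e3  # conductivity (S/m), angular frequency

eps = complex(eps_p, -eps_pp)           # epsilon = e' - j e''
A = math.pi * a**2                      # plate area

E = V0 / hc                             # (a) uniform field (phasor magnitude)
Jd = 1j * omega * eps * E               # (b) displacement current density
Jc = sigma * E                          # (c) conduction current density
I = (Jc + Jd) * A                       # (d) total current on the feed wire
H = I / (2 * math.pi * a)               # (e) azimuthal H at r = a

# (g)/(h): impedance matches a perfect C = e' A/hc in parallel with
# a resistor R = hc / ((sigma + omega e'') A)
C = eps_p * A / hc
R = hc / ((sigma + omega * eps_pp) * A)
Z = V0 / I
Z_model = 1 / (1/R + 1j * omega * C)
print(Z, Z_model)                       # the two agree, confirming the RC model
```

The algebra behind the check: Z = hc / ((σ + ωe'' + jωe')A), which is exactly 1/(1/R + jωC) with the C and R quoted in part h.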
Last edited: Feb 22, 2010
2. Feb 22, 2010
Anybody?
|
{}
|
Students can Download Maths Chapter 4 Determinants Ex 4.6 Questions and Answers, Notes Pdf, 2nd PUC Maths Question Bank with Answers helps you to revise the complete Karnataka State Board Syllabus and score more marks in your examinations.
## Karnataka 2nd PUC Maths Question Bank Chapter 4 Determinants Ex 4.6
### 2nd PUC Maths Determinants NCERT Text Book Questions and Answers Ex 4.6
Examine the consistency of the system of equations.
Question 1.
x + 2y = 2
2x + 3y =2
hence the system is consistent and has a unique solution.
Question 2.
2x – y = 5
x + y = 4
hence the system is consistent and has a unique solution
Question 3.
x + 3y = 5
2x + 6y = 8
Question 4.
x + y + z = 1
2x + 3y + 2z = 2
ax + ay + 2az = 4
case (i).
If a ≠ 0,
the system is consistent and has a unique solution.
case (ii). If a = 0 we can't say anything about
consistency.
Now we find whether
(Adj A) B is zero or not
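The (adj A)B test above can be sketched in plain Python for the a = 0 case of Question 4: with A = [[1,1,1],[2,3,2],[0,0,0]] and B = [1, 2, 4], det A = 0, and a nonzero (adj A)B shows the system is inconsistent.

```python
# Consistency check for Question 4 with a = 0: det A = 0, so test (adj A)B.
def det3(m):
    a, b, c = m[0]
    d, e, f = m[1]
    g, h, i = m[2]
    return a*(e*i - f*h) - b*(d*i - f*g) + c*(d*h - e*g)

def adj3(m):
    # adjugate = transpose of the cofactor matrix
    def minor(r, c):
        rows = [m[i] for i in range(3) if i != r]
        sub = [[row[j] for j in range(3) if j != c] for row in rows]
        return sub[0][0]*sub[1][1] - sub[0][1]*sub[1][0]
    cof = [[(-1)**(r + c) * minor(r, c) for c in range(3)] for r in range(3)]
    return [[cof[c][r] for c in range(3)] for r in range(3)]  # transpose

A = [[1, 1, 1], [2, 3, 2], [0, 0, 0]]   # coefficient matrix when a = 0
B = [1, 2, 4]

print(det3(A))                           # 0, so A is singular
adjB = [sum(adj3(A)[r][c] * B[c] for c in range(3)) for r in range(3)]
print(adjB)                              # nonzero, so the system is inconsistent
```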
Question 5.
3x – y – 2z = 2
2y – z = -1
3x – 5y = 3
Question 6.
5x – y + 4z = 5
2x + 3y + 5z = 2
5x – 2y + 6z = -1
Solve the system of linear equations (7 – 14)
Question 7.
5x + 2y = 4
7x + 3y = 5
Question 8.
2x – y = -2
3x + 4y = 3
Question 9.
4x – 3y = 3
3x – 5y = 7
Question 10.
5x + 2y = 3
3x + 2y = 5
Question 11.
2x + y + z = 1
x – 2y – z = $$\frac{3}{2}$$
3y – 5z = 9
Question 12.
x – y + z = 4
2x + y – 3z = 0
x + y + z = 2
Question 13.
2x + 3y + 3z = 5
x – 2y + z = -4
3x – y – 2z = 3
Question 14.
x – y + 2z = 7
3x + 4y – 5z = -5
2x – y + 3z = 12
Question 15.
$$A=\left[\begin{array}{ccc}{2} & {-3} & {5} \\{3} & {2} & {-4} \\{1} & {1} & {-2}\end{array}\right]$$ find $$A^{-1}$$. Using $$A^{-1}$$, solve
2x – 3y + 5z = 11
3x + 2y – 4z = -5
x + y – 2z = -3
Question 16.
The cost of 4 kg onion, 3kg wheat and 2 kg rice is 60/-
The cost of 2 kg onion, 4 kg wheat and 6 kg rice is 90/-
The cost of 6 kg onion, 2kg wheat and 3kg rice is 70/-
Find the cost of each item per kg by matrix method.
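Question 16 reduces to the system 4x + 3y + 2z = 60, 2x + 4y + 6z = 90, 6x + 2y + 3z = 70. A plain-Python sketch of the matrix method (here via Cramer's rule rather than A⁻¹, which gives the same answer):

```python
# Question 16 by the matrix method (Cramer's rule), in plain Python.
from fractions import Fraction

def det3(m):
    a, b, c = m[0]
    d, e, f = m[1]
    g, h, i = m[2]
    return a*(e*i - f*h) - b*(d*i - f*g) + c*(d*h - e*g)

A = [[4, 3, 2],   # kg of onion, wheat, rice in each purchase
     [2, 4, 6],
     [6, 2, 3]]
B = [60, 90, 70]  # total cost of each purchase

D = det3(A)
prices = []
for col in range(3):
    Ai = [row[:] for row in A]
    for r in range(3):
        Ai[r][col] = B[r]            # replace one column with B
    prices.append(Fraction(det3(Ai), D))

print(prices)  # onion 5/-, wheat 8/-, rice 8/- per kg
```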
|
{}
|
# Remove Caption Reference First Page when appearing in LoF [duplicate]
I have a citation in a caption, and in my bibliography it lists the pages each reference appears on. However, it also lists the page of my List of Figures (ix), which I'd like to remove. Is there a way to remove part of the caption when putting it in the LoF, or to not include prefatory pages in the reference page list?
\begin{figure}[ht!]
\centering
\includegraphics[width=\textwidth]{figure1}
\caption{caption is here (adapted from \cite{reference1}).}
\label{fig:figure1}
\end{figure}
• You can use the optional argument of caption: \caption[caption is here for TOC]{caption is here (adapted from \cite{reference1}).} -- Maybe you are interested in the package notoccite May 27, 2013 at 6:29
• Thanks, saw somebody hint at that but missed the significance. However, their order in the Bibliography is still using the LoF. May 27, 2013 at 6:31
## 1 Answer
The command \caption, like most other such commands, has an optional argument which is used for the TOC, LOF, or LOT:
\caption[caption for TOC]{caption here}
Using \cite inside captions is a very common request. Donald Arseneau wrote a package to simplify the process: notoccite
\documentclass{article}
\usepackage{notoccite}
\begin{document}
\listoffigures
\section{foo}
Text \cite{article-full}
\begin{figure}[!ht]
\caption{caption \cite{book-full}}
\end{figure}
\bibliographystyle{unsrt}
\bibliography{xampl}
\end{document}
|
{}
|
# How long is nuclear power plant waste really dangerous?
by that_guy
Tags: dangerous, nuclear, plant, power, waste
P: 11 I know that some of the isotopes have extremely long half-lives. However, isn't it true that generally speaking, a longer half-life correlates to a lower rate of radioactivity? So wouldn't the most dangerous elements be those with short half-lives? Isn't most of the danger from waste therefore gone after the first few 100 or 1000 years?
Mentor
P: 22,313
Originally posted by that_guy I know that some of the isotopes have extremely long half-lives. However, isn't it true that generally speaking, a longer half-life correlates to a lower rate of radioactivity? So wouldn't the most dangerous elements be those with short half-lives? Isn't most of the danger from waste therefore gone after the first few 100 or 1000 years?
Yes, yes, and yes.
However, even with "most" of the radiation gone, it's still going to be pretty dangerous. For our purposes, radioactive waste is radioactive forever - it's so far beyond human timescales. But really, we're probably talking on the order of 10,000-100,000 years.
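The half-life/activity relationship behind the "yes, yes, and yes" above is A = λN with λ = ln 2 / T½: for the same number of atoms, a longer half-life means fewer decays per second. A quick sketch comparing two isotopes that come up later in this thread:

```python
# Activity per gram: A = lambda * N, with lambda = ln2 / half-life.
import math

N_A = 6.022e23    # Avogadro's number
YEAR = 3.156e7    # seconds per year

def specific_activity(half_life_years, molar_mass_g):
    lam = math.log(2) / (half_life_years * YEAR)  # decay constant, 1/s
    return lam * N_A / molar_mass_g               # decays per second per gram

cs137 = specific_activity(30.17, 137)   # cesium-137
pu239 = specific_activity(24100, 239)   # plutonium-239
print(cs137 / pu239)  # Cs-137 is roughly 1400x more active per gram
```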
P: 11 You mean it would result in several times our usual annual dose even after 1,000 years? Is it orders of magnitude higher? What I was trying to determine is if the 10,000 year goal for places like Yucca Mountain are very conservative and that even if a problem occurred after 1000 years the radiation would be so diminished that it wouldn't be much of a problem. Seems to me that even if we did screw up badly and something leaked after only 100 or 200 years, our technology would be so advanced by then that we should be able to handle it without too much problem.
Mentor
P: 22,313
How long is nuclear power plant waste really dangerous?
Originally posted by that_guy You mean it would result in several times our usual annual dose even after 1,000 years? Is it orders of magnitude higher? What I was trying to determine is if the 10,000 year goal for places like Yucca Mountain are very conservative and that even if a problem occurred after 1000 years the radiation would be so diminished that it wouldn't be much of a problem. Seems to me that even if we did screw up badly and something leaked after only 100 or 200 years, our technology would be so advanced by then that we should be able to handle it without too much problem.
Yes, it would still be dangerous after 1,000 years. I think the point of the 10,000 year goal is that it's longer than all of recorded human history, not that the site will be safe by then.
And yes, I think technology will change the equation within the next 100 years as well. But they are being conservative in thinking longer term than that.
Emeritus PF Gold P: 8,147 There is already technology known - a version of the Integral Fast Reactor - that could "eat" high-rad waste and turn it into much milder radioactive material whose danger would be gone in a few hundred years. All that is required is the political will to fund it.
P: 11
Originally posted by selfAdjoint There is already technology known - a version of the Integral Fast Reactor - that could "eat" high-rad waste and turn it into much milder radioactive material whose danger would be gone in a few hundred years. All that is required is the political will to fund it.
Is this reprocessing or a different process entirely? And it would be suitable for generating electricity while it is using high-rad waste?
Emeritus PF Gold P: 8,147 The integrl fast reactor is a design that produces low-level radioactive waste that is less dangerous, decays much faster, and is much easier and safer to store (but of course somebody with theroyprocess's philosophy that any radiation at all is too much radiation wouldn't be persuaded by these considerations). The people who designed the IFR also designed a version of it optimized for converting existing high-rad waste from current light water reactors. It would use the high-rad as fuel, I think, and produce its usual low-rad output. I am not really up on the details of this. The IFR program was cancelled by the Clinton administration.
P: 141 Dr. Roy knew the nuclear industry from the ground up. He designed the Roy Process on an industrial scale using existing infrastructure, commercially available machinery and current technology. It will generate electric power using the existing generators at each nuclear power plant where the waste is stored in cooling ponds. Dr. Roy was the former director of the nuclear physics labs at the Univeristy of Belgium, Penn State and designed the buildings, nuclear instruments for students to study physics. He was very aware that good science must be the most cost effective. You could shut down all the dangerously aging reactors and use the heat from the transmutation of the spent fuel to power the generators and ELIMINATE it forever. Other schemes creates more nuclear waste and only perpetuates the problem which is why the Roy Process is being ignored.
Emeritus PF Gold P: 8,147 Excellent! Have you got online technical details of the Roy Process? It seems there is so much tecnology that would alleviate or solve the waste problem, but nobody in the government wants to apply it.
P: 141 The patent application of some 100 pages, apparatus and theory, with proprietary technical data for transmuting Pu239, Sr90 and Cs137 can only be seen by scientists representing a company capable of realization who contracts with us. The below article contains a brief description which Dr. Roy released to the press. It is incomplete to protect patent rights necessary for commercial realization. ------------------------------------------------------------------------------------- Guest Article: Making Nuclear Waste Less Harmful Friday, 29 August 2003, 12:36 pm Opinion: Guest Opinion A Process To Render Nuclear Weapons & Waste Less Harmful By Dennis F. Nester, special for NuclearNo.com, Originally published 20 June 2003 - Recycling plutonium from warheads into MOX nuclear reactor fuel only perpetuates the security and environmental problems of bomb grade elements - There is a better way which will completely transmute plutonium and other high level nuclear waste known as the Roy Process It was the TMI partial meltdown that moved Dr. Roy to spend the summer school break proving calculations to see if it was possible to transmute high level nuclear waste cost effectively. He found it could be done with existing infrastructure, commercially available machinery and current supporting technology. Estimated cost to build a pilot facility was $80 million dollars. A newspaper editor persuaded Dr. Roy to release his Roy Process to the press which was published in November of 1979. (see article on web site below). The Roy Process Brief Description from the web site: http://members.cox.net/theroyprocess Is there a safe process to get rid of nuclear waste? Maybe! One possible solution is a process invented by Dr. Radha R. Roy, former professor of Physics at Arizona State University, and designer and former director of the nuclear physics research facilities at the University of Brussels in Belgium and at Pennsylvania State University. Dr. 
Roy is an internationally known nuclear physicist, consultant, and the author of over 60 articles and several books. He is also a contributing author of many invited articles in a prestigious encyclopedia. He is cited in American Men and Women of Science, Who's Who in America, Who's Who in the World and the International Biographical Centre, England. He has spent 52 years in European and American universities researching and writing recognized books on nuclear physics. He has supervised many doctoral students. Roy invented a process for transmuting radioactive nuclear isotopes to harmless, stable isotopes. This process is viable not only for nuclear waste from reactors but also for low-level radioactive waste products. In 1979, Roy announced his transmutation process and received international attention. The Roy process does not require storage of radioactive materials. No new equipment is required. In fact, all of the equipment and the chemical separation processes needed are well known. What's the basis for the Roy Process? If you examine radioactive elements such as strontium 90, cesium 137 and plutonium 239, you will see that they all have too many neutrons. To put it very simply, the Roy process transmutes these unstable isotopes to stable ones by knocking out the extra neutrons. When a neutron is removed, the resulting isotope has a considerably shorter half-life which then decays to a stable form in a reasonable amount of time. How do we knock out neutrons? By bombarding them with photons (produced as x-rays) in a high-powered electron linear accelerator. Before this process, the isotopes must be separated by a well-known chemical process. It is feasible that portable units could be built and transported to hazardous sites for on-site transmutation of nuclear wastes and radioactive wastes. To give an example, cesium 137 with a half-life of 30.17 years is transformed into cesium 136 with a half-life of 13 days.
Plutonium 239 with a half-life of 24,300 years is transformed into plutonium 237 with a half-life of 45.6 days. Subsequent radioactive elements which will be produced from the decay of plutonium 237 can be treated in the same way as above until the stable element is formed. The Roy Process could be developed in three distinct phases, according to Roy. Phase I consists of a theoretical feasibility study of the process to obtain needed parameters for the construction of a prototype machine. Phase II will involve the construction of a prototype machine and supporting facilities for demonstrating the process. Phase III will consist of the construction of large scale commercial plants based on the data obtained from Phase II. Cost estimates for Phase I and II are in the neighborhood of $10 million. For Phase III, Roy estimates a cost of $70 million. Says Roy, "It will be interesting to do a cost analysis of eliminating nuclear waste by using my process and by burying it for 240,000 years - ten half-lives of plutonium - under strict scientific control. There is also an ethical question: can we really burden the thousands of generations yet to come with problems which we have created? There is no God among human beings who can guarantee how the geological structure of waste burial regions will change even after ten thousand years, not to mention 240,000 years." If you are interested in finding out more about this process, please contact Dennis Nester, Roy's agent, whose address is listed below. A final note: To those who say that a process for transforming nuclear wastes is an invitation to keep making them, I ask, when we find a cure for cancer, shall we say it's okay to continue to eat, drink and breathe carcinogens? "There is no way one can change nuclear structure other than by nuclear reaction. Burial of nuclear waste is not a solution." Radha Roy, Ph.D.
Professor Emeritus "Do not be surprised if you learn that the nuclear industry makes billions of dollars by being a part of government's policy of burial of nuclear wastes. It is not in their financial interest to try any other process. They are not idealists." Radha R. Roy, Ph.D. Professor Emeritus
The below includes the Patent application claim ... describing other uses for the Roy Process transmutation method http://members.cox.net/theroyprocess...oyprocess.html
Emeritus Sci Advisor PF Gold P: 4,014
throyprocess wrote: The patent application of some 100 pages, apparatus and theory, with proprietary technical data for transmuting Pu239, Sr90 and Cs137 can only be seen by scientists representing a company capable of realization who contracts with us.
Why? If your over-riding concern is to rid the world of the 'nuclear threat', then surely the most certain way to do this is through the widespread implementation of the Roy Process? Can you please explain why keeping the technical data secret is the most effective way to ameliorate the 'nuclear threat'? It surely can't be to protect the IPR (intellectual property rights), since those are already protected through the patent?
P: 141
Nereid, The nuclear industry, which gives millions to Congressmen in political action money and 'owns' federal nuclear jurisdiction, got Congress to pass the 1982 Nuclear Waste Policy Act, which limited nuclear waste disposal to geologic isolation, putting alternatives in scientific limbo. No company would spend money on new science which by federal law they cannot use. This is or should be patently unconstitutional. So it was by decree that the Roy Process was stopped, so the nukers can siphon billions of taxpayers' money to let nuclear waste leak into our precious groundwater, where it is irretrievable. http://members.cox.net/theroyprocess...2-20-1999.html The Roy Process was ready for commercial realization when Dr. Roy announced it to the press in 1979.
It is still available through normal business protocol.
Mentor P: 22,313
Originally posted by theroyprocess: ... the 1982 Nuclear Waste Policy Act which limited nuclear waste disposal to geologic isolation putting alternatives in scientific limbo. No company would spend money on new science which by federal law they can not use.
I read a summary and the introduction to the act - it doesn't appear that it says what you say it says. It does not make it illegal for a power company to dispose of the waste themselves if they are capable of doing so. http://www.nrc.gov/who-we-are/governing-laws.html
And your answer didn't address Nereid's question: why the secrecy? Also, if this "roy process" was ready to go in 1979, why are you talking about a patent application and not a patent? Was the application rejected?
Emeritus Sci Advisor PF Gold P: 4,014
Originally posted by theroyprocess: Nereid, The nuclear industry, which gives millions to Congressmen in political action money and 'owns' federal nuclear jurisdiction, got Congress to pass the 1982 Nuclear Waste Policy Act which limited nuclear waste disposal to geologic isolation, putting alternatives in scientific limbo. No company would spend money on new science which by federal law they can not use. This is or should be patently unconstitutional. So it was by decree that the Roy Process was stopped so the nukers can siphon billions of taxpayers' money to let nuclear waste leak into our precious groundwater, where it is irretrievable. http://members.cox.net/theroyprocess...2-20-1999.html The Roy Process was ready for commercial realization when Dr. Roy announced it to the press in 1979. It is still available through normal business protocol.
Adam, in the Politics and World Affairs board, gives some links about countries; there are >200 of them, and the US is but one. Even if one were to accept your statement (and clearly there is much that could be debated), you haven't said anything about the other >200 countries.
What is the legal situation in other countries which have nuclear power? Many countries are poor and would love to have a part of the business of cleaning up the US's messes; I'm told there are millions in Mexico who are less poor than they'd otherwise be for just this reason (money sent back by illegal immigrants who do janitorial and 'waste management' jobs few US citizens are willing to do). If the Roy Process is as you say, what's to stop an entrepreneurial businesswoman in Benin (say) from starting a highly profitable business importing high-level nuclear waste from the US and processing it into low-level materials?
Mentor P: 22,313
Originally posted by Nereid: If the Roy Process is as you say, what's to stop an entrepreneurial businesswoman in Benin (say) from starting a highly profitable business importing high-level nuclear waste from the US and processing it into low-level materials?
That's an interesting possibility - even at $100,000 / lb it would still be worth it for us to have someone else reprocess it.
P: 20 Recycle it. Problem solved. Oh yeah, I forgot, we can't. Because a bunch of peaceniks caused it to become illegal. Now these same rednecks complain about the waste THEY caused to happen. Nice one, GreenPeace.
P: 171
Quote by SpaceGuy Recycle it. Problem solved. Oh yeah, I forgot, we can't. Because a bunch of peaceniks caused it to become illegal. Now these same rednecks complain about the waste THEY caused to happen. Nice one, GreenPeace.
The discussion of P53 protein repair of chromosome breaks is suppressed in health physics, because it is obvious that low level radiation will at some dose be repaired. We see this in the fact that we get 0.2 rem or 2 millisieverts per year from background radiation, which is 0.02 millirem per hour on the earth's surface (compared to 1 millirem per hour on the moon in solar calm, and far more during solar storms).
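The background-dose arithmetic above (0.2 rem/year equating to roughly 0.02 millirem per hour) can be checked directly:

```python
# Sanity check: 0.2 rem/year = 200 mrem/year spread evenly over a year.
mrem_per_year = 200            # 0.2 rem = 2 mSv
hours_per_year = 365.25 * 24
rate = mrem_per_year / hours_per_year
print(round(rate, 3))          # about 0.02 mrem/h, as stated
```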
In the 1950s there was a long argument between people setting "safe dose limits" for nuclear workers and the theorists who used studies from short-lived mammals like mice, which have less sophisticated DNA repair mechanisms. The theorists argued that men are mice and any dose is dangerous. Eventually the "safe limit" people were outnumbered and put on the defensive, renaming their safe doses as "acceptable doses".
So today it is heresy in physics to use the latest P53 protein research on the natural rate of repair of radiation damage to check if there really is a safe dose. The concept is heresy just like the ether is a heresy and electromagnetic theorists must pretend that the 377 ohm impedance of free space is some kind of geometry, not a physical characteristic of the ether.
In the same way, Copernicus' solar system was a heresy because Aristarchus of Samos had previously said the same thing and been ridiculed for it. The problems of scientific leaders having to eat humble pie are so great that physics always ends up getting bogged down in heresies, suppressions, humorous ridicule without checking scientific proofs and data, etc. In other words, science is dominated by big money politics. If ex-bookbinder Michael Faraday was alive today, doubtless his invention of the electric motor and generator would be dismissed ad hoc as cranky new ideas.
With radiation, the most penetrating types (neutrinos and gammas) are less easily stopped, interact less, and produce less ionisation and damage. The types which are stopped easiest like alphas from plutonium-239 cause the most damage if the atoms are inside the lung tissue, but they are harmless if they are kept out of the lungs. Sealing them in glass, as happens when you ground burst a nuclear bomb on silicate sand, prevents an inhalation danger because refractory elements like plutonium condense into molten sand before the sand solidifies (although you get some fractionation with elements of lower melting point like iodine-131 and similar ending up on the outside of already solidified glassy fallout particles).
All the radioactive waste on earth is trivial compared to the hazards from natural potassium-40 in the oceans and radon gas from the ground. Not to mention the natural radiation hazards in space. The nuclear industry made a mess of the whole publicity business by cheapskating on radiation research and using fragmented data instead of setting up an organised central radiation research project in the 1940s and 50s. However, the messy public relations propaganda and the incompetence in getting the facts straight in health physics and nuclear energy, is typical of many areas of science.
You can look at the cancer research industry, which has for a century been very good at spending money on research and has delivered results only slowly. Whenever you have bureaucracy, with its obsession with heresy suppression, ridiculing new work from Faraday-type scientists, and so on, you can't expect anything but inertia and sleek, glossy self-promotion, because these are the only things committees all agree to do. Quite often, these committees decide where they want discoveries to be made, and subconsciously try to forge the discoveries by hyping up any hopeful result when the big progress doesn't come in the required highly funded area. We see the same thing with speculative superstring and quantum gravity. Proven work which gets somewhere is automatically heretical and sneeringly suppressed. Thus the business of science has become so commercialised that political decisions take precedence over all else. Money speaks.
http://members.lycos.co.uk/nigelbryancook/
|
{}
|
# Delta Math Answers Probability
2: Investigating Probability (Answers) Question 1 a) The probability the uniform will have black shorts is 6 3 or 2 1. 4 out of 8 4. The study of Mathematics develops quantitative reasoning skills useful in everyday life as well as in many careers. Introduce and reinforce math topics with videos, slideshows, step-by-step tutorials, and other activities. Feel 100% prepared for your Probability tests and assignments by studying popular Probability sets. As mentioned above, you will also find many free math worksheet generators here and they will provide limitless questions along with answers. Use the following table for the next two questions. Which of the following is an impossible event? Choosing an odd number from 1 to 10. Many events can't be predicted with total certainty. New Features on Delta Math HW#61: Conditional Probability HW#60: Frequency Tables w/ Probability HW#58: Sign Tables & Long Division Answer: oe Submit Answer. Hence, $A \subset B$ implies $P(A) \leq p(B)$. Department of Energy's Argonne National Laboratory. In what follows, S is the sample space of the experiment in question and E is the event of interest. The book focuses on these key topics while developing the. Read each question carefully and choose the answer that you think is most likely to be correct. Probability Practice Questions: Level 01. Definition. Here are a couple links to other answers around the web:. Sum of Two Dice Probabilities (A) Answers Find the probability of each sum when two dice are rolled. Delta math help. The probability of drawing an ace from a pack of 52 playing cards is also easy to determine. The categories are History, P. Module 5: Probability And Statistics. In that case you can find the probability of two events occurring by multiplying the probabilities of the two events. Basic concept on drawing a card: In a pack or deck of 52 playing cards, they are divided into 4 suits of 13 cards each i. a game involves spinning this spinner. Exams and solutions. 
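The "sum of two dice" probabilities mentioned above can be found by enumerating all 36 equally likely outcomes; a short sketch:

```python
# Probability of each sum when two fair dice are rolled, by enumeration.
from collections import Counter
from fractions import Fraction

counts = Counter(a + b for a in range(1, 7) for b in range(1, 7))
probs = {s: Fraction(c, 36) for s, c in sorted(counts.items())}
print(probs[7])   # 1/6, the most likely sum
```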
TVUSD Name _____ Geometry Titanic 1 by Illustrative Mathematics On April 15, 1912, the Titanic struck an iceberg and rapidly sank with only 710 of her 2,204 passengers and crew surviving. For example, if you wanted to know the probability of the number 6 turning up when a fair die is rolled. We will simply plug in our answer choices to increase our red marbles (and our total number of marbles) and see which answer choice results in a probability of $3/5$. Find materials for this course in the pages linked along the left. This result is known as the Delta Method. (Alternate instructions for Value with Units answer boxes). delta math answer key algebra 2 / delta math answer key algebra 1 / delta math answer key geometry / bakit it's more fun in the philippines essay tagalog / elevator industry aptitude test eiat practice / no exame de sangue o que significa gama gt / ielts reading practice test 2019 with answers pdf / examen de admision para quinto grado de. While this may be okay in. 3 2) What is the smallest number of coins (pennies, nickels, dimes, and quarters are the only coins allowed) needed to represent any sum up to $1? a. Learn more Customer Service 800. The matchbox is more likely to end face up because the base has a larger surface area than the end. Probability is the branch of mathematics that deals with the likelihood that one outcome or another will occur. Every student attending math classes is obliged to complete loads of math homework in their educational life. P(x) is the probability density function. IM Commentary This is the second task in the series of three, which ask related questions, but use different levels of scaffolding. When we talk about probabilities based on the fact that something else has already happened we call this conditional probability. Thus, the total probability of all seven events is 7/8. This implies that the probability of an event cannot be negative or greater than 1. 
Pre-Algebra is typically taught in grades 6, 7, 8 in middle school math courses. Graph functions, plot data, evaluate equations, explore transformations, and much more – for free! Start Graphing. No exam solutions, but lots of sample problems with solutions. A data set has multiple modes when two or more values appear with the same frequency. More advanced courses, including the Calculus sequence. The table below gives the marks scored by a group of students in a test. On April 15, 1912, the Titanic struck an iceberg and rapidly sank with only 710 of her 2,204 passengers and crew surviving. Let us start with answer choice H, 18. Answers are given only to odd numbered problems. 2 The Delta Method 2. You will find information to perhaps refresh your memory on a subject in our Math Theory section as well as suggestions on how you can explain the topic to your child. f(x, 2 / 3) is probability of x given p = 2 / 3, f(1, p) is likelihood of p given x = 1. N-Gen Math™ 7. Four Function and. There is just the right amount of game-design to draw kids in. The word “delta” comes from the Greek letter delta, which is represented as a triangle and is commonly used to symbolize a change. Calculate the probability of difference delta Send Proposal. CS70: Discrete Mathematics and Probability Theory, Summer 2015. 5 180 c) probability that the flight will be more than 10 minutes late is given by flight will take minimum time 195 minutes to reach 200 f. What is the Number Line? A number line is a way we can visually represent numbers. Statistics 351 - Probability and Statistics Engineering. Please use all of our printables to make your day easier. So plan your SAT math study prep accordingly. New Features on Delta Math Submit a Bug Report Skill Request / Suggestion The Number Help Page Conditional Probability HW#60: Frequency Tables w/ Probability HW#58: Sign Tables & Long Division HW#65 HW#64 Answer: oe Submit Answer Upcoming Assignments. 
Math 431 An Introduction to Probability Final Exam | Solutions 1. 1 Slutsky's Theorem Before we address the main result, we rst state a useful result, named after Eugene Slutsky. which gives us the surprising result that when you are in a room with 30 people there is a 70% chance that there will be at least one shared birthday!. The 23rd annual Middle School Math Competition will be held on Saturday, March 14, 2020 which is also National Pi Day!. Tutorial on finding the probability of an event. Understanding probability is very important if you want to work for insurance companies, especially if you want to become an actuary. Hence,$A \subset B$implies$P(A) \leq p(B)$. To pass the course, 8 or more correct answers are necessary. Department of Mathematics 303 Lockett Hall Louisiana State University Baton Rouge, LA 70803-4918 USA Email: [email protected] Toronto old calculus exams. The probability that a flipped coin will land heads is 50% (one outcome out of the two); the same goes for tails. Sometimes delta is used as a proxy for the probability that an option will expire in the money. A probability model has two essential pieces of its description. View MATHCOUNTS Mini #1 (Part 2) here. Module 5: Probability And Statistics. Read more Playful Math Education Blog Carnival. Join our monthly smorgasbord of playful math tips, tidbits, games, and activities for students and teachers of preschool through pre-college mathematics. a bag contains 4 white 3 black and 2 red balls balls are drawn from the bag one by one without replacement the probability that 4th ball is red is equ - Mathematics - TopperLearning. About Me Mrs. 4 out of 8 4. SECONDARY MATH II // MODULE 9 PROBABILITY - 9. a) What is the probability that Janice passes both exams? A direct translation of the language above is This: P(Pass Physical) = 0. 
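The shared-birthday figure quoted above (about a 70% chance among 30 people) follows from P(at least one match) = 1 − ∏(365 − i)/365, and is easy to verify:

```python
# Birthday problem: probability of at least one shared birthday among n people.
def p_shared(n):
    p_distinct = 1.0
    for i in range(n):
        p_distinct *= (365 - i) / 365   # i-th person avoids all earlier birthdays
    return 1 - p_distinct

print(round(p_shared(30), 3))  # about 0.706, i.e. roughly a 70% chance
```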
probability problems, probability, probability examples, how to solve probability word problems, probability based on area, examples with step by step solutions and answers, How to use permutations and combinations to solve probability problems, How to find the probability of of simple events, multiple independent events, a union of two events. Our final answer is F, 12. In a random survey of 10 women in this age group, what is the probability that two or fewer were never married? please solve. Mathematics books Need help in math? Delve into mathematical models and concepts, limit value or engineering mathematics and find the answers to all your questions. We will simply plug in our answer choices to increase our red marbles (and our total number of marbles) and see which answer choice results in a probability of$3/5$. The vast majority of the projects which we handle include creating custom written assignment solutions for college level Calculus (Integration, Differentiation and analysis of functions), Algebra (Discrete/Finite math including set theory and the theory of equations) and Statistics (Probability, Regression Analysis, Anova, Confidence Intervals. Since there are 38 slots, each of which is equally likely, the probability of each slot is 1/38, and so the probability of falling in 1-12 (and you win) is 12/38. Please type the population mean and population standard deviation, and provide details about the event you want to compute the probability for (for the standard. Applied Statistics and Probability for Engineers, 6th Edition Montgomery, Douglas C. Edexcel GCSE Mathematics (Linear) - 1MA0 PROBABILITY & TREE DIAGRAMS Materials required for examination Items included with question papers Ruler graduated in centimetres and Nil millimetres, protractor, compasses, pen, HB pencil, eraser. A piece randomly selected with probability$1/2$is then broken into two. tree diagram. 
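The never-married survey question above is a binomial computation: with p = 0.40 and n = 10, P(X ≤ 2) comes out near 0.167. A sketch, assuming the ten women are sampled independently:

```python
from math import comb

def binom_cdf(k, n, p):
    """P(X <= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

# P(two or fewer of the 10 surveyed women were never married), p = 0.40
p_two_or_fewer = binom_cdf(2, 10, 0.40)  # about 0.167
```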
Records show that 80% of the students need help in math, 70% need help in English, and. Sometimes delta is used as a proxy for the probability that an option will expire in the money. What Do the Students Think I asked my students to work through some assignments on Delta Math and to give me their opinion. Select the correct answer below: Question: … shows mutually exclusive events? Select the correct answer below: Question: Which of the pairs of events below is mutually exclusive? Select the correct answer below. An Introduction to Mathematical Cryptography is an advanced undergraduate/beginning graduate-level text that provides a self-contained introduction to modern cryptography, with an emphasis on the mathematics behind the theory of public key cryptosystems and digital signature schemes. According to this technique, an out of the money call with a delta of 0. P(x) is the probability density function. Contingency tables are especially helpful for figuring out whether events are dependent or independent. Search this site. Theorem: (Slutsky’s Theorem) If W n!Win distribution and Z n!cin probability, where c is a non-random constant, then W nZ n!cW in distribution. Exams and solutions. 6 Exercises: Conditional Probability and Baye’s Formula 1) Empirical Example: Suppose a survey of 1000 drivers in a metropolitan area during a 3-year period was taken. The answer to a math problem. Elementary Statistics: A Step-by-Step Approach with Formula Card 9th Edition answers to Chapter 1 - The Nature of Probability and Statistics - 1-1 Descriptive and Inferential Statistics - Exercises 1-1 - Page 5 11 including work step by step written by community members like you. The word "delta" comes from the Greek letter delta, which is represented as a triangle and is commonly used to symbolize a change. The problems of Chapters 5-8 corre spond to the semester course Supplementary topics in probability theory. 
This activity will teach students how to express probability as a fraction. Privacy Policy; Children's Privacy Policy. Probability Event 1 = 2/26 (On the first pick there are two chances out of 26 that one of them will be picked) Probability Event 2 = 1/25 (1 has already been picked; only 25 left) Probability Event 1 & 2 = 2/26 x 1/25 = 1/325 = 0. org, where students, teachers and math enthusiasts can ask and answer any math question. According to government data, the probability that a woman between the ages of 25 and 29 was never married is 40%. Frequently asked simple and hard probability problems or questions with solutions on cards, dice, bags and balls with replacement covered for all competitive exams,bank,interviews and entrance tests. Mathematicians like to express a probability as a proportion, i. We will simply plug in our answer choices to increase our red marbles (and our total number of marbles) and see which answer choice results in a probability of$3/5$. In a random survey of 10 women in this age group, what is the probability that two or fewer were never married? please solve. 6, 2018 lecture. Virtual Nerd's patent-pending tutorial system provides in-context information, hints, and links to supporting tutorials, synchronized with videos, each 3 to 7 minutes long. what shape do you have the greatest probability of selecting? 12) Which shape has a 35% chance (7 out of 20) of being selected? 1. Start practicing Important: Important Notes. Progressions documents also provide a transmission mechanism between mathematics education research and standards. , product testing, medical testing, pulling a hockey goalie at the end of a game). Connected Mathematics Project is a problem-centered curriculum promoting an inquiry-based teaching-learning classroom environment. x is the value of the continuous random variable X. We select four flights from yesterday for study. Solve the given practice questions based on probability. 
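The worked multiplication above (2/26 on the first pick, then 1/25 without replacement, giving 1/325) can be checked with exact rational arithmetic:

```python
from fractions import Fraction

p_first = Fraction(2, 26)   # 2 favourable outcomes out of 26 on the first pick
p_second = Fraction(1, 25)  # the 1 remaining favourable item among the 25 left
p_both = p_first * p_second # multiply because the second pick is conditioned on the first
```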
5 out of 6 8. Probability of TTH = (1/2)(1/2)(1/2) = 1/8. Other Results for Probability Tree Diagrams Gcse Questions And Answers: Mathematics (Linear) 1MA0 PROBABILITY & TREE DIAGRAMS. The vast majority of SAT tests only have one question out of the 58 math questions total, although you might very occasionally see a test with zero or two probability questions. Chapter 3: Data Management. Find an Online Tutor Now Choose an expert and meet online. Stack Exchange network consists of 177 Q&A communities including Stack Overflow, the largest, most trusted online community for developers to learn, share their knowledge, and build their careers. Probability theory, a branch of mathematics concerned with the analysis of random phenomena. The delta is approximately the. If the cube is thrown once, what is the probability of rolling the number 2 or the number 6? x is the value of the continuous random variable X. The best we can say is how likely they are to happen, using the idea of probability. Several tables are adjoined to the collection. B = 10. Final answer is 576. Probability statements. Delta Math Intervention Kits are now available! To learn more visit the Intervention Lesson page Math Intervention Kits. Definition. A = 3. Final answer is 486. Learn and practice basic word and conditional probability aptitude questions with shortcuts, useful tips to solve easily in exams. Probability of HHT = 1/8.
Basics of Probability August 27 and September 1, 2009 1 Introduction A phenomenon is called random if the exact outcome is uncertain. famous text An Introduction to Probability Theory and Its Applications (New York: Wiley, 1950). Probability Compound Answer Key - Displaying top 8 worksheets found for this concept. as a number between 0 and 1. F: Objective 7: Compare outcomes as equally likely, more likely, less likely. Here 1 is considered as certainty (True) and 0 is taken as impossibility (False). Example: There are 87 marbles in a bag and 68 of them are green. In Modules 4 through 6, you will explore how those ideas and techniques can be adapted to answer a greater range of probability problems. If an event A is a subset of another event B (i. Some problems are easy, some are very hard, but each is interesting in some way. Math can be a difficult subject for many students, but luckily we’re here to help. The answer is simple: the standard normal distribution is the normal distribution when the population mean $$\mu$$ is 0 and the population standard deviation $$\sigma$$ is 1. Access study documents, get answers to your study questions, and connect with real tutors for MATH 26727 : intro to probability and statistics at San Joaquin Delta College. The included "activity sheets" are designed to be used with the activities given in the (sold. Addition takes two numbers and produces a third number, while convolution takes two signals and produces a third signal.
Assume the likelihood that any flight on Northwest Airlines arrives within 15 minutes of the scheduled time is. WorksheetWorks. The function S(t)=3. Objective 6: Identify an outcome as possible, impossible, certain, uncertain. No Tags Alignments to Content Standards: S-CP. The $\Gamma$ function is defined by $\Gamma(s) = \int_0^\infty x^{s-1} e^{-x}\,dx$. What is the probability that the problem will be s. Reference no: EM132546739. You can take notes in the margins or on the flip-side of each sheet. Modify settings to eliminate examples 3. Make sure you always get your answers right in Probability. delta math answers to probability questions Delta Answer Key - ksjiqi. heart Use each diagram to solve the problems. However, any rule for assigning probabilities to events has to satisfy the axioms of probability: random number generators. A couple plans to have four. Second year of an accelerated two-year sequence; prepares students for senior-level mathematics courses. For example, if the variable "x" stands for the movement of an object, then "Δx" means "the change in movement. " Scientists use this mathematical meaning of delta often in physics, chemistry, and engineering, and it appears often in word problems. The boxplot for Choice A correctly represents these five values. Feel 100% prepared for your Probability tests and assignments by studying popular Probability sets. 5 out of 20 11. The theoretical probability is equal to the experimental probability. Almost all problems. diamond 12. the probability of passing the physical exam is 0. Read and learn for free about the following article: Conditional probability using two-way tables.
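The integral mentioned above is the gamma function, Γ(s) = ∫₀^∞ x^(s−1) e^(−x) dx, which satisfies Γ(n) = (n − 1)! at positive integers. Python's math.gamma evaluates it directly:

```python
import math

# Gamma extends the factorial: Γ(n) = (n - 1)! for positive integers n.
gamma_5 = math.gamma(5)       # 4! = 24
gamma_half = math.gamma(0.5)  # the classic value sqrt(pi)
```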
Every student attending math classes is obliged to complete loads of math homework in their educational life. Delta Math Code and HW Answer Keys - Mrs. 3 Recognize that a measure of center for a numerical data set summarizes all of its values with a single number, while a measure of. Delta Math in a Nutshell Probability Single Kit assists students to learn fair/unfair games and includes probability tube, probability mats, dial-a-pattern I and II custom spinners. For the 5 question quick check: 1 C. In this lesson we will learn how to solve simple probability problems with a number cube. What is the probability of getting a number greater than 3? A. Some of the worksheets for this concept are Probability and compound events examples, Probability work 6 compound, Name period work 12 8 compound probability, Probability of compound events, Computation of compound probabilities, Probability work 1, Independent and dependent. This question is addressed by conditional probabilities. What is a standard normal distribution? Well, that is the obvious first question we need to answer: what is the standard normal distribution? I wouldn't say that it's definitely not related to the CLT. 6) A baseball. Great for students, teachers, parents, and tutors. Probability 1. For two non-disjoint events A and B, the probability of the union of the two events is $P(A \cup B) = P(A) + P(B) - P(A \cap B)$. Free online math tests for elementary, middle school, and high school students. Probability Practice Questions: Level 01. In Modules 1 and 2, you will be introduced to basic counting skills that you will build upon throughout the course. Aptitude Interview Questions and Answers.
Everyone learns or shares information via question and answer. The article addresses a variety of topics, including house advantage, confusion about win rates, game volatility, player value and comp policies. Quickly set up an assignment 2. " Scientists use this mathematical meaning of delta often in physics, chemistry, and engineering, and it appears often in word problems. Probability and statistics symbols table and definitions - expectation, variance, standard deviation, distribution, probability function, conditional probability, covariance, correlation RapidTables Home › Math › Math symbols › Statistical symbols. Find the probability of rolling a 2 or an odd number. Mathematics (Linear) – 1MA0 PROBABILITY Materials required for examination Items included with question papers Ruler graduated in centimetres and Nil millimetres, protractor, compasses, pen, HB pencil, eraser. A continuous random variable Xhas cdf F(x) = 8 >> < >>: a for x 0, x2 for 0 > The Titanic 2. The categories are History, P. Click Image to Enlarge : Students can create a game spinner with one to twelve sectors to look at experimental and theoretical probabilities. These math assignments may be of any complexity degree, difficulty, and time consumption. I wouldn't say that it's definitely not related to the CLT. What is the probability that it lands heads exactly 500 times? Answer: The previous way of solving this problem, just by counting, was as follows: Let n(E) be the number of ways that the coin can land exactly 500 times. Haha is this a reference to my answer to the question: Investment Advice: Will Coca-Cola (KO) shares hit$50 in 2016? As a rough rule of thumb, I would just look at the SPX -10% OTM December put. Tutorial on finding the probability of an event. In fact if you look at an options chain with delta and probabilities, you can see that they are all about the same. 
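"Find the probability of rolling a 2 or an odd number" above is an addition-rule exercise over six equally likely faces; brute-force enumeration confirms 4/6 = 2/3:

```python
from fractions import Fraction

faces = range(1, 7)
# Favourable faces: rolling a 2, or any odd number -> {1, 2, 3, 5}
favourable = [f for f in faces if f == 2 or f % 2 == 1]
p = Fraction(len(favourable), 6)
```

Since "2" and "odd" are mutually exclusive here, the same answer comes from adding 1/6 + 3/6.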
If a candidate gives a decimal equivalent to a probability, this should be written to at least 2 decimal places (unless tenths). In Module 3, you will apply those skills to simple problems in probability. What is the probability of dying at 100 years old? If you have just recently been born, the probability would be very low. Some policies 2 or more policies but less than 5 policies. Basic Probability Math www. A number cube has 6 sides. Q_x = P(female lives to age x) = (number of female survivors at age x)/100,000. If you have just recently turned 100, the probability would be very h. Probability & Statistics Math Workbook. Master the subject with review, practice, and drills! REA’s Math Workbook for Probability & Statistics is perfect for high school exams, including end-of-course exams and graduation/exit exams. Mathematical ideas are identified and embedded in a carefully sequenced set of tasks and explored in depth to allow students to develop rich mathematical understandings and meaningful skills. Adding sodium to a solution. $f(x) = \frac{1}{200 - 180}$ for $180 \le x \le 200$, and $0$ otherwise. b) The probability that the flight will be no more than 5 minutes late is given by the flight taking at most 190 minutes to arrive: $\int_{180}^{190} f(x)\,dx = 0.5$. Use the following table for the next two questions. If you make a mistake, choose a different button. Curriculum partners certified by Illustrative Mathematics give back to IM to support our mission of creating a world where learners know, use, and enjoy mathematics. The probability of any event is always a value from 0 to 1, inclusive.
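The flight-time calculation above uses a uniform density on [180, 200] minutes, f(x) = 1/20, so interval probabilities are just interval lengths divided by 20. A sketch under that reading:

```python
def uniform_prob(lo, hi, a=180.0, b=200.0):
    """P(lo <= X <= hi) for X uniform on [a, b]; the query interval is clipped to [a, b]."""
    lo, hi = max(lo, a), min(hi, b)
    return max(hi - lo, 0.0) / (b - a)

p_no_more_than_5_late = uniform_prob(180, 190)  # arrives within 190 minutes: 0.5
p_more_than_15_late = uniform_prob(195, 200)    # takes over 195 minutes: 0.25
```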
In the previous version we suggested that the terms "odds" and "probability" could be used interchangeably. See the note above to access full versions of the materials. Meet Your Teacher. However, any rule for assigning probabilities to events has to satisfy the axioms of probability: random number generators. Expectation of continuous random variable. 7 (+): Analyze decisions and strategies using probability concepts (e. Courses such as Finite Math and Introduction to Probability and Statistics are useful in non-STEM fields. To pass the course, 8 or more correct answers are necessary. Call (888) 854-6284 International callers — call +1 717 283 1448 8:30 AM to 6 PM Eastern time,. It is expressed as a number between 0 and 1. If the cube is thrown once, what is the probability of rolling the number 2 or the number 6?. HOMEWORK ANSWER KEYS / FREE APPS! - Duration: 10:39. UC Merced old calculus exams with solutions, Math 21: Calculus I, Math 22: Calculus II, Math 23: Vector Calculus, Math 24: Linear Algebra and Differential Equations, Math 30: Calculus II for Biological Sciences, Math 32: Probability and Statistics. 6) A baseball. We feature well over 12,000 printable. 35 Exercises a) A die is rolled, find the probability that the number obtained is greater than 4. To settle the argument, they. SECONDARY MATH II // MODULE 9 PROBABILITY- 9. b) Two coins are tossed, find the probability that one head only is obtained. It is faster to use a distribution-specific function, such as normpdf for the normal distribution and binopdf for the binomial distribution. Visit Cosmeo for explanations and help with your homework problems!. In fact if you look at an options chain with delta and probabilities, you can see that they are all about the same. Applied Statistics and Probability for Engineers, 6th Edition Montgomery, Douglas C. (The Experimental Math of Voting) part 1 part2 (produced by Nathan Fox) Sept. Grades 4 th - 12 th. 
7 (+): Analyze decisions and strategies using probability concepts (e. I wouldn't say that it's definitely not related to the CLT. Study Probability and other Math sets for high school and college classes. Background: During the 1500’s Cardano was one of the first people to study probability (probably because he was a noted gambler). Understand that a set of data collected to answer a statistical question has a distribution which can be described by its center, spread, and overall shape. f(x, p) = px(1 − p)1 − x. We use the delta method and Stein’s method to derive, under regularity conditions, explicit upper bounds for the distributional distance between the distribution of the maximum likelihood estimator (MLE) of a d-dimensional parameter and its asymptotic multivariate normal distribution. Choose category of math worksheets you wish to view below. Introduction The Mathematics Level 2 Subject Test covers the same material as the Mathematics Level 1 test — with the addition of trigonometry and elementary functions (precalculus). or if your downloaded materials won't open or are illegible, please notify us immediately by. Follow • 1. Find materials for this course in the pages linked along the left. This tutorial gives you a great introduction to the number line and shows you how to graph numbers on the number line in order to compare them. Choose a worksheet or answer key below. In this brief video I wanted to show how to do three things: 1. Theorem: (Slutsky’s Theorem) If W n!Win distribution and Z n!cin probability, where c is a non-random constant, then W nZ n!cW in distribution. Data Analysis, Statistics, and Probability Mastery 398 The PowerScore SAT Math Bible This book contains many examples and explanations of multiple-choice and student-produced response questions. Today, helping children to make the effort to learn, appreciate and master mathematics is more important than ever. Method 2—PIA. From a pack of 52 cards, a card is drawn at random. 
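The delta method mentioned above approximates the spread of a smooth transform g(X̄) by |g′(μ)| times the standard deviation of X̄. A Monte Carlo sketch (my own illustration, not the cited bound): with X̄ the mean of n Bernoulli(p) draws and g(x) = x², the empirical standard deviation of g(X̄) should track |g′(p)|·√(p(1 − p)/n):

```python
import random
import statistics

random.seed(0)
n, p, reps = 1_000, 0.3, 2_000

def g(x):
    return x * x  # g'(x) = 2x

# Empirical sd of g(sample mean) over many replications
samples = [g(sum(random.random() < p for _ in range(n)) / n) for _ in range(reps)]
empirical_sd = statistics.stdev(samples)

# First-order delta-method approximation: |g'(p)| * sd(X-bar)
delta_sd = abs(2 * p) * (p * (1 - p) / n) ** 0.5
```

With these sample sizes the two quantities typically agree to within a few percent.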
Read each question carefully and choose the answer that you think is most likely to be correct. Learn third grade math online for free. Some problems are easy, some are very hard, but each is interesting in some way. Phone Support. ) Probability (b) What is the likelihood that none of the selected flights arrived. Math Worksheets For Printable Download. We use the delta method and Stein’s method to derive, under regularity conditions, explicit upper bounds for the distributional distance between the distribution of the maximum likelihood estimator (MLE) of a d-dimensional parameter and its asymptotic multivariate normal distribution. Some of the worksheets for this concept are Probability and compound events examples, Probability work 6 compound, Name period work 12 8 compound probability, Probability of compound events, Computation of compound probabilities, Probability work 1, Independent and dependent. 5⋅3t models the growth of a tumor where t is the number of months since the tumor was discovered and S is the size of the tumor in cubic millimeters. In the Power of Probability unit, students will learn about the relationship between probability and real-world outcomes, use tree diagrams and tables to calculate probability, and use compound probability to calculate the probability of multiple events. Yahoo Answers is a great knowledge-sharing platform where 100M+ topics are discussed. P(x) is the probability density function. Probability and Graphing MATH PLAYGROUND Grade 1 Games Grade 2 Games Grade 3 Games Grade 4 Games Grade 5 Games Grade 6 Games Thinking Blocks Math Videos. Step-by-step solutions to all your Algebra 2 homework questions - Slader. Given the function g(x)=−x squared−2x+4, determine the average rate of change of the function over the interval −3≤x≤2. We are here to assist you with your math questions. Probability questions will show up on most SAT tests. Basically likelihood vs. This is a re-upload to correct some terminology. 
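The average-rate-of-change question above has answer −1: for g(x) = −x² − 2x + 4 on [−3, 2], the rate is (g(2) − g(−3))/(2 − (−3)) = (−4 − 1)/5. Checked directly:

```python
def g(x):
    return -x**2 - 2*x + 4

a, b = -3, 2
average_rate = (g(b) - g(a)) / (b - a)  # slope of the secant line over [-3, 2]
```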
If the biggest number rolled is five or six, player 2 wins. Probability: Probability is the ratio of the different number of ways a trial can succeed (or fail) to the total number of ways in which it may result. Make sure you always get your answers right in Probability. Let us start with answer choice H, 18. Calculate the following probabilities. Random is a website devoted to probability, mathematical statistics, and stochastic processes, and is intended for teachers and students of these subjects. Understand that a set of data collected to answer a statistical question has a distribution which can be described by its center, spread, and overall shape. If you’ve excelled in these courses, taking the test can support your high school grades, indicate an interest in pursuing math-based programs of study (science, technology, engineering. Explore math with Desmos. ) Probability (b) What is the likelihood that none of the selected flights arrived. Feedback to your answer is provided in the RESULTS BOX. Corequisites: or concurrent enrollment in MATH 70 Corequisite Support for Introduction to Probability and Statistics. Three Fun Probability Games and Projects Tags: card game , lesson plan , project , review game I did a lot of research on probability lesson plans this past year, but I really didn't like a lot of what I found. Unit 8 - Probability. …shows mutually exclusive events? Which of the pairs of events below is mutually exclusive? A deck of cards contains RED cards numbered 123456 BLUE cards numbered 12345 and GREEN cards numbered 1234. 5 1 + 5 1 + 5 1 + 5 1 + 5 1 = 5 5 = 1. 1 Sample Spaces and Probability 8. What is the probability of the pointer landing on G? (The spinner has 8 sections. To pass the course, 8 or more correct answers are necessary. We hope that you find exactly what you need for your home or classroom!. 
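The dice game above doesn't say how many dice are rolled; assuming two standard dice, player 2 wins when the larger of the two values is 5 or 6, and enumeration gives 20/36 = 5/9:

```python
from fractions import Fraction
from itertools import product

outcomes = list(product(range(1, 7), repeat=2))        # 36 equally likely rolls
player2_wins = sum(1 for roll in outcomes if max(roll) >= 5)
p_player2 = Fraction(player2_wins, len(outcomes))      # 20/36 = 5/9
```

Equivalently by complement: the biggest number is at most 4 in (4/6)² = 16/36 of rolls, so player 2 wins the other 20/36.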
PSYCH 2 Statistical Methods for Psychology & Social Science (3 units) MATH 12 Introduction to Probability and Statistics (4 units) MATH 17A Concepts and Structures of Mathematics (3 units) MATH 17B Concepts and Structures of Mathematics (3 units) Cumulative high school GPA < 3. Find Probability and Statistics answer sheets, activities, and videos for 6th grade, 7th grade, 8th grade and 9th grade at Scholastic MATH Magazine. These rules provide us with a way to calculate the probability of the event "A or B," provided that we know the probability of A and the probability of B. A single standard number cube is tossed. Math for Game Developers - Jumping and Gravity (Time Delta, Game Loop) - Duration: 9:49. Find materials for this course in the pages linked along the left. Examples will make this clear. 0 mathematicsvisionproject. Corequisites: or concurrent enrollment in MATH 70 Corequisite Support for Introduction to Probability and Statistics. probability: The measure of how likely it is for an event to occur. You can answer these questions on a computer, tablet, or smartphone. So plan your SAT math study prep accordingly. Method 2—PIA. No enrollment or registration. Tutorial on finding the probability of an event. This Saxon Math Homeschool 5/4 Tests and Worksheets book is part of the Saxon Math 5/4 curriculum for fourth grade students, and provides supplemental "facts practice" tests for each lesson, as well as 23 cumulative tests that cover every 5-10 lessons. Explore math with Desmos. Includes full solutions and score reporting. Some questions ask you to enter a symbolic math answer without also entering a unit. sample space probability calculator. In that case you can find the probability of two events occurring by multiplying the probabilities of the two events. For the 5 question quick check: 1 C. This page has a set of whole-page reading passages. So the probability that it will not rain tomorrow is 0. Option's delta as probability proxy. 
If the outcome of the one event does not affect the outcome of the other, they are said to be independent. Getting an even number after rolling a single 6-sided die. If the cube is thrown once, what is the probability of rolling the number 2 or the number 6?. Probability of at least one correct answer: 1023/1024 Probability of exactly one correct answer: 5/512 The probability of an event occurring can be calculated as "Number of ways the event can occur"/"Total possibilities" As there are two possible results for each of the ten questions, there are 2^10 = 1024 possibilities total. The I function is defined by r(s) = 19 28-1e-*dx. Second year of an accelerated two-year sequence; prepares students for senior-level mathematics courses. Delta Air Lines quotes a flight time of 2 hrs, 5 minutes for its flights from Cincinnati to Tampa. Choose from 7 study modes and games to study Probability. Connected Mathematics Project is a problem-centered curriculum promoting an inquiry-based teaching-learning classroom environment. Here you can find Aptitude interview questions with answers and explanation. If an event A is a subset of another event B (i. The probability of event tells us how likely it is that the event will occur and is always a value between 0 and 1 (e. Basic concept on drawing a card: In a pack or deck of 52 playing cards, they are divided into 4 suits of 13 cards each i. A gift to the children and math students of the world from the U. Tossing a Coin. Calculate the following probabilities. Application. A single standard number cube is tossed. com reaches roughly 567 users per day and delivers about 16,998 users each month. Video by Art of Problem Solving's Richard Rusczyk, a MATHCOUNTS alum. Delta in math is generally symbolized as the triangle. Three Fun Probability Games and Projects Tags: card game , lesson plan , project , review game I did a lot of research on probability lesson plans this past year, but I really didn't like a lot of what I found. 
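The 1023/1024 and 5/512 figures above follow from the complement rule and the binomial count for ten two-choice questions answered by guessing:

```python
from fractions import Fraction

n = 10                      # ten questions, 2 equally likely choices each
p_correct = Fraction(1, 2)

# At least one correct = complement of "all ten wrong"
p_at_least_one = 1 - (1 - p_correct) ** n                     # 1023/1024
# Exactly one correct = C(10, 1) * (1/2)^1 * (1/2)^9
p_exactly_one = n * p_correct * (1 - p_correct) ** (n - 1)    # 10/1024 = 5/512
```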
To calculate the probability, then, we only need to know how many equally likely outcomes there are. 4 out of 8 4. $(6 + 18)/(18 + 18) = 24/36 = 2/3$. MATH 225N Week 4 Homework Questions Probability Which of the pairs of events below is dependent? Identify the option below that represents dependent events. Probability of HTH = 1/8. The number of hours for Math is 8. ) but is not meant to be shared. Teachers, Share with your Students! We have added a new feature that allows members who are teachers to easily share access to the Math Antics website with their students at home. Welcome! This is one of over 2,200 courses on OCW. 3 2) What is the smallest number of coins (pennies, nickels, dimes, and quarters are the only coins allowed) needed to represent any sum up to $1? a. Continental Mathematics League (123) National Current Events League (42) National Geography Challenge (54) National Language Arts League (54) National Science League (78) National Social Studies League (60) Math Books (10). The difference in hours between Math and English, for example, is 14 - 8 = 6. The Venn diagram shows two events: square numbers and multiples of 3. Enter symbolic math answers.
I started using your website after I failed my first test and was able to pull a B in the class. Grades 6–8 Expectations: In grades 6–8 each and every student should–. 2 Mutually Exclusive Events and Addition Rule. An at the money option has about 50% probability of being in the money because there is a 50-50 chance the stock will go up or down, and it has about 50 delta; in these cases, the delta and probabilities are about the same. Dirac, Paul Adrien Maurice (1902-1984). 3 Recognize that a measure of center for a numerical data set summarizes all of its values with a single number, while a measure of variation describes how its values vary with a. Use our online probability calculator to find the single and multiple event probability with a single click. Timed tests are available, as well as printable math worksheets. What is the probability of winning the first, second and the third prize? Math-Exercises. If you’ve excelled in these courses, taking the test can support your high school grades, indicate an interest in pursuing math-based programs of study (science, technology, engineering. Probability. Many events can't be predicted with total certainty. A life insurance salesman sells on average 3 life insurance policies per week. The probability P{M = 2}, if n = 8 and p = 0.
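The salesman example above is the classic Poisson exercise with λ = 3 policies per week; the two asked-for probabilities (selling some policies, and selling 2 or more but fewer than 5) come out near 0.950 and 0.616:

```python
import math

def poisson_pmf(k, lam):
    """P(X = k) for X ~ Poisson(lam)."""
    return math.exp(-lam) * lam**k / math.factorial(k)

lam = 3  # average of 3 policies sold per week
p_some = 1 - poisson_pmf(0, lam)                        # P(X > 0), about 0.9502
p_2_to_4 = sum(poisson_pmf(k, lam) for k in (2, 3, 4))  # P(2 <= X < 5), about 0.6161
```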
Textbook Authors: Bluman, Allan , ISBN-10: 0078136334, ISBN-13: 978-0-07813-633-7, Publisher: McGraw-Hill Education. Enter symbolic math answers. Probability is the branch of mathematics that deals with the likelihood that one outcome or another will occur. Introduction The Mathematics Level 1 Subject Test assesses the knowledge you’ve gained from three years of college-preparatory mathematics, including two years of algebra and one year of geometry. spades ♠ hearts ♥, diamonds ♦, clubs ♣. 1 out of 8 3. 35 Exercises a) A die is rolled, find the probability that the number obtained is greater than 4. Delta Math Cumulative Assignment. The following are the major topics covered in this chapter: 1. Call (888) 854-6284 International callers — call +1 717 283 1448 8:30 AM to 6 PM Eastern time,. The probability of drawing an ace from a pack of 52 playing cards is also easy to determine. Walker from RetroPsychoKinesis Project. ADP is converted to ATP in the cell's cytoplasm or mitochondria. The study of Mathematics develops quantitative reasoning skills useful in everyday life as well as in many careers. The delta of an option is frequently considered to be the same as the probability that an option will be exercised, i. Delta Math is loading (this could take a moment). Practice Problems - Try specific problems and see the solution. c) The probability the uniform will have the same-coloured shorts and shirt is 6 2 or 3 1. The probability that a flipped coin will land heads is 50% (one outcome out of the two); the same goes for tails. Statistics and Probability, Mathematical Analysis, Complex Analysis, and Tensor Analysis. Thus, the total probability of all seven events is 7/8. Several tables are adjoined to the collection. Visually accurate figures can often be. IM Commentary This is the second task in the series of three, which ask related questions, but use different levels of scaffolding. 
In the previous version we suggested that the terms "odds" and "probability" could be used interchangeably. These rules provide us with a way to calculate the probability of the event "A or B," provided that we know the probability of A and the probability of B. Do the homework 2 or 3 times to master the concepts. 36 has a probability of expiring in the money of 36%. It is expressed as a number between 0 and 1. Kit helps students to work with diagrams, charts and graphs, recording the outcomes of events, learn population samples, combinations and compound probability. Use Poisson's law to calculate the probability that in a given week he will sell. The sum of the probabilities of all possible outcomes is 1. Sometimes delta is used as a proxy for the probability that an option will expire in the money. Teachers, Share with your Students! We have added a new feature that allows members who are teachers to easily share access to the Math Antics website with their students at home. P(x) is the probability density function. Delta Air Lines quotes a flight time of 2 hrs, 5 minutes for its flights from Cincinnati to Tampa. 28, 2017 lecture (CNF-DNF and all that) part 1 part2 (produced by Bryan Ek) June 26, 2018 lecture (on the Berenstein-Retakh non-commutative "numbers") lecture (produced by the Angers University RetakhFest team) Sept. Plus, get practice tests, quizzes, and personalized coaching to help you succeed. So plan your SAT math study prep accordingly. Elementary Statistics: A Step-by-Step Approach with Formula Card 9th Edition answers to Chapter 1 - The Nature of Probability and Statistics - 1-1 Descriptive and Inferential Statistics - Exercises 1-1 - Page 5 11 including work step by step written by community members like you. A: The minimum and maximum values are 7 and 50, respectively. 
APPLIED FINITE MATHEMATICS 3rd Edition, 2016, Sekhon/Bloom Chapter 8: Probability Answers to Odd Numbered Homework Problems and Answers to all Problems in the Chapter Review Section 8. Probability. com and I haven't needed one since. Second year of an accelerated two-year sequence; prepares students for senior-level mathematics courses. a) What is the probability that Janice passes both exams? A direct translation of the language above is This: P(Pass Physical) = 0. Let n(S) be the total number of ways that the coin can land in 1000 tosses. Webmath is a math-help web site that generates answers to specific math questions and problems, as entered by a user, at any particular moment. Very few people live up to that age, most die around 70. com - Math exercises with answers. Probability Event 1 = 2/26 (On the first pick there are two chances out of 26 that one of them will be picked) Probability Event 2 = 1/25 (1 has already been picked; only 25 left) Probability Event 1 & 2 = 2/26 x 1/25 = 1/325 = 0. Many events can't be predicted with total certainty. The article addresses a variety of topics, including house advantage, confusion about win rates, game volatility, player value and comp policies. Algebra - powered by WebMath. Be sure to try the interactive probability activities, too!. Ask a question for free Get a free answer to a quick problem. Data on survival of passengers are summarized in the table below. Math Story Passages. Publisher Wiley ISBN 978-1-11853-971-2. Algebra 2 Syllabus. An Introduction to Mathematical Cryptography is an advanced undergraduate/beginning graduate-level text that provides a self-contained introduction to modern cryptography, with an emphasis on the mathematics behind the theory of public key cryptosystems and digital signature schemes. Math 431 An Introduction to Probability Final Exam | Solutions 1. 
1 , 180 f(x)200 180' x s 200 0, otherwise b) probability that the flight will be no more than 5 minutes late is given by flight can take maximum time 190 minutes to reach 190 J0 (x)dx 0. Algebra 1, Delta math. Some policies 2 or more policies but less than 5 policies. com Find the probability of choosing a Wednesday. According to government data, the probability that a woman between the ages of 25 and 29 was never married is 40%. Corbettmaths Videos, worksheets, 5-a-day and much more Conditional Probability. If each of the pieces is selected with the probability$1/2,$then the total probability of interest is. Prerequisites: MATH 92G Intermediate Algebra , MATH 92S Intermediate Algebra (STEM) , or MATH 96 Pre-Statistics each with a grade of “C” or better or qualifying placement. Michigan State U. This question is from textbook A Survey of Mathematics Answer by stanbon(75887) (Show Source):. THESE APPS WILL DO YOUR HOMEWORK FOR YOU!!! GET THEM NOW / HOMEWORK ANSWER KEYS / FREE APPS - Duration: 5:02. The vast majority of SAT tests only have one questions out of the 58 math questions total, although you might very occasionally see a test with zero or two probability questions. 64 has a 64% chance of expiring in the money. Be nice to these people- they're providing a great service to all students. probability material if you did not complete it in class Here are the quizzes that some of you wrote in class. ; Runger, George C. Note: In a Poisson distribution, only one parameter, μ is needed to determine the probability of an event. 7 (+): Analyze decisions and strategies using probability concepts (e. DELTA SOLUTIONS 191 Application & Enrichment 1G wind-up mouse. You can calculate the percentage change in X in two ways. spades ♠ hearts ♥, diamonds ♦, clubs ♣. An interactive math lesson about determining simple probability. When a coin is tossed, there are two possible outcomes: heads (H) or ; tails (T) We say that the probability of the coin landing H is ½. 
Adenosine diphosphate and adenosine triphosphate are organic molecules, known as nucleotides, found in all plant and animal cells. net Pages 1 - 11 - Text …. You can see which problems students missed and even their incorrect answers. 2 Chocolate versus Vanilla A Solidify Understanding Task Danielle loves chocolate ice cream much more than vanilla and was explaining to her best friend Raquel that so does most of the world. Intersection is empty. The probability of event tells us how likely it is that the event will occur and is always a value between 0 and 1 (e. Probability Compound Answer Key - Displaying top 8 worksheets found for this concept. From a pack of 52 cards, a card is drawn at random. Mathematicians like to express a probability as a proportion, i. At Delta College, we offer Mathematics courses ranging from Review of Arithmetic through Multivariable Calculus. This guide, written by casino math professor Robert Hannum, contains a brief, non-technical discussion of the basic mathematics governing casino games and shows how casinos make money from these games. Some policies 2 or more policies but less than 5 policies. the probability of passing the physical exam is 0. Selected Answers. Textbook Authors: Bluman, Allan , ISBN-10: 0078136334, ISBN-13: 978-0-07813-633-7, Publisher: McGraw-Hill Education. SplashLearn is an award winning math learning program used by more than 30 Million kids for fun math practice. Students use information from the passages to solve math problems. You may copy this code, use it and distribute it free of charge, provided you do not alter it or charge a fee for copying it, using it, or distributing it. The mathematics field of probability has its own rules, definitions, and laws, which you can use to find the probability of outcomes, events, or combinations of outcomes and events. 2 Mutually Exclusive Events and Addition Rule. 
In the Power of Probability unit, students will learn about the relationship between probability and real-world outcomes, use tree diagrams and tables to calculate probability, and use compound probability to calculate the probability of multiple events. When two dice are rolled, find the probability of getting a greater number on the first die than the one on the second, given that the sum should equal 8. Learn math with free interactive flashcards. According to the schedule you have 3 different business classes, 5 different math classes, 1 political science class, 4 different English classes, and 2 Fine Arts classes to choose from. What is the probability of getting a number greater than 3? A. Objective: I know how to find the probability of an event. of Utah ECE 3530 - Engineering Probability and Statistics. probability tells you which parameter of density is considered to be the variable. Webmath is a math-help web site that generates answers to specific math questions and problems, as entered by a user, at any particular moment. Courses such as Finite Math and Introduction to Probability and Statistics are useful in non-STEM fields. A gift to the children and math students of the world from the U. The probability that a flipped coin will land heads is 50% (one outcome out of the two); the same goes for tails. Probability questions will show up on most SAT tests. It is important to understand how these questions are numbered throughout the book so that you can learn to judge a question's difficulty. Find free flashcards, diagrams and study guides for Probability topics like Probability Theory. PSYCH 2 Statistical Methods for Psychology & Social Science (3 units) MATH 12 Introduction to Probability and Statistics (4 units) MATH 17A Concepts and Structures of Mathematics (3 units) MATH 17B Concepts and Structures of Mathematics (3 units) Cumulative high school GPA < 3. Visually accurate figures can often be. Show Answer. 
Constructing these models is not material for this course! But utilizing them is. Unit 8 - Probability. delta math answer key algebra 2 / delta math answer key algebra 1 / delta math answer key geometry / bakit it's more fun in the philippines essay tagalog / elevator industry aptitude test eiat practice / no exame de sangue o que significa gama gt / ielts reading practice test 2019 with answers pdf / examen de admision para quinto grado de primaria colombia / interview questions and answers for. Purplemath's pages print out neatly and clearly. Consequently, we hope you enjoyed these 50 free practice GMAT problem solving questions with thorough answers. Chapter 4: Addition and Subtraction: Chapter 5: Measuring Length and Time: Chapter 6: Multiplication and Division Facts: Chapter 7: 2-D Geometry. Answer: Find the probability of selecting the weekends. Adding sodium to a solution. A number cube has 6 sides. Basics of Probability August 27 and September 1, 2009 1 Introduction A phenomena is called random if the exact outcome is uncertain. Learn third grade math online for free. Find materials for this course in the pages linked along the left. Show Answer. Probability. You may copy this code, use it and distribute it free of charge, provided you do not alter it or charge a fee for copying it, using it, or distributing it. Includes full solutions and score reporting. You can calculate the percentage change in X in two ways. Which of the following is an impossible event? Choosing an odd number from 1 to 10. com | 5bhzsc66. Delta X, or the change in X, is equivalent to X(final) - X(initial). Combining this prior with a likelihood coming from an observation of an H or T results in some strange function class, which is valid as a posterior as well. Toronto old calculus exams. com is an online resource used every day by thousands of teachers, students and parents. Exams and solutions. 
Convolution is used in the mathematics of many fields, such as probability and statistics. The purpose of this site is guide my students of what they will expect to learn in their math class. 1 , 180 f(x)200 180' x s 200 0, otherwise b) probability that the flight will be no more than 5 minutes late is given by flight can take maximum time 190 minutes to reach 190 J0 (x)dx 0. 05 x [x]180 = 0. Here you can find Aptitude interview questions with answers and explanation. A piece randomly selected with probability$1/2\$ is then broken into two. pdf is a generic function that accepts either a distribution by its name 'name' or a probability distribution object pd. Provide details and share your research! But avoid … Asking for help, clarification, or responding to other answers. Phone support is available Monday-Friday, 9:00AM-10:00PM ET. No exam solutions, but lots of sample problems with solutions. A: The minimum and maximum values are 7 and 50, respectively. WebMath is designed to help you solve your math problems. P (shared birthday) = 1− 365P 30 36530 ≈0. Delta Math Cumulative Assignment. Probability 1. Math 6 Spy Guys - LearnAlberta. What is the probability of winning the first, second and the third prize? Math-Exercises. f(x, 2 / 3) is probability of x given p = 2 / 3, f(1, p) is likelihood of p given x = 1. Construction Knowledge. Hotmath explains math textbook homework problems with step-by-step math answers for algebra, geometry, and calculus. Therefore, out of these alternatives, −1. mathematics department of MSU. Frequently asked simple and hard probability problems or questions with solutions on cards, dice, bags and balls with replacement covered for all competitive exams,bank,interviews and entrance tests. The alternative method is to use plugging in answers. Data Analysis, Statistics, and Probability Mastery 398 The PowerScore SAT Math Bible This book contains many examples and explanations of multiple-choice and student-produced response questions. 
teach-nology. ) with full confidence. Answer: Find the probability of selecting the weekends.
omeeshi4aeh z1dmiafpksw7u nb6qjz0873ldcnr 4veniaimavfk7 rkeexufkta b9kx8kzqbdm vl1699a6dc j8hrbknih9fl4z h5vzqpt478q9 pjkfl1dmqisl jpumvgsofjc6 tjuvgmp8lpef bmilrex7lz0 7alq6rh55ltb j3xt9t75e1z bbqqwuzfki3iwr ukcl4fmnqo88mlq 9zk3yra6pezdgd c502kgsthhgb0g5 40tbjro5a5fv5a b09960iw98ne cr0dlorjrg9nde wpdjqkdhrwcs 16vhoblqhghz r7xo8fp0qmig
|
{}
|
Letter
# Precision measurement of the weak charge of the proton
## Abstract
Large experimental programmes in the fields of nuclear and particle physics search for evidence of physics beyond that explained by current theories. The observation of the Higgs boson completed the set of particles predicted by the standard model, which currently provides the best description of fundamental particles and forces. However, this theory’s limitations include a failure to predict fundamental parameters, such as the mass of the Higgs boson, and the inability to account for dark matter and energy, gravity, and the matter–antimatter asymmetry in the Universe, among other phenomena. These limitations have inspired searches for physics beyond the standard model in the post-Higgs era through the direct production of additional particles at high-energy accelerators, which have so far been unsuccessful. Examples include searches for supersymmetric particles, which connect bosons (integer-spin particles) with fermions (half-integer-spin particles), and for leptoquarks, which mix the fundamental quarks with leptons. Alternatively, indirect searches using precise measurements of well predicted standard-model observables allow highly targeted alternative tests for physics beyond the standard model because they can reach mass and energy scales beyond those directly accessible by today’s high-energy accelerators. Such an indirect search aims to determine the weak charge of the proton, which defines the strength of the proton’s interaction with other particles via the well known neutral electroweak force. Because parity symmetry (invariance under the spatial inversion (x, y, z) → (−x, −y, −z)) is violated only in the weak interaction, it provides a tool with which to isolate the weak interaction and thus to measure the proton’s weak charge¹.
Here we report the value 0.0719 ± 0.0045, where the uncertainty is one standard deviation, derived from our measured parity-violating asymmetry in the scattering of polarized electrons on protons, which is −226.5 ± 9.3 parts per billion (the uncertainty is one standard deviation). Our value for the proton’s weak charge is in excellent agreement with the standard model² and sets multi-teraelectronvolt-scale constraints on any semi-leptonic parity-violating physics not described within the standard model. Our results show that precision parity-violating measurements enable searches for physics beyond the standard model that can compete with direct searches at high-energy accelerators and, together with astronomical observations, can provide fertile approaches to probing higher mass scales.
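At leading order, the parity-violating asymmetry in elastic electron–proton scattering is related to the weak charge by A_PV = A₀[Q_W^p + Q²B(Q², θ)], where A₀ = −G_F Q²/(4πα√2) and the hadronic-structure term Q²B is constrained by earlier parity-violating electron-scattering data. The sketch below illustrates this relation numerically; the value ⟨Q²⟩ = 0.0249 GeV² is the kinematic point of the measurement, while the value used here for the hadronic term is an illustrative placeholder rather than the result of the global fit performed in the analysis.

```python
import math

# PDG values of the constants
G_F = 1.1663787e-5       # Fermi constant, GeV^-2
alpha = 7.2973525693e-3  # fine-structure constant

Q2 = 0.0249              # <Q^2> of the measurement, GeV^2
A_meas_ppb = -226.5      # measured parity-violating asymmetry, ppb

# Leading-order normalization A0 = -G_F * Q^2 / (4 * pi * alpha * sqrt(2))
A0 = -G_F * Q2 / (4 * math.pi * alpha * math.sqrt(2))
A0_ppb = A0 * 1e9        # roughly -2240 ppb at this Q^2

# Hadronic-structure term Q^2 * B(Q^2, theta): illustrative placeholder;
# in the actual analysis it is fixed by a fit to prior world data.
Q2B = 0.029

QW_p = A_meas_ppb / A0_ppb - Q2B
print(f"A0 = {A0_ppb:.0f} ppb, extracted Q_W^p ~ {QW_p:.4f}")
```

With these inputs the extracted value comes out near 0.072, consistent with the reported 0.0719 ± 0.0045; the point of the sketch is only that, at this low Q², the weak charge dominates the asymmetry and the hadronic term enters as a modest correction.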
Publisher’s note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
## Change history
• ### 22 May 2018
In the originally published article, equation (6) was corrupted. This has now been corrected.
## References
1. Erler, J., Kurylov, A. & Ramsey-Musolf, M. J. Weak charge of the proton and new physics. Phys. Rev. D 68, 016006 (2003).
2. Particle Data Group. Review of particle physics. Chinese Phys. C 40, 100001 (2016).
3. Androić, D. et al. First determination of the weak charge of the proton. Phys. Rev. Lett. 111, 141803 (2013).
4. Allison, T. et al. The Qweak experimental apparatus. Nucl. Instrum. Meth. A 781, 105–133 (2015).
5. Narayan, A. et al. Precision electron-beam polarimetry at 1 GeV using diamond microstrip detectors. Phys. Rev. X 6, 011013 (2016).
6. Smith, G. R. The Qweak target performance. Nuovo Cim. C 35, 159–163 (2012).
7. Young, R. D., Roche, J., Carlini, R. D. & Thomas, A. W. Extracting nucleon strange and anapole form factors from world data. Phys. Rev. Lett. 97, 102002 (2006).
8. Young, R. D., Carlini, R. D., Thomas, A. W. & Roche, J. Testing the standard model by precision measurement of the weak charges of quarks. Phys. Rev. Lett. 99, 122003 (2007).
9. Arrington, J. & Sick, I. Precise determination of low-Q nucleon electromagnetic form factors and their impact on parity-violating e-p elastic scattering. Phys. Rev. C 76, 035201 (2007).
10. Hall, N. L., Blunden, P. G., Melnitchouk, W., Thomas, A. W. & Young, R. D. Quark-hadron duality constraints on γZ box corrections to parity-violating elastic scattering. Phys. Lett. B 753, 221–226 (2016).
11. Blunden, P. G., Melnitchouk, W. & Thomas, A. W. New formulation of γZ box corrections to the weak charge of the proton. Phys. Rev. Lett. 107, 081801 (2011).
12. Blunden, P. G., Melnitchouk, W. & Thomas, A. W. γZ box corrections to weak charges of heavy nuclei in atomic parity violation. Phys. Rev. Lett. 109, 262301 (2012).
13. Gorchtein, M., Horowitz, C. J. & Ramsey-Musolf, M. J. Model dependence of the γZ dispersion correction to the parity-violating asymmetry in elastic ep scattering. Phys. Rev. C 84, 015502 (2011).
14. Wood, C. S. et al. Measurement of parity nonconservation and an anapole moment in cesium. Science 275, 1759–1763 (1997).
15. Dzuba, V. A., Berengut, J. C., Flambaum, V. V. & Roberts, B. Revisiting parity nonconservation in cesium. Phys. Rev. Lett. 109, 203003 (2012).
16. Green, J. et al. High-precision calculation of the strange nucleon electromagnetic form factors. Phys. Rev. D 92, 031501 (2015).
17. Sufian, R. S. et al. Strange quark magnetic moment of the nucleon at the physical point. Phys. Rev. Lett. 118, 042001 (2017).
18. Liu, J., McKeown, R. D. & Ramsey-Musolf, M. J. Global analysis of nucleon strange form factors at low Q². Phys. Rev. C 76, 025202 (2007).
19. Erler, J. & Ramsey-Musolf, M. J. Weak mixing angle at low energies. Phys. Rev. D 72, 073003 (2005).
20. Kumar, K. S., Mantry, S., Marciano, W. J. & Souder, P. A. Low-energy measurements of the weak mixing angle. Annu. Rev. Nucl. Part. Sci. 63, 237–267 (2013).
21. Seiberg, N. Naturalness versus supersymmetric non-renormalization theorems. Phys. Lett. B 318, 469–475 (1993).
22. Anthony, P. L. et al. Precision measurement of the weak mixing angle in Møller scattering. Phys. Rev. Lett. 95, 081601 (2005).
23. The Jefferson Lab PVDIS Collaboration. Measurement of parity violation in electron-quark scattering. Nature 506, 67–70 (2014).
24. The NuTeV Collaboration. Precise determination of electroweak parameters in neutrino–nucleon scattering. Phys. Rev. Lett. 88, 091802 (2002); erratum 90, 239902 (2003).
25. Bentz, W., Cloët, I. C., Londergan, J. T. & Thomas, A. W. Reassessment of the NuTeV determination of the weak mixing angle. Phys. Lett. B 693, 462–466 (2010).
26. Erler, J. & Su, S. The weak neutral current. Prog. Part. Nucl. Phys. 71, 119–149 (2013).
27. Davoudiasl, H., Lee, H.-S. & Marciano, W. J. Muon g−2, rare kaon decays, and parity violation from dark bosons. Phys. Rev. D 89, 095006 (2014).
28. Erler, J., Horowitz, C. J., Mantry, S. & Souder, P. A. Weak polarized electron scattering. Annu. Rev. Nucl. Part. Sci. 64, 269–298 (2014).
29. Eichten, E., Lane, K. D. & Peskin, M. E. New tests for quark and lepton substructure. Phys. Rev. Lett. 50, 811–814 (1983).
30. Abramowicz, H. et al. Search for first-generation leptoquarks at HERA. Phys. Rev. D 86, 012005 (2012).
31. Berger, N. et al. Measuring the weak mixing angle with the P2 experiment at MESA. J. China Univ. Sci. Tech. 46, 481–487 (2016).
32. Benesch, J. et al. The MOLLER experiment: an ultra-precise measurement of the weak mixing angle using Møller scattering. Preprint at https://arxiv.org/abs/1411.4088v2 (2014).
33. Musolf, M. J. et al. Intermediate-energy semileptonic probes of the hadronic neutral current. Phys. Rep. 239, 1–178 (1994).
34. Erler, J. & Ramsey-Musolf, M. J. Low energy tests of the weak interaction. Prog. Part. Nucl. Phys. 54, 351–442 (2005).
35. Armstrong, D. S. & McKeown, R. D. Parity-violating electron scattering and the electric and magnetic strange form factors of the proton. Annu. Rev. Nucl. Part. Sci. 62, 337–359 (2012).
36. G0 Collaboration. Strange quark contributions to parity-violating asymmetries in the forward G0 electron–proton scattering experiment. Phys. Rev. Lett. 95, 092001 (2005).
37. G0 Collaboration. Strange quark contributions to parity-violating asymmetries in the backward angle G0 electron scattering experiment. Phys. Rev. Lett. 104, 012001 (2010).
38. HAPPEX Collaboration. Parity-violating electroweak asymmetry in ep scattering. Phys. Rev. C 69, 065501 (2004).
39. HAPPEX Collaboration. Constraints on the nucleon strange form-factors at Q² ~ 0.1 GeV². Phys. Lett. B 635, 275–279 (2006).
40. HAPPEX Collaboration. Precision measurements of the nucleon strange form factors at Q² ~ 0.1 GeV². Phys. Rev. Lett. 98, 032301 (2007).
41. HAPPEX Collaboration. New precision limit on the strange vector form factors of the proton. Phys. Rev. Lett. 108, 102001 (2012).
42. Spayde, D. T. et al. The strange quark contribution to the proton’s magnetic moment. Phys. Lett. B 583, 79–86 (2004).
43. Maas, F. E. et al. Measurement of strange quark contributions to the nucleon’s form-factors at Q² = 0.230 (GeV/c)². Phys. Rev. Lett. 93, 022002 (2004).
44. Maas, F. E. et al. Evidence for strange quark contributions to the nucleon’s form-factors at Q² = 0.108 (GeV/c)². Phys. Rev. Lett. 94, 152001 (2005).
45. Baunack, S. et al. Measurement of strange quark contributions to the vector form factors of the proton at Q² = 0.22 (GeV/c)². Phys. Rev. Lett. 102, 151803 (2009).
46. HAPPEX Collaboration. Parity-violating electron scattering from ⁴He and the strange electric form factor of the nucleon. Phys. Rev. Lett. 96, 022003 (2006).
47. Balaguer Ríos, D. et al. Measurement of the parity violating asymmetry in the quasielastic electron-deuteron scattering and improved determination of the magnetic strange form factor and the isovector anapole radiative correction. Phys. Rev. D 94, 051101 (2016).
48. SAMPLE Collaboration. Parity violating electron deuteron scattering and the proton’s neutral weak axial vector form-factor. Phys. Rev. Lett. 92, 102003 (2004).
49. González-Jiménez, R., Caballero, J. A. & Donnelly, T. W. Global analysis of parity-violating asymmetry data for elastic electron scattering. Phys. Rev. D 90, 033002 (2014).
50. Zhu, S., Puglia, S. J., Holstein, B. R. & Ramsey-Musolf, M. J. Nucleon anapole moment and parity-violating ep scattering. Phys. Rev. D 62, 033008 (2000).
51. Kelly, J. J. Simple parametrization of nucleon form factors. Phys. Rev. C 70, 068202 (2004).
52. Galster, S., Klein, H., Moritz, J., Schmidt, K. H. & Wegener, D. Elastic electron–deuteron scattering and the electric neutron form factor at four momentum transfers 5 fm⁻² < q² < 14 fm⁻². Nucl. Phys. B 32, 221–237 (1971).
53. Venkat, S., Arrington, J., Miller, G. A. & Zhan, X. Realistic transverse images of the proton charge and magnetic densities. Phys. Rev. C 83, 015203 (2011).
54. Rislow, B. C. & Carlson, C. E. Modification of electromagnetic structure functions for the γZ-box diagram. Phys. Rev. D 88, 013018 (2013).
55. Grames, J. et al. Two Wien filter spin flipper. In Proc. 2011 Particle Accelerator Conference (eds Satogata, T. & Brown, K.) 862–864 (IEEE, New York, 2011).
56. Arrington, J., Blunden, P. G. & Melnitchouk, W. Review of two-photon exchange in electron scattering. Prog. Part. Nucl. Phys. 66, 782–833 (2011).
57. Qweak Collaboration. Beam normal single spin asymmetry measurements from Qweak. Preprint at https://arxiv.org/abs/1604.04602 (2016).
58. Kargiantoulakis, E. A Precision Test of the Standard Model via Parity-Violating Electron Scattering in the Qweak Experiment. PhD thesis, Univ. Virginia (2015); https://misportal.jlab.org/ul/publications/view_pub.cfm?pub_id=14261.
59. Agostinelli, S. et al. Geant4 – a simulation toolkit. Nucl. Instrum. Meth. A 506, 250–303 (2003).
60. Abrahamyan, S. et al. New measurements of the transverse beam asymmetry for elastic electron scattering from selected nuclei. Phys. Rev. Lett. 109, 192501 (2012).
61. Gorchtein, M. & Horowitz, C. J. Analyzing power in elastic scattering of the electrons off a spin-0 target. Phys. Rev. C 77, 044606 (2008).
62. Hauger, M. et al. A high-precision polarimeter. Nucl. Instrum. Meth. A 462, 382–392 (2001).
63. Magee, J. et al. A novel comparison of Møller and Compton electron-beam polarimeters. Phys. Lett. B 766, 339–344 (2017).
64. Horowitz, C. J. Parity violating elastic electron scattering from ²⁷Al and the QWEAK measurement. Phys. Rev. C 89, 045503 (2014).
65. McHugh, M. A Measurement of the Transverse Asymmetry in Forward-Angle Electron-Carbon Scattering Using the Qweak Apparatus. PhD thesis, George Washington Univ. (2017); https://misportal.jlab.org/ul/publications/view_pub.cfm?pub_id=14918.
66. Ferroglia, A., Ossola, G. & Sirlin, A. Bounds on $M_W$, $M_t$ and $\sin^2\theta_{\mathrm{eff}}^{\mathrm{lept}}$. Eur. Phys. J. C 35, 501–507 (2004).
67. Bethke, S. $\alpha_s$ 2002. Nucl. Phys. B Proc. Sup. 121, 74–81 (2003).
## Acknowledgements
This work was supported by the US Department of Energy (DOE) Contract number DE-AC05-06OR23177, under which Jefferson Science Associates, LLC, operates the Thomas Jefferson National Accelerator Facility. Construction and operating funding for the experiment was provided through the US DOE, the Natural Sciences and Engineering Research Council of Canada (NSERC), the Canadian Foundation for Innovation (CFI), and the National Science Foundation (NSF) with university matching contributions from the College of William and Mary, Virginia Tech, George Washington University and Louisiana Tech University. We thank the staff of Jefferson Laboratory, in particular the accelerator operations staff, the target and cryogenic groups, the radiation control staff, as well as the Hall C technical staff for their help and support. We are grateful for the contributions of our undergraduate students. We thank TRIUMF for its contributions to the development of the spectrometer and integrated electronics, and BATES for its contributions to the spectrometer and Compton polarimeter. We are indebted to P. G. Blunden, J. D. Bowman, J. Erler, N. L. Hall, W. Melnitchouk, M. J. Ramsey-Musolf and A. W. Thomas for discussions. We also thank P. A. Souder for contributions to the analysis. Figure 2 was adapted with permission from ref. 3 (copyrighted by the American Physical Society).
## Author information
### Author notes
1. A list of participants and their affiliations appears at the end of this Letter.
2. Deceased: W. R. Falk, J. M. Finn, P. Solvignon.
### Affiliations
• D. Androić
• & T. Seva
2. #### Department of Physics, College of William and Mary, Williamsburg, VA, USA
• D. S. Armstrong
• , T. Averett
• , K. Bartlett
• , R. D. Carlini
• , J. C. Cornejo
• , W. Deconinck
• , J. F. Dowd
• , J. M. Finn
• , V. M. Gray
• , K. Grimm
• , J. R. Hoskins
• , J. Leckey
• , J. H. Lee
• , J. A. Magee
• & S. Yang
3. #### Division of Experimental Physics, A. I. Alikhanyan National Science Laboratory (Yerevan Physics Institute), Yerevan, Armenia
• A. Asaturyan
• , A. Mkrtchyan
• , H. Mkrtchyan
• , V. Tadevosyan
• & S. Zhamkochyan
4. #### Department of Physics, Massachusetts Institute of Technology, Cambridge, MA, USA
• J. Balewski
• , F. Guo
• , S. Kowalski
• & J. F. Rajotte
5. #### Physics Division, Thomas Jefferson National Accelerator Facility, Newport News, VA, USA
• J. Beaufait
• , J. Benesch
• , R. D. Carlini
• , S. Covrig Dusa
• , D. Gaskell
• , M. Jones
• , D. Mack
• , D. Meekins
• , J. Mei
• , R. Michaels
• , B. Sawatzky
• , G. R. Smith
• , P. Solvignon
• & S. A. Wood
6. #### Department of Physics and Astronomy, Ohio University, Athens, OH, USA
• R. S. Beminiwattha
• , P. M. King
• , J. H. Lee
• , J. Roche
• & B. Waidyawansa
7. #### Department of Physics, Christopher Newport University, Newport News, VA, USA
• F. Benmokhtar
8. #### Department of Physics and Astronomy, University of Manitoba, Winnipeg, Manitoba, Canada
• J. Birchall
• , W. R. Falk
• , M. T. W. Gericke
• , L. Lee
• , S. MacEwan
• , R. Mahurin
• , W. T. H. van Oers
• , S. A. Page
• , J. Pan
• , W. D. Ramsay
• , V. Tvaskis
• & P. Wang
9. #### Department of Physics, University of Virginia, Charlottesville, VA, USA
• M. M. Dalton
• , C. Gal
• , D. Jones
• , M. Kargiantoulakis
• , V. Nelyubin
• , K. D. Paschke
• , R. Silwal
• & W. A. Tobias
10. #### Science Division, TRIUMF, Vancouver, British Columbia, Canada
• C. A. Davis
• , L. Lee
• , W. T. H. van Oers
• & W. D. Ramsay
11. #### Department of Physics, Hampton University, Hampton, VA, USA
• J. Diefenbach
• & Nuruzzaman
12. #### Department of Physics and Astronomy, Mississippi State University, Mississippi State, MS, USA
• J. A. Dunne
• , D. Dutta
• , A. Narayan
• , L. Z. Ndukum
• , Nuruzzaman
• , M. H. Shabestari
• & A. Subedi
13. #### Department of Physics, Virginia Polytechnic Institute and State University, Blacksburg, VA, USA
• W. S. Duvall
• , J. Leacock
• , A. R. Lee
• , J. Mammei
• , N. Morgan
• & M. L. Pitt
• M. Elaasar
• T. Forest
16. #### Department of Physics, Louisiana Tech University, Ruston, LA, USA
• T. Forest
• , K. Grimm
• , H. Nuhait
• , N. Simicevic
• & S. P. Wells
17. #### Accelerator Division, Thomas Jefferson National Accelerator Facility, Newport News, VA, USA
• J. Grames
• , M. Poelker
• & R. Suleiman
• R. Jones
• E. Korkmaz
20. #### Department of Physics, University of Winnipeg, Winnipeg, Manitoba, Canada
• J. W. Martin
• & V. Tvaskis
21. #### Department of Physics, George Washington University, Washington, DC, USA
• M. J. McHugh
• , K. E. Mesick
• , A. Micherdzinska
• , A. K. Opper
• & R. Subedi
22. #### Department of Physics, University of New Hampshire, Durham, NH, USA
• S. K. Phillips
23. #### Physics Department, Hendrix College, Conway, AR, USA
• D. T. Spayde
24. #### Department of Physics and Mathematical Physics, University of Adelaide, Adelaide, South Australia, Australia
• R. D. Young
• W. R. Falk
• & P. Zang
### Contributions
Authors contributed to one or more of the following areas: proposing, leading and running the experiment; design, construction, optimization and testing of the experimental apparatus and data acquisition system; data analysis; simulation; extraction of the physics results from measured asymmetries; and the writing of this Letter.
### Competing interests
The authors declare no competing interests.
### Corresponding authors
Correspondence to R. D. Carlini or G. R. Smith.
## Extended data figures and tables
1. ### Extended Data Fig. 1 Apparatus.
a, Schematic of critical accelerator components and the Qweak apparatus4. The electron beam is generated at the photocathode, accelerated by the Continuous Electron Beam Accelerator Facility (CEBAF) and sent to experimental Hall C, where it is monitored by beam position monitors and beam current monitors. The insertable half-wave plate (IHWP) provides slow reversal of the electron beam helicity. The data acquisition system records the data. b, Computer-aided design drawing of the experimental apparatus. c, The Qweak apparatus, before the final shielding configuration was installed. d, Interior of the hut shielding the detectors, showing two of the Cherenkov detectors (right) and a pair of tracking chambers (left).
2. ### Extended Data Fig. 2 Beamline background.
Determination of Abb, the false asymmetry arising from beamline background events. Uncertainties are 1 s.d. a, Correlation of the main detector asymmetry to that of the upstream luminosity monitors, measured when the signal from elastically scattered electrons in the main detectors was blocked at the first collimator. b, Correlation of asymmetries from the upstream luminosity monitors with one of the other background detectors (a bare PMT located in the detector shield house). c, Correlation of the unblocked main detector asymmetry to that of the upstream luminosity monitor for Run 2. Our Abb determination was based on this slope.
3. ### Extended Data Fig. 3 Rescattering bias.
a, Schematic illustrating the precession of longitudinally polarized electrons through the spectrometer magnet, generating sizeable transverse spin components upon arrival at the detector array (spin directions indicated by red and blue arrows for the two electron helicity states). An end-view of the detector array, indicating the right (R) and left (L) PMT positions, is shown on the left. b, Difference between the asymmetry measured by the two (R and L) PMT tubes versus the detector number (Run 2 data). c, Calculated rescattering bias Abias versus detector number, with the eight-detector-averaged value shown by the red lines. Uncertainties (1 s.d.) are systematic.
4. ### Extended Data Fig. 4 Electron beam polarization.
Measurements from the Compton (closed blue circles) and Møller (open red squares) polarimeters during Run 2. Inner error bars denote statistical uncertainties and outer error bars show the statistical and point-to-point systematic uncertainties added in quadrature. Normalization, or scale-type, uncertainties are shown by the solid blue (Compton) and red (Møller) bands. All uncertainties are 1 s.d. The yellow band shows the derived polarization values used in the evaluation of the parity-violating asymmetry Aep. The time dependence of the reported polarization is driven primarily by the continuous Compton measurements, with a small-scale correction (0.21%, not included in this figure) determined from an uncertainty-weighted global comparison of the Compton and Møller polarimeters.
5. ### Extended Data Fig. 5 Asymmetry from aluminium.
Parity-violating asymmetry from the aluminium alloy target versus the dataset number. All uncertainties are 1 s.d. The labels ‘IN’ and ‘OUT’ refer to the state of the insertable half-wave plate at the electron source, which generated a 180° flip of the electron spin when IN. The subscripts denote the setting of the Wien filter, with L and R corresponding to the presence and absence, respectively, of an additional 180° rotation of the spin direction of the electron beam. A period in which a further 180° flip was generated through (ge − 2) precession (ge, electron gyromagnetic ratio) via a modified accelerator configuration during Wien 6 is indicated. The combinations OUT–R and IN–L with no (ge − 2) spin flip reveal the physical sign of the asymmetry. Solid lines represent the time-averaged values, and the horizontal dashed line indicates zero asymmetry. The vertical dashed lines delineate particular data subsets with a given Wien filter setting. Source Data
6. ### Extended Data Fig. 6 Asymmetry from the proton.
Observed parity-violating asymmetry Aep after all corrections, versus the dataset number (acquired in the double-Wien-filter configuration). The Wien filter reversed the beam helicity at approximately monthly intervals. The subscripts denote the setting of the Wien filter as L or R, corresponding to the presence or absence, respectively, of a 180° rotation of the spin direction of the electron beam. IN and OUT refer to the state of the insertable half-wave plate at the electron source, generating an additional 180° flip of the spin when IN. A period in which a further 180° flip was generated through (ge − 2) precession via a modified accelerator configuration is indicated. The combinations OUT–R and IN–L with no (ge − 2) flip reveal the physical sign of the asymmetry. Solid lines represent the time-averaged values and the dashed line indicates zero asymmetry. The uncertainties (1 s.d.) shown are those of the corresponding Amsr values (see text) only—that is, they do not include time-independent uncertainties—so as to illustrate the time stability of the results. The weighted mean and P-value of the upper OUT–L and IN–R data are 226.9 ± 10.2, P = 0.59 (upper solid line), respectively. For the opposite combination, OUT–R and IN–L, we find a weighted mean of −226.1 ± 10.5 and P = 0.36 (lower solid line). Source Data
# Material not rendering
Greetings: I'm new to Blender. I've done my due diligence by reviewing about 100 posts on similar rendering questions but, within the limitations of my understanding, I've not found a response that appears to answer my question.
In attempting to render an animation using eevee, I am unable to get the rendered images to show the glossy finish that I get in the Modeling editor when I select the Render Preview button. Instead, I get the grey appearance shown in the eevee image.
Any help would be appreciated. Thanks for your time.
• A shiny material needs something to be reflected on it. In the lookdev preview window you have an HDR image, but it is not used in the final render. blender.stackexchange.com/questions/134736/… – Pullup Jun 4 '20 at 23:18
You need an HDRI texture for the proper reflections you are looking for. Go into the shading tab and switch from object shading to world shading (highlighted below). Add an environment texture, and assign an HDRI texture
(If you need HDRI images, one place that has a good selection of free ones is here - https://hdrihaven.com/hdris/).
Alternatively, if you want the same ones that come with Blender (used in lookdev), they are in the folder ../2.82/datafiles/studiolights/world. I think the one you used in the preview is forest.exr.
Connect it up as shown below and enjoy.
For the other problem you are having, your (base) material node setup should look like this. Either add the nodes if they're not there, or if you want to just start again, hit the minus button next to the material, and add a new one.
EDIT: A SAMPLE PROJECT FOR YOU
Ok, I made a .blend project that SHOULD be pretty much the same as what you're working with. It works fine for me. Download it, have a look, and see if:
a) it works for you, and
b) there are any differences between its settings and yours.
File is here -
• Christopher: I followed your instructions above, after downloading a HDRI and installing it. The node arrangement looks exactly like yours with World selected in the drop down menu in the node editor while in Shader. The background now shows the HDRI image, however the objects in the scene are purple, and the rendered image has not changed from the image I posted above. It shows no background, and the text is still grey instead of shiny. – Rocket Surgeon Jun 5 '20 at 1:30
• They are purple. I have a screen shot, but cannot figure out how to post it in response to your posts. – Rocket Surgeon Jun 5 '20 at 1:36
• Ahh, ok. I edited my original post, and added the new image. Hope that's ok. – Rocket Surgeon Jun 5 '20 at 1:40
• I probably have something incorrectly set in my material or texture. Sucks being a wanker newbie. – Rocket Surgeon Jun 5 '20 at 1:41
• I'm new to nodes, and Blender in general. However, when in the Shading editor, and selecting Object from the drop down menu, with the text selected and highlighted, I have no node elements showing. – Rocket Surgeon Jun 5 '20 at 1:43
My watch list
my.bionity.com
VO2 max
Dr Chelsey Dempsey and Dr Jennifer Brierley describe VO2 max as the maximum capacity to transport and utilize oxygen during incremental exercise. (The derivation is V̇ - volume per time, O2 - oxygen, max - maximum). It is also called maximal oxygen consumption or maximal oxygen uptake. It is also known as aerobic capacity, which reflects the physical fitness of a person.
Expressed either as an absolute rate in litres of oxygen per minute (l/min) or as a relative rate in millilitres of oxygen per kilogram of bodyweight per minute (ml/kg/min), the latter expression is often used to compare the performance of endurance sports athletes. A less size-biased measure is to divide by $\sqrt[3]{mass^2}$ rather than mass.
Measuring VO2 max
Accurately measuring VO2 max involves a physical effort sufficient in duration and intensity to fully tax the aerobic energy system. In general clinical and athletic testing, this usually involves a graded exercise test (either on a treadmill or on a cycle ergometer) in which exercise intensity is progressively increased while measuring ventilation and oxygen and carbon dioxide concentration of the inhaled and exhaled air. VO2 max is reached when oxygen consumption remains at steady state despite an increase in workload.
Fick Equation
VO2 max is properly defined by the Fick Equation:
$\mathrm{VO_2\; max} = Q(\mathrm{CaO_2} - \mathrm{CvO_2})$
where Q is the cardiac output of the heart, CaO2 is the arterial oxygen content, and CvO2 is the venous oxygen content.
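As a quick numerical illustration (the Q and oxygen-content values below are assumed, textbook-style figures for heavy exercise, not from this article):

```python
def vo2_fick(q, ca_o2, cv_o2):
    """Fick equation: VO2 (l/min) = cardiac output Q (l/min) times the
    arteriovenous oxygen content difference (l O2 per l blood)."""
    return q * (ca_o2 - cv_o2)

# Assumed values: Q = 25 l/min, CaO2 = 0.20, CvO2 = 0.05 l O2 per l blood.
print(round(vo2_fick(25.0, 0.20, 0.05), 2))  # 3.75 l/min
```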
Cooper test
Dr. Kenneth H. Cooper conducted a study for the United States Air Force in the late 1960s. One of the results of this was the Cooper test in which the distance covered running in 12 minutes is measured. An approximate estimate for VO2 max (in ml/min/kg) is:
$\mathrm{VO_2\; max} = {d_{12} - 505 \over 45}$
where d12 is distance (in metres) covered in 12 minutes. There are several other reliable tests and VO2 max calculators to estimate VO2 max.
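The Cooper estimate is straightforward to compute directly; a minimal sketch (the 2,800 m distance is just an example):

```python
def cooper_vo2max(d12):
    """Estimate VO2 max (ml/kg/min) from d12, the distance in metres
    covered in a 12-minute run: (d12 - 505) / 45."""
    return (d12 - 505) / 45

print(cooper_vo2max(2800))  # 51.0
```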
VO2 max Levels
VO2 max varies considerably in the population. The average young untrained male will have a VO2 max of approximately 3.5 litres/minute and 45 ml/kg/min.[1] The average young untrained female will score a VO2 max of approximately 2.0 litres/minute and 38 ml/kg/min.[citation needed] These scores can improve with training and decrease with age, though the degree of trainability also varies very widely.[2][3]
In sports where endurance is an important component in performance, such as cycling, rowing, cross-country skiing, swimming and running, world class athletes typically have high VO2 maximums. World class male athletes, cyclists and cross-country skiers typically exceed 80 ml/kg/min and a rare few may exceed 90 ml/kg/min for men and 70 ml/kg/min for women. Three-time Tour de France winner Greg LeMond is reported to have had a VO2 max of 92.5 at his peak - one of the highest ever recorded, while cross-country skier Bjørn Dæhlie measured at an astounding 96 ml/kg/min.[4] It should also be noted that Dæhlie's result was achieved out of season and that physiologist Erlend Hem, who was responsible for the testing, stated that he would not discount the possibility of the skier passing 100 ml/kg/min at his absolute peak. By comparison a competitive club athlete might achieve a VO2 max of around 70 ml/kg/min.[1] World class rowers are physically very large endurance athletes and typically do not score as high on a per weight basis, but often score exceptionally high in absolute terms. Male rowers typically score VO2 maximums over 6 litres/minute, and some exceptional individuals have exceeded 8 l/min.
To put this into perspective, thoroughbred horses have a VO2 max of around 180 ml/min/kg. Siberian dogs running in the Iditarod Trail Sled Dog Race have VO2 values as high as 240 ml/min/kg.[5]
albertel lon-capa-cvs@mail.lon-capa.org
Mon, 18 Oct 2004 18:44:12 -0000
albertel Mon Oct 18 14:44:12 2004 EDT
Modified files:
Log:
update prettyprint and format descriptions as per 2998
--- loncom/html/adm/help/tex/all_functions_table.tex:1.3 Wed Feb 18 18:00:20 2004
+++ loncom/html/adm/help/tex/all_functions_table.tex Mon Oct 18 14:44:12 2004
@@ -34,9 +34,9 @@
\hline
asinh(x), acosh(x), atanh(x) &\&asinh(\$x), \&acosh(\$x), \&atanh(\$x) &Inverse hyperbolic functions. \$x can be a pure number & \\
\hline
-/DIS(\$x,''nn'') &\&format(\$x,'nn') &Display or format \$x as nn where nn is nF or nE and n is an integer. Also supports the first character being a \$, it thjen will format the result with a call to \&dollarformat() described below. & The difference is obvious. \\
+/DIS(\$x,''nn'') &\&format(\$x,'nn') &Display or format \$x as nn where nn is nF or nE and n is an integer. & The difference is obvious. \\
\hline
-Not in CAPA &\&prettyprint(\$x,'nn','optional target') &Display or format \$x as nn where nn is nF or nE and n is an integer. Also supports the first character being a \$, it then will format the result with a call to \&dollarformat() described below. In E mode it will attempt to generate a pretty x10\^{}3 rather than a E3 following the number, the 'optional target' argument is optional but can be used to force \&prettyprint to generate either 'tex' output, or 'web' output, most people do not need to specify this argument and can leave it blank. & \\
+Not in CAPA &\&prettyprint(\$x,'nn','optional target') &Display or format \$x as nn where nn is nF or nE and n is an integer. Also supports the first character being a \$, it then will format the result with a call to \&dollarformat() described below. If the first character is a comma, it will format the result with commas grouping the thousands. In E mode it will attempt to generate a pretty x10\^{}3 rather than a E3 following the number, the 'optional target' argument is optional but can be used to force \&prettyprint to generate either 'tex' output, or 'web' output, most people do not need to specify this argument and can leave it blank. & \\
\hline
Not in CAPA &\&dollarformat(\$x,'optional target') &Reformats \$x to have a \$ (or $\backslash$\\$ if in tex mode) and to have , grouping thousands. The 'optional target' argument is optional but can be used to force \&dollarformat to generate either 'tex' output, or 'web' output, most people do not need to specify this argument and can leave it blank. & \\
\hline
Conversion from cartesian to fractional coordinates
It seems like you're doing a lot more work than necessary. You want something like: $$\vec{R} = x_1 \vec{a}_1 + x_2 \vec{a}_2 + x_3 \vec{a}_3$$ which can be written in matrix form as $$\vec{R} = A \cdot \left( \begin{array}{c} x_1 \\ x_2 \\ x_3 \end{array}\right)$$ where the matrix A is formed with the vectors $$\vec{a}_i$$ as its columns. Then you just need to invert the matrix to find the fractional coordinates $$x_i$$.
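In NumPy this is a one-liner with `np.linalg.solve` (the lattice vectors below are illustrative values, not taken from the question):

```python
import numpy as np

# Lattice vectors a1, a2, a3 as the COLUMNS of A (illustrative
# orthorhombic cell; any non-degenerate cell works the same way).
a1 = np.array([4.0, 0.0, 0.0])
a2 = np.array([0.0, 5.0, 0.0])
a3 = np.array([0.0, 0.0, 6.0])
A = np.column_stack([a1, a2, a3])

R = np.array([2.0, 2.5, 1.5])   # Cartesian position
x = np.linalg.solve(A, R)       # fractional coordinates (x1, x2, x3)

print(x.tolist())  # [0.5, 0.5, 0.25]
assert np.allclose(A @ x, R)    # round trip recovers the Cartesian vector
```

Using `solve` is preferable to forming `inv(A)` explicitly: it is cheaper and numerically better behaved, and the inverse is only worth computing once if you convert many points.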
English . Español .
The application of Appropriate Technology
# Part 5: Magnetic Materials
Sections:
### 5.1 The Magnetisation Curve
We have already shown that for an air-cored solenoid (section 4.3):
$B=\dfrac{\Phi }{A},\quad H=\dfrac{IN}{\mathit{l}}\quad \text{and}\quad \dfrac{B}{H}=\mu_0$
A graph of the magnetic flux density (B) against the magnetising force (H) is called the magnetising curve. Such a graph for air-cored and nonmagnetic-cored solenoids is shown in figure 5.1. This graph is a straight line with a gradient equal to μ0; no matter how great the magnetising force, the magnetic flux density produced will always be directly proportional to it.
Figure 5.1: The magnetising curve for an air-cored solenoid or a solenoid cored with a nonmagnetic material.
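For an air core the two formulas combine to B = μ0·I·N/l. A small numerical sketch, with illustrative coil values (2 A, 500 turns, 0.25 m, not taken from the text):

```python
import math

MU_0 = 4e-7 * math.pi  # permeability of free space (H/m)

def solenoid_b_air(current, turns, length):
    """Flux density of an air-cored solenoid: B = mu_0 * H, with H = I*N/l."""
    h = current * turns / length  # magnetising force (At/m)
    return MU_0 * h               # flux density (T)

# H = 2 * 500 / 0.25 = 4000 At/m, so B = mu_0 * 4000, roughly 5e-3 T
print(round(solenoid_b_air(2.0, 500, 0.25), 6))  # 0.005027
```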
For ferromagnetic materials:
$\dfrac{B}{H}=\mu _0\mu _r$
Since μ0 is a constant and μr changes depending on the value of H, there is no fixed relationship between B and H for ferromagnetic materials.
Ferromagnetic materials, such as iron, cobalt, nickel and steel, can be considered to be made up from small permanent magnets called ‘domains’. When the material is not magnetised, the poles of these domains point in all directions (figure 5.2a) and their individual magnetic fields cancel out so that there is no detectable overall magnetism. As the material is subjected to an increasing external magnetising force, the poles of the domains begin to swing into line, so that when the material is fully magnetised all the north poles point in one direction (figure 5.2b).
If we consider a solenoid consisting of a wire coiled around a ferromagnetic core, we can plot a graph of the magnetic flux density (B) of the solenoid versus the magnetising force (H) produced by the coil (remembering that H = IN/l, so that as the current in the coil increases so does H). Such a graph is shown in figure 5.3. We can see that while the domains are rearranging, the graph is a straight line after a small, initial curve. Once all of the domains have become aligned with the solenoid’s field, the graph levels out until B cannot increase any more; at this point the material is said to have reached saturation.
Figure 5.2: (a) a piece of ferromagnetic material which is not magnetised, where the domain poles are not aligned; (b) the domain poles aligned with an external magnetising force (H).
Figure 5.3 also shows the variation of μr with H, and shows that μr falls off rapidly after saturation.
Note that to produce as much flux as possible for a given value of H, the material should not be saturated. If you increase the current flowing through a solenoid that has reached saturation, very little or no extra flux will be obtained. Note also the much larger flux density produced using a ferromagnetic core; the H axis in figure 5.1 is At/m ×10⁻⁶ whereas in figure 5.3 it is just At/m.
The magnetisation curves for different ferromagnetic materials are shown in figure 5.4.
Figure 5.3: The magnetising curve of a ferromagnetic core.
Figure 5.4: The magnetisation curve for various ferromagnetic materials.
Figure 5.5: (a) a ferromagnetic material in a magnetic field; (b) a ferromagnetic material being used as a magnetic shield.
### 5.2 Hysteresis Losses
For ferromagnetic materials the magnetising curve shown in figure 5.3 is not reversible; that is, if H is increased until the material is saturated, when H is reduced again the value of B does not return to zero along the same line. Figure 5.6 shows a hysteresis loop that illustrates the complete story.
Figure 5.6: a hysteresis loop for a solenoid's ferromagnetic core.
In figure 5.6 the point 0, where the axes cross, represents a ferromagnetic material that is not magnetised. As the current in the coil increases, H also increases, and between points 0 and 1 B increases following the magnetisation curve. At point 1 the material has reached saturation and B will no longer increase. Now we start to reduce the current in the coil so that we can demagnetise the material; as stated before, the graph will not follow the same path it did when the current increased but instead goes from point 1, through point 2, then down to point 3.
At point 2, H = 0: the current has reached zero but there is still some remnant flux density, so the material is still partially magnetised. The current is now reversed so that H is in the opposite direction to before and has a negative value; therefore, as the reversed current is increased, the value of H becomes more negative. At point 3 the material is finally demagnetised, and the value of H at this point is called the coercive force.
If the reversed current increases further we reach point 4, where the material saturates so that the magnetic poles of the domains face in the opposite direction to those at point 1. The reversed current is now reduced and reaches zero at point 5; however, once again, some flux remains. If the current is now increased in the original direction, all the flux has gone by point 6 and saturation is reached once more at point 1.
Note that the distances between points 0 and 2, and, 0 and 5 are equal. Also the distances between points 0 and 6, and, 0 and 3 are equal.
Since the coercive force must be applied to overcome the remanent magnetism, work is done in completing the hysteresis loop and the energy concerned appears as heat in the magnetic material. This heat is known as hysteresis loss; the amount of loss depends on the material’s coercive force. By adding silicon to iron, a material with a very small coercive force can be made; such materials typically contain 5% silicon and have a very narrow hysteresis loop (figure 5.7b). Materials with narrow hysteresis loops are easily magnetised and demagnetised and are known as soft magnetic materials.
Hysteresis losses will always be a problem in AC transformers, where the current constantly changes direction and the magnetic poles in the core therefore cause losses by constantly reversing direction. Rotating coils in DC machines will also incur hysteresis losses as they alternately pass north and south magnetic poles.
If you wish to create a permanent magnet you should use a material with a very wide hysteresis loop (figure 5.7a). Such materials, once magnetised, are very difficult to demagnetise, and when the magnetising force is removed a substantial magnetic flux density remains. These materials are known as hard magnetic materials.
Figure 5.7: (a) the hysteresis loop for hard magnetic material suitable for permanent magnet; (b) the hysteresis loop for soft magnetic material suitable for a transformer core.
### 5.3 Eddy-Current Losses
We will see in the next section that any conductor subjected to a changing magnetic flux will have an EMF induced in it. An alternating flux, such as that in the core of a transformer or the rotor of a DC motor, is continuously changing and will thus induce an EMF in any conductor it cuts. These induced EMFs will drive alternating currents within the core or windings; these are known as eddy currents. Eddy currents can be quite large: although the induced EMFs are small, the resistance of the core or winding is also small. Thus they can lead to substantial heating losses. Eddy currents can never be completely removed, but they can be reduced considerably by laminating the transformer core.
### 5.4 Iron Losses
Hysteresis losses and eddy-current losses taken together are called iron losses. The power loss due to eddy currents (PE) is proportional to the square of the supply frequency (f) and the square of the maximum flux density (Bmax). Thus:
$P_E\propto \mathit{f}^2B_{max}^2$
The relationship between the power loss due to hysteresis (PH), f and Bmax is similar:
$P_H\propto \mathit{f}B_{max}^{1.6}$
Exact equations for these losses are very complex.
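The proportionalities can still be used for relative comparisons; in the sketch below, k_e and k_h stand in for the (complex) material and geometry constants and are simply assumed to be 1:

```python
def eddy_loss(f, b_max, k_e=1.0):
    """Eddy-current loss: P_E = k_e * f**2 * B_max**2."""
    return k_e * f**2 * b_max**2

def hysteresis_loss(f, b_max, k_h=1.0):
    """Hysteresis loss: P_H = k_h * f * B_max**1.6."""
    return k_h * f * b_max**1.6

# Doubling the supply frequency at fixed B_max quadruples the
# eddy-current loss but only doubles the hysteresis loss:
print(eddy_loss(100, 1.2) / eddy_loss(50, 1.2))              # 4.0
print(hysteresis_loss(100, 1.2) / hysteresis_loss(50, 1.2))  # 2.0
```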
### 5.5 Properties And Uses Of Some Ferromagnetic Materials
Table 5.1: The properties and uses of some magnetic materials.
* Br = the remanent flux density, Bsat = the saturation flux density, Hc = the coercive force, μr = the relative permeability at H=0.
| Material | Composition | Br* (T) | Bsat* (T) | Hc* (A/m) | μr* | Uses |
| --- | --- | --- | --- | --- | --- | --- |
| Silicon iron | Fe – 3% Si | 0.8 | 1.95 | 24 | 500-1500 | Bell and telephone electromagnets. Relay cores. DC choke cores. Magnetic circuits for rotating machinery (up to 3% Si), transformer cores (up to 5% Si). |
| Mumetal | 5% Cu, 2% Cr, 77% Ni, 16% Fe | 0.6 | 0.65 | 4 | 2×10⁴ | Magnetic shields, instrument magnetic circuits, current transformers. |
| Carbon steel | 0.9% C, 1% Mn, 98% Fe | 1.0 | 1.98 | 4×10³ | 14 | Permanent magnets. |
| Alnico V | 24% Co, 14% Ni, 8% Al, 3% Cu, 51% Fe | 1.31 | 1.41 | 5.3×10⁴ | 20-250 | Permanent magnets. |
# Roots of a cubic polynomial
Consider the cubic polynomial
$\large p(x)=ax^3+ bx^2 + cx + d$
where $$a$$, $$b$$, $$c$$, and $$d$$ are integers such that $$ad$$ is odd and $$bc$$ is even.
Can all of the roots of $$p(x)$$ be rational? If yes, give an example; if no, prove it.
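As a sanity check (not a proof), a brute-force search using the rational root theorem over small integer coefficients finds no cubic with $$ad$$ odd and $$bc$$ even whose three roots are all rational. The helper names below are ours:

```python
from fractions import Fraction
from itertools import product

def divisors(n):
    """Positive divisors of |n| (n != 0)."""
    n = abs(n)
    return [k for k in range(1, n + 1) if n % k == 0]

def eval_poly(poly, x):
    """Horner evaluation of poly (highest-degree coefficient first) at x."""
    v = Fraction(0)
    for c in poly:
        v = v * x + c
    return v

def deflate(poly, r):
    """Synthetic division of poly by (x - r); r must be a root."""
    out = [poly[0]]
    for c in poly[1:-1]:
        out.append(out[-1] * r + c)
    return out

def rational_roots(coeffs):
    """All rational roots (with multiplicity) of a cubic with a, d != 0,
    via the rational root theorem: candidates p/q with p | d and q | a."""
    a, d = coeffs[0], coeffs[-1]
    candidates = {Fraction(s * p, q)
                  for p in divisors(d) for q in divisors(a) for s in (1, -1)}
    poly = [Fraction(c) for c in coeffs]
    roots = []
    progress = True
    while progress and len(poly) > 1:
        progress = False
        for r in candidates:
            if eval_poly(poly, r) == 0:
                roots.append(r)
                poly = deflate(poly, r)
                progress = True
                break
    return roots

# Search all coefficients in [-5, 5] with ad odd and bc even:
# no such cubic has three rational roots.
found = False
for a, b, c, d in product(range(-5, 6), repeat=4):
    if (a * d) % 2 == 0 or (b * c) % 2 != 0:
        continue
    if len(rational_roots([a, b, c, d])) == 3:
        found = True
print(found)  # False
```

This matches the classical argument: if all three roots were rational, Gauss's lemma would let the cubic factor into integer linear factors whose leading and constant coefficients are all odd (since $$ad$$ is odd), forcing both $$b$$ and $$c$$ to be odd and contradicting $$bc$$ even.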
# His Girl Friday Summary and Analysis of Part 4: The Pressroom
Summary
One of the journalists informs the mayor and Pete that the governor has issued a statement saying that there will be an uprising the following day, to which the mayor responds, “anything the governor says is a tissue of lies.” The journalist continues, saying that the governor has issued a statement calling the mayor and the sheriff “a couple of 8-year-olds playing with fire” and issuing the statement, “It is a lucky thing for the city that next Tuesday is election day, as the citizens will thus be saved the expense of impeaching the mayor and the sheriff.” The governor is blaming the mayor and the sheriff for Williams’ escape. The journalists laugh at the corrupt mayor’s expense. The sheriff then announces that he has a tip about where Earl is, and the group rushes downstairs. In the lobby of the courthouse, the mayor informs the sheriff that he’s replacing him on his ticket next Tuesday, and blames him for wrongly accusing Williams of sympathizing with communism. The sheriff’s misstep, according to the mayor, will cost them 200,000 votes, and his election is dependent on Earl Williams being executed.
Just then, there is a knock on the door and a man comes in with a letter from the governor, stating that he wants reprieve for Earl Williams. The mayor is furious and requests to speak to the governor on the phone. The messenger informs the mayor that the governor is duck hunting, as the sheriff reads the reprieve declaring Williams’ insanity. The mayor is furious with the sheriff and tells him he wants him to resign immediately, when suddenly the phone rings. The sheriff answers and learns that the rifle squad has Earl Williams surrounded at his house. Seeing an opportunity to still win the election and have Williams killed, the mayor tells the messenger to forget he ever delivered the reprieve. When the messenger doesn’t take the hint, the mayor offers him a job with a huge raise as a bribe. The messenger doesn’t accept the bribe, and the mayor quickly ushers him out of the office, slipping him $50, urging him to tell no one that he delivered the reprieve, and urging him to visit him the following day. After the messenger has left, the mayor tells the sheriff to order the rifle squad to “shoot to kill” and that whoever kills Williams will receive a cash reward. We see Diamond Louie arriving at the pressroom, and Hildy scolds him for double-crossing Bruce and asks for her money. Louie hands her the counterfeit cash, and she asks for Bruce’s wallet, which he also hands over. As she packs up her stuff, Louie grabs Hildy’s briefcase, but she stops him, accusing him of intending to take it to the police and get her arrested. Afraid of getting arrested himself, Louie throws the briefcase at Hildy and rushes down the stairs. Hildy runs to the phone and starts to make a call when she is suddenly apprehended by Earl Williams, who has slipped in through the window of the pressroom. She drops the phone and stares at him, as he tells her not to tell anyone where he is. 
Hildy tries to calm the crazy man down, telling him that she’s going to write a story that reveals his side of the story, but as she begins to walk towards him, he tells her not to move or he’ll shoot. “You can’t trust anybody!” he exclaims, as the phone rings. When Hildy goes to answer the phone, Earl urges her not to answer it, and tells her that he’ll shoot her if she does. “You don’t want to kill anyone,” she says calmly, and Earl agrees, close to tears. As she turns to go to the door, Earl asks what she’s doing, and she tells him that she’s closing the door so nobody sees him. “No you weren’t. You were going to get somebody. But all I want is to be left alone!” Earl exclaims. Hildy tries to calm him down, when suddenly there is a noise outside. Earl turns and fires the gun anxiously. As Hildy reaches for it and takes it from his hand, he tells her it is filled with shells, and she rushes to the door. Fearing that they will know Earl’s there, Hildy hastily closes the door and the blinds as Earl laments his sorry state. Hildy then calls Walter, but is interrupted by a call from Bruce, who tells her he is leaving her. Hildy is caught between two phone calls, one from Bruce and one from Walter, and she flies back and forth between them, on one phone informing Walter that she has Earl Williams in the pressroom, and on the other urging Bruce not to leave her. As someone starts knocking frantically on the door of the pressroom, Hildy is forced to hang up on Bruce. It’s Mollie Malloy, who seems upset. After turning off the lights in the pressroom, Hildy lets Mollie in, and Mollie is confused about where the journalists have gone. Hildy attempts in vain to get Mollie to leave, but Mollie is determined to tell her that the rifle squad has Earl Williams surrounded. Just as Hildy gives Mollie the location of the other journalists, Earl calls out to Mollie, who is startled to realize that Earl is in the pressroom. 
She runs to him and he tells her he is innocent, and thanks her for the roses she sent him in jail. When Mollie tells Hildy she wants to get Earl out of there, Hildy insists that they have no chance of escaping unseen. As Hildy tries to help Earl and the hysterical Mollie strategize about what to do next, there is a knock on the door. Two of the journalists have returned and are confused about why the door is locked. Hildy helps Earl climb into a desk and closes the lid over him, and orders Mollie to pull herself together and sit in a chair nearby. Hildy answers the door and lets the two journalists in. When they spot Mollie, they ask what’s going on, and Hildy tells them that she burst in in hysterics. The two men ask Mollie if she’s seen Earl, but she lies and says she hasn’t seen him. One of the journalists calls someone to report that the capture of Earl was a false alarm, that he wasn’t at his house as previously believed. Another journalist who has filed in gets a call with a lead that Earl is hiding near an old woman’s house. Hildy offers to hold down the pressroom for them while they go investigate, but the men are discouraged by how much it would cost to take a taxi there. One of the journalists looks out the window suspiciously and posits that Earl isn’t any of the places he’s suspected of being, and that he might very well be in the building. “Sure sure, like a duck in a shooting gallery,” Hildy jokes, hoping to set them off Earl’s trail. When they don’t take her bait, she encourages them to search the building for him, but they are unconvinced. One of them gets wise to Hildy’s urging and notes that she seems “pretty anxious to get rid of us.” As the men put forth their theories about how Hildy is trying to get the story from them, they are interrupted by the arrival of Mrs. Baldwin, Bruce’s mother. Mrs. Baldwin launches into an angry tirade at Hildy about leaving Bruce in jail and playing “cat-and-mouse” with him. Hildy offers to go with her, but Mrs. 
Baldwin won’t hear of it, saying, “Just give me Bruce’s money and you can stay here forever as far as I’m concerned.” Mrs. Baldwin then reveals what Bruce told her on the phone, that Hildy apprehended a murderer. Now the journalists really get suspicious, and they begin to ask Hildy what Mrs. Baldwin means. As Hildy denies the claims, Mrs. Baldwin insists that Bruce told her that Hildy had come across the murderer. All of a sudden, Mollie speaks up, telling the men that she’s the only one who knows where Earl is. Mollie begins sobbing, scolding the men for not listening to her before, but suddenly wanting her to speak up now. Growing more and more anxious, Mollie becomes so upset that she jumps out the window, just as Walter and Louie enter the pressroom. The men rush to the window, look down and see that Mollie is still alive, still moving. They run out of the room to get the story as Walter approaches Hildy and asks her where Earl is. When Hildy tells him, he rushes over to the desk and tells Earl to keep “sitting pretty,” before closing him back into the desk. Mrs. Baldwin, who is still in the room, asks Walter who is in the desk, and Walter is confused. Even when Hildy has told Walter who the older woman is, he is uncomfortable and orders Louie to remove Mrs. Baldwin from the office. Louie throws Mrs. Baldwin over his shoulder and carries her out, much to her alarm, as Hildy calls after them, “Don’t worry, mother, this is only temporary!” Hildy tries to get past Walter to go collect Bruce, but Walter won’t let her leave in the middle of the story. “This is war, you can’t desert me now!” he pleads with her, but she counters that she got Earl for him, that she delivered his story directly to the pressroom. Walter will not let her go, and insists, “There are 365 days in the year one can get married! Now you’ve got a murderer locked up in a desk! 
Once in a lifetime.” According to Walter, this marks a revolutionary journalistic moment, and Hildy will have played a part in exposing government corruption and making way for political change. Hildy gets excited as Walter details how this story will change her career and put her in a new class of journalists. As Walter spirals into a fantasy about the glory Hildy will receive for her work, Hildy becomes impatient, motioning towards the desk in which Earl is hiding and reminding him that they have a lot to do. Walter goes to make a phone call as Hildy questions him about how they are going to transport Earl to Walter’s private office with all the policemen waiting outside. Walter calls Duffy and instructs Hildy to begin writing up the story. She scrambles for her typewriter and gets to work, as Walter instructs Duffy to prepare the entire front page for the exclusive story about the capture of Earl Williams. As Walter gives his instructions to Duffy and Hildy gleefully types up the story, Bruce enters, flabbergasted. “Hello Bruce,” Hildy says, barely looking up from the story. Bruce tells her that he got out on bail and that he’s worried about what people in Albany will say about it. Hildy assuages his concerns about the whole thing distractedly, as Walter eggs her on to finish the story as soon as possible. Growing more and more agitated, Bruce asks Hildy where his mother went and where the $450 went. Hildy directs him to her purse and he takes the counterfeit money and the tickets to Albany. When Bruce tells her he’s going to get on the 9 o’clock train, Hildy accidentally types it into her story, and becomes annoyed. Bruce pleads with Hildy and scolds Walter for manipulating Hildy back into her job. When he tries to get her to come with him, she says, “Oh give me just a second! Can’t you see this is the biggest thing in my life?” This only upsets Bruce more, who questions why she can’t live a decent life like a human being.
Hildy types and types, and Bruce finally gives up, lamenting the fact that she never loved him at all, but offering to take her back if she changes her mind and wants to join him on the 9 o’clock train. “I’m no suburban bridge player, I’m a newspaper man!” she says, distractedly typing. After Bruce leaves, Earl peeks his head out from the desk, but Walter urges him to get back inside. Someone from the paper is coming to pick them up, but they just have to stay put for 15 minutes without getting found out. Walter goes over and taps on the desk three times, demonstrating to Earl that this is his signal. Looking up from her story, Hildy asks what happened to Bruce, as though she has been in a complete daze. Just then, a journalist arrives at the pressroom, annoyed to find it locked. When Walter asks Hildy who it is, she informs him that it is the man whose desk Earl is hiding in. They scramble to figure out what to do next.
Analysis
With the introduction of the mayor and his power struggle with the governor, all over the fate of Earl Williams, we see just how corrupt the state of politics really is. The mayor wants Earl executed and declared a killer no matter what so that he can win the election. The governor, on the other hand, wants to see that Earl is declared insane, and issues a reprieve saying just that. When the mayor tries to get in touch with the governor, he is informed that the governor is in the middle of a number of sports and leisure activities. The absence and high-brow pursuits of the governor suggest that he, like most politicians, is cut off from the actual truth of the issues which he is assumed to represent. To the mayor and the governor, Earl Williams is not a person or a case but a political instrument that can be skewed in either of their favors. We see that the mayor is a man completely devoid of a moral compass; he thinks of nothing but votes, and he is quick to try to bribe the messenger with a fancy government job in exchange for keeping quiet about the reprieve.
Unlike the politicians who are abstracted from the issues with which they are involved and alienated from their own ethical centers, Hildy, a brave journalist with ethical integrity, confronts the truth quite literally when Earl Williams comes and visits her in the pressroom at the courthouse. Shaking and anxious, Earl is evidently distressed, crazed and alone. While the politicians are seeking a way to politicize Earl’s case and the journalists are seeking a way to editorialize it, Hildy is the only person with the true knowledge of Earl Williams’ plight. She can see clearly that he is both vulnerable and insane, and as she stands in the pressroom, nervous that he might shoot her should she make the wrong move, she is forced to confront her own journalistic subject in all its rawness and vulnerability. Standing there at gunpoint, Hildy must use her wits to collect herself and defend herself against the insane criminal whose life she is trying to save with her story.
Hildy is bound for the life of the journalist in this section of the film. In spite of her desire to escape from the city with Bruce into a life of domestic bliss, she keeps running after and finding herself in the middle of the action. Her position caught “in between” the life of journalism and the life of marriage is comedically represented when Bruce calls her just as she is calling Walter to tell him that she’s found Earl Williams. Darting ridiculously between telephones, Hildy has to simultaneously convince Bruce to keep waiting for her at the jail, even though she has far exceeded her estimated 20 minute arrival time, and get Walter to come meet her at the pressroom. Rosalind Russell’s intelligent and quick acting style perfectly shows the tension between her two desires, yet it is clear to the audience that Bruce doesn’t stand a chance against the life of a journalist. No matter how dogged Hildy is in her pursuit of a regular life, she is completely incapable of peeling herself away from her journalistic vocation.
Hildy's inability to leave behind her work as a journalist is partially due to the pleasure that the work itself brings—it is an opportunity for her to demonstrate her innate ingenuity, intuition, and intelligence—but also because of the glory it promises to bring her. In order to get Hildy to stay in the pressroom with him, Walter outlines all of the ways that her having found Earl will make her famous. He says, “They'll be naming streets after you. Hildy Johnson Street. There'll be statues of ya in the park. The movies will be after ya. The radio. By tomorrow morning, I'll betcha there's a Hildy Johnson cigar. I can see the billboards now. They say, 'Light up with Hildy Johnson.’” In this monologue, Walter not only details his own admiration for Hildy’s work as a journalist, but also outlines the accolades that will accompany that work. It is notable that, for the times in which the film takes place, these accolades are ones that would typically be awarded to a man. The fame that Walter offers to Hildy is the fame and glory that would be not only exceptional for any person, but especially exceptional for a woman. In this way, the glory with which Walter seduces Hildy into staying is in direct contrast to the life with Bruce that Hildy so assuredly has said she wants. Life with Bruce would mean holing up and living a simple life in the country, with no aspirations to glory. As Walter frames it, a life with him at the paper promises an ascension to near immortality and a social recognition rarely afforded to women.
Director Howard Hawks builds the tension and suspense to delightful effect in this portion of the film. While the action of the narrative has hitherto unfolded at a slower pace, here it begins to really pick up. Earl Williams is hiding in a desk in the pressroom, the story promises to reverse government corruption once and for all, and Walter has convinced his best writer and the love of his life, Hildy Johnson, to work on the story. Cary Grant and Rosalind Russell possess a snappy and scrappy chemistry, each of them displaying a mental quickness and verve that invites the audience to take pleasure in their tasks and root for their reunion. In casting such appealing lead actors, Hawks shows the audience just how fun and exhilarating the life of a newspaperman can be. Hildy and Walter deliver rapid-fire exchanges before setting to work, Walter releasing the headline while Hildy types up the story on the typewriter. The camera zooms in on Walter as he gives his instructions about the exclusive to Duffy over the phone. This confluence of high stakes and classic screwball farce makes for a thrilling narrative.
# Torque and inertia question
1. Dec 3, 2005
### faculaganymede
A thin uniform rod of mass M and length L is positioned vertically above an anchored frictionless pivot point and then allowed to fall to the ground. With what speed does the free end of the rod strike the ground?
My answer is off by sqrt(2) from the correct answer, sqrt(3gL), and I don't understand why. Please help!!
2. Dec 3, 2005
### andrewchang
The GPE is converted into rotational KE. Use L/2 for the height, since that is how far the center of mass falls.
The moment of inertia of a rod rotating about its end is $$I = \tfrac{1}{3}ML^2$$
$$Mg\left(\tfrac{L}{2}\right) = \tfrac{1}{2}I\omega^2$$
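Filling in the remaining steps (using the tip speed $$v = \omega L$$):

$$Mg\frac{L}{2} = \frac{1}{2}\left(\frac{1}{3}ML^2\right)\omega^2 \;\Rightarrow\; \omega = \sqrt{\frac{3g}{L}}, \qquad v = \omega L = \sqrt{3gL}$$

A stray factor of $$\sqrt{2}$$ usually comes from dropping the center of mass through the full length L instead of L/2.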
3. Dec 4, 2005
### marlon
You should not have posted this here. Did you not read this thread?
regards
marlon
4. Dec 5, 2005
### faculaganymede
sorry, that was my first post. didn't know better.
5. Dec 5, 2005
### faculaganymede
Thanks Andrew!!
# Parsing external DTDs with Spring 4.x Jaxb2Marshaller
Having recently upgraded a fairly sizeable Spring project to Spring 4.1.7, I uncovered an issue in which, after the upgrade, a class that talks to an external XML API was failing with the following stack trace:
As with most exceptions thrown by large libraries such as Spring, there’s an underlying exception that’s been thrown, wrapped and rethrown. And, as with most exceptions thrown by JAXB, there are also a lot of linked exceptions, which in this case originate from a SAXParseException thrown by Xerces (a JSR 206-compliant, fully-conforming XML Schema 1.0 processor).
In this instance, the error is thrown by the PrologDispatcher (née PrologDriver), a nested class that forms part of Xerces’ XMLDocumentScannerImpl class. That the exception is being thrown in the XML prolog shows that the failure occurs before the start tag of the XML document is reached. Specifically, the following line in PrologDispatcher is responsible for throwing the exception:
The difficulty is that the code is buried way down in the inner workings of the XML parser, which in this case, wasn’t instantiated by us in the first place. Indeed, the last code that was under our direct control was a call to the getForObject() method on an autowired RestTemplate instance. Regardless, the fDisallowDoctype check in PrologDispatcher is reminiscent of the problem reported in the initial stack trace: “DOCTYPE is disallowed when the feature “http://apache.org/xml/features/disallow-doctype-decl” set to true.”
As the name suggests, the disallow-doctype-decl feature prevents an XML document from being parsed if it contains a document type definition (DTD; specified using the DOCTYPE declaration in the XML). Along with the related FEATURE_SECURE_PROCESSING option, this can prevent both XML eXternal Entity (XXE) attacks, which can expose local file content, and XML Entity Expansion (XEE) attacks, which can result in denial of service. As such, the disallow-doctype-decl feature shouldn’t be disabled without giving due consideration to the security implications.
With that said, a bit of searching around reveals a few options for how the disallow-doctype-decl feature can be configured, but they depend on having direct access to the SAXParserFactory instance, setting and unsetting System properties, setting a JRE-wide jaxp.properties file, or passing command-line flags to the JVM. None of these are particularly desirable (or easily achievable).
So the next step was to identify the calls between the Spring RestTemplate (over which we have direct control in code) and the XML parsing code that’s throwing the exception. Thankfully, in this instance, we have control of the RestTemplate bean configuration in the application context as follows:
The jaxbMarshaller bean (as referenced above) is also configured in the application context as an instance of Jaxb2Marshaller:
Having initially tried to set the “http://apache.org/xml/features/disallow-doctype-decl” option to “false” through the setMarshallerProperties() method on Jaxb2Marshaller, I subsequently noticed the setSupportDtd property, which “Indicates whether DTD parsing should be supported.” This resolves the issue and, ultimately, the fix comes down to configuring the Jaxb2Marshaller bean with the following option:
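The bean definition itself wasn't preserved in this copy of the post; assuming the XML application-context configuration style referenced above (the contextPath value below is hypothetical), the relevant setting might look like:

```xml
<bean id="jaxbMarshaller" class="org.springframework.oxm.jaxb.Jaxb2Marshaller">
    <!-- hypothetical JAXB context path; substitute the real generated-classes package -->
    <property name="contextPath" value="com.example.generated"/>
    <!-- allows parsing of documents containing a DOCTYPE declaration -->
    <property name="supportDtd" value="true"/>
</bean>
```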
Note that supportDtd can also be set to true by setting the Jaxb2Marshaller processExternalEntities property to true; the difference being that the latter both allows parsing of XML with a DOCTYPE declaration and processing of external entities referenced from the XML document. Jaxb2Marshaller ultimately uses the (logical complement of the) supportDtd option to set the “http://apache.org/xml/features/disallow-doctype-decl” feature on whichever XMLReader implementation is returned from XMLReaderFactory. By default, this is the class that the JVM-instance-wide org.xml.sax.driver property is set to, the class specified in META-INF/services/org.xml.sax.driver or, as a fallback, com.sun.org.apache.xerces.internal.parsers.SAXParser.
For security reasons, it would make sense to configure two Jaxb2Marshaller instances, one with DOCTYPE support enabled and one without, using the instance with DOCTYPE support only where it’s absolutely necessary and the XML source is trusted.
# Validating Excel Markov models in R
As part of my ongoing campaign to stop using Microsoft Excel for tasks to which it isn’t well suited, I’ve recently started validating all Excel-based Markov implementations using R. Regrettably, Excel is still the preferred “platform” for developing health economic models (hence R being relegated to validation work), but recreating Markov models in R starkly illustrates how much time could be saved if more reimbursement or health technology assessment agencies would consider models written in non-proprietary languages rather than proprietary file formats such as Excel.
As a brief re-cap, strict Markov chains are memoryless, probability-driven processes in which transitions through a defined state space, S, are driven by a transition matrix, briefly as follows:
$$\Pr(X_{n+1}=x\mid X_1=x_1, X_2=x_2, \ldots, X_n=x_n) = \Pr(X_{n+1}=x\mid X_n=x_n)$$
where $$x_1, \ldots, x_n, x \in S$$
A typical Markov modelling approach in Excel is to start with a transition matrix as a contiguous range at the top of the sheet, lay out the initial state distributions immediately below it, and model state transitions using a series of rows containing SUMPRODUCT() formulae, which multiply and sum the state distribution in the previous row with the relevant transition probability row. As an illustrative example, a simple 5-state Markov model running over 120 cycles (say, 10 years with a monthly cycle length) would result in 600 SUMPRODUCT() formulae. Despite the ease with which the cells can be populated with Excel’s AutoFill functionality, making any change to the model (e.g. adding a state) requires the whole range to be updated every time. The use of per-row summation checks against a tolerance on the order of machine epsilon (2^-52 for double precision) can highlight errors in the formulae, but this requires yet more formulae to update should the model structure change.
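Numerically, the Excel layout just described is nothing more than repeated vector–matrix multiplication. As a language-neutral illustration (the post's own examples are in R and are not reproduced in this copy; the transition matrix below is purely hypothetical), a minimal sketch in Python/NumPy:

```python
import numpy as np

# Hypothetical 5-state transition matrix (rows sum to 1); the values are
# illustrative only, not taken from the original post.
P = np.array([
    [0.90, 0.05, 0.03, 0.01, 0.01],
    [0.00, 0.85, 0.10, 0.03, 0.02],
    [0.00, 0.00, 0.80, 0.15, 0.05],
    [0.00, 0.00, 0.00, 0.70, 0.30],
    [0.00, 0.00, 0.00, 0.00, 1.00],
])
s = np.array([1.0, 0.0, 0.0, 0.0, 0.0])  # initial state distribution

# Each Excel SUMPRODUCT() row corresponds to one of these multiplications.
for _ in range(120):  # 120 monthly cycles
    s = s @ P

# Equivalent closed form via a matrix power (cf. %^% from R's expm package).
s_direct = np.array([1.0, 0.0, 0.0, 0.0, 0.0]) @ np.linalg.matrix_power(P, 120)

print(s.round(4))  # final state distribution after 120 cycles
```

The loop makes the cycle-by-cycle trace explicit; the one-line matrix-power form is the direct analogue of the compact R implementation the post goes on to describe.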
Conversely, in R, once the transition matrix and initial states have been defined, implementing a similar Markov model requires just one line of code (with the use of the Matrix exponential package):
This simply prints out the final state distribution after 120 cycles, but the entire Markov trace can be printed using R’s built-in sapply() and t() functions without a dramatic increase in the complexity of the code:
At this point in Excel, the state distributions might subsequently be used to tally up costs and quality-adjusted life expectancy (in quality-adjusted life years or QALYs) for each state. This would require another two identically-sized ranges of cells to capture cost and QALY estimates for each state, trebling the number of formulae to 1,800. In R, adding in and summing up state costs is much more straightforward:
To output the top-line health economic outcomes over the entire Markov simulation, the code can then actually be simplified (since we don’t need cycle-by-cycle results) to sum the costs and the quality-adjusted life expectancy:
And finally, with the introduction of a second transition matrix, a complete, two-arm, five-state Markov model with support for discounting and an arbitrary time horizon can be implemented in 10 lines of R as follows:
Having all of this functionality in 10 easily readable lines of code strikes an excellent balance between concision and clarity, and is definitely preferable to 1,800+ formulae in Excel with equivalent functionality (or even to a similar Markov model laid out in TreeAge). The above R code, while still slightly contrived in its simplicity, can be printed on a single sheet of paper and, crucially, can be very easily adapted and modified without needing to worry about filling ranges of cells with formulae including a mix of relative and absolute cell referencing.
While R is not necessarily the very best language for health economic modelling, given how straightforwardly a Markov model (as but one example) can be implemented in the language, it wouldn’t be a bad candidate to be supported by healthcare technology assessment agencies for the purposes of economic evaluation. More importantly, regardless of any specific merits and shortcomings of R, the adoption or acceptance of any similar language would represent an excellent first step away from the current near-ubiquitous use of proprietary modelling technologies or platforms that are, in many cases, ill-suited to the task.
# Ensuring that spreadsheets created by xlsx4j can be opened by Quick Look and Numbers
Following on from an earlier post on simplifying the addition of text-only cells to an Excel worksheet using xlsx4j, here’s a brief addendum that allows Apple’s Quick Look to preview the spreadsheet and for it to be opened in Numbers (Apple’s spreadsheet package for OS X and iOS).
In short, Excel’s requirement for a “minimum viable OOXML spreadsheet” seems to be less stringent than Apple’s and, as such, following the steps outlined in the previous post would result in a spreadsheet that could be opened in Excel, but not in Quick Look or Numbers.
Thankfully, the fix is quite straightforward; it transpires that Apple’s software can’t open the spreadsheet if the /xl/styles.xml “part” is missing from the spreadsheet. Even an empty stylesheet is sufficient:
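The code from the earlier post isn't reproduced in this copy, but the XML content of a minimal /xl/styles.xml part really is just an empty root element in the SpreadsheetML namespace:

```xml
<styleSheet xmlns="http://schemas.openxmlformats.org/spreadsheetml/2006/main"/>
```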
See the previous post for the implementation of the newCellWithInlineString method. It’s also worth noting that the getJaxbElement() method on the WorksheetPart class has been deprecated since the last post and replaced with getContents(), as above.
# Equating two lists, element by element
Say you have two lists:
listA = {a, b, c}
listB = {d, e, f}
How would you produce a listC that equates each pair of corresponding elements, i.e.
listC = {a == d, b == e, c == f} ?
I know, it's very easy, but somehow I've already spent a slightly embarrassing amount of time on this. I suspect it will involve Map, but can't seem to get the elements to pair up element by element. It seems to just Append the two together for some reason.
SetAttributes[listA, Listable] SetAttributes[listB, Listable]
seems promising given this: http://reference.wolfram.com/mathematica/ref/Listable.html
• Thread[listA == listB] – rm -rf Jun 12 '13 at 0:45
• @rm -rf thank you – Ghersic Jun 12 '13 at 0:47
• @rm-rf If listA is equal to listB, then Thread[listA == listB] will return just True. In that case, this: MapThread[Equal, {listA, listB}]. – Michael E2 Jun 12 '13 at 5:37
• Also related: (10211) (15556) – Mr.Wizard Jan 18 '15 at 15:29
As @rm -rf mentioned in the comments,
Thread[listA == listB]
accomplishes what I'd hoped. Apologies for missing this.
As Michael notes, if listA and listB have already been assigned values that happen to be equal, the query just returns True. If this is the case, use:
MapThread[Equal, {listA, listB}]
as mentioned by MichaelE2 in the comments.
• MapThread gives True/False. Any quick way to get them to 1 and 0? – Chen Stats Yu Nov 26 '17 at 21:41
• @ChenStatsYu If I understand, you would like "True" to be indicated with a 1 and "False" with a 0? MapThread[Equal, {list1, list2}] /. {True :> 1, False :> 0} achieves this result. If it is something other than True/False, a conditional statement such as MapThread[~] /. {x_ /; x != True :> 0} should work, but doesn't - I think due to the unequal operator. However, some such conditional statement dependent upon the entity being replaced should suffice, or you can use Cases[~] or Position[MapThreads[~], True] to eliminate or detect, respectively, which statements were True. – Ghersic Dec 5 '17 at 17:12
Also
Inner[Equal, listA, listB, List]
Equal @@@ Transpose[{listA, listB}]
{a == d, b == e, c == f}
• How about setting variable list to equal values list variables = {"a", "b", "c", "d", "e"}, list = {0.745581, -0.412515, 0.701289, 0.666934, -0.559174}, list3 = Thread[variables = list], then IN a give OUT a ? – SPIL Nov 14 '17 at 10:33
• Years late "thanks," kglr. I did +1 back in 2013! SPIL, I'm not exactly sure what you're asking, but if you've a reasonably short list the notebook Frontend can display it all, skipping preliminary list definitions entirely and saying {a, b, c} = {1, 2, 3} will auto-thread Set over each pair. How to do this for sufficiently large lists that they must be represented internally is another question; it is probably doable via Part and Table or maybe Map. If setting a bunch of variables to a bunch of values is what you're after anyways... You will want to remove the quotes though. – Ghersic Dec 5 '17 at 18:48
Slide 19 of 71
ericchan
Why is the standard sampling frequency 44.1 kHz?
zyx
Interesting question. Wikipedia seems to give one simple reason: the human hearing range is about 20 Hz to 20 kHz, so by the Nyquist–Shannon sampling theorem (mentioned later in class), the sampling frequency must be at least 40 kHz. (More reasons are given in the original article: https://en.wikipedia.org/wiki/44,100_Hz)
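As a small illustration (not from the slides; the frequencies are chosen arbitrarily), a tone above the Nyquist frequency fs/2 folds back into the audible band when sampled. The sketch below computes the alias of a 25 kHz tone sampled at 44.1 kHz:

```python
fs = 44_100  # sampling rate (Hz)
f = 25_000   # tone above the Nyquist frequency (fs/2 = 22_050 Hz)

# Fold the frequency back into the first Nyquist zone [0, fs/2].
alias = f % fs
if alias > fs / 2:
    alias = fs - alias

print(alias)  # 19100 -> the 25 kHz tone is heard as 19.1 kHz
```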
keenan
# The waiter at an expensive restaurant has noticed that 60%
Intern
Joined: 01 Apr 2013
Posts: 21
The waiter at an expensive restaurant has noticed that 60% [#permalink]
Updated on: 04 Apr 2013, 02:28
Difficulty: 85% (hard). Question Stats: 52% (02:15) correct, 48% (02:02) wrong, based on 690 sessions.
The waiter at an expensive restaurant has noticed that 60% of the couples order dessert and coffee. However, 20% of the couples who order dessert don't order coffee. What is the probability that the next couple the waiter seats will not order dessert?
A. 20%
B. 25%
C. 40%
D. 60%
E. 75%
_________________
Originally posted by Tagger on 03 Apr 2013, 17:06.
Last edited by Bunuel on 04 Apr 2013, 02:28, edited 1 time in total.
Renamed the topic and edited the question.
Math Expert
Joined: 02 Sep 2009
Posts: 52182
The waiter at an expensive restaurant has noticed that 60% [#permalink]
04 Apr 2013, 02:46
Tagger wrote:
The waiter at an expensive restaurant has noticed that 60% of the couples order dessert and coffee. However, 20% of the couples who order dessert don't order coffee. What is the probability that the next couple the waiter seats will not order dessert?
A. 20%
B. 25%
C. 40%
D. 60%
E. 75%
Probably the best way to solve this question is using the double set matrix, as shown below:
From above, we have that 60+0.2x=x --> x=75.
Thus, the probability that the next couple will not order dessert (yellow box) is 100-75=25.
Hope it's clear.
Attachment: Coffee and Dessert.png
_________________
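As an aside (not part of the original thread), the double-set matrix arithmetic can be sanity-checked in a couple of lines:

```python
# 60% of couples order both dessert and coffee; 20% of dessert-orderers
# skip coffee, so the remaining 80% of dessert-orderers account for that 60%.
dessert = 60 / 0.8          # % of couples ordering dessert
no_dessert = 100 - dessert  # % of couples not ordering dessert
print(dessert, no_dessert)  # 75.0 25.0 -> answer B
```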
##### General Discussion
Verbal Forum Moderator
Joined: 10 Oct 2012
Posts: 611
Re: The waiter at an expensive resturant has noticed [#permalink]
03 Apr 2013, 20:27
Tagger wrote:
The waiter at an expensive resturant has noticed that 60% of the couples order desert and coffee. However, 20% of the couples who order desert dont order coffee. what is the probability that the next couple the waiter seats will not order desert?
A.) 20%
B.) 25%
C.) 40%
D.) 60%
E.) 75%
Let the number of couples ordering only dessert be d, only coffee be c, and both be b. Given that 20% of (b+d) = d,
4d = b.
Thus, as b = 60, d = 15. The total number of couples not ordering dessert = 100 - (60+15) = 25.
B.
_________________
Director
Joined: 03 Aug 2012
Posts: 716
Concentration: General Management, General Management
GMAT 1: 630 Q47 V29
GMAT 2: 680 Q50 V32
GPA: 3.7
WE: Information Technology (Investment Banking)
Re: The waiter at an expensive restaurant has noticed that 60% [#permalink]
17 Aug 2013, 21:25
Solving for x in the figure shown below, we get the percentage of couples ordering dessert as 75%.
Couples not ordering dessert = 100 - 75 = 25%.
Attachment: 2set.JPG
Senior Manager
Joined: 10 Jul 2013
Posts: 314
Re: The waiter at an expensive restaurant has noticed that 60% [#permalink]
18 Aug 2013, 15:31
Tagger wrote:
The waiter at an expensive restaurant has noticed that 60% of the couples order dessert and coffee. However, 20% of the couples who order dessert don't order coffee. What is the probability that the next couple the waiter seats will not order dessert?
A. 20%
B. 25%
C. 40%
D. 60%
E. 75%
Let total couples = 100 and let T = number of couples who order dessert.
From the question,
60 + 20% of T = T
so T = 75, i.e. 75% order dessert.
So the probability that the next couple will not order dessert = 100 - 75 = 25%.
_________________
Asif vai.....
Manager
Joined: 25 Oct 2013
Posts: 148
Re: The waiter at an expensive restaurant has noticed that 60% [#permalink]
24 Jan 2014, 05:09
Let total number of couples be 100.
60% order Dessert & Coffee = 60 couples.
20% of those who order dessert do not order coffee => 80% of those who order dessert also order coffee, and this is given to be 60.
Hence total number of couples who order Dessert is 60*100/80 = 75.
Number of couples who do NOT order Dessert = 100-75 = 25.
The probability that next order will not have dessert is 25%.
_________________
Click on Kudos if you liked the post!
Practice makes Perfect.
Manager
Joined: 18 Aug 2014
Posts: 119
Location: Hong Kong
Schools: Mannheim
Re: The waiter at an expensive restaurant has noticed that 60% [#permalink]
12 Feb 2015, 23:24
Tagger wrote:
The waiter at an expensive restaurant has noticed that 60% of the couples order dessert and coffee. However, 20% of the couples who order dessert don't order coffee. What is the probability that the next couple the waiter seats will not order dessert?
A. 20%
B. 25%
C. 40%
D. 60%
E. 75%
I solved this pretty fast this way:
60% dessert and coffee
--> 40% nothing, dessert, or coffee
Let them be the same probability --> 40% / 3 = 13,333%
40% - 13% = 27% --> Answer has to be around this range --> B is closest
VP
Joined: 07 Dec 2014
Posts: 1151
The waiter at an expensive restaurant has noticed that 60% [#permalink]
13 Apr 2016, 17:52
let total couples=100
let d=couples who order dessert
d-60=.2d
d=75 couples
100-75=25 couples who don't order dessert
25/100=25%
Manager
Joined: 30 Dec 2015
Posts: 83
GPA: 3.92
WE: Engineering (Aerospace and Defense)
The waiter at an expensive restaurant has noticed that 60% [#permalink]
08 Oct 2016, 11:50
Tagger wrote:
The waiter at an expensive restaurant has noticed that 60% of the couples order dessert and coffee. However, 20% of the couples who order dessert don't order coffee. What is the probability that the next couple the waiter seats will not order dessert?
A. 20%
B. 25%
C. 40%
D. 60%
E. 75%
60% order (C+D) i.e Both = 60% of Total
20% of D is without C; i.e. 80% of D also orders C
80% of D = Both
80% of D = 60% of Total
$$\frac{D}{Total} =\frac{60}{80} = \frac{3}{4}$$
Hence, $$\frac{\text{No dessert}}{\text{Total}} = \frac{1}{4} = 25\%$$
_________________
If you analyze enough data, you can predict the future.....its calculating probability, nothing more!
Retired Moderator
Joined: 05 Jul 2006
Posts: 1722
Re: The waiter at an expensive restaurant has noticed that 60% [#permalink]
### Show Tags
24 Dec 2016, 12:28
Bunuel wrote:
Tagger wrote:
The waiter at an expensive restaurant has noticed that 60% of the couples order dessert and coffee. However, 20% of the couples who order dessert don't order coffee. What is the probability that the next couple the waiter seats will not order dessert?
A. 20%
B. 25%
C. 40%
D. 60%
E. 75%
Probably the best way to solve this question is using the double set matrix, as shown below:
Attachment:
Coffee and Dessert.png
From above, we have that 60+0.2x=x --> x=75.
Thus, the probability that the next couple will not order dessert (yellow box) is 100-75=25.
Hope it's clear.
what about (Neither) those who didnt order coffee nor desert
Math Expert
Joined: 02 Sep 2009
Posts: 52182
Re: The waiter at an expensive restaurant has noticed that 60% [#permalink]
### Show Tags
25 Dec 2016, 00:53
yezz wrote:
Bunuel wrote:
Tagger wrote:
The waiter at an expensive restaurant has noticed that 60% of the couples order dessert and coffee. However, 20% of the couples who order dessert don't order coffee. What is the probability that the next couple the waiter seats will not order dessert?
A. 20%
B. 25%
C. 40%
D. 60%
E. 75%
Probably the best way to solve this question is using the double set matrix, as shown below:
From above, we have that 60+0.2x=x --> x=75.
Thus, the probability that the next couple will not order dessert (yellow box) is 100-75=25.
Hope it's clear.
what about (Neither) those who didnt order coffee nor desert
To get the probability that the next couple will not order dessert we need the percentage of those who do not order dessert which is 25. Those 25% include Coffee/No Dessert and No Coffee/No Dessert (Neither).
_________________
Intern
Joined: 10 Jan 2017
Posts: 1
Re: The waiter at an expensive restaurant has noticed that 60% [#permalink]
### Show Tags
07 Mar 2017, 07:50
Why shouldn't it be solved as
60 + 20 = x.
Posted from my mobile device
Math Expert
Joined: 02 Sep 2009
Posts: 52182
Re: The waiter at an expensive restaurant has noticed that 60% [#permalink]
### Show Tags
07 Mar 2017, 08:20
Nikita16 wrote:
Why shouldn't it be solved as
60 + 20 = x.
Posted from my mobile device
We are given that 20% of the couples who order dessert don't order coffee. We denoted those who order dessert by x, thus those who order dessert but don't order coffee is 20% of that, which is 0.2x.
_________________
Target Test Prep Representative
Status: Founder & CEO
Affiliations: Target Test Prep
Joined: 14 Oct 2015
Posts: 4527
Location: United States (CA)
Re: The waiter at an expensive restaurant has noticed that 60% [#permalink]
### Show Tags
14 Mar 2018, 15:07
Tagger wrote:
The waiter at an expensive restaurant has noticed that 60% of the couples order dessert and coffee. However, 20% of the couples who order dessert don't order coffee. What is the probability that the next couple the waiter seats will not order dessert?
A. 20%
B. 25%
C. 40%
D. 60%
E. 75%
You can use the following equation:
Total = Dessert only + Coffee only + Both + Neither
Instead of using percents, let’s use numbers. If we let the total number of customers be 100, then we see that 60 of them will order dessert and coffee:
100 = D + C + 60 + N
Since we have let D = the number of couples ordering Dessert only, we know that the total number of couples ordering Dessert is (D + 60), which is “Dessert only” plus “Both”. Since 20% of the couples who order dessert don't order coffee, “Dessert only” is 20% of the total of “Dessert only” and “Both”; that is,
D = 0.2(D + 60)
5D = D + 60
4D = 60
D = 15
Substituting, we have:
100 = 15 + C + 60 + N
100 = 75 + C + N
25 = C + N
Since those who don’t order dessert are the total of “Coffee only” and “Neither,” we have 25% of the couples who don’t order dessert.
_________________
Scott Woodbury-Stewart
Founder and CEO
GMAT Quant Self-Study Course
500+ lessons 3000+ practice problems 800+ HD solutions
Re: The waiter at an expensive restaurant has noticed that 60% [#permalink] 14 Mar 2018, 15:07
|
{}
|
# Band MO of Sodium Filling of electrons confusion
I wanted to ask a question about metallic bond - band electron theory.
Consider the diagram of Sodium below:
Circled are two orbitals, each containing $$2$$ electrons, which combine using LCAO to give a set of bonding and antibonding MOs.
However, the $$4$$ electrons involved in this bonding appear to only give $$3$$ electrons in the bonding and antibonding MO's produced.
This confuses me. Surely using $$4$$ electrons, I expect all $$4$$ of these electrons to also be present in the rectangle drawn.
Why are there only 3 electrons shown? Is one electron being used for the next addition of AOs?
From left to right, it shows 1, 2, 3, and 4 atoms of sodium, with a total of 1, 2, 3, and 4 valence electrons. So this diagram is different from others which have the uncombined AO's on the left and right, and the combined MO's in the middle. Here, we are adding more and more atoms to go from discrete energy levels (left 4 columns, up to four atoms) to a band structure (rightmost column, many atoms in bulk metal).
The only column that shows the AO is the left-most because for that one, there is only one atom, which means that there can't be any MOs.
Maybe this diagram makes more sense to you (the full and empty circles on the right show how the AO's are combined to make the bonding, non-bonding and antibonding MO's):
• Oh right! The above diagram makes sense. So, one question - why are there then the dotted lines between each MO following on from the leftmost AO towards the right? – vik1245 Jun 14 at 1:04
• @vik1245 You could imagine adding another and another AO one by one, or you could take the image I posted and rearrange it to get the other one. In any case, all the MO's are based on the one AO that is shown on the left. – Karsten Theis Jun 14 at 1:06
• Oh right so as I'm moving one to the right, I'm adding simply one AO to the right to get a new MO set, similar to what I can get if I simply splice your diagram and rearrange it to the image I posted? I think I get it then if I'm correct! Just confirm that please! – vik1245 Jun 14 at 1:23
• @vik1245 yup confirmed – Karsten Theis Jun 14 at 2:32
• Ticked! By the way in your diagram for $\ce{Na3}$ the middle energy set of orbitals, is that simply missing a black dot by mistake? – vik1245 Jun 14 at 11:16
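The level-counting described in the answer (n atoms give n MO levels, which merge into a band for large n) can be sketched numerically with a nearest-neighbour Hückel model; the alpha and beta values below are arbitrary illustration, not sodium-specific parameters:

```python
import numpy as np

def huckel_levels(n, alpha=0.0, beta=-1.0):
    """MO energies for a linear chain of n identical AOs
    (nearest-neighbour Hueckel / tight-binding model)."""
    h = (np.diag([alpha] * n)
         + np.diag([beta] * (n - 1), 1)
         + np.diag([beta] * (n - 1), -1))
    return np.linalg.eigvalsh(h)

# 1, 2, 3, 4 atoms give 1, 2, 3, 4 MO levels; a large n gives a dense band
for n in (1, 2, 3, 4, 50):
    print(n, len(huckel_levels(n)))
```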
|
{}
|
Home OALib Journal OALib PrePrints Submit Ranking News My Lib FAQ About Us Follow Us+
Title Keywords Abstract Author All
Search Results: 1 - 10 of 100 matches for " "
Page 1 /100 Display every page 5 10 20 Item
Physics , 2005, DOI: 10.1111/j.1365-2966.2005.09629.x Abstract: In this paper, we discuss general relativistic, self-gravitating and uniformly rotating perfect fluid bodies with a toroidal topology (without central object). For the equations of state describing the fluid matter we consider polytropic as well as completely degenerate, perfect Fermi gas models. We find that the corresponding configurations possess similar properties to the homogeneous relativistic Dyson rings. On the one hand, there exists no limit to the mass for a given maximal mass-density inside the body. On the other hand, each model permits a quasistationary transition to the extreme Kerr black hole.
Physics , 2008, Abstract: In this paper, we describe an analytical method for treating uniformly rotating homogeneous rings without a central body in Newtonian gravity. We employ series expansions about the thin ring limit and use the fact that in this limit the cross-section of the ring tends to a circle. The coefficients can in principle be determined up to an arbitrary order. Results are presented here to the 20th order and compared with numerical results.
Physics , 2005, DOI: 10.1103/PhysRevD.72.024019 Abstract: Highly accurate numerical solutions to the problem of Black Holes surrounded by uniformly rotating rings in axially symmetric, stationary spacetimes are presented. The numerical methods developed to handle the problem are discussed in some detail. Related Newtonian problems are described and numerical results provided, which show that configurations can reach an inner mass-shedding limit as the mass of the central object increases. Exemplary results for the full relativistic problem for rings of constant density are given and the deformation of the event horizon due to the presence of the ring is demonstrated. Finally, we provide an example of a system for which the angular momentum of the central Black Hole divided by the square of its mass exceeds one.
Physics , 2010, DOI: 10.1111/j.1365-2966.2010.17241.x Abstract: In this paper uniformly rotating relativistic rings are investigated analytically utilizing two different approximations simultaneously: (1) an expansion about the thin ring limit (the cross-section is small compared with the size of the whole ring) (2) post-Newtonian expansions. The analytic results for rings are compared with numerical solutions.
Physics , 2008, DOI: 10.1111/j.1365-2966.2008.13540.x Abstract: An iterative method is presented for solving the problem of a uniformly rotating, self-gravitating ring without a central body in Newtonian gravity by expanding about the thin ring limit. Using this method, a simple formula relating mass to the integrated pressure is derived to the leading order for a general equation of state. For polytropes with the index n=1, analytic coefficients of the iterative approach are determined up to the third order. Analogous coefficients are computed numerically for other polytropes. Our solutions are compared with those generated by highly accurate numerical methods to test their accuracy.
Robert G. Deupree Physics , 2011, DOI: 10.1088/0004-637X/735/2/69 Abstract: Zero age main sequence models of uniformly rotating stars have been computed for ten masses between 1.625 and 8 M_\odot and 21 rotation rates from zero to nearly critical rotation. The surface shape is used to distinguish rotation rather than the surface equatorial velocity or the rotation rate. Using the surface shape is close to, but not quite equivalent to using the ratio of the rotation rate to the critical rotation rate. Using constant shape as the rotation variable means that it and the mass are separable, something that is not true for either the rotation rate or surface equatorial velocity. Thus a number of properties, including the ratio of the effective temperature anywhere on the surface to the equatorial temperature, are nearly independent of the mass of the model, as long as the rotation rate changes in such a way to keep the surface shape constant.
Mathematics , 2011, DOI: 10.1016/j.jde.2012.04.011 Abstract: In this paper we classify the free boundary associated to equilibrium configurations of compressible, self-gravitating fluid masses, rotating with constant angular velocity. The equilibrium configurations are all critical points of an associated functional and not necessarily minimizers. Our methods also apply to alternative models in the literature where the angular momentum per unit mass is prescribed. The typical physical model our results apply to is that of uniformly rotating white dwarf stars.
Physics , 1995, Abstract: We explore a series expansion method to calculate the modes of oscillations for a variety of uniformly rotating finite disks, either with or without a dark halo. Since all models have the same potential, this survey focuses on the role of the distribution function in stability analyses. We show that the stability behaviour is greatly influenced by the structure of the unperturbed distribution, particularly by its energy dependence. In addition we find that uniformly rotating disks with a halo in general can feature spiral-like instabilities.
Physics , 2010, DOI: 10.1088/0004-637X/728/1/12 Abstract: We study the dimensionless spin parameter $j (= c J/ (G M^2))$ of uniformly rotating neutron stars and quark stars in general relativity. We show numerically that the maximum value of the spin parameter of a neutron star rotating at the Keplerian frequency is $j_{\rm max} \sim 0.7$ for a wide class of realistic equations of state. This upper bound is insensitive to the mass of the neutron star if the mass of the star is larger than about $1 M_\odot$. On the other hand, the spin parameter of a quark star modeled by the MIT bag model can be larger than unity and does not have a universal upper bound. Its value also depends strongly on the bag constant and the mass of the star. Astrophysical implications of our finding will be discussed.
Physics , 2009, DOI: 10.1051/0004-6361/200811605 Abstract: We calculate Keplerian (mass shedding) configurations of rigidly rotating neutron stars and quark stars with crusts. We check the validity of empirical formula for Keplerian frequency, f_K, proposed by Lattimer & Prakash, f_K(M)=C (M/M_sun)^1/2 (R/10km)^-3/2, where M is the (gravitational) mass of Keplerian configuration, R is the (circumferential) radius of the non-rotating configuration of the same gravitational mass, and C = 1.04 kHz. Numerical calculations are performed using precise 2-D codes based on the multi-domain spectral methods. We use a representative set of equations of state (EOSs) of neutron stars and quark stars. We show that the empirical formula for f_K(M) holds within a few percent for neutron stars with realistic EOSs, provided 0.5 M_sun < M < 0.9 M_max,stat, where M_max,stat is the maximum allowable mass of non-rotating neutron stars for an EOS, and C=C_NS=1.08 kHz. Similar precision is obtained for quark stars with 0.5 M_sun < M < 0.9 M_max,stat. For maximal crust masses we obtain C_QS = 1.15 kHz, and the value of C_QS is not very sensitive to the crust mass. All our C's are significantly larger than the analytic value from the relativistic Roche model, C_Roche = 1.00 kHz. For 0.5 M_sun < M < 0.9 M_max,stat, the equatorial radius of Keplerian configuration of mass M, R_K(M), is, to a very good approximation, proportional to the radius of the non-rotating star of the same mass, R_K(M) = aR(M), with a_NS \approx a_QS \approx 1.44. The value of a_QS is very weakly dependent on the mass of the crust of the quark star. Both a's are smaller than the analytic value a_Roche = 1.5 from the relativistic Roche model.
Page 1 /100 Display every page 5 10 20 Item
|
{}
|
Surender Raja - 1 year ago 85
Scala Question
# Scala Currying syntax explanation
I am new to scala .
I want to understand the syntax of this below code
object PlainSimple {
  def main(args: Array[String]) {
    val m = add(5)
    println(m(3))
  }
  def add(x: Int): Int => Int = {
    y => y + x
  }
}
My question is where are we saying that the add function is returning another function?
What does
Int=>Int
mean?
Inside the add function what is called
y
? Why are we using it without declaring it anywhere?
What do need to do if want multiples line inside add method?
My question is Where are we saying that the add function is returning another function?
What is Int=>Int means?
T => U as a type means "a function that takes a T and returns an U". So when we write : Int => Int at the end of the function signature, we say "this function returns a function from Int to Int".
Inside the add function what is called y ? Why are we using it without declaring it anywhere?
As an expression x => body (or (x,y,z) => body for multiple parameters) is a function literal, that is it defines an anonymous function whose parameter is named x and whose body is body. So we are declaring the parameter x by writing its name on the left side of the =>.
What do need to do if want multiples line inside add method?
You can put anything to the right of => that you could also put to the right of = when defining a method using def. So if you want your function's body to consist of multiple statements, you can use braces just like with a regular method definition:
y => {
val z = y+x
z*2
}
|
{}
|
Contents
# Strategy Library
## Intraday Arbitrage Between Index ETFs
### Abstract
In this tutorial, we implement an intraday arbitrage strategy that capitalizes on deviations between two closely correlated index ETFs. Even though at times both ETFs may hold different constituents and different weights of securities while tracking the index, they are both highly correlated and extremely liquid. Researchers have shown these two properties are essential to an arbitrage system's success. The algorithm we implement here is inspired by the work of Kakushadze and Serur (2018) and Marshall, Nguyen, and Visaltanachoti (2010).
### Background
Marshall et al (2010) define an arbitrage opportunity as when the bid price of ETF A (B) diverges far enough from the ask price of ETF B (A) such that their quotient reaches a threshold. In their paper, an arbitrage opportunity is only acted upon when the threshold is satisfied for 15 seconds. When these criteria are met, the algorithm enters the arbitrage trade by going long ETF B (A) and short ETF A (B). When the spread reverts back to where the bid of ETF B (A) >= the ask of ETF A (B) for 15 seconds, the positions are liquidated. An overview of the trade process is illustrated in the image below.
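The entry rule can be sketched in a few lines of plain Python; the quote series, threshold, and delay below are illustrative stand-ins, not values from the paper:

```python
def arbitrage_signals(bids_a, asks_b, threshold=1.0002, delay=15):
    """Indices at which an entry triggers: the bid of A must stay at or above
    threshold * ask of B for `delay` consecutive one-second quotes."""
    run, signals = 0, []
    for t, (bid_a, ask_b) in enumerate(zip(bids_a, asks_b)):
        if bid_a >= threshold * ask_b:
            run += 1
            if run == delay:
                signals.append(t)  # sell A at the bid, buy B at the ask
        else:
            run = 0
    return signals

# A bid that stays 3 cents above a $100 ask triggers once the run reaches 15
print(arbitrage_signals([100.03] * 20, [100.0] * 20))  # [14]
```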
### Method
#### Universe Selection
We implement a manual universe selection model that includes our two ETFs, SPY and IVV. The attached research notebook finds the correlation of daily returns to be >0.99.
tickers = ['IVV', 'SPY']
symbols = [ Symbol.Create(t, SecurityType.Equity, Market.USA) for t in tickers ]
self.SetUniverseSelection( ManualUniverseSelectionModel(symbols) )
Plotting the ratio of the security prices shows its trending behavior.
Without adjusting this ratio over time, an arbitrage system would be stuck in a single trade for the majority of the backtest. To resolve this, we subtract a trailing mean from each data point.
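A minimal sketch of that adjustment (a hypothetical helper, not the tutorial's exact code), using a trailing mean that shortens near the start of the series:

```python
import numpy as np

def detrended_ratio(prices_a, prices_b, window=400):
    """Price ratio minus its trailing mean (window is truncated at the start)."""
    ratio = np.asarray(prices_a, dtype=float) / np.asarray(prices_b, dtype=float)
    trailing = np.array([ratio[max(0, i - window + 1):i + 1].mean()
                         for i in range(len(ratio))])
    return ratio - trailing
```

A perfectly constant ratio detrends to zero everywhere, so only deviations from the recent mean remain as trading signals.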
Both of the above plots can be reproduced in the attached research notebook. During backtesting, this adjustment is done during trading by setting up a QuoteBarConsolidator for each security in our universe. On each new consolidated QuoteBar, we update the trailing window of L1 data, then calculate the latest spread adjustment values.
# In OnSecuritiesChanged
for symbol in self.symbols:
    self.consolidators[symbol] = QuoteBarConsolidator(1)
    self.consolidators[symbol].DataConsolidated += self.CustomDailyHandler

def CustomDailyHandler(self, sender, consolidated):
    # Add the new data point to the history while removing expired history
    self.history[consolidated.Symbol]['bids'] = np.append(
        self.history[consolidated.Symbol]['bids'][1:], consolidated.Bid.Close)
    for i in range(2):
        numerator_history = self.history[self.symbols[i]]['bids']
#### Alpha Construction
The ArbitrageAlphaModel monitors the intraday bid and ask prices of the securities in the universe. In the constructor, we can specify the model parameters. In this tutorial, we select a shorter window that an arbitrage opportunity must be active for before we act on it, setting order_delay to 3.
class ArbitrageAlphaModel(AlphaModel):
    symbols = []          # IVV, SPY
    entry_timer = [0, 0]
    exit_timer = [0, 0]
    long_side = -1
    consolidators = {}
    history = {}

    def __init__(self, order_delay = 3, profit_pct_threshold = 0.02, window_size = 400):
        self.order_delay = order_delay
        self.pct_threshold = profit_pct_threshold / 100
        self.window_size = window_size
To emit insights, we check if either side of the arbitrage strategy warrants an entry. If no new entries are to be made, the algorithm then looks to exit any current positions. With this design, we can flip our long/short bias without first flattening our position. We use a practically-infinite insight duration, as we do not know how long the algorithm will be in an arbitrage trade.
# Search for entries
for i in range(2):
    self.entry_timer[i] += 1
    if self.entry_timer[i] == self.order_delay:
        self.exit_timer = [0, 0]
        if self.long_side == i:
            return []
        self.long_side = i
        return [Insight.Price(self.symbols[i], timedelta(days=9999), InsightDirection.Up),
                Insight.Price(self.symbols[abs(i-1)], timedelta(days=9999), InsightDirection.Down)]
    else:
        return []
    self.entry_timer[i] = 0

# Search for an exit
if self.long_side >= 0:  # In a position
    self.exit_timer[self.long_side] += 1
    if self.exit_timer[self.long_side] == self.order_delay:  # Exit signal lasted long enough
        self.exit_timer[self.long_side] = 0
        i = self.long_side
        self.long_side = -1
        return [Insight.Price(self.symbols[i], timedelta(days=9999), InsightDirection.Flat),
                Insight.Price(self.symbols[abs(i-1)], timedelta(days=9999), InsightDirection.Flat)]
    else:
        return []
return []
#### Portfolio Construction & Trade Execution
Following the guidelines of Alpha Streams and the Quant League competition, we utilize the EqualWeightingPortfolioConstructionModel and the ImmediateExecutionModel.
### Relative Performance
We analyze the performance of this strategy by comparing it to the S&P 500 ETF benchmark, SPY. We notice that the strategy has a lower Sharpe ratio over all of our testing periods than the benchmark, except for the Fall 2015 crisis where it achieved a 2.8 Sharpe ratio. The strategy also has a lower annual standard deviation of returns when compared to the SPY, implying more consistent returns over time. A breakdown of the strategy's performance across all our testing periods is displayed in the table below.
Period Name     Start Date   End Date     Series      Sharpe   ASD
Backtest        8/11/2015    8/11/2020    Strategy    -0.447   0.053
                                          Benchmark    0.732   0.192
Fall 2015       8/10/2015    10/10/2015   Strategy     2.837   0.225
                                          Benchmark   -0.724   0.251
2020 Crash      2/19/2020    3/23/2020    Strategy    -4.196   0.209
                                          Benchmark   -1.243   0.793
2020 Recovery   3/23/2020    6/8/2020     Strategy    -3.443   0.013
                                          Benchmark   13.761   0.386
The lack of performance for this arbitrage strategy is mostly attributed to the fees it incurs while trading. This is common for an intraday arbitrage strategy, but we discuss ways to reduce these fees in the conclusion of this tutorial. After removing the costs of commissions, crossing the spread, and slippage, the strategy outperforms the SPY over the entire backtesting period. Without these costs, the strategy generates a 1.09 Sharpe ratio while the SPY generates a 0.732 Sharpe ratio. See the backtest performance without fees below.
### Market & Competition Qualification
Although this strategy passes several of the metrics required for Alpha Streams and the Quant League competition, it requires further work to pass the following requirements:
• Profitable
• PSR >= 80%
• Max drawdown duration <= 6 months
• Insights contain the following properties: Symbol, Duration, Direction, and Weight
• Alphas need to place at least 10 trades per month for the majority of the backtest
The algorithm currently places trades during 12 unique months throughout the backtest. Since the backtest spans across 61 months, it places trades through a minority of the backtest.
### Conclusion
The intraday arbitrage strategy we built and tested throughout this tutorial underperforms the SPY benchmark in terms of Sharpe ratio when including trading costs. Without these costs, we found the strategy outperforms the SPY in terms of Sharpe ratio. In our implementation, we specified the alpha model to initiate trading when at least a 0.02% profit threshold is reached for 3 seconds. Both of these parameters are set lower than the strategy examined in Marshall et al (2010) for demonstration purposes. Increasing the profit threshold will lead to more profitable, but fewer, trades that may overcome the cost of trading. We leave this area of study for future research. Additional areas of future research include increasing the resolution of data from second to tick and incorporating an execution model that utilizes limit orders to reduce fees.
### References
1. Marshall, Ben R. and Nguyen, Nhut (Nick) Hoang and Visaltanachoti, Nuttawat, ETF Arbitrage: Intraday Evidence (November 16, 2010). Online copy
2. Kakushadze, Zura and Serur, Juan Andrés, 151 Trading Strategies (August 17, 2018). Z. Kakushadze and J.A. Serur. 151 Trading Strategies. Cham, Switzerland: Palgrave Macmillan, an imprint of Springer Nature, 1st Edition (2018), XX, 480 pp; ISBN 978-3-030-02791-9. Online copy
|
{}
|
## superdry Group Title what is 3/5 x 1/7 one year ago
• This Question is Open
1. jiteshmeghwal9 Group Title
$\dfrac{3 \times1}{5 \times 7}=?$
2. kugler97 Group Title
it is 3/35
3. kugler97 Group Title
i am pretty sure you would take 3 over 5 then multiply that by 1 over 7 and then you get 3/35!! ((:
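For anyone wanting to check the arithmetic, Python's fractions module multiplies the numerators and denominators exactly as described:

```python
from fractions import Fraction

product = Fraction(3, 5) * Fraction(1, 7)  # (3*1) / (5*7)
print(product)  # 3/35
```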
|
{}
|
Question
# A single constant force F= (3i+5j) N acts on a 4.00-kg particle.
Other
A single constant force $$F= (3i+5j)$$ N acts on a 4.00-kg particle. (a) Calculate the work done by this force if the particle moves from the origin to the point having the vector position $$r = (2i-3j) m$$. Does this result depend on the path? Explain. (b) What is the speed of the particle at r if its speed at the origin is 4.00 m/s? (c) What is the change in the potential energy?
2020-10-24
Force is given as $$F=(3i+5j)\ \text{N}$$, mass $$m = 4\ \text{kg}$$.
a) Work done: $$W = F\cdot r = (3i+5j)\cdot(2i-3j) = 6 - 15 = -9\ \text{J}$$
The result does not depend on the path: the force is constant, hence conservative, and no frictional force is involved.
b) Speed at r if the speed at the origin is 4 m/s:
Since the acceleration and the displacement are not parallel here, scalar kinematics does not apply; use the work-energy theorem instead:
$$\frac{1}{2}mv^{2} = \frac{1}{2}mu^{2} + W$$
$$v^{2} = u^{2} + \frac{2W}{m} = 4^{2} + \frac{2(-9)}{4} = 16 - 4.5 = 11.5$$
$$v \approx 3.39\ \text{m/s}$$
c) For a conservative force, the change in potential energy is the negative of the work done:
$$\Delta U = -W = +9\ \text{J}$$
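A quick numerical check of the work and the work-energy theorem (a minimal sketch with the given values hard-coded):

```python
import math

F = (3.0, 5.0)   # force in N
r = (2.0, -3.0)  # displacement in m
m, u = 4.0, 4.0  # mass in kg, initial speed in m/s

W = F[0] * r[0] + F[1] * r[1]    # work = F . r
v = math.sqrt(u**2 + 2 * W / m)  # work-energy theorem
dU = -W                          # change in potential energy

print(W, round(v, 2), dU)  # -9.0 3.39 9.0
```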
|
{}
|
Suppose that you initially calculated a correlation statistic for random variables $Y$ and $X$, $\rho = 0.25$. Then, you realize that you omitted an observation:
$$(y=5,\ x=-5)$$
After including this observation and calculating the standard deviations for the two random variables, you find that those values did not change.
What do you predict will happen to the correlation statistic value as a result of adding the observation?
A
The correlation statistic value will not change.
B
The correlation statistic value will decrease.
C
The correlation statistic value will increase.
D
The correlation statistic value will become statistically insignificant from zero.
E
There is not enough information to make the determination.
|
{}
|
# Lesson 3
The Unit Circle (Part 1)
### Lesson Narrative
The goal of this lesson is for students to begin their exploration of the unit circle, defined as a circle of radius 1 centered at the origin, which they continue in the following lesson and use throughout the remainder of the unit. They focus first on the symmetric nature of the $$(x,y)$$ coordinates of points on the unit circle and then learn that these points can also be defined by their angle of rotation, which leads to working with radian angle measurements.
This lesson builds on the geometry course in which students learned that all circles are similar and examined arcs intercepted by given angles. That work led to defining the radian measure of an angle as the ratio of the arc length traveled to the radius of the circle. This means that 1 radian is the angle for which the length of the arc it intercepts on a circle of radius $$r$$ is $$r$$. Students also learned that by this definition, and because $$\pi$$ is the ratio of the circumference of the circle to its diameter, there are $$2\pi$$ radians in a full circle. This lesson includes an optional activity if students need practice recalling the definition of radian measurement.
Students look for regularity in repeated reasoning as they apply radian measure to examine the distance a wheel travels as it rolls for several angles, reasoning that the measure of the angle of revolution corresponds to the distance traveled when the radius is 1 (MP8).
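The regularity students generalize here, that distance rolled equals the rotation angle in radians times the radius, can be sketched as:

```python
import math

def distance_rolled(angle_rad, radius=1.0):
    """A wheel rolling without slipping travels arc length = angle * radius."""
    return angle_rad * radius

# one full revolution of a unit wheel covers 2*pi units of distance
print(distance_rolled(2 * math.pi))
```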
### Learning Goals
Teacher Facing
• Calculate the radian angle measurement a point on a wheel rotates through by relating it to the distance traveled by the wheel.
• Describe characteristics of points on a unit circle.
### Student Facing
• Let’s learn about the unit circle.
### Required Preparation
Acquire 1 round object per student if using the optional activity Measuring Circles.
Be prepared to display applets for all to see during the activity syntheses of the activities “Measuring Circles” and “Around a Bike Wheel.”
Devices are required for the digital version of the extension in “Around a Bike Wheel,” ideally 1 per student.
### Student Facing
• I understand that a radian angle measurement is the ratio of the arc length to the radius of the circle.
• I understand that points on a unit circle can be defined by their coordinates or by an angle of rotation.
Building On
Building Towards
### Glossary Entries
• unit circle
The circle in the coordinate plane with radius 1 and center the origin.
|
{}
|
#### Chapter 7 Differential Calculus
Section 7.1 Derivative of a Function
# 7.1.1 Introduction
A family is going on holiday by car. The car is moving through roadworks with a velocity of $60\,\mathrm{km/h}$. The sign at the end of the roadworks says that the speed limit is, as of now, $120\,\mathrm{km/h}$. Even though the car driver puts the pedal to the metal, the velocity of the car will not jump up immediately but increase as a function of time. If the velocity increases from $60\,\mathrm{km/h}$ to $120\,\mathrm{km/h}$ in 5 seconds at a constant rate of change, then the acceleration (= change of velocity per time) equals this (in this case) constant rate of velocity change: the acceleration is the quotient of the velocity change and the time required for this change. Thus, its value is here $12$ kilometres per hour per second. In reality, the velocity of the car will not increase at a constant rate but at a time-dependent rate. If the velocity $v$ is described as a function of time $t$, then the acceleration is the slope of this function. This does not depend on whether this slope is constant (in time) or not. In other words: the acceleration is the derivative of the velocity function $v$ with respect to the time $t$.
Similar relations can also be found in other technical fields such as, for example, the calculation of internal forces acting in steel frames of buildings, the forecast of atmospheric and oceanic currents, or in the modelling of financial markets, which is currently highly relevant.
This chapter reviews the basic ideas underlying these calculations, i.e. it deals with differential calculus. In other words: we will take derivatives of functions to find their slopes or rates of change. Even though these calculations will be carried out here in a strictly mathematical way, their motivation is not purely mathematical. Derivatives, interpreted as rates of change of different functions, play an important role in many scientific fields and are often investigated as special quantities.
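As a minimal numerical sketch of this idea (a hypothetical difference-quotient helper, not part of the text), the roadworks example gives a constant slope of 12 (km/h) per second:

```python
def derivative(f, t, h=1e-6):
    """Symmetric difference quotient approximating f'(t)."""
    return (f(t + h) - f(t - h)) / (2 * h)

v = lambda t: 60 + 12 * t  # velocity in km/h, t in seconds of acceleration
a = derivative(v, 2.0)     # slope of v, ~12 (km/h)/s at any t
print(a)
```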
|
{}
|
# What is a delta potential?
I think that a helpful way to think of the delta potential (and maybe of the delta function in general) is through a limit process: we start with a finite square well of width $$a$$ and depth $$U=\lambda/a$$, and ask ourselves "what happens when we take $$a\to 0^+$$?" This can happen when we are interested, for example, in scales that are much larger than the width of the well, so we want to somehow make an approximation to zeroth order in $$a$$, but still keep the effects of the well. A nice thing here is that there are many different limit processes that all lead to the same result, which is a very general expression of the potential as $$\lambda\delta(x)$$.
Note, that even though the width of the delta function is zero, it still has effect as it has non-zero measure $$\int\! dx \delta(x) = 1$$, which is quite obvious from the limit process that we introduced. Because we make sure to make it deep, we still keep its effects on whatever comes near it.
A particle can be "trapped" in the same sense as in any finite potential well: it has some probability to be found outside the well, since its wave function decays exponentially there for $$E<0$$, but it has a higher probability to be found near $$x=0$$ than far from it, in contrast to a free particle, which spreads throughout the entire volume.
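This limit process can be sketched numerically. Assuming units with $$\hbar = m = 1$$ and $$\lambda = 1$$, the even bound state of a square well of width $$a$$ and depth $$U = \lambda/a$$ satisfies the standard matching condition $$k\tan(ka/2)=\kappa$$ with $$k^2+\kappa^2=2U$$; solving it for shrinking $$a$$ shows the energy approaching the delta-well value $$-m\lambda^2/2\hbar^2 = -0.5$$:

```python
import math

def bound_state_energy(a, lam=1.0):
    """Ground (even) bound state of a finite square well of width a and
    depth U = lam / a, in units hbar = m = 1.  The even-state matching
    condition is k * tan(k*a/2) = kappa with k**2 + kappa**2 = 2*U."""
    U = lam / a
    kmax = math.sqrt(2 * U)

    def f(kappa):
        k = math.sqrt(max(2 * U - kappa**2, 0.0))
        return k * math.tan(k * a / 2) - kappa

    # f is positive near kappa = 0 and negative near kappa = kmax,
    # so plain bisection brackets the unique root on the first branch.
    lo, hi = 1e-12, kmax - 1e-12
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if f(mid) > 0:
            lo = mid
        else:
            hi = mid
    kappa = 0.5 * (lo + hi)
    return -kappa**2 / 2

for a in (1.0, 0.1, 0.01, 0.001):
    print(a, bound_state_energy(a))  # approaches -0.5 as a -> 0
```

For $$a = 10^{-3}$$ the well's ground-state energy already agrees with the delta-well result to about one part in a thousand.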
|
{}
|
# Why can induction cooktops not use aluminium or copper utensils?
Why can induction cooktops not use aluminium or copper utensils, when both materials are conductors and hence conducive to eddy currents, which are used by these devices to cook?
Edit 1 (10 Nov 2016): The following argument is common to all the answers, including a deleted one. Since it does not seem to be correct, it is important that this matter is resolved. It is being argued that ferromagnetic materials have a thinner skin depth due to their high permeability, leading to eddy currents being confined to a thinner surface layer, which will obviously have higher resistance and hence higher power dissipation following P = I^2R. However, to me the argument seems incorrect. It is the voltage which is being induced. Because of the skin depth, the induced electric field will be confined to a thinner surface layer; this should lead to lower power dissipation in ferromagnetic materials due to P = V^2/R. If my line of argument is correct, then the answer to the original question lies somewhere else.
• If you change the resistance of the pan this also changes the voltage. It has to be seen as if it were the secondary circuit in a transformer, not as a simple resistance with a fixed voltage. Nov 10, 2016 at 18:04
In principle all metals can be used in induction cook tops.
However, iron-based pans (or at least bases) work much better, because of the magnetic properties of the metal.
When an AC current flows in a conductor, the current distribution is not uniform throughout the volume. Instead, the current tends to be confined to a layer near the surface, through something called the skin effect. This happens because the currents induce magnetic fields in the conductor, which in turn induce currents that try to counter the flux change (a variation of Lenz's law is at play, basically). Because of this, the effect is stronger when the magnetic field induced by a given current is larger. The thickness of the conduction layer can be calculated as
$$\delta = \sqrt{\frac{2\rho}{\omega\mu}}$$
where
$\delta$ = skin depth
$\rho$ = resistivity
$\omega$ = angular frequency ($2\pi$ times the frequency)
$\mu$ = $\mu_0 \mu_r$, the product of the permeability of free space and the relative permeability of the material
For a ferromagnetic material, $\mu_r$ is very high (it can be 200,000 for annealed iron). This means the skin depth will be much smaller, and the resistance (which depends on the cross-sectional area of the conductor) will be higher.
The induction heater imposes a varying magnetic field on the pan, which will result in an induced current. The magnitude of this current is largely independent of the material of the pan - it just depends on the currents in the induction coil and the mutual inductance between the coil and the pan. For the same induced current, higher resistance will give more heating. And that is why you need pans (with a base) made of iron.
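A rough numerical comparison of the skin depths makes the point concrete (the material constants below are approximate handbook values, and $\mu_r$ for iron in particular varies enormously with alloy and field strength):

```python
import math

MU0 = 4 * math.pi * 1e-7  # permeability of free space, H/m

def skin_depth(rho, mu_r, f):
    """delta = sqrt(2 * rho / (omega * mu)), with omega = 2*pi*f."""
    omega = 2 * math.pi * f
    return math.sqrt(2 * rho / (omega * MU0 * mu_r))

f = 25e3  # a typical induction-cooktop frequency, Hz
delta_cu = skin_depth(1.7e-8, 1.0, f)      # copper, mu_r ~ 1
delta_fe = skin_depth(1.0e-7, 1000.0, f)   # iron, assuming mu_r ~ 1000

print(delta_cu * 1e3, "mm")  # ~0.4 mm
print(delta_fe * 1e3, "mm")  # ~0.03 mm
```

Even with a conservative $\mu_r$, the eddy currents in iron are squeezed into a layer more than ten times thinner than in copper, which is where the extra effective resistance comes from.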
• "The magnitude of this current is largely independent of the material of the pan - it just depends on the currents in the induction coil and the mutual inductance between the coil and the pan." Is there a formula capturing how much current is induced? I thought voltage is induced as per Faraday's law, resulting in current as per Ohm's law.
– akm
Nov 7, 2016 at 17:21
• If equal voltages are induced in different material, then your argument does not hold, since power dissipated would be V^2/R.
– akm
Nov 7, 2016 at 17:39
Induction cookers work by inducing eddy currents in the pan that is placed upon them. Typically the cooker consists of a coil through which an alternating current in the range 20-100 kHz is passed. The alternating magnetic field of the same frequency induces an alternating electric field in the pan. The electric field drives currents, which then dissipate as heat, hence heating the contents of the pan.
Heat is generated most efficiently if the power inherent in the electromagnetic fields is dissipated in a thin outer layer of the pan. The work done per unit volume by the fields is $\vec{E} \cdot \vec{J}$, which for a linear conductor is equal to $J^2/\sigma$, where $\sigma$ is the conductivity of the metal. But for a given eddy current $I$, the current density $J$ is inversely proportional to a linear dimension of the pan $L$ multiplied by the effective thickness in which the alternating electric field is confined - this is known as the skin depth. $$\delta = (\pi f \mu_0 \mu_r \sigma)^{-1/2}$$
Thus $J \propto L^{-1}\delta^{-1}$ and thus the total heating effect per unit volume is $J^2/\sigma \propto L^{-2} \delta^{-2} \sigma^{-1}$. The total heating effect is then obtained by multiplying by the pan area $L^2$ and the thickness of material in which the eddy currents flow ($\delta$ again). So the total heating effect $$H \propto J^2 \sigma^{-1} L^2 \delta \propto L^{-2} \delta^{-2} \sigma^{-1} L^2 \delta \propto \sigma^{-1} \delta^{-1}$$
For two materials of similar conductivity then the one with the smaller skin depth will result in the greatest Ohmic heat losses. Looking at the formula for skin depth, we can see that $\delta^{-1} \propto \mu_r^{1/2}$, so ferrous materials with a high relative permeability have a much smaller skin depth and thus the eddy currents dissipate much more power.
EDIT: As your edited question points out, this argument only works if the currents in the pan are equivalent. However, the coil plus pan can be viewed as a step down transformer where the secondary load resistance is $R \propto \sigma^{-1} \delta^{-1}$. The $\mu_r$ of a ferromagnetic pan could be a few thousand, so for similar conductivity, this increases $R$ by factors of $\sim 50$. If the resistance in the coil is $R_C$ then the fraction of power "usefully" dissipated in the pan is $$f_U \simeq \frac{a^2R}{a^2R + R_C},$$ where $a>1$ is the ratio of the voltage in the coil to the EMF induced in the pan (see ideal transformer). If $R \gg R_C$ then the transfer of power is very efficient. If you reduce $R$ by a factor of 50 (by using an aluminium pan with a factor 50 larger skin depth) then that is probably not going to be the case.
An additional point which rarely gets a mention is that there are hysteresis losses in ferromagnetic materials. But I am unsure about the relative magnitudes of these effects.
• your calculations are for a given eddy current. but for two different material, equal currents are induced or equal voltages?
– akm
Nov 7, 2016 at 17:23
• @Amit The changing magnetic field produces an EMF/voltage drop around a loop, $\mathcal E = -d\phi/dt$, so it's the voltage that's the same. However, while copper is about six times more conductive than iron, iron has more than a thousand times larger permeability.
– rob
Nov 10, 2016 at 10:37
• @rob so ???????
– akm
Nov 10, 2016 at 11:26
• @Amit So the conductivity is about the same, and the same EMF produces more or less the same amount of eddy current. But, this answer argues, the large magnetic permeability of iron concentrates that current in a much smaller volume of the metal than in aluminum or copper. You can't directly compare $I^2R$ to $V^2/R$ because the resistance $R$ seen by the eddy currents also depends on the volume of the metal involved. That's why iron, with its shallow skin depth, is more efficient at converting eddy currents to heat than non-ferromagnetic conductors.
– rob
Nov 10, 2016 at 14:19
• @rob Actually, I understand Amit's concern and am wrestling with it myself. That is why I tried to reduce it to the microscopic level. The same EMF will not produce the same current; the same electric field will produce the same current density. But that isn't the same thing and isn't what I've used in my answer. Nov 10, 2016 at 14:37
|
{}
|
# If an equilateral triangle, having vertex at (2, -1), has a side along the line x+y=2, then the area (in sq. units) of this triangle is:
• Option 1)
• Option 2)
6
• Option 3)
• Option 4)
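Working the geometry directly: the distance from the vertex to the given line is the height of the equilateral triangle, and since an equilateral triangle has h = (sqrt(3)/2)·s and area = (sqrt(3)/4)·s², the area is h²/sqrt(3). A quick check:

```python
import math

# Vertex A and the line x + y = 2 (i.e. x + y - 2 = 0) carrying the opposite side.
ax, ay = 2.0, -1.0
a_coef, b_coef, c_coef = 1.0, 1.0, -2.0

# Height of the triangle = distance from A to the line.
h = abs(a_coef * ax + b_coef * ay + c_coef) / math.hypot(a_coef, b_coef)

# For an equilateral triangle: h = (sqrt(3)/2)*side and area = (sqrt(3)/4)*side**2,
# which combine to area = h**2 / sqrt(3).
area = h**2 / math.sqrt(3)
print(area)  # 1/(2*sqrt(3)) ≈ 0.2887
```

This gives an area of sqrt(3)/6 = 1/(2·sqrt(3)) ≈ 0.289 square units.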
|
{}
|
# Similarity of triangles in argand plane
• December 14th 2010, 06:11 AM
kalyanram
Similarity of triangles in argand plane
Given $A(\alpha), B(\beta), C(\gamma)$ and $A'(\alpha'), B'(\beta'), C'(\gamma')$ are the vertices of $\triangle ABC$ and $\triangle A'B'C'$ respectively then show that the triangles are directly similar if
$\displaystyle\sum \alpha{({\beta'}-{\gamma'})} = 0$
Sol:
I tried out by saying that $\triangle ABC$ and $\triangle A'B'C'$ are similar if the ratios of their sides are proportional and $\angle A = \angle A'$, $\angle B = \angle B'$ and $\angle C = \angle C'$, so we have
$\frac {|\alpha - \beta|}{|\alpha' - \beta'|} = \frac {|\beta - \gamma|}{|\beta' - \gamma'|} = \frac {|\gamma - \alpha|}{|\gamma' - \alpha'|}$ -- (Eq 1)
$\arg(\frac {\alpha - \beta}{\beta - \gamma}) = \arg(\frac {\alpha' - \beta'}{\beta' - \gamma'}) ,\ \arg(\frac {\beta - \gamma}{\gamma - \alpha}) = \arg(\frac {\beta' - \gamma'}{\gamma' - \alpha'})$ and $\arg(\frac {\gamma - \alpha}{\alpha - \beta}) = \arg(\frac {\gamma' - \alpha'}{\alpha' - \beta'})$ -- (Eq 2)
From Eq 1 and Eq 2 we can conclude that the complex numbers
$\frac {\alpha - \beta}{\beta - \gamma} = \frac {\alpha' - \beta'}{\beta' - \gamma'} ,\ \frac {\beta - \gamma}{\gamma - \alpha} = \frac {\beta' - \gamma'}{\gamma' - \alpha'}$ and $\frac {\gamma - \alpha}{\alpha - \beta} = \frac {\gamma' - \alpha'}{\alpha' - \beta'}$ rearranging and solving this I cannot conclude that
$\displaystyle\sum \alpha{({\beta'}-{\gamma'})} = 0$
Can you let me know the mistake in my argument?
Thanks and Regards,
~Kalyan.
• December 14th 2010, 01:50 PM
Opalg
Quote:
Originally Posted by kalyanram
Given $A(\alpha), B(\beta), C(\gamma)$ and $A'(\alpha'), B'(\beta'), C'(\gamma')$ are the vertices of $\triangle ABC$ and $\triangle A'B'C'$ respectively then show that the triangles are directly similar if
$\displaystyle\sum \alpha{({\beta'}-{\gamma'})} = 0$
Sol:
I tried out by saying that the $\triangle ABC$ and $\triangle A'B'C'$ are similar if the ratio of their sides are proportional and the $\angle A = \angle A' , \angle B = \angle B' and \angle C = \angle C'$ so we have
$\frac {|\alpha - \beta|}{|\alpha' - \beta'|} = \frac {|\beta - \gamma|}{|\beta' - \gamma'|} = \frac {|\gamma - \alpha|}{|\gamma' - \alpha'|}$ -- (Eq 1)
$\arg(\frac {\alpha - \beta}{\beta - \gamma}) = \arg(\frac {\alpha' - \beta'}{\beta' - \gamma'}) ,\ \arg(\frac {\beta - \gamma}{\gamma - \alpha}) = \arg(\frac {\beta' - \gamma'}{\gamma' - \alpha'})$ and $\arg(\frac {\gamma - \alpha}{\alpha - \beta}) = \arg(\frac {\gamma' - \alpha'}{\alpha' - \beta'})$ -- (Eq 2)
From Eq 1 and Eq 2 we can conclude that the complex numbers
$\frac {\alpha - \beta}{\beta - \gamma} = \frac {\alpha' - \beta'}{\beta' - \gamma'} ,\ \frac {\beta - \gamma}{\gamma - \alpha} = \frac {\beta' - \gamma'}{\gamma' - \alpha'}$ and $\frac {\gamma - \alpha}{\alpha - \beta} = \frac {\gamma' - \alpha'}{\alpha' - \beta'}$ rearranging and solving this I cannot conclude that
$\displaystyle\sum \alpha{({\beta'}-{\gamma'})} = 0$
Can you let me know the mistake with my argument.
What's the problem, you're practically there! If $\frac {\alpha - \beta}{\beta - \gamma} = \frac {\alpha' - \beta'}{\beta' - \gamma'}$ then (multiplying out the fractions) $(\alpha - \beta)(\beta' - \gamma') = (\beta - \gamma)(\alpha' - \beta')$. Multiply out the brackets, cancel the term $-\beta\beta'$ on both sides, and rearrange to get $\alpha(\beta' - \gamma') +\beta(\gamma'-\alpha') + \gamma(\alpha'-\beta') = 0$, or in other words $\sum \alpha{({\beta'}-{\gamma'})} = 0$.
In fact, the question asked for the converse result: If $\sum \alpha{({\beta'}-{\gamma'})} = 0$ then the triangles are similar. But your argument is reversible, so you have proved that result too.
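A quick numerical sanity check of the identity: a directly similar image triangle is obtained from any spiral similarity $z \mapsto pz + q$ of the plane, and the sum then vanishes up to rounding (the numbers below are arbitrary):

```python
import random

random.seed(0)

def rc():
    """A random complex number with real and imaginary parts in [-5, 5]."""
    return complex(random.uniform(-5, 5), random.uniform(-5, 5))

# A direct similarity of the plane is z -> p*z + q with p != 0.
alpha, beta, gamma = rc(), rc(), rc()
p, q = rc(), rc()
alpha_p, beta_p, gamma_p = p * alpha + q, p * beta + q, p * gamma + q

s = (alpha * (beta_p - gamma_p)
     + beta * (gamma_p - alpha_p)
     + gamma * (alpha_p - beta_p))
print(abs(s))  # 0 up to rounding
```

Expanding symbolically, the $q$ terms cancel pairwise and the $p$ terms cancel in the cyclic sum, which is exactly the identity above.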
• December 14th 2010, 10:18 PM
kalyanram
Hi Opalg,
Thanks for the comment. Yeah sometimes I do miss the obvious.
~Kalyan.
|
{}
|
# If x is an integer, is |x|>1.
Senior Manager
Joined: 21 Jan 2010
Posts: 345
Followers: 2
Kudos [?]: 146 [0], given: 12
If x is an integer, is |x|>1. [#permalink]
### Show Tags
27 Apr 2012, 16:14
If x is an integer, is |x|>1.
(1) (1-2x)(1+x) < 0
(2) (1-x)(1+2x) < 0
Can somebody please explain this question?
Thanks
Vikram
Math Expert
Joined: 02 Sep 2009
Posts: 32963
Re: If x is an integer, is |x|>1. [#permalink]
27 Apr 2012, 22:22
vdadwal wrote:
If x is an integer, is |x|>1.
(1) (1-2x)(1+x) < 0
(2) (1-x)(1+2x) < 0
Can somebody please explain this question?
Thanks
Vikram
This post might help to get the ranges for (1) and (2) - "How to solve quadratic inequalities - Graphic approach": x2-4x-94661.html#p731476
If x is an integer, is |x| > 1?
First of all: is $$|x| > 1$$ means is $$x<-1$$ (-2, -3, -4, ...) or $$x>1$$ (2, 3, 4, ...), so for YES answer $$x$$ can be any integer but -1, 0, and 1.
(1) (1 - 2x)(1 + x) < 0 --> rewrite as $$(2x-1)(x+1)>0$$ (so that the coefficient of x^2 to be positive after expanding): roots are $$x=-1$$ and $$x=\frac{1}{2}$$ --> "$$>$$" sign means that the given inequality holds true for: $$x<-1$$ and $$x>\frac{1}{2}$$. $$x$$ could still equal to 1, so not sufficient.
(2) (1 - x)(1 + 2x) < 0 --> rewrite as $$(x-1)(2x+1)>0$$: roots are $$x=-\frac{1}{2}$$ and $$x=1$$ --> "$$>$$" sign means that the given inequality holds true for: $$x<-\frac{1}{2}$$ and $$x>1$$. $$x$$ could still equal to -1, so not sufficient.
(1)+(2) Intersection of the ranges from (1) and (2) is $$x<-1$$ and $$x>1$$. Sufficient.
Answer: C.
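The two counterexamples above (x = 1 for statement 1, x = -1 for statement 2) and the sufficiency of the combined statements can be confirmed by brute force over small integers:

```python
def s1(x):  # statement (1): (1 - 2x)(1 + x) < 0
    return (1 - 2 * x) * (1 + x) < 0

def s2(x):  # statement (2): (1 - x)(1 + 2x) < 0
    return (1 - x) * (1 + 2 * x) < 0

xs = range(-10, 11)
# Each statement alone admits a counterexample with |x| <= 1 ...
print([x for x in xs if s1(x) and abs(x) <= 1])  # [1]
print([x for x in xs if s2(x) and abs(x) <= 1])  # [-1]
# ... but together they force |x| > 1.
print(all(abs(x) > 1 for x in xs if s1(x) and s2(x)))  # True
```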
This question is also discussed here: m14-72785.html
Solving inequalities:
x2-4x-94661.html#p731476 (check this one first)
inequalities-trick-91482.html
data-suff-inequalities-109078.html
range-for-variable-x-in-a-given-inequality-109468.html?hilit=extreme#p873535
everything-is-less-than-zero-108884.html?hilit=extreme#p868863
Hope it helps.
Manager
Joined: 26 Dec 2011
Posts: 117
Re: If x is an integer, is |x|>1. [#permalink]
10 May 2012, 06:48
roots are x=-1 and x=1/2 and --> ">" sign means that the given inequality holds true for: x<-1 and x>1/2 ... can you please help me with this concept and what will happen if sign was "<"..further, will it be right in stating that when there is a positive sign, x is greater than the positive root and x is less than the negative root?
Math Expert
Re: If x is an integer, is |x|>1. [#permalink]
10 May 2012, 06:49
pavanpuneet wrote:
roots are x=-1 and x=1/2 and --> ">" sign means that the given inequality holds true for: x<-1 and x>1/2 ... can you please help me with this concept and what will happen if sign was "<"..further, will it be right in stating that when there is a positive sign, x is greater than the positive root and x is less than the negative root?
Explained here:
x2-4x-94661.html#p731476 (check this one first)
inequalities-trick-91482.html
data-suff-inequalities-109078.html
range-for-variable-x-in-a-given-inequality-109468.html?hilit=extreme#p873535
everything-is-less-than-zero-108884.html?hilit=extreme#p868863
Hope it helps.
Manager
Joined: 13 May 2010
Re: If x is an integer, is |x|>1. [#permalink]
19 Jul 2012, 19:23
I have a question regarding the above solution let's say in statement 1, when you solve the inequality why do you say that x<-1 AND x >1/2
why is this an AND condition ....why not OR? If this were a quadratic equation x (can be) = 1/2 OR -1 OR both
For inequality why is the same thing an AND as opposed to OR?
Current Student
Joined: 12 Dec 2012
Solving compounded inequalities - any efficient approach? [#permalink]
23 Oct 2013, 19:58
If x is an integer, is |x|>1?
(1) (1−2x)(1+x)<0
(2) (1−x)(1+2x)<0
Hi all - I tried this problem on a GMAT club test and I didn't really understand the method. Any quick approaches to finding the solution to each inequality? Is there any quick method to figure out the range of values of x for which statement 1 and 2 will be accurate?
Help appreciated! I'd like to know the quickest, most efficient way to approach such problems!
Cheers
Math Expert
Re: Solving compounded inequalities - any efficient approach? [#permalink]
24 Oct 2013, 01:05
Merging similar topics. Please refer to the solution above.
P.S. Please read carefully and follow: rules-for-posting-please-read-this-before-posting-133935.html Pay attention to the rule 3. Thank you.
|
{}
|
# Change hint color
hi,
could someone change hint color? i have a textbox in a blue background and it is almost invisible
thx
Hi there,
Try this extension and use the “ChangeTextboxColour” block:
1 Like
this extension is incredible, I didn’t know this functionality existed…
anyway, thanks!!
3 Likes
HOW? I was unable to find any relevant block. There is one, but it can change only the textbox background color, not the HINT color.
hey mate,
the block is this one.
the hint color is the “color” socket
2 Likes
demn, how I missed it, Thanks bro.
Alternatively, I have done it with text, and it is working well.
|
{}
|
# Proof of formula to convert from parallel to series impedance
So, I tried to find a proof to the formulas to convert from parallel to series reactive circuit, which I found in the ARRL Handbook:
$R_s=\frac{R_pX_p^2}{R_p^2+X_p^2}$ and $X_s=\frac{R_p^2X_p}{R_p^2+X_p^2}$
where $R_s$ and $X_s$ are the series resistance and reactance to match the impedance of a parallel circuit with $R_p$ and $X_p$ resistance and reactance.
The resistances are positive and real values and the reactances are signed real values (positive for inductive, and negative for capacitive).
I decided to use the complex representation of impedance (I use j for i because it seems to be the convention in electronics):
$Z_s=R_s+jX_s$
and the complex representation of admittance:
$Y_p=G_p+jB_p$
which gives: $\frac{1}{Z_p}=\frac{1}{R_p}+j\frac{1}{X_p}$, then, with some algebraic manipulation:
$Z_p=\frac{R_pX_p}{X_p+jR_p}$
After that, I do $Z_s=Z_p$, meaning:
$R_s+jX_s=\frac{R_pX_p}{X_p+jR_p}$
To bring the complex term to the numerator, I multiply by the conjugate of the denominator:
$R_s+jX_s=\frac{R_pX_p}{X_p+jR_p}*\frac{X_p-jR_p}{X_p-jR_p}=\frac{R_pX_p^2-jR_p^2X_p}{R_p^2+X_p^2}=\frac{R_pX_p^2}{R_p^2+X_p^2}+j\frac{-R_p^2X_p}{R_p^2+X_p^2}$
Now, to set real and imaginary parts equal, we get:
$R_s=\frac{R_pX_p^2}{R_p^2+X_p^2}$ and $X_s=-\frac{R_p^2X_p}{R_p^2+X_p^2}$
The first formula is fine, but what is wrong with the second formula? Why is there that extra "-" in front?
It must be $\dfrac{1}{Z_p} = \dfrac{1}{R_p} + \dfrac{1}{j X_p}$. In your version, the imaginary $j$ is in the numerator instead; since $\dfrac{1}{j} = -j$, writing $\dfrac{1}{jX_p} = -\dfrac{j}{X_p}$ flips the sign of the reactive term and removes the stray minus sign from your result for $X_s$.
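A numerical check of the conversion, starting from the correct admittance $\frac{1}{Z_p} = \frac{1}{R_p} + \frac{1}{jX_p}$ (the component values below are arbitrary):

```python
# R_p in parallel with the reactance X_p (as the impedance j*X_p),
# converted to the equivalent series R_s + j*X_s.
def parallel_to_series(Rp, Xp):
    Zp = 1.0 / (1.0 / Rp + 1.0 / (1j * Xp))  # note: 1/(j*Xp), not j*(1/Xp)
    return Zp.real, Zp.imag

Rp, Xp = 50.0, 30.0
Rs, Xs = parallel_to_series(Rp, Xp)

# The handbook formulas:
Rs_f = Rp * Xp**2 / (Rp**2 + Xp**2)
Xs_f = Rp**2 * Xp / (Rp**2 + Xp**2)
print(Rs, Rs_f)  # ~13.235, agree
print(Xs, Xs_f)  # ~22.059, agree
```

The same check with a negative (capacitive) $X_p$ also agrees, with both $X_s$ values coming out negative.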
|
{}
|
## meet the black heart of Messier
Posted in pictures, Travel, University life on April 10, 2019 by xi'an
## trip to München
Posted in Mountains, Statistics, Travel, University life, Wines on October 19, 2015 by xi'an
While my train ride to the fabulous De Gaulle airport was so much delayed that I had less than ten minutes from jumping from the carriage to sitting in my plane seat, I handled the run through security and the endless corridors of the airport in the allotted time, and reached Munich in time for my afternoon seminar and several discussions that prolonged into a pleasant dinner of Wiener Schnitzel and Eisbier. This was very exciting as I met physicists and astrophysicists involved in population Monte Carlo and parallel MCMC and manageable harmonic mean estimates and intractable ABC settings (because simulating the data takes eons!). I wish the afternoon could have been longer. And while this is the third time I come to Munich, I still have not managed to see the centre of town! Or even the nearby mountains. Maybe an unsuspected consequence of the Heisenberg principle…
## Bayesian model averaging in astrophysics
Posted in Books, Statistics, University life on July 29, 2015 by xi'an
[A 2013 post that somewhat got lost in a pile of postponed entries and referee’s reports…]
In this review paper, now published in Statistical Analysis and Data Mining 6, 3 (2013), David Parkinson and Andrew R. Liddle go over the (Bayesian) model selection and model averaging perspectives. Their argument in favour of model averaging is that model selection via Bayes factors may simply be too inconclusive to favour one model and only one model. While this is a correct perspective, this is about it for the theoretical background provided therein. The authors then move to the computational aspects and the first difficulty is their approximation (6) to the evidence
$P(D|M) = E \approx \frac{1}{n} \sum_{i=1}^n L(\theta_i)Pr(\theta_i)\, ,$
where they average the likelihood x prior terms over simulations from the posterior, which does not provide a valid (either unbiased or converging) approximation. They surprisingly fail to account for the huge statistical literature on evidence and Bayes factor approximation, incl. Chen, Shao and Ibrahim (2000). Which covers earlier developments like bridge sampling (Gelman and Meng, 1998).
As often the case in astrophysics, at least since 2007, the authors’ description of nested sampling drifts away from perceiving it as a regular Monte Carlo technique, with the same $n^{-1/2}$ convergence rate as other Monte Carlo techniques and the same dependence on dimension. It is certainly not the only simulation method where the produced “samples, as well as contributing to the evidence integral, can also be used as posterior samples.” The authors then move to “population Monte Carlo [which] is an adaptive form of importance sampling designed to give a good estimate of the evidence”, a particularly restrictive description of a generic adaptive importance sampling method (Cappé et al., 2004). The approximation of the evidence (9) based on PMC also seems invalid:
$E \approx \frac{1}{n} \sum_{i=1}^n \dfrac{L(\theta_i)}{q(\theta_i)}\, ,$
is missing the prior in the numerator. (The switch from θ in Section 3.1 to X in Section 3.4 is confusing.) Further, the sentence “PMC gives an unbiased estimator of the evidence in a very small number of such iterations” is misleading in that PMC is unbiased at each iteration. Reversible jump is not described at all (the supposedly higher efficiency of this algorithm is far from guaranteed when facing a small number of models, which is the case here, since the moves between models are governed by a random walk and the acceptance probabilities can be quite low).
The second quite unrelated part of the paper covers published applications in astrophysics. Unrelated because the three different methods exposed in the first part are not compared on the same dataset. Model averaging is obviously based on a computational device that explores the posteriors of the different models under comparison (or, rather, averaging), however no recommendation is found in the paper as to efficiently implement the averaging or anything of the kind. In conclusion, I thus find this review somehow anticlimactic.
## L’Aquila: earthquake, verdict, and statistics
Posted in Statistics, University life on October 25, 2012 by xi'an
Yesterday I read this blog entry by Peter Coles, a Professor of Theoretical Astrophysics at Cardiff and soon in Brighton, about L’Aquila earthquake verdict, condemning six Italian scientists to severe jail sentences. While most of the blogs around reacted against this verdict as an anti-scientific decision and as a 21st Century remake of Giordano Bruno‘s murder by the Roman Inquisition, Peter Coles argues in the opposite that the scientists were not scientific enough in that instance. And should have used statistics and probabilistic reasoning. While I did not look into the details of the L’Aquila earthquake judgement and thus have no idea whether or not the scientists were guilty in not signalling the potential for disaster, were an earthquake to occur, I cannot but repost one of Coles’ most relevant paragraphs:
I thought I’d take this opportunity to repeat the reasons I think statistics and statistical reasoning are so important. Of course they are important in science. In fact, I think they lie at the very core of the scientific method, although I am still surprised how few practising scientists are comfortable even with statistical language. A more important problem is the popular impression that science is about facts and absolute truths. It isn’t. It’s a process. In order to advance, it has to question itself.
Statistical reasoning also applies outside science to many facets of everyday life, including business, commerce, transport, the media, and politics. It is a feature of everyday life that science and technology are deeply embedded in every aspect of what we do each day. Science has given us greater levels of comfort, better health care, and a plethora of labour-saving devices. It has also given us unprecedented ability to destroy the environment and each other, whether through accident or design. Probability even plays a role in personal relationships, though mostly at a subconscious level.
A bit further down, Peter Coles also bemoans the shortcuts and oversimplification of scientific journalism, which reminded me of the time Jean-Michel Marin had to deal with radio journalists about an “impossible” lottery coincidence:
Years ago I used to listen to radio interviews with scientists on the Today programme on BBC Radio 4. I even did such an interview once. It is a deeply frustrating experience. The scientist usually starts by explaining what the discovery is about in the way a scientist should, with careful statements of what is assumed, how the data is interpreted, and what other possible interpretations might be and the likely sources of error. The interviewer then loses patience and asks for a yes or no answer. The scientist tries to continue, but is badgered. Either the interview ends as a row, or the scientist ends up stating a grossly oversimplified version of the story.
|
{}
|
# Equilibrium of Cylinder with two liquids at either side
## Homework Statement
(Please refer to the attachment given)
In the figure shown, the heavy cylinder (radius R) resting on a smooth surface separates two liquids of densities $$2\rho$$ and $$3\rho$$ . The height h for the equilibrium of cylinder must be:
$$a) \frac{3R}{2}$$
$$b) R \sqrt{\frac{3}{2}}$$
$$c) R \sqrt{2}$$
$$d) R \sqrt{\frac{3}{4}}$$
## Homework Equations
Basic Equations of hydrostatics
## The Attempt at a Solution
This problem is a little confusing. I think we have to consider the components of the pressure forces at various points on the cylinder, but I am not too sure how. Besides the above question: how would the cylinder move if it were not in equilibrium?
#### Attachments
• Cylinder_liquids.doc
37.5 KB · Views: 597
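A hedged sketch of the usual force-balance approach, assuming from the (unseen) figure that the $$2\rho$$ liquid stands to height $$h$$ on one side and the $$3\rho$$ liquid to height $$R$$ on the other: the horizontal hydrostatic force per unit length from a liquid column of height $$H$$ is $$\tfrac12 \rho_{\text{liq}} g H^2$$, and equating the two sides gives $$h = R\sqrt{3/2}$$, i.e. option (b). The assumed geometry should be checked against the attachment.

```python
import math

# Horizontal hydrostatic force per unit length of cylinder from a liquid
# column of height H and density rho_liq: integrate rho_liq*g*(depth) over
# the projected vertical face, giving 0.5 * rho_liq * g * H**2.
def horizontal_force(rho_liq, H, g=9.8):
    return 0.5 * rho_liq * g * H**2

rho, R = 1.0, 1.0
# Balance 0.5*(2 rho) g h^2 = 0.5*(3 rho) g R^2  =>  h = R * sqrt(3/2)
h = R * math.sqrt(3 / 2)
assert math.isclose(horizontal_force(2 * rho, h), horizontal_force(3 * rho, R))
print(h)  # ≈ 1.2247 R
```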
|
{}
|
# Center of mass
## Homework Statement
Find the center of mass of the triangle with vertices at (0,0), (6,6) and (-6,6) if the density at (x,y) is equal to y.
## The Attempt at a Solution
I am having trouble with these centroid/center of mass problems, and I can't even figure out the bounds for sure. Any help/tips?
Center of mass coordinates,
x= (My/m)
y=(Mx/m)
where My = Moment of y, and Mx = moment of x
My = double integral of x times density
Mx = double integral of y times density
You're given the density = y, so for My it's the double integral of xy over the given region,
and for Mx it's the double integral of y^2 over the given region.
To find m, the mass, you just do the double integral of the density over the region.
The region given to you is a triangle; if you draw it on paper you'll see it. Looking at it, you're going to have to split the integral into two, since the bottom bound of y changes (it goes from -x on the left of the y-axis to x on the right).
OR, a much simpler way: since the region is symmetric about the y-axis, you can just integrate the right half, with y going from x to 6 and x going from 0 to 6, in the order dy dx.
Multiply that answer by 2 to get each of the integrals over the whole region. (One caveat: the integrand xy for My is odd in x, so the two halves cancel and My = 0; the center of mass lies on the y-axis.)
Once you solve My, Mx, and m (mass), you can then plug them into the equations to get the location of the center of mass, which is
x = My/m
y = Mx/m
I basically went through the whole problem to cover your specific question about the bounds. You want to integrate over the enclosed region, since that's the object whose center of mass you're finding. Because the bottom bound of dy is not the same throughout (it changes from -x to x), you can't do it as one whole integral, but both sides (left and right of the y-axis) are the same, so you can integrate one side and multiply by 2.
I hope this helps you out.
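To sanity-check the bounds, here is a quick numeric sketch of my own (not part of the thread): a midpoint-rule double integral over the triangle 0 ≤ y ≤ 6, -y ≤ x ≤ y, computing m, My, and Mx directly.

```python
# Midpoint-rule double integral over the triangle with vertices
# (0,0), (6,6), (-6,6), where the region is 0 <= y <= 6, -y <= x <= y,
# and the density is rho(x, y) = y.
def integrate(f, n=400):
    """Approximate the double integral of f over the triangle."""
    total = 0.0
    hy = 6.0 / n
    for i in range(n):
        y = (i + 0.5) * hy          # midpoint in y
        hx = 2.0 * y / n            # the x-slice at height y has width 2y
        for j in range(n):
            x = -y + (j + 0.5) * hx  # midpoint in x
            total += f(x, y) * hx * hy
    return total

m  = integrate(lambda x, y: y)       # mass
My = integrate(lambda x, y: x * y)   # moment about the y-axis
Mx = integrate(lambda x, y: y * y)   # moment about the x-axis

print(m, My / m, Mx / m)
```

The exact values are m = 144, My = 0, and Mx = 648, so the center of mass comes out at (0, 4.5), consistent with the symmetry argument above.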
Thank you very much! I am pretty sure I understand it now!
|
{}
|
Numerical solution of an advection equation, $\frac{\partial P}{\partial t}+\frac{\partial}{\partial x}\left(P^{5/3}\right)=0$, with finite volume
I was trying to solve the following equation numerically, $$\frac{\partial P}{\partial t}+\frac{\partial}{\partial x}\left(P^{5/3}\right)=0$$ I adopted the Godunov approach for discretising the equation numerically, $$\frac{P_{i}^{n+1}-P_{i}^{n}}{\Delta t}=-\frac{\mathcal{F}_{i+\frac{1}{2}}^{n}-\mathcal{F}_{i-\frac{1}{2}}^{n}}{x_{i+\frac{1}{2}}-x_{i-\frac{1}{2}}}$$ where $$\mathcal{F}$$ represents the numerical flux.
Following the first-order upwind scheme, I approximated the flux as, $$\mathcal{F}_{i+\frac{1}{2}}^{n}=\begin{cases} P_{i}^{5/3} & \text{if } P_{i}^{2/3}>0\\ P_{i+1}^{5/3} & \text{if } P_{i}^{2/3}<0 \end{cases}.$$ However, with such a discretization, I am getting oscillations in the numerical solution. I am not sure if the discretization is correct. I will be grateful if someone can help me in this regard.
• The equation clearly only makes sense if $P\ge 0$. But then the question remains under what circumstances you consider $P^{2/3}<0$? If $P\ge 0$, isn't $P^{2/3}$ always greater or equal to zero? Jan 19 at 20:39
• Yes it is. I wrote it for completeness. The flux is only going to take the left value. Jan 20 at 10:47
• Then have you tried to debug your code? Maybe by solving a linear problem at first? Jan 20 at 18:04
• For a linear problem, it is giving the right output. This leads me to wonder whether the calculation of "dt" that I am performing is correct. I was calculating dt in the following manner: $$dt=\mathcal{C}dx/P_{i}^{2/3}$$. For the right calculation, instead of $P_{i}^{2/3}$ one needs to take $P_{i+1/2}^{2/3}$, but I do not know the value of $P_{i+1/2}$. Could this be the problem? Jan 21 at 16:56
• you should have a single dt for the entire domain, try using the largest value in the entire domain of $a \approx \partial (P^{5/3})/\partial P = 5/3 P^{2/3}$ for that timestep. Jan 21 at 21:37
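The last comment's recipe can be sketched in a few lines (helper names are mine; the speed is obtained by differentiating the flux, a = d(P^{5/3})/dP = (5/3) P^{2/3}):

```python
# One global time step per iteration, from the largest characteristic
# speed a = (5/3) * P**(2/3) anywhere in the domain (CFL condition).
def cfl_dt(P, dx, C=0.8):
    a_max = max((5/3) * p ** (2/3) for p in P)
    return C * dx / a_max

# Example: a decreasing profile between 2 and 1 on a grid with dx = 0.1.
P = [2.0, 1.8, 1.5, 1.2, 1.0]
dt = cfl_dt(P, dx=0.1)
print(dt)
```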
Your discretization with the upwind scheme looks correct.
One reason why you get oscillations might be that you are choosing an incorrect time step. Another one is if you have some complicated and not well-behaved boundary/initial conditions, some instabilities might arise if you don't handle them properly.
To compute a fair time step, that satisfies the Courant-Friedrichs-Lewy necessary stability condition, you can just take the maximum of the flow speed in your domain. Don't worry too much about how to compute the speed on the cell boundaries (fractional $$i$$), just take a fair value... The maximum of $$P^{2/3}$$ on the cell centers will be just fine. It is also a good practice to apply a safety factor $$c \in ]0,1[$$ to the $$\Delta t$$ you obtain in such a way, like
$$\Delta t = c\,\frac{3}{5}\,\frac{\Delta x}{\max_i P_i^{2/3}}$$
I implemented your scheme on your equation for a trivial set of boundary conditions ($$P=2$$) and initial conditions ($$P=2$$ on the left boundary, and drops linearly reaching value $$P=1$$ on the right boundary), and the solution I get looks well behaved:
import matplotlib.pyplot as plt
import numpy as np

#%% Settings
# Physical mesh points (you have to sum 1 left ghost cell)
Nx = 10
xStart = 0.
xEnd = 1.
tEnd = 1.

# Grid
x = np.linspace(xStart, xEnd, Nx)
# Initial condition
ic = np.linspace(2., 1., Nx)
# Boundary condition
bc = 2.
# Safety factor
Ccfl = 0.8

#%% Functions
def godunov(P, F, x, dt):
    return P[1:] - dt * (F[1:] - F[:-1]) / (x[1:] - x[:-1])

def flux(P):
    return P**(5/3)

#%% Solution
P = ic.copy()
Ps = []; ts = []
t = 0.
while t < tEnd:
    dt = Ccfl * 3/5 * (xEnd - xStart)/Nx / max(P)**(2/3)
    F = flux(P)
    P = np.append([bc], godunov(P, F, x, dt))
    t += dt
    # Save results
    Ps.append(P)
    ts.append(t)

#%% Print results
fig, ax = plt.subplots()
for tt, t in enumerate(ts):
    if tt % 3 == 0:
        ax.plot(x, Ps[tt], 'o-', label=f"t={t}")
ax.legend()
|
{}
|
## anonymous 5 years ago four less than a quotient of a number and 6 is 3. Find the number
1. anonymous
42?
2. anonymous
$${x \over 6} - 4 = 3$$
3. anonymous
x/6 = 3+4, x/6 = 7, x = 42
4. anonymous
thanks
|
{}
|
A neat way to show that a set of primes has zero density (within the primes) is to use the following form of the Green-Tao theorem:
Any set of positive density within the primes has arbitrarily long arithmetic progressions.
In particular, any set of primes which does not contain three (say) elements in arithmetic progression must have zero density.
If a set $P$ of primes has the property that $p_1,p_2\in P$ implies that $(p_1+p_2)/2\not\in P$, then the set $P$ has zero density in the primes.
As an immediate corollary, we get the (unconditional) result that the Mersenne primes (ones of the form $2^p-1$) have zero density.
This trick seems like it could be applied to many other natural sets of primes.
EDIT: Zero density for Mersenne primes is easy to get anyway, as Ben Weiss points out, and so is zero density for primes of form $n^2+1$, which would also follow from this method.
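The Mersenne case can even be checked mechanically. For distinct exponents p and r, the midpoint of 2^p - 1 and 2^r - 1 is 2^(p-1) + 2^(r-1) - 1, which can only be of the form 2^m - 1 when p = r. A small sketch of mine (not part of the original post) confirming that no midpoint lands back in the set:

```python
# Midpoint test behind the corollary: the average of two distinct Mersenne
# numbers 2^p - 1 and 2^r - 1 equals 2^(p-1) + 2^(r-1) - 1, which is a
# Mersenne number only when p = r. Check all exponents up to 40.
mersenne = [2**p - 1 for p in range(2, 40)]
s = set(mersenne)
# All Mersenne numbers are odd, so a + b is always even and // 2 is exact.
has_midpoint = any((a + b) // 2 in s
                   for a in mersenne for b in mersenne if a != b)
print(has_midpoint)
```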
|
{}
|
# Problem 1 Show thatsin(x)le ~12 dxis convergent_
###### Question:
Problem 1 Show that sin(x)le ~12 dx is convergent_
|
{}
|
# O(N) sigma model renormalization
Does anyone know, is a model with lagrangian $\mathcal{L} = \frac{(\partial_{\mu}\phi_a)^2}{2}-\frac{m^2 \phi_a^2}{2}-\frac{\lambda}{8N}(\phi_a \, \phi_a)^2$ renormalizable? I'm using BPHZ scheme and everything is OK in one loop. But it seems to me (may be I'm mistaken) that the scheme breaks down even in two loops. I will be grateful for links on books or lectures where such a model is considered.
• Renormalizability depends on space-time dimension. If $d\leq 4$, you're fine. – Adam Jun 3 '14 at 16:15
• Yes, I'm considering this model in d=4 space-time. Couldn't you please tell me where can I read more about this model? – user43283 Jun 3 '14 at 16:17
• Have a look at Zinn-Justin's book, you'll find everything you want about the O(N) model... – Adam Jun 3 '14 at 17:38
• Thanks a lot for recommending this book, but as far as I understood, Zinn-Justin uses a non-perturbative approach to this problem. Do you know any article or lecture where this model is treated perturbatively? I just need to check my calculations. Anyway, thanks for Zinn-Justin :) – user43283 Jun 3 '14 at 19:45
• In "Quantum Field Theory and Critical Phenomena", he does everything ;-) Chapter 11 might be what you're looking for. – Adam Jun 3 '14 at 19:50
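For reference, the power counting behind Adam's first comment can be written out (standard dimensional analysis, not from the thread):

```latex
% Mass dimensions in d spacetime dimensions (natural units):
[\phi_a] = \frac{d-2}{2}, \qquad
[\lambda] = d - 4\,[\phi_a] = 4 - d .
% The coupling is dimensionless (marginal) at d = 4 and has positive mass
% dimension for d < 4, so the (\phi_a\phi_a)^2 interaction is
% power-counting renormalizable for d \le 4.
```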
|
{}
|
# Raw curve25519 public key points
I'm trying to understand curve25519, and ECC public points.
I'm playing with Minisign, to better understand the fundamentals of ECC.
Minisign uses curve25519 and outputs public keys as base64 encoded strings in the following format:
base64(<signature_algorithm> || <key_id> || <public_key>)
signature_algorithm: Ed
key_id: 8 random bytes
public_key: Ed25519 public key
As an example, my public key is:
RWRxmbgCt+0wPvdZ0alM7J46oqsOBTtud4E8zRznnCT0q0u7X971eWUN
Decoding this Base64 to Hex we get:
45 64 71 99 b8 02 b7 ed 30 3e f7 59 d1 a9 4c ec 9e 3a a2 ab 0e 05 3b 6e 77 81 3c cd 1c e7 9c 24 f4 ab 4b bb 5f de f5 79 65 0d
This makes sense... 45 64 == Ed
Next eight random bytes... 71 99 b8 02 b7 ed 30 3e
Then, if I'm correct, the public key... f7 59 d1 a9 4c ec 9e 3a a2 ab 0e 05 3b 6e 77 81 3c cd 1c e7 9c 24 f4 ab 4b bb 5f de f5 79 65 0d
Now this is what I'm trying to understand!
The public key is the right size (32 bytes/256 bits), however isn't it supposed to start with 04?
Also is it possible to take the public key and break it into it's X,Y co-ordinates as integers?
Is 16 bytes enough to represent a curve25519 X or Y component?
Thanks for the help.
• Doesn't need to start with 04. Points are encoded according to section 5.1.2 in RFC 8032: 255 bits for y-coordinate and 1 bit for x coordinate. Coordinates on Curve25519 are mod $p$ with $p= 2^{255} - 19$ so 32 bytes are needed for one coordinate. – corpsfini Jul 24 '19 at 11:05
## 2 Answers
The leading 04 byte is specified by the SEC standard (which is based on the ANSI X9.62 standard). It indicates that the public key point is not compressed. If the key is compressed, it uses 02 or 03 as leading byte depending on the lower bit of the y coordinate.
EdDSA public keys do not use this byte for two reasons:
• It always uses compressed points; there is one additional bit giving the sign of the x coordinate, and it's concatenated directly to the encoded y coordinate (it's not the lower bit of y like in SEC; per RFC 8032 the sign bit is the least significant bit of x). Since the y coordinate in Ed25519 fits in 255 bits, with the additional bit it fits nicely in 256 bits.
• It doesn't need compatibility with older ECC implementations, since it's a new signature algorithm over a new curve which are not in the SEC standard.
(minisign author here)
As noted by corpsfini, keys encode the Y coordinate. The X coordinate is recovered using the curve equation: X = sqrt((Y^2 - 1) / (d Y^2 + 1)).
The square root has two solutions, so we need to encode the sign of x as well. Since coordinates only require 255 bits, we have an extra bit, used to encode the sign.
X and Y ∈ [0; 2^255-19[, so 16 bytes wouldn't be enough to encode them.
• Thanks a lot! This is exactly what I needed. So the 32bytes I see above is really the Y coordinate, and I can recover X with that formula to get both. I love Minisign, it's really fab. Can I ask you, is it possible to output the raw private key (just for learning), or manually decrypt it? – Woodstock Jul 24 '19 at 11:44
• Keys are encoded in little-endian format, GnuPG being the only implementation I'm aware of that uses big-endian for Ed25519. So y as an integer is (0xf7)*2^0 + (0x59)*2^8 ... (0x0d)*2^252 = 6059360325038685432335429159867106683431817502499950464645549794044379486711 and x = 33942739095931203280835016784239364197415773456702966128992901549564140435446 – Frank Denis Jul 24 '19 at 12:00
• Wonderful thanks Frank. Is decryption of the private key possible if I manually follow the description of its form, or is there a way to extract the raw key in an easier fashion? – Woodstock Jul 24 '19 at 12:06
• The format of the private key is described here: jedisct1.github.io/minisign/#secret-key-format - The KDF is used to generate the key steam. If you don't want to do this yourself, the easiest way to go is probably to modify the code to print the key after decryption. Or use libsodium (or any other Ed25519 implementation) directly. – Frank Denis Jul 24 '19 at 12:11
• Thanks Frank! <3 – Woodstock Jul 24 '19 at 12:28
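Putting the thread together, here is a small sketch of mine that unpacks the key from the question and reads off the encoded y-coordinate the way corpsfini and Frank describe; the resulting integer is the one Frank gives in the comments:

```python
import base64

# Unpack the minisign public key from the question:
# base64(<signature_algorithm> || <key_id> || <public_key>).
raw = base64.b64decode("RWRxmbgCt+0wPvdZ0alM7J46oqsOBTtud4E8zRznnCT0q0u7X971eWUN")
algorithm, key_id, pk = raw[:2], raw[2:10], raw[10:]

# Per RFC 8032, the 32 key bytes are the y-coordinate in little-endian
# order, with the top bit of the last byte carrying the sign of x.
sign_of_x = pk[31] >> 7
y = int.from_bytes(pk, "little") & ((1 << 255) - 1)

print(algorithm, sign_of_x, y)
```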
|
{}
|
# how to compute the probability ditsribution of the average of n iid random variables
It has been a long time since I took a probability class, and I'm sure that this site is for any level of mathematics...
Given $n$ iid random variables, I want to compute the pdf of their average. How can I do that? I know how to do it for the sum, but does the same approach work for the average? Can you give me a reference textbook that covers the average, not just the sum?
A related but less important question: I've read some papers on Monte Carlo simulation, but mostly they just focus on tail bounds and are not interested in actually computing the pdf. Is that because bounding the upper tail is enough, or because computing the exact pdf is rather cumbersome?
-
The average is just the sum divided by a constant. If you know the distribution function of the sum, the one for the average is easy. The probability distribution of the sum is not necessarily easy to find. That's why, in favourable cases, it is very helpful that by the Central Limit Theorem, the sum (and therefore the average) has a distribution which can be well approximated by the normal, if $n$ is large. – André Nicolas Nov 22 '11 at 4:56
@André Nicolas Thanks! – Endo Nov 23 '11 at 16:06
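To make the comment concrete (this sketch is mine, not from the thread): if the sum $S$ has density $f_S$, the average $A = S/n$ has density $f_A(a) = n f_S(na)$ by a change of variables, and in particular $E[A] = \mu$ and $\mathrm{Var}(A) = \sigma^2/n$. A quick Monte Carlo check of those two moments with Exponential(1) draws:

```python
import random
import statistics

# Average of n iid Exponential(1) variables (mean 1, variance 1):
# the sample mean should stay near 1 and the variance near 1/n.
random.seed(0)
n, trials = 30, 20000
averages = [statistics.fmean(random.expovariate(1.0) for _ in range(n))
            for _ in range(trials)]

m = statistics.fmean(averages)      # should be close to mu = 1
v = statistics.variance(averages)   # should be close to sigma^2 / n = 1/30
print(m, v)
```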
|
{}
|
# 8.6: pH Calculations
One thing that you should notice about the numbers in the previous examples is that they are very small. In general, chemists find that working with large negative exponents like these (very small numbers) is cumbersome. To simplify the process, calculations involving hydronium ion concentrations are generally done using logarithms. Recall that a logarithm is simply the exponent that some base number needs to be raised to in order to generate a given number. In these calculations, we will use a base of 10. A number such as 10,000 can be written as 10^4, so by the definition, the logarithm of 10^4 is simply 4. For a small number such as 10^-7, the logarithm is again simply the exponent, or -7. Before calculators became readily available, taking the logarithm of a number that was not an integral power of 10 meant a trip to “log tables” (or even worse, using a slide rule). Now, pushing the LOG button on a scientific calculator makes the process trivial. For example, the logarithm of 14,283 (with the push of a button) is 4.15482. If you are paying attention, you should have noticed that the logarithm contains six digits, while the original number (14,283) only contains five significant figures. This is because a logarithm consists of two sets of numbers: the digits to the left of the decimal point (called the characteristic) simply reflect the integral power of 10 and are not included when you count significant figures, while the numbers after the decimal (the mantissa) should have the same significance as your experimental number. Thus a logarithm of 4.15482 actually represents five significant figures.
There is one other convention that chemists apply when they are dealing with logarithms of hydronium ion concentrations: the logarithm is multiplied by (-1) to change its sign. Why would we do this? In most aqueous solutions, [H3O+] will vary between 10^-1 and 10^-13 M, giving logarithms of -1 to -13. To make these numbers easier to work with, we take the negative of the logarithm (-log[H3O+]) and call it a pH value. The use of the lower-case “p” reminds us that we have taken the negative of the logarithm, and the upper-case “H” tells us that we are referring to the hydronium ion concentration. Converting a hydronium ion concentration to a pH value is simple. Suppose you have a solution where [H3O+] = 3.46 × 10^-4 M and you want to know the corresponding pH value. You would enter 3.46 × 10^-4 into your calculator and press the LOG button. The display should read “-3.460923901”. First, we multiply this by (-1) and get 3.460923901. Next, we examine the number of significant figures. Our experimental number, 3.46 × 10^-4, has three significant figures, so our mantissa must have three digits. We round our answer and express the result as pH = 3.461.
The reverse process is equally simple. If you are given a pH value of 7.04 and are asked to calculate a hydronium ion concentration, you would first multiply the pH value by (-1) to give -7.04. Enter this in your calculator and then press the key (or key combination) to calculate “10^x”; your display should read “9.120108 × 10^-8”. There are only two digits in our original mantissa (7.04), so we must round this to two significant figures, or [H3O+] = 9.1 × 10^-8 M.
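The two calculator workflows above can be written as a short sketch (values taken from the worked examples):

```python
import math

# [H3O+] -> pH: take log10 and flip the sign.
h3o = 3.46e-4
pH = -math.log10(h3o)            # report the mantissa to 3 significant figures

# pH -> [H3O+]: negate and raise 10 to that power.
pH_given = 7.04
h3o_back = 10 ** (-pH_given)     # reported to 2 significant figures

print(round(pH, 3), h3o_back)
```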
Exercise $$\PageIndex{1}$$
Calculating [H3O+] and pH Values
1. A solution is known to have a hydronium ion concentration of 4.5 × 10^-5 M; what is the pH of this solution?
2. A solution is known to have a pH of 9.553; what is the concentration of hydronium ion in this solution?
3. A solution is known to have a hydronium ion concentration of 9.5 × 10^-8 M; what is the pH of this solution?
4. A solution is known to have a pH of 4.57; what is the hydronium ion concentration of this solution?
There is another useful calculation that we can do by combining what we know about pH with the expression
$K_{W}=[H_{3}O^{+}][HO^{-}]$
We know that KW = 10^-14 and we know that (-log[H3O+]) is pH. If we define (-log[HO^-]) as pOH, we can take our expression for KW and take the (-log) of both sides (remember, algebraically you can perform the same operation on both sides of an equation), and we get:
$K_{W}=10^{-14}=[H_{3}O^{+}][HO^{-}]$
$-\log (10^{-14})=(-\log [H_{3}O^{+}])+(-\log [HO^{-}])$
$14=pH+pOH$
This tells us that the values of pH and pOH must always add up to 14! Thus, if the pH is 3.5, the pOH must be 14 - 3.5 = 10.5. This relationship is quite useful, as it allows you to quickly convert between pH and pOH, and therefore between [H3O+] and [HO^-].
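A quick numerical check of the relation, with illustrative values:

```python
# pH + pOH = 14 means the two ion concentrations always multiply
# back to Kw = 1e-14.
pH = 3.5
pOH = 14 - pH                  # 10.5
h3o = 10 ** (-pH)              # [H3O+]
ho = 10 ** (-pOH)              # [HO-]
Kw = h3o * ho                  # recovers 1e-14 up to float rounding
print(pOH, Kw)
```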
We can now re-address neutrality in terms of the pH scale:
• A solution is acidic if pH < 7.
• A solution is basic if pH > 7.
• A solution is neutral if pH = 7.
The simplest way to determine the pH of a solution is to use an electronic pH meter. A pH meter is actually a sensitive millivolt meter that measures the potential across a thin, sensitive glass electrode that is immersed in the solution. The voltage that develops is a direct function of the pH of the solution and the circuitry is calibrated so that the voltage is directly converted into the equivalent of a pH value. You will most likely use a simple pH meter in the laboratory. The thing to remember is that the sensing electrode has a very thin, fragile, glass membrane and is somewhat expensive to replace. Be careful!
A simple way to estimate the pH of a solution is by using an indicator. A pH indicator is a compound that undergoes a change in color at a certain pH value. For example, phenolphthalein is a commonly used indicator that is colorless at pH values below 9, but is pink at pH 10 and above (at very high pH it becomes colorless again). In the laboratory, a small amount of phenolphthalein is added to a solution at low pH and then a base is slowly added to achieve neutrality. When the phenolphthalein changes from colorless to pink, you know that enough base has been added to neutralize all of the acid that is present. In reality, the transition occurs at pH 9.2, not pH 7, so the resulting solution is actually slightly alkaline, but the additional hydroxide ion concentration at pH 9 (10-5 M) is generally insignificant relative to the concentrations of the solutions being tested.
A convenient way to estimate the pH of a solution is to use pH paper. This is simply a strip of paper that has a mixture of indicators embedded in it. The indicators are chosen so that the paper takes on a slightly different color over a range of pH values. The simplest pH paper is litmus paper that changes from pink to blue as a solution goes from acid to base. Other pH papers are more exotic. In the laboratory, you will use both indicators, like phenolphthalein, and pH papers in neutralization experiments called titrations as described in section 8.7.
## Contributor
• ContribEEWikibooks
# ATLAS Conference Notes
Recently added:
2021-09-15
16:25
Search for flavor-changing neutral-current couplings between the top quark and the Z boson with LHC Run2 proton-proton collisions at $\sqrt{s} = 13\,\text{TeV}$ with the ATLAS detector A search for flavor-changing neutral-current couplings between a top quark, an up or charm quark and a $Z$ boson is presented, using proton-proton collision data at $\sqrt{s} = 13\,\text{TeV}$ collected by the ATLAS detector at the Large Hadron Collider. [...] ATLAS-CONF-2021-049. - 2021. - 32 p. Original Communication (restricted to ATLAS) - Full text
2021-09-15
16:24
Search for heavy resonances in four-top-quark final states in $pp$ collisions at $\sqrt{s}=13$ TeV with the ATLAS detector A search for associated production of a heavy $Z'$ boson with a top-antitop-quark pair $(t\bar{t}Z')$, and decaying into a $t\bar{t}$ pair is presented. [...] ATLAS-CONF-2021-048. - 2021. Original Communication (restricted to ATLAS) - Full text
2021-09-04
10:39
Search for $H^{\pm} \rightarrow W^{\pm}A \rightarrow W^{\pm}\mu\mu$ in $pp \rightarrow t\overline{t}$ events using an $e\mu\mu$ signature with the ATLAS detector at $\sqrt{s}=13$ TeV This note presents a search for a charged Higgs boson ($H^{\pm}$) decaying to a pseudoscalar particle ($A$) and a $W$ boson in top-quark pair events. [...] ATLAS-CONF-2021-047. - 2021. - 15 p. Original Communication (restricted to ATLAS) - Full text
2021-08-25
11:41
Study of the $B_c^+\to J/\psi D_s^+$ and $B_c^+\to J/\psi D_s^{*+}$ decays in $pp$ collisions at $\sqrt{s} = 13$ TeV with the ATLAS detector A study of the $B_c^+\to J/\psi D_s^+$ and $B_c^+\to J/\psi D_s^{*+}$ decays using 139 fb$^{-1}$ of integrated luminosity collected with the ATLAS detector from $\sqrt{s} = 13$ TeV $pp$ collisions at the LHC is presented. [...] ATLAS-CONF-2021-046. - 2021. Original Communication (restricted to ATLAS) - Full text
2021-08-25
11:40
A search for an unexpected asymmetry in the production of $e^+ \mu^-$ and $e^- \mu^+$ pairs in proton-proton collisions recorded by the ATLAS detector at $\sqrt s = 13~\mathrm{TeV}$ This search, a type not previously performed at ATLAS, uses a comparison of the production cross sections for $e^+ \mu^-$ and $e^- \mu^+$ pairs to constrain Beyond Standard Model physics processes. [...] ATLAS-CONF-2021-045. - 2021. - 15 p. Original Communication (restricted to ATLAS) - Full text
2021-08-25
11:38
Measurements of Higgs boson production cross-sections in the $H\to\tau^{+}\tau^{-}$ decay channel in $pp$ collisions at $\sqrt{s}=13\,\text{TeV}$ with the ATLAS detector Measurements of the production cross-sections of the Standard Model (SM) Higgs boson ($H$) decaying into a pair of $\tau$ leptons are presented. [...] ATLAS-CONF-2021-044. - 2021. - 76 p. Original Communication (restricted to ATLAS) - Full text
2021-08-25
11:36
Search for vector boson resonances decaying to a top quark and a bottom quark in the hadronic final state using $pp$ collisions at $\sqrt{s}~=~13$ TeV with the ATLAS detector A search for a new charged massive gauge boson, $W'$, is performed with the ATLAS detector at the LHC. [...] ATLAS-CONF-2021-043. - 2021. - 24 p. Original Communication (restricted to ATLAS) - Full text
2021-08-25
11:35
Search for the Charged-Lepton-Flavor Violating decay $Z\rightarrow e\mu$ in $pp$ collisions at $\sqrt{s}=13$ TeV with the ATLAS detector A search is presented for the charged-lepton-flavor violating process $Z\rightarrow e\mu$ using 139~fb$^{-1}$ of $pp$ collision data collected with the ATLAS experiment at $\sqrt{s}=13$ TeV. [...] ATLAS-CONF-2021-042. - 2021. - 15 p. Original Communication (restricted to ATLAS) - Full text
2021-08-25
11:33
Search for high-mass $W\gamma$ and $Z\gamma$ resonances in the hadronic final state using 139 fb$^{-1}$ of pp collisions at $\sqrt{s}$= 13 TeV with the ATLAS Detector A search for high-mass charged and neutral bosons decaying to $W\gamma$ and $Z\gamma$ final states is presented in this note. [...] ATLAS-CONF-2021-041. - 2021. - 29 p. Original Communication (restricted to ATLAS) - Full text
2021-08-25
11:32
Search for the single production of vector-like $T$ quarks decaying into $tH$ or $tZ$ with the ATLAS detector This note describes a search for the single production of an up-type vector-like quark ($T$) decaying as $T \rightarrow Ht$ or $T \rightarrow Zt$. [...] ATLAS-CONF-2021-040. - 2021. - 44 p. Original Communication (restricted to ATLAS) - Full text
# Fuel efficiency
Fuel efficiency is a form of thermal efficiency: the ratio of the kinetic energy or work obtained to the chemical potential energy contained in a carrier (fuel). Overall fuel efficiency varies from device to device and from application to application, and this spread of values is often illustrated as a continuous energy profile. Non-transportation applications, such as industry, also benefit from increased fuel efficiency, especially fossil fuel power plants and industries dealing with combustion, such as ammonia production during the Haber process.
In the context of transport, fuel economy is the energy efficiency of a particular vehicle, given as a ratio of distance traveled per unit of fuel consumed. It depends on several factors, including engine efficiency, transmission design, and tire design. Most countries using the metric system state fuel economy as "fuel consumption" in liters per 100 kilometers (L/100 km) or kilometers per liter (km/L or kmpl). In a number of countries still using other systems, fuel economy is expressed in miles per gallon (mpg), for example in the US and usually also in the UK (imperial gallon); there is sometimes confusion because the imperial gallon is 20% larger than the US gallon, so mpg values are not directly comparable. Traditionally, litres per mil were used in Norway and Sweden, but both have aligned to the EU standard of L/100 km.[1]
Fuel consumption is a more accurate measure of a vehicle's performance because it is a linear relationship while fuel economy leads to distortions in efficiency improvements.[2] Weight-specific efficiency (efficiency per unit weight) may be stated for freight, and passenger-specific efficiency (vehicle efficiency per passenger) for passenger vehicles.
## Vehicle design
Fuel efficiency is dependent on many parameters of a vehicle, including its engine parameters, aerodynamic drag, weight, AC usage, fuel and rolling resistance. There have been advances in all areas of vehicle design in recent decades. Fuel efficiency of vehicles can also be improved by careful maintenance and driving habits.[3]
Hybrid vehicles use two or more power sources for propulsion. In many designs, a small combustion engine is combined with electric motors. Kinetic energy which would otherwise be lost to heat during braking is recaptured as electrical power to improve fuel efficiency. The larger batteries in these vehicles power the car's electronics, allowing the engine to shut off and avoid prolonged idling.[4]
## Fleet efficiency
Fleet efficiency describes the average efficiency of a population of vehicles. Technological advances in efficiency may be offset by a change in buying habits with a propensity to heavier vehicles, which are less efficient, all else being equal.
## Energy efficiency terminology
Energy efficiency is similar to fuel efficiency but the input is usually in units of energy such as megajoules (MJ), kilowatt-hours (kW·h), kilocalories (kcal) or British thermal units (BTU). The inverse of "energy efficiency" is "energy intensity", the amount of input energy required for a unit of output such as MJ/passenger-km (of passenger transport), BTU/ton-mile or kJ/t-km (of freight transport), GJ/t (for production of steel and other materials), BTU/(kW·h) (for electricity generation), or litres/100 km (of vehicle travel). Litres per 100 km is itself a measure of "energy intensity", where the input is measured by the amount of fuel and the output by the distance travelled; fuel economy in automobiles is one example.
Given a heat value of a fuel, it would be trivial to convert from fuel units (such as litres of gasoline) to energy units (such as MJ) and conversely. But there are two problems with comparisons made using energy units:
• There are two different heat values for any hydrogen-containing fuel which can differ by several percent (see below).
• When comparing transportation energy costs, it must be remembered that a kilowatt hour of electric energy may require an amount of fuel with heating value of 2 or 3 kilowatt hours to produce it.
## Energy content of fuel
The specific energy content of a fuel is the heat energy obtained when a certain quantity is burned (such as a gallon, litre, or kilogram). It is sometimes called the heat of combustion. There exist two different values of specific heat energy for the same batch of fuel: the high (or gross) heat of combustion and the low (or net) heat of combustion. The high value is obtained when, after combustion, the water in the exhaust is in liquid form; for the low value, the exhaust has all the water in vapor form (steam). Since water vapor gives up heat energy when it condenses, the high value is larger because it includes the latent heat of vaporization of water. The difference between the high and low values is significant, about 8 or 9%, and accounts for most of the apparent discrepancy in quoted heat values of gasoline. In the U.S. (and in the table below) the high heat values have traditionally been used, but in many other countries the low heat values are commonly used.
| Fuel type | MJ/L | MJ/kg | BTU/imp gal | BTU/US gal | Research octane number (RON) |
| --- | --- | --- | --- | --- | --- |
| Regular gasoline/petrol | 34.8 | ~47 | 150,100 | 125,000 | Min. 91 |
| Premium gasoline/petrol | | ~46 | | | Min. 95 |
| Autogas (LPG) (60% propane and 40% butane) | 25.5–28.7 | ~51 | | | 108–110 |
| Ethanol | 23.5 | 31.1[5] | 101,600 | 84,600 | 129 |
| Methanol | 17.9 | 19.9 | 77,600 | 64,600 | 123 |
| Gasohol (10% ethanol and 90% gasoline) | 33.7 | ~45 | 145,200 | 121,000 | 93/94 |
| E85 (85% ethanol and 15% gasoline) | 25.2 | ~33 | 108,878 | 90,660 | 100–105 |
| Diesel | 38.6 | ~48 | 166,600 | 138,700 | N/A (see cetane) |
| Biodiesel | 35.1 | 39.9 | 151,600 | 126,200 | N/A (see cetane) |
| Vegetable oil (using 9.00 kcal/g) | 34.3 | 37.7 | 147,894 | 123,143 | |
| Aviation gasoline | 33.5 | 46.8 | 144,400 | 120,200 | 80–145 |
| Jet fuel, naphtha | 35.5 | 46.6 | 153,100 | 127,500 | N/A to turbine engines |
| Jet fuel, kerosene | 37.6 | ~47 | 162,100 | 135,000 | N/A to turbine engines |
| Liquefied natural gas | 25.3 | ~55 | 109,000 | 90,800 | |
| Liquid hydrogen | 9.3 | ~130 | 40,467 | 33,696 | |
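The vegetable-oil entry is derived from its stated 9.00 kcal/g: converting kcal/g to MJ/kg is a single multiplication, as this minimal Python sketch shows (using the thermochemical factor 1 kcal = 4.184 kJ; note that kcal/g and MJ/kg scale identically, since numerator and denominator each shift by a factor of 1000):

```python
def kcal_per_g_to_mj_per_kg(kcal_per_g):
    """Convert a specific energy from kcal/g to MJ/kg (1 kcal = 4.184 kJ)."""
    return kcal_per_g * 4.184

print(round(kcal_per_g_to_mj_per_kg(9.00), 1))  # 37.7, matching the vegetable-oil row
```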
Neither the gross heat of combustion nor the net heat of combustion gives the theoretical amount of mechanical energy (work) that can be obtained from the reaction. (This is given by the change in Gibbs free energy, and is around 45.7 MJ/kg for gasoline.) The actual amount of mechanical work obtained from fuel (the inverse of the specific fuel consumption) depends on the engine. A figure of 17.6 MJ/kg is possible with a gasoline engine, and 19.1 MJ/kg for a diesel engine. See Brake specific fuel consumption for more information.[clarification needed]
## Fuel efficiency of motor vehicles
### Measurement
The fuel efficiency of motor vehicles can be expressed in two ways:
• Fuel consumption is the amount of fuel used per unit distance; for example, litres per 100 kilometres (L/100 km). The lower the value, the more economic a vehicle is (the less fuel it needs to travel a certain distance). This is the measure generally used across Europe (except the UK, Denmark and the Netherlands - see below), New Zealand, Australia and Canada, as well as in Uruguay, Paraguay, Guatemala, Colombia, China, Madagascar, and post-Soviet countries.
• Fuel economy is the distance travelled per unit volume of fuel used; for example, kilometres per litre (km/L) or miles per gallon (MPG), where 1 MPG (imperial) ≈ 0.354006 km/L. The higher the value, the more economic a vehicle is (the more distance it can travel with a certain volume of fuel). This measure is popular in the US and the UK (mpg), but in Europe, India, Japan, South Korea and Latin America the metric unit km/L is used instead.
The formula for converting to miles per US gallon (3.7854 L) from L/100 km is $\frac{235.215}{x}$, where $x$ is the value in L/100 km. For miles per imperial gallon (4.5461 L) the formula is $\frac{282.481}{x}$.
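These formulas translate directly into code. A minimal Python sketch (function names are illustrative; constants are those quoted above):

```python
def l_per_100km_to_mpg_us(x):
    """Miles per US gallon from litres per 100 km (conversion is its own inverse)."""
    return 235.215 / x

def l_per_100km_to_mpg_imp(x):
    """Miles per imperial gallon from litres per 100 km."""
    return 282.481 / x

print(round(l_per_100km_to_mpg_us(5.0)))   # 47 mpg (US) for 5 L/100 km
print(round(l_per_100km_to_mpg_imp(5.0)))  # 56 mpg (imp) for 5 L/100 km
```

Because the relationship is a simple reciprocal scaled by a constant, the same functions also convert mpg back to L/100 km.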
In parts of Europe, the two standard measuring cycles for the "litre/100 km" value are "urban" traffic with speeds up to 50 km/h from a cold start, and then "extra urban" travel at various speeds up to 120 km/h, which follows the urban test. A combined figure is also quoted, showing the total fuel consumed divided by the total distance traveled in both tests.
### Statistics
A reasonably modern European supermini and many mid-size cars, including station wagons, may manage motorway travel at 5 L/100 km (47 mpg US/56 mpg imp) or 6.5 L/100 km in city traffic (36 mpg US/43 mpg imp), with carbon dioxide emissions of around 140 g/km.
An average North American mid-size car travels 21 mpg (US) (11 L/100 km) city, 27 mpg (US) (9 L/100 km) highway; a full-size SUV usually travels 13 mpg (US) (18 L/100 km) city and 16 mpg (US) (15 L/100 km) highway. Pickup trucks vary considerably; whereas a 4 cylinder-engined light pickup can achieve 28 mpg (8 L/100 km), a V8 full-size pickup with extended cabin only travels 13 mpg (US) (18 L/100 km) city and 15 mpg (US) (15 L/100 km) highway.
The average fuel economy for all vehicles on the road is higher in Europe than the United States because the higher cost of fuel changes consumer behaviour. In the UK, a gallon of gas without tax would cost US$1.97, but with taxes cost US$6.06 in 2005. The average cost in the United States was US\$2.61.[7]
European-built cars are generally more fuel-efficient than US vehicles. While Europe has many higher-efficiency diesel cars, European gasoline vehicles are on average also more efficient than gasoline-powered vehicles in the USA. Most European vehicles cited in the CSI study run on diesel engines, which tend to achieve greater fuel efficiency than gas engines. Selling those cars in the United States is difficult because of emission standards, notes Walter McManus, a fuel economy expert at the University of Michigan Transportation Research Institute. "For the most part, European diesels don’t meet U.S. emission standards", McManus said in 2007. Another reason many European models are not marketed in the United States is that labor unions object to the Big Three importing any new foreign-built models, regardless of fuel economy, while laying off workers at home.[8]
An example of European cars' capabilities of fuel economy is the microcar Smart Fortwo cdi, which can achieve up to 3.4 L/100 km (69.2 mpg US) using a turbocharged three-cylinder 41 bhp (30 kW) Diesel engine. The Fortwo is produced by Daimler AG and is only sold by one company in the United States. Furthermore, the world record in fuel economy of production cars is held by the Volkswagen Group, with special production models (labeled "3L") of the Volkswagen Lupo and the Audi A2, consuming as little as 3 L/100 km (94 mpg‑imp; 78 mpg‑US).[9][clarification needed]
Diesel engines generally achieve greater fuel efficiency than petrol (gasoline) engines. Passenger car diesel engines have energy efficiency of up to 41% but more typically 30%, and petrol engines of up to 37.3%, but more typically 20%. A common margin is 25% more miles per gallon for an efficient turbodiesel.
For example, the current model Skoda Octavia, using Volkswagen engines, has a combined European fuel efficiency of 41.3 mpg‑US (5.70 L/100 km) for the 105 bhp (78 kW) petrol engine and 52.3 mpg‑US (4.50 L/100 km) for the 105 bhp (78 kW) — and heavier — diesel engine. The higher compression ratio is helpful in raising the energy efficiency, but diesel fuel also contains approximately 10% more energy per unit volume than gasoline which contributes to the reduced fuel consumption for a given power output.
In 2002, the United States had 85,174,776 trucks, and averaged 13.5 miles per US gallon (17.4 L/100 km; 16.2 mpg‑imp). Large trucks, over 33,000 pounds (15,000 kg), averaged 5.7 miles per US gallon (41 L/100 km; 6.8 mpg‑imp).[10]
Truck fuel economy

| GVWR | Number | Percentage | Average miles per truck | Fuel economy (mpg) | Percentage of fuel use |
| --- | --- | --- | --- | --- | --- |
| 6,000 lbs and less | 51,941,389 | 61.00% | 11,882 | 17.6 | 42.70% |
| 6,001–10,000 lbs | 28,041,234 | 32.90% | 12,684 | 14.3 | 30.50% |
| Light truck subtotal | 79,982,623 | 93.90% | 12,163 | 16.2 | 73.20% |
| 10,001–14,000 lbs | 691,342 | 0.80% | 14,094 | 10.5 | 1.10% |
| 14,001–16,000 lbs | 290,980 | 0.30% | 15,441 | 8.5 | 0.50% |
| 16,001–19,500 lbs | 166,472 | 0.20% | 11,645 | 7.9 | 0.30% |
| 19,501–26,000 lbs | 1,709,574 | 2.00% | 12,671 | 7.0 | 3.20% |
| Medium truck subtotal | 2,858,368 | 3.40% | 13,237 | 8.0 | 5.20% |
| 26,001–33,000 lbs | 179,790 | 0.20% | 30,708 | 6.4 | 0.90% |
| 33,001 lbs and up | 2,153,996 | 2.50% | 45,739 | 5.7 | 20.70% |
| Heavy truck subtotal | 2,333,786 | 2.70% | 44,581 | 5.8 | 21.60% |
| Total | 85,174,776 | 100.00% | 13,088 | 13.5 | 100.00% |
The average economy of automobiles in the United States in 2002 was 22.0 miles per US gallon (10.7 L/100 km; 26.4 mpg‑imp). By 2010 this had increased to 23.0 miles per US gallon (10.2 L/100 km; 27.6 mpg‑imp). Average fuel economy in the United States gradually declined until 1973, when it reached a low of 13.4 miles per US gallon (17.6 L/100 km; 16.1 mpg‑imp), and has gradually increased since, as a result of higher fuel costs.[11] A study indicates that a 10% increase in gas prices will eventually produce a 2.04% increase in fuel economy.[12] One method used by carmakers to increase fuel efficiency is lightweighting, in which lighter-weight materials are substituted for heavier ones to improve engine performance and handling.[13]
## Fuel efficiency in microgravity
How fuel combusts affects how much energy is produced. The National Aeronautics and Space Administration (NASA) has investigated fuel consumption in microgravity.
The shape of a flame under normal gravity conditions depends on convection: soot tends to rise to the top of the flame, such as in a candle, making the flame yellow. In microgravity or zero gravity, such as in outer space, convection no longer occurs, and the flame becomes spherical, with a tendency to become bluer and more efficient. There are several possible explanations for this difference, of which the most likely is that the temperature is distributed evenly enough that soot does not form and combustion is complete (National Aeronautics and Space Administration, April 2005). Experiments by NASA reveal that diffusion flames in microgravity allow more of the soot that is produced to be completely oxidised than diffusion flames on Earth, because of a series of mechanisms that behave differently in microgravity than under normal gravity conditions (LSP-1 experiment results, National Aeronautics and Space Administration, April 2005). Premixed flames in microgravity burn at a much slower rate and more efficiently than even a candle on Earth, and last much longer.[14]
## Transportation
### Vehicle efficiency and transportation pollution
Fuel efficiency directly affects emissions causing pollution by affecting the amount of fuel used. However, emissions also depend on the fuel source used to drive the vehicle concerned. Cars, for example, can run on a number of fuel types other than gasoline, such as natural gas, LPG, biofuel, or electricity, each of which creates different quantities of atmospheric pollution.
A kilogram of carbon, whether contained in petrol, diesel, kerosene, or any other hydrocarbon fuel in a vehicle, leads to approximately 3.6 kg of CO2 emissions.[15] Due to the carbon content of gasoline, its combustion emits 2.3 kg/L (19.4 lb/US gal) of CO2; since diesel fuel is more energy dense per unit volume, diesel emits 2.6 kg/L (22.2 lb/US gal).[15] This figure is only the CO2 emissions of the final fuel product and does not include additional CO2 emissions created during the drilling, pumping, transportation and refining steps required to produce the fuel. Additional measures to reduce overall emission includes improvements to the efficiency of air conditioners, lights and tires.
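The per-litre factors quoted above make tailpipe CO2 a simple multiplication; the following is a minimal Python sketch (names illustrative, and remember these figures exclude upstream drilling, refining, and transport emissions):

```python
# Approximate tailpipe CO2 per litre of fuel burned, from the text above.
CO2_KG_PER_L = {"petrol": 2.3, "diesel": 2.6}

def tailpipe_co2_kg(litres, fuel="petrol"):
    """Tailpipe CO2 (kg) from burning the given volume of fuel."""
    return litres * CO2_KG_PER_L[fuel]

# A car consuming 6.5 L/100 km over 1000 km burns 65 L of petrol:
print(round(tailpipe_co2_kg(65.0), 1))           # 149.5 kg of CO2
print(round(tailpipe_co2_kg(10.0, "diesel"), 1)) # 26.0 kg of CO2
```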
### Driving technique
Many drivers have the potential to improve their fuel efficiency significantly.[16] Basic fuel-efficient driving techniques can be effective: simple things such as keeping tires properly inflated, keeping a vehicle well-maintained and avoiding idling can dramatically improve fuel efficiency.[17]
There is a growing community of enthusiasts known as hypermilers who develop and practice driving techniques to increase fuel efficiency and reduce consumption. Hypermilers have broken records of fuel efficiency, for example, achieving 109 miles per gallon in a Prius. In non-hybrid vehicles these techniques are also beneficial, with fuel efficiencies of up to 59 mpg‑US (4.0 L/100 km) in a Honda Accord or 30 mpg‑US (7.8 L/100 km) in an Acura MDX.[18]
## Advanced technology improvements to improve fuel efficiency
The most efficient machines for converting energy to rotary motion are electric motors, as used in electric vehicles. However, electricity is not a primary energy source so the efficiency of the electricity production has also to be taken into account. Railway trains can be powered using electricity, delivered through an additional running rail, overhead catenary system or by on-board generators used in diesel-electric locomotives as common on the US and UK rail networks. Pollution produced from centralised generation of electricity is emitted at a distant power station, rather than "on site". Pollution can be reduced by using more railway electrification and low carbon power for electricity. Some railways, such as the French SNCF and Swiss federal railways derive most, if not 100% of their power, from hydroelectric or nuclear power stations, therefore atmospheric pollution from their rail networks is very low. This was reflected in a study by AEA Technology between a Eurostar train and airline journeys between London and Paris, which showed the trains on average emitting 10 times less CO2, per passenger, than planes, helped in part by French nuclear generation.[19]
### Hydrogen fuel cells
In the future, hydrogen cars may be commercially available. Toyota is test-marketing vehicles powered by hydrogen fuel cells in southern California, where a series of hydrogen fueling stations has been established. Powered either through chemical reactions in a fuel cell that create electricity to drive very efficient electrical motors, or by directly burning hydrogen in a combustion engine (nearly identically to a natural gas vehicle, and similarly compatible with both natural gas and gasoline), these vehicles promise to have near-zero pollution from the tailpipe (exhaust pipe). Potentially the atmospheric pollution could be minimal, provided the hydrogen is made by electrolysis using electricity from non-polluting sources such as solar, wind, hydroelectricity or nuclear. Commercial hydrogen production, however, uses fossil fuels and produces more carbon dioxide than hydrogen.
Because there are pollutants involved in the manufacture and destruction of a car and the production, transmission and storage of electricity and hydrogen, the label "zero pollution" applies only to the car's conversion of stored energy into movement.
In 2004, a consortium of major auto-makers — BMW, General Motors, Honda, Toyota and Volkswagen/Audi — introduced the "Top Tier Detergent Gasoline Standard" for gasoline brands in the US and Canada that meet their minimum standards for detergent content[20] and do not contain metallic additives. Top Tier gasoline contains higher levels of detergent additives in order to prevent the build-up of deposits (typically on fuel injectors and intake valves) known to reduce fuel economy and engine performance.[21]
## References
1. ^ "Information on the fuel consumption of new cars". Retrieved 7 November 2019.
2. ^ "Learn More About the Fuel Economy Label for Gasoline Vehicles". Archived from the original on 2013-07-05.
3. ^ "Simple tips and tricks to increase fuel efficiency of your car | CarSangrah". CarSangrah. 2018-06-07. Retrieved 2018-07-24.
4. ^ "How Hybrid Work". U.S. Department of Energy. Archived from the original on 2015-07-08. Retrieved 2014-01-16.
5. ^ Calculated from heats of formation. Does not correspond exactly to the figure for MJ/L divided by density.
6. ^
7. ^ "Gas prices too high? Try Europe". Christian Science Monitor. 26 August 2005. Archived from the original on 18 September 2012.
8. ^ "U.S. 'stuck in reverse' on fuel economy". NBC News. 28 February 2007.
9. ^
10. ^ Heavy Vehicles and Characteristics Archived 2012-07-23 at the Wayback Machine Table 5.4
11. ^ Light Vehicles and Characteristics Archived 2012-09-15 at the Wayback Machine Table 4.1
12. ^
13. ^ Dee-Ann Durbin of the Associated Press, June 17, 2014, Mercury News, Auto industry gets serious about lighter materials Archived 2015-04-15 at the Wayback Machine, Retrieved April 11, 2015, "...Automakers have been experimenting for decades with lightweighting... the effort is gaining urgency with the adoption of tougher gas mileage standards. ..."
14. ^ SOFBAL-2 experiment results Archived 2007-03-12 at the Wayback Machine, National Aeronautics and Space Administration, April 2005.
15. ^ a b "Emission Facts: Average Carbon Dioxide Emissions Resulting from Gasoline and Diesel Fuel". Office of Transportation and Air Quality. United States Environmental Protection Agency. February 2005. Archived from the original on 2009-02-28. Retrieved 2009-07-28.
16. ^ Beusen; et al. (2009). "Using on-board logging devices to study the long-term impact of an eco-driving course". Transportation Research D. 14 (7): 514–520. doi:10.1016/j.trd.2009.05.009. Archived from the original on 2013-10-19.
17. ^ "20 Ways to Improve Your Fuel Efficiency and Save Money at the Pump". Archived from the original on 2016-08-16.
18. ^ Gaffney, Dennis (2007-01-01). "This Guy Can Get 59 MPG in a Plain Old Accord. Beat That, Punk". Mother Jones. Archived from the original on 2007-04-15. Retrieved 2007-04-20.
19. ^ "Rail 10 times better than air in London-Paris CO2 comparison - Transport & Environment". Archived from the original on 2007-09-28.
20. ^ Top Tier Gasoline Archived 2013-08-15 at the Wayback Machine
21. ^ "Deposit Control Standards". Archived from the original on 2004-08-06. Retrieved 2012-10-19.
Horm Metab Res 2010; 42: S3-S36
DOI: 10.1055/s-0029-1240928
Guidelines
© Georg Thieme Verlag KG Stuttgart · New York
# A European Evidence-Based Guideline for the Prevention of Type 2 Diabetes
B. Paulweber1 , P. Valensi2 , J. Lindström3 , N. M. Lalic4 , C. J. Greaves5 , M. McKee6 , K. Kissimova-Skarbek7 , S. Liatis8 , E. Cosson2 , J. Szendroedi9 , K. E. Sheppard5 , K. Charlesworth6 , A.-M. Felton10 , M. Hall11 , A. Rissanen12 , J. Tuomilehto13 , P. E. Schwarz14 , M. Roden9 Writing Group: M. Paulweber, A. Stadlmayr, L. Kedenko, N. Katsilambros, K. Makrilakis, Z. Kamenov, P. Evans, A. Gilis-Januszewska, K. Lalic, A. Jotic, P. Djordevic, V. Dimitrijevic-Sreckovic, U. Hühmer, B. Kulzer, S. Puhl, Y. H. Lee-Barkey, A. AlKerwi, C. Abraham, W. Hardeman IMAGE Study Group: T. Acosta, M. Adler, A. AlKerwi, N. Barengo, R. Barengo, J. M. Boavida, K. Charlesworth, V. Christov, B. Claussen, X. Cos, E. Cosson, S. Deceukelier, V. Dimitrijevic-Sreckovic, P. Djordjevic, P. Evans, A.-M. Felton, M. Fischer, R. Gabriel-Sanchez, A. Gilis-Januszewska, M. Goldfracht, J. L. Gomez, C. J. Greaves, M. Hall, U. Handke, H. Hauner, J. Herbst, N. Hermanns, L. Herrebrugh, C. Huber, U. Hühmer, J. Huttunen, A. Jotic, Z. Kamenov, S. Karadeniz, N. Katsilambros, M. Khalangot, K. Kissimova-Skarbek, D. Köhler, V. Kopp, P. Kronsbein, B. Kulzer, D. Kyne-Grzebalski, K. Lalic, N. Lalic, R. Landgraf, Y. H. Lee-Barkey, S. Liatis, J. Lindström, K. Makrilakis, C. McIntosh, M. McKee, A. C. Mesquita, D. Misina, F. Muylle, A. Neumann, A. C. Paiva, P. Pajunen, B. Paulweber, M. Peltonen, L. Perrenoud, A. Pfeiffer, A. Pölönen, S. Puhl, F. Raposo, T. Reinehr, A. Rissanen, C. Robinson, M. Roden, U. Rothe, T. Saaristo, J. Scholl, P. E. Schwarz, K. E. Sheppard, S. Spiers, T. Stemper, B. Stratmann, J. Szendroedi, Z. Szybinski, T. Tankova, V. Telle-Hjellset, G. Terry, D. Tolks, F. Toti, J. Tuomilehto, A. Undeutsch, C. Valadas, P. Valensi, D. Velickiene, P. Vermunt, R. Weiss, J. Wens, T. Yilmaz
• 1Paracelsus Medical University, Salzburg, Austria
• 2Department of Endocrinology Diabetology Nutrition, Jean Verdier Hospital, AP‐HP, Paris-Nord University, CRNH‐IdF, Bondy, France
• 3Department of Chronic Disease Prevention Diabetes Prevention Unit, National Institute for Health and Welfare (THL), Helsinki, Finland
• 4Diabetes Center, Institute for Endocrinology, Diabetes and Metabolic Diseases School of Medicine, Clinical Center of Serbia, University of Belgrade, Belgrade, Serbia
• 5Peninsula Medical School, University of Exeter, Exeter, United Kingdom
• 6European Centre on Health of Societies in Transition, London School of Hygiene and Tropical Medicine, London, United Kingdom
• 7Executive Office, International Diabetes Federation (IDF), Brussels, Belgium
• 8Diabetes Center, Laiko Hospital, Athens University Medical School, Athens, Greece
• 9Karl-Landsteiner Institute for Endocrinology and Metabolism, Vienna, Austria, Institute for Clinical Diabetology, German Diabetes Center, and Department of Metabolic Diseases, Heinrich-Heine University Düsseldorf, Düsseldorf, Germany
• 10President Federation of European Nurses in Diabetes (FEND), London, United Kingdom
• 11Board member, International Diabetes Foundation-Europe (IDF-Europe), Brussels, Belgium
• 12Vice-President, European Association for the Study of Obesity (EASO) and Obesity Research Unit, Helsinki University Central Hospital, Helsinki, Finland
• 13Hjelt Institute, Department of Public Health, University of Helsinki, Helsinki, Finland, and South Ostrobothnia Central Hospital, Seinäjoki, Finland, and Spanish Diabetes Foundation, Madrid, Spain
• 14Carl Gustav Carus Medical Faculty, Technical University of Dresden, Dresden, Germany
Further Information
Univ. Prof. Dr. Michael Roden
Karl-Landsteiner Institute for Endocrinology and Metabolism
Hanusch Hospital
1140 Vienna
Austria
Institute for Clinical Diabetology, German Diabetes Center, and Department of Metabolic Diseases
University Clinics
Heinrich Heine University Düsseldorf
Auf'm Hennekamp 65
40225 Düsseldorf
Germany
Email: [email protected]
### Publication History
Publication Date:
13 April 2010 (online)
### Abstract
Background: The prevalence and socioeconomic burden of type 2 diabetes (T2DM) and associated co-morbidities are rising worldwide.
Aims: This guideline provides evidence-based recommendations for preventing T2DM.
Methods: A European multidisciplinary consortium systematically reviewed the evidence on the effectiveness of screening and interventions for T2DM prevention using SIGN criteria.
Results: Obesity and sedentary lifestyle are the main modifiable risk factors. Age and ethnicity are non-modifiable risk factors. Case-finding should follow a step-wise procedure using risk questionnaires and oral glucose tolerance testing. Persons with impaired glucose tolerance and/or fasting glucose are at high risk and should be prioritized for intensive intervention. Interventions supporting lifestyle changes delay the onset of T2DM in high-risk adults (number-needed-to-treat: 6.4 over 1.8–4.6 years). These should be supported by inter-sectoral strategies that create health promoting environments. Sustained body weight reduction by ≥ 5 % lowers risk. Currently metformin, acarbose and orlistat can be considered as second-line prevention options. The population approach should use organized measures to raise awareness and change lifestyle with specific approaches for adolescents, minorities and disadvantaged people. Interventions promoting lifestyle changes are more effective if they target both diet and physical activity, mobilize social support, involve the planned use of established behaviour change techniques, and provide frequent contacts. Cost-effectiveness analysis should take a societal perspective.
Conclusions: Prevention using lifestyle modifications in high-risk individuals is cost-effective and should be embedded in evaluated models of care. Effective prevention plans are predicated upon sustained government initiatives comprising advocacy, community support, fiscal and legislative changes, private sector engagement and continuous media communication.
### Abbreviations
ADDITION: Anglo-Danish-Dutch study of intensive treatment in people with screen-detected diabetes in primary care
AES: Androgen Excess Society
AHA: American Heart Association
CDQDPS: Da-Qing Study
CHD: Coronary heart disease
CI: Confidence interval
CURES: Chennai Urban Rural Epidemiological Study
DECODE: Diabetes Epidemiology: Collaborative analysis Of Diagnostic criteria in Europe
DE-Plan: Diabetes in Europe-Prevention using Lifestyle, Physical Activity and Nutritional Intervention Plan
DESIR: Data from an Epidemiological Study on the Insulin Resistance syndrome
DPP: US Diabetes Prevention Program
DPS: Finnish Diabetes Prevention Study
DREAM: Diabetes REduction Assessment with ramipril and rosiglitazone Medication
EASD: European Association for the Study of Diabetes
EPIC: European Prospective Investigation into Cancer and Nutrition study
FINDRISC: FINnish Diabetes Risk Score
FPG: Fasting plasma glucose concentration
GDM: Gestational diabetes
GWAS: Genome-wide association studies
HR: Hazard ratio
IDF: International Diabetes Federation
IDPP: Indian Diabetes Prevention Program
IFG: Impaired fasting glucose
IGLOO: Impaired Glucose tolerance and Long-term Outcomes Observational
IGT: Impaired glucose tolerance
MetSy: Metabolic syndrome
MRFIT: Multiple Risk Factor Intervention Trial
NCEP‐ATP III: National Cholesterol Education Program – Adult Treatment Panel III
NGT: Normal glucose tolerance
NHANES III: Third National Health and Nutrition Examination Survey
NNT: Number needed to treat
OGTT: Oral glucose tolerance test
OR: Odds ratio
PCOS: Polycystic ovary syndrome
PG: Plasma glucose concentration
PIPOD: Pioglitazone In Prevention of Diabetes
RCT: Randomized controlled trial
RIO: Rimonabant-In-Obesity
RR: Relative risk
SES: Socioeconomic status
SMOMS: Scandinavian Multicenter on Orlistat in Metabolic Syndrome
SOS: Swedish Obese Subjects study
STOP-NIDDM: Study To Prevent Non-Insulin-Dependent Diabetes Mellitus
T2DM: Type 2 diabetes mellitus
TRIPOD: Troglitazone In Prevention of Diabetes
WHO: World Health Organisation
XENDOS: XEnical in the prevention of Diabetes in Obese Subjects
### Introduction
It is estimated that the number of people with diabetes will reach 285 million worldwide in 2010, with almost half of those affected in the 20–60 age group. In Europe about 55 million people aged 20–79 will have diabetes in 2010 and this number is expected to rise to 66 million by 2030 unless effective preventive strategies are implemented. Type 2 diabetes (T2DM) accounts for about 90 % of diabetes cases. People with T2DM have a 2- to 4-fold increased risk of cardiovascular disease (CVD) and at least two thirds die from CVD. Increased CVD risk is already present in prediabetic states, particularly in individuals with impaired glucose tolerance (IGT) [1], [2] and/or the metabolic syndrome (MetSy) [3]. Diabetes and its complications represent an enormous burden not only for patients but also for society. Direct healthcare costs, which represent about 30 % of total costs to society, will be about 70 billion € per year in 2010. It has been estimated that, for individuals diagnosed with diabetes at the age of 40 years, men lose on average 11.6 life-years and 18.6 quality-adjusted life years (QALYs) and women lose 14.3 life-years and 22.0 QALYs [4]. Thus, primary prevention of T2DM and its complications is a major public health issue.
Despite the fact that inherited factors predispose to T2DM, environmental and lifestyle factors are held mainly responsible for the increasing prevalence of the disease over the past decades. There is now strong evidence from controlled trials that T2DM can be prevented by interventions that deliver relatively modest lifestyle changes. Thus, the potential to prevent T2DM represents a major opportunity for European governments and healthcare systems.
In order to address the challenge of reversing the epidemic of T2DM, a European multidisciplinary consortium (the IMAGE project: www.image-project.eu) developed this guideline, which provides evidence-based recommendations for healthcare practitioners, organizations, and funders on the prevention of T2DM in European healthcare settings.
### Definition of Risk and Target populations
#### Definition of risk
The risk for T2DM is predominantly determined by number and severity of non-modifiable and modifiable risk factors ([Table 1]).
| Non-modifiable risk factors | Modifiable risk factors |
| --- | --- |
| Age | Overweight and obesity |
| Family history/Genetic predisposition | Physical inactivity |
| Ethnicity | Disturbances in intrauterine development/prematurity |
| History of gestational diabetes (GDM) | Impaired fasting glucose (IFG)/Impaired glucose tolerance (IGT) |
| Polycystic ovary syndrome (PCOS) | Metabolic syndrome (MetSy) |
| | Dietary factors |
| | Diabetogenic drugs |
| | Depression |
| | Obesogenic/diabetogenic environment |
| | Low socio-economic status |
#### Non-modifiable risk factors
Age. Age is one of the strongest risk factors for T2DM (A). Epidemiological data for diabetes and impaired glucose regulation from 13 European countries have been published by the DECODE study group [5]. The prevalence of diabetes rises with age up to the 8th decade in both men and women. It is less than 10 % in subjects below 60 years and exceeds 20 % above 80 years. The mean plasma glucose concentration at 2 hours (2-h PG) of the oral glucose tolerance test (OGTT) rises with age in European populations, particularly after 50 years. Women have higher mean 2-h PG levels than men, particularly above 70 years. Mean fasting plasma glucose (FPG) levels increase only slightly with age. They are higher in men than in women aged 30–69 years and become higher in women after 70 years. Among middle-aged subjects, the prevalence of impaired glucose regulation (impaired glucose tolerance [IGT], impaired fasting glucose [IFG], or both) is about 15 %, whereas in the elderly, 35–40 % of Europeans have impaired glucose regulation. Over recent years, the age of onset of diabetes has decreased considerably in countries in which the prevalence of obesity has increased significantly [6], [7], [8], [9], [10]. T2DM now accounts for as many as 50 % of newly diagnosed cases of diabetes in pediatric populations [11]. Earlier onset of T2DM leads to earlier onset of its complications. Markers of increased CVD risk may appear even before the diagnosis of the MetSy among obese children and adolescents [12], and metabolic abnormalities diagnosed in adolescence tend to persist into adulthood [13].
Family history/genetic predisposition. Occurrence of the disease is highly concordant (60–90 %) in monozygotic twin pairs, but less so (17–37 %) in dizygotic twins [14], [15], [16], [17] (A). The child of a parent with T2DM has a 40 % chance of developing the disease, whereas the risk in the general population is about 7 % [18]. In the Botnia study, a positive family history with at least one affected first degree relative was associated with a hazard ratio (HR) of 2.2 for development of the disease [19]. In recent years a large number of genetic variants have been identified, which increase the risk for T2DM [20]. Genome-wide association studies provided by far the biggest increment to our knowledge of the genetics of T2DM [21], [22], [23], [24], [25], [26]. At least 25 gene loci have been identified so far affecting susceptibility for T2DM [27] (A). The effect on T2DM risk per susceptibility allele ranges from about 10 % to 40 %. The majority of these genes appear to play a role in beta-cell function rather than in insulin sensitivity. Collectively, however, these variants explain less than 10 % of the genetic component of diabetes risk. Therefore despite the encouraging progress in our understanding of the genetic basis of T2DM, it is too early to use genetic information as a tool for targeting preventive efforts [19].
Ethnicity. Studies in multiethnic populations suggest that some ethnic groups have a particular predisposition, most likely on a genetic basis, to develop insulin resistance and T2DM, when exposed to adverse conditions [20]. There are wide differences in the prevalence of diabetes between ethnic groups (A) [28]. The prevalence of diagnosed diabetes among Hispanics is 1.9 times higher than that among Caucasians. Diabetes is diagnosed at an earlier age and Hispanics suffer from higher rates of diabetes-related complications and mortality [29]. Afro-Caribbeans and Asian Indians also exhibit higher prevalence of T2DM than Caucasians [30]. One important factor contributing to increased T2DM risk in Asian Indians is the greater insulin resistance compared to Caucasians [31].
Gestational diabetes (GDM). GDM is defined in terms of having glucose intolerance in the diabetic range as assessed from OGTT and/or FPG that begins or is first diagnosed during pregnancy [32], [33]. It is estimated to affect between 3 and 5 % of all pregnancies [32]. There is a strong correlation between a history of GDM and later development of T2DM and its co-morbidities [34]. A recent meta-analysis of 20 studies reported that women with a history of GDM had about a 7.5-fold increased risk for T2DM compared with women with a normoglycemic pregnancy [35] (A). Ethnicity has been proven to be an independent risk factor for GDM [36]. In the DPP, women with a history of GDM randomized to placebo had an incidence rate of T2DM about 70 % higher than that of women without such a history, despite equivalent levels of glucose intolerance at baseline [37]. Metabolic assessments recommended after GDM are [33], [38] (D): post delivery (1–3 days): FPG or random PG to detect persistent or overt diabetes; 6–12 weeks postpartum: OGTT; 1 year postpartum: OGTT; annually: FPG; tri-annually and pre-pregnancy: OGTT to classify glucose metabolism.
Polycystic ovary syndrome (PCOS). PCOS affects about one in 15 women worldwide, including up to 10 % of women of reproductive age [39], and shows familial aggregation and ethnic variation in its prevalence. At present, there are three main definitions of PCOS. The National Institutes of Health (NIH) criteria require the presence of hyperandrogenism and/or hyperandrogenemia, chronic anovulation, and exclusion of related disorders such as hyperprolactinemia, thyroid disorders, and congenital adrenal hyperplasia [40]. The 2003 Rotterdam criteria include two or more of the following in addition to exclusion of related disorders: oligo-anovulation or anovulation, clinical and/or biochemical signs of hyperandrogenism, polycystic ovaries [41]. The most recent criteria were defined by a task force of the Androgen Excess Society (AES) in 2006, which recommended the following: hirsutism and/or hyperandrogenemia, oligo-ovulation and/or polycystic ovaries, and exclusion of other androgen excess or related disorders [42]. Using the NIH criteria in unselected populations of women of reproductive age, the prevalence of PCOS is 6.5–8.0 % [43]. The 2003 Rotterdam criteria result in a 1.5-fold higher prevalence of PCOS [44].
The etiology of PCOS is incompletely understood, but studies suggest a strong genetic component influenced by gestational environment and lifestyle factors. Most women with PCOS have increased insulin resistance and impaired β-cell function compared with age- and BMI-matched controls [45]. Approximately 30 % of women with PCOS have IGT and up to 10 % are diabetic [46]. In the United States up to 40 % of all women with PCOS have developed T2DM or IGT by the age of 40 years [47]. More pronounced endocrine disturbances conferring a particularly high risk for T2DM are observed in women with PCOS and obesity as compared with normal weight women with this condition [48]. Women with PCOS have a higher incidence of GDM, pregnancy-induced hypertension, and preeclampsia [49]. A recent meta-analysis revealed an approximately 3-fold increased risk as assessed from the odds ratio of 2.94 [95 % confidence interval, CI 1.70, 5.08] for GDM among women with PCOS [50].
#### Modifiable risk factors
Overweight and obesity. Obesity (BMI ≥ 30 kg/m2) and overweight (BMI 25–30 kg/m2) increase the risk for developing both IGT and T2DM at all ages [51]. They act, at least in part, by inducing insulin resistance [52]. More than 80 % of cases of T2DM can be attributed to obesity. Reversal of obesity also decreases the risk for T2DM (A) [53] and improves glycemic control in patients with established diabetes (A) [54]. A strong curvilinear relationship between BMI and the risk for T2DM was found in women in the Nurses' Health Study (B) [55]. The age-adjusted relative risk for diabetes was 6.1 times higher for people with BMI > 35 kg/m2 than for people with BMI < 22 kg/m2. The degree of insulin resistance and the incidence of T2DM are highest in those subjects with upper body or abdominal adiposity, as assessed from waist circumference [56], [57]. Adiposity of the “gynoid” type, which primarily affects the gluteal and femoral region, is not associated with glucose intolerance or increased CVD risk. However, studies trying to discern the relative importance of waist circumference (or waist-to-hip ratio) compared to BMI regarding risk for T2DM development have not shown a major advantage of one over the other (A) [58].
Physical inactivity. Recent data from the Nurses' Health Study indicate that both obesity and physical inactivity independently contribute to the development of T2DM: the magnitude of risk contributed by obesity seems to be greater than that imparted by lack of physical activity [59], [60]. The benefit of physical activity in preventing diabetes has been demonstrated in several studies (A) [61], [62], [63], [64], [65], [66], [67], [68], [69].
Disturbances in intrauterine development/prematurity. There is an inverse association between birth weight and risk for T2DM. Specifically, subjects who had a low birth weight for gestational age have, as adults, reduced β-cell function [70], insulin resistance [71] and an increased incidence of T2DM (B) [72]. Small-for-gestational-age babies are those whose birth weights lie below the 10th percentile for their gestational age. Low birth weight (< 2500 g) is sometimes used synonymously. Thinness at birth and in adult life have opposing effects on insulin resistance, such that subjects who were underweight at birth, but who become overweight in middle age, have the most severe insulin resistance and the greatest risk for T2DM [73]. Higher birth weight (> 4000 g) may also be associated with an increased risk for T2DM (B) [74]. Large-for-gestational-age babies are those whose birth weights lie above the 90th percentile for their gestational age. A meta-analysis of 14 studies demonstrated a U-shaped relationship between birth weight and diabetes risk [75]. Both high and low birth weight were similarly associated with increased risk for diabetes later in life (OR: 1.36 and 1.47). Children born prematurely, whatever their weight, may also be at increased risk for T2DM (B) [76], [77].
Impaired fasting glucose (IFG) and impaired glucose tolerance (IGT). IFG and IGT are early abnormalities of glucose metabolism that precede diabetes. These are often called prediabetes. IFG is defined as an elevated FPG concentration between 6.1–6.9 mmol/l. In 2003 the lower cut-off value was reduced to 5.6 mmol/l by the American Diabetes Association (ADA) [78], which was not accepted by the WHO in 2006 [79] ([Table 2]). IGT is defined as an elevated PG between 7.8 and 11.1 mmol/l at 2 hours after a 75-g OGTT, in the presence of an FPG < 7 mmol/l [78], [80]. It is clear that with the definitions above, there is overlap between the two groups. Thus, additional groups have been created, namely isolated IFG (i-IFG), isolated IGT (i-IGT) and IFG plus IGT (IFG + IGT).
| | Venous plasma (mmol/l / mg/dl) | Capillary whole blood (mmol/l / mg/dl) |
| --- | --- | --- |
| Fasting glucose | | |
| Normal fasting glucose | < 6.1 / < 110 | < 5.6 / < 100 |
| Impaired fasting glucose | ≥ 6.1 and < 7.0 / ≥ 110 and < 126** | ≥ 5.6 and < 6.1 / ≥ 100 and < 110 |
| Diabetes | ≥ 7.0 / ≥ 126 | ≥ 6.1 / ≥ 110 |
| 2 h glucose* | | |
| Normal glucose tolerance | < 7.8 / < 140 | < 7.8 / < 140 |
| Impaired glucose tolerance | ≥ 7.8 and < 11.1 / ≥ 140 and < 200 | ≥ 7.8 and < 11.1 / ≥ 140 and < 200 |
| Diabetes | ≥ 11.1 / ≥ 200 | ≥ 11.1 / ≥ 200 |

\* Glucose level 2 h after ingestion of a 75 g oral glucose load; if the 2 h glucose is not measured, status remains uncertain as diabetes or IGT cannot be excluded; ** according to the classification recommended by the ADA, impaired fasting glucose is defined as fasting plasma glucose levels between 5.6 and 7.0 mmol/l (100–126 mg/dl)
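These diagnostic cut-offs amount to a simple decision rule. The following sketch (purely illustrative, not part of the guideline; the function name and return labels are assumptions) classifies venous plasma values in mmol/l according to the WHO criteria:

```python
def classify_glucose(fpg_mmol_l, two_h_pg_mmol_l=None):
    """Classify glucose status from venous plasma values (WHO 2006 cut-offs).

    fpg_mmol_l: fasting plasma glucose; two_h_pg_mmol_l: optional 2-h OGTT value.
    Returns a set of labels; without a 2-h value, IGT or diabetes by OGTT
    cannot be excluded.
    """
    labels = set()
    # Fasting criteria
    if fpg_mmol_l >= 7.0:
        labels.add("diabetes")
    elif fpg_mmol_l >= 6.1:            # WHO lower IFG cut-off (ADA uses 5.6)
        labels.add("IFG")
    # 2-h OGTT criteria
    if two_h_pg_mmol_l is not None:
        if two_h_pg_mmol_l >= 11.1:
            labels.add("diabetes")
        elif two_h_pg_mmol_l >= 7.8 and fpg_mmol_l < 7.0:
            labels.add("IGT")          # IGT requires FPG < 7.0 mmol/l
    return labels or {"normal"}

# A subject with combined IFG + IGT, the highest-risk prediabetic profile:
print(sorted(classify_glucose(6.5, 9.0)))  # ['IFG', 'IGT']
```

Using the lower ADA fasting cut-off of 5.6 mmol/l instead would only require changing the 6.1 threshold.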
The prevalence of IFG and IGT varies considerably among different ethnic groups and increases with age (B). IGT is more common in women. IFG and IGT are believed to represent different metabolic abnormalities. The reported estimates of diabetes development in IFG and IGT individuals vary widely, depending on the ethnicity of the population studied, with a higher incidence of T2DM noted in non-Caucasian populations (B).
Two recent meta-analyses found no evidence of a difference in T2DM risk among people with either IGT, IFG, i-IGT or i-IFG [81], [82], but both concluded that individuals with IFG + IGT have a substantially increased risk of T2DM compared to all other groups (B). The first meta-analysis included 44 studies and calculated the unadjusted annualized relative risk (RR) for progression to diabetes at 6.02 for IGT, 5.55 for IFG and 12.21 for IFG + IGT. The second meta-analysis included 40 studies and the RR was found to be 6.35 for IGT, 4.66 for IFG and 12.13 for IFG + IGT. Of note, most of the literature on IFG is based upon the older cut-off point (6.1–6.9 mmol/l) while the risk associated with IFG as more recently defined by the ADA (5.6–6.9 mmol/l) in 2003 remains to be evaluated.
According to the available data, it has been estimated that the majority of individuals (probably up to 70 %) with these prediabetic conditions will eventually develop diabetes [83]. However, studies of shorter duration have shown that during a period of 3–5 years about 25 % of individuals progress to diabetes, 25 % return to a normal glucose tolerance status and 50 % remain in the prediabetic state (B) [84], [85].
Metabolic syndrome (MetSy). MetSy is defined as a cluster of metabolic risk factors for cardiovascular disease which are associated with insulin resistance [86], [87]. It is associated with an up to 2-fold elevated risk for CVD [3]. Although several diagnostic criteria have been proposed by different organizations, there is an ongoing debate regarding the existence of a unique underlying pathophysiology [88], [89], [90]. The most widely used criteria were defined by the National Cholesterol Education Program (NCEP‐ATP III) and include central obesity, high fasting plasma glucose, high triglycerides, low HDL-cholesterol and high blood pressure [86]. A harmonized definition of the MetSy has recently been suggested in a joint statement issued by several international organizations [91]. Despite the fact that the MetSy strongly predicts progression to T2DM [92], several reports [93], [94], [95] show that a single measure of blood glucose is a better predictor of incident diabetes than the complex definition of the MetSy. In a recent analysis from the San Antonio Heart Study, however, the metabolic syndrome as defined by the NCEP criteria predicted T2DM independently of the presence of elevated FPG [96]. The MetSy was as good a predictor for the occurrence of T2DM as i-IFG (OR: 5.03 versus 7.07). If both conditions occurred simultaneously, the risk for T2DM was much higher (OR: 21.0).
Dietary factors. Diet is thought to play an important role, and some data suggest that certain dietary factors may predict T2DM, but confounding factors limit many nutritional clinical studies. Even randomized nutritional clinical trials often suffer from several shortcomings, as they may start too late in the disease process, not be continued for sufficient duration or be inadequately powered. In addition, the protective (or deleterious) effect of a certain nutrient may only operate in conjunction with other nutrients or at a particular intake level. Finally, poor dietary compliance is another common problem of dietary trials. It is clear, however, that diet can influence the development of T2DM by affecting body weight. It has been shown that a dietary pattern promoting weight loss reduces the risk of T2DM (A) [61], [65], [68]. More recently, higher T2DM risk was also found to be associated with diet composition, particularly with low fibre intake.
Low fibre intake [97], [98], [99], [100]. Individuals with low intake of dietary fibre, particularly of insoluble cereal fibre, have been found to be at increased risk for T2DM in several epidemiologic studies (B) [101], [102]. In studies aimed at diabetes prevention by lifestyle modification, an increase in fibre consumption was often part of the intervention [61], [65]. Fibre has a low glycemic index, which may contribute to T2DM risk reduction. However, the evidence for an increased risk associated with high glycemic index and high glycemic load diets is mixed [98], [99], [100], [103]. Nevertheless, a recent meta-analysis of 37 prospective cohort studies (B) showed, in fully adjusted models, that both high glycemic load (RR 1.27 [95 % CI 1.12, 1.45]) and high glycemic index (RR 1.40, [95 % CI 1.23, 1.59]) diets are associated with increased risk for T2DM [104]. It must be emphasized that fibre rich foods generally have a low GI, although not all foods with a low GI necessarily have high fibre content.
Low unsaturated/saturated fat ratio [105], [106], [107]. Shifting from a diet based on animal fat to a diet rich in vegetable fat might reduce the risk for T2DM (B) [61], [65]. An increased intake of monounsaturated fat appears to be of particular benefit (C) [108]. Recent studies revealed a weak positive correlation between intake of long chain omega-3 fatty acids (LCFA) and diabetes risk [109], [110]. The beneficial effects of LCFA on other health outcomes, however, are well established [108], [111]. The consumption of trans fatty acids has consistently been found to be associated with increased risk for T2DM [112] and CVD [113] (A).
Other nutrients. A less consistent but still significant body of evidence suggests that the risk for T2DM is lowered by regular consumption of moderate amounts of alcohol (B) [114], [115], fruits and vegetables (B) [116], nuts (B) [117] and coffee (B) [118]. It must be emphasized that people do not consume nutrients in isolation but rather ingest a variety of nutrients at the same time as they eat their food [119]. The study of different dietary patterns such as the “Mediterranean diet” is an alternative approach to examining the possible relationships between diet and T2DM [120].
Diabetogenic drugs. A large number of drugs may worsen glycemic control in diabetic patients, or even cause diabetes in predisposed people. These drugs include various classes of agents [121], such as glucocorticoids, antihypertensive drugs (beta blockers, thiazide diuretics) [122], niacin, immunosuppressive drugs, gonadotropin releasing hormone agonists, pentamidine, diazoxide, atypical antipsychotic agents [123], the antineoplastic agent asparaginase, danazol, and anti-retroviral drugs used for the treatment of HIV infection [124].
Obesogenic/diabetogenic environment. The recent increase in T2DM seems to be strongly linked to unfavorable changes in the environment (B) [125]. The abundant availability of energy dense and highly palatable food and changes in transport, work and leisure infrastructure and opportunities decreasing physical activity are the main obesogenic and diabetogenic environmental factors [126]. To change this environment in a beneficial way is a major challenge for T2DM prevention [127], [128].
Smoking increases the risk for T2DM by adversely affecting insulin sensitivity and beta-cell function [129], [130]. The potential of xenobiotics to disturb glucose and lipid metabolism in mammals is well established [131].
A strong correlation between insulin resistance and serum concentrations of persistent organic pollutants (POPs), especially organochlorine compounds has been reported [131], [132], [133], [134], [135], [136]. It has also been proposed that modern food processing can generate diabetogenic compounds, such as glycation end products or oxidized ascorbic acid and lipoic acid [125].
Depression. Psychosocial factors may play a causal role in the chain of events leading to development of the MetSy [137]. Depression has been considered as a risk factor for T2DM and its complications [138], [139] and an increased risk for developing T2DM in adults with depression has been demonstrated in a meta-analysis of 9 longitudinal studies [140] (B). A recent analysis of the DPP found that baseline antidepressant use was associated with diabetes risk in the placebo and intensive lifestyle arms, but not in the metformin arm [141]. Potential mediators of the effects of depression on diabetes risk have been summarized elsewhere [139].
Low socio-economic status. Several studies have recognized the adverse influence of low socioeconomic status (SES) on general health, prevalence of obesity, smoking, CVD, and early mortality [142], [143], [144], [145], [146], [147], [148]. There is also an inverse association between SES and T2DM, with a higher prevalence among less-advantaged groups. This appears to be consistent across several developed countries and across different ethnic groups (B) [149], [150], [151], [152], [153], [154], [155], [156], [157]. An inverse graded association between diabetes prevalence, metabolic disorders and different measures of SES such as education, occupation, income, poverty income ratio, and measures of material deprivation and poverty has been found (B) [158], [159], [160], [161], [162]. Although T2DM prevalence is increasing in the population at large, the increase is more pronounced among people with lower SES [163]. In some, but not all studies an independent association between lower SES in childhood and increased risk for T2DM and cardiovascular disease in adulthood has been observed [163], [164], [165], [166], [167], [168]. The underlying processes are not yet fully understood, but associations between lower SES and diabetes risk factors like obesity, waist circumference, smoking, inappropriate diet, and leisure time inactivity appear to be important [169], [170], [171].
#### Definition of target populations
For successful prevention of T2DM both a whole population approach and an individual (targeted high-risk) approach are recommended.
#### Whole population approach
The IDF consensus [172] recommends a population as well as an individually targeted approach for diabetes prevention. Simply distributing information about T2DM risk and available strategies for risk reduction, however, is not sufficient to reverse the T2DM epidemic. For successful prevention it is important to create environmental conditions that are conducive to achieving and maintaining a healthy lifestyle. The health sector on its own cannot accomplish such population-wide changes. National diabetes prevention plans are required, which should include the components proposed in the IDF consensus, namely advocacy, community support, fiscal and legislative measures, engagement of the private sector, media communication, and improving level of knowledge and motivation of the population [172].
Unlike interventions that focus on high-risk individuals, the population approach is not supported by a large database of clinical studies. A UK cohort study found that diabetes incidence was inversely related to the achievement of five “healthy behaviour goals for diabetes prevention” (BMI < 25 kg/m2, fat intake < 30 % of energy intake, saturated fat intake < 10 % of energy intake, fibre intake ≥ 15 g/4184 kJ, physical activity > 4 h/week) [173]. The incidence of T2DM was inversely and linearly related to the number of goals achieved. None of the participants who met all five goals developed diabetes, whereas the highest incidence was observed in subjects who did not meet any of the goals (2++, B).
#### High-risk approach
It is current practice in several countries (e.g. UK, USA, Finland and France) to recommend targeted or opportunistic screening to identify high risk individuals. The IDF consensus document [172] recommends the use of opportunistic screening by health care personnel, particularly those working in primary care. Risk for T2DM and CVD may be assessed quantitatively by appropriate methods such as blood testing (fasting plasma glucose, OGTT, lipid profile, and HbA1c) and searching for the presence of other risk factors like family history of premature CVD, hypertension, visceral obesity, physical inactivity, unhealthy diet and smoking. Appropriate interventions (e.g. antihypertensive and lipid therapy, aspirin, smoking cessation, dietary changes, exercise, weight loss) targeting all identified risk factors should subsequently be initiated.
The IDF recommends the following criteria for opportunistic screening (or targeted screening) [172]: obesity (including visceral), family history of diabetes, age, history of raised blood pressure and/or heart disease, history of GDM, and drug history. Recently best practice guidelines for vascular risk assessment and management have been issued by the UK National Health Service (www.dh.gov.uk/publications). According to these guidelines targeted screening for T2DM risk by measuring either FPG or HbA1c is recommended in asymptomatic subjects aged 40–74 years with obesity and/or elevated blood pressure (≥ 140/90 mmHg). In subjects with FPG (6–7 mmol/l) or HbA1c (6–6.5 %) in the prediabetic range, an OGTT is recommended.
#
#### Target populations for interventions
It is thought that most patients pass through a prediabetic phase before developing T2DM [174]. Subjects with IGT are at highest risk for T2DM, but individuals with isolated IFG and the MetSy [95], [96], [175] are also at increased risk [96], [176]. Particularly high conversion rates (> 10 % per year) have been observed in subjects with a combination of 2 or 3 prediabetic conditions (IGT ± IFG, ± MetSy) [83], [96]. Therefore a hierarchical approach is proposed, starting with subjects with IGT ± IFG ± MetSy (A), followed by subjects with IFG and/or MetSy (C), then subjects with overweight, obesity, hypertension, or physical inactivity (C), and finally the general population (C) [177], [178] ([Table 6]). It has to be emphasized that the major prevention trials [120] have all focused on patients with IGT (± IFG). Given that resources are limited, the intensity of the intervention should be adjusted to the level of risk, implying that subjects at highest risk should receive the most intensive intervention (B).
Highest priority: persons with IGT (± IFG) (Evidence A).
High priority: persons with isolated IFG (Evidence C); persons with MetSy (age ≥ 45 years) as defined by the ATPIII criteria or other criteria associated with increased T2DM risk (e.g. the IDF criteria for MetSy) (Evidence C).
Medium priority: persons with overweight, obesity, or physical inactivity (Evidence C).
Low priority: general population (Evidence C).
#
#### Recommendations
A hierarchical approach for prevention of T2DM is proposed:
A starting with subjects at highest risk for T2DM (IGT ± IFG, ± MetSy) with highest priority,
C followed by subjects at high risk (IFG and/or MetSy) with high priority,
C subjects with overweight, obesity, hypertension, or physical inactivity with medium priority and
C finally the general population with low priority.
B Given the fact that resources are limited, the intensity of the intervention should be adjusted to the level of risk, implying that subjects at highest risk should receive the most intensive intervention.
#
### Screening Tools, Diagnosis and Detection
#
#### Categorization of abnormal glucose metabolism
In practice, the glucometabolic category depends on whether FPG is measured alone or combined with a 2-h PG. For example, an individual falling into the IFG category may also have IGT or diabetes, which would only be discovered by a post-load BG measurement. In this text, IFG and/or IGT will be defined as prediabetes. As to HbA1c, a high level may identify only a fraction of asymptomatic people with diabetes; HbA1c is insensitive in the low range, and a normal HbA1c level cannot exclude the presence of diabetes or prediabetes.
#
#### Epidemiological arguments
Several epidemiological studies have challenged the practice of not using the 2-h PG and have shown that a substantial number of people who do not meet the FPG criteria for glucose disorders will satisfy the criteria when exposed to an OGTT [180], [181], [182]. Thus, the OGTT is more sensitive than FPG for detecting diabetes and is the only way to detect IGT. The probability of false-negative results is substantial when FPG alone is measured. There are, however, important arguments against the OGTT. The OGTT needs to be performed under appropriate conditions and should be standardized (Appendix 1). In particular, it should be carried out after at least 3 days of unrestricted carbohydrate intake (> 150 g daily). The test has been considered less appropriate at a population level, mainly because it takes more than 2 h to perform, is costly and has low reproducibility. However, true primary prevention of diabetes requires the identification of high-risk subjects and treatment to prevent their transition to overt diabetes. This requires a definite categorization of glycemic states.
#
#### Arguments based on the natural history of glucose abnormalities
The rate of conversion to diabetes is very high in people with IGT or IFG, and even higher in those with IGT + IFG, as discussed earlier. Approximately 30 % of people with IGT will convert to T2DM within 5 years [183], implying that at-risk individuals should be screened for both IFG and IGT.
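For orientation, a cumulative incidence over several years can be converted into the constant annual conversion rate it implies. The sketch below is ours (the function name is an assumption, not from the source) and simply applies the standard relation 1 − (1 − p)^(1/years):

```python
def annual_rate(cumulative_incidence: float, years: float) -> float:
    """Constant annual conversion rate implied by a cumulative
    incidence `p` over `years`: 1 - (1 - p)**(1/years)."""
    return 1 - (1 - cumulative_incidence) ** (1 / years)

# ~30 % of people with IGT convert to T2DM within 5 years [183]
print(round(annual_rate(0.30, 5), 3))  # -> 0.069, i.e. roughly 7 % per year
```

This is consistent with the > 10 % per year quoted above for subjects combining two or three prediabetic conditions.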
#
#### Arguments based on prevention trials
To date, prevention trials have mainly included patients with IGT; only one trial also included patients with isolated IFG [61], [65], [68], [184], [185], [186], [187]. This trial showed that prevention of the transition from IFG to diabetes is possible, so that IFG may also be considered a target for intervention [188].
#
#### Arguments based on CVD risk
Patients with IGT are at high risk for developing CVD. The most convincing evidence of increased CVD risk was provided by the DECODE (Diabetes Epidemiology: Collaborative analysis Of Diagnostic criteria in Europe) study, which showed that IGT is more predictive of CVD mortality than FPG levels [5]. Many individuals with prediabetes have a cluster of other cardiovascular risk factors, i.e. abdominal adiposity, elevated triglycerides, low HDL-cholesterol, elevated blood pressure, known as components of the MetSy, as well as raised LDL-cholesterol levels [189].
Taking all of the above arguments into account, it is strongly suggested that clinicians categorize the type of glycemic abnormalities as precisely as possible to identify people with IFG and IGT and, in those with IGT, to screen for associated CVD risk factors, in order to achieve the goals of both diabetes and CVD prevention. However this step should be preceded by a screening phase in order to select subjects with a high chance of having prediabetes or developing T2DM.
#
#### Detection of people at high risk for diabetes
Detection programmes may be targeted widely or restricted to higher risk populations. They may use risk scores and/or blood glucose measurement. Scoring systems based on the presence and extent of a number of aetiological factors may be helpful to identify people at high risk for T2DM. They need to be reliable, simple and practical. A number of tools have been developed to screen for undiagnosed diabetes and/or diabetes and for the risk of incident diabetes ([Tables 3], [4]).
For each score: data source, reference, age (years), n, T2DM diagnosis, number of T2DM cases, predictive variables, and performance (sensitivity, specificity, positive predictive value, area under the receiver-operating characteristic (ROC) curve).
"The ADA risk score" – the Second National Health and Nutrition Examination Survey [319]; age 20–74; n = 3770; OGTT (WHO 1985); 164 cases. Variables: age, sex, delivery of a macrosomic infant, race, education, obesity, sedentary lifestyle, family history of diabetes. Sensitivity 79 %, specificity 65 %, positive predictive value 10 %, ROC area 0.78.
The Rotterdam model – development: the Rotterdam Study [206]; age 55–75; n = 1016; OGTT (WHO 1985); 118 cases. Model 1 (age, sex, presence of obesity, use of antihypertensive medication): sensitivity 78 %, specificity 55 %, positive predictive value 8 %, ROC area 0.68. Model 2 (additionally family history of diabetes, physical activity, BMI): sensitivity 72 %, specificity 55 %, positive predictive value 7 %, ROC area 0.74. Validation: the Hoorn Study; age 50–74; n = 2364; 110 cases.
"The Cambridge risk score" – development: the Ely Study (half of the cohort) and the Wessex Study [201]; age 40–64; n = 549 (ES) + 101 (WS); diagnosis: OGTT (WHO 1985) in ES, diabetes diagnosed during 12 months in WS; 25 (ES) + 101 (WS) cases. Variables: age, sex, BMI, family history of diabetes, use of antihypertensive or steroid medication, smoking. Validation: the Ely Study (other half); age 40–64; n = 528; 23 cases; sensitivity 77 %, specificity 72 %, ROC area 0.80. Other cohort: the European Prospective Investigation of Cancer – Norfolk Study [203]; age 39–78; n = 6567; HbA1c ≥ 7 %; 84 cases; sensitivity 51 %, specificity 78 %, ROC area 0.74.
The Danish risk score – development: Inter99 (half of the cohort) [200]; age 30–60; n = 3250; OGTT (WHO 1999); 135 cases. Variables: age, sex, BMI, family history of diabetes, known hypertension, physical activity. Sensitivity 73 %, specificity 74 %, positive predictive value 11 %, ROC area 0.80. Validation: Inter99 (other half); age 30–60; n = 2874; 117 cases; sensitivity 67 %, specificity 74 %, positive predictive value 10 %, ROC area 0.76. ADDITION pilot study [183]; age 40–69; n = 1028; 29 cases; sensitivity 76 %, specificity 72 %, positive predictive value 7 %, ROC area 0.80.
FINDRISC [198]; age 45–74; n = 2966; OGTT (WHO 1999). Variables: age, BMI, waist circumference, use of antihypertensive therapy, history of high blood glucose, family history of diabetes (score 0–26, cut-off ≥ 11). Prediabetes (259 cases): men 46 %, 66 %, ROC area 0.65; women 53 %, 45 %, ROC area 0.66. Diabetes (1222 cases): men 66 %, 22 %, ROC area 0.72; women 73 %, 11 %, ROC area 0.73.
OGTT = oral glucose tolerance test; WHO = World Health Organization; ADA = American Diabetes Association; BMI = body mass index; HbA1c = glycosylated haemoglobin; ADDITION = the Anglo-Danish-Dutch Study on Intensive Treatment in People with Screen-Detected Diabetes in Primary Care. The sensitivity, specificity and positive predictive value are calculated using the cut-off point suggested by the authors.
For each score: data source and follow-up time, reference, age (years), n, T2DM diagnosis, number of T2DM cases, predictive variables, and performance (sensitivity, specificity, positive predictive value, area under the receiver-operating characteristic (ROC) curve).
The San Antonio Heart Study / 7.5 years [85]; age 25–64; n = 2903; OGTT (WHO 1999), medical records; 269 cases. Clinical model (age, sex, ethnicity, fasting glucose, systolic blood pressure, HDL-cholesterol, BMI, family history of diabetes): ROC area 0.84. Full model (additionally 2-hour glucose, diastolic blood pressure, total and LDL-cholesterol, triglycerides): ROC area 0.86.
The Rancho Bernardo model – development: the Rancho Bernardo Study (cross-sectional) [193]; age 67 ± 11; n = 1549; IGT in OGTT; 514 IGT cases. Variables: sex, age, triglycerides, fasting glucose. Validation: the Health ABC Study / 5 years; age 70–79; n = 2503; medical records; 143 cases; ROC area 0.71.
The Atherosclerosis Risk in Communities Study / 9 years [192]; age 45–64; n = 7915; OGTT or medical records; 1892 cases. Model 1 (age, ethnicity, waist, height, fasting glucose, systolic blood pressure, family history of diabetes): at four cut-off points, sensitivity 51/65/75/83 %, specificity 86/77/67/56 %, positive predictive value 41/35/30/27 %; ROC area 0.78. Model 2 (additionally HDL-cholesterol and triglycerides): sensitivity 52/67/77/85 %, specificity 86/77/67/57 %, positive predictive value 42/36/31/27 %; ROC area 0.80.
FINDRISC – development [190]; age 35–64; n = 4435; antidiabetic treatment; 182 cases. Variables: age, BMI, waist circumference, use of antihypertensive therapy, history of high blood glucose. Sensitivity 78 %, specificity 77 %, positive predictive value 13 %, ROC area 0.85. Validation; age 45–64; n = 4615; antidiabetic treatment; 67 cases (score > 8, maximum 20); sensitivity 81 %, specificity 76 %, positive predictive value 5 %, ROC area 0.87.
DESIR – development [208]; age 30–65; n = 1863 men and 1954 women; FBG ≥ 7 mM; 140 male and 63 female cases. Model 1 (waist circumference, hypertension and smoking in men or family history of diabetes in women): ROC area 0.71 (men), 0.83 (women). Model 2 (additionally FBG): ROC area 0.85 (men), 0.92 (women). Validation: SU.VI.MAX (FBG ≥ 7 mM or treatment), ROC area 0.85; E3N (self-questionnaire/treatment), ROC area 0.92.
OGTT = oral glucose tolerance test; WHO = World Health Organization; IGT = impaired glucose tolerance; M = men; W = women; FBG = fasting blood glucose. The sensitivity, specificity and positive predictive value are calculated using the cut-off point suggested by the authors. In the paper by Schmidt and co-workers [191], four different cut-off points, with proportions of screen-positives of 20 %, 30 %, 40 %, or 50 %, are given.
Several risk scores based on large cohort studies are available [85], [190], [191], [192], [193], [194], [195], [196]. However, few of them rely solely on factors measurable by non-invasive methods, and the others are therefore not applicable outside clinical practice. Most were developed to rate the risk of developing T2DM ([Table 4]), and some seem to be valuable for detecting current undiagnosed diabetes ([Table 3]) and for identifying patients with MetSy, insulin resistance or increased CVD risk. Such questionnaires can therefore select populations in whom blood glucose could be measured and/or lifestyle advice provided in order to prevent diabetes. Some scores directly require a diagnostic test, such as random capillary glucose measurement (interpreted according to time since the last meal) [197].
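The performance figures quoted throughout for these scores (sensitivity, specificity, positive and negative predictive value) all derive from the same 2 × 2 screening table. A minimal sketch, with function name and counts of our own choosing (the counts are illustrative only, not data from any cited cohort):

```python
def screening_metrics(tp: int, fp: int, fn: int, tn: int) -> dict:
    """Standard screening-test metrics from a 2x2 table of
    true/false positives and true/false negatives."""
    return {
        "sensitivity": tp / (tp + fn),  # fraction of true cases detected
        "specificity": tn / (tn + fp),  # fraction of non-cases correctly excluded
        "ppv": tp / (tp + fp),          # chance a positive screen is a true case
        "npv": tn / (tn + fn),          # chance a negative screen is a true non-case
    }

# Hypothetical counts chosen to mirror the typical pattern in the tables:
# high sensitivity, modest positive predictive value, high negative predictive value.
m = screening_metrics(tp=80, fp=720, fn=20, tn=2180)
print({k: round(v, 2) for k, v in m.items()})
# -> {'sensitivity': 0.8, 'specificity': 0.75, 'ppv': 0.1, 'npv': 0.99}
```

This pattern explains a recurring observation below: such tools are most helpful when the result is negative rather than positive.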
#
#### European population
The Finnish risk test (FINDRISC) takes only a couple of minutes, can be taken online (www.diabetes.fi or http://care.diabetesjournals.org for an English version) and provides a measure of the probability of developing T2DM over the following 10 years. The FINDRISC test is based on a representative random sample of the Finnish population aged 35–64 years and their 10-year incidence of drug-treated T2DM. It includes 8 items: age, BMI, waist circumference, use of antihypertensive medication, history of elevated blood glucose (including GDM), family history of diabetes, meeting the criterion for daily physical activity, and daily intake of fruit or vegetables. The last 2 variables were introduced to increase awareness about the importance of lifestyle modifications, although they were not associated with increased diabetes risk. The performance of this score as a screening tool was assessed in a cross-sectional, population-based survey of subjects aged 45–74 years. The risk score was associated with the presence of previously undiagnosed T2DM, IGT, MetSy and CVD risk factors. Using a cut-off score of 11 (maximum: 20), the sensitivity to identify undiagnosed diabetes was 66 % in men and 70 % in women, with false-negative rates of 31 % and 39 % [198]. The performance was also satisfactory in an Italian cohort [122].
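To illustrate how such an additive questionnaire works, the sketch below uses item names drawn from the FINDRISC description and the published cut-off of 11, but entirely hypothetical point values: they are NOT the published FINDRISC weights, which should be taken from the instrument itself (www.diabetes.fi).

```python
# Illustrative additive risk questionnaire in the style of FINDRISC.
# ALL point values here are hypothetical, not the published weights.
HYPOTHETICAL_POINTS = {
    "age_over_54": 3,
    "bmi_over_30": 3,
    "large_waist": 4,
    "antihypertensive_medication": 2,
    "history_of_high_glucose": 5,
    "family_history_of_diabetes": 5,
    "physically_inactive": 2,
    "no_daily_fruit_veg": 1,
}
CUT_OFF = 11  # a score >= 11 prompts further glycaemic testing [198]

def risk_score(answers: dict) -> int:
    """Sum the points for every item answered 'yes' (truthy)."""
    return sum(pts for item, pts in HYPOTHETICAL_POINTS.items() if answers.get(item))

def needs_testing(answers: dict) -> bool:
    return risk_score(answers) >= CUT_OFF

example = {"age_over_54": True, "bmi_over_30": True, "history_of_high_glucose": True}
print(risk_score(example), needs_testing(example))  # -> 11 True
```

The additive structure is what makes such scores usable as self-administered paper or web questionnaires.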
A simplified version of the FINDRISC consisting of 6 questions was validated in a German population aged 41–79 years with a family history of T2DM, obesity, or dyslipoproteinemia, and found to be a simple tool with high performance in predicting diabetes risk but lower efficiency in identifying asymptomatic T2DM [199]. In the IGLOO (Impaired Glucose tolerance and Long-term Outcomes Observational) study, in an Italian cohort aged 55–75 years with one or more CVD risk factors, the FINDRISC score had a sensitivity of 77 % and a specificity of 45 % for detecting people with T2DM [122].
A Danish diabetes risk score including six questions (age, sex, BMI, family history of diabetes, known hypertension, physical activity at leisure time) has been developed in a population-based sample of individuals aged 30–60 years who underwent an OGTT. This simple score which can be completed at home identified 76 % of individuals with previously undiagnosed T2DM, with a specificity of 72 %, reducing the proportion of individuals in the population that need subsequent testing to 29 % [200].
The Cambridge risk score comprises data routinely available in UK general practice (age, sex, BMI, family history of diabetes, smoking habits, and prescribed anti-hypertensive drugs or steroids) [201] and identifies individuals with undiagnosed diabetes in different ethnic groups [202], [203], [204]. It has also been validated in a Danish population, where a risk score above the threshold of 0.246 provided a sensitivity of 71 %, a specificity of 81 %, and a positive predictive value of 8 % for detecting diabetes [205]. The QDS score has been developed from data on 2 540 753 patients aged 25–79 years collected by 355 general practices in England and Wales, of whom 78 081 had an incident diagnosis of type 2 diabetes. This score includes ethnicity, age, sex, BMI, smoking status, family history of diabetes, Townsend deprivation score, treated hypertension, cardiovascular disease, and current use of corticosteroids. It can be applied systematically to computerised patient databases [196].
A questionnaire including readily available information (age, sex, presence of obesity, use of anti-hypertensive medication) has been developed to screen for prevalent T2DM from a sample of participants aged 55–75 years recruited in the Rotterdam Study. This simple questionnaire has been tested in the Dutch Hoorn Study and, at the cut-off point of > 6, provides good performance, with a sensitivity of 78 %, a specificity of 55 %, a negative predictive value of 98 % and a positive predictive value of 8 % [206].
The German Diabetes Risk Score which includes information on age, waist circumference, height, history of hypertension, physical activity, smoking, and consumption of red meat, whole-grain bread, coffee, and alcohol is also publicly available (http://www.dife.de). This score has been developed in the prospective Potsdam cohort of the EPIC (European Prospective Investigation into Cancer and Nutrition) study of individuals aged 35–65 years. It was further shown to be an accurate tool to identify individuals at high risk for undiagnosed T2DM and to correlate well with measures of insulin resistance and impaired insulin secretion in three other German cohorts [207].
The Data from Epidemiological Study on the Insulin Resistance syndrome (DESIR) study produced a risk score from a French cohort followed up over 9 years, and validated in two other French cohorts [208]. It includes waist circumference and hypertension in both sexes, and smoking in men and family history of diabetes in women. A risk score of 5 confers a > 30 % chance of diabetes in the following 9 years.
In conclusion, the FINDRISC score meets the requirements of being a simple, non-invasive and inexpensive tool. It has been used in several European cohorts and shown to be reliable both for detecting undiagnosed diabetes and for predicting future diabetes risk. The DESIR score meets the same criteria but has not been tested for detecting undiagnosed diabetes. It is simpler than the FINDRISC score, but the latter provides more opportunities for lifestyle discussion and has been validated in several European populations.
#
#### US population
The ADA risk test is a simple, user-friendly risk test based on age, weight, family history of diabetes, and for women, having a baby weighing more than 9 pounds at birth (www.diabetes.org).
The score, derived from the San Antonio Heart Study, includes age, sex, ethnicity, BMI, family history of diabetes and systolic blood pressure, and two biological parameters (FPG and HDL-cholesterol) but does not perform better than FPG alone [85].
The ARIC risk score demonstrated low validity in the testing sample [100]. Furthermore its applicability to European Caucasian populations may be limited because it was derived from a US population.
The Diabetes Risk Calculator has been developed as a screening tool for undiagnosed diabetes and prediabetes, based on the Third National Health and Nutrition Examination Survey (NHANES III) dataset (7092 participants aged ≥ 20 years, with FPG measured in all and an OGTT performed in approximately half of those aged 40–75 years). This tool includes simple questions and performed well [209].
#
#### Asian population
An Indian Diabetes Risk Score has been developed from the Chennai Urban Rural Epidemiological Study (CURES) in India for screening for undiagnosed diabetes. This simple test uses four risk factors (age, waist circumference, family history of diabetes and physical activity), and has a sensitivity of 72 % and a specificity of 60 % with a positive predictive value of 17 % and a negative predictive value of 95 % [210].
A simple risk equation (including age, BMI, and hypertension) has been described in a high-risk Thai population and detected 87 % of undiagnosed diabetes [211].
#
#### Applicability of screening tools
Screening tests using questionnaires also need to be performed in appropriate conditions. The FINDRISC score may be used as a self-administered test (as it was in the first validation study). However it is recommended that the answers should be checked by a nurse or a physician.
More importantly, four published screening tests (Rotterdam Diabetes Study, Cambridge Risk score, San Antonio Heart Study and FINDRISC score) have been applied to detect undiagnosed diabetes in a German population (KORA Survey 2000). These tests yielded low validity when applied to that new population, most likely due to differences in population characteristics [212]. Low performances were also demonstrated in German subjects with a family history of T2DM [199] and in the population of Oman [213]. This suggests that performance of diabetes risk questionnaires or scores must be assessed in the target population where they will be ultimately applied. However, all these screening tools had a high negative predictive value (94–98 %), and thus may be helpful when the findings are negative rather than positive.
The DETECT-2 project, an international data-pooling collaboration on screening for T2DM, specifically addressed ethnicity and population differences [214]. Nine datasets were selected, representative of people from a diverse range of ethnic backgrounds. The Rotterdam Predictive Model [206] yielded widely varying performance, with sensitivity ranging between 12 and 57 %, specificity between 72 and 93 %, and the percentage needing further testing between 2 and 25 %, with worse performance in non-Caucasian populations. Thus, a risk score developed in Caucasian populations cannot be applied to other populations of more diverse ethnic origins.
After scoring for diabetes risk, it is mandatory to inform patients about their elevated risk and to take time to deliver explanations, in particular to less-educated individuals, as recently stressed in a US study that included a large number of minority populations [215]. This needs to be done appropriately in order to raise awareness and understanding of T2DM and its risk factors, while avoiding or minimizing negative effects such as emotional distress and denial [216].
#
#
#### Community based strategies
Various approaches exist: (i) measuring PG in specified population groups (e.g. age over 40) to determine prevalent prediabetes (a strategy that will detect undiagnosed diabetes as well); (ii) using computer database searches/risk-scoring algorithms or collecting questionnaires to provide an estimate of the risk for incident diabetes (a strategy that leaves the current glycemic state undetermined); (iii) using risk scores or questionnaires as primary screening tools to identify sub-groups of the population in whom glycemic testing may be targeted efficiently.
Alternative (iii) has been tested in the IGLOO study [122]. In that study, the use of the FINDRISC score as initial instrument, followed by the measurement of FPG in individuals with a score ≥ 9 and by the OGTT in individuals with FPG between 5.6 and 6.9 mmol/l, would have led to the identification of 83 % of T2DM cases and 57 % of IGT cases, at a cost of an OGTT in 38 % of the sample and a FPG in 64 % [122]. Therefore, a multiple step approach may be proposed, consisting of using first a risk score, then measuring fasting BG, and if FPG is increased, lastly performing an OGTT. An alternative may be performing an OGTT in all individuals with a high test score.
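The multi-step approach described (risk score first, then fasting glucose, then OGTT) can be sketched as a simple triage function. The score and FPG thresholds below are those quoted for the IGLOO strategy [122]; the FPG ≥ 7.0 mmol/l branch reflects the standard fasting diagnostic threshold for diabetes; the function and return labels are our own illustrative choices:

```python
from typing import Optional

def next_step(risk_score: int, fpg_mmol_l: Optional[float] = None) -> str:
    """Stepwise screening triage after the IGLOO strategy [122]:
    score >= 9 -> measure FPG; FPG 5.6-6.9 mmol/l -> perform an OGTT."""
    if risk_score < 9:
        return "no further testing"
    if fpg_mmol_l is None:
        return "measure FPG"
    if fpg_mmol_l >= 7.0:
        return "FPG in diabetic range: confirm diagnosis"
    if fpg_mmol_l >= 5.6:
        return "perform OGTT"
    return "no further testing"

print(next_step(12, 6.1))  # -> perform OGTT
```

The alternative mentioned in the text, performing an OGTT in all individuals with a high score, would simply collapse the middle branches into one.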
A similar approach was tested in the Anglo-Danish-Dutch study of intensive treatment in people with screen-detected diabetes in primary care (ADDITION [183]). Stepwise screening strategies were performed using risk questionnaires and routine clinical practice data plus random blood glucose, HbA1c and fasting blood glucose measurement. Diabetes was diagnosed using the 1999 WHO criteria and estimated 10-year coronary heart disease risk was calculated using the UK Prospective Diabetes Study risk engine. Out of 76 308 people aged 40–69 years, a total of 3057 individuals with screen-detected diabetes were identified.
A community-based strategy ([Fig. 1]) should consist of a screening test as a first step in order to estimate the risk for current diabetes or prediabetes and the risk for future diabetes (A). In agreement with the IDF, we recommend the use of opportunistic screening by health-care personnel including those working in general practice, nurses and pharmacists [172] (A). Self-administered questionnaires may also be used to identify people at risk (e.g. by anyone on the Internet, or in pharmacies, or as part of national health surveys) and used to prompt further diagnostic testing by a health care provider. If a person is considered to be at increased risk for diabetes, they will proceed to PG measurements (either fasting or preferably using an OGTT). At the very least, measurement of random capillary blood glucose can be used with an improved performance if measurement is done in the postprandial period [217]. A high HbA1c level may also identify a subset of asymptomatic people with diabetes. Indeed, the sensitivity of HbA1c measurement for the screening of undiagnosed diabetes was found to be fairly good as compared to FPG [218]. HbA1c was less sensitive for detecting prediabetes or diabetes when compared to OGTT results [219]. The available resources may define the testing regime used in each country/locality.
#
#### Clinical practice-based strategies
A diagnostic test may be used in routine clinical practice but, as it is time-consuming, one may propose to select patients with at least one obvious risk factor for diabetes, such as age > 40 years, overweight or obesity, components of the MetSy, family history of diabetes, a history of GDM, polycystic ovary syndrome or ethnicity (migrants). Such factors are identified in the sections above and have been considered in other recent guidelines, including those of the IDF [172], Diabetes UK [220], France [221] and the American Diabetes Association [222], which all recommend targeted or opportunistic screening of high-risk individuals. Some of these recommendations have already been validated [223]. [Table 5] summarizes the populations we recommend for targeted screening (Evidence I, B). Systematically targeted screening programmes may be possible here: for example, GPs or health insurance companies with computerised databases can pro-actively identify all people with various combinations of risk factors and post questionnaires to these targeted groups. A screening strategy may consist of PG measurement at fasting or, even better, of an OGTT, owing to its higher sensitivity. One alternative may be a stepped approach that includes an initial screening questionnaire ([Fig. 1]). Two conditions, obesity and CVD, provide examples of how a targeted screening process may operate.
Criteria for screening for diabetes and prediabetes within targeted populations:
a) White people aged over 40 years, or people from Black, Asian and minority ethnic groups aged over 25 years, with one or more of the following risk factors: a first-degree family history of diabetes; and/or BMI over 25 kg/m2; and/or waist measurement ≥ 94 cm for White and Black men, ≥ 80 cm for White, Black and Asian women, and ≥ 90 cm for Asian men; and/or systolic blood pressure ≥ 140 mmHg or diastolic blood pressure ≥ 90 mmHg or treated hypertension; and/or HDL-cholesterol ≤ 0.35 g/l (0.9 mM) or triglycerides ≥ 2 g/l (2.2 mM) or treated dyslipidemia;
b) women with a history of gestational diabetes or with a child weighing > 4 kg at birth;
c) people with a history of temporarily induced diabetes, e.g. by steroids;
d) people with ischaemic heart disease, cerebrovascular disease or peripheral vascular disease;
e) women with polycystic ovary syndrome who have a BMI ≥ 30 kg/m²;
f) people with severe mental health problems and/or receiving long-term anti-psychotic drugs;
g) people with a history of IGT or IFG.
Cosson et al. [224] performed OGTT in 933 overweight or obese patients with a mean age of 39 years and free of known glycemic abnormalities. Their FINDRISC score was calculated retrospectively from the clinical files. Prediabetes or diabetes was diagnosed in 26 % of the subjects, of whom 75 % would not have been diagnosed with FPG alone. Selecting subjects with a FINDRISC ≥ 11 to be screened directly with an OGTT had a sensitivity of 78 % and a specificity of 44 %, and limited the number of OGTTs to 575 (60 % of the study sample) [224] (B).
In patients with established CVD but without known diabetes, the percentage with IFG or undiagnosed diabetes according to FPG exceeds 17 %, but the percentage with prediabetes or diabetes according to OGTT is far higher (> 50 %). In other words, among patients with CVD and glucose abnormalities, it is in most cases the 2-hour PG that is elevated, whereas FPG is often normal [225]. Therefore, in patients with CVD a diabetes risk score can be applied, but an OGTT should be carried out in all patients [226] (B).
In practice, the screening strategy depends on local possibilities. However, due to the very high number of obese subjects, OGTT is perhaps best reserved for those with higher scores, whereas the very high prevalence of diabetes or prediabetes in CVD patients suggests that performing OGTT routinely in these patients is the best strategy ([Table 6]).
#
#### Recommendations
A As OGTT has a higher sensitivity than FPG for detecting diabetes and is the only test to detect IGT, a definite categorization of glycemic state needs an OGTT.
A Several risk scores are available and valuable for detecting current undiagnosed diabetes and/or to rate the risk for developing T2DM. The FINDRISC meets the requirements of being a simple, non-invasive and inexpensive tool and has been shown to be a reliable tool both for detecting undiagnosed diabetes and for predicting future diabetes risk in several European cohorts.
B Performance of diabetes risk scores must be assessed in the target population where they will be ultimately applied.
A After scoring for diabetes risk, it is mandatory to inform participants about their risk and to take time to deliver explanations, in particular to low educated individuals. This needs to be done appropriately in order to raise the awareness and understanding of T2DM and its risk factors, while avoiding or minimizing negative effects, such as emotional distress and denial [216].
A A community-based strategy should use a screening test as a first step in order to estimate the risk for current diabetes or prediabetes and the risk for future diabetes. The use of opportunistic screening by health-care personnel, including those working in general practice, nurses and pharmacists, is recommended. If after this first step a person is considered to be at increased risk for diabetes, they should proceed to PG measurement (either fasting or, preferably, an OGTT) in order to determine their glycemic status more precisely.
B In routine clinical practice, a screening strategy should target patients with at least one obvious risk factor for diabetes. It may consist of PG measurement at fasting or, even better, of an OGTT, owing to its higher sensitivity. One alternative is a stepped approach that includes an initial screening questionnaire (diabetes risk score). For example, given the very high number of obese subjects, the OGTT is best reserved for those with higher scores, whereas the very high prevalence of diabetes or prediabetes in CVD patients suggests that performing an OGTT routinely in these patients is the best strategy.
#
### Prevention of T2DM and its Comorbidities
#
#### Methodology
This section was compiled following a systematic search for primary studies, systematic reviews and meta-analyses [69], [178] of research on preventing the onset of T2DM. The initial search was undertaken using MEDLINE, with follow-up of cited references. The final selection was limited to randomized controlled trials (RCTs) published in English between 1979 and 2008 that featured development of T2DM as a study endpoint and used standard criteria for the diagnosis of diabetes mellitus. Key data from the major studies are summarized in [Tables 7], [8] and [9].
[Table 7]

| Ref. | Acronym | Design | Intervention (int; D = diet, PA = physical activity) | Follow-up (years) | Drop-out |
| --- | --- | --- | --- | --- | --- |
| [184], [227] | CDQDPS | Cluster randomized | D: increase vegetables, decrease alcohol and sugar, caloric and weight reduction if overweight. PA: 1–2 units/day (1 unit = 30 min slow walking/house cleaning, 20 min fast walking/cycling, or 5 min jumping rope/swimming). D + PA. Individual counselling + compliance evaluation by physician/nurse every 3 m + small groups weekly for 1 m, monthly for 3 m and every 3 m thereafter | 6 [104] (int); 20 [105] (int + follow-up) | 44/577 (8 %) at 6 years; 14/577 (2 %) at 20 years |
| [231], [232], [320], [321] | DPS | RCT | D + PA: weight reduction 5 % or more; D: < 30 E% fat, < 10 E% sat fat, 15 g fibres/1000 kcal; PA: ≥ 30 min/day. Individual, personalized dietary counselling: 7 sessions during the first year and every 3 m thereafter; voluntary gym 1–2 sessions/w | 3.2 [193] (int); 7 [195] (int + follow-up) | 42/522 (8 %) at 3.2 years; 47/522 (9 %) at 7 years |
| [234], [322] | DPP | RCT | D + PA: weight reduction 7 %; D: 25 E% fat; PA (e.g. brisk walking) 150 min/week (700 kcal/w). Goal-based behavioural intervention; case managers (1/20–26 participants); 16-session core curriculum in groups during the first 24 w; individual session every 2 m thereafter + "toolbox funds" ($100/participant/year) for expenses (cookbooks, personal trainer, aerobic tapes, reinforcers for fulfilling behavioural contracts etc.) | 2.8 | 7.5 % |
| [186], [235] | IDPP | RCT | D + PA. D: decrease energy, refined carbohydrates and fats, avoidance of sugar and inclusion of fibre-rich foods. PA: brisk walking 30 min/day or more (or comparable physical labour or other activity). Face-to-face counselling at baseline and every 6 m; telephone contacts at two weeks and monthly thereafter | 2.5 | 1.5 % (con); 9 % (int) |
| [187] | Japanese trial in IGT males | RCT (1 : 4) | Intensive vs. standard intervention. BMI goal < 22 kg/m². D: reduce amount by 10 % (smaller rice bowl etc.), increase vegetables; total fat < 50 g/day, alcohol < 50 g/day; eating out ≤ 1/day. PA: walking 30–40 min/day. Face-to-face counselling in hospital every 2–3 m | 4 | 5.6 % (con); 4.7 % (int) at year 1 |

[Table 8]

| Ref. | Acronym | Age (years) | BMI (kg/m²) | Waist (cm) (male/female) | Blood pressure (mmHg) | Lipids: TC (mmol/l) | MetSy (%) | FPG (mmol/l) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| [184], [227] | CDQDPS | 45 | 26 | – | 134/89 (con); 132/87 (int) | 5.3 (con); 5.2 (int) | – | 5.5 (con); 5.6 (int) |
| [231], [232], [320], [321] | DPS | 55 | 31 | 101 | 138/86 | 5.6 | 74 % | 6.1 |
| [234], [322] | DPP | 50 | 34 | 105 | – | – | 53 % | 5.9 |
| [186], [235] | IDPP | 45 (con); 46 (int) | 26; 26 | 91/86; 89/88 | 124/76; 122/74 | 5.1; 5.2 | 46 % | – |
| [187] | Japanese trial in IGT males | 30–60 | 24 | – | 124/79 (con); 123/78 (int) | – (con); 5.5 (int) | – | 6.2 (con); 6.3 (int) |

[Table 9]

| Ref. | Intervention | Number of T2DM cases (per 100 person-years) | Risk reduction | Numbers-needed-to-treat |
| --- | --- | --- | --- | --- |
| [184], [227] | CDQDPS | At 6 years: 90/133 (con) = 15.7; 57/130 (D) = 10.0; 58/141 (PA) = 8.3; 58/126 (D + PA) = 9.6. At 20 years: 11.3 (con); 6.9 (combined int) | At 6 years, DRR (adjusted): 0.69 (D), p < 0.3; 0.54 (PA), p < 0.0005; 0.58 (D + PA), p < 0.005; no difference between interventions. At 20 years, adjusted HRR: 0.57 (combined int) | 4.2 (D), 3.8 (PA), 4.6 (D + PA) for 6 years |
| [231], [232], [320], [321] | DPS | At 3.2 years: 59/257 (con) = 7.8; 27/265 (int) = 3.2. At 7 years: 110/257 (con) = 7.4; 75/257 (int) = 4.3 | At 3.2 years: HRR 0.42, p < 0.001. At 7 years: HRR 0.57, p < 0.001 | 22 for 1 year |
| [234], [322] | DPP | 11.0 (con); 4.8 (int) | HRR 0.42 | 6.9 for 3 years |
| [186], [235] | IDPP | 3-year cumulative incidence: 55.0 % (con), 39.3 % (int) | RRR 28.5 %, p = 0.018 | 6.4 for 3 years |
| [187] | Japanese trial in IGT males | 4-year cumulative incidence: 9.3 % (con), 3.0 % (int) | RRR 67.4 %, p < 0.001 | – |

#### Findings for prevention by lifestyle modification

#### Major T2DM prevention studies

Da-Qing Study (CDQDPS) [184], [227]. Cluster randomisation was used to allocate 577 people with IGT attending 33 participating clinics to diet alone, exercise alone, combined diet and exercise, or no intervention. Participants in the dietary intervention were encouraged to reduce weight, aiming at a BMI < 24 kg/m²; otherwise a high-carbohydrate (55–65 E%), moderate-fat (25–30 E%) diet was recommended. Participants were encouraged to consume more vegetables, reduce simple sugar intake and control alcohol intake. Participants in the exercise intervention were encouraged to increase their level of leisure-time physical activity by at least 1–2 "units" per day, one unit corresponding, for instance, to 30 minutes of slow walking, 20 minutes of cycling, 10 minutes of slow running, or 5 minutes of swimming. The cumulative 6-year incidence of T2DM was lower in all intervention groups (41–46 %) than in the control group (68 %). A 20-year follow-up [184] found that the incidence of T2DM remained persistently lower in the combined intervention group than in the control group.
There were no statistically significant differences in CVD events, CVD mortality, or total mortality between the control group and the combined intervention groups; however, the study was under-powered to detect such effects. Although the non-significant 17 % reduction in death from CVD is suggestive of an effect, lifestyle intervention has not yet been proven to prevent CVD morbidity and mortality in persons at high risk for T2DM, and further, well-powered studies are needed to confirm this. Nevertheless, there is preliminary evidence from CDQDPS [228] and other studies [229], [230]. Finnish Diabetes Prevention Study (DPS) [185], [231], [232]. A total of 522 middle-aged, overweight individuals with IGT were allocated either to the intensive lifestyle intervention or to the control group. The intervention included individualized advice and behavioural support to achieve the intervention goals: body weight reduction of ≥ 5 %, total fat intake < 30 % of energy, saturated fat intake < 10 % of energy, fibre intake of ≥ 15 g/1000 kcal, and moderate exercise for ≥ 30 min/day. Consumption of wholemeal products, vegetables, berries and fruit, low-fat milk and meat products, soft margarines, and vegetable oils rich in monounsaturated fatty acids was recommended. The participants were also individually guided to increase their level of physical activity, and individually tailored circuit-type resistance training sessions were offered to improve the functional capacity and strength of the large muscle groups. The control group received only general advice about a healthy lifestyle at baseline. Body weight reductions from baseline to years 1 and 3 were 4.5 kg and 3.5 kg, respectively, in the intervention group and 1.0 kg and 0.9 kg in the control group. The cumulative incidence of T2DM after 4 years was 11 % [95 % CI 6, 15 %] in the intervention group and 23 % [95 % CI 17, 29 %] in the control group, corresponding to a 58 % relative risk reduction.
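Note that the 58 % figure for the DPS is derived from the hazard ratio (HRR 0.42 in [Table 9]), not directly from the raw cumulative incidences; a crude incidence-based calculation gives a somewhat smaller number. A minimal illustrative sketch of the arithmetic (not part of the original analyses):

```python
# Illustrative arithmetic only: crude relative risk reduction (RRR)
# computed from cumulative incidences. The hazard-based 58 % reduction
# reported for the DPS (HRR 0.42) differs from this crude estimate
# because hazard ratios account for time at risk.

def relative_risk_reduction(incidence_control: float, incidence_intervention: float) -> float:
    """RRR = 1 - (cumulative incidence, intervention) / (cumulative incidence, control)."""
    return 1.0 - incidence_intervention / incidence_control

# DPS 4-year cumulative incidences: 23 % (control) vs. 11 % (intervention)
rrr = relative_risk_reduction(0.23, 0.11)
print(f"Crude incidence-based RRR: {rrr:.0%}")  # about 52 %
```

The gap between the crude 52 % and the reported 58 % is expected whenever follow-up times differ between arms or events accrue non-linearly.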
None of those achieving all five lifestyle goals developed T2DM. Post hoc analyses showed that adopting a diet with moderate fat and high fibre content [233], as well as increasing physical activity [97], were independently associated with a reduced risk of T2DM. After a median of seven years of follow-up, the marked reduction in the cumulative incidence of T2DM was sustained. United States Diabetes Prevention Program (DPP) [229], [234]. The DPP compared the efficacy of intensive lifestyle intervention and standard lifestyle recommendations; the study also had a metformin arm. A total of 3234 high-risk individuals with IGT and slightly elevated FPG were recruited. Lifestyle intervention in the DPP was primarily delivered by "case managers". The goals were to achieve and maintain 7 % weight reduction by consuming a healthy, low-calorie, low-fat diet and to engage in physical activities of moderate intensity (such as brisk walking) for 150 minutes per week or more. Compared with placebo, lifestyle intervention reduced T2DM risk by 58 % at 2.8 years of mean follow-up. In the lifestyle intervention group, 74 % achieved the physical activity goal of > 150 minutes/week at 24 weeks. At one year, the mean weight loss was 7 kg (about 7 %). Body weight at baseline and weight reduction during the intervention were the most important predictors of T2DM risk [60]. For each kilogram lost, the risk of T2DM was reduced by 16 %. Indian DPP (IDPP) [186], [235]. A total of 531 subjects with IGT were randomized into four groups (control, lifestyle modification, metformin, and combined lifestyle modification and metformin). Lifestyle modification included advice on physical activity (30 minutes of brisk walking per day) and on reduction of total calories, refined carbohydrates and fats, avoidance of sugar, and inclusion of fibre-rich foods.
After a median follow-up of 30 months, the relative risk reduction was 29 % with lifestyle modification, 26 % with metformin and 28 % with lifestyle modification plus metformin, as compared with control. Japanese Prevention Trial [187]. This trial randomized 458 men with IGT to receive either an intensive lifestyle intervention or standard management. Participants in the intensive intervention group visited hospital every 2–3 months to receive detailed advice to reduce body weight if BMI was ≥ 22 kg/m² (by consuming a large amount of vegetables and reducing the total amount of other food by 10 %). Intakes of fat (< 50 g per day) and alcohol (< 50 g per day) were limited, and physical activity was recommended (30–40 min per day of walking). The intervention group achieved a 67.4 % reduction in risk compared with controls. Body weight decreased by 2.2 kg and by 0.4 kg in the intervention and control groups, respectively, over 4 years. Other studies relevant to prevention of T2DM by lifestyle modification. The following studies are not included among the "major" prevention studies because of a variety of limitations, including low power, inadequate randomization, or insufficient description of the methodology or content of the lifestyle intervention. Some studies, although not primarily focusing on prevention of T2DM, have also published findings related to T2DM incidence and are summarised below. An early randomised intervention study, the Malmöhus study [236], included 267 men with IGT and found lower rates of development of T2DM (13 % vs. 29 %) in those receiving dietary intervention, although the published report neither defined clearly what type of diet was advocated nor the degree of adherence. The "Whitehall Borderline Diabetes Study" [237] assessed the effectiveness of carbohydrate restriction in the prevention of T2DM.
A total of 204 men with IGT were randomized to one of four treatment groups: (i) carbohydrate 120 g/day + placebo, (ii) "control diet" with sucrose limitation + placebo, (iii) carbohydrate 120 g/day + 50 mg phenformin, and (iv) sucrose limitation + 50 mg phenformin. After 5 years, the incidence of T2DM in the four groups was 18 %, 13 %, 18 %, and 9 %, respectively, with none of the differences significant. The feasibility of a diet and exercise intervention was assessed in 217 men with IGT in the Malmö feasibility study [238]. The effects of exercise training (twice-weekly 60-min sessions of various dynamic activities) and diet (reduction in refined sugar, simple carbohydrates, fat, saturated fat, energy and alcohol, and an increase in complex carbohydrates and vegetables) were compared with a non-randomized group receiving no intervention. After 5 years, 11 % of the intervention group and 29 % of the reference group had developed T2DM. The 12-year follow-up [154] revealed that mortality in the former IGT intervention group was lower than in those who had received "routine care" only (6.5 vs. 14.0/1000 person-years, p = 0.009). In 200 women with IGT and previous GDM, intensive versus routine dietary advice, with emphasis on the importance of regular exercise, was tested at the University of Melbourne [240]. Advice was delivered using a diet sheet and reinforced during frequent telephone contacts. Annual incidence rates for T2DM were 6.1 % in the intervention group versus 7.3 % in controls, with no significant difference between groups. In Auckland [241], 176 subjects with IGT or newly diagnosed T2DM were randomized to a dietary intervention designed solely to reduce total dietary fat or to "usual diet" controls. Despite lower 2-hour glucose, insulin and incidence of T2DM or IGT at one year in the intervention group, there was no difference after 5 years.
The SLIM Study [177], [178] assessed the effect of a diet and exercise intervention based on general public health recommendations on glucose tolerance, insulin resistance, and CVD risk factors in individuals with IGT. Altogether 147 participants with IGT were randomised to receive either intensive lifestyle intervention or standard care. After three years, mean weight change was greater in the intervention than in the control group (– 1.1 kg vs. + 0.2 kg; p = 0.011). Desired changes in insulin resistance and 2-h glucose were observed only in the intervention group. Among the 106 participants who completed the intervention, the cumulative incidence of T2DM was 18 % in the intervention group and 38 % in the control group, a relative risk of 0.42 (p = 0.025), representing a 58 % risk reduction. However, an intention-to-treat analysis, which included 121 participants, attenuated the effect, with a non-significant relative risk (RR) of 0.58 (p = 0.07). The primary aim of the Multiple Risk Factor Intervention Trial (MRFIT) [242] was prevention of coronary heart disease (CHD) among 12 866 men at high CHD risk, followed up over 6–7 years. The intervention included dietary counselling aimed at reducing saturated fat and cholesterol and increasing polyunsaturated fat, and body weight reduction if needed. In the intervention group, 11.5 % developed T2DM, compared with 10.8 % of the control group, with an HR of 1.08 [95 % CI 0.96, 1.20]. However, among smokers the HR was 1.26 [95 % CI 1.10, 1.45] and among non-smokers 0.82 [95 % CI 0.68, 0.98] (p = 0.0003). Thus, the intervention reduced the risk of T2DM only among non-smokers. A one-year trial in Italy [243] compared the effectiveness of a structured lifestyle intervention program in reducing the onset of the MetSy or T2DM in 375 persons with metabolic disorders recruited from a population-based cohort.
The intervention consisted of general information from the family physician (control group), followed by 5 training sessions with a structured core but flexible contents (intervention group only), in line with general dietary recommendations (50–60 % of energy as carbohydrates, < 30 % of energy as fat, < 10 % of energy as saturated fat, 15–20 % of energy as protein, and 20 to 30 g fibre/day) and with individualized exercise and weight loss goals. After one year, 1.8 % of the intervention group and 7.2 % of the control group had developed T2DM, with an odds ratio (OR) of 0.23 [95 % CI 0.06, 0.85] (p = 0.03).

#### Prevention of T2DM in children and adolescents

Applying the predefined search criteria failed to identify any RCTs designed to prevent the onset of T2DM in children or adolescents, and there is a need for "long-term studies of multi-ethnic cohorts followed into adulthood to determine the natural history and effectiveness of intervention strategies, particularly lifestyle" [10]. Expert opinion, drawing mainly on evidence in adults, identifies weight loss and/or prevention of weight gain as the best way to prevent T2DM. The American Academy of Pediatrics has made the following recommendations: supporting breast feeding; promoting healthy eating habits and physical activity, i.e. discouraging sedentary activities such as watching TV or playing video games; screening for family readiness for change; education about complications of obesity; maintaining a normal, healthy body weight; and avoidance of smoking [244], [245]. At present, the evidence for the long-term effectiveness of obesity prevention programs in children and adolescents is insufficient. The best results have been obtained when schools and families are involved [246]. Nevertheless, on the basis of evidence on the determinants of obesity, lifestyle changes are strongly recommended for all children and adolescents at risk for overweight, IGT and T2DM.
#### Recommendations

A Intensive lifestyle interventions that encourage people to change their diet and to increase their level of physical activity should be used to prevent or delay the onset of T2DM in adults with IGT. The NNT for prevention of one case of T2DM is 6.4 [95 % CI 5.0, 8.4] at mean follow-up ranging from 1.83 to 4.62 years [247].

A Weight reduction is an essential element of T2DM prevention. Sustained weight reduction of 5–7 % is sufficient to substantially lower the risk of T2DM.

B An increase in physical activity, even at a level of 30 minutes per day of moderate exercise, reduces the risk of T2DM and is therefore recommended.

B A diet with high fibre (≥ 15 g per 1000 kcal), moderate fat (≤ 35 % of total energy) and reduced saturated and trans fat (< 10 % of total energy) can lower body weight and reduce the risk of T2DM and is therefore recommended.

C Comorbidities, particularly MetSy, should be monitored and taken into account when planning the diet [119], [248].

C Currently there is no evidence from long-term prevention studies that reducing total dietary carbohydrate prevents T2DM. Carbohydrate sources should mainly be whole-grain cereals, fruit, vegetables, and legumes.

D There is no evidence from clinical trials of the effectiveness of interventions to prevent the onset of T2DM among children and adolescents. However, on the basis of physiological evidence and research in adults, it can reasonably be assumed that maintaining a healthy weight through physical activity and balanced, healthy nutrition will be key to preventing or postponing the onset of T2DM among youth.

#### Findings for prevention by pharmaceutical treatment

Studies of the effectiveness of drug treatment in preventing or delaying the onset of T2DM have been performed mostly in persons at high risk of T2DM, such as those who are obese and/or exhibit IGT, or women with a history of GDM.
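The NNT figures quoted in the recommendations and tables can be checked with simple arithmetic: the number needed to treat is the reciprocal of the absolute risk reduction over the stated follow-up period. A minimal sketch (illustrative only, not part of the source analyses), using the IDPP 3-year cumulative incidences from [Table 9]:

```python
# Illustrative arithmetic: NNT = 1 / ARR, where
# ARR = (cumulative incidence, control) - (cumulative incidence, intervention)
# over the same follow-up period.

def number_needed_to_treat(incidence_control: float, incidence_intervention: float) -> float:
    """Number needed to treat for the follow-up period over which the incidences were measured."""
    arr = incidence_control - incidence_intervention  # absolute risk reduction
    return 1.0 / arr

# IDPP: 55.0 % (control) vs. 39.3 % (lifestyle) cumulative incidence over 3 years
print(round(number_needed_to_treat(0.550, 0.393), 1))  # 6.4, matching the quoted NNT of 6.4 for 3 years
```

Because the NNT is tied to the follow-up period of the underlying incidences, an NNT of "6.4 for 3 years" cannot be compared directly with one computed over a different horizon.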
Key data from these studies are summarized in [Tables 10], [11], [12].

[Table 10]

| Ref. | Acronym | Design | Intervention (D = diet, PA = physical activity, P = placebo) | Follow-up (years) | Drop-out | Adverse effects |
| --- | --- | --- | --- | --- | --- | --- |
| [249] | SMOMS | RCT, DB, PC | 8-w very-low-calorie D, then D + PA + orlistat (O, 3 × 120 mg/d; n = 153) or + P (n = 156) | 3 | 33 % (O); 37 % (P) | gastrointestinal: 88 % (O); 63 % (P) |
| [250] | XENDOS | RCT, DB, PC | Low-calorie D + PA + orlistat (O, 3 × 120 mg/d; n = 1640) or + P (n = 1637) | 4 | 48 % (O); 66 % (P) | gastrointestinal: 36 % (O); 23 % (P) |
| [258] | BOTNIA | RCT, DB, PC, + 12 m washout | Glipizide (G, 2.5 mg/d; n = 17) or P (n = 17) | 1.5 | 41 % (G); 32 % (P) | hypoglycemia |
| [255] | STOP-NIDDM | RCT, DB; 3-m washout (data not available) | D + PA prescription + acarbose (A, 3 × 100 mg/d; n = 682) or + P (n = 686) | 3.3 (A); 3.5 (P) | 30 % (A); 18 % (P) | gastrointestinal: 13 % (A); 3 % (P) |
| [65], [260] | DPP | RCT, DB, intention-to-treat | Standard lifestyle recommendation + metformin (M, 2 × 850 mg/d; n = 1037), + intensified D + PA (n = 1079), + P (n = 1082), or + troglitazone (T; discontinued due to safety concerns) | 2.8 | 7.5 % | gastrointestinal (events/100 person-years): M 78, D + PA 24, P 31 |
| [186] | IDPP | RCT | Individual D modification + standard lifestyle advice (con; n = 136), + D + PA (n = 133), + metformin (M, 2 × 500/250 mg/d; n = 133), or + combined D + PA + M (n = 129) | 2.5 | 26 | gastrointestinal (M, D + PA + M): 5; hypoglycemic symptoms (M, D + PA + M): 22 |
| [262] | TRIPOD | RCT, DB, PC, open-label follow-up | Lifestyle (standard D + PA) + troglitazone (T, 400 mg/d; n = 133) or + D + PA (n = 133) | 2.6 (T); 2.3 | 19 (T); 11 | – |
| [188] | DREAM | RCT, DB, PC, 2 × 2 factorial design + 2–3 m washout; 17-d single-blind P run-in, compliant patients enrolled; D + PA | + Rosiglitazone (RO, 8 mg/d; n = 2635) or + P + lifestyle (n = 2634) | 3 | 772 (RO); 658 (P) | CV events/HF: RO 75/14; P 55/2 (ns/p = 0.01) |
| [264] | DREAM | RCT, DB, PC, 2 × 2 factorial design + 2–3 m washout | + Ramipril (RA, up to 15 mg/d; n = 2623) or + P + lifestyle (n = 2646) | 3 | 27 % (RA); 22 % (P) | cough/angioedema: 9.7 %/0.1 % (RA); 1.8 %/0.2 % (P) |

[Table 11]

| Ref. | Acronym | Age (years) | BMI (kg/m²) | Waist (cm) (male/female) | Blood pressure (mmHg) | Lipids: HDL‐C (mmol/l) | Lipids: TG (mmol/l) | n; MetSy or diabetes risk | FPG (mmol/l) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| [249] | SMOMS | 47 (O); 47 (P) | 37; 38 | 119; 119 | 144/91; 144/91 | 1.1; 1.2 | 2.4; 2.5 | 309 obese + MetSy | 6.4; 6.3 |
| [250] | XENDOS | 43 (O); 44 (P) | 37; 37 | 115; 115 | 131/82; 130/82 | 1.2; 1.2 | 1.9; 1.9 | 3277 obese, of whom 694 IGT | 4.6; 4.6 |
| [258] | BOTNIA | 58 (G); 53 (P) | 28; 29 | 88; 90 | 143/88; 134/83 | 1.0; 1.1 | 1.8; 1.6 | First-degree relatives + IGT | 5.3; 5.3 |
| [255] | STOP-NIDDM | 54 (A); 55 (P) | 31; 31 | 102; 102 | 131; 131 | 1.2; 1.2 | 2.1; 2.1 | 1429 IGT + IFG a | 6.2; 6.2 |
| [65] | DPP | 51 | 34 | 105 | 125/79 | 1.0 | – | 3234 IGT + IFG | 6.0 |
| [186] | IDPP | 35–55 | 26 | – | 124 (con); 122 (D + PA); 121 (M); 123 (D + PA + M) | – | 1.9 (con); 2.0 (D + PA); 1.7 (M); 1.8 (D + PA + M) | 531 IGT | – |
| [262] | TRIPOD | 35 (T); 34 (P) | 31; 30 | – | – | – | – | 266 pGDM, 63 % IGT | 5.3; 5.2 |
| [188] | DREAM-Rosi | 55 (RO); 55 (P) | 31; 31 | 101/96; 102/96 | 136/83; 136/84 | – | – | 739 IFG, 4530 IGT | 5.8; 5.8 |
| [264] | DREAM-Rami | 55 (RA); 55 (P) | 31; 31 | – | 136/83; 136/83 | – | – | IGT: 1513 (RA), 1515 (P); IFG: 366 (RA), 373 (P) | 5.9; 5.9 |

a 90 % family history of diabetes, 78 % BMI > 27 kg/m², 53 % > 1 risk factor for T2DM, 51 % high blood pressure, 48 % dyslipidemia, 23 % women with a history of gestational diabetes

[Table 12]

| Ref. | Acronym | Number of T2DM cases per group | Risk reduction | Number needed to treat |
| --- | --- | --- | --- | --- |
| [249] | SMOMS | O: 8 (5.2 %); P: 17 (10.9 %) | HR 0.63 | – |
| [250] | XENDOS | O: 102 (6.2 %); P: 147 (9.0 %) | HR for all: 0.59 (p = 0.028); HR for IGT: 0.55 (p = 0.0024) c | 10 IGT subjects/4 years |
| [258] | BOTNIA | V: 5.9 %; P: 29.4 % | ARR 23.5 %; HR 0.30 | – |
| [255] | STOP-NIDDM | V: 105 (15 %); P: 165 (24 %) | ARR 8.7 %; HR 0.64, p = 0.0003 | – |
| [65] | DPP | Incidence/100 person-years: D + PA 4.8; M 7.8; P 11.0 | RRR: D + PA vs. P 58 %; M vs. P 31 %; D + PA vs. M 39 % | 6.9/3 years (D + PA); 13.9/3 years (M) |
| [186] | IDPP | – | HR: D + PA 0.62, p = 0.018; M 0.65, p = 0.029; D + PA + M 0.63, p = 0.022 | 6.4/3 years (D + PA); 6.9/3 years (M); 6.5/3 years (D + PA + M) |
| [262] | TRIPOD | T: 14 % (5.4 %/y); P: 30 % (12.1 %/y) | HR 0.45 (non-adjusted), 0.44 (adjusted) | – |
| [188] | DREAM-RO | RO: 280 (10.6 %); P: 658 (25.0 %) a | HR 0.38, p < 0.0001 b | 6.9/3 years |
| [264] | DREAM-RA | RA: 449 (17.1 %); P: 489 (18.5 %) | HR 0.91 | – |

a CV events: RO: 306 (11.6 %), P: 686 (26.0 %); b composite primary endpoint: HR 0.40, p < 0.0001; c progression from NGT to IGT: no difference

#### Antiobesity treatment

Orlistat. The Scandinavian Multicenter Orlistat in Metabolic Syndrome (SMOMS) study was performed in obese subjects with the MetSy (n = 309). After 8 weeks on a very-low-calorie diet, participants were given the intestinal lipase inhibitor orlistat or placebo in addition to lifestyle modification. Over 36 months, the incidence of diabetes was 58 % lower in the orlistat group than with placebo, with no differences in insulin secretion and activity [249]. The XEnical in the prevention of Diabetes in Obese Subjects (XENDOS) study was conducted in 3277 obese subjects, of whom 694 had IGT. They received orlistat or placebo in addition to lifestyle modification. After a median follow-up of 48 months, the HR for all patients was 0.59 (p = 0.028) and for those with IGT it was 0.55 (p = 0.0024), but there was no difference in the rate of progression from NGT to IGT [250]. Rimonabant. A sub-group analysis of the Rimonabant-In-Obesity (RIO)-Europe study compared the effects of the endocannabinoid receptor antagonist rimonabant (n = 399) with placebo (n = 123) as part of a lifestyle intervention in healthy obese subjects with a mean BMI of 36 kg/m² [251]. Within 24 months, rimonabant treatment achieved weight reduction and improvement of HDL‐C and triglycerides.
Although only 0.5 % of the rimonabant group, compared with 4.1 % of the control group, developed diabetes, the high drop-out rates (58 % and 55 %) and low incidence of T2DM limit the relevance of the study. In October 2008, rimonabant was withdrawn from the market because of concerns about its risk-benefit ratio, so it cannot be used for diabetes prevention [251]. Bariatric surgery. The Swedish Obesity Surgery (SOS) trial studied 2010 severely obese subjects (BMI > 35 kg/m²) who underwent surgery for obesity (gastric banding, gastroplasty, gastric bypass) and 2037 patients who chose conventional treatment, in a matched-pairs, non-randomized design. Subjects undergoing surgery achieved a reduction in body weight of 20–30 kg, accompanied by a reduction in cardiovascular risk factors. After 8 years, the surgical group had a greatly reduced risk of developing diabetes (OR: 0.16) [252]. Two recent reviews analyzed the available evidence for the use of bariatric surgery in overtly diabetic and in obese patients. While surgery improved T2DM in 87 % and resolved it in 79 % of cases [253], and was more effective than conventional treatment at inducing weight loss in obese patients [254], the evidence on safety is less clear owing to the limited number and quality of studies [254].

#### Oral glucose lowering drugs

Alpha-glucosidase inhibitors. In the Study-To-Prevent-NIDDM (STOP-NIDDM), persons with IGT (n = 1429) were randomized in a double-blind trial to either the alpha-glucosidase inhibitor acarbose or placebo. After a mean follow-up of 3.3 years, the acarbose-treated group achieved a 25 % and 36 % relative risk reduction in progression to diabetes compared with placebo, based on one or two OGTTs respectively, and there was some evidence of an accompanying improvement in CVD risk. The effect of acarbose was observed across all ages, at all BMIs, and in both sexes [255], [256], [257]. Sulfonylureas.
Within the BOTNIA study, 34 first-degree relatives of patients with T2DM who themselves had IGT were assigned randomly to either glipizide or placebo. At 6 months of treatment, measures of insulin secretion and action, such as fasting plasma insulin and indices of insulin resistance, as well as HDL‐C, had improved in the glipizide group. Thereafter, the treatment was withdrawn and the participants were observed for a further 12 months of washout. At 18 months, both FPG and 2-h PG were lower in the glipizide group than in the placebo group. The prevalence of T2DM was 29.4 % in the placebo group and 5.9 % in the glipizide group, corresponding to an 80 % relative risk reduction at 18 months. However, although it did not reach statistical significance in this study, there was some evidence suggesting a need for caution in the use of glipizide because of an increased frequency of symptoms suggestive of hypoglycemia [258]. Biguanides. As described above, the Whitehall Borderline Diabetes Study examined the ability of carbohydrate restriction to prevent the onset of diabetes in men with IGT (n = 204) with or without treatment with phenformin, but detected no difference in the incidence of diabetes between the groups after 5 years [259]. Also as described above, the US DPP, a multicenter RCT, tested an intensive lifestyle intervention against metformin (850 mg twice daily), troglitazone (400 mg daily) and placebo [260] in high-risk persons with IGT (n = 3234) and slightly elevated fasting plasma glucose (> 5.5 mmol/l), with about 45 % of the study population from non-Caucasian groups such as African-Americans and Hispanics. After a mean follow-up of 2.8 years, the relative reductions in progression to diabetes were 58 % for the lifestyle intervention and 31 % for metformin, versus placebo. As these data were based on an OGTT performed during ongoing treatment, another OGTT was performed after a 1–2-week washout period.
After the washout, diabetes was slightly but not significantly more common in the metformin group, with an OR of 1.49 [95 % CI 0.93, 2.38] (p = 0.098). Comparison of the probabilities of developing diabetes during the DPP and during the washout period revealed that 26 % of the metformin effect did not persist upon drug withdrawal. Nevertheless, metformin still reduced the incidence of diabetes by 25 % [65]. The IDPP studied persons with IGT (n = 531) who were slightly younger and less overweight than those in the DPP and DPS. They received the following interventions: control, lifestyle modification, metformin, or combined lifestyle modification + metformin. After a median follow-up of 30 months, the relative risk reductions versus control were between 26 % and 29 % for all study arms. The lifestyle intervention was less intensive, and the diabetes incidence was higher (55.0 % over 3 years), than in the DPP and DPS. Of note, in the Indian Diabetes Prevention Study metformin, when added to the lifestyle intervention, exerted no benefit beyond that of the lifestyle intervention alone [186]. Thiazolidinediones. "Insulin sensitizers" such as the thiazolidinediones, agonists at the peroxisome proliferator-activated receptor-gamma, have also been tested for prevention. The troglitazone arm of the DPP was discontinued because of concerns about liver toxicity. Before discontinuation (at a mean of 0.9 years), the incidence of diabetes (3.0/100 person-years) was not different from that with the intensive lifestyle intervention, but was lower than in the placebo and metformin arms. During the 3 years after troglitazone withdrawal, the incidence of diabetes was comparable to that in the placebo group, indicating that its effect did not persist upon withdrawal [261]. In the Troglitazone In Prevention of Diabetes (TRIPOD) study, Hispanic women with previous gestational diabetes (n = 235) were randomized to receive troglitazone (since withdrawn from sale) or placebo.
After a median follow-up of 30 months, the incidence of T2DM was 5.4 % with troglitazone and 12.1 % with placebo. Thus, troglitazone treatment was associated with a 56 % relative reduction in progression to diabetes, which persisted even after an 8-month washout period [262]. The Pioglitazone In Prevention of Diabetes (PIPOD) study was performed as an open-label observational trial of pioglitazone (45 mg daily) in 89 women who had participated in TRIPOD, of whom 30 had received active drug (verum) in the previous study. After a median follow-up of 36 months, 65 women had completed all study visits; the annual and cumulative incidences of diabetes were 5.2 % and 17 %, respectively. It is noteworthy that parameters of insulin resistance were not affected, whereas body weight increased [263]. In the Diabetes REduction Assessment with ramipril and rosiglitazone Medication (DREAM) study, 5269 adults with IFG or IGT, or both, and no previous cardiovascular disease were randomly assigned to receive rosiglitazone (8 mg daily) vs. placebo or ramipril (up to 15 mg) vs. placebo. After a median follow-up of 3 years, rosiglitazone treatment reduced the incidence of T2DM (HR 0.38 [95 % CI 0.33, 0.44], p < 0.0001) and of the composite primary end point of death or onset of diabetes (HR 0.40 [0.35, 0.46], p < 0.0001). In addition, rosiglitazone for 3 years increased the likelihood of regression to normoglycemia in adults with IFG or IGT, or both [188], [264]. Antihypertensive and lipid-lowering drugs. Several meta-analyses and reviews of studies on angiotensin receptor (AT1) blockers or angiotensin-converting enzyme inhibitors (ACEI) report an association with reduced risk of T2DM [265], [266]. However, occurrence of diabetes was not a primary endpoint in these studies, and the methods used to diagnose and detect diabetes were heterogeneous. No prospective RCT had examined the ability of antihypertensive drugs to prevent diabetes prior to the DREAM study.
After a median follow-up of 3 years, ramipril treatment was not associated with a lower incidence of diabetes (HR 0.91 [0.80, 1.03]) or death, but it was associated with an increased probability of regression to normoglycemia compared with placebo (HR 1.16, p = 0.001). At the end of the study, the median 2-h PG, but not FPG, was slightly lower in the ramipril group than in the placebo group (p = 0.01) [264]. Post hoc analyses of placebo-controlled statin trials show conflicting results on associations between statin therapy and diabetes incidence [267], [268], [269], [270]. However, there is currently no prospective RCT investigating the effect of lipid-lowering drugs on diabetes onset.

#### Recommendations

A In persons with IGT, metformin and acarbose can be used as second-line strategies for prevention of T2DM, provided that the drugs are tolerated (gastrointestinal side-effects) and contraindications to metformin therapy (kidney and liver diseases, hypoxic conditions) are considered [65], [179], [186], [255].

A In obese people with or without IGT, carefully monitored anti-obesity treatment with orlistat, in addition to intensive lifestyle modification, can be used as a second-line strategy to prevent T2DM.

C In severely obese patients at high risk of T2DM and CVD, bariatric surgery in addition to careful monitoring and lifestyle change can induce sustained weight loss [252], [271], but long-term safety is less clear, so it cannot be recommended for diabetes prevention at present.

C Glucose-lowering drugs such as glipizide or the thiazolidinediones may reduce the risk of T2DM in certain high-risk groups, but long-term efficacy and/or safety are unclear, so these drugs cannot be recommended for diabetes prevention at present [179].

C Antihypertensive and lipid-lowering drugs cannot be recommended for the prevention of T2DM at present.
#### Considerations of societal and public health aspects

International organizations (IDF, EASD, WHO) have issued consensus statements on prevention programs [172], [272], and a number of national or regional programs have already been implemented at the societal level [273], [274]. The pan-European DE-PLAN project has further described public health approaches for implementing prevention of T2DM at the primary care level [239]. The majority of the prevention programs designed at community and national levels are based on the implementation of lifestyle interventions in public health and primary healthcare settings [185]. Data on the efficiency of the population approach are scarce, but public health surveys demonstrate its utility when it is based on the promotion of healthier lifestyles [185], [275]. This suggests that the combination and coordination of individualized and population approaches at the societal level would be useful [172]. The health sector cannot implement the population approach on its own, so other stakeholders (e.g. schools, communities, politicians, industry/employers) need to be involved [172], particularly for coordinated prevention in adolescents [276]. Prevention programs also need to address cultural differences in dietary habits and physical activity patterns, particularly in minorities [277], [278].
Governments should develop and implement national diabetes prevention plans including a wide range of initiatives in different segments of society, such as advocacy (providing support for relevant associations and organizations and a favorable economic environment), community support (to produce a favorable environment for adequate nutrition through education and for physical activity through sport facilities and urban design), fiscal and legislative changes (food pricing, labeling and advertising, as well as environmental and infrastructure regulation), engagement of the private sector (promotion of health at the workplace and ensuring healthy policies in the food industry) and media support [172].

#### Recommendations

A Interventions to prevent T2DM should be implemented at the societal level through a structured public health plan which should take into account both high-risk/targeted and population approaches.

C The structured plan should also include specific approaches to meet the needs of subpopulations, e.g. adolescents, minorities and disadvantaged groups.

B The establishment and implementation of an effective plan for the prevention of T2DM at national levels requires government initiatives comprising advocacy, community support, fiscal and legislative changes involving infrastructure, engagement of the private sector and continuous media communication.

A This plan should be part of a network with other relevant prevention programs and public educational activities.

### Supporting Change in Lifestyle Behaviour for Adults at Risk of T2DM

Changing an existing habit requires people to establish a motivation or intention to change, make decisions and action plans, recognise and overcome barriers (both practical and psychological), initiate the new routine, and then maintain the new routine, resisting temptations to relapse back to former habits.
Approaches for supporting changes in diet and physical activity vary from simple information-giving to more intensive programmes, which may or may not be based on theoretical models of behaviour change [279], [280], [281], [282], [283].

#### Methodology

The recommendations on this topic are based on a systematic review which summarised the available scientific evidence on the relationship between increased intervention effectiveness and: (i) theoretical basis, (ii) behaviour change techniques used, (iii) mode of delivery, (iv) intervention provider, (v) intervention intensity, (vi) characteristics of the target population (e.g. gender, ethnicity), (vii) setting. Our systematic "review of reviews" of the scientific evidence base on dietary and/or physical activity interventions is published separately and will be available to download from the IMAGE website (http://www.image-project.eu). Only the key recommendations and a summary of the evidence are presented here. The review examined systematic reviews, published between 1998 and 2008, which focused primarily on populations of adults at risk of developing T2DM and/or CVD. Articles were identified by searching multiple electronic databases of published evidence and other sources. The methodological quality of studies was systematically assessed using an established rating system [284], and data were only extracted from reviews meeting a pre-set quality standard. Selection and data extraction were undertaken independently by two reviewers and any disagreements resolved by discussion. We identified 3856 potentially relevant articles, of which 30 met the quality and selection criteria. We systematically extracted data relevant to the specific aims above, rated each piece of evidence using the SIGN evidence rating system, and produced detailed evidence tables.
Further discussion of the evidence in workshops of experts in primary care, behavioural science and diabetes prevention (the IMAGE study group) helped to derive the recommendations below.

#### Findings

The evidence showed that interventions to promote lifestyle changes are more likely to be effective if they target both diet and physical activity, mobilise social support, involve the planned use of established behaviour change techniques (as defined in [Table 14]) and provide a higher frequency of contacts. Specific techniques to support behaviour change and maintenance were also associated with increased effectiveness. [Table 13] provides a concise summary of the evidence we examined.

| Source | Basis for categorisation of whether studies used established behaviour change techniques or not |
| --- | --- |
| Avenell et al. [323] | Definitions of behaviour therapy varied by study but include self-monitoring, stimulus control, problem solving, relapse prevention management, cognitive restructuring, self-assertion, social support, goal setting, self-reinforcement. |
| McTigue et al. [324] | Behavioural interventions are strategies to help patients acquire the skills, motivations, and support to change diet and exercise patterns. These include barrier identification, problem solving, self-monitoring, social support, goal-setting, developing action plans, relapse prevention, stimulus control, cognitive restructuring. |
| Shaw et al. [325] | Behavioural therapy aims to provide the individual with coping skills to handle various cues to overeat and to manage lapses in diet and physical activity when they occur, and to provide the motivation essential to maintain adherence to a healthier lifestyle once the initial enthusiasm for the program has waned. Therapeutic techniques in studies relating to the benefit of using "established behaviour change techniques" include stimulus control, self-control and therapist-controlled contingencies, self-monitoring, problem solving, goal setting, behaviour modification, reinforcement. |
| NICE Obesity Guidance [308] | This guidance document comprises a summary of/expansion of reviews by Shaw et al. [325], McTigue et al. [324], and Avenell et al. [323]. Definitions vary by analysis but typically include cue avoidance, self-monitoring, stimulus control, social support, planning problem solving, cognitive restructuring, modifying thoughts, relapse prevention, reinforcement of change, coping strategies, coping imagery, goal setting, social assertion, reinforcement techniques for enhancing motivation. |

| Evidence | Evidence grade^a |
| --- | --- |
| **Overall intervention effectiveness** | |
| Dietary or diet plus physical activity interventions can have effects on weight loss. Mean net weight loss in successful interventions varies from 3–5 kg at 12 months and from 2–3 kg at 36 months. | 1++ |
| Weight loss from dietary or diet plus physical activity interventions can be sustained for 48 months and 7 years (2–3 kg). | 1+ |
| Physical activity interventions can have effects at up to 18 months of follow up. The increase for successful interventions is 30–60 minutes of moderate activity per week. | 1+ |
| Physical activity interventions can also have small effects on weight at up to 12 months of follow up (decrease 1–2 kg). | 1+ |
| Dietary and/or physical activity interventions can strongly reduce progression to T2DM (49 % relative reduction in risk) at 3.4 years of follow up. | 1++ |
| Interventions need to pay more attention to behaviour maintenance once the active phase of intervention is completed. During the active phase of interventions the net weight loss at 3–12 months was 0.08 body mass index units per month. During the maintenance phase (from 6–60 months), patients regained weight at a rate of 0.03 body mass index units per month. | 1++ |
| **Theoretical basis** | |
| Interventions for weight loss (diet or physical activity) which stated a theoretical model as their foundation delivered no greater weight loss than interventions that did not state their theoretical underpinnings. | 2+ |
| **Behaviour change techniques** | |
| The planned use of established behaviour change techniques in dietary and/or physical activity interventions increases the amount of weight loss and physical activity produced at 6 months of follow up. | 1+ |
| Interventions with diet plus physical activity produce greater weight loss than those with dietary change only at up to 24 months of follow up. | 1+ |
| Interventions based on motivational interviewing are more effective than traditional advice at least in the short-term (3 to 6 months of follow up). | 1++ |
| Using social support (usually from a family member) increases effectiveness of weight loss interventions at 12 months of follow up. | 1+ |
| Interventions which include the use of pedometers to self-monitor walking activity produce moderate weight loss and moderate increases in moderate levels of physical activity at up to 4 months of follow up. | 1+ |
| Dietary and/or physical activity interventions which include prompting of self-monitoring alongside other "self-regulatory techniques" (specific goal setting; providing feedback on performance; review of behavioural goals) produce an average weighted effect size more than twice that of other interventions. | 2+ |
| Prompting self-monitoring of behaviour and/or outcomes and prompting self-talk, alongside other intervention techniques (i.e. not in isolation), are associated with increased intervention effectiveness for both dietary and physical activity interventions. | 2+ |
| Providing instruction and the use of relapse prevention techniques (for dietary change), prompting practice, individual tailoring, setting goals and time management (for physical activity) may also enhance intervention effectiveness. | 2+ |
| **Mode of delivery** | |
| Successful physical activity and/or dietary interventions have been delivered using group, individual or combined (individual and group) modes of delivery. | 1++ |
| There is no strong evidence of any difference in the effectiveness of physical activity and/or dietary interventions between individual, group and mixed modes of delivery at up to 12 months of follow up. | 2+ |
| **Intervention provider** | |
| Successful physical activity and/or dietary interventions have been delivered by doctors, nurses, dieticians/nutritionists, exercise specialists and lay people. It should be noted, however, that these providers were often working within a multi-disciplinary team. | 1++ |
| There is no strong evidence of any difference in the effectiveness of physical activity and/or dietary interventions delivered by medically trained health professionals (doctors, nurses), other professionals (psychologists, counsellors, dieticians, health educators), public health students, or lay people at up to 12 months of follow up. | 2+ |
| **Intensity** | |
| When intensity is considered in terms of intervention duration or total contact time, there is insufficient evidence to draw any clear conclusions about its impact on the effectiveness of dietary and/or physical activity interventions. | – |
| A greater frequency of meetings, particularly in the active phase of the intervention, is associated with greater effectiveness in dietary and/or physical activity interventions at up to 15 months of follow up. | 2++ |
| A greater total number of personal contacts/intervention sessions is associated with greater intervention effectiveness at up to 36 months of follow up. | 2+ |
| **Different populations** | |
| Gender: There seems to be no substantial difference between men and women in responsiveness to dietary and/or physical activity interventions (once recruited) at up to 16 months of follow up. [However, strong gender imbalances in recruiting are reported, so there may be some types of intervention which suit women more than men.] | 2+ |
| Older people: Dietary and/or physical activity interventions can produce weight loss in older populations at up to 34 months of follow up. | 1+ |
| Older people: There is evidence suggesting that older people (over age 45) are more responsive to dietary and/or physical activity interventions at up to 34 months of follow up. | 2+ |
| Black and minority ethnic groups: Combined dietary and physical activity intervention can be effective for a wide range of ethnicities, including Caucasian, Afro-Caribbean, Hispanic, American Indian and Asian (mainly East Asian) populations at up to 34 months of follow up. | 1+ |
| Disability groups: The basic effectiveness of interventions is not yet established in adults with physical limitations. High quality evidence focusing on the effectiveness or relative effectiveness of dietary and/or physical activity interventions in adults with physical limitations or other disabilities is urgently needed. | – |
| **Settings** | |
| Successful interventions have been delivered in a wide range of settings, including health care settings, the workplace, the home, and in the community. However, evidence on the differential impact of different settings on effectiveness is lacking. | 1++ |
| There is tentative evidence that remotely delivered walking support interventions (using internet or phone-based delivery) can produce short-term effects on physical activity (at 3 months of follow up). | 2+ |

^a Based on the SIGN evidence grading system; evidence grading is explained in the methods section and in Appendix 2.

#### Recommendations

Individual level interventions for people at risk of T2DM should:

A Aim to promote changes in both diet and physical activity.

A Use established, well defined behaviour change techniques (e.g. specific goal-setting, relapse prevention, self-monitoring, motivational interviewing, prompting self-talk, prompting practice, individual tailoring, time management), as defined in [Table 14].
D A clear plan of intervention should be developed based on a systematic analysis of factors preceding, enabling and supporting behaviour change in the social/organisational context in which the intervention is to be delivered. The plan should also identify the processes of change and the specific techniques and methods of delivery designed to achieve these processes. Such planning should ensure that the behaviour change techniques and strategies used are mutually compatible and well adapted to the local delivery context. Following the procedures of the PRECEDE-PROCEED model [285], Intervention Mapping [286], or a similar intervention-design procedure is recommended.

A Work with participants to engage social support for the planned behaviour change (i.e. engage important others such as family, friends and colleagues).

B Maximize the frequency or number of contacts with participants (within the resources available).

A Include a strong focus on maintenance. It is not clear how best to achieve this, but behaviour change techniques designed to address maintenance include establishing self-monitoring of progress, providing feedback (e.g. on changes achieved in blood glucose and other risk factors), reviewing goals, engaging social support, use of relapse prevention/relapse management techniques and providing follow-up prompts.

C Building on a coherent set of 'self-regulatory' intervention techniques (specific goal setting; prompting self-monitoring; providing feedback on performance; review of behavioural goals) may provide a good starting point for intervention design. However, this is by no means the only approach available, and it is worth noting that self-regulation techniques are not normally used in isolation (e.g. techniques designed to explore and enhance initial motivation would normally be applied prior to goal-setting).
A Interventions to prevent T2DM may be delivered by a wide range of people/professions, subject to appropriate training (including training in the use of established behaviour change techniques). There are examples of successful physical activity and/or dietary interventions delivered by doctors, nurses, dieticians/nutritionists, exercise specialists and lay people, often working within a multi-disciplinary team.

A Interventions to prevent T2DM may be delivered in a wide range of settings. There are examples of successful physical activity and/or dietary interventions delivered in health care settings, the workplace, the home, and in the community.

A Interventions to prevent T2DM may be delivered using group, individual or mixed modes (individual and group). There are numerous examples of successful physical activity and/or dietary interventions using each of these delivery modes.

C No specific intervention adaptations are recommended for men or women, although steps may be needed to increase engagement/recruitment of men.

D People planning interventions should consider what adaptations may be needed for different ethnic groups (particularly with regard to culturally specific dietary advice), people with physical limitations and people with mental health problems.

### Models of Care and Economic Aspects of T2DM Prevention

#### Methodology

This section updates and extends a recent systematic review that focused on environmental determinants of cardiovascular disease, which shares many risk factors with diabetes [287]. The initial review employed an iterative process, in which PubMed and Google were searched, initially using the following search terms: (environment or community) and (measures or index or risk factors or determinants), (built environment or nutrition environment or obesogenic environment or social environment) and cardiovascular (risk factors or disease), subsequently complemented by diabetes.
For the review of the economic aspects of diabetes prevention strategies, PubMed was searched using the terms: (economics of preventing diabetes), (economics diabetes prevention) and (economic evaluation diabetes prevention), with follow up of references cited. To guide the review of health systems issues, a questionnaire was sent to all IMAGE partners seeking their input as to how they might implement a model of diabetes prevention in their own country. It was supplemented by a review of tabled results from the IMAGE questionnaire "Analyses of Type 2 Diabetes Prevention Programmes at national level" sent to all IMAGE partners. The results were then interpreted in the light of a series of studies undertaken by the European Observatory on Health Systems and Policies on the health system response to chronic diseases [288], [289], human resources in the health sector [290], [291], social insurance systems [292], and primary care in Europe [293]. This used, as a conceptual framework, the Chronic Care Model [294], which was developed in the USA but is now forming the basis of innovative models of care in several other countries. The components of the Chronic Care Model are support for self-management, appropriate delivery system design, decision support methods, and clinical information systems.

#### Findings

#### Health in all policies

A comprehensive policy response to the growing challenge of T2DM will require action at two levels. The first is action at the societal or community level, to create environments that are less obesogenic. The second is action at the individual level, to identify and intervene in those at risk. The marked increase in prevalence of T2DM has mirrored the epidemic of its major modifiable risk factor – obesity.
Large scale societal changes, and in particular the industrialisation of food production and the growth in motorised transport, both coupled with rising incomes, have contributed to a situation in which people are consuming more calories than they expend, and subsequently gaining weight at an alarming rate. This phenomenon is seen in its most extreme form in populations that have evolved over generations in settings at risk of periodic famine, who now have access to secure and plentiful food supplies and where motorised transport has brought about greatly reduced physical activity [225]. This has led researchers to coin the term “obesogenic environment” [295]. This is defined as “the sum of influences that the surrounding opportunities or conditions of life have on promoting obesity in individuals or populations” [296]. Thus the increasing prevalence of T2DM has its origins in policies that lie far outside the health sector. These have recently been reviewed elsewhere [287], but in brief, obesity is correlated with both objective measures of the environment, such as “walkability” [297] and urban sprawl [298] or the structure of the retail food market [299] as well as how individuals perceive the environment in which they live (for example, fear of violence as a deterrent to physical activity) [300]. Many of these factors are modifiable, for example by incorporating health considerations into urban planning [262] or fiscal or legislative changes affecting food marketing (such as advertising bans or taxes on “junk” foods [233]). Consequently, no matter how intensive they are, individualised interventions within the health sector can only begin to overcome the pervasive forces arising from the environments within which people work, as recognised by the IDF [172] and the American Heart Association (AHA) [301]. Similarly, a key element of the European Union's public health strategy is Health in All Policies [302]. 
However, while there is a large volume of evidence correlating characteristics of the environment with levels of obesity, and thus T2DM, there are no population-based intervention studies (for example, to change the built environment or retail food sector) that have been shown to reduce obesity. This is unsurprising given the complexity and scale of such potential interventions. Nevertheless, on the basis of the clearly demonstrated association between the environment and obesity, we recommend that any individualised model of care for diabetes prevention is accompanied by other policy responses to address obesity prevalence in the population (grade A).

#### The economic aspects of diabetes prevention strategies

Using data from 2001 and 2002, the average annual cost of health care for T2DM in Italy was estimated to be 1910 €/patient, with 52 % of costs accounted for by medication, 28 % by hospitalisation, and 11 % by diagnostic procedures [303] (2). An earlier Swedish study calculated the annual direct and indirect costs of diabetes per person to be approximately 6338 €/year: 28 % of the costs were for healthcare, 41 % for lost productivity and 31 % fell on the municipality and relatives [304]. The recent IDF Diabetes Atlas [305] estimates that, in 2010, USD 105.5 billion will be spent on healthcare for diabetes in Europe, equating to a mean expenditure of USD 2046 per person with diabetes in the region [306]. Evidently, the cost is high, emphasising the importance of identifying cost-effective strategies for prevention. The DPP group evaluated the progression of disease, costs and quality of life of their own program using a Markov simulation model. They estimated that, compared with placebo, the lifestyle intervention delayed the onset of T2DM by 11 years and decreased the absolute incidence by 20 %. This translated into a cost per QALY of approximately 1100 US$, or 8800 US$ from a societal perspective, and the intervention was cost-effective across all age groups.
Further, in a sensitivity analysis, cost-effectiveness improved.
However, the DPP group's analysis was later challenged by independent researchers who found the cost-effectiveness to be more modest: they concluded that "lifestyle modification … should be recommended to all high-risk people", but that the DPP may be "too expensive for health plans or a national program to implement", and suggested that lower cost methods would be needed for "real world" delivery [307] (2).
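To make the modelling approach concrete, the following is a minimal sketch of a three-state Markov cohort model of the kind used in such evaluations. All states, transition probabilities, costs and quality-of-life weights are invented for illustration; they are not figures from the DPP or any cited analysis.

```python
# Minimal three-state Markov cohort model: high risk (IGT) -> T2DM -> dead.
# Each year the cohort is redistributed by the transition matrix, and
# per-state costs and quality-of-life weights are accumulated.

# Annual transition probabilities (rows = from-state, cols = to-state);
# all values are hypothetical.
P = [
    [0.90, 0.08, 0.02],  # from IGT: stay, develop T2DM, die
    [0.00, 0.95, 0.05],  # from T2DM: stay, die
    [0.00, 0.00, 1.00],  # dead is absorbing
]
COST = [200.0, 1900.0, 0.0]  # assumed annual cost per state (EUR)
UTIL = [0.90, 0.75, 0.0]     # assumed quality-of-life weight per state

def run_cohort(years, start=(1.0, 0.0, 0.0)):
    """Propagate a cohort through the model; return total cost and QALYs."""
    state = list(start)
    total_cost = total_qalys = 0.0
    for _ in range(years):
        total_cost += sum(s * c for s, c in zip(state, COST))
        total_qalys += sum(s * u for s, u in zip(state, UTIL))
        state = [sum(state[i] * P[i][j] for i in range(3)) for j in range(3)]
    return total_cost, total_qalys

cost, qalys = run_cohort(10)
print(f"10-year cost per person: {cost:.0f} EUR, QALYs: {qalys:.2f}")
```

Comparing such totals for an intervention arm (with lower IGT-to-T2DM transition probability) against a comparator arm is what yields the cost-per-QALY figures reported above.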
One study examined cost-effectiveness in a European context, modelling the application of the intervention used in the DPS to a hypothetical Swedish population of 60 year olds [308]. It assumed that those at risk, on the basis of IGT, had already been identified (so a more comprehensive evaluation would need to incorporate these costs). It took a societal perspective and reported a cost per QALY of 2363 €.
In a separate study, a group in the UK conducted a cost-effectiveness analysis of screening for T2DM and IGT, followed by lifestyle interventions in those found to have IGT [309] (2). They used a hybrid decision tree/Markov model to simulate the long term (50 year time horizon) effects of the strategy, in terms of both clinical and cost-effectiveness outcomes. For each QALY gained, the estimated cost of the strategy was 6242 £ (7802 €). They found that, given a willingness-to-pay threshold of 20 000 £ (25 000 €), the probability of the intervention being cost-effective was 93 % and so concluded that, in a population aged 45 at above-average risk, the strategy seemed to be cost effective.
However, a quite different conclusion emerged from a recent meta-analysis [310] (2+). This noted that most of the available evidence came from research settings, among high-risk populations and that, despite encouraging evidence of effectiveness, so far the economic case for a widespread lifestyle or drug intervention to prevent development of T2DM has not been made.
#### Adapting a model of care to the circumstances of each country
In considering which model of care to adopt, it is necessary to take account of the tremendous diversity of health care systems across Europe. There is a need for flexibility, with the model adopted being tailored to the circumstances of individual countries.
Ten responses to the survey among IMAGE partners conducted for this work package were obtained, from Bulgaria, Finland, Greece, Latvia, The Netherlands, Poland, Spain, Switzerland, Ukraine and the United Kingdom. The responses, along with those to the IMAGE “Analyses of T2DM Prevention Programmes at national level” questionnaire, identify very few examples of existing national diabetes (or obesity) prevention programs.
The Finnish DPS model, which has attracted moderate overall support, is viewed as facing several obstacles, mainly in gaining financial support, although there are a variety of possible sources of funding, including pharmaceutical companies, health insurance agencies and local and regional authorities, as well as national governments. In terms of settings, there was most support for primary health care, with university hospitals and endocrinologists also mentioned. There were few precedents for the delivery of such a program in the patient's home.
Two examples of potentially national level programmes were identified. The Krakow (Poland) “City” project for the prevention of CVD and T2DM, which commenced in 2002, is described as being very similar to the DPS model, with screening based on the FINDRISC and simple biochemical indices, while those at risk are offered lifestyle intervention in the form of individual consultations. Since 2005 Poland has also participated in the DE-PLAN project and there is local enthusiasm for scaling this up to national level.
In Finland, FIN-D2D, an implementation project (in the primary health care setting) of the Finnish national program for the prevention of T2DM, ran from 2003 to 2007 [274]. It was conducted in 5 hospital districts, covering a total population of 1.5 million. Subjects at high risk (identified using the FINDRISC) were offered lifestyle modification which, although drawn directly from the DPS, seemed to be somewhat more flexible in its approach: there were individual, group and self-directed interventions. At enrolment, subjects underwent a global risk assessment including a questionnaire and then, together with their health professional, agreed on the level of intervention required. Further, there was flexibility as to the setting: interventions could also be implemented outside the health care system by private or third sector organisations. At the time of writing the evaluation of FIN-D2D is not yet published, but it will be extremely important as it is one of the first projects to actually implement lifestyle interventions to prevent diabetes in the primary care setting.
Similarly, the results of the “Diabetes in Europe-Prevention using Lifestyle, Physical Activity and Nutritional Intervention Plan” (DE-Plan), which involved 25 institutions from 17 European countries, will be invaluable. The DE-Plan project is testing models of lifestyle modification intervention in individuals at high risk of T2DM. Since the programmes are being implemented in existing health care systems, they will provide important evaluations of the cost-effectiveness and feasibility of the models used [239].
#### Recommendations
A Any model of care for diabetes prevention should be accompanied by other policy responses to address the determinants of obesity in a population.
D Any model of care must be able to be adapted to the specifics of each health care system. From the evidence available to date, FIN-D2D appears to provide the most flexible approach. The evaluations of FIN-D2D, DE-PLAN and other programs such as the Dutch “Roadmap” will be invaluable in providing evidence for future recommendations on models of care. The cost-effectiveness of “real world” T2DM prevention programs has not yet been clearly established.
### Recommendations for Economic Evaluation of T2DM Prevention Strategies
Effective interventions to prevent T2DM, to treat its symptoms and to delay its complications will reduce the burden of disease to society and to patients, but they require new resources, so there is a need to analyse the cost-effectiveness of these interventions.
A The recommended perspective for economic evaluation is a broad societal perspective, including the payer's (state and local governments, health insurance companies), the provider's and the participant's perspectives of analysis [311], [312] (1++), [307], [313] (2++).
A The economic costs of the intervention, not merely the financial costs, have to be assessed. All resources used, not only those paid for, should be considered.
A The ingredient approach to cost analysis is recommended, which consists of the following steps: (i) measurement of all resources used, by resource category (e.g. personnel, materials & supplies, laboratory tests, equipment etc.), (ii) eliciting unit costs (prices) of the resources used (for the year of evaluation) and (iii) multiplying the quantity of resources used by their prices. The ingredient approach allows comparability between different intervention settings, and the evaluation for another year can easily be undertaken using revised prices of the resources used.
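The three steps of the ingredient approach amount to a quantity-times-price summation. A minimal sketch follows; the resource categories, quantities and unit prices are hypothetical, chosen only to illustrate the arithmetic:

```python
# Sketch of the "ingredient approach" to costing: for each resource
# category, quantity used x unit price, summed over categories.

def ingredient_cost(resources):
    """Total cost = sum over categories of (quantity x unit price)."""
    return sum(qty * unit_price for qty, unit_price in resources.values())

# Step (i): resources used, by category; step (ii): unit prices in EUR
# (all values invented for illustration).
resources = {
    "personnel_hours":  (120, 35.0),  # staff hours x hourly rate
    "laboratory_tests": (200, 12.5),  # tests performed x price per test
    "materials":        (200, 3.0),   # participant packs x unit price
}

# Step (iii): multiply quantities by prices and sum.
total = ingredient_cost(resources)
print(f"Total intervention cost: {total:.2f} EUR")
```

Re-evaluating for another year only requires replacing the unit prices, which is what makes the approach comparable across settings and over time.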
A It should be ensured that the time of personnel allocated to the intervention is netted out from the remaining activities.
D Where exact measurement of costs such as office materials, utilities, office space, computer etc. is difficult, the estimated standard values of these costs per person-month of personnel time involved can be applied.
B To support future planning of the necessary resources for intervention, the costs should be assessed for two periods: (i) start-up (pre-implementation phase of the programme, costs spent once), (ii) post start-up (actual program running) [239], [311], [314], and for two levels: (i) management level (costs by health care provider and authorities involved in planning, organizing, continuous training of the intervention managers, monitoring and supervision of the intervention) and (ii) participant level (all costs at the individual level of delivery of the intervention).
A Two types of costs are to be analysed: (i) direct and (ii) productivity (indirect) costs. Direct medical costs comprise the costs of identifying high-risk groups, laboratory testing, implementing and maintaining the intervention, and costs of care induced by the intervention, captured as costs of medical care outside the analysed intervention. Direct non-medical costs include out-of-pocket payments (e.g. travelling) and purchases, costs of changes in food due to the intervention, as well as the value of leisure time physical activities. It is recommended to estimate the value of time spent on the intervention by the participant using the average hourly wage of the country in the year of evaluation. Indirect costs represent the value of production lost due to absence from work or usual activity resulting from the intervention, as well as the present value of future productivity lost due to premature death either caused or averted by the intervention. The human capital approach is recommended, applying the average annual wage and unemployment rate of the country in the year of evaluation.
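As an illustration of the time-valuation rules above, the following sketch applies the human capital approach with assumed wage and employment figures; the actual values must come from national statistics for the year of evaluation.

```python
# Human capital approach sketch: participant time is valued at the
# average hourly wage; productivity losses at the average annual wage,
# adjusted for the employment rate. All figures are hypothetical.

AVG_HOURLY_WAGE = 15.0     # assumed national average hourly wage, EUR
AVG_ANNUAL_WAGE = 30000.0  # assumed national average annual wage, EUR
EMPLOYMENT_RATE = 0.92     # 1 - unemployment rate, assumed

def participant_time_cost(hours_spent):
    """Direct non-medical cost: value of time spent on the intervention."""
    return hours_spent * AVG_HOURLY_WAGE

def productivity_loss(work_days_absent, working_days_per_year=220):
    """Indirect cost: value of production lost through absence from work."""
    daily_wage = AVG_ANNUAL_WAGE / working_days_per_year
    return work_days_absent * daily_wage * EMPLOYMENT_RATE

print(participant_time_cost(20))       # 20 h of sessions and travel
print(round(productivity_loss(2), 2))  # 2 days of work absence
```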
A Three groups of effects of the intervention are to be analysed: (i) benefits achieved, measured in monetary units; (ii) effects measured in specific units such as number of cases of T2DM avoided; (iii) outcomes measured in time gained, adjusted for quality of life – Quality Adjusted Life Years (QALYs) [315]. The weights are then aggregated across time periods. The costs associated with the added years of life can be excluded from analyses [316] (2++).
B We recommend excluding from the analysis the costs associated with longer life achieved with intervention.
A Present the cost-effectiveness ratios for strategies applied. Provide an incremental cost-effectiveness analysis, comparing costs and effects with lack of intervention or with current standard of practice.
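The incremental comparison recommended above divides the extra cost of the intervention by its extra effect relative to the comparator (the incremental cost-effectiveness ratio, ICER). The sketch below uses purely hypothetical figures and is illustrative only, not part of the guideline:

```python
def icer(cost_new, effect_new, cost_comparator, effect_comparator):
    """Incremental cost-effectiveness ratio: extra cost per extra
    unit of effect (e.g. per QALY gained) versus the comparator."""
    delta_cost = cost_new - cost_comparator
    delta_effect = effect_new - effect_comparator
    if delta_effect == 0:
        raise ValueError("no incremental effect; ICER undefined")
    return delta_cost / delta_effect

# Hypothetical figures: the intervention costs 1500 EUR more and yields
# 0.25 QALYs more than current standard practice per participant.
print(icer(4000, 1.05, 2500, 0.80))  # 6000.0 EUR per QALY gained
```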
A In order to estimate the full economic impact of the intervention, the lifetime health and economic consequences of preventing T2DM (progression of disease, costs and quality of life) should be quantified. The modeling approach is recommended when direct primary or secondary empirical evaluation is not possible [198], [307], [315], [317], [318].
#
#
#
### Oral Glucose Tolerance Test (OGTT)
The oral glucose tolerance test (OGTT) is recommended by the WHO for diagnosis of T2DM.
#
#### Preparation and cautions
The OGTT should be performed in the morning, after at least three days of unrestricted carbohydrate intake (more than 150 g of carbohydrate daily). The test should not be done during an acute illness, as the results may not reflect the patient's glucose metabolism when healthy. A full test dose of glucose should not be given to a person weighing less than 43 kg, because an excessive amount of glucose may produce a false-positive result.
#
#### The OGTT procedure
The test should be performed after an overnight fast of 10 to 16 hours (water is allowed). Smoking and physical activity are not permitted during the test. Usually the OGTT is scheduled to begin in the morning (7–9 am), as glucose tolerance exhibits a diurnal rhythm with a significant decrease in the afternoon. At baseline, a blood sample for glucose determination is taken. The patient is then given a glucose solution to drink. The standard dose is 75 g of glucose in 250–300 ml of water, which should be ingested within 5 minutes. For children, the test load should be 1.75 g per kg of body weight, up to a maximum of 75 g of glucose. The next blood sample is collected 120 min after the glucose load.
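The dosing rule above (adults 75 g; children 1.75 g/kg capped at the adult maximum) can be sketched as follows. The function name is ours for illustration; this is not a clinical tool:

```python
def ogtt_glucose_dose_g(weight_kg, is_child):
    """Glucose load for the OGTT as described above: adults receive the
    standard 75 g; children receive 1.75 g per kg of body weight,
    capped at the adult maximum of 75 g."""
    if is_child:
        return min(1.75 * weight_kg, 75.0)
    return 75.0

print(ogtt_glucose_dose_g(20, is_child=True))   # 35.0 g
print(ogtt_glucose_dose_g(50, is_child=True))   # 75.0 g (capped)
```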
#
#### Plasma glucose measurement in blood samples
The processing of the samples after collection is important to ensure accurate measurement of plasma glucose. This requires rapid separation of the plasma after collection. Laboratory measurements rely upon the use of separated plasma, and only immediate separation can prevent the lowering of the glucose concentration in the sample. Only if immediate plasma separation is completely impossible should glycolysis inhibitors, e.g. sodium fluoride (6 mg per ml of whole blood), be used. Rapid cooling of the sample may also help to reduce the loss of glucose if the plasma cannot be separated immediately. In this case, the sample should be placed into ice-water immediately after collection, and the plasma should be separated within 30 minutes. The plasma should be frozen until the glucose concentration can be measured.
The International Federation of Clinical Chemistry (IFCC) recommends that all glucose-measuring devices report results as plasma values. The reason for this recommendation is that plasma glucose values are approximately 11 % higher than whole-blood glucose values measured in the same sample. Moreover, the WHO recommends venous plasma glucose as the standard for measuring and reporting. It should be noted, however, that conversion between venous and capillary plasma glucose differs for fasting and post-load values: fasting values for venous and capillary plasma glucose are identical, so conversion is necessary only for post-load glucose.
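A minimal sketch of the whole-blood-to-plasma adjustment mentioned above, assuming the approximately 11 % difference (factor 1.11); actual laboratory conversion should follow IFCC guidance:

```python
def whole_blood_to_plasma(glucose_mmol_l):
    """Approximate plasma-equivalent value from a whole-blood glucose
    reading, using the ~11 % difference noted above (factor 1.11)."""
    return glucose_mmol_l * 1.11

print(round(whole_blood_to_plasma(5.0), 2))  # 5.55
```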
#
#
### Methods and Procedures
#
#### Methods
The IMAGE project is described in detail on its website (http://www.image-project.eu/). Briefly, the development of this guideline followed a pre-defined step-wise procedure addressing:
(i) Stakeholder involvement: the multidisciplinary IMAGE guideline development group included diabetes specialists, public health and primary care health professionals, behavioural and social scientists, epidemiologists, patients' organisations, health professional organisations, health economists, and health promotion, health policy and health services researchers (for details see Acknowledgements and website). All stakeholders were consulted at numerous stages including the design of the project, definition of the scope and purpose, identification of relevant evidence and developing and refining drafts and final versions of the guideline.
(ii) Scope and purpose: the overall objectives of the guideline were developed through consultation with all stakeholders by email, teleconference and a 2-day symposium. By this process, the clinical questions and target population covered by the guideline were defined and separate working groups established to synthesise the evidence under the following headings: definitions of risk and target population; screening tools, diagnosis and detection; prevention of T2DM and its comorbidities; supporting change in lifestyle behavior for adults at risk of T2DM; models of care and economic aspects of T2DM prevention; and recommendations for economic evaluation of T2DM prevention strategies.
(iii) Evidence identification and review: systematic methods were used to identify relevant evidence using defined search strategies appropriate to the specific topic (see Methodology sections), use of multiple databases, follow up of cited references, and consultation with experts in the field. Criteria for selecting and evaluating the quality of the evidence were based only on publications in peer-reviewed scientific journals and are described in detail (see Methodology sections). Throughout the guideline, SIGN guidance was used to define the criteria for levels (quality) of evidence and grades of the resulting recommendation, which are provided at the end of each chapter. Health benefits, side effects, and risks were considered in formulating these recommendations which were linked to the supporting evidence. Prior to publication, experts externally reviewed the guideline. A procedure for updating the guideline is to be defined.
(iv) Clarity and presentation: the recommendations were reviewed to ensure they are specific and unambiguous. Contextually specific issues arising in each participating European country were discussed to minimise any misunderstanding or misinterpretation. Different options for management are clearly presented and the key recommendations are easily identifiable. The guideline is supported by tools and materials for its application (see website).
(v) Implementation and dissemination: potential organizational barriers to applying the recommendations were discussed and addressed where possible. The potential cost implications of applying the recommendations were considered (recognising that precise values will depend on national circumstances such as mix of inputs and unit costs) and the guideline presents key review criteria for monitoring and/or audit purposes. A plan for disseminating the guideline to relevant professional groups and persons with increased diabetes risk is in development.
(vi) Editorial independence: the guideline is editorially independent from the funding body. Conflicts of interest of guideline development members have been recorded in the Acknowledgements.
#
#### Procedures
At the initial meeting of the guideline development group (Munich, November 2007), the partners discussed the overall project strategies and, based on their specific expertise, assigned themselves to the different working groups. Working group leaders were decided by consensus within each group. Communication occurred within and across the working groups by email, intranet and face-to-face meetings. During a 2-day meeting (Vienna, March 2008), the available information was pre-screened, exclusion and inclusion criteria defined, methodology for evidence identification, grading and recommendation development was further discussed and additional partners allocated to the working groups. Drafts on specific topics were circulated by email and discussed at a further 1-day meeting (Helsinki, June 2008) and across the WPs at a further 2-day meeting (Mallorca, November 2008). In January 2009, the completed drafts were disseminated as a first version of the completed guideline to all stakeholders via email and intranet. After consensus was reached on the contents, the guideline was shortened and edited. Consensus on the final version of the guideline, authors list and publication strategies was achieved during the final 2-day IMAGE meeting (Lisbon, October 2009).
#
#### Strengths and limitations
The evidence-based guideline focuses primarily on the European environment. It does not address specific requirements for ethnic minority groups and people with different social and cultural backgrounds. Although the working groups took note of the specific need for prevention of obesity and diabetes in children with metabolic risk factors, it was determined that this lay outside the scope of this guideline. Although many of the interventions identified can be expected to have similar effects in children, the metabolic, psychosocial, behavioural and medical requirements may be different. Despite these limitations, the IMAGE guideline applies to more than 80 % of people with increased metabolic risk in Europe. Further work is necessary to extend the scope of the guideline and to address the needs of children and specific ethnic groups.
Univ. Prof. Dr. Michael Roden
Karl-Landsteiner Institute for Endocrinology and Metabolism
Hanusch Hospital
1140 Vienna
Austria
Institute for Clinical Diabetology, German Diabetes Center, and Department of Metabolic Diseases
University Clinics
Heinrich Heine University Düsseldorf
Auf'm Hennekamp 65
40225 Düsseldorf
Germany
Email: [email protected]
#
# Differentiability Of Multivariable Function
$$f(x,y)= \begin{cases} x\sin(1/x) + y\sin(1/y), &xy \neq 0 \\ x \sin(1/x), &y=0, x \neq 0 \\ y \sin(1/y), &x=0, y\neq 0 \\ 0, &x=y=0. \end{cases}$$ I have to check differentiability at the origin. I have shown that the partial derivatives at $(0,0)$ do not exist, and so $f$ is not differentiable at $(0,0)$. Am I right? If not, how should I do this? Thanks.
• You are correct. – Git Gud Feb 4 '15 at 9:43
• @GitGud THANKS. – godonichia Feb 4 '15 at 10:33
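For reference, the claim is confirmed by the difference quotient along the $x$-axis:
$$\frac{\partial f}{\partial x}(0,0) = \lim_{h \to 0} \frac{f(h,0)-f(0,0)}{h} = \lim_{h \to 0} \frac{h\sin(1/h)}{h} = \lim_{h \to 0} \sin(1/h),$$
which does not exist (it oscillates between $-1$ and $1$). By symmetry the same holds for $\partial f/\partial y$, so $f$ cannot be differentiable at the origin.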
# Computational Complexity of Space-Bounded Real Numbers
In this work we study the space complexity of computable real numbers represented by fast convergent Cauchy sequences. We show the existence of families of transcendental numbers which are logspace computable, as opposed to algebraic irrational numbers, which seem to require linear space. We characterize the complexity of space-bounded real numbers by quantifying the space complexities of tally sets. The latter result introduces a technique to prove the space complexity of real numbers by studying their corresponding tally sets, which is arguably a more natural approach. The results of this work present a new approach to studying real numbers whose transcendence is unknown.
## 1 Introduction
The set of computable real numbers forms the basis of a modern theory of computable analysis. Real numbers were originally studied in computational complexity theory in the seminal work of Hartmanis and Stearns [5], where it was shown that algebraic numbers are polynomial-time computable. Later, Cobham [3] showed that no finite state automaton can compute the digits of algebraic irrational numbers. From those works stemmed what today is known as the Hartmanis-Stearns conjecture, which states that if a real number is real-time computable with respect to some natural base, then that number is either rational or transcendental. The Hartmanis-Stearns conjecture has many implications, as recounted by Freivalds [4]: for example, it implies the non-existence of optimal algorithms for integer multiplication, and it would settle the transcendence of several real numbers, just to name a few. More recently, Adamczewski and Bugeaud [1] made an important breakthrough towards the Hartmanis-Stearns conjecture by showing that no algebraic irrational number is computable by a pushdown automaton. Currently, this is the only result showing that algebraic irrational numbers are not computable even in the presence of some non-constant amount of memory.
Hartmanis and Stearns [5] presented an algorithm that computes any algebraic number in polynomial time and linear space. Motivated by this fact, in this work we study real numbers that are computable in bounded space in order to understand whether linear space is a necessary amount of memory for algebraic irrational numbers. First, in Theorem 3.1 of Section 3 we show a space hierarchy theorem for real numbers, which implies that the space complexity of transcendental numbers cannot be bounded, unlike that of algebraic numbers. In Section 4 we study the space complexity of transcendental numbers and show general theorems for transcendental numbers of certain natural forms. In contrast to the situation for transcendental numbers, where space-efficient algorithms exist, it is very difficult to construct space-efficient algorithms for algebraic irrational numbers.
In order to conduct a study in the theory of space-bounded real numbers, we follow an idea initiated by Ko [6] which relates the polynomial-time computability of real numbers to classical computational complexity theory. Yu and Ko [8] initiated a study of logspace computable real numbers where they showed how open problems in the theory of space-bounded tally sets relate to representations of a real number. In Theorem 5.1 we present a characterization of the space complexity of a real number in terms of the space complexity of a tally set representing the same number. With this characterization, if we would like to design a space-efficient algorithm for a real number, it suffices to construct such an algorithm for its corresponding tally set, which is usually more natural to work with. In Theorem 6.1 we present a relation between the set of left-cut representations of a real number and its space complexity. As a result, the tally sets corresponding to a real number and its left-cut representations are polynomial-space equivalent. Finally, in Section 7 we show that constant-space machines modeled by finite automata cannot recognize irrational numbers even with the help of advice.
For several of the real numbers studied in this work there exist space-efficient algorithms, whereas for algebraic irrational numbers the best known algorithm needs to remember all previously computed digits. The results of this work thus suggest a new conjecture for algebraic irrational numbers, namely, that there is no algorithm that computes algebraic irrational numbers using sublinear space.
## 2 Preliminaries on Computable Numbers and Complexity
### 2.1 Computable Numbers
We use $\mathbb{N}$ to denote the set of the natural numbers including 0. In the rest of this paper, we will use multitape Turing machines with blank symbols denoted by $B$.
A dyadic number is a number of the form $m \cdot 2^{-n}$ for integers $m$ and $n$ with $n \geq 0$. The binary expansion of a dyadic number $d \geq 0$ is
$$d = \sum_{i=0}^{m} s_i \cdot 2^{i} + \sum_{j=1}^{n} t_j \cdot 2^{-j},$$
where each $s_i$ and $t_j$ in $\{0,1\}$ are the digits of the integer and fractional parts of $d$, respectively. We say that the string $s_m \cdots s_0 . t_1 \cdots t_n$ represents $d$, or is a representation of $d$, in base 2.
Given a representation of a dyadic number $d$, we denote by $\ell(d)$ the number of symbols in the representation of $d$ and we use $\mathrm{prec}(d)$ to denote the number of symbols to the right of the binary point in $d$. Furthermore, we let $\mathbb{D}$ denote the set of all dyadic numbers and $\mathbb{D}_n$ the set of dyadic numbers $d$ with $\mathrm{prec}(d) \leq n$.
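As a concrete illustration (ours, not from the paper), the base-2 representation of a dyadic number $m \cdot 2^{-n}$ can be produced with integer arithmetic; the helper name is illustrative:

```python
def dyadic_binary(m, n):
    """Binary representation of the dyadic number d = m / 2**n
    (for m >= 0, n >= 0): integer digits, a binary point, then
    exactly n fractional digits."""
    assert m >= 0 and n >= 0
    int_part, frac_part = m >> n, m & ((1 << n) - 1)
    frac_bits = format(frac_part, f"0{n}b") if n else ""
    return format(int_part, "b") + ("." + frac_bits if n else "")

print(dyadic_binary(13, 2))  # 13/4 = 3.25 -> "11.01"
```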
Given a function $\phi : \mathbb{N} \to \mathbb{D}$, we say that $\phi$ binary converges to a real number $x$ if for all $n$, $\phi(n) \in \mathbb{D}_n$ and $|\phi(n) - x| \leq 2^{-n}$. Thus the sequence $\phi(0), \phi(1), \dots$ is a fast-convergent Cauchy sequence. If $\phi$ binary converges to $x$, we call $\phi$ a Cauchy function of $x$. We denote by $CF_x$ the set of all Cauchy functions of $x$; see Ko [6] for more details.
The function $\phi$ is computable if there exists a Turing machine that on input $n$ in unary outputs a representation of $\phi(n)$. A real number $x$ is computable if there exists a computable function $\phi \in CF_x$.
Given a function $t$ over $\mathbb{N}$, the time (or space) complexity of a computable real number $x$ is bounded by $t$ if there exists a Turing machine that on input $1^n$ outputs a representation of a dyadic number $d$ such that $|d - x| \leq 2^{-n}$ and uses at most $t(n)$ moves (or memory cells, respectively). As a short-hand we will use $T(x)$ and $S(x)$ to denote the time and space complexities of $x$, respectively. For any function $s$ we define the class of real numbers computable in space $O(s(n))$.
###### Fact 1
Any rational number can be computed in constant space.
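As a small illustration of the definitions above (an assumed example, not from the paper), here is a Cauchy function for the rational number 1/3: on input $n$ it returns a dyadic number with precision $n$ that is within $2^{-n}$ of 1/3.

```python
from fractions import Fraction

def cauchy_one_third(n):
    """A Cauchy function for 1/3: returns a dyadic number with
    precision n such that |phi(n) - 1/3| <= 2**-n."""
    return Fraction((1 << n) // 3, 1 << n)

# Every output is within 2**-n of 1/3, as the definition requires.
for n in range(8):
    assert abs(cauchy_one_third(n) - Fraction(1, 3)) <= Fraction(1, 1 << n)
print(cauchy_one_third(4))  # 5/16
```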
Hartmanis and Stearns [5] present an algorithm for computing algebraic real numbers, which runs in linear space.
###### Fact 2
Any real algebraic number can be computed in linear space. (App.0.A)
### 2.2 Oracle Turing Machines and Reducibility
Consider any Turing machine $M$. An oracle is a subroutine that is incorporated in $M$ and can be used to ask questions with answers "yes" and "no." More formally, let $A$ be any set and let $M$ be a Turing machine that computes with $A$ as an oracle. The machine additionally has one write-only query tape, one query state and two answer states, namely the "yes" state and the "no" state. The query tape works like an output tape, that is, it is write-only, and every time a symbol is written, the head on the query tape moves immediately one cell to the right and can never move left.
In order to make a query to the oracle $A$, $M$ first writes a string on the query tape and enters the query state. Then, automatically, $M$ changes its state to the "yes" state if the query string belongs to the set $A$, or to the "no" state otherwise. When $M$ changes from the query state to one of the answer states, the query string is immediately deleted and the head is positioned over the first cell from the left. This way, making a query to $A$ takes only one step of $M$.
The time complexity of $M$ is defined as follows. For any function $t$ and any input $x$, the time complexity of $M$ is bounded by $t$ if $M$ stops within $t(|x|)$ moves on input $x$. Note that the time it takes for $M$ to write on the query tape is counted towards its time complexity, whereas each oracle query counts only as one move.
Space complexity is defined similarly. For any function $s$ and any input $x$, the space complexity of $M$ is bounded by $s$ if the maximum number of work tape cells that $M$ uses at any time of its computation on input $x$ is at most $s(|x|)$; the query tape cells used are not counted.
A set $A$ is many-one reducible to a set $B$, denoted $A \leq_m B$, if there exists a computable function $f$ such that $x \in A$ if and only if $f(x) \in B$. The set $A$ is Turing reducible to $B$, denoted $A \leq_T B$, if there exists a TM $M$ such that $M$ with oracle $B$ computes $A$.
In this work we use $\leq_m^{\log}$ and $\leq_T^{\log}$ to denote many-one and Turing reductions bounded by logarithmic space, respectively. Analogously, we use the superscript "pspace" for polynomial-space reductions. We also use $\equiv$ to denote equivalence, that is, $A \equiv B$ if and only if $A \leq B$ and $B \leq A$.
## 3 A Space-Hierarchy Theorem
From Fact 2 we know that any algebraic real number is computable in linear space. In this section, however, we show in Theorem 3.1 that there exists an infinite space hierarchy of computable real numbers. Thus, for transcendental numbers there is no such bound.
Before proving the main result of this section, we introduce the following technical lemma.
###### Lemma 1 (App.0.d)
Let $M$ be a Turing machine that computes a real number $x$ using space $S(n)$. There exists a machine that on input $1^n$ outputs the $n$-th digit of $x$ in space $O(S(n))$.
###### Theorem 3.1 (Space-Hierarchy Theorem for Real Numbers)
Let $s_1$ and $s_2$ be two space-constructible functions where $s_1(n) = o(s_2(n))$ and $s_2(n) \geq \log n$. Then there exists a real number computable in space $O(s_2(n))$ but not in space $O(s_1(n))$.
###### Proof
By the space hierarchy theorem of [7], there exists a recursive set $A$ that is computable in space $O(s_2(n))$ but not in space $O(s_1(n))$; in particular, there exists a machine that on input $1^n$ outputs 1 if $n \in A$ and outputs 0 if $n \notin A$ in space $O(s_2(n))$, and there is no Turing machine with a unary input that decides $A$ in space $O(s_1(n))$.
We will construct a real number $x$ from $A$ such that $x$ is computable in space $O(s_2(n))$ and not in space $O(s_1(n))$. Let $x = \sum_{n=1}^{\infty} x_n 2^{-n}$, and define $x_n = 1$ if $n \in A$ and 0 otherwise.
For the sake of contradiction, assume that $x$ is computable in space $O(s_1(n))$, that is, there exists a machine $M$ that on input $1^n$ writes a $2^{-n}$-approximation of $x$ on its output tape using space $O(s_1(n))$.
Now we construct a machine that on input $1^n$ decides if $n \in A$ using space $O(s_1(n))$, thus contradicting the fact that $A$ is not computable in space $O(s_1(n))$.
Since $M$ computes $x$ in space $O(s_1(n))$, from Lemma 1 there exists a TM $M'$ that on input $1^n$ outputs the $n$-th bit of $x$ in space $O(s_1(n))$. Thus, the machine that simulates $M'$ on input $1^n$ and outputs whatever $M'$ outputs decides $A$, and hence, the set $A$ is computable in space $O(s_1(n))$, which is a contradiction.∎
## 4 Space Complexity of Trascendental Numbers
From the Space-Hierarchy Theorem of the previous section it is understood that there is no space upper bound on the set of transcendental numbers. There are transcendental numbers, however, with "natural" definitions that can be computed efficiently.
###### Theorem 4.1
Let $f$ be any computable and strictly monotonically increasing function over the natural numbers.
Let
$$\mu_f = \sum_{k=1}^{\infty} \frac{1}{10^{f(k)}}.$$
Then the number $\mu_f$ is computable in space $O(S_f(n) + \log f(n))$, where $S_f$ is an upper bound on the space complexity of computing $f$.
###### Proof (sketch)
We can construct a TM that outputs the representation of in base 10 as follows. For each input symbol (say, the -th input symbol), we compute and store it in the work tape, which needs and space, respectively. Then, we write a 1 in the -th cell of the output tape. Note that the function is strictly monotonically increasing. Thus, the output tape-head never goes back to the left. In order to execute the above procedure, we need to track the positions of the input and the output tapes, which needs and space, respectively. Thus, our algorithm uses space. ∎
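The digit-emitting procedure of the proof sketch can be illustrated with a short Python sketch (the function name and interface are mine; the paper works with Turing-machine tapes, not Python):

```python
def mu_digits(f, n):
    """First n base-10 digits of mu_f = sum_k 10^(-f(k)) after the point.

    Mirrors the streaming procedure of the proof sketch: position i gets
    a 1 exactly when i = f(k) for some k, and because f is strictly
    monotonically increasing the output head never moves back left.
    """
    digits = []
    k, pos = 1, 1
    while pos <= n:
        target = f(k)            # next position that holds a 1
        while pos < target and pos <= n:
            digits.append('0')   # pad with zeros up to position f(k)
            pos += 1
        if pos == target:
            digits.append('1')
            pos += 1
            k += 1
    return ''.join(digits)
```

With f(k) = k², the ones land at positions 1, 4, 9, …, matching the family of Corollary 1; with f(k) = k! one obtains the digits of Liouville’s constant.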
See App.0.B for the detailed proof. Immediately we obtain the following corollary.
###### Corollary 1
Let be any positive integer. Any number of the form
$$\sum_{k=1}^{\infty}\frac{1}{10^{k^d}}$$
is computable in logspace.
In particular, the number is known to be transcendental, but is still open. Another interesting example is Liouville’s constant, which is when , and hence, from Theorem 4.1 it follows that Liouville’s constant is in .
From Fact 1 we know that finite automata or constant-space machines can only compute rational numbers. A slight change in the definition of what it means for a number to be computable, however, allows finite automata to compute some irrational numbers.
Let be the set and let in be the -ary expansion of , that is, . A real number is -automatic if there exists a finite state automaton that takes as input in radix- and outputs .
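The symbols of the definition are lost above, but a standard concrete example (my illustration, not taken from this paper) is the Thue–Morse constant, whose binary expansion is 2-automatic: a two-state automaton reading n in base 2 outputs the n-th digit.

```python
def thue_morse_digit(n):
    """n-th binary digit of the Thue-Morse sequence, computed by a
    two-state automaton reading the base-2 expansion of n.

    The state tracks the parity of 1-bits seen so far; the output is
    the label of the state reached after the whole expansion is read.
    """
    # transition[state][symbol] -> next state
    transition = {0: {'0': 0, '1': 1}, 1: {'0': 1, '1': 0}}
    state = 0
    for symbol in format(n, 'b'):
        state = transition[state][symbol]
    return state
```

Since the automaton only advances over the radix-2 input, simulating it with a logspace counter for n is exactly the situation of Theorem 4.2.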
###### Theorem 4.2 (App.0.f)
Any -automatic number is computable in logspace.
## 5 Space-Bounded Real Numbers and Tally Sets
Tally sets are languages over unary alphabets, that is, singleton alphabets. In this section, we show in Theorem 5.1 that computable real numbers are computationally equivalent to tally sets, thus establishing a strong connection between space-bounded computational complexity and the theory of computable real numbers.
Before going into the main result of this section in Theorem 5.1, we first present some technical lemmas which are also relevant for the remainder of this paper.
###### Lemma 2 (Unary simulation lemma)
Let be a TM that computes a function using space with space-constructible. There exists a TM that simulates using space and computes a function with .
###### Proof
In order for to simulate , we use a “fake tape” trick and exploit the fact that only works with unary inputs. The machine uses a tape pos, initialized to 0, that stores in binary the position of the input head of . Furthermore, will require an additional constant number of work tapes sufficient to run the simulation of .
On input , first initializes pos:=0 and repeats the following procedure until stops its computation. If , simulate one step of with input symbol ; otherwise, simulate one step of with input symbol . In either case, write on the output tape of whatever output symbol generates and update pos according to the move performed by ’s input tape head. Thus, the output of equals the output of .
The space used by the simulation of is and we only require bits to store the input tape position of .∎
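The “fake tape” trick can be sketched in Python; the `step` function, its signature, and the state names below are hypothetical stand-ins for the elided machine, chosen only to make the idea executable:

```python
def simulate_on_unary(step, n_unary, max_steps=10_000):
    """Sketch of the unary simulation lemma's 'fake tape' trick.

    `step(state, symbol) -> (state, move, output)` is a hypothetical
    one-step transition function, with move in {-1, 0, +1} and output
    in {None, '0', '1'}.  Instead of materialising the input 0^n we
    keep only the head position `pos` (O(log n) bits in binary) and
    feed the machine '0' while pos < n and a blank otherwise.
    """
    state, pos, out = 'start', 0, []
    for _ in range(max_steps):
        symbol = '0' if pos < n_unary else ' '   # blank past the input
        state, move, output = step(state, symbol)
        pos = max(0, pos + move)                 # head never leaves the tape
        if output is not None:
            out.append(output)
        if state == 'halt':
            break
    return ''.join(out)
```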
###### Lemma 3
For any given in unary, is computable in space.
###### Proof
We construct a TM that on input writes on its output tape.
1. Initialize tapes and in binary.
2. Compute in binary and store it in number.
3. Enter loop.
1. Compute and and store the results in lower-bound and upper-bound, respectively.
2. If then copy the contents of floor to the output tape and stop;
3. else increment the contents of floor and ceiling by 1.
The procedure above works by checking in each step of the loop the relation , and since is finite the loop terminates in finite time. Furthermore, tapes floor, ceiling, number, lower-bound and upper-bound use bits of storage. ∎
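The statement of Lemma 3 lost its formula in extraction; given that Lemma 4’s proof invokes it precisely for the floor of a square root, one plausible reading of the bracketing procedure is an integer square root. The sketch below is that reading, an assumption rather than the paper’s text:

```python
def floor_sqrt(n):
    """Floor of sqrt(n) by the incremental bracketing of Lemma 3's
    proof (assumed interpretation): keep candidates m and m+1 and
    advance until m*m <= n < (m+1)*(m+1).  Every stored value needs
    only O(log n) bits, although the loop runs O(sqrt(n)) times.
    """
    m = 0
    while True:
        lower, upper = m * m, (m + 1) * (m + 1)
        if lower <= n < upper:
            return m
        m += 1
```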
For any nonnegative integers and , define a pairing function as a bijective function from to given by . A pairing function from to is easily defined inductively as .
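Since the exact constants of the paper’s pairing function were lost in extraction, here is the textbook Cantor pairing with its inverse as an assumed stand-in; note that the inverse’s only non-trivial step is the floor of a square root, as in Eq.(1) of Lemma 4:

```python
from math import isqrt

def triangle(k):
    # Delta(k): the k-th triangular number k(k+1)/2
    return k * (k + 1) // 2

def pair(i, j):
    """Standard Cantor pairing N x N -> N (an illustrative variant;
    the paper's exact offsets are elided in this copy)."""
    return triangle(i + j) + j

def unpair(h):
    """Inverse pairing: recover (i, j) from h.  The diagonal index c
    is a floor of a square root, everything else is logspace
    arithmetic, mirroring Eqs.(1)-(3) of Lemma 4."""
    c = (isqrt(8 * h + 1) - 1) // 2
    j = h - triangle(c)
    i = c - j
    return i, j
```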
###### Lemma 4
For any nonnegative integers of at most bits, the pairing function and its inverse are computable in space.
###### Proof
The function is clearly computable in space, because we only need to do four additions, one multiplication and one bit shift, all of which can be done using logspace.
If , we can obtain and using the following procedure. Let , then
$$c=\left\lfloor\sqrt{2h}-\tfrac{1}{2}\right\rfloor,\qquad(1)$$
$$i=h-\Delta(c),\qquad(2)$$
$$j=c-i+2.\qquad(3)$$
The arithmetic operations of multiplication and addition in Eq.(2) and Eq.(3) can be done in logspace. For the square root in Eq.(1), all known algorithms use linear space; for the inversion of the pairing function, however, we only need the floor, which can be done in logspace by Lemma 3.∎
###### Theorem 5.1
Let be any space-constructible function such that . A real number is in if and only if the tally set
$$T_{\phi_x}=\{\,0^{\langle n,i,b\rangle}\mid n,i\in\mathbb{N},\ 1\le i\le n,\ b\in\{0,1\},\ \phi_x(n)_i=b\,\}$$
is computable in space , where denotes the -th bit in a base-2 representation of .
###### Proof
We construct a machine computing that operates as follows. Given an input , first inverts the pairing function using the length of as a parameter to obtain and in binary. If then accepts , otherwise it rejects . Since is computable in space , we only need to show that all the other steps can be done in space.
In order to invert the pairing function we need to invoke Lemma 4 two times: first to obtain and , and a second time to obtain and from , using space. This proves the first part of the implication.
Now suppose that is computable in space , and we want to prove that is computable in space . By Lemma 2, there exists a machine that computes using space a function defined as if and otherwise, where is the integer number represented by .
A machine for computing works as follows. On input , in a tape pos initialized to 0, keep a count in binary of the output tape position of , and in a tape length store in binary. Then using a tape counter initialized to 0, repeat the following procedure. Simulate with counter as input to obtain its single-symbol output, say . If then for some nonnegative integers and by Lemma 4 we can obtain and in logspace. If and , then write in the output tape of , increment pos by one and set counter:=0; otherwise, if or , increment counter by one. Repeat this procedure to obtain the first output bits of . Since runs in space and counter only stores bits, then runs in space.∎
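To make the tally-set encoding concrete, here is a small Python sketch with x = 1/3 (binary expansion 0.010101…) and the standard Cantor pairing standing in for the paper’s elided ⟨·,·⟩ — both choices are illustrative assumptions, not the paper’s definitions:

```python
from math import isqrt

def pair(i, j):
    # standard Cantor pairing, an assumed stand-in for <i, j>
    return (i + j) * (i + j + 1) // 2 + j

def unpair(h):
    c = (isqrt(8 * h + 1) - 1) // 2
    j = h - c * (c + 1) // 2
    return c - j, j

def x_bit(i):
    # i-th bit after the binary point of x = 1/3 = 0.010101...
    return (i + 1) % 2

def in_tally_set(length):
    """Membership of 0^length in T_x for x = 1/3: invert the pairing
    twice to recover n, i, b, then compare b against the i-th bit of
    the expansion of x."""
    m, b = unpair(length)
    n, i = unpair(m)
    if not (1 <= i <= n and b in (0, 1)):
        return False
    return x_bit(i) == b
```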
The proposition below presents an application of Theorem 5.1.
###### Proposition 1
Let A be any subset of and let . If is decidable in space, then is in .
Proposition 1 follows immediately from Theorem 5.1 and Lemma 5.
###### Lemma 5 (App.0.e)
.
If we let , the set of prime numbers, we have that has a 1 in every prime position and it is clearly not rational. From Proposition 1 it follows that is computable in logspace because the tally set is computable in logspace; we do not know, however, if is algebraic or transcendental for .
## 6 Space-Bounded Real Numbers and Non-Tally Sets
In the previous section we presented a characterization of space-bounded real numbers in terms of tally sets. In this section we explore relations between space-bounded real numbers and languages whose alphabets are not singletons. Languages representing real numbers were introduced by Ko [6]. Given , for each define the language as the set of strings representing dyadic rational numbers such that with .
###### Theorem 6.1
Let be any space-constructible function with . If , then is computable in space for some .
###### Proof
Let be a TM that computes in space for some . We construct another machine that on input simulates and checks if is less than or equal to the output generated by .
To make the proof work we need two technical considerations. First, must compare its input against the output of using space . To compare against using space it suffices to remember only one symbol of at a time. Second, in order to avoid simulating using linear space, we make use of the “fake tape” trick of Lemma 2 for , that is, we store the position of the input head of and simulate by feeding it one 0 symbol at a time. Note that here we cannot use Lemma 2 directly, because we need to remember the output of and compare it against the input which would use linear space.
The machine has one tape to simulate , another tape pos to record the position of the input head of , and a tape prec to store the precision of the input.
With no loss of generality, we consider dyadic numbers between 0 and 1. Let be an input for , and let be the output of on input .
The algorithm executed by is the following.
1. Compute and store it in the tape prec in binary. To do this, first initialize and scan the input; for each input symbol after the binary point, increment prec by one until a blank tape symbol is encountered.
2. Initialize and position the input head of on the first symbol after the binary point.
3. Enter a loop.
1. If , simulate one step of with input symbol ; else, simulate one step of with input symbol .
2. Update the contents of tape pos accordingly, that is, if moved its input head right or left, increment or decrement pos by one, respectively.
3. If did not generate any output symbol, then do nothing.
4. Suppose generated an output symbol and let be the current position of the input head of . If , accept; if , reject.
5. If entered a halting state, then stop the computation and accept.
Tapes pos and prec use at most bits and the simulation of uses bits.∎
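The one-symbol-at-a-time comparison at the heart of the proof can be sketched as follows (the oracle `x_bit` and the function name are hypothetical; the point is that only the current bit of the expansion is ever remembered):

```python
def dyadic_le_x(s, x_bit):
    """Streaming comparison: decide whether the dyadic rational 0.s is
    at most the truncation of x to len(s) bits, remembering only one
    bit of x at a time.  `x_bit(i)` returns the i-th bit of x after
    the binary point (a hypothetical oracle for the expansion).
    """
    for i, a in enumerate(s, start=1):
        b = x_bit(i)
        if int(a) < b:
            return True    # input strictly below x: accept
        if int(a) > b:
            return False   # input strictly above x: reject
    return True            # equal on the first len(s) bits: accept
```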
.
###### Proof
Let . We construct a machine that decides with oracle access to using space.
We use tapes length and pos to store in binary the length of the input and the position of the input head, respectively. For each with and using tapes length and pos, by Lemma 4, the pairing functions and are computable in space. Note that and have bits. Then query and to the oracle to determine the correct bit . If , reject; if accept. After checking all bits of , accept. ∎
.
###### Proof
Suppose with no loss of generality that . We construct a machine that, on input and using as oracle, decides using space.
In a tape called length we store in binary using bits. By Lemma 4, we can invert the pairing function using space. Note that each and also have bits.
The reduction implements the following procedure. Use exhaustive search to find a largest dyadic number such that . If accept , otherwise reject.
Suppose that . Then, using the procedure above we obtain a dyadic number whose first digits agree with . Therefore, and is accepted. Now suppose that . Since we have that . Hence, and is rejected.
To finish the proof, we need to show that the above mentioned procedure can be implemented in polynomial space. The tape length is used as input to invert the pairing function , for which bits suffice. During the exhaustive search procedure, we only need to remember bits, of which there are at most .∎
.
## 7 Non-uniform Deterministic Finite Automata
Constant-space machines are modeled by finite automata, and from Fact 1 it is clear that no irrational number is computable by constant-space machines. As a final result of this paper, we show that constant-space machines cannot recognize irrational numbers even in the presence of external aid, which we model as advice.
A deterministic finite automaton with advice has a read-only advice tape in which an advice string is written prior to the computation. The advice string is allowed to depend on the length of the input, but not on the input itself.
A deterministic finite automaton with advice is formally defined as a 7-tuple , where is a finite set of states, (resp. ) is a finite input (resp. output) alphabet, is a state transition function, is the initial state, is the set of halting states, and is a set of advice strings. The input is written in the input tape as , and the advice string is written in the advice tape as , where represents the length of the input , and ¢(resp. \$) is the left (resp. the right) end-marker. In the initial configuration, the input tape head and the advice tape head scan the left end-markers, and the state is in . Then, at each step of the computation, the automaton changes its state, moves the input and the advice tape heads by one cell, and outputs a symbol (which can be an empty word ) according to the state transition function. When the automaton reaches one of the halting states, it halts.
Note that our definition of deterministic finite automata with advice can be seen as the Mealy machine equipped with an advice tape.
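A minimal Python sketch of this model (the dictionary encoding of the transition function is mine; the formal 7-tuple is as defined above):

```python
def run_with_advice(delta, q0, halting, input_str, advice):
    """One-way Mealy machine with an advice tape (a sketch of the
    model).  `delta[(state, in_sym, adv_sym)] = (new_state, out)` with
    `out` a possibly empty output string; both heads advance one cell
    per step, and both tapes are framed by end-markers.
    """
    tape_in = '¢' + input_str + '$'
    tape_adv = '¢' + advice + '$'
    state, out = q0, []
    for a, b in zip(tape_in, tape_adv):
        if state in halting:
            break
        state, o = delta[(state, a, b)]
        out.append(o)
    return state, ''.join(out)
```

For illustration, a one-state machine that copies the advice symbol to the output for every input 0 computes exactly the advice string on input 0^n.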
We say that a deterministic finite automaton computes a real number if for any unary input , it outputs .
A (complete) configuration of a deterministic finite automaton with advice is represented as a triple , where is the current state, is the position of the input tape head, and is the position of the advice tape head. We define a partial configuration as a triple where is the current state, is the symbol scanned by the input tape head, and is the position of the advice tape head.
###### Lemma 6
Let an input string be . We consider the computation of a deterministic finite automaton with constant-sized advice that outputs more than symbols. Then, the output string is of the form where , and .
###### Proof
Let be the sequence of configurations made in a computation of the automaton for input . Let be a subsequence induced by all the configurations in which the tape head scans the left or the right endmarker. Let be the length of an advice string and let be the set of states of the automaton. We consider the sequence of the first configurations in , . We have the following two cases.
Case 1. There exists () such that the automaton outputs symbols along the transitions from to .
Let be a smallest number that satisfies the condition of Case 1. We consider a sequence of configurations, . Let be the sequence of partial configurations that corresponds to . Then, there exists and () such that since the number of possible partial configurations is a constant while the length of the sequence, , is . We consider the smallest and . Then, . Note that for the partial configurations, , the scanned input symbol is always . Therefore, the transitions of partial configurations from the initial configuration can be written as , i.e. once the automaton reaches the configuration , it repeats the transitions from to until it reads the left or the right end-marker in . This means that the automaton outputs where is generated by the transitions from to , is generated by the iterations of the transitions from to , and is generated from the rest of the transitions that include the transitions after . Note that is bounded above by a constant (). Also, note that for each , the automaton outputs symbols along the transitions from to . As mentioned above, . Thus, . Also, since . Recall that by the condition of Case 1, the automaton outputs symbols along the transitions from to . Thus, .
Case 2. For all (), the automaton outputs symbols along the transitions from to .
In this case, there exists and () such that since the number of possible configurations in which the automaton scans the left or the right endmarker is . Thus, the automaton repeats the transitions from to infinitely. Therefore, the transitions from the initial configuration can be written as , which means that the automaton outputs where is generated by the transitions from to , and is generated by the iterations of the transitions from to . Note that by the condition of Case 2. ∎
###### Theorem 7.1
If a deterministic finite automaton with constant-sized advice computes a real number , then is represented as , where can be infinite.
###### Proof
We consider a deterministic finite automaton with constant-sized advice that computes a real number . Let denote the output for the input . Note that by Lemma 6, we can assume that the output of the automaton is of the form where , and . Then, for any input , there exists an input such that for , the automaton outputs where , and . This means that the first symbols of can be written as for some where is a prefix of since the first symbols of to the right of the binary point agree with . Therefore, is represented as for some , where can be infinite. ∎
Theorem 7.1 implies that deterministic finite automata with constant-sized advice cannot compute any irrational numbers.
Acknowledgements. The authors thank Abuzer Yakaryilmaz for useful discussions.
## References
• [1] Adamczewski, B., Bugeaud, Y.: On the complexity of algebraic numbers I. Expansion in integer bases. Annals of Mathematics 165, 547–565 (2007)
• [2] Chiu, A., Davida, G., Litow, B.: Division in logspace-uniform NC1. RAIRO - Theoretical Informatics and Applications 35(3), 259–275 (2001)
• [3] Cobham, A.: On the base-dependence of sets of numbers recognizable by finite automata. Mathematical Systems Theory 3, 186–192 (1969)
• [4] Freivalds, R.: Hartmanis-Stearns conjecture on real time and transcendence. In: Proceedings of Computation, Physics and Beyond – International Workshop on Theoretical Computer Science. LNCS, vol. 7160, pp. 105–119. Springer (2012)
• [5] Hartmanis, J., Stearns, R.E.: On the computational complexity of algorithms. Transactions of the American Mathematical Society 117, 285–306 (1965)
• [6] Ko, K.: Complexity Theory of Real Functions. Birkhäuser (1991)
• [7] Stearns, R.E., Hartmanis, J., Lewis, P.M.: Hierarchies of memory limited computations. In: Proceedings of the 6th Annual Symposium on Switching Circuit Theory and Logical Design (FOCS). pp. 179–190 (1965)
• [8] Yu, F., Ko, K.: On logarithmic-space computable real numbers. Theoretical Computer Science 469, 127–133 (2013)
## Appendix 0.A Proof of Fact 2
Since computing the integer part is trivial, we pick a real algebraic number, say , between 0 and 1. Let be its minimal polynomial. The rational number represents the first -th bit(s) of after the decimal point. We pick a sufficiently big integer such that is the closest root to . Since is minimal, either and , or vice versa. We can easily determine the case by computing both values. We assume the first case. (We can take for the other case.)
Let be a Turing machine that on input operates as follows. The first bits of are stored in the description of as . If , the machine outputs each bit of up to . In this case, does not use any work tape and constant space suffices to output the first bits.
If , the machine operates as follows. First, write in the output tape. Let be different work tapes in . Copy on tape . Starting from repeat the procedure below times.
1. For each compute and store it on tape .
2. For each multiply with and store the result in tape .
3. Add and store the result in tape .
4. If , write 1 on the output tape and copy it on tape ; this will make
5. If , write 0 on the output tape; this will make .
Each addition and multiplication can be done in logspace. Each work tape , however, stores at most bits; this is because in the last iteration of the algorithm the tape stores .
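The bit-extraction loop of this appendix can be illustrated for a concrete algebraic number, say α = √2 − 1 with minimal polynomial p(x) = x² + 2x − 1, which is increasing on (0, 1); these concrete choices are mine, filled in for illustration:

```python
from fractions import Fraction

def algebraic_bits(p, n):
    """First n binary digits of the root of p in (0, 1), assuming p is
    increasing there with p(root) = 0 (the setting of Appendix 0.A
    with illustrative concrete choices).  Each step tests the sign of
    p at d + 2^(-k) to decide the next output bit.
    """
    d = Fraction(0)
    bits = []
    for k in range(1, n + 1):
        candidate = d + Fraction(1, 2 ** k)
        if p(candidate) <= 0:    # the root lies at or above the candidate
            d = candidate
            bits.append('1')
        else:
            bits.append('0')
    return ''.join(bits), d
```

Exact rational arithmetic stands in for the paper’s tape-by-tape evaluation of the polynomial; the work per step is polynomial evaluation at a k-bit point, matching the space analysis above.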
## Appendix 0.B Proof of Theorem 4.1
We construct a machine that outputs the representation of in base 10 as follows. The TM has three main tapes called pos_in, pos_out and res. The pos_in tape will track the position of the input tape-head. The pos_out tape will track the position of the output tape-head. The res tape will store temporary results for the computation of . Besides these three main tapes, the machine will use a finite number of extra tapes that depends on the computation of . All tapes involved will use a binary alphabet.
The algorithm is the following.
1. If the first symbol under the input tape-head is , then write 0 in the output and stop the computation.
2. Initialize tapes. Write 1 in pos_in and pos_out. Leave the others blank.
3. Write "0." in the output tape.
4. Enter a cycle
1. If the input tape-head scans (i.e., the input tape-head reaches the end of the input), break the loop.
2. Compute and store the result in res.
3. Enter cycle
• If pos_out=res, write 1 in the output tape and break the inner loop.
• If pos_out res (in this case, pos_out res ), write a 0 in the output tape and move the output tape-head to the right by one cell. At the same time, increment pos_out.
4. Move the input tape-head to the right, and increment pos_in.
At the start of iteration of the cycle of step 4(c), pos_in=. The algorithm checks if the current tape-head position of the output tape (pos_out) equals the value of res. If pos_out is not equal to res, the algorithm moves the output tape head by one cell towards res (i.e., to the right) and repeats the cycle. When , the algorithm writes a 1 in the output tape and breaks the inner loop.
We analyze the space complexity. The tape pos_in uses at most cells. In the worst case, the tape pos_out and res store the value of , thus, it uses at most cells. The amount of space to perform the computation of is bounded from above by . Thus, it follows that the total amount of space is . ∎
## Appendix 0.C Space Complexity of Arithmetic Operations on Rational Numbers
###### Lemma 7 ([2])
Given any two integer numbers and in binary, the division of by can be done in logspace.
###### Lemma 8
Any two rational numbers can be multiplied in logspace.
###### Proof
Let and be two rational numbers. We assume that are given as inputs to a Turing machine in radix-2.
The machine multiplies and and then and , which can be done in logspace. Let where and .∎
###### Lemma 9
Any two rational numbers can be added in logspace.
###### Proof
Let and be two rational numbers. We assume that are given as inputs to a Turing machine in radix-2.
The machine operates as follows. First, multiplies in logspace . Then computes , where each addition and multiplication takes logspace.∎
###### Lemma 10
Any fraction can be transformed to its lowest terms in linear space.
###### Proof
Let be a rational number. With no loss of generality assume that and that and are given as inputs in radix-2. Then execute the following algorithm.
1. Let , and .
2. Repeat until .
1. Divide and .
2. If both divisions have residue 0, let and ; else, increment by one.
The procedure above outputs reduced to its lowest terms; it takes linear space because we only need to keep and in memory at all times, and division also takes logspace.∎
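The trial-division scan of this proof can be sketched in Python (names are mine); note that the same candidate divisor is retried after a successful division, so repeated factors are removed:

```python
def lowest_terms(p, q):
    """Reduce p/q by the trial-division scan of Lemma 10: try each
    candidate divisor t in turn, dividing out repeated factors,
    instead of running a full gcd computation.
    """
    t = 2
    while t <= min(p, q):
        if p % t == 0 and q % t == 0:
            p //= t
            q //= t      # retry the same t for repeated factors
        else:
            t += 1
    return p, q
```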
###### Corollary 3
Any two rational numbers can be added and multiplied to its lowest terms in linear space.
## Appendix 0.D Proof of Lemma 1
We construct a machine that simulates . Let counter and max be two working tapes. First, sets counter:=0 and counts in binary the input size using max. Thus, max stores the input length in binary. Then, sets its input tape head at the beginning of its input tape and starts simulating using its own input tape as input for . Every time generates an output symbol, if then increment counter; otherwise, if , let write the output symbol of on ’s output tape and stop the computation.
The space utilized by is because max and counter tapes use at most space, and the simulation of uses space.∎
## Appendix 0.E Proof of Lemma 5
We construct a logspace machine with oracle access to the set . On input , using Lemma 4 compute in logspace the inverse of the pairing function to obtain . Note that each has bits. If and then accept, or if and then also accept; in other cases, reject. The reduction is computable in logspace because of Lemma 4 and the logspace computability of .∎
## Appendix 0.F Proof of Theorem 4.2
Let be a -automatic real number and let be a finite-state automaton that computes . We construct a Turing machine that on input outputs .
With no loss of generality, assume that , that is, is in the interval . Let count be a tape that keeps a counter in binary. If the input tape is empty, write 0 in the output tape and stop. If the input tape is not empty, then for each symbol in the input tape, increment count by one and simulate using the contents of count as input. Thus, for each input symbol that is scanned, write the output of on the output tape of .
Mathematics & Statistics
# Colloquia — Fall 2004
## Wednesday, December 8, 2004
Title: Minimal Generalized Interpolating Projections and the $$P$$-lambda Problem
Speaker: Bruce Chalmers (University of California-Riverside)
Time: 3:30pm-4:30pm
Place: EDU 411
Boris Shekhtman
Abstract
Given an $$n$$-dimensional Banach space $$V$$, the $$P$$-lambda problem asks for a relationship between the projection constant of $$V$$ and the (Banach-Mazur) distance of $$V$$ to the space with ball the $$n$$-cube. This talk will show how this latter quantity can be recognized as the norm of a minimal generalized interpolating projection onto $$V$$ and discuss some recent progress on the $$P$$-lambda problem.
## Friday, November 12, 2004
Title: Scarce sets with the Green function possessing the highest smoothness
Speaker: Kent State University
Time: 3:00pm-4:00pm
Place: PHY 120
Vilmos Totik
Abstract
Let $$E$$ be a regular compact subset of the real line. We relate the Green function $$g$$ for the complement of $$E$$ with respect to the extended complex plane to some conformal mapping $$f$$. We discuss the relationship between the geometry of $$E$$ and $$f(E)$$.
As an application we construct examples of sets of the minimal possible Hausdorff dimension with $$g$$ satisfying the Hölder $$1/2$$ condition locally or uniformly.
## Friday, October 29, 2004
Title: DNA Topology: Experiments and Analysis
Speaker: DeWitt L. Sumners (Distinguished Professor, Florida State University)
Time: 2:00pm-3:00pm
Place: PHY 130
Mohamed Elhamdadi and Masahiko Saito
Note: This colloquium is joint with the Biology Department.
Abstract
Cellular DNA is a long, thread-like molecule with remarkably complex topology. Enzymes which manipulate the geometry and topology of cellular DNA perform many important cellular processes (including segregation of daughter chromosomes, gene regulation, DNA repair, and generation of antibody diversity). Some enzymes pass DNA through itself via enzyme-bridged transient breaks in the DNA; other enzymes break the DNA apart and reconnect it to different ends. In the topological approach to enzymology, circular DNA is incubated with an enzyme, producing an enzyme signature in the form of DNA knots and links. By observing the changes in DNA geometry (supercoiling) and topology (knotting and linking) due to enzyme action, the enzyme binding and mechanism can often be characterized. This talk will discuss topological models for DNA strand passage and exchange in site-specific DNA recombination, and use of the spectrum of DNA knots to infer bacteriophage DNA packing in viral capsids.
## Friday, October 1, 2004
Title: Resonance and web structure in integrable systems
Speaker: Gino Biondini (State University of New York at Buffalo)
Time: 3:00-4:00 p.m.
Place: PHY 120
Wen-Xiu Ma
Abstract
We discuss a family of non-singular soliton solutions of the Kadomtsev-Petviashvili equation. We show that all of these solutions are of resonant type, consisting of an arbitrary number of line solitons in both asymptotics: namely, an arbitrary number $$M$$ of incoming solitons interacts to form an arbitrary number $$N$$ of outgoing solitons. We also describe the interaction pattern, and we show that the resonant interactions create a web-like structure having $$(M-1)(N-1)$$ holes. Finally, we show that the class of elastic $$N$$-soliton solutions of the Kadomtsev-Petviashvili equation is much broader than previously thought and includes partially and fully resonant solutions. We end by presenting a classification of all these elastic $$N$$-soliton solutions in terms of the individual soliton parameters.
ISSN 1000-1239 CN 11-1777/TP
• Paper •
### Operating and Analyzing the Reproducibility of Empty Marking Nets
Ye Jianhong1,2, Song Wen2, and Sun Shixin3
1. School of Computer Science & Technology, Huaqiao University, Quanzhou, Fujian 362021
2. School of Mathematics & Computer Engineering, Xihua University, Chengdu 610039
3. School of Computer Science & Engineering, University of Electronic Science and Technology of China, Chengdu 610054
• Online:2009-08-15
Abstract: The empty marking is reproducible if and only if there exists a non-negative T-invariant whose net representation has neither siphons nor traps and which contains a positive entry for at least one fact and goal transition. This result is extended, and it is proved that the operations of composition, insertion, deletion and substitution do not influence the reproducibility of the empty marking. Moreover, some properties related to the empty marking net are discussed. For example, a net with reproducibility of the empty marking preserves that reproducibility in its inverse net, and the empty marking in acyclic P/T nets with a positive entry for at least one fact and goal transition is reproducible if and only if the net is covered by T-invariants. In particular, if a Horn-net satisfies all the above conditions and is acyclic, then the T-invariant is realizable. These theoretical results show interesting connections to other notions, for example, to the forward and backward liveness of the empty marking. On the other hand, there are application-oriented aspects of these results; examples can be found in proving complex logic inference and checking throughness of workflow logic nets. Finally, an algorithm is proposed.
# C Code Generation for a MATLAB Kalman Filtering Algorithm
This example shows how to generate C code for a MATLAB® Kalman filter function, `kalmanfilter`, which estimates the position of a moving object based on past noisy measurements. It also shows how to generate a MEX function for this MATLAB code to increase the execution speed of the algorithm in MATLAB.
### Prerequisites
There are no prerequisites for this example.
### About the `kalmanfilter` Function
The `kalmanfilter` function predicts the position of a moving object based on its past values. It uses a Kalman filter estimator, a recursive adaptive filter that estimates the state of a dynamic system from a series of noisy measurements. Kalman filtering has a broad range of application in areas such as signal and image processing, control design, and computational finance.
### About the Kalman Filter Estimator Algorithm
The Kalman estimator computes the position vector by computing and updating the Kalman state vector. The state vector is defined as a 6-by-1 column vector that includes position (x and y), velocity (Vx Vy), and acceleration (Ax and Ay) measurements in a 2-dimensional Cartesian space. Based on the classical laws of motion:
$$\begin{cases}X=X_0+V_x\,dt\\ Y=Y_0+V_y\,dt\\ V_x=V_{x0}+A_x\,dt\\ V_y=V_{y0}+A_y\,dt\end{cases}$$
The iterative formulas capturing these laws are reflected in the Kalman state transition matrix "A". Note that by writing about 10 lines of MATLAB code, you can implement the Kalman estimator based on the theoretical mathematical formula found in many adaptive filtering textbooks.
`type kalmanfilter.m`
```matlab
% Copyright 2010 The MathWorks, Inc.
function y = kalmanfilter(z) %#codegen
dt=1;
% Initialize state transition matrix
A=[ 1 0 dt 0 0 0;...   % [x ]
    0 1 0 dt 0 0;...   % [y ]
    0 0 1 0 dt 0;...   % [Vx]
    0 0 0 1 0 dt;...   % [Vy]
    0 0 0 0 1 0 ;...   % [Ax]
    0 0 0 0 0 1 ];     % [Ay]
H = [ 1 0 0 0 0 0; 0 1 0 0 0 0 ];    % Initialize measurement matrix
Q = eye(6);
R = 1000 * eye(2);
persistent x_est p_est
% Initial state conditions
if isempty(x_est)
    x_est = zeros(6, 1);             % x_est=[x,y,Vx,Vy,Ax,Ay]'
    p_est = zeros(6, 6);
end
% Predicted state and covariance
x_prd = A * x_est;
p_prd = A * p_est * A' + Q;
% Estimation
S = H * p_prd' * H' + R;
B = H * p_prd';
klm_gain = (S \ B)';
% Estimated state and covariance
x_est = x_prd + klm_gain * (z - H * x_prd);
p_est = p_prd - klm_gain * H * p_prd;
% Compute the estimated measurements
y = H * x_est;
end   % of the function
```
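For readers without MATLAB, the same predict/update cycle can be sketched in NumPy. This transcription is mine, not part of the MathWorks example; the constant-acceleration model, Q = I, and the R = 1000·I measurement noise follow the MATLAB code above:

```python
import numpy as np

def make_kalman_filter(dt=1.0):
    """NumPy sketch of kalmanfilter.m: constant-acceleration model
    with state [x, y, Vx, Vy, Ax, Ay], measuring position only."""
    A = np.eye(6)
    A[0, 2] = A[1, 3] = A[2, 4] = A[3, 5] = dt   # position<-velocity<-accel
    H = np.zeros((2, 6))
    H[0, 0] = H[1, 1] = 1.0                      # measure x and y only
    Q = np.eye(6)                                # process noise
    R = 1000.0 * np.eye(2)                       # measurement noise
    x_est = np.zeros((6, 1))
    p_est = np.zeros((6, 6))

    def step(z):
        nonlocal x_est, p_est
        z = np.asarray(z, dtype=float).reshape(2, 1)
        x_prd = A @ x_est                        # predicted state
        p_prd = A @ p_est @ A.T + Q              # predicted covariance
        S = H @ p_prd @ H.T + R
        gain = np.linalg.solve(S, H @ p_prd).T   # Kalman gain, (S \ B)'
        x_est = x_prd + gain @ (z - H @ x_prd)   # corrected state
        p_est = p_prd - gain @ H @ p_prd         # corrected covariance
        return H @ x_est                         # estimated position
    return step
```

The closure plays the role of MATLAB's `persistent` variables: repeated calls to `step` carry the state and covariance forward, so the estimate converges toward a steady measurement over successive calls.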
The position of the object to track is recorded as x and y coordinates in a Cartesian space in a MAT file called `position_data.mat`. The following code loads the MAT file and plots the trace of the positions. The test data includes two sudden shifts or discontinuities in position which are used to check that the Kalman filter can quickly re-adjust and track the object.
```
load position_data.mat
hold; grid;
```
```Current plot held ```
```
for idx = 1: numPts
    z = position(:,idx);
    plot(z(1), z(2), 'bx');
    axis([-1 1 -1 1]);
end
title('Test vector for the Kalman filtering with 2 sudden discontinuities ');
xlabel('x-axis'); ylabel('y-axis');
hold;
```
```Current plot released ```
### Inspect and Run the `ObjTrack` Function
The `ObjTrack.m` function calls the Kalman filter algorithm and plots the trajectory of the object in blue and the Kalman filter estimated position in green. Initially, you see that it takes a short time for the estimated position to converge to the actual position of the object. Then, two sudden shifts in position occur. Each time, the Kalman filter readjusts and tracks the object after a few iterations.
`type ObjTrack`
```
% Copyright 2010 The MathWorks, Inc.
function ObjTrack(position)
%#codegen
% First, setup the figure
numPts = 300;                 % Process and plot 300 samples
figure; hold; grid;           % Prepare plot window
% Main loop
for idx = 1: numPts
    z = position(:,idx);      % Get the input data
    y = kalmanfilter(z);      % Call Kalman filter to estimate the position
    plot_trajectory(z,y);     % Plot the results
end
hold;
end     % of the function
```
`ObjTrack(position)`
```Current plot held ```
```Current plot released ```
### Generate C Code
The `codegen` command with the `-config:lib` option generates C code packaged as a standalone C library.
Because C uses static typing, `codegen` must determine the properties of all variables in the MATLAB files at compile time. Here, the `-args` command-line option supplies an example input so that `codegen` can infer the types of the function inputs.
The `-report` option generates a compilation report that contains a summary of the compilation results and links to generated files. After compiling the MATLAB code, `codegen` provides a hyperlink to this report.
```
z = position(:,1);
codegen -config:lib -report -c kalmanfilter.m -args {z}
```
```Code generation successful: To view the report, open('codegen/lib/kalmanfilter/html/report.mldatx') ```
### Inspect the Generated Code
The generated C code is in the `codegen/lib/kalmanfilter/` folder. The files are:
`dir codegen/lib/kalmanfilter/`
```
.                    kalmanfilter.h
..                   kalmanfilter_data.c
.gitignore           kalmanfilter_data.h
_clang-format        kalmanfilter_initialize.c
buildInfo.mat        kalmanfilter_initialize.h
codeInfo.mat         kalmanfilter_rtw.mk
codedescriptor.dmr   kalmanfilter_terminate.c
compileInfo.mat      kalmanfilter_terminate.h
examples             kalmanfilter_types.h
html                 rtw_proj.tmw
interface            rtwtypes.h
kalmanfilter.c
```
### Inspect the C Code for the `kalmanfilter.c` Function
`type codegen/lib/kalmanfilter/kalmanfilter.c`
```
/*
 * File: kalmanfilter.c
 *
 * MATLAB Coder version            : 5.5
 * C/C++ source code generated on  : 31-Aug-2022 01:26:17
 */

/* Include Files */
#include "kalmanfilter.h"
#include "kalmanfilter_data.h"
#include "kalmanfilter_initialize.h"
#include <math.h>
#include <string.h>

/* Variable Definitions */
static double x_est[6];
static double p_est[36];

/* Function Definitions */
/*
 * Arguments    : const double z[2]
 *                double y[2]
 * Return Type  : void
 */
void kalmanfilter(const double z[2], double y[2])
{
  static const short R[4] = {1000, 0, 0, 1000};
  static const signed char b_a[36] = {1, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0,
                                      1, 0, 1, 0, 0, 0, 0, 1, 0, 1, 0, 0,
                                      0, 0, 1, 0, 1, 0, 0, 0, 0, 1, 0, 1};
  static const signed char iv[36] = {1, 0, 1, 0, 0, 0, 0, 1, 0, 1, 0, 0,
                                     0, 0, 1, 0, 1, 0, 0, 0, 0, 1, 0, 1,
                                     0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1};
  static const signed char c_a[12] = {1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0};
  static const signed char iv1[12] = {1, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0};
  double a[36];
  double p_prd[36];
  double B[12];
  double Y[12];
  double x_prd[6];
  double S[4];
  double b_z[2];
  double a21;
  double a22;
  double a22_tmp;
  double d;
  int i;
  int k;
  int r1;
  int r2;
  signed char Q[36];
  if (!isInitialized_kalmanfilter) {
    kalmanfilter_initialize();
  }
  /* Copyright 2010 The MathWorks, Inc. */
  /* Initialize state transition matrix */
  /* % [x ] */
  /* % [y ] */
  /* % [Vx] */
  /* % [Vy] */
  /* % [Ax] */
  /* [Ay] */
  /* Initialize measurement matrix */
  for (i = 0; i < 36; i++) {
    Q[i] = 0;
  }
  /* Initial state conditions */
  /* Predicted state and covariance */
  for (k = 0; k < 6; k++) {
    Q[k + 6 * k] = 1;
    x_prd[k] = 0.0;
    for (i = 0; i < 6; i++) {
      r1 = k + 6 * i;
      x_prd[k] += (double)b_a[r1] * x_est[i];
      d = 0.0;
      for (r2 = 0; r2 < 6; r2++) {
        d += (double)b_a[k + 6 * r2] * p_est[r2 + 6 * i];
      }
      a[r1] = d;
    }
  }
  for (i = 0; i < 6; i++) {
    for (r2 = 0; r2 < 6; r2++) {
      d = 0.0;
      for (r1 = 0; r1 < 6; r1++) {
        d += a[i + 6 * r1] * (double)iv[r1 + 6 * r2];
      }
      r1 = i + 6 * r2;
      p_prd[r1] = d + (double)Q[r1];
    }
  }
  /* Estimation */
  for (i = 0; i < 2; i++) {
    for (r2 = 0; r2 < 6; r2++) {
      d = 0.0;
      for (r1 = 0; r1 < 6; r1++) {
        d += (double)c_a[i + (r1 << 1)] * p_prd[r2 + 6 * r1];
      }
      B[i + (r2 << 1)] = d;
    }
    for (r2 = 0; r2 < 2; r2++) {
      d = 0.0;
      for (r1 = 0; r1 < 6; r1++) {
        d += B[i + (r1 << 1)] * (double)iv1[r1 + 6 * r2];
      }
      r1 = i + (r2 << 1);
      S[r1] = d + (double)R[r1];
    }
  }
  if (fabs(S[1]) > fabs(S[0])) {
    r1 = 1;
    r2 = 0;
  } else {
    r1 = 0;
    r2 = 1;
  }
  a21 = S[r2] / S[r1];
  a22_tmp = S[r1 + 2];
  a22 = S[r2 + 2] - a21 * a22_tmp;
  for (k = 0; k < 6; k++) {
    double d1;
    i = k << 1;
    d = B[r1 + i];
    d1 = (B[r2 + i] - d * a21) / a22;
    Y[i + 1] = d1;
    Y[i] = (d - d1 * a22_tmp) / S[r1];
  }
  for (i = 0; i < 2; i++) {
    for (r2 = 0; r2 < 6; r2++) {
      B[r2 + 6 * i] = Y[i + (r2 << 1)];
    }
  }
  /* Estimated state and covariance */
  for (i = 0; i < 2; i++) {
    d = 0.0;
    for (r2 = 0; r2 < 6; r2++) {
      d += (double)c_a[i + (r2 << 1)] * x_prd[r2];
    }
    b_z[i] = z[i] - d;
  }
  for (i = 0; i < 6; i++) {
    d = B[i + 6];
    x_est[i] = x_prd[i] + (B[i] * b_z[0] + d * b_z[1]);
    for (r2 = 0; r2 < 6; r2++) {
      r1 = r2 << 1;
      a[i + 6 * r2] = B[i] * (double)c_a[r1] + d * (double)c_a[r1 + 1];
    }
    for (r2 = 0; r2 < 6; r2++) {
      d = 0.0;
      for (r1 = 0; r1 < 6; r1++) {
        d += a[i + 6 * r1] * p_prd[r1 + 6 * r2];
      }
      r1 = i + 6 * r2;
      p_est[r1] = p_prd[r1] - d;
    }
  }
  /* Compute the estimated measurements */
  for (i = 0; i < 2; i++) {
    d = 0.0;
    for (r2 = 0; r2 < 6; r2++) {
      d += (double)c_a[i + (r2 << 1)] * x_est[r2];
    }
    y[i] = d;
  }
}

/*
 * Arguments    : void
 * Return Type  : void
 */
void kalmanfilter_init(void)
{
  int i;
  for (i = 0; i < 6; i++) {
    x_est[i] = 0.0;
  }
  /* x_est=[x,y,Vx,Vy,Ax,Ay]' */
  memset(&p_est[0], 0, 36U * sizeof(double));
}

/*
 * File trailer for kalmanfilter.c
 *
 * [EOF]
 */
```
### Accelerate the Execution Speed of the MATLAB Algorithm
You can accelerate the execution speed of the `kalmanfilter` function that is processing a large data set by using the `codegen` command to generate a MEX function from the MATLAB code.
### Call the `kalman_loop` Function to Process Large Data Sets
First, run the Kalman algorithm with a large number of data samples in MATLAB. The `kalman_loop` function runs the `kalmanfilter` function in a loop. The number of loop iterations is equal to the second dimension of the input to the function.
`type kalman_loop`
```
% Copyright 2010 The MathWorks, Inc.
function y=kalman_loop(z)
% Call Kalman estimator in the loop for large data set testing
%#codegen
[DIM, LEN]=size(z);
y=zeros(DIM,LEN);       % Initialize output
for n=1:LEN             % Output in the loop
    y(:,n)=kalmanfilter(z(:,n));
end;
```
### Baseline Execution Speed Without Compilation
Now time the MATLAB algorithm. Use the `randn` command to generate random numbers and create the input matrix `position` composed of 100,000 samples of (2x1) position vectors. Remove all MEX files from the current folder. Use the MATLAB stopwatch timer (`tic` and `toc` commands) to measure how long it takes to process these samples when running the `kalman_loop` function.
```
clear mex
delete(['*.' mexext])
position = randn(2,100000);
tic, kalman_loop(position); a=toc;
```
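The tic/toc pattern above generalizes to any language. As an illustration in Python (the `benchmark` helper and its best-of-N policy are this sketch's own, not part of the MATLAB example), `time.perf_counter` plays the role of the stopwatch timer:

```python
import time

def benchmark(fn, data, repeats=3):
    """Best-of-N wall-clock timing of fn(data), analogous to wrapping
    a call in MATLAB's tic/toc; repeats reduce one-off timing noise."""
    best = float("inf")
    for _ in range(repeats):
        t0 = time.perf_counter()
        fn(data)
        best = min(best, time.perf_counter() - t0)
    return best
```

Taking the best of several runs is a common choice because scheduling jitter only ever inflates a measurement.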
### Generate a MEX Function for Testing
Next, generate a MEX function using the command `codegen` followed by the name of the MATLAB function `kalman_loop`. The `codegen` command generates a MEX function called `kalman_loop_mex`. You can then compare the execution speed of this MEX function with that of the original MATLAB algorithm.
`codegen -args {position} kalman_loop.m`
```Code generation successful. ```
`which kalman_loop_mex`
```/tmp/Bdoc22b_2054784_1168239/tp66a43dd5/coder-ex53054096/kalman_loop_mex.mexa64 ```
### Time the MEX Function
Now, time the MEX function `kalman_loop_mex`. Use the same `position` signal as input to ensure a fair comparison of execution speed.
`tic, kalman_loop_mex(position); b=toc;`
### Comparison of the Execution Speeds
Notice the execution speed difference when using the generated MEX function.
`display(sprintf('The speedup is %.1f times using the generated MEX over the baseline MATLAB function.',a/b));`
```The speedup is 12.6 times using the generated MEX over the baseline MATLAB function. ```
# The Maximal Factorizations of the Finite Simple Groups and Their Automorphism Groups
Written in English
### Subjects:
• Finite simple groups
• Maximal subgroups
• Factorization (Mathematics)
• Automorphisms
## Book details:
Edition Notes

- Statement: M.W. Liebeck, C.E. Praeger, and J. Saxl
- Series: Memoirs of the American Mathematical Society, no. 432
- Contributions: Praeger, Cheryl E.; Saxl, Jan; American Mathematical Society
- LC Classifications: QA3, QA171
- Pagination: p. cm.
- Open Library: OL22569014M
- ISBN 10: 0821824945
The book *The Maximal Factorizations of the Finite Simple Groups and Their Automorphism Groups* (Memoirs of the American Mathematical Society, no. 432), by Martin W. Liebeck, Cheryl E. Praeger, and Jan Saxl, classifies all triples $(G, A, B)$ such that $G$ is a finite simple group and $A$ and $B$ are maximal subgroups of $G$ with $G = AB$. Its main result describes completely the maximal factorizations of all the finite simple groups and their automorphism groups; as a consequence, a classification of the maximal subgroups of the finite alternating and symmetric groups is obtained.

In another paper [15], the maximal factorizations of all the other finite simple groups and their automorphism groups are classified (a factorization $G = AB$ is maximal if both $A$ and $B$ are maximal subgroups of $G$); for $G_2(q)$ these factorizations were already known [20]. The results of this paper and [15] are used in [16]. Browsing the book, the finite simple groups without a factorization include every cyclic group of prime order.

A related survey treats maximal subgroups of the finite simple groups and their automorphism groups, describing progress on several of the problems raised by Aschbacher [3]. Its first two sections concern maximal subgroups of the alternating and symmetric groups, outlining the Reduction Theorem and discussing the maximality of primitive groups in the corresponding symmetric groups. Further related work includes Barbara Baumeister's *Factorizations of Primitive Permutation Groups* and studies of the low-dimensional finite simple classical groups and their subgroups. Separately, the classification of $p$-groups of maximal class is still a wide open problem; Coclass Conjecture W suggests that the coclass graph $\mathcal{G}$ associated with the $p$-groups of maximal class can be determined from a finite subgraph using certain periodic patterns.
# Cristian Gatu
#### lmSubsets
Available on CRAN (99.99th percentile of package downloads).
Exact and approximation algorithms for variable-subset selection in ordinary linear regression models: either compute all submodels with the lowest residual sum of squares, or determine the single best submodel according to a predetermined statistical criterion. See Hofmann et al. (2020) <doi:10.18637/jss.v093.i03>.
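lmSubsets itself uses efficient branch-and-bound algorithms; purely to illustrate the underlying problem, a naive exhaustive sketch in Python (all names hypothetical, fitting by ordinary least squares via the normal equations) might look like:

```python
import itertools

def solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def rss(X, y, cols):
    """Residual sum of squares of the OLS fit of y on the chosen columns
    (plus an intercept), via the normal equations Z'Z beta = Z'y."""
    Z = [[1.0] + [row[c] for c in cols] for row in X]
    p = len(cols) + 1
    n = len(Z)
    ZtZ = [[sum(Z[i][a] * Z[i][b] for i in range(n)) for b in range(p)] for a in range(p)]
    Zty = [sum(Z[i][a] * y[i] for i in range(n)) for a in range(p)]
    beta = solve(ZtZ, Zty)
    return sum((y[i] - sum(Z[i][a] * beta[a] for a in range(p))) ** 2 for i in range(n))

def best_subset(X, y, size):
    """Exhaustively pick the column subset of the given size with lowest RSS."""
    ncols = len(X[0])
    return min(itertools.combinations(range(ncols), size), key=lambda s: rss(X, y, s))
```

The exhaustive search visits all C(p, k) subsets, which is exactly the combinatorial explosion that branch-and-bound methods like those in lmSubsets are designed to prune.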