# Ring with Z as its group of units?
Is there a ring with $\mathbb{Z}$ as its group of units?
More generally, does anyone know of a sufficient condition for a group to be the group of units for some ring?
$k[X,1/X]$ where $k$ is the 2-element field. – Noam D. Elkies Sep 12 '11 at 3:53
A necessary condition is of course that $-1 = 1$ – Fernando Muro Sep 12 '11 at 5:53
The group $G$ is always contained in the group of units of the group ring $R[G]$, when $R$ is a commutative ring with unit. I don't know precise conditions for when they are equal, but here's a reference: maths.ed.ac.uk/~aar/papers/higman.pdf – Mark Grant Sep 12 '11 at 6:58
I was thinking also to look at the group ring $\mathbb{F}_2(G)$; but the units of this ring are strictly more than just the elements of $G$, for example, if $G$ is a finite $2$-group of order greater than $2$. See the second paragraph of www.ieja.net/papers/2011/V9/13-V9-2011.pdf – Jesse Elliott Sep 12 '11 at 8:30
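A brute-force check of this comment's claim (a sketch; representing $\mathbb{F}_2[C_n]$ as $n$-bit coefficient tuples, with helper names `mult` and `units` of my own choosing):

```python
from itertools import product

def mult(a, b, n):
    """Multiply two elements of F2[C_n], each stored as an n-bit tuple
    of coefficients; multiplication is cyclic convolution mod 2."""
    c = [0] * n
    for i in range(n):
        for j in range(n):
            c[(i + j) % n] ^= a[i] & b[j]
    return tuple(c)

def units(n):
    """Brute-force the unit group of the group algebra F2[C_n]."""
    one = tuple([1] + [0] * (n - 1))
    elems = list(product([0, 1], repeat=n))
    return [a for a in elems if any(mult(a, b, n) == one for b in elems)]

# C_2 contributes exactly its own 2 elements as units, but for the
# 2-group C_4 (order > 2) the group algebra has strictly more units:
print(len(units(2)), len(units(4)))  # 2 8
```

This matches the comment: $\mathbb{F}_2[C_4]\cong\mathbb{F}_2[x]/((x+1)^4)$ is local, so exactly half of its 16 elements are units, more than the 4 elements of $C_4$.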
Since the last part of the question has been asked again in a slightly different way, I thought I would add a comment that an example of a group which is not the group of units of any ring is the cyclic group of order 5. This is a nice exercise. – Tobias Kildetoft Oct 19 '11 at 8:23
## 1 Answer
The example provided by Noam answers the first question. The second question is very old and, indeed, too general. See e.g. the notes to Chapter XVIII (page 324) of the book "László Fuchs: Pure and applied mathematics, Volume 2; Volume 36". In particular, rings with cyclic groups of units have been studied by R. W. Gilmer [Finite rings having a cyclic multiplicative group of units, Amer. J. Math. 85 (1963), 447-452], by K. E. Eldridge and I. Fischer [D.C.C. rings with a cyclic group of units, Duke Math. J. 34 (1967), 243-248], and by K. R. Pearson and J. E. Schneider [J. Algebra 16 (1970), 243-251].
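To spell out Noam's example (a short sketch): in $\mathbb{F}_2[X, X^{-1}]$ every nonzero element can be written uniquely as $X^n f(X)$ with $f \in \mathbb{F}_2[X]$ and $f(0) \neq 0$; if such an element is invertible, comparing lowest and highest degrees in a product forces $f$ to be a nonzero constant, i.e. $f = 1$. Hence

$$\left(\mathbb{F}_2[X, X^{-1}]\right)^{\times} = \{X^{n} : n \in \mathbb{Z}\} \cong (\mathbb{Z}, +), \qquad X^{m}\cdot X^{n} = X^{m+n},$$

and the group of units is infinite cyclic, as required.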
# How do you simplify sqrt(75x²y)?
Mar 18, 2018
$5x\sqrt{3y}$
#### Explanation:
1. The first thing you can do is take ${x}^{2}$ out of the root. This is easy: you just take away the exponent 2 and place $x$ in front of the square root, like this: $x\sqrt{75y}$.
2. Now find another way to write 75. You can write 75 as $3 \cdot 25$, and 25 can also be written as ${5}^{2}$. Then we get $x\sqrt{3 \cdot 5^2 y}$.
3. Again, take ${5}^{2}$ out of the root by removing the exponent 2 and placing 5 in front of the square root, and we get $5x\sqrt{3y}$, which can't be simplified further.
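A quick numeric spot-check of the simplification (a sketch; note it needs $x \ge 0$, since in general $\sqrt{x^2} = |x|$, so for negative $x$ the simplified form would be $5|x|\sqrt{3y}$):

```python
import math
import random

# Spot-check that sqrt(75 x^2 y) == 5 x sqrt(3 y) for x, y > 0.
for _ in range(1000):
    x = random.uniform(0, 100)
    y = random.uniform(0, 100)
    assert math.isclose(math.sqrt(75 * x * x * y), 5 * x * math.sqrt(3 * y))
print("ok")
```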
# Magic of Math #3
Algebra Level 1
If $k$ is a positive integer such that there exists a number $A$ with $k = A+A = A\times A = A^A$, find the value of $\dfrac Ak$.
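A brute-force check, assuming $A$ ranges over small positive integers (the search bound of 10 is arbitrary): $2A = A^2$ forces $A = 2$, so $k = 4$.

```python
# Search small positive integers for A with A + A == A * A == A ** A.
solutions = [A for A in range(1, 10) if A + A == A * A == A ** A]
A = solutions[0]          # A = 2
k = A + A                 # k = 4
print(A, k, A / k)        # 2 4 0.5
```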
# Integrating $\int^0_\pi \frac{x \sin x}{1+\cos^2 x}\,\mathrm dx$
Could someone help with the following integration: $$\int^0_\pi \frac{x \sin x}{1+\cos^2 x}\,\mathrm dx$$
So far I have done the following, but I am stuck:
I denoted $y=-\cos x$ then: \begin{align*}&\int^{1}_{-1} \frac{\arccos(-y) \sin x}{1+y^2}\frac{\mathrm dy}{\sin x}\\&= \arccos(-1) \arctan 1+\arccos 1 \arctan(-1) - \int^1_{-1}\frac{1}{\sqrt{1-y^2}}\frac{1}{1+y^2} \mathrm dy\\&=\frac{\pi^2}{4}-\int^{1}_{-1}\frac{1}{\sqrt{1-y^2}}\frac{1}{1+y^2} \mathrm dy\end{align*}
Then I am really stuck. Could someone help me?
$$I=\int_0^{\pi} \frac{-x\sin x}{1+\cos^2 x}\,dx=\int_0^{\pi} \frac{(x-\pi)\sin x}{1+\cos^2 x}dx\quad(x\to \pi-x)$$
$$\Rightarrow I=\frac{\pi}{2}\int_0^{\pi}\frac{-\sin x}{1+\cos^2 x}\,dx$$
Let $t=\cos x:$
$$I=\frac{\pi}{2}\int_{-1}^{1}-\frac{1}{1+t^2}\,dt=-\frac{\pi^2}{4}$$
Let $$I = \int_0^{\pi} \dfrac{x \sin(x)}{1+\cos^2(x)} dx = \int_{-\pi/2}^{\pi/2} \dfrac{(x+\pi/2) \sin(x+\pi/2)}{1 + \cos^2(x+\pi/2)} dx = \int_{-\pi/2}^{\pi/2} \dfrac{(x+\pi/2) \cos(x)}{1 + \sin^2(x)} dx$$ Now $$\int_{-\pi/2}^{\pi/2} \dfrac{(x+\pi/2) \cos(x)}{1 + \sin^2(x)} dx = \int_{-\pi/2}^{\pi/2} \underbrace{\dfrac{x \cos(x)}{1 + \sin^2(x)}}_{\text{Odd function}} dx + \dfrac{\pi}2 \cdot \int_{-\pi/2}^{\pi/2} \dfrac{\cos(x)}{1 + \sin^2(x)} dx$$ Hence, we get that $$I = \dfrac{\pi}2 \cdot \int_{-\pi/2}^{\pi/2} \dfrac{\cos(x)}{1 + \sin^2(x)} dx = \dfrac{\pi}2 \cdot \int_{-1}^1 \dfrac{dt}{1+t^2} = \dfrac{\pi}2 \cdot \left( \dfrac{\pi}4 - \dfrac{-\pi}4\right) = \dfrac{\pi^2}4$$ The integral you are after is $-I$ and hence the answer is $-\dfrac{\pi^2}4$.
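Both solutions can be checked numerically. A minimal sketch using a hand-rolled composite Simpson rule (the helper names are mine), integrating from $\pi$ down to $0$ exactly as the question writes it:

```python
import math

def f(x):
    return x * math.sin(x) / (1 + math.cos(x) ** 2)

def simpson(g, a, b, n=10_000):
    """Composite Simpson's rule on [a, b]; n must be even.
    Reversed limits (a > b) simply flip the sign, as expected."""
    h = (b - a) / n
    s = g(a) + g(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * g(a + i * h)
    return s * h / 3

# The integral as written runs from pi down to 0, so it is the
# negative of the integral over [0, pi]:
val = simpson(f, math.pi, 0)
print(val, -math.pi ** 2 / 4)  # both ≈ -2.4674
```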
# The Foundation of the Generalised Theory of Relativity
By A. Einstein.
The theory which is sketched in the following pages forms the most far-reaching generalization conceivable of what is at present known as "the theory of Relativity"; the latter I distinguish from the former as the "Special Relativity Theory," and suppose it to be known. The generalization of the Relativity theory has been made much easier through the form given to the special Relativity theory by Minkowski, who was the first mathematician to recognize clearly the formal equivalence of the space-like and time-like co-ordinates, and who made use of it in the building up of the theory. The mathematical apparatus useful for the general relativity theory lay already complete in the "Absolute Differential Calculus", which was based on the researches of Gauss, Riemann and Christoffel on non-Euclidean manifolds, which was shaped into a system by Ricci and Levi-Civita, and which has already been applied to problems of theoretical physics. I have developed in part B of this communication, in the simplest and clearest manner, all the mathematical auxiliaries, not known to Physicists, which will be useful for our purpose, so that a study of the mathematical literature is not necessary for an understanding of this paper. Finally, in this place I thank my friend Grossmann, by whose help I was not only spared the study of the pertinent mathematical literature, but who also aided me in the researches on the field equations of gravitation.
## A. Principal considerations about the Postulate of Relativity.
### § 1. Remarks on the Special Relativity Theory.
The special relativity theory rests on the following postulate which also holds valid for the Galileo-Newtonian mechanics.
If a co-ordinate system K be so chosen that, when referred to it, the physical laws hold in their simplest forms, these laws are also valid when referred to another system of co-ordinates K′ which is subjected to a uniform translational motion relative to K. We call this postulate "The Special Relativity Principle". By the word special it is signified that the principle is limited to the case when K′ has uniform translatory motion with reference to K, and that the equivalence of K and K′ does not extend to the case of non-uniform motion of K′ relative to K.
The Special Relativity Theory does not differ from the classical mechanics through the assumption of this postulate, but only through the postulate of the constancy of light-velocity in vacuum which, when combined with the special relativity postulate, gives in a well-known way, the relativity of synchronism as well as the Lorentz transformation, with all the relations between moving rigid bodies and clocks.
The modification which the theory of space and time has undergone through the special relativity theory, is indeed a profound one, but a weightier point remains untouched. According to the special relativity theory, the theorems of geometry are to be looked upon as the laws about any possible relative positions of solid bodies at rest, and more generally the theorems of kinematics, as theorems which describe the relation between measurable bodies and clocks. Consider two material points of a solid body at rest; then according to these conceptions there corresponds to these points a wholly definite extent of length, independent of kind, position, orientation and time of the body.
Similarly, let us consider two positions of the pointers of a clock which is at rest with reference to a co-ordinate system; then to these positions there always corresponds a time-interval of a definite length, independent of time and place. It will soon be shown that the general relativity theory cannot hold fast to this simple physical significance of space and time.
### § 2. About the reasons which explain the extension of the relativity-postulate.
To the classical mechanics (no less than) to the special relativity theory is attached an epistemological defect, which was perhaps first clearly pointed out by E. Mach. We shall illustrate it by the following example: Let two fluid bodies of equal kind and magnitude swim freely in space at so great a distance from one another (and from all other masses) that the only gravitational forces to be taken into account are those which the parts of each of these bodies exert upon each other. The distance of the bodies from one another is invariable. Relative motion of the different parts of each body is not to occur. But each mass is seen, by an observer at rest relative to the other mass, to rotate round the connecting line of the masses with a constant angular velocity (a definite relative motion of both the masses). Now let us think that the surfaces of both the bodies (${\displaystyle S_{1}}$ and ${\displaystyle S_{2}}$) are measured with the help of measuring rods (relatively at rest); it is then found that the surface of ${\displaystyle S_{1}}$ is a sphere and the surface of the other is an ellipsoid of rotation. We now ask, why is this difference between the two bodies? An answer to this question can only be regarded as satisfactory[1] from the epistemological standpoint when the thing adduced as the cause is an observable fact of experience. The law of causality has the sense of a definite statement about the world of experience only when observable facts alone appear as causes and effects.
The Newtonian mechanics does not give to this question any satisfactory answer. For example, it says:— The laws of mechanics hold true for a space ${\displaystyle R_{1}}$ relative to which the body ${\displaystyle S_{1}}$ is at rest, not however for a space relative to which ${\displaystyle S_{2}}$ is at rest.
The Galilean space, which is here introduced, is however only a purely imaginary cause, not an observable thing. It is thus clear that the Newtonian mechanics does not, in the case treated here, actually fulfil the requirements of causality, but produces in the mind a fictitious complacency, in that it makes a wholly imaginary cause ${\displaystyle R_{1}}$ responsible for the different behaviours of the bodies ${\displaystyle S_{1}}$ and ${\displaystyle S_{2}}$ which are actually observable.
A satisfactory explanation to the question put forward above can only be given thus: that the physical system composed of ${\displaystyle S_{1}}$ and ${\displaystyle S_{2}}$ shows for itself alone no conceivable cause to which the different behaviour of ${\displaystyle S_{1}}$ and ${\displaystyle S_{2}}$ can be attributed. The cause must thus lie outside the system. We are therefore led to the conception that the general laws of motion, which determine especially the forms of ${\displaystyle S_{1}}$ and ${\displaystyle S_{2}}$, must be of such a kind that the mechanical behaviour of ${\displaystyle S_{1}}$ and ${\displaystyle S_{2}}$ is essentially conditioned by the distant masses which we had not brought into the system considered. These distant masses (and their relative motion as regards the bodies under consideration) are then to be looked upon as the seat of the principal observable causes for the different behaviours of the bodies under consideration. They take the place of the imaginary cause ${\displaystyle R_{1}}$. Among all the conceivable spaces ${\displaystyle R_{1}}$, ${\displaystyle R_{2}}$, etc., moving in any manner relative to one another, there is a priori no one set which can be regarded as affording greater advantages, against which the objection already raised from the standpoint of the theory of knowledge cannot be again revived. The laws of physics must be so constituted that they remain valid for any system of co-ordinates moving in any manner. We thus arrive at an extension of the relativity postulate.
Besides this momentous epistemological argument, there is also a well-known physical fact which speaks in favour of an extension of the relativity theory. Let there be a Galilean co-ordinate system K relative to which (at least in the four-dimensional region considered) a mass at a sufficient distance from other masses moves uniformly in a straight line. Let ${\displaystyle K'}$ be a second co-ordinate system which has a uniformly accelerated motion relative to K. Relative to ${\displaystyle K'}$ any mass at a sufficiently great distance experiences an accelerated motion such that its acceleration and its direction of acceleration are independent of its material composition and its physical conditions.
Can any observer, at rest relative to ${\displaystyle K'}$, then conclude that he is in an actually accelerated reference-system? This is to be answered in the negative; the above-named behaviour of the freely moving masses relative to ${\displaystyle K'}$ can be explained in as good a manner in the following way. The reference-system ${\displaystyle K'}$ has no acceleration. In the space-time region considered there is a gravitation-field which generates the accelerated motion relative to ${\displaystyle K'}$.
This conception is feasible, because to us the experience of the existence of a field of force (namely the gravitation field) has shown that it possesses the remarkable property of imparting the same acceleration to all bodies.[2] The mechanical behaviour of the bodies relative to ${\displaystyle K'}$ is the same as experience would expect of them with reference to systems which we assume from habit as stationary; thus it explains why from the physical stand-point it can be assumed that the systems K and ${\displaystyle K'}$ can both with the same legitimacy be taken as at rest, that is, they will be equivalent as systems of reference for a description of physical phenomena.
From these discussions we see, that the working out of the general relativity theory must, at the same time, lead to a theory of gravitation; for we can "create" a gravitational field by a simple variation of the co-ordinate system. Also we see immediately that the principle of the constancy of light-velocity must be modified, for we recognise easily that the path of a ray of light with reference to ${\displaystyle K'}$ must be, in general, curved, when light travels with a definite and constant velocity in a straight line with reference to K.
### § 3. The time-space continuum. Requirements of the general Co-variance for the equations expressing the laws of Nature in general.
In the classical mechanics as well as in the special relativity theory, the co-ordinates of time and space have an immediate physical significance; when we say that an arbitrary point has ${\displaystyle x_{1}}$ as its ${\displaystyle X_{1}}$ co-ordinate, it signifies that the projection of the point-event on the ${\displaystyle X_{1}}$-axis, ascertained by means of a solid rod according to the rules of Euclidean Geometry, is reached when a definite measuring rod, the unit rod, can be carried ${\displaystyle x_{1}}$ times from the origin of co-ordinates along the ${\displaystyle X_{1}}$ axis. A point having ${\displaystyle x_{4}=t}$ as the ${\displaystyle X_{4}}$ co-ordinate signifies that a unit clock which is adjusted to be at rest relative to the system of co-ordinates, coinciding in its spatial position with the point-event, and set according to some definite standard, has gone over ${\displaystyle x_{4}=t}$ periods before the occurrence of the point-event.[3]
This conception of time and space is continually present in the mind of the physicist, though often in an unconscious way, as is clearly recognised from the role which this conception has played in physical measurements. This conception must also appear to the reader to be lying at the basis of the second consideration of the last paragraph and imparting a sense to these conceptions. But we wish to show that we are to abandon it and in general to replace it by more general conceptions in order to be able to work out thoroughly the postulate of general relativity,— the case of special relativity appearing as a limiting case when there is no gravitation.
We introduce in a space, which is free from gravitation-fields, a Galilean co-ordinate system K(x, y, z, t) and also another system K'(x', y', z', t') rotating uniformly relative to K. The origins of both systems as well as their Z-axes shall continue to coincide. We will show that for a space-time measurement in the system ${\displaystyle K'}$, the above established rules for the physical significance of time and space can not be maintained. On grounds of symmetry it is clear that a circle round the origin in the X-Y plane of K can also be looked upon as a circle in the ${\displaystyle X'}$-${\displaystyle Y'}$ plane of ${\displaystyle K'}$. Let us now think of measuring the circumference and the diameter of these circles with a unit measuring rod (infinitely small compared with the radius) and take the quotient of both the results of measurement. If this experiment be carried out with a measuring rod at rest relative to the Galilean system K we would get π as the quotient. The result of measurement with a rod relatively at rest as regards ${\displaystyle K'}$ would be a number which is greater than π. This can be seen easily when we regard the whole measurement-process from the system K and remember that the rod placed on the periphery suffers a Lorentz-contraction, not however when the rod is placed along the radius. Euclidean Geometry therefore does not hold for the system ${\displaystyle K'}$; the above fixed conceptions of co-ordinates, which assume the validity of Euclidean Geometry, fail with regard to the system ${\displaystyle K'}$. Similarly, we cannot introduce in ${\displaystyle K'}$ a time corresponding to physical requirements, which will be shown by all similarly prepared clocks at rest relative to the system ${\displaystyle K'}$. In order to see this we suppose that two similarly made clocks are arranged, one at the centre and one at the periphery of the circle, and considered from the stationary system K.
According to the well-known results of the special relativity theory it follows — (as viewed from K) — that the clock placed at the periphery will go slower than the second one which is at rest. The observer at the common origin of co-ordinates who is able to see the clock at the periphery by means of light will see the clock at the periphery going slower than the clock beside him. Since he cannot allow the velocity of light to depend explicitly upon the time in the way under consideration he will interpret his observation by saying that the clock on the periphery "actually" goes slower than the clock at the origin. He cannot therefore do otherwise than define time in such a way that the rate of going of a clock depends on its position.
We therefore arrive at this result. In the general relativity theory time and space magnitudes cannot be so defined that the difference in spatial co-ordinates can be immediately measured by the unit-measuring rod, and time-like co-ordinate difference with the aid of a normal clock.
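The quotient argument for the rotating system can be made numerically concrete. A minimal sketch, assuming the standard Lorentz contraction factor and units with $c = 1$ (the value $\omega r = 0.6$ is an arbitrary illustration, not from the text):

```python
import math

# Rods laid along the periphery are Lorentz-contracted by
# sqrt(1 - v^2/c^2) with v = omega * r, while radial rods are not;
# the rotating observer therefore counts more rods along the
# circumference and measures a circumference/diameter quotient > pi.
c, omega, r = 1.0, 0.6, 1.0
v = omega * r
gamma = 1 / math.sqrt(1 - v ** 2 / c ** 2)   # contraction factor 1/gamma
quotient = math.pi * gamma                   # quotient measured in K'
print(quotient > math.pi)                    # True
```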
The means hitherto at our disposal, for placing our co-ordinate system in the time-space continuum, in a definite way, therefore completely fail and it appears that there is no other way which will enable us to fit the co-ordinate system to the four-dimensional world in such a way, that by it we can expect to get a specially simple formulation of the laws of Nature. So that nothing remains for us but to regard all conceivable[4] co-ordinate systems as equally suitable for the description of natural phenomena. This amounts to the following law:—
That in general, Laws of Nature are expressed by means of equations which are valid for all co-ordinate systems, that is, which are covariant for all possible transformations. It is clear that a physics which satisfies this postulate will be unobjectionable from the standpoint of the general relativity postulate, for among all substitutions there are, in every case, contained those which correspond to all relative motions of the co-ordinate systems (in three dimensions). This condition of general covariance, which takes away the last remnants of physical objectivity from space and time, is a natural requirement, as seen from the following considerations. All our well-substantiated space-time propositions amount to the determination of space-time coincidences. If, for example, the event consisted in the motion of material points, then nothing would really be observable except the encounters between two or more of these material points. The results of our measurements are nothing else than well-proved theorems about such coincidences of material points, of our measuring rods with other material points, coincidences between the hands of a clock, dial-marks, and point-events occurring at the same position and at the same time.
The introduction of a system of co-ordinates serves no other purpose than an easy description of the totality of such coincidences. We fit to the world our space-time variables ${\displaystyle x_{1},x_{2},x_{3},x_{4}}$ such that to any and every point-event corresponds a system of values of ${\displaystyle x_{1}\dots x_{4}}$. Two coincident point-events correspond to the same value of the variables ${\displaystyle x_{1}\dots x_{4}}$; i.e., the coincidence is characterised by the equality of the co-ordinates. If we now introduce any four functions ${\displaystyle x'_{1},x'_{2},x'_{3},x'_{4}}$ as co-ordinates, so that there is a unique correspondence between them, the equality of all the four co-ordinates in the new system will still be the expression of the space-time coincidence of two material points. As the purpose of all physical laws is to allow us to remember such coincidences, there is a priori no reason present to prefer a certain co-ordinate system to another; i.e., we get the condition of general covariance.
### § 4. Relation of four co-ordinates to spatial and temporal measurements. Analytical expression for the Gravitation-field.
I am not trying in this communication to deduce the general Relativity-theory as the simplest logical system possible, with a minimum of axioms. But it is my chief aim to develop the theory in such a manner that the reader perceives the psychological naturalness of the way proposed, and the fundamental assumptions appear to be most reasonable according to the light of experience. In this sense, we shall now introduce the following supposition; that for an infinitely small four-dimensional region, the relativity theory is valid in the special sense when the axes are suitably chosen.
The nature of acceleration of an infinitely small (positional) co-ordinate system is hereby to be so chosen that the gravitational field does not appear; this is possible for an infinitely small region. ${\displaystyle X_{1},X_{2},X_{3}}$ are the spatial co-ordinates; ${\displaystyle X_{4}}$ is the corresponding time-co-ordinate measured[5] by some suitable measuring clock. These co-ordinates have, with a given orientation of the system, an immediate physical significance in the sense of the special relativity theory (when we take a rigid rod as our unit of measure). The expression
(1) ${\displaystyle ds^{2}=-dX_{1}^{2}-dX_{2}^{2}-dX_{3}^{2}+dX_{4}^{2}}$
has then, according to the special relativity theory, a value which may be obtained by space-time measurement, and which is independent of the orientation of the local co-ordinate system. Let us take ds as the magnitude of the line-element belonging to two infinitely near points in the four-dimensional region. If ds² belonging to the element ${\displaystyle \left(dX_{1}\dots dX_{4}\right)}$ be positive we call it, with Minkowski, time-like, and in the contrary case space-like.
To the line-element considered, i.e., to both the infinitely near point-events belong also definite differentials ${\displaystyle dx_{1}\dots dx_{4}}$, of the four-dimensional co-ordinates of any chosen system of reference. If there be also a local system of the above kind given for the case under consideration, ${\displaystyle dX_{\nu }}$ would then be represented by definite linear homogeneous expressions of ${\displaystyle dx_{\sigma }}$
(2) ${\displaystyle dX_{\nu }=\sum \limits _{\sigma }\alpha _{\nu \sigma }dx_{\sigma }}$
If we substitute expression (2) in (1) we get
(3) ${\displaystyle ds^{2}=\sum \limits _{\sigma \tau }g_{\sigma \tau }dx_{\sigma }dx_{\tau }}$
where ${\displaystyle g_{\sigma \tau }}$ will be functions of ${\displaystyle x_{\sigma }}$, but will no longer depend upon the orientation and motion of the "local" co-ordinates; for ds² is a definite magnitude belonging to two point-events infinitely near in space and time and can be got by measurements with rods and clocks. The ${\displaystyle g_{\sigma \tau }}$ are here to be so chosen that ${\displaystyle g_{\sigma \tau }=g_{\tau \sigma }}$; the summation is to be extended over all values of σ and τ, so that the sum extends over 4×4 terms, of which 12 are equal in pairs.
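Equations (1)–(3) can be checked numerically. In the sketch below the matrix `alpha` is an arbitrary, hypothetical choice for the linear relation (2); the code builds $g_{\sigma\tau}$ from it and verifies that the two expressions for $ds^2$ agree and that $g_{\sigma\tau}=g_{\tau\sigma}$:

```python
eta = [-1, -1, -1, 1]  # signature of the local frame, as in eq. (1)

alpha = [  # hypothetical coefficients in dX_nu = sum_sigma alpha[nu][sigma] dx_sigma
    [1.0, 0.2, 0.0, 0.0],
    [0.0, 1.0, 0.1, 0.0],
    [0.0, 0.0, 1.0, 0.0],
    [0.3, 0.0, 0.0, 1.0],
]

# Metric components: g_st = sum_nu eta_nu * alpha[nu][s] * alpha[nu][t]
g = [[sum(eta[n] * alpha[n][s] * alpha[n][t] for n in range(4))
      for t in range(4)] for s in range(4)]

dx = [0.5, -0.2, 0.1, 1.0]                        # coordinate differentials
dX = [sum(alpha[n][s] * dx[s] for s in range(4))  # local differentials, eq. (2)
      for n in range(4)]

ds2_local = sum(eta[n] * dX[n] ** 2 for n in range(4))   # eq. (1)
ds2_metric = sum(g[s][t] * dx[s] * dx[t]                 # eq. (3)
                 for s in range(4) for t in range(4))
print(abs(ds2_local - ds2_metric) < 1e-9)  # True; and g is symmetric
```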
From the method adopted here, the case of the usual relativity theory comes out when owing to the special behaviour of ${\displaystyle g_{\sigma \tau }}$ in a finite region it is possible to choose the system of co-ordinates in such a way that ${\displaystyle g_{\sigma \tau }}$ assumes constant values —
(4) ${\displaystyle \left\{{\begin{array}{ccccccc}-1&&0&&0&&0\\\\0&&-1&&0&&0\\\\0&&0&&-1&&0\\\\0&&0&&0&&+1\end{array}}\right.}$
We shall see afterwards that the choice of such a system of co-ordinates for a finite region is in general not possible.
From the considerations in § 2 and § 3 it is clear that, from the physical stand-point, the quantities ${\displaystyle g_{\sigma \tau }}$ are to be looked upon as magnitudes which describe the gravitation-field with reference to the chosen system of axes. We assume firstly that, in a certain four-dimensional region considered, the special relativity theory is true for some particular choice of co-ordinates. The ${\displaystyle g_{\sigma \tau }}$ then have the values given in (4). A free material point moves with reference to such a system uniformly in a straight line. If we now introduce, by any substitution, the space-time co-ordinates ${\displaystyle x_{1}\dots x_{4}}$, then in the new system the ${\displaystyle g_{\sigma \tau }}$ are no longer constants, but functions of space and time. At the same time, the motion of a free point-mass in the new co-ordinates will appear as curvilinear and not uniform, in which the law of motion will be independent of the nature of the moving mass-point. We can thus interpret this motion as one under the influence of a gravitation-field. We see that the appearance of a gravitation-field is connected with a space-time variability of ${\displaystyle g_{\sigma \tau }}$. In the general case, we can not, by any suitable choice of axes, make the special relativity theory valid throughout a finite region. We thus deduce the conception that the ${\displaystyle g_{\sigma \tau }}$ describe the gravitational field. According to the general relativity theory, gravitation thus plays an exceptional role as distinguished from the other forces, especially the electromagnetic ones, inasmuch as the 10 functions ${\displaystyle g_{\sigma \tau }}$ representing gravitation define immediately the metrical properties of the four-dimensional region.
## B. Mathematical Auxiliaries for Establishing the General Covariant Equations.
We have seen before that the general relativity-postulate leads to the condition that the system of equations for Physics must be covariant for any possible substitution of co-ordinates ${\displaystyle x_{1}\dots x_{4}}$; we have now to see how such general covariant equations can be obtained. We shall now turn our attention to these purely mathematical propositions. It will be shown that in the solution, the invariant ds, given in equation (3), plays a fundamental role, which we, following Gauss's Theory of Surfaces, style the "line-element".
The fundamental idea of the general covariant theory is this: — With reference to any co-ordinate system, let certain things ("tensors") be defined by a number of functions of the co-ordinates which are called the components of the tensor. There are now certain rules according to which the components can be calculated in a new system of co-ordinates, when these are known for the original system, and when the transformation connecting the two systems is known. The things herefrom designated as Tensors have further the property that the transformation equations of their components are linear and homogeneous; so that all the components in the new system vanish if they are all zero in the original system. Thus a law of Nature can be formulated by putting all the components of a tensor equal to zero, so that it is a general covariant equation; thus while we seek the laws of formation of the tensors, we also reach the means of establishing general covariant laws.
### § 5. Contravariant and covariant Four-vector.
Contravariant Four-vector. The line-element is defined by the four components ${\displaystyle dx_{\nu }}$ whose transformation law is expressed by the equation
(5) ${\displaystyle dx'_{\sigma }=\sum \limits _{\nu }{\frac {\partial x'_{\sigma }}{\partial x_{\nu }}}dx_{\nu }}$
The ${\displaystyle dx'_{\sigma }}$ are expressed as linear and homogeneous functions of ${\displaystyle dx_{\nu }}$; we can look upon the differentials of the co-ordinates ${\displaystyle dx_{\nu }}$ as the components of a tensor, which we designate specially as a contravariant Four-vector. Everything which is defined by four quantities ${\displaystyle A^{\nu }}$, with reference to a co-ordinate system, and transforms according to the same law,
(5a) ${\displaystyle A'^{\sigma }=\sum \limits _{\nu }{\frac {\partial x'_{\sigma }}{\partial x_{\nu }}}A^{\nu }}$
we may call a contravariant Four-vector. From (5a), it follows at once that the sums ${\displaystyle \left(A^{\sigma }\pm B^{\sigma }\right)}$ are also components of a four-vector, when ${\displaystyle A^{\sigma }}$ and ${\displaystyle B^{\sigma }}$ are so; corresponding relations hold also for all systems afterwards introduced as "tensors" (Rule of addition and subtraction of Tensors).
Covariant Four-vector. We call four quantities ${\displaystyle A_{\nu }}$ as the components of a covariant four-vector, when for any choice of the contravariant four-vector ${\displaystyle B^{\nu }}$
(6) ${\displaystyle \sum \limits _{\nu }A_{\nu }B^{\nu }=\mathrm {invariant} }$
From this definition follows the law of transformation of the covariant four-vectors. If we substitute in the right hand side of the equation
${\displaystyle \sum \limits _{\sigma }A'_{\sigma }B'^{\sigma }=\sum \limits _{\nu }A_{\nu }B^{\nu }}$
the expressions
${\displaystyle \sum \limits _{\sigma }{\frac {\partial x_{\nu }}{\partial x'_{\sigma }}}B'^{\sigma }}$
for ${\displaystyle B^{\nu }}$ following from the inversion of the equation (5a) we get
${\displaystyle \sum \limits _{\sigma }B'^{\sigma }\sum \limits _{\nu }{\frac {\partial x_{\nu }}{\partial x'_{\sigma }}}A_{\nu }=\sum \limits _{\sigma }B'^{\sigma }A'_{\sigma }}$
As in the above equation ${\displaystyle B'^{\sigma }}$ are independent of one another and perfectly arbitrary, it follows that the transformation law is: —
(7) ${\displaystyle A'_{\sigma }=\sum {\frac {\partial x_{\nu }}{\partial x'_{\sigma }}}A_{\nu }}$
Remarks on the simplification of the mode of writing the expressions. A glance at the equations of this paragraph will show that the indices which appear twice within the sign of summation [for example ν in (5)] are those over which the summation is to be made, and that only such indices appear twice. It is therefore possible, without loss of clearness, to leave off the summation sign; we therefore introduce the rule: wherever an index appears twice in a term of an expression, it is to be summed over, unless the contrary is expressly stated.
The difference between the covariant and the contravariant four-vector lies in the transformation laws [(7) and (5)]. Both quantities are tensors in the sense of the above general remarks; therein lies their significance. Following Ricci and Levi-Civita, the contravariant character is denoted by an upper index and the covariant by a lower index.
### § 6. Tensors of the second and higher ranks.
Contravariant tensor: — If we now calculate all the 16 products ${\displaystyle A^{\mu \nu }}$ of the components ${\displaystyle A^{\mu }}$ and ${\displaystyle B^{\nu }}$, of two contravariant four-vectors
(8) ${\displaystyle A^{\mu \nu }=A^{\mu }B^{\nu }}$
${\displaystyle A^{\mu \nu }}$ will according to (8) and (5a) satisfy the following transformation law.
(9) ${\displaystyle A'^{\sigma \tau }={\frac {\partial x'_{\sigma }}{\partial x_{\mu }}}{\frac {\partial x'_{\tau }}{\partial x_{\nu }}}A^{\mu \nu }}$
We call a thing which, with reference to any reference system, is defined by 16 quantities and fulfils the transformation relation (9), a contravariant tensor of the second rank. Not every such tensor can be built from two four-vectors according to (8). But it is easy to show that any 16 quantities ${\displaystyle A^{\mu \nu }}$ can be represented as a sum of products ${\displaystyle A^{\mu }B^{\nu }}$ of four properly chosen pairs of four-vectors. From this we can prove in the simplest way all laws which hold true for the tensor of the second rank defined through (9), by proving them only for the special tensor of the type (8).
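The decomposition claim can be illustrated numerically. In this sketch (arbitrary data, our own construction), the μ-th pair of four-vectors is taken as the μ-th co-ordinate basis vector together with the μ-th row of the given array:

```python
import numpy as np

rng = np.random.default_rng(1)
T = rng.standard_normal((4, 4))  # arbitrary 16 quantities A^{mu nu}

# Four pairs of four-vectors: (e_mu, mu-th row of T).  Summing their outer
# products e_mu (x) T[mu] reassembles T row by row.
pairs = [(np.eye(4)[m], T[m]) for m in range(4)]
reconstructed = sum(np.outer(a, b) for a, b in pairs)
```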
Contravariant Tensor of any rank: — It is clear that, corresponding to (8) and (9), we can define contravariant tensors of the 3rd and higher ranks, with ${\displaystyle 4^{3}}$, etc., components. Thus it is clear from (8) and (9) that in this sense we can look upon contravariant four-vectors as contravariant tensors of the first rank.
Covariant tensor. If on the other hand, we take the 16 products ${\displaystyle A_{\mu \nu }}$ of the components of two covariant four-vectors ${\displaystyle A_{\mu }}$ and ${\displaystyle B_{\nu }}$,
(10) ${\displaystyle A_{\mu \nu }=A_{\mu }B_{\nu }}$
for them holds the transformation law
(11) ${\displaystyle A'_{\sigma \tau }={\frac {\partial x_{\mu }}{\partial x'_{\sigma }}}{\frac {\partial x_{\nu }}{\partial x'_{\tau }}}A_{\mu \nu }}$
By means of these transformation laws, the covariant tensor of the second rank is defined. All remarks which we have already made concerning the contravariant tensors, hold also for covariant tensors.
Remark:— It is convenient to treat the scalar (Invariant) either as a contravariant or a covariant tensor of zero rank.
Mixed tensor. We can also define a tensor of the second rank of the type
(12) ${\displaystyle A_{\mu }^{\nu }=A_{\mu }B^{\nu }}$
which is covariant with reference to μ and contravariant with reference to ν. Its transformation law is
(13) ${\displaystyle A'{}_{\sigma }^{\tau }={\frac {\partial x'_{\tau }}{\partial x_{\beta }}}{\frac {\partial x_{\alpha }}{\partial x'_{\sigma }}}A_{\alpha }^{\beta }}$
Naturally there are mixed tensors with any number of covariant indices, and with any number of contravariant indices. The covariant and contravariant tensors can be looked upon as special cases of mixed tensors.
Symmetrical tensors: — A contravariant or a covariant tensor of the second or higher rank is called symmetrical when any two components obtained by the mutual interchange of two indices are equal. The tensor ${\displaystyle A^{\mu \nu }}$ or ${\displaystyle A_{\mu \nu }}$ is symmetrical, when we have for any combination of indices
(14) ${\displaystyle A^{\mu \nu }=A^{\nu \mu }}$
or
(14a) ${\displaystyle A_{\mu \nu }=A_{\nu \mu }}$
It must be proved that a symmetry so defined is a property independent of the system of reference. It follows in fact from (9) remembering (14)
${\displaystyle A'^{\sigma \tau }={\frac {\partial x'_{\sigma }}{\partial x_{\mu }}}{\frac {\partial x'_{\tau }}{\partial x_{\nu }}}A^{\mu \nu }={\frac {\partial x'_{\sigma }}{\partial x_{\mu }}}{\frac {\partial x'_{\tau }}{\partial x_{\nu }}}A^{\nu \mu }={\frac {\partial x'_{\tau }}{\partial x_{\mu }}}{\frac {\partial x'_{\sigma }}{\partial x_{\nu }}}A^{\mu \nu }=A'^{\tau \sigma }}$
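The same coordinate-independence can be checked numerically. In the following sketch (arbitrary data, our own construction), a random matrix stands in for the transformation coefficients ${\displaystyle \partial x'_{\sigma }/\partial x_{\mu }}$ of (9), and symmetry survives the transformation:

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((4, 4))
A = A + A.T                      # a symmetric tensor A^{mu nu}, per (14)
J = rng.standard_normal((4, 4))  # stands in for dx'_sigma / dx_mu

# Transformation law (9): A'^{sigma tau} = J^sigma_mu J^tau_nu A^{mu nu}
A_prime = np.einsum('sm,tn,mn->st', J, J, A)
sym_preserved = np.allclose(A_prime, A_prime.T)
```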
Anti-symmetrical tensor. A contravariant or covariant tensor of the 2nd, 3rd or 4th rank is called anti-symmetrical when the two components got by mutually interchanging any two indices are equal and opposite. The tensor ${\displaystyle A^{\mu \nu }}$ or ${\displaystyle A_{\mu \nu }}$ is thus anti-symmetrical when we have
(15) ${\displaystyle A^{\mu \nu }=-A^{\nu \mu }}$
or
(15a) ${\displaystyle A_{\mu \nu }=-A_{\nu \mu }}$
Of the 16 components ${\displaystyle A^{\mu \nu }}$, the four components ${\displaystyle A^{\mu \mu }}$ vanish, the rest are equal and opposite in pairs; so that there are only 6 numerically different components present (Six-vector).
Thus we also see that the anti-symmetrical tensor ${\displaystyle A^{\mu \nu \sigma }}$ (3rd rank) has only 4 components numerically different, and the anti-symmetrical tensor ${\displaystyle A^{\mu \nu \sigma \tau }}$ only one.
Anti-symmetrical tensors of rank higher than the fourth do not exist in a continuum of 4 dimensions.
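The counts 6, 4 and 1 quoted above are the binomial coefficients ${\displaystyle {\tbinom {4}{k}}}$: one independent component per strictly increasing combination of indices. As a short illustrative check:

```python
from itertools import combinations
from math import comb

# One independent component of a totally anti-symmetric rank-k tensor in
# 4 dimensions per strictly increasing index combination.
counts = {k: len(list(combinations(range(4), k))) for k in (2, 3, 4)}
# For k > 4 no strictly increasing combination of four indices exists,
# so every component of such a tensor vanishes.
```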
### § 7. Multiplication of Tensors.
Outer multiplication of Tensors:— We get from the components of a tensor of rank z and another of a rank ${\displaystyle z'}$, the components of a tensor of rank ${\displaystyle z+z'}$ for which we multiply all the components of the first with all the components of the second in pairs. For example, we obtain the tensor T from the tensors A and B of different kinds: —
${\displaystyle {\begin{array}{ll}T_{\mu \nu \sigma }&=A_{\mu \nu }B_{\sigma }\\\\T^{\alpha \beta \gamma \delta }&=A^{\alpha \beta }B^{\gamma \delta }\\\\T_{\alpha \beta }^{\gamma \delta }&=A_{\alpha \beta }B^{\gamma \delta }\end{array}}}$
The proof of the tensor character of T, follows immediately from the expressions (8), (10) or (12), or the transformation equations (9), (11), (13); equations (8), (10) and (12) are themselves examples of the outer multiplication of tensors of the first rank.
Reduction in rank of a mixed Tensor. From every mixed tensor we can get a tensor two ranks lower, when we put an index of covariant character equal to an index of contravariant character and sum according to these indices ("Reduction"). We get, for example, out of the mixed tensor of the fourth rank ${\displaystyle A_{\alpha \beta }^{\gamma \delta }}$, the mixed tensor of the second rank
${\displaystyle A_{\beta }^{\delta }=A_{\alpha \beta }^{\alpha \delta }\left(=\sum \limits _{\alpha }A_{\alpha \beta }^{\alpha \delta }\right)}$
and from it again by reduction, the tensor of the zero rank ${\displaystyle A=A_{\beta }^{\beta }=A_{\alpha \beta }^{\alpha \beta }}$.
The proof that the result of reduction retains a truly tensorial character, follows either from the representation of tensor according to the generalisation of (12) in combination with (6) or out of the generalisation of (13).
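Reduction is an ordinary index contraction. A sketch with arbitrary illustrative data, the array axes ordered (α, β, γ, δ):

```python
import numpy as np

rng = np.random.default_rng(3)
A4 = rng.standard_normal((4, 4, 4, 4))  # mixed tensor A^{gamma delta}_{alpha beta}

# One reduction: set alpha = gamma and sum, giving A^{delta}_{beta}
A2 = np.einsum('abad->bd', A4)

# Reducing once more gives the scalar A = A^{beta}_{beta}
A0 = np.einsum('bb->', A2)
both_at_once = np.einsum('abab->', A4)  # both reductions in a single contraction
```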
Inner and mixed multiplication of Tensors. This consists in the combination of outer multiplication with reduction. Examples: — From the covariant tensor of the second rank ${\displaystyle A_{\mu \nu }}$ and the contravariant tensor of the first rank ${\displaystyle B^{\sigma }}$ we get by outer multiplication the mixed tensor
${\displaystyle D_{\mu \nu }^{\sigma }=A_{\mu \nu }B^{\sigma }}$
Through reduction according to indices ${\displaystyle \nu ,\sigma }$, the covariant four vector
${\displaystyle D_{\mu }=D_{\mu \nu }^{\nu }=A_{\mu \nu }B^{\nu }}$
is generated.
This we denote as the inner product of the tensors ${\displaystyle A_{\mu \nu }}$ and ${\displaystyle B^{\sigma }}$. Similarly we get from the tensors ${\displaystyle A_{\mu \nu }}$ and ${\displaystyle B^{\sigma \tau }}$ through outer multiplication and two-fold reduction the inner product ${\displaystyle A_{\mu \nu }B^{\mu \nu }}$. Through outer multiplication and one-fold reduction we get out of ${\displaystyle A_{\mu \nu }}$ and ${\displaystyle B^{\sigma \tau }}$, the mixed tensor of the second rank ${\displaystyle D_{\mu }^{\tau }=A_{\mu \nu }B^{\nu \tau }}$. We can fitly call this operation a mixed one; for it is outer with reference to the indices μ and τ, and inner with respect to the indices ν and σ.
We now prove a law, which will be often applicable for proving the tensor-character of certain quantities. According to the above representation, ${\displaystyle A_{\mu \nu }B^{\mu \nu }}$ is a scalar, when ${\displaystyle A_{\mu \nu }}$ and ${\displaystyle B^{\sigma \tau }}$ are tensors. We also remark that when ${\displaystyle A_{\mu \nu }B^{\mu \nu }}$ is an invariant for every choice of the tensor ${\displaystyle B^{\mu \nu }}$, then ${\displaystyle A_{\mu \nu }}$ has a tensorial character.
Proof: — According to the above assumption, for any substitution we have
${\displaystyle A'_{\sigma \tau }B'^{\sigma \tau }=A_{\mu \nu }B^{\mu \nu }}$
From the inversion of (9) we have however
${\displaystyle B^{\mu \nu }={\frac {\partial x{}_{\mu }}{\partial x'_{\sigma }}}{\frac {\partial x{}_{\nu }}{\partial x'_{\tau }}}B'^{\sigma \tau }}$
Substitution of this in the above equation gives
${\displaystyle \left(A'_{\sigma \tau }-{\frac {\partial x{}_{\mu }}{\partial x'_{\sigma }}}{\frac {\partial x{}_{\nu }}{\partial x'_{\tau }}}A_{\mu \nu }\right)B'^{\sigma \tau }=0}$
This can be true, for any choice of ${\displaystyle B'^{\sigma \tau }}$ only when the term within the bracket vanishes. From which by referring to (11), the theorem at once follows. This law correspondingly holds for tensors of any rank and character. The proof is quite similar. The law can also be put in the following form. If ${\displaystyle B^{\mu }}$ and ${\displaystyle C^{\nu }}$ are any two vectors, and if for every choice of them the inner product
${\displaystyle A_{\mu \nu }B^{\mu }C^{\nu }}$
is a scalar, then ${\displaystyle A_{\mu \nu }}$ is a covariant tensor. The last law holds even when there is the more special formulation, that with any arbitrary choice of the four-vector ${\displaystyle B^{\mu }}$ alone the scalar product
${\displaystyle A_{\mu \nu }B^{\mu }B^{\nu }}$
is a scalar, in which case we have the additional condition that ${\displaystyle A_{\mu \nu }}$ satisfies the symmetry condition ${\displaystyle A_{\mu \nu }=A_{\nu \mu }}$. According to the method given above, we prove the tensor character of ${\displaystyle \left(A_{\mu \nu }+A_{\nu \mu }\right)}$, from which on account of symmetry follows the tensor-character of ${\displaystyle A_{\mu \nu }}$. This law can easily be generalized in the case of covariant and contravariant tensors of any rank.
Finally, from what has been proved, we can deduce the following law which can be easily generalized for any kind of tensor: If the quantities ${\displaystyle A_{\mu \nu }B^{\nu }}$ form a tensor of the first rank, when ${\displaystyle B^{\nu }}$ is any arbitrarily chosen four-vector, then ${\displaystyle A_{\mu \nu }}$ is a tensor of the second rank. If for example, ${\displaystyle C^{\mu }}$ is any four-vector, then owing to the tensor character of ${\displaystyle A_{\mu \nu }B^{\nu }}$, the inner product ${\displaystyle A_{\mu \nu }C^{\mu }B^{\nu }}$ is a scalar, both the four-vectors ${\displaystyle C^{\mu }}$ and ${\displaystyle B^{\nu }}$ being arbitrarily chosen. Hence the proposition follows at once.
### § 8. A few words about the Fundamental Tensor ${\displaystyle g_{\mu \nu }}$.
The covariant fundamental tensor. In the invariant expression of the square of the linear element
${\displaystyle ds^{2}=g_{\mu \nu }dx_{\mu }dx_{\nu }}$
${\displaystyle dx_{\mu }}$ plays the role of an arbitrarily chosen contravariant vector; since further ${\displaystyle g_{\mu \nu }=g_{\nu \mu }}$, it follows from the considerations of the last paragraph that ${\displaystyle g_{\mu \nu }}$ is a symmetrical covariant tensor of the second rank. We call it the "fundamental tensor". Afterwards we shall deduce some properties of this tensor which would also be true for any tensor of the second rank. But the special role of the fundamental tensor in our theory, which has its physical basis in the exceptional character of gravitation, makes it clear that those relations are to be developed which will be required only in the case of the fundamental tensor.
The contravariant fundamental tensor. If we form from the determinant scheme of the ${\displaystyle g_{\mu \nu }}$ the minors of ${\displaystyle g_{\mu \nu }}$ and divide them by the determinant ${\displaystyle g=\left|g_{\mu \nu }\right|}$ of the ${\displaystyle g_{\mu \nu }}$, we get certain quantities ${\displaystyle g^{\mu \nu }\left(=g^{\nu \mu }\right)}$ which, as we shall prove, form a contravariant tensor.
According to the well-known law of Determinants
(16) ${\displaystyle g_{\mu \sigma }g^{\nu \sigma }=\delta _{\mu }^{\nu }}$
where ${\displaystyle \delta _{\mu }^{\nu }}$ is 1 or 0, according as ${\displaystyle \mu =\nu }$ or ${\displaystyle \mu \neq \nu }$. Instead of the above expression for ds² we can also write
${\displaystyle g_{\mu \sigma }\delta _{\nu }^{\sigma }dx_{\mu }dx_{\nu }}$
or according to (16) also in the form
${\displaystyle g_{\mu \sigma }g_{\nu \tau }g^{\sigma \tau }dx_{\mu }dx_{\nu }}$
Now according to the rules of multiplication, of the foregoing paragraph, the magnitudes
${\displaystyle d\xi _{\sigma }=g_{\mu \sigma }dx_{\mu }}$
form a covariant four-vector, and in fact (on account of the arbitrary choice of ${\displaystyle dx_{\mu }}$) an arbitrary four-vector.
If we introduce it in our expression, we get
${\displaystyle ds^{2}=g^{\sigma \tau }d\xi _{\sigma }d\xi _{\tau }}$
For any choice of the vectors ${\displaystyle d\xi _{\sigma }}$ this is a scalar, and ${\displaystyle g^{\sigma \tau }}$ is, according to its definition, symmetrical in σ and τ; so it follows from the above results that ${\displaystyle g^{\sigma \tau }}$ is a contravariant tensor. Out of (16) it also follows that ${\displaystyle \delta _{\mu }^{\nu }}$ is a tensor, which we may call the mixed fundamental tensor.
Determinant of the fundamental tensor. According to the law of multiplication of determinants, we have
${\displaystyle \left|g_{\mu \alpha }g^{\alpha \nu }\right|=\left|g_{\mu \alpha }\right|\left|g^{\alpha \nu }\right|}$
On the other hand we have
${\displaystyle \left|g_{\mu \alpha }g^{\alpha \nu }\right|=\left|\delta _{\mu }^{\nu }\right|=1}$
So that it follows
(17) ${\displaystyle \left|g_{\mu \nu }\right|\left|g^{\mu \nu }\right|=1}$
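Equations (16) and (17) say that the scheme ${\displaystyle g^{\mu \nu }}$ is the matrix inverse of ${\displaystyle g_{\mu \nu }}$. A numerical check, with a metric of our own choosing for illustration:

```python
import numpy as np

# An illustrative symmetric metric with signature (-,+,+,+); not from the paper.
g = np.diag([-1.0, 1.0, 1.0, 1.0])
g[0, 1] = g[1, 0] = 0.3
g[2, 3] = g[3, 2] = -0.2

g_inv = np.linalg.inv(g)  # the contravariant fundamental tensor g^{mu nu}

# Equation (16): g_{mu sigma} g^{nu sigma} = delta^nu_mu
kron_ok = np.allclose(g @ g_inv, np.eye(4))

# Equation (17): |g_{mu nu}| |g^{mu nu}| = 1
det_product = np.linalg.det(g) * np.linalg.det(g_inv)
```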
Invariant of volume. We seek first the transformation law for the determinant ${\displaystyle g=\left|g_{\mu \nu }\right|}$. According to (11)
${\displaystyle g'=\left|{\frac {\partial x{}_{\mu }}{\partial x'_{\sigma }}}{\frac {\partial x{}_{\nu }}{\partial x'_{\tau }}}g_{\mu \nu }\right|}$
From this by applying the law of multiplication twice, we obtain.
${\displaystyle g'=\left|{\frac {\partial x{}_{\mu }}{\partial x'_{\sigma }}}\right|\left|{\frac {\partial x{}_{\nu }}{\partial x'_{\tau }}}\right|\left|g_{\mu \nu }\right|=\left|{\frac {\partial x{}_{\mu }}{\partial x'_{\sigma }}}\right|^{2}g}$
or
${\displaystyle {\sqrt {g'}}=\left|{\frac {\partial x{}_{\mu }}{\partial x'_{\sigma }}}\right|{\sqrt {g}}}$
On the other hand the law of transformation of the volume element
${\displaystyle d\tau =\int dx_{1}dx_{2}dx_{3}dx_{4}}$
is, according to the well-known theorem of Jacobi,
${\displaystyle d\tau '=\left|{\frac {\partial x'{}_{\sigma }}{\partial x{}_{\mu }}}\right|d\tau }$
By multiplication of the two last equations we get
(18) ${\displaystyle {\sqrt {g'}}d\tau '={\sqrt {g}}d\tau }$
Instead of ${\displaystyle {\sqrt {g}}}$, we shall afterwards introduce ${\displaystyle {\sqrt {-g}}}$ which has a real value on account of the hyperbolic character of the time-space continuum. The invariant ${\displaystyle {\sqrt {-g}}d\tau }$ is equal in magnitude to the four-dimensional volume-element measured with solid rods and clocks, in accordance with the special relativity theory.
Remarks on the character of the space-time continuum — Our assumption that in an infinitely small region the special relativity theory holds leads us to conclude that ds² can always, according to (1), be expressed in real magnitudes ${\displaystyle dX_{1}\dots dX_{4}}$. If we call ${\displaystyle d\tau _{0}}$ the "natural" volume element ${\displaystyle dX_{1}dX_{2}dX_{3}dX_{4}}$, we have thus
(18a) ${\displaystyle d\tau _{0}={\sqrt {-g}}d\tau }$
Should ${\displaystyle {\sqrt {-g}}}$ vanish at any point of the four-dimensional continuum, it would signify that to a finite co-ordinate volume at that place there corresponds an infinitely small "natural" volume. This can never be the case; so g can never change its sign. We shall, in accordance with the special relativity theory, assume that g always has a finite negative value. This is a hypothesis about the physical nature of the continuum considered, and at the same time a pre-established rule for the choice of co-ordinates.
If however -g remains positive and finite, it is clear that the choice of co-ordinates can be so made that this quantity becomes equal to one. We would afterwards see that such a limitation of the choice of co-ordinates would produce a significant simplification in expressions for laws of nature.
In place of (18) it follows then simply that
${\displaystyle d\tau '=d\tau }$
from this it follows, remembering the law of Jacobi,
(19) ${\displaystyle \left|{\frac {\partial x'{}_{\sigma }}{\partial x{}_{\mu }}}\right|=1}$
With this choice of co-ordinates, only substitutions with determinant 1, are allowable.
It would however be erroneous to think that this step signifies a partial renunciation of the general relativity postulate. We do not seek those laws of nature which are covariants with regard to the transformations having the determinant 1, but we ask: what are the general covariant laws of nature? First we get the law, and then we simplify its expression by a special choice of the system of reference.
Building up of new tensors with the help of the fundamental tensor. Through inner, outer and mixed multiplications of a tensor with the fundamental tensor, tensors of other kinds and of other ranks can be formed.
Example:—
${\displaystyle {\begin{array}{ll}A^{\mu }&=g^{\mu \sigma }A_{\sigma }\\A&=g_{\mu \nu }A^{\mu \nu }\end{array}}}$
We would point out specially the following combinations:
${\displaystyle {\begin{array}{ll}A^{\mu \nu }&=g^{\mu \alpha }g^{\nu \beta }A_{\alpha \beta }\\A_{\mu \nu }&=g_{\mu \alpha }g_{\nu \beta }A^{\alpha \beta }\end{array}}}$
("complement" to the covariant or contravariant tensors) and,
${\displaystyle B_{\mu \nu }=g_{\mu \nu }g^{\alpha \beta }A_{\alpha \beta }}$
We can call ${\displaystyle B_{\mu \nu }}$ the reduced tensor related to ${\displaystyle A_{\mu \nu }}$.
Similarly
${\displaystyle B^{\mu \nu }=g^{\mu \nu }g_{\alpha \beta }A^{\alpha \beta }}$
It is to be remarked that ${\displaystyle g^{\mu \nu }}$ is no other than the complement of ${\displaystyle g_{\mu \nu }}$, for we have —
${\displaystyle g^{\mu \alpha }g^{\nu \beta }g_{\alpha \beta }=g^{\mu \alpha }\delta _{\alpha }^{\nu }=g^{\mu \nu }}$
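Raising and lowering with the fundamental tensor, and the statement that ${\displaystyle g^{\mu \nu }}$ is the complement of ${\displaystyle g_{\mu \nu }}$, can be checked directly (illustrative metric and data of our own choosing):

```python
import numpy as np

g = np.diag([-1.0, 1.0, 1.0, 1.0])
g[1, 2] = g[2, 1] = 0.4          # an illustrative symmetric metric
g_inv = np.linalg.inv(g)

rng = np.random.default_rng(4)
A_cov = rng.standard_normal((4, 4))  # an arbitrary covariant tensor A_{alpha beta}

# Complement: A^{mu nu} = g^{mu alpha} g^{nu beta} A_{alpha beta}
A_contra = np.einsum('ma,nb,ab->mn', g_inv, g_inv, A_cov)
# Lowering both indices again returns the original tensor
lowered_back = np.einsum('ma,nb,ab->mn', g, g, A_contra)

# The complement of g_{mu nu} itself is g^{mu nu}
complement_of_g = np.einsum('ma,nb,ab->mn', g_inv, g_inv, g)
```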
### § 9. Equation of the geodetic line (or of point-motion).
As the "line element" ds is a definite magnitude independent of the co-ordinate system, there is also, between two points ${\displaystyle P_{1}}$ and ${\displaystyle P_{2}}$ of a four-dimensional continuum, a line for which ${\displaystyle \int ds}$ is an extremum (geodetic line), i.e., one which has a significance independent of the choice of co-ordinates.
Its equation is
(20) ${\displaystyle \delta \left\{\int \limits _{P_{1}}^{P_{2}}ds\right\}=0}$
From this equation, we can in a well-known way deduce 4 total differential equations which define the geodetic line; this deduction is given here for the sake of completeness.
Let λ be a function of the co-ordinates ${\displaystyle x_{\nu }}$; this defines a series of surfaces which cut the geodetic line sought for as well as all neighbouring lines from ${\displaystyle P_{1}}$ to ${\displaystyle P_{2}}$. We can suppose all such curves to be given when the values of their co-ordinates ${\displaystyle x_{\nu }}$ are expressed as functions of λ. The sign δ corresponds to a passage from a point of the geodetic curve sought for to a point of a contiguous curve, both lying on the same surface λ.
Then (20) can be replaced by
(20a) ${\displaystyle {\begin{cases}\int \limits _{\lambda _{1}}^{\lambda _{2}}\delta w\ d\lambda =0\\\\w^{2}=g_{\mu \nu }{\frac {dx_{\mu }}{d\lambda }}{\frac {dx_{\nu }}{d\lambda }}\end{cases}}}$
But
${\displaystyle \delta w={\frac {1}{w}}\left\{{\frac {1}{2}}{\frac {\partial g_{\mu \nu }}{\partial x_{\sigma }}}{\frac {dx_{\mu }}{d\lambda }}{\frac {dx_{\nu }}{d\lambda }}\delta x_{\sigma }+g_{\mu \nu }{\frac {dx_{\mu }}{d\lambda }}\delta \left({\frac {dx_{\nu }}{d\lambda }}\right)\right\}}$
So we get, by the substitution of δw in (20a), remembering that
${\displaystyle \delta \left({\frac {dx_{\nu }}{d\lambda }}\right)={\frac {d\delta x_{\nu }}{d\lambda }}}$
after partial integration,
(20b) ${\displaystyle {\begin{cases}\int \limits _{\lambda _{1}}^{\lambda _{2}}d\lambda \ \varkappa _{\sigma }\delta x_{\sigma }=0\\\\\varkappa _{\sigma }={\frac {d}{d\lambda }}\left\{{\frac {g_{\mu \sigma }}{w}}{\frac {dx_{\mu }}{d\lambda }}\right\}-{\frac {1}{2w}}{\frac {\partial g_{\mu \nu }}{\partial x_{\sigma }}}{\frac {dx_{\mu }}{d\lambda }}{\frac {dx_{\nu }}{d\lambda }}\end{cases}}}$
From which it follows, since the choice of ${\displaystyle \delta x_{\sigma }}$ is perfectly arbitrary, that ${\displaystyle \varkappa _{\sigma }}$ must vanish; then
(20c) ${\displaystyle \varkappa _{\sigma }=0}$
are the equations of the geodetic line; since along the geodetic line considered we have ds ≠ 0, we can choose the parameter λ as the length of the arc measured along the geodetic line. Then w = 1, and in place of (20c) we get
${\displaystyle g_{\mu \sigma }{\frac {d^{2}x_{\mu }}{ds^{2}}}+{\frac {\partial g_{\mu \sigma }}{\partial x_{\nu }}}{\frac {dx_{\nu }}{ds}}{\frac {dx_{\mu }}{ds}}-{\frac {1}{2}}{\frac {\partial g_{\mu \nu }}{\partial x_{\sigma }}}{\frac {dx_{\mu }}{ds}}{\frac {dx_{\nu }}{ds}}=0}$
Or by merely changing the notation suitably,
(20d) ${\displaystyle g_{\alpha \sigma }{\frac {d^{2}x_{\alpha }}{ds^{2}}}+\left[{\mu \nu \atop \sigma }\right]{\frac {dx_{\mu }}{ds}}{\frac {dx_{\nu }}{ds}}=0}$
where we have put, following Christoffel,
(21) ${\displaystyle \left[{\mu \nu \atop \sigma }\right]={\frac {1}{2}}\left({\frac {\partial g_{\mu \sigma }}{\partial x_{\nu }}}+{\frac {\partial g_{\nu \sigma }}{\partial x_{\mu }}}-{\frac {\partial g_{\mu \nu }}{\partial x_{\sigma }}}\right)}$
Multiplying finally (20d) by ${\displaystyle g^{\sigma \tau }}$ (outer multiplication with reference to τ, and inner with respect to σ), we get at last the final form of the equation of the geodetic line —
(22) ${\displaystyle {\frac {d^{2}x_{\tau }}{ds^{2}}}+\left\{{\mu \nu \atop \tau }\right\}{\frac {dx_{\mu }}{ds}}{\frac {dx_{\nu }}{ds}}=0}$
Here we have put, following Christoffel,
(23) ${\displaystyle \left\{{\mu \nu \atop \tau }\right\}=g^{\tau \alpha }\left[{\mu \nu \atop \alpha }\right]}$
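Equations (21) and (23) can be evaluated for a toy metric of our own choosing, ${\displaystyle g_{\mu \nu }(x)=\left(1+x_{1}^{2}\right)\delta _{\mu \nu }}$, whose derivatives are known in closed form (an illustrative sketch; array index 0 stands for the first co-ordinate):

```python
import numpy as np

def metric(x):
    # Toy metric g_{mu nu}(x) = (1 + x_1^2) delta_{mu nu}; our illustrative choice.
    return (1.0 + x[0] ** 2) * np.eye(4)

def dmetric(x):
    """dg[sigma, mu, nu] = partial g_{mu nu} / partial x_sigma."""
    dg = np.zeros((4, 4, 4))
    dg[0] = 2.0 * x[0] * np.eye(4)  # only the first co-ordinate enters
    return dg

def christoffel(x):
    """First kind [mu nu, sigma] per (21); second kind {mu nu, tau} per (23)."""
    dg = dmetric(x)
    first = 0.5 * (np.einsum('nms->mns', dg)     # d g_{mu sigma} / d x_nu
                   + dg                          # d g_{nu sigma} / d x_mu (axes already m,n,s)
                   - np.einsum('smn->mns', dg))  # d g_{mu nu} / d x_sigma
    second = np.einsum('ta,mna->mnt', np.linalg.inv(metric(x)), first)
    return first, second

first, second = christoffel(np.array([0.5, 0.0, 0.0, 0.0]))
```

At this point the metric factor is 1.25, so the second-kind symbols are the first-kind ones divided by 1.25.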
### § 10. Formation of Tensors through Differentiation.
Relying on the equation of the geodetic line, we can now easily deduce laws according to which new tensors can be formed from given tensors by differentiation. For this purpose we shall first establish the generally covariant differential equations. We achieve this through repeated application of the following simple law. If a certain curve be given in our continuum whose points are characterised by the arc-distance s measured from a fixed point on the curve, and if further ${\displaystyle \varphi }$ be an invariant space function, then ${\displaystyle d\varphi /ds}$ is also an invariant. The proof follows from the fact that ${\displaystyle d\varphi }$ as well as ds are both invariants.
Since
${\displaystyle {\frac {d\varphi }{ds}}={\frac {\partial \varphi }{\partial x_{\mu }}}{\frac {dx_{\mu }}{ds}}}$
so that
${\displaystyle \psi ={\frac {\partial \varphi }{\partial x_{\mu }}}{\frac {dx_{\mu }}{ds}}}$
is also an invariant for all curves which go out from a point in the continuum, i.e., for any choice of the vector ${\displaystyle dx_{\mu }}$. From which follows immediately that
(24) ${\displaystyle A_{\mu }={\frac {\partial \varphi }{\partial x_{\mu }}}}$
is a covariant four-vector (gradient of ${\displaystyle \varphi }$).
According to our law, the differential-quotient
${\displaystyle \chi ={\frac {d\psi }{ds}}}$
taken along any curve is likewise an invariant. Substituting the value of ψ, we get
${\displaystyle \chi ={\frac {\partial ^{2}\varphi }{\partial x_{\mu }\partial x_{\nu }}}{\frac {dx_{\mu }}{ds}}{\frac {dx_{\nu }}{ds}}+{\frac {\partial \varphi }{\partial x_{\mu }}}{\frac {d^{2}x_{\mu }}{ds^{2}}}}$
Here however we cannot at once deduce the existence of any tensor. If, however, we take the curves along which we differentiate to be geodesics, we get from it, by replacing ${\displaystyle d^{2}x_{\mu }/ds^{2}}$ according to (22),
${\displaystyle \chi =\left\{{\frac {\partial ^{2}\varphi }{\partial x_{\mu }\partial x_{\nu }}}-\left\{{\mu \nu \atop \tau }\right\}{\frac {\partial \varphi }{\partial x_{\tau }}}\right\}{\frac {dx_{\mu }}{ds}}{\frac {dx_{\nu }}{ds}}}$
From the interchangeability of the differentiation with regard to μ and ν, and also according to (23) and (21), we see that the bracket ${\displaystyle \left\{{\mu \nu \atop \tau }\right\}}$ is symmetrical with respect to μ and ν.
As we can draw a geodetic line in any direction from any point in the continuum, ${\displaystyle dx_{\mu }/ds}$ is thus a four-vector, with an arbitrary ratio of components, so that it follows from the results of § 7 that
(25) ${\displaystyle A_{\mu \nu }={\frac {\partial ^{2}\varphi }{\partial x_{\mu }\partial x_{\nu }}}-\left\{{\mu \nu \atop \tau }\right\}{\frac {\partial \varphi }{\partial x_{\tau }}}}$
is a covariant tensor of the second rank. We have thus got the result that out of the covariant tensor of the first rank
${\displaystyle A_{\mu }={\frac {\partial \varphi }{\partial x_{\mu }}}}$
we can get by differentiation a covariant tensor of 2nd rank
(26) ${\displaystyle A_{\mu \nu }={\frac {\partial A_{\mu }}{\partial x_{\nu }}}-\left\{{\mu \nu \atop \tau }\right\}A_{\tau }}$
We call the tensor ${\displaystyle A_{\mu \nu }}$ the "extension" of the tensor ${\displaystyle A_{\mu }}$. We can, however, easily show that this combination leads to a tensor even when the vector ${\displaystyle A_{\mu }}$ is not representable as a gradient. In order to see this we first remark that
${\displaystyle \psi {\frac {\partial \varphi }{\partial x_{\mu }}}}$
is a covariant four-vector when ψ and ${\displaystyle \varphi }$ are scalars. This is also the case for a sum of four such terms: —
${\displaystyle S_{\mu }=\psi ^{(1)}{\frac {\partial \varphi ^{(1)}}{\partial x_{\mu }}}+\dots +\psi ^{(4)}{\frac {\partial \varphi ^{(4)}}{\partial x_{\mu }}}}$
when ${\displaystyle \psi ^{(1)}\varphi ^{(1)}\dots \psi ^{(4)}\varphi ^{(4)}}$ are scalars. Now it is however clear that every covariant four-vector is representable in the form of ${\displaystyle S_{\mu }}$.
If for example ${\displaystyle A_{\mu }}$ is a four-vector whose components are any given functions of ${\displaystyle x_{\nu }}$, we have (with reference to the chosen co-ordinate system) only to put
${\displaystyle {\begin{array}{ccc}\psi ^{(1)}=A_{1},&&\varphi ^{(1)}=x_{1},\\\psi ^{(2)}=A_{2},&&\varphi ^{(2)}=x_{2},\\\psi ^{(3)}=A_{3},&&\varphi ^{(3)}=x_{3},\\\psi ^{(4)}=A_{4},&&\varphi ^{(4)}=x_{4},\end{array}}}$
in order to arrive at the result that ${\displaystyle S_{\mu }}$ is equal to ${\displaystyle A_{\mu }}$.
In order to prove then that ${\displaystyle A_{\mu \nu }}$ is a tensor when on the right side of (26) we substitute any covariant four-vector for ${\displaystyle A_{\mu }}$, we have only to show that this is true for the four-vector ${\displaystyle S_{\mu }}$. For this latter case, however, a glance at the right hand side of (26) will show that we have only to bring forth the proof for the case when
${\displaystyle A_{\mu }=\psi {\frac {\partial \varphi }{\partial x_{\mu }}}}$
Now the right hand side of (25) multiplied by ψ is
${\displaystyle \psi {\frac {\partial ^{2}\varphi }{\partial x_{\mu }\partial x_{\nu }}}-\left\{{\mu \nu \atop \tau }\right\}\psi {\frac {\partial \varphi }{\partial x_{\tau }}}}$
which has a tensor character. Similarly,
${\displaystyle {\frac {\partial \psi }{\partial x_{\mu }}}{\frac {\partial \varphi }{\partial x_{\nu }}}}$
is also a tensor (outer product of two four-vectors). Through addition follows the tensor character of
${\displaystyle {\frac {\partial }{\partial x_{\nu }}}\left(\psi {\frac {\partial \varphi }{\partial x_{\mu }}}\right)-\left\{{\mu \nu \atop \tau }\right\}\left(\psi {\frac {\partial \varphi }{\partial x_{\tau }}}\right)}$
Thus we get the desired proof for the four-vector,
${\displaystyle \psi {\frac {\partial \varphi }{\partial x_{\mu }}}}$
and hence, as shown above, for any four-vector ${\displaystyle A_{\mu }}$.
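Once ${\displaystyle \partial A_{\mu }/\partial x_{\nu }}$ and the symbols ${\displaystyle \left\{{\mu \nu \atop \tau }\right\}}$ are given numerically, (26) is a plain array operation. A sketch with arbitrary stand-in data (the symbols here are illustrative, not derived from a metric):

```python
import numpy as np

rng = np.random.default_rng(6)
dA = rng.standard_normal((4, 4))        # dA[nu, mu] = partial A_mu / partial x_nu
A = rng.standard_normal(4)              # the vector components A_mu themselves
Gamma = rng.standard_normal((4, 4, 4))  # Gamma[mu, nu, tau] stands in for {mu nu, tau}
Gamma = 0.5 * (Gamma + Gamma.transpose(1, 0, 2))  # symmetric in mu, nu

# Equation (26): A_{mu nu} = dA_mu/dx_nu - {mu nu, tau} A_tau
ext = dA.T - np.einsum('mnt,t->mn', Gamma, A)
```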
With the help of the extension of the four-vector, we can easily define "extension" of a covariant tensor of any rank. This is a generalisation of the extension of the four-vector. We confine ourselves to the case of the extension of the tensors of the 2nd rank for which the law of formation can be clearly seen.
As already remarked every covariant tensor of the 2nd rank can be represented[6] as a sum of the tensors of the type ${\displaystyle A_{\mu }B_{\nu }}$. It would therefore be sufficient to deduce the expression of extension, for one such special tensor. According to (26) we have the expressions
${\displaystyle {\frac {\partial A_{\mu }}{\partial x_{\sigma }}}-\left\{{\sigma \mu \atop \tau }\right\}A_{\tau }}$ ${\displaystyle {\frac {\partial B_{\nu }}{\partial x_{\sigma }}}-\left\{{\sigma \nu \atop \tau }\right\}B_{\tau }}$
are tensors. Through outer multiplication of the first with ${\displaystyle B_{\nu }}$ and the 2nd with ${\displaystyle A_{\mu }}$ we get tensors of the third rank. Their addition gives the tensor of the third rank
(27) ${\displaystyle A_{\mu \nu \sigma }={\frac {\partial A_{\mu \nu }}{\partial x_{\sigma }}}-\left\{{\sigma \mu \atop \tau }\right\}A_{\tau \nu }-\left\{{\sigma \nu \atop \tau }\right\}A_{\mu \tau }}$
where ${\displaystyle A_{\mu \nu }=A_{\mu }B_{\nu }}$. The right hand side of (27) is linear and homogeneous in ${\displaystyle A_{\mu \nu }}$ and its first differential co-efficients, so that this law of formation leads to a tensor not only for a tensor of the type ${\displaystyle A_{\mu }B_{\nu }}$ but also for a sum of such tensors, i.e., for any covariant tensor of the second rank. We call ${\displaystyle A_{\mu \nu \sigma }}$ the extension of the tensor ${\displaystyle A_{\mu \nu }}$.
It is clear that (26) and (24) are only special cases of (27) (extension of the tensors of the first and zero rank). In general we can get all special laws of formation of tensors from (27) combined with tensor multiplication.
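Equation (27), written out as contractions over arbitrary illustrative arrays (again with stand-in symbols not derived from a metric):

```python
import numpy as np

rng = np.random.default_rng(7)
A = rng.standard_normal((4, 4))         # A_{mu nu}
dA = rng.standard_normal((4, 4, 4))     # dA[sigma, mu, nu] = d A_{mu nu} / d x_sigma
Gamma = rng.standard_normal((4, 4, 4))  # stand-in for {mu nu, tau}
Gamma = 0.5 * (Gamma + Gamma.transpose(1, 0, 2))  # symmetric in mu, nu

# Equation (27): A_{mu nu sigma} = dA_{mu nu}/dx_sigma
#                 - {sigma mu, tau} A_{tau nu} - {sigma nu, tau} A_{mu tau}
ext = (np.einsum('smn->mns', dA)
       - np.einsum('smt,tn->mns', Gamma, A)
       - np.einsum('snt,mt->mns', Gamma, A))
```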
### § 11. Some special cases of Particular Importance.
A few auxiliary lemmas concerning the fundamental tensor. We shall first deduce some lemmas which will be much used afterwards. According to the law of differentiation of determinants, we have
(28) ${\displaystyle dg=g^{\mu \nu }g\ dg_{\mu \nu }=-g_{\mu \nu }g\ dg^{\mu \nu }}$
The last form follows from the first when we remember that ${\displaystyle g_{\mu \nu }g^{\mu '\nu }=\delta _{\mu }^{\mu '}}$, and therefore ${\displaystyle g_{\mu \nu }g^{\mu \nu }=4}$; consequently
${\displaystyle g_{\mu \nu }dg^{\mu \nu }+g^{\mu \nu }dg_{\mu \nu }=0}$
From (28), it follows that
(29) ${\displaystyle {\frac {1}{\sqrt {-g}}}{\frac {\partial {\sqrt {-g}}}{\partial x_{\sigma }}}={\frac {1}{2}}{\frac {\partial \lg(-g)}{\partial x_{\sigma }}}={\frac {1}{2}}g^{\mu \nu }{\frac {\partial g_{\mu \nu }}{\partial x_{\sigma }}}=-{\frac {1}{2}}g_{\mu \nu }{\frac {\partial g^{\mu \nu }}{\partial x_{\sigma }}}}$
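As a modern numerical aside (not part of the original text), the determinant identity (28) can be checked directly: for any one-parameter family of symmetric matrices ${\displaystyle g_{\mu \nu }(t)}$, the derivative of the determinant equals ${\displaystyle g\,g^{\mu \nu }\,dg_{\mu \nu }/dt}$. The particular family `g_of_t` below is an arbitrary illustrative choice.

```python
# Numerical check of identity (28): dg = g * g^{mu nu} dg_{mu nu}.
# Pure-Python sketch with a hypothetical one-parameter family of
# symmetric 3x3 matrices; the family itself is an arbitrary choice.

def det3(m):
    """Determinant of a 3x3 matrix given as nested lists."""
    return (m[0][0]*(m[1][1]*m[2][2]-m[1][2]*m[2][1])
          - m[0][1]*(m[1][0]*m[2][2]-m[1][2]*m[2][0])
          + m[0][2]*(m[1][0]*m[2][1]-m[1][1]*m[2][0]))

def inv3(m):
    """Inverse of a 3x3 matrix via the adjugate (transpose of the cofactors)."""
    d = det3(m)
    cof = [[(m[(i+1)%3][(j+1)%3]*m[(i+2)%3][(j+2)%3]
           - m[(i+1)%3][(j+2)%3]*m[(i+2)%3][(j+1)%3]) for j in range(3)]
          for i in range(3)]
    return [[cof[j][i]/d for j in range(3)] for i in range(3)]

def g_of_t(t):
    """A hypothetical symmetric matrix family (assumption, for illustration)."""
    return [[2.0+t, 0.3*t, 0.1],
            [0.3*t, 1.5,   0.2*t],
            [0.1,   0.2*t, 3.0+t*t]]

t, h = 0.4, 1e-5
g    = g_of_t(t)
ginv = inv3(g)
# central finite differences for dg_{mu nu}/dt and d(det g)/dt
dg_dt = [[(g_of_t(t+h)[i][j]-g_of_t(t-h)[i][j])/(2*h) for j in range(3)]
         for i in range(3)]
ddet_dt = (det3(g_of_t(t+h)) - det3(g_of_t(t-h)))/(2*h)
trace   = sum(ginv[i][j]*dg_dt[i][j] for i in range(3) for j in range(3))
print(abs(ddet_dt - det3(g)*trace) < 1e-6)   # identity (28) holds
```

The same comparison works for any smooth symmetric family with non-vanishing determinant.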
Again, since
${\displaystyle g_{\mu \sigma }g^{\nu \sigma }=\delta _{\mu }^{\nu }}$
we have, by differentiation,
(30) ${\displaystyle {\begin{cases}&g_{\mu \sigma }dg^{\nu \sigma }=-g^{\nu \sigma }dg_{\mu \sigma }\\\mathrm {or} \\&g_{\mu \sigma }{\frac {\partial g^{\nu \sigma }}{\partial x_{\lambda }}}=-g^{\nu \sigma }{\frac {\partial g_{\mu \sigma }}{\partial x_{\lambda }}}\end{cases}}}$
By mixed multiplication with ${\displaystyle g^{\sigma \tau }}$ and ${\displaystyle g_{\nu \lambda }}$ respectively we obtain (changing the mode of writing the indices).
(31) ${\displaystyle {\begin{cases}dg^{\mu \nu }&=-g^{\mu \alpha }g^{\nu \beta }dg_{\alpha \beta }\\\\{\frac {\partial g^{\mu \nu }}{\partial x_{\sigma }}}&=-g^{\mu \alpha }g^{\nu \beta }{\frac {\partial g_{\alpha \beta }}{\partial x_{\sigma }}}\end{cases}}}$
and
(32) ${\displaystyle {\begin{cases}dg_{\mu \nu }&=-g_{\mu \alpha }g_{\nu \beta }dg^{\alpha \beta }\\\\{\frac {\partial g_{\mu \nu }}{\partial x_{\sigma }}}&=-g_{\mu \alpha }g_{\nu \beta }{\frac {\partial g^{\alpha \beta }}{\partial x_{\sigma }}}\end{cases}}}$
The expression (31) allows a transformation which we shall often use; according to (21)
(33) ${\displaystyle {\frac {\partial g_{\alpha \beta }}{\partial x_{\sigma }}}=\left[{\alpha \sigma \atop \beta }\right]+\left[{\beta \sigma \atop \alpha }\right]}$
If we substitute this in the second of the formula (31), we get, remembering (23),
(34) ${\displaystyle {\frac {\partial g^{\mu \nu }}{\partial x_{\sigma }}}=-\left(g^{\mu \tau }\left\{{\tau \sigma \atop \nu }\right\}+g^{\nu \tau }\left\{{\tau \sigma \atop \mu }\right\}\right)}$
By substituting the right-hand side of (34) in (29), we get
(29a) ${\displaystyle {\frac {1}{\sqrt {-g}}}{\frac {\partial {\sqrt {-g}}}{\partial x_{\sigma }}}=\left\{{\mu \sigma \atop \mu }\right\}}$
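Identity (29a) lends itself to a quick modern numerical check (an addition, not part of the original). The paper works with a metric of negative determinant, so ${\displaystyle {\sqrt {-g}}}$ appears; for the positive-definite polar-coordinate metric ${\displaystyle g=\mathrm {diag} (1,r^{2})}$ on the plane the same identity reads ${\displaystyle (1/{\sqrt {g}})\,\partial {\sqrt {g}}/\partial r=\left\{{\mu r \atop \mu }\right\}}$. The Christoffel symbols below are built from (21) and (23) by finite differences.

```python
# Numerical check of (29a): the contracted Christoffel symbol {mu sigma, mu}
# equals d(lg sqrt(g))/dx^sigma.  Illustrative metric: polar coordinates on
# the flat plane, g = diag(1, r^2) (a positive-definite stand-in for the
# paper's sqrt(-g); this choice is an assumption for the sketch).

import math

def metric(x):                       # x = (r, theta)
    r, _ = x
    return [[1.0, 0.0], [0.0, r*r]]

def log_sqrt_det(x):
    g = metric(x)
    return 0.5*math.log(g[0][0]*g[1][1] - g[0][1]*g[1][0])

def christoffel(x, h=1e-5):
    """Christoffel symbols of the second kind {mu nu, tau}, from (21) and (23)."""
    g = metric(x)
    det = g[0][0]*g[1][1] - g[0][1]*g[1][0]
    ginv = [[ g[1][1]/det, -g[0][1]/det],
            [-g[1][0]/det,  g[0][0]/det]]
    def dg(k, i, j):                 # partial of g_{ij} with respect to x^k
        xp, xm = list(x), list(x)
        xp[k] += h; xm[k] -= h
        return (metric(xp)[i][j] - metric(xm)[i][j])/(2*h)
    # first kind [mu nu, sigma], equation (21)
    first = [[[0.5*(dg(n, m, s) + dg(m, n, s) - dg(s, m, n))
               for s in range(2)] for n in range(2)] for m in range(2)]
    # second kind {mu nu, tau} = g^{tau a}[mu nu, a], equation (23)
    return [[[sum(ginv[t][a]*first[m][n][a] for a in range(2))
              for t in range(2)] for n in range(2)] for m in range(2)]

x, h = (2.0, 0.7), 1e-5
G = christoffel(x)
contracted = sum(G[m][0][m] for m in range(2))    # {mu r, mu}
lhs = (log_sqrt_det((x[0]+h, x[1])) - log_sqrt_det((x[0]-h, x[1])))/(2*h)
print(abs(contracted - lhs) < 1e-6)
```

Here ${\displaystyle {\sqrt {g}}=r}$, so both sides equal ${\displaystyle 1/r}$.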
Divergence of the contravariant four-vector. Let us multiply (26) with the contravariant fundamental tensor ${\displaystyle g^{\mu \nu }}$ (inner multiplication); then, by a transformation of the first member, the right-hand side takes the form
${\displaystyle {\frac {\partial }{\partial x_{\nu }}}\left(g^{\mu \nu }A_{\mu }\right)-A_{\mu }{\frac {\partial g^{\mu \nu }}{\partial x_{\nu }}}-{\frac {1}{2}}g^{\tau \alpha }\left({\frac {\partial g_{\mu \alpha }}{\partial x_{\nu }}}+{\frac {\partial g_{\nu \alpha }}{\partial x_{\mu }}}-{\frac {\partial g_{\mu \nu }}{\partial x_{\alpha }}}\right)g^{\mu \nu }A_{\tau }}$
According to (31) and (29) the last member can take the form
${\displaystyle {\frac {1}{2}}{\frac {\partial g^{\tau \nu }}{\partial x_{\nu }}}A_{\tau }+{\frac {1}{2}}{\frac {\partial g^{\tau \mu }}{\partial x_{\mu }}}A_{\tau }+{\frac {1}{\sqrt {-g}}}{\frac {\partial {\sqrt {-g}}}{\partial x_{\alpha }}}g^{\tau \alpha }A_{\tau }}$
The first two members of this expression and the second member of the expression above cancel one another, since the naming of the summation-indices is immaterial. The last member can then be united with the first term of the expression above. If we put
${\displaystyle g^{\mu \nu }A_{\mu }=A^{\nu }}$
where ${\displaystyle A^{\nu }}$ as well as ${\displaystyle A_{\mu }}$ are vectors which can be arbitrarily chosen, we obtain finally
(35) ${\displaystyle \Phi ={\frac {1}{\sqrt {-g}}}{\frac {\partial }{\partial x_{\nu }}}\left({\sqrt {-g}}A^{\nu }\right)}$
This scalar is the Divergence of the contravariant four-vector ${\displaystyle A^{\nu }}$.
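Since (35) is a scalar, it must give the same value in every co-ordinate system. A modern numerical sketch (not in the original): the constant Cartesian field ${\displaystyle (A_{x},A_{y})=(1,0)}$ has zero divergence; its contravariant polar components are ${\displaystyle A^{r}=\cos \theta }$, ${\displaystyle A^{\theta }=-\sin \theta /r}$, and (35) with ${\displaystyle {\sqrt {g}}=r}$ (the positive-definite analogue of ${\displaystyle {\sqrt {-g}}}$) must again give zero. All numerical choices are illustrative.

```python
# Formula (35): Phi = (1/sqrt(g)) d(sqrt(g) A^nu)/dx^nu in polar coordinates,
# applied to the constant Cartesian field (1, 0).  The field components and
# the evaluation point are hypothetical illustrative choices.

import math

def A(x):                 # contravariant polar components of the Cartesian field (1, 0)
    r, th = x
    return [math.cos(th), -math.sin(th)/r]

def divergence(x, h=1e-5):
    """Phi = (1/sqrt(g)) d(sqrt(g) A^nu)/dx^nu with sqrt(g) = r."""
    r, th = x
    def dens(xx, nu):     # sqrt(g) * A^nu evaluated at the point xx
        return xx[0]*A(xx)[nu]
    d_r  = (dens((r+h, th), 0) - dens((r-h, th), 0))/(2*h)
    d_th = (dens((r, th+h), 1) - dens((r, th-h), 1))/(2*h)
    return (d_r + d_th)/r

print(abs(divergence((1.3, 0.8))) < 1e-8)   # a constant field has zero divergence
```

The two contributions, ${\displaystyle \cos \theta /r}$ and ${\displaystyle -\cos \theta /r}$, cancel exactly.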
"Rotation" of the (covariant) four-vector. The second member in (26) is symmetrical in the indices μ, and ν, Hence ${\displaystyle A_{\mu \nu }-A_{\nu \mu }}$ is an anti-symmetrical tensor built up in a very simple manner. We obtain
(36) ${\displaystyle B_{\mu \nu }={\frac {\partial A_{\mu }}{\partial x_{\nu }}}-{\frac {\partial A_{\nu }}{\partial x_{\mu }}}}$
Anti-symmetrical Extension of a Six-vector. If we apply the operation (27) to an anti-symmetrical tensor of the second rank ${\displaystyle A_{\mu \nu }}$, form all the equations arising from the cyclic interchange of the indices ${\displaystyle \mu ,\nu ,\sigma }$, and add them all, we obtain a tensor of the third rank
(37) ${\displaystyle B_{\mu \nu \sigma }=A_{\mu \nu \sigma }+A_{\nu \sigma \mu }+A_{\sigma \mu \nu }={\frac {\partial A_{\mu \nu }}{\partial x_{\sigma }}}+{\frac {\partial A_{\nu \sigma }}{\partial x_{\mu }}}+{\frac {\partial A_{\sigma \mu }}{\partial x_{\nu }}}}$
from which it is easy to see that the tensor is anti-symmetrical.
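The antisymmetry of (37) is purely algebraic: it needs only the antisymmetry of ${\displaystyle A_{\mu \nu }}$, not any property of the derivatives. A quick modern check (not part of the text), using random stand-ins `D[s][m][n]` for ${\displaystyle \partial A_{mn}/\partial x_{s}}$, antisymmetrized in the last two slots by construction:

```python
# Algebraic check that B_{mu nu sigma} of (37) is antisymmetric in every
# adjacent index pair whenever A_{mu nu} is antisymmetric.  D[s][m][n] is a
# random, hypothetical stand-in for dA_{mn}/dx_s.

import random

random.seed(0)
n = 4
raw = [[[random.random() for _ in range(n)] for _ in range(n)] for _ in range(n)]
# antisymmetrize in the last two slots: D[s][m][k] = -D[s][k][m]
D = [[[raw[s][m][k] - raw[s][k][m] for k in range(n)] for m in range(n)]
     for s in range(n)]

def B(m, nu, s):
    """Cyclic sum (37): dA_{mn}/dx_s + dA_{ns}/dx_m + dA_{sm}/dx_n."""
    return D[s][m][nu] + D[m][nu][s] + D[nu][s][m]

ok = all(abs(B(a,b,c) + B(b,a,c)) < 1e-12 and abs(B(a,b,c) + B(a,c,b)) < 1e-12
         for a in range(n) for b in range(n) for c in range(n))
print(ok)   # True: antisymmetric under swapping either adjacent pair
```

Swapping either pair of indices permutes the three terms of the cyclic sum into their negatives, which is exactly what the check confirms.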
Divergence of the Six-vector. If (27) is multiplied by ${\displaystyle g^{\mu \alpha }g^{\nu \beta }}$ (mixed multiplication), then a tensor is obtained. The first member of the right hand side of (27) can be written in the form
${\displaystyle {\frac {\partial }{\partial x_{\sigma }}}\left(g^{\mu \alpha }g^{\nu \beta }A_{\mu \nu }\right)-g^{\mu \alpha }{\frac {\partial g^{\nu \beta }}{\partial x_{\sigma }}}A_{\mu \nu }-g^{\nu \beta }{\frac {\partial g^{\mu \alpha }}{\partial x_{\sigma }}}A_{\mu \nu }}$
If we replace ${\displaystyle g^{\mu \alpha }g^{\nu \beta }A_{\mu \nu \sigma }}$ by ${\displaystyle A_{\sigma }^{\alpha \beta }}$, ${\displaystyle g^{\mu \alpha }g^{\nu \beta }A_{\mu \nu }}$ by ${\displaystyle A^{\alpha \beta }}$ and replace in the transformed first member
${\displaystyle {\frac {\partial g^{\nu \beta }}{\partial x_{\sigma }}}}$ and ${\displaystyle {\frac {\partial g^{\mu \alpha }}{\partial x_{\sigma }}}}$
with the help of (34), then from the right-hand side of (27) there arises an expression with seven terms, of which four cancel. There remains
(38) ${\displaystyle A_{\sigma }^{\alpha \beta }={\frac {\partial A^{\alpha \beta }}{\partial x_{\sigma }}}+\left\{{\sigma \varkappa \atop \alpha }\right\}A^{\varkappa \beta }+\left\{{\sigma \varkappa \atop \beta }\right\}A^{\alpha \varkappa }}$
This is the expression for the extension of a contravariant tensor of the second rank; extensions can also be formed for corresponding contravariant tensors of higher and lower ranks.
We remark that in the same way, we can also form the extension of a mixed tensor ${\displaystyle A_{\mu }^{\alpha }}$:
(39) ${\displaystyle A_{\mu \sigma }^{\alpha }={\frac {\partial A_{\mu }^{\alpha }}{\partial x_{\sigma }}}-\left\{{\sigma \mu \atop \tau }\right\}A_{\tau }^{\alpha }+\left\{{\sigma \tau \atop \alpha }\right\}A_{\mu }^{\tau }}$
By the reduction of (38) with reference to the indices β and σ (inner multiplication with ${\displaystyle \delta _{\beta }^{\sigma }}$), we get a contravariant four-vector
${\displaystyle A^{\alpha }={\frac {\partial A^{\alpha \beta }}{\partial x_{\beta }}}+\left\{{\beta \varkappa \atop \beta }\right\}A^{\alpha \varkappa }+\left\{{\beta \varkappa \atop \alpha }\right\}A^{\varkappa \beta }}$
On account of the symmetry of ${\displaystyle \left\{{\beta \varkappa \atop \alpha }\right\}}$ with reference to the indices β and ${\displaystyle \varkappa }$, the third member of the right hand side vanishes when ${\displaystyle A^{\alpha \beta }}$ is an anti-symmetrical tensor, which we assume here; the second member can be transformed according to (29a); we therefore get
(40) ${\displaystyle A^{\alpha }={\frac {1}{\sqrt {-g}}}{\frac {\partial \left({\sqrt {-g}}A^{\alpha \beta }\right)}{\partial x_{\beta }}}}$
This is the expression of the divergence of a contravariant six-vector.
Divergence of the mixed tensor of the second rank. If we form the reduction of (39) with reference to the indices α and σ, we obtain, remembering (29a),
(41) ${\displaystyle {\sqrt {-g}}A_{\mu }={\frac {\partial \left({\sqrt {-g}}A_{\mu }^{\sigma }\right)}{\partial x_{\sigma }}}-\left\{{\sigma \mu \atop \tau }\right\}{\sqrt {-g}}A_{\tau }^{\sigma }}$
If we introduce into the last term the contravariant tensor ${\displaystyle A^{\varrho \sigma }=g^{\varrho \tau }A_{\tau }^{\sigma }}$, it takes the form
${\displaystyle -\left[{\sigma \mu \atop \varrho }\right]{\sqrt {-g}}A^{\varrho \sigma }}$
If further ${\displaystyle A^{\varrho \sigma }}$ is symmetrical it is reduced to
${\displaystyle -{\frac {1}{2}}{\sqrt {-g}}{\frac {\partial g_{\varrho \sigma }}{\partial x_{\mu }}}A^{\varrho \sigma }}$
If instead of ${\displaystyle A^{\varrho \sigma }}$, we introduce in a similar way the symmetrical covariant tensor ${\displaystyle A_{\varrho \sigma }=g_{\varrho \alpha }g_{\sigma \beta }A^{\alpha \beta }}$, then owing to (31) the last member can take the form
${\displaystyle {\frac {1}{2}}{\sqrt {-g}}{\frac {\partial g^{\varrho \sigma }}{\partial x_{\mu }}}A_{\varrho \sigma }}$
In the symmetrical case treated, (41) can be replaced by either of the forms
(41a) ${\displaystyle {\sqrt {-g}}A_{\mu }={\frac {\partial \left({\sqrt {-g}}A_{\mu }^{\sigma }\right)}{\partial x_{\sigma }}}-{\frac {1}{2}}{\frac {\partial g_{\varrho \sigma }}{\partial x_{\mu }}}{\sqrt {-g}}A^{\varrho \sigma }}$
or
(41b) ${\displaystyle {\sqrt {-g}}A_{\mu }={\frac {\partial \left({\sqrt {-g}}A_{\mu }^{\sigma }\right)}{\partial x_{\sigma }}}+{\frac {1}{2}}{\frac {\partial g^{\varrho \sigma }}{\partial x_{\mu }}}{\sqrt {-g}}A_{\sigma \varrho }}$
which we shall have to make use of afterwards.
### § 12. The Riemann-Christoffel Tensor.
We now seek only those tensors which can be obtained from the fundamental tensor ${\displaystyle g_{\mu \nu }}$ by differentiation alone. It is found easily. We put in (27), instead of any tensor ${\displaystyle A_{\mu \nu }}$, the fundamental tensor ${\displaystyle g_{\mu \nu }}$ and get from it a new tensor, namely the extension of the fundamental tensor. We can easily convince ourselves that this vanishes identically. We reach our goal, however, in the following way: we substitute in (27)
${\displaystyle A_{\mu \nu }={\frac {\partial A_{\mu }}{\partial x_{\nu }}}-\left\{{\mu \nu \atop \varrho }\right\}A_{\varrho }}$
i.e., the extension of the four-vector ${\displaystyle A_{\mu }}$.
Thus we get (by slightly changing the indices) the tensor of the third rank
${\displaystyle {\begin{array}{ll}A_{\mu \sigma \tau }&={\frac {\partial ^{2}A_{\mu }}{\partial x_{\sigma }\partial x_{\tau }}}\\\\&-\left\{{\mu \sigma \atop \varrho }\right\}{\frac {\partial A_{\varrho }}{\partial x_{\tau }}}-\left\{{\mu \tau \atop \varrho }\right\}{\frac {\partial A_{\varrho }}{\partial x_{\sigma }}}-\left\{{\sigma \tau \atop \varrho }\right\}{\frac {\partial A_{\mu }}{\partial x_{\varrho }}}\\\\&+\left[-{\frac {\partial }{\partial x_{\tau }}}\left\{{\mu \sigma \atop \varrho }\right\}+\left\{{\mu \tau \atop \alpha }\right\}\left\{{\alpha \sigma \atop \varrho }\right\}+\left\{{\sigma \tau \atop \alpha }\right\}\left\{{\alpha \mu \atop \varrho }\right\}\right]A_{\varrho }\end{array}}}$
We use these expressions for the formation of the tensor ${\displaystyle A_{\mu \sigma \tau }-A_{\mu \tau \sigma }}$. Thereby the following terms in ${\displaystyle A_{\mu \sigma \tau }}$ cancel the corresponding terms in ${\displaystyle A_{\mu \tau \sigma }}$; the first member, the fourth member, as well as the member corresponding to the last term within the square bracket. These are all symmetrical in σ and τ. The same is true for the sum of the second and third members. We thus get
(42) ${\displaystyle A_{\mu \sigma \tau }-A_{\mu \tau \sigma }=B_{\mu \sigma \tau }^{\varrho }A_{\varrho }}$
(43) ${\displaystyle {\begin{cases}B_{\mu \sigma \tau }^{\varrho }=&-{\frac {\partial }{\partial x_{\tau }}}\left\{{\mu \sigma \atop \varrho }\right\}+{\frac {\partial }{\partial x_{\sigma }}}\left\{{\mu \tau \atop \varrho }\right\}\\\\&-\left\{{\mu \sigma \atop \alpha }\right\}\left\{{\alpha \tau \atop \varrho }\right\}+\left\{{\mu \tau \atop \alpha }\right\}\left\{{\alpha \sigma \atop \varrho }\right\}\end{cases}}}$
The essential thing in this result is that on the right hand side of (42) we have only ${\displaystyle A_{\varrho }}$, but not its differential co-efficients. From the tensor-character of ${\displaystyle A_{\mu \sigma \tau }-A_{\mu \tau \sigma }}$, and from the fact that ${\displaystyle A_{\varrho }}$ is an arbitrary four vector, it follows, on account of the result of §7, that ${\displaystyle B_{\mu \sigma \tau }^{\varrho }}$ is a tensor (Riemann-Christoffel Tensor).
The mathematical significance of this tensor is as follows: when the continuum is so shaped that there is a co-ordinate system for which the ${\displaystyle g_{\mu \nu }}$ are constants, then all the ${\displaystyle B_{\mu \sigma \tau }^{\varrho }}$ vanish.
If we choose instead of the original co-ordinate system any new one, the ${\displaystyle g_{\mu \nu }}$ referred to this last system will no longer be constants. The tensor character of ${\displaystyle B_{\mu \sigma \tau }^{\varrho }}$ shows us, however, that these components vanish collectively also in any other chosen system of reference. The vanishing of the Riemann Tensor is thus a necessary condition that, for some choice of the axis-system, the ${\displaystyle g_{\mu \nu }}$ can be taken as constants.[7] In our problem it corresponds to the case when, by a suitable choice of the co-ordinate system, the special relativity theory holds throughout any finite region. By the reduction of (43) with reference to the indices τ and ${\displaystyle \varrho }$, we get the covariant tensor of the second rank
(44) ${\displaystyle {\begin{cases}B_{\mu \nu }&=R_{\mu \nu }+S_{\mu \nu }\\\\R_{\mu \nu }&=-{\frac {\partial }{\partial x_{\alpha }}}\left\{{\mu \nu \atop \alpha }\right\}+\left\{{\mu \alpha \atop \beta }\right\}\left\{{\nu \beta \atop \alpha }\right\}\\\\S_{\mu \nu }&={\frac {\partial ^{2}\lg {\sqrt {-g}}}{\partial x_{\mu }\partial x_{\nu }}}-\left\{{\mu \nu \atop \alpha }\right\}{\frac {\partial \lg {\sqrt {-g}}}{\partial x_{\alpha }}}\end{cases}}}$
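The statement that a flat continuum has vanishing ${\displaystyle B_{\mu \sigma \tau }^{\varrho }}$ even in co-ordinates where the ${\displaystyle g_{\mu \nu }}$ are not constant can be illustrated numerically (a modern addition). The plane in polar co-ordinates has ${\displaystyle g_{\mu \nu }=\mathrm {diag} (1,r^{2})}$; its Christoffel symbols, hard-coded below from the standard result (an assumption of the sketch, not derived in the paper), are assembled into ${\displaystyle B_{\mu \sigma \tau }^{\varrho }}$ exactly as in (43), and every component vanishes.

```python
# Assemble the Riemann-Christoffel tensor (43) for the flat plane in polar
# coordinates (indices 0 = r, 1 = theta) and confirm it vanishes.  The
# Christoffel symbols {theta theta, r} = -r, {r theta, theta} = 1/r are the
# standard textbook values (hard-coded assumption for this sketch).

def C(r):
    """Christoffel symbols {mu nu, rho} of g = diag(1, r^2)."""
    G = [[[0.0]*2 for _ in range(2)] for _ in range(2)]
    G[1][1][0] = -r          # {theta theta, r}
    G[0][1][1] = 1.0/r       # {r theta, theta}
    G[1][0][1] = 1.0/r
    return G

def dC(r, h=1e-6):
    """Partial derivatives dG[k][mu][nu][rho] w.r.t. x^k (theta-derivatives vanish)."""
    Gp, Gm = C(r+h), C(r-h)
    d_r = [[[(Gp[m][n][p]-Gm[m][n][p])/(2*h) for p in range(2)]
            for n in range(2)] for m in range(2)]
    zero = [[[0.0]*2 for _ in range(2)] for _ in range(2)]
    return [d_r, zero]

r = 1.7
G, dG = C(r), dC(r)
maxB = 0.0
for rho in range(2):
    for mu in range(2):
        for s in range(2):
            for t in range(2):
                # equation (43), term by term
                B = (-dG[t][mu][s][rho] + dG[s][mu][t][rho]
                     - sum(G[mu][s][a]*G[a][t][rho] for a in range(2))
                     + sum(G[mu][t][a]*G[a][s][rho] for a in range(2)))
                maxB = max(maxB, abs(B))
print(maxB < 1e-6)   # flat space: every component of (43) vanishes
```

The derivative terms and the products of Christoffel symbols cancel pairwise, as the text's symmetry argument predicts.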
Remarks upon the choice of co-ordinates. — It has already been remarked in § 8, with reference to the equation (18a), that the co-ordinates can with advantage be so chosen that ${\displaystyle {\sqrt {-g}}=1}$. A glance at the equations got in the last two paragraphs shows that, through such a choice, the law of formation of the tensors suffers a significant simplification. This is especially true for the tensor ${\displaystyle B_{\mu \nu }}$, which plays a fundamental role in the theory. Through this simplification, ${\displaystyle S_{\mu \nu }}$ vanishes of itself, so that the tensor ${\displaystyle B_{\mu \nu }}$ reduces to ${\displaystyle R_{\mu \nu }}$.
I shall give in the following pages all relations in the simplified form, with the above-named specialisation of the co-ordinates. It is then very easy to go back to the general covariant equations, if it appears desirable in any special case.
## C. The Theory of the Gravitation-Field
### § 13. Equation of motion of a material point in a gravitation-field. Expression for the field-components of gravitation.
A freely moving body not acted on by external forces moves, according to the special relativity theory, along a straight line and uniformly. This also holds, in the generalised relativity theory, for any part of the four-dimensional region in which the co-ordinate system ${\displaystyle K_{0}}$ can be, and is, so chosen that the ${\displaystyle g_{\mu \nu }}$ have the special constant values of the expression (4).
Let us discuss this motion from the stand-point of any arbitrary co-ordinate-system ${\displaystyle K_{1}}$; the body moves with reference to ${\displaystyle K_{1}}$ (as explained in § 2) in a gravitational field. The laws of motion with reference to ${\displaystyle K_{1}}$ follow easily from the following consideration. With reference to ${\displaystyle K_{0}}$, the law of motion is a four-dimensional straight line and thus a geodesic. As a geodetic line is defined independently of the system of co-ordinates, it is also the law of motion for the material point with reference to ${\displaystyle K_{1}}$. If we put
(45) ${\displaystyle \Gamma _{\mu \nu }^{\tau }=-\left\{{\mu \nu \atop \tau }\right\}}$
we get the motion of the point with reference to ${\displaystyle K_{1}}$ given by
(46) ${\displaystyle {\frac {d^{2}x_{\tau }}{ds^{2}}}=\Gamma _{\mu \nu }^{\tau }{\frac {dx_{\mu }}{ds}}{\frac {dx_{\nu }}{ds}}}$
We now make the very simple assumption that this general covariant system of equations defines also the motion of the point in the gravitational field, when there exists no reference-system ${\displaystyle K_{0}}$, with reference to which the special relativity theory holds throughout a finite region. The assumption seems to us to be all the more legitimate, as (46) contains only the first differentials of ${\displaystyle g_{\mu \nu }}$, among which there is no relation in the special case when ${\displaystyle K_{0}}$ exists.[8]
If ${\displaystyle \Gamma _{\mu \nu }^{\tau }}$ vanish, the point moves uniformly and in a straight line; these magnitudes therefore determine the deviation from uniformity. They are the components of the gravitational field.
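Equation (46) can be put to a concrete modern test (not part of the original): on the flat plane in polar co-ordinates the only non-vanishing field-components are ${\displaystyle \Gamma _{\theta \theta }^{r}=r}$ and ${\displaystyle \Gamma _{r\theta }^{\theta }=-1/r}$ (from (45) with the standard Christoffel symbols), so (46) becomes ${\displaystyle r''=r\,\theta '^{2}}$, ${\displaystyle \theta ''=-2r'\theta '/r}$, and its solutions must be ordinary straight lines. Step size and initial data below are arbitrary illustrative choices.

```python
# Integrate the geodesic equation (46) in polar coordinates on the flat plane
# with a standard RK4 stepper; the trajectory must be a straight line.
# Initial data: Cartesian point (1, 0), unit velocity in the y-direction.

import math

def accel(r, th, rd, thd):
    # (46): d^2 r/ds^2 = r (dtheta/ds)^2,  d^2 theta/ds^2 = -2 (dr/ds)(dtheta/ds)/r
    return r*thd*thd, -2.0*rd*thd/r

def rk4_step(state, h):
    def f(s):
        r, th, rd, thd = s
        ar, ath = accel(r, th, rd, thd)
        return [rd, thd, ar, ath]
    k1 = f(state)
    k2 = f([state[i] + 0.5*h*k1[i] for i in range(4)])
    k3 = f([state[i] + 0.5*h*k2[i] for i in range(4)])
    k4 = f([state[i] + h*k3[i] for i in range(4)])
    return [state[i] + h/6.0*(k1[i] + 2*k2[i] + 2*k3[i] + k4[i]) for i in range(4)]

state = [1.0, 0.0, 0.0, 1.0]          # r, theta, dr/ds, dtheta/ds
h, steps = 0.001, 1000
for _ in range(steps):
    state = rk4_step(state, h)
x = state[0]*math.cos(state[1])
y = state[0]*math.sin(state[1])
print(abs(x - 1.0) < 1e-6 and abs(y - 1.0) < 1e-6)   # straight line reaches (1, 1)
```

Although ${\displaystyle r}$ and ${\displaystyle \theta }$ vary non-uniformly along the curve, the Cartesian image is uniform straight-line motion, exactly as the text asserts for vanishing field-components.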
### § 14. The Field-equation of Gravitation in the absence of matter.
In the following, we differentiate "gravitation-field" from "matter", in the sense that everything besides the gravitation-field will be signified as matter; the term therefore includes not only "matter" in the usual sense, but also the electro-dynamic field. Our next problem is to seek the field-equations of gravitation in the absence of matter. For this we apply the same method as employed in the foregoing paragraph for the deduction of the equations of motion for material points. A special case in which the field-equations sought-for are evidently satisfied is that of the special relativity theory, in which the ${\displaystyle g_{\mu \nu }}$ have certain constant values. This would be the case in a certain finite region with reference to a definite co-ordinate system ${\displaystyle K_{0}}$. With reference to this system, all the components ${\displaystyle B_{\mu \sigma \tau }^{\varrho }}$ of the Riemann Tensor [equation (43)] vanish. These then vanish, in the region considered, with reference to every other co-ordinate system as well.
The equations of the gravitation-field free from matter must thus be in every case satisfied when all ${\displaystyle B_{\mu \sigma \tau }^{\varrho }}$ vanish.
But this condition is clearly one which goes too far. For it is clear that the gravitation-field generated by a material point in its own neighbourhood can never be transformed away by any choice of axes, i.e., it cannot be transformed to a case of constant ${\displaystyle g_{\mu \nu }}$.
It is therefore natural to demand that, for a gravitational field free from matter, the symmetrical tensor ${\displaystyle B_{\mu \nu }}$ deduced from the tensor ${\displaystyle B_{\mu \sigma \tau }^{\varrho }}$ should vanish. We thus get 10 equations for the 10 quantities ${\displaystyle g_{\mu \nu }}$, which are fulfilled in the special case when ${\displaystyle B_{\mu \sigma \tau }^{\varrho }}$ all vanish.
Remembering (44), we see that in the absence of matter the field-equations come out as follows (when referred to the special co-ordinate-system chosen):
(47) ${\displaystyle \left\{{\begin{array}{c}{\frac {\partial \Gamma _{\mu \nu }^{\alpha }}{\partial x_{\alpha }}}+\Gamma _{\mu \beta }^{\alpha }\Gamma _{\nu \alpha }^{\beta }=0\\\\{\sqrt {-g}}=1\end{array}}\right.}$
It can also be shown that the choice of these equations is connected with a minimum of arbitrariness. For besides ${\displaystyle B_{\mu \nu }}$, there is no tensor of the second rank which can be built out of the ${\displaystyle g_{\mu \nu }}$ and their derivatives no higher than the second, and which is also linear in them.[9]
It will be shown that the equations arising in a purely mathematical way out of the conditions of the general relativity, together with equations (46), give us the Newtonian law of attraction as a first approximation, and lead in the second approximation to the explanation of the perihelion-motion of Mercury discovered by Leverrier (the residual effect which could not be accounted for by the consideration of all sorts of disturbing factors). My view is that these are convincing proofs of the physical correctness of the theory.
### § 15. Hamiltonian Function for the Gravitation-field. Laws of Impulse and Energy.
In order to show that the field-equations correspond to the laws of impulse and energy, it is most convenient to write them in the following Hamiltonian form: —
(47a) ${\displaystyle \left\{{\begin{array}{c}\delta \left\{\int Hd\tau \right\}=0\\\\H=g^{\mu \nu }\Gamma _{\mu \beta }^{\alpha }\Gamma _{\nu \alpha }^{\beta }\\\\{\sqrt {-g}}=1\end{array}}\right.}$
Here the variations vanish at the limits of the finite four-dimensional integration-space considered.
It is first necessary to show that the form (47a) is equivalent to equations (47). For this purpose, let us consider H as a function of ${\displaystyle g^{\mu \nu }}$ and
${\displaystyle g_{\sigma }^{\mu \nu }\left(={\frac {\partial g^{\mu \nu }}{\partial x_{\sigma }}}\right)}$
We have at first
${\displaystyle {\begin{array}{ll}\delta H&=\Gamma _{\mu \beta }^{\alpha }\Gamma _{\nu \alpha }^{\beta }\delta g^{\mu \nu }+2g^{\mu \nu }\Gamma _{\mu \beta }^{\alpha }\delta \Gamma _{\nu \alpha }^{\beta }\\\\&=-\Gamma _{\mu \beta }^{\alpha }\Gamma _{\nu \alpha }^{\beta }\delta g^{\mu \nu }+2\Gamma _{\mu \beta }^{\alpha }\delta \left(g^{\mu \nu }\Gamma _{\nu \alpha }^{\beta }\right)\end{array}}}$
But
${\displaystyle \delta \left(g^{\mu \nu }\Gamma _{\nu \alpha }^{\beta }\right)=-{\frac {1}{2}}\delta \left[g^{\mu \nu }g^{\beta \lambda }\left({\frac {\partial g_{\nu \lambda }}{\partial x_{\alpha }}}+{\frac {\partial g_{\alpha \lambda }}{\partial x_{\nu }}}-{\frac {\partial g_{\alpha \nu }}{\partial x_{\lambda }}}\right)\right]}$
The terms arising out of the two last terms within the round bracket are of different signs, and change into one another by the interchange of the indices μ and β. They cancel each other in the expression for δH, when they are multiplied by ${\displaystyle \Gamma _{\mu \beta }^{\alpha }}$, which is symmetrical with respect to μ and β so that only the first member of the bracket remains for our consideration. Remembering (31), we thus have: —
${\displaystyle \delta H=-\Gamma _{\mu \beta }^{\alpha }\Gamma _{\nu \alpha }^{\beta }\delta g^{\mu \nu }+\Gamma _{\mu \beta }^{\alpha }\delta g_{\alpha }^{\mu \beta }}$
Therefore
(48) ${\displaystyle {\begin{cases}{\frac {\partial H}{\partial g^{\mu \nu }}}=&-\Gamma _{\mu \beta }^{\alpha }\Gamma _{\nu \alpha }^{\beta }\\\\{\frac {\partial H}{\partial g_{\sigma }^{\mu \nu }}}=&\Gamma _{\mu \nu }^{\sigma }\end{cases}}}$
If we now carry out the variations in (47a), we obtain the system of equations
(47b) ${\displaystyle {\frac {\partial }{\partial x_{\alpha }}}\left({\frac {\partial H}{\partial g_{\alpha }^{\mu \nu }}}\right)-{\frac {\partial H}{\partial g^{\mu \nu }}}=0}$
which, owing to the relations (48), coincide with (47), as was required to be proved.
If (47b) is multiplied by ${\displaystyle g_{\sigma }^{\mu \nu }}$, since
${\displaystyle {\frac {\partial g_{\sigma }^{\mu \nu }}{\partial x_{\alpha }}}={\frac {\partial g_{\alpha }^{\mu \nu }}{\partial x_{\sigma }}}}$
and consequently
${\displaystyle g_{\sigma }^{\mu \nu }{\frac {\partial }{\partial x_{\alpha }}}\left({\frac {\partial H}{\partial g_{\alpha }^{\mu \nu }}}\right)={\frac {\partial }{\partial x_{\alpha }}}\left(g_{\sigma }^{\mu \nu }{\frac {\partial H}{\partial g_{\alpha }^{\mu \nu }}}\right)-{\frac {\partial H}{\partial g_{\alpha }^{\mu \nu }}}{\frac {\partial g_{\alpha }^{\mu \nu }}{\partial x_{\sigma }}}}$
we obtain the equation
${\displaystyle {\frac {\partial }{\partial x_{\alpha }}}\left(g_{\sigma }^{\mu \nu }{\frac {\partial H}{\partial g_{\alpha }^{\mu \nu }}}\right)-{\frac {\partial H}{\partial x_{\sigma }}}=0}$
or[10]
(49) ${\displaystyle \left\{{\begin{array}{c}{\frac {\partial t_{\sigma }^{\alpha }}{\partial x_{\alpha }}}=0\\\\-2\varkappa t_{\sigma }^{\alpha }=g_{\sigma }^{\mu \nu }{\frac {\partial H}{\partial g_{\alpha }^{\mu \nu }}}-\delta _{\sigma }^{\alpha }H\end{array}}\right.}$
or, owing to the relations (48), the equations (47) and (34),
(50) ${\displaystyle \varkappa t_{\sigma }^{\alpha }={\frac {1}{2}}\delta _{\sigma }^{\alpha }g^{\mu \nu }\Gamma _{\mu \beta }^{\lambda }\Gamma _{\nu \lambda }^{\beta }-g^{\mu \nu }\Gamma _{\mu \beta }^{\alpha }\Gamma _{\nu \sigma }^{\beta }}$
It is to be noticed that ${\displaystyle t_{\sigma }^{\alpha }}$ is not a tensor, so that the equation (49) holds only for systems for which ${\displaystyle {\sqrt {-g}}=1}$. This equation expresses the laws of conservation of impulse and energy in a gravitation-field. In fact, the integration of this equation over a three-dimensional volume V leads to the four equations
(49a) ${\displaystyle {\frac {d}{dx_{4}}}\left\{\int t_{\sigma }^{4}dV\right\}=\int \left(t_{\sigma }^{1}\alpha _{1}+t_{\sigma }^{2}\alpha _{2}+t_{\sigma }^{3}\alpha _{3}\right)dS}$
where ${\displaystyle \alpha _{1},\alpha _{2},\alpha _{3}}$ are the direction-cosines of the inward drawn normal to the surface-element dS in the Euclidean sense. We recognise in this the usual expression for the laws of conservation. We denote the magnitudes ${\displaystyle t_{\sigma }^{\alpha }}$ as the energy-components of the gravitation-field.
I will now put the equation (47) in a third form which will be very serviceable for a quick realisation of our object. By multiplying the field-equations (47) with ${\displaystyle g^{\nu \sigma }}$, these are obtained in the "mixed" forms. If we remember that
${\displaystyle g^{\nu \sigma }{\frac {\partial \Gamma _{\mu \nu }^{\alpha }}{\partial x_{\alpha }}}={\frac {\partial }{\partial x_{\alpha }}}\left(g^{\nu \sigma }\Gamma _{\mu \nu }^{\alpha }\right)-{\frac {\partial g^{\nu \sigma }}{\partial x_{\alpha }}}\Gamma _{\mu \nu }^{\alpha }}$
which owing to (34) is equal to
${\displaystyle {\frac {\partial }{\partial x_{\alpha }}}\left(g^{\nu \sigma }\Gamma _{\mu \nu }^{\alpha }\right)-g^{\nu \beta }\Gamma _{\alpha \beta }^{\sigma }\Gamma _{\mu \nu }^{\alpha }-g^{\sigma \beta }\Gamma _{\beta \alpha }^{\nu }\Gamma _{\mu \nu }^{\alpha }}$
or slightly altering the notation equal to
${\displaystyle {\frac {\partial }{\partial x_{\alpha }}}\left(g^{\sigma \beta }\Gamma _{\mu \beta }^{\alpha }\right)-g^{mn}\Gamma _{m\beta }^{\sigma }\Gamma _{n\mu }^{\beta }-g^{\nu \sigma }\Gamma _{\mu \beta }^{\alpha }\Gamma _{\nu \alpha }^{\beta }}$
The third member of this expression cancels the second member of the field-equations (47). In place of the second term of this expression, we can, on account of the relations (50), put
${\displaystyle \varkappa \left(t_{\mu }^{\sigma }-{\frac {1}{2}}\delta _{\mu }^{\sigma }t\right)}$
where ${\displaystyle \left(t=t_{\alpha }^{\alpha }\right)}$
Therefore in the place of the equations (47), we obtain
(51) ${\displaystyle \left\{{\begin{array}{c}{\frac {\partial }{\partial x_{\alpha }}}\left(g^{\sigma \beta }\Gamma _{\mu \beta }^{\alpha }\right)=-\varkappa \left(t_{\mu }^{\sigma }-{\frac {1}{2}}\delta _{\mu }^{\sigma }t\right)\\\\{\sqrt {-g}}=1\end{array}}\right.}$
### § 16. General formulation of the field-equation of Gravitation.
The field-equations established in the preceding paragraph for spaces free from matter are to be compared with the equation
${\displaystyle \Delta \varphi =0}$
of the Newtonian theory. We have now to find the equations which will correspond to Poisson's Equation
${\displaystyle \Delta \varphi =4\pi \varkappa \varrho }$
(${\displaystyle \varrho }$ signifies the density of matter) .
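As a modern numerical aside (not in the original text): outside the source, Poisson's equation reduces to ${\displaystyle \Delta \varphi =0}$, which the Newtonian potential ${\displaystyle \varphi =-1/r}$ satisfies. A finite-difference check at an arbitrary point away from the origin (the point and step size are illustrative choices):

```python
# Verify that phi = -1/r is harmonic away from the origin: the central
# second-difference approximation of the Laplacian should vanish.

import math

def phi(x, y, z):
    return -1.0/math.sqrt(x*x + y*y + z*z)

def laplacian(f, x, y, z, h=1e-3):
    """Sum of central second differences in each coordinate direction."""
    return ((f(x+h,y,z) - 2*f(x,y,z) + f(x-h,y,z))
          + (f(x,y+h,z) - 2*f(x,y,z) + f(x,y-h,z))
          + (f(x,y,z+h) - 2*f(x,y,z) + f(x,y,z-h)))/(h*h)

print(abs(laplacian(phi, 1.0, 2.0, 2.0)) < 1e-5)   # Delta phi = 0 off the origin
```

At the origin itself the Laplacian of ${\displaystyle -1/r}$ is a point source, which is the content of Poisson's equation with a matter density ${\displaystyle \varrho }$.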
The special relativity theory has led to the conception that the inertial mass is no other than energy. It can also be fully expressed mathematically by a symmetrical tensor of the second rank, the energy-tensor. We have therefore to introduce in our generalised theory an energy-tensor ${\displaystyle T_{\sigma }^{\alpha }}$ associated with matter, which like the energy components ${\displaystyle t_{\sigma }^{\alpha }}$ of the gravitation-field (equations 49, and 50) have a mixed character but which however can be connected with symmetrical covariant tensors.[11] The equation (51) teaches us how to introduce the energy-tensor (corresponding to the density ${\displaystyle \varrho }$ of Poisson's equation) in the field equations of gravitation. If we consider a complete system (for example the Solar-system) its total mass, as also its total gravitating action, will depend on the total energy of the system, ponderable as well as gravitational.
This can be expressed, by putting in (51), in place of energy-components ${\displaystyle t_{\mu }^{\sigma }}$ of gravitation-field alone the sum of the energy-components of matter and gravitation, i.e., ${\displaystyle t_{\mu }^{\sigma }+T_{\mu }^{\sigma }}$.
We thus get instead of (51), the tensor-equation
(52) ${\displaystyle \left\{{\begin{array}{c}{\frac {\partial }{\partial x_{\alpha }}}\left(g^{\sigma \beta }\Gamma _{\mu \beta }^{\alpha }\right)=-\varkappa \left(\left(t_{\mu }^{\sigma }+T_{\mu }^{\sigma }\right)-{\frac {1}{2}}\delta _{\mu }^{\sigma }(t+T)\right)\\\\{\sqrt {-g}}=1\end{array}}\right.}$
where ${\displaystyle T=T_{\mu }^{\mu }}$ (Laue's Scalar). These are the general field-equations of gravitation in the mixed form. In place of (47), we get by working backwards the system
(53) ${\displaystyle \left\{{\begin{array}{c}{\frac {\partial \Gamma _{\mu \nu }^{\alpha }}{\partial x_{\alpha }}}+\Gamma _{\mu \beta }^{\alpha }\Gamma _{\nu \alpha }^{\beta }=-\varkappa \left(T_{\mu \nu }-{\frac {1}{2}}g_{\mu \nu }T\right)\\\\{\sqrt {-g}}=1\end{array}}\right.}$
It must be admitted, that this introduction of the energy-tensor of matter cannot be justified by means of the Relativity-Postulate alone; for we have in the foregoing analysis deduced it from the condition that the energy of the gravitation-field should exert gravitating action in the same way as every other kind of energy. The strongest ground for the choice of the above equation however lies in this, that they lead, as their consequences, to equations expressing the conservation of the components of total energy (the impulses and the energy) which exactly correspond to the equations (49) and (49a). This shall be shown afterwards.
### § 17. The laws of conservation in the general case.
The equations (52) can be easily so transformed that the second member on the right-hand side vanishes. We reduce (52) with reference to the indices μ and σ and subtract the equation so obtained after multiplication with ${\displaystyle {\frac {1}{2}}\delta _{\mu }^{\sigma }}$ from (52). We obtain:
(52a) ${\displaystyle {\frac {\partial }{\partial x_{\alpha }}}\left(g^{\sigma \beta }\Gamma _{\mu \beta }^{\alpha }-{\frac {1}{2}}\delta _{\mu }^{\sigma }g^{\lambda \beta }\Gamma _{\lambda \beta }^{\alpha }\right)=-\varkappa \left(t_{\mu }^{\sigma }+T_{\mu }^{\sigma }\right)}$
we operate on it by ${\displaystyle \partial /\partial x_{\sigma }}$. Now,
${\displaystyle {\frac {\partial ^{2}}{\partial x_{\alpha }\partial x_{\sigma }}}\left(g^{\sigma \beta }\Gamma _{\mu \beta }^{\alpha }\right)=-{\frac {1}{2}}{\frac {\partial ^{2}}{\partial x_{\alpha }\partial x_{\sigma }}}\left[g^{\sigma \beta }g^{\alpha \lambda }\left({\frac {\partial g_{\mu \lambda }}{\partial x_{\beta }}}+{\frac {\partial g_{\beta \lambda }}{\partial x_{\mu }}}-{\frac {\partial g_{\mu \beta }}{\partial x_{\lambda }}}\right)\right]}$
The first and the third member of the round bracket lead to expressions which cancel one another, as can be easily seen by interchanging the summation-indices α and σ on the one hand, and β and λ on the other. The second term can be transformed according to (31). So that we get
(54) ${\displaystyle {\frac {\partial ^{2}}{\partial x_{\alpha }\partial x_{\sigma }}}\left(g^{\sigma \beta }\Gamma _{\mu \beta }^{\alpha }\right)={\frac {1}{2}}{\frac {\partial ^{3}g^{\alpha \beta }}{\partial x_{\alpha }\partial x_{\beta }\partial x_{\mu }}}}$
The second member of the expression on the left-hand side of (52a) leads first to
${\displaystyle -{\frac {1}{2}}{\frac {\partial ^{2}}{\partial x_{\alpha }\partial x_{\mu }}}\left(g^{\lambda \beta }\Gamma _{\lambda \beta }^{\alpha }\right)}$
or
${\displaystyle {\frac {1}{4}}{\frac {\partial ^{2}}{\partial x_{\alpha }\partial x_{\mu }}}\left[g^{\lambda \beta }g^{\alpha \delta }\left({\frac {\partial g_{\delta \lambda }}{\partial x_{\beta }}}+{\frac {\partial g_{\delta \beta }}{\partial x_{\lambda }}}-{\frac {\partial g_{\lambda \beta }}{\partial x_{\delta }}}\right)\right]}$
The expression arising out of the last member within the round bracket vanishes according to (29) on account of the choice of axes. The two others can be taken together and give us on account of (31), the expression
${\displaystyle -{\frac {1}{2}}{\frac {\partial ^{3}g^{\alpha \beta }}{\partial x_{\alpha }\partial x_{\beta }\partial x_{\mu }}}}$
So that remembering (54) we have the identity
(55) ${\displaystyle {\frac {\partial ^{2}}{\partial x_{\alpha }\partial x_{\sigma }}}\left(g^{\sigma \beta }\Gamma _{\mu \beta }^{\alpha }-{\frac {1}{2}}\delta _{\mu }^{\sigma }g^{\lambda \beta }\Gamma _{\lambda \beta }^{\alpha }\right)\equiv 0}$
From (55) and (52a) it follows that
(56) ${\displaystyle {\frac {\partial \left(t_{\mu }^{\sigma }+T_{\mu }^{\sigma }\right)}{\partial x_{\sigma }}}=0}$
From the field equations of gravitation it also follows that the conservation-laws of impulse and energy are satisfied. We see this most simply by following the same reasoning which led to equations (49a); only instead of the energy-components ${\displaystyle t_{\mu }^{\sigma }}$ of the gravitational-field, we are to introduce the total energy-components of matter and gravitational field.
### § 18. The Impulse-energy law for matter as a consequence of the field-equations.
If we multiply (53) with ${\displaystyle \partial g^{\mu \nu }/\partial x_{\sigma }}$, we get in a way similar to § 15, remembering that
${\displaystyle g_{\mu \nu }{\frac {\partial g^{\mu \nu }}{\partial x_{\sigma }}}}$
vanishes, the equations
${\displaystyle {\frac {\partial t_{\sigma }^{\alpha }}{\partial x_{\alpha }}}+{\frac {1}{2}}{\frac {\partial g^{\mu \nu }}{\partial x_{\sigma }}}T_{\mu \nu }=0}$
or remembering (56)
(57) ${\displaystyle {\frac {\partial T_{\sigma }^{\alpha }}{\partial x_{\alpha }}}+{\frac {1}{2}}{\frac {\partial g^{\mu \nu }}{\partial x_{\sigma }}}T_{\mu \nu }=0}$
A comparison with (41b) shows that these equations, for the above choice of co-ordinates, assert nothing but the vanishing of the divergence of the tensor of the energy-components of matter.
Physically the appearance of the second term on the left-hand side shows that for matter alone the law of conservation of impulse and energy cannot hold; or can only hold when the ${\displaystyle g^{\mu \nu }}$ are constants, i.e., when the field of gravitation vanishes. The second member is an expression for the impulse and energy which the gravitation-field exerts per unit time and per unit volume upon matter. This comes out clearer when, instead of (57), we write it in the form of (41):
(57a) ${\displaystyle {\frac {\partial T_{\sigma }^{\alpha }}{\partial x_{\alpha }}}=-\Gamma _{\sigma \beta }^{\alpha }T_{\alpha }^{\beta }}$
The right-hand side expresses the interaction of the energy of the gravitational-field on matter. The field-equations of gravitation contain thus at the same time 4 conditions which are to be satisfied by all material phenomena. We get the equations of the material phenomena completely when the latter is characterised by four other differential equations independent of one another.[12]
## D. The "Material" Phenomena.
The mathematical auxiliaries developed under B at once enable us to generalise, according to the generalised theory of relativity, the physical laws of matter (hydrodynamics, Maxwell's electro-dynamics) as they lie already formulated according to the special-relativity-theory. The generalised Relativity Principle leads us to no further limitation of possibilities; but it enables us to know exactly the influence of gravitation on all processes without the introduction of any new hypothesis.
It is owing to this, that as regards the physical nature of matter (in a narrow sense) no definite necessary assumptions are to be introduced. The question may lie open whether the theories of the electro-magnetic field and the gravitational-field together will form a sufficient basis for the theory of matter. The general relativity postulate can teach us no new principle. But by building up the theory it must be shown whether electro-magnetism and gravitation together can achieve what the former alone did not succeed in doing.
### § 19. Euler's equations for frictionless adiabatic liquid.
Let p and ${\displaystyle \varrho }$ be two scalars, of which the first denotes the "pressure" and the second the "density" of the liquid; between them there is a relation. Let the contravariant symmetrical tensor
(58) ${\displaystyle T^{\alpha \beta }=-g^{\alpha \beta }p+\varrho {\frac {dx_{\alpha }}{ds}}{\frac {dx_{\beta }}{ds}}}$
be the contravariant energy-tensor of the liquid. To it also belongs the covariant tensor
(58a) ${\displaystyle T_{\mu \nu }=-g_{\mu \nu }p+g_{\mu \alpha }{\frac {dx_{\alpha }}{ds}}g_{\nu \beta }{\frac {dx_{\beta }}{ds}}\varrho }$
as well as the mixed tensor[13]
(58b) ${\displaystyle T_{\sigma }^{\alpha }=-\delta _{\sigma }^{\alpha }p+g_{\sigma \beta }{\frac {dx_{\beta }}{ds}}{\frac {dx_{\alpha }}{ds}}\varrho }$
If we put the right-hand side of (58b) in (57a), we get the general hydrodynamical equations of Euler according to the generalised relativity theory. This in principle completely solves the problem of motion; for the four equations (57a) together with the given equation between p and ${\displaystyle \varrho }$, and the equation
${\displaystyle g_{\alpha \beta }{\frac {dx_{\alpha }}{ds}}{\frac {dx_{\beta }}{ds}}=1}$
are sufficient, with the given values of ${\displaystyle g_{\alpha \beta }}$, for finding out the six unknowns
${\displaystyle p,\ \varrho ,\ {\frac {dx_{1}}{ds}},\ {\frac {dx_{2}}{ds}},\ {\frac {dx_{3}}{ds}},\ {\frac {dx_{4}}{ds}}}$
If the ${\displaystyle g_{\mu \nu }}$ are also unknown we have further to take the equations (53). There are now 11 equations for finding out the 10 functions ${\displaystyle g_{\mu \nu }}$, so that the number is more than sufficient. Now it is to be noticed that the equation (57a) is already contained in (53), so that the latter represents only 7 independent equations. This indefiniteness is due to the wide freedom in the choice of co-ordinates, so that mathematically the problem is indefinite in the sense that three of the space-functions can be arbitrarily chosen.[14]
### § 20. Maxwell's Electro-Magnetic field-equations for the vacuum.
Let ${\displaystyle \varphi _{\nu }}$ be the components of a covariant four-vector, the electro-magnetic potential; from it let us form according to (36) the Components ${\displaystyle F_{\varrho \sigma }}$ of the covariant six-vector of the electro-magnetic field according to the system of equations
(59) ${\displaystyle F_{\varrho \sigma }={\frac {\partial \varphi _{\varrho }}{\partial x_{\sigma }}}-{\frac {\partial \varphi _{\sigma }}{\partial x_{\varrho }}}}$
From (59) it follows that the system of equations
(60) ${\displaystyle {\frac {\partial F_{\varrho \sigma }}{\partial x_{\tau }}}+{\frac {\partial F_{\sigma \tau }}{\partial x_{\varrho }}}+{\frac {\partial F_{\tau \varrho }}{\partial x_{\sigma }}}=0}$
is satisfied, of which the left-hand side, according to (37), is an anti-symmetrical tensor of the third rank. This system (60) contains essentially four equations, which can be written thus: —
(60a) ${\displaystyle {\begin{cases}{\frac {\partial F_{23}}{\partial x_{4}}}+{\frac {\partial F_{34}}{\partial x_{2}}}+{\frac {\partial F_{42}}{\partial x_{3}}}&=0\\\\{\frac {\partial F_{34}}{\partial x_{1}}}+{\frac {\partial F_{41}}{\partial x_{3}}}+{\frac {\partial F_{13}}{\partial x_{4}}}&=0\\\\{\frac {\partial F_{41}}{\partial x_{2}}}+{\frac {\partial F_{12}}{\partial x_{4}}}+{\frac {\partial F_{24}}{\partial x_{1}}}&=0\\\\{\frac {\partial F_{12}}{\partial x_{3}}}+{\frac {\partial F_{23}}{\partial x_{1}}}+{\frac {\partial F_{31}}{\partial x_{2}}}&=0\end{cases}}}$
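As a modern aside (my illustration, not part of the original text): equation (60) holds identically once F is derived from a potential as in (59), because mixed partial derivatives commute. Finite-difference derivatives along different grid axes also commute exactly, so the cyclic sum vanishes to machine precision for an arbitrary smooth potential.

```python
import numpy as np

rng = np.random.default_rng(0)
n, h = 8, 1.0 / 7
grid = np.meshgrid(*[np.linspace(0.0, 1.0, n)] * 4, indexing="ij")

# four arbitrary smooth potentials phi_mu(x_1, ..., x_4)
coef = rng.normal(size=(4, 4))
phi = [sum(coef[m, k] * np.sin(grid[k] + m) for k in range(4)) for m in range(4)]

dphi = [np.gradient(p, h) for p in phi]        # dphi[m][s] = d phi_m / d x_s
F = [[dphi[r][s] - dphi[s][r] for s in range(4)] for r in range(4)]   # eq. (59)
dF = [[np.gradient(F[r][s], h) for s in range(4)] for r in range(4)]

# largest violation of eq. (60) over all index triples (rho, sigma, tau)
violation = max(
    np.abs(dF[r][s][t] + dF[s][t][r] + dF[t][r][s]).max()
    for r in range(4) for s in range(4) for t in range(4)
)
print(violation)   # essentially zero: rounding error only
```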
This system of equations corresponds to the second system of equations of Maxwell. We see it at once if we put
(61) ${\displaystyle \left\{{\begin{array}{ccc}F_{23}={\mathfrak {h}}_{x}&&F_{14}={\mathfrak {e}}_{x}\\F_{31}={\mathfrak {h}}_{y}&&F_{24}={\mathfrak {e}}_{y}\\F_{12}={\mathfrak {h}}_{z}&&F_{34}={\mathfrak {e}}_{z}\end{array}}\right.}$
Instead of (60a) we can therefore write according to the usual notation of three-dimensional vector-analysis: —
(60b) ${\displaystyle \left\{{\begin{array}{c}{\frac {\partial {\mathfrak {h}}}{\partial t}}+\mathrm {rot} \ {\mathfrak {e}}=0\\\\\mathrm {div} \ {\mathfrak {h}}=0\end{array}}\right.}$
The first Maxwellian system is obtained by a generalisation of the form given by Minkowski.
We introduce the contravariant six-vector ${\displaystyle F^{\alpha \beta }}$ by the equation

(62) ${\displaystyle F^{\mu \nu }=g^{\mu \alpha }g^{\nu \beta }F_{\alpha \beta }}$
and also a contravariant four-vector ${\displaystyle J^{\mu }}$, which is the electrical current-density in vacuum. Then remembering (40) we can establish the system of equations, which remains invariant for any substitution with determinant 1 (according to our choice of co-ordinates)
(63) ${\displaystyle {\frac {\partial F^{\mu \nu }}{\partial x_{\nu }}}=J^{\mu }}$
If we put
(64) ${\displaystyle \left\{{\begin{array}{ccc}F^{23}={\mathfrak {h'}}_{x}&&F^{14}={\mathfrak {e'}}_{x}\\F^{31}={\mathfrak {h'}}_{y}&&F^{24}={\mathfrak {e'}}_{y}\\F^{12}={\mathfrak {h'}}_{z}&&F^{34}={\mathfrak {e'}}_{z}\end{array}}\right.}$
which quantities become equal to ${\displaystyle {\mathfrak {h}}_{x}\dots {\mathfrak {e}}_{z}}$ in the case of the special relativity theory, and besides
${\displaystyle J^{1}={\mathfrak {i}}_{x},\ J^{2}={\mathfrak {i}}_{y},\ J^{3}={\mathfrak {i}}_{z},\ J^{4}=\varrho }$
we get instead of (63)
(63a) ${\displaystyle \left\{{\begin{array}{c}\mathrm {rot} \ {\mathfrak {h'}}-{\frac {\partial {\mathfrak {e}}'}{\partial t}}={\mathfrak {i}}\\\\\mathrm {div} \ {\mathfrak {e'}}=\varrho \end{array}}\right.}$
The equations (60), (62) and (63) give thus a generalisation of Maxwell's field-equations in vacuum, which remains true in our chosen system of co-ordinates.
*The energy-components of the electromagnetic field.* Let us form the inner-product
(65) ${\displaystyle \varkappa _{\sigma }=F_{\sigma \mu }J^{\mu }}$
According to (61) its components can be written down in the three-dimensional notation.
(65a) ${\displaystyle {\begin{cases}\varkappa _{1}=\varrho {\mathfrak {e}}_{x}+[{\mathfrak {i,h}}]_{x}\\\dots \dots \\\dots \dots \\\varkappa _{4}=-({\mathfrak {i,e}})\end{cases}}}$
${\displaystyle \varkappa _{\sigma }}$ is a covariant four-vector whose components are equal to the negative impulse and energy which are transferred to the electro-magnetic field per unit of time, and per unit of volume, by the electrical masses. If the electrical masses be free, that is, under the influence of the electromagnetic field only, then the covariant four-vector ${\displaystyle \varkappa _{\sigma }}$ will vanish.
In order to get the energy components ${\displaystyle T_{\sigma }^{\nu }}$ of the electro-magnetic field, we require only to give to the equation ${\displaystyle \varkappa _{\sigma }=0}$, the form of the equation (57).
From (63) and (65) we get first,
${\displaystyle \varkappa _{\sigma }=F_{\sigma \mu }{\frac {\partial F^{\mu \nu }}{\partial x_{\nu }}}={\frac {\partial }{\partial x_{\nu }}}\left(F_{\sigma \mu }F^{\mu \nu }\right)-F^{\mu \nu }{\frac {\partial F_{\sigma \mu }}{\partial x_{\nu }}}}$
On account of (60) the second member on the right-hand side admits of the transformation —
${\displaystyle F^{\mu \nu }{\frac {\partial F_{\sigma \mu }}{\partial x_{\nu }}}=-{\frac {1}{2}}F^{\mu \nu }{\frac {\partial F_{\mu \nu }}{\partial x_{\sigma }}}=-{\frac {1}{2}}g^{\mu \alpha }g^{\nu \beta }F_{\alpha \beta }{\frac {\partial F_{\mu \nu }}{\partial x_{\sigma }}}}$
Owing to symmetry, this expression can also be written in the form
${\displaystyle -{\frac {1}{4}}\left[g^{\mu \alpha }g^{\nu \beta }F_{\alpha \beta }{\frac {\partial F_{\mu \nu }}{\partial x_{\sigma }}}+g^{\mu \alpha }g^{\nu \beta }{\frac {\partial F_{\alpha \beta }}{\partial x_{\sigma }}}F_{\mu \nu }\right]}$
which can also be put in the form
${\displaystyle -{\frac {1}{4}}{\frac {\partial }{\partial x_{\sigma }}}\left(g^{\mu \alpha }g^{\nu \beta }F_{\alpha \beta }F_{\mu \nu }\right)+{\frac {1}{4}}F_{\alpha \beta }F_{\mu \nu }{\frac {\partial }{\partial x_{\sigma }}}\left(g^{\mu \alpha }g^{\nu \beta }\right)}$
The first of these terms can be written shortly as
${\displaystyle -{\frac {1}{4}}{\frac {\partial }{\partial x_{\sigma }}}\left(F^{\mu \nu }F_{\mu \nu }\right)}$
and the second, after differentiation, can be transformed into the form
${\displaystyle -{\frac {1}{2}}F^{\mu \tau }F_{\mu \nu }g^{\nu \varrho }{\frac {\partial g_{\varrho \tau }}{\partial x_{\sigma }}}}$
If we take all the three terms together, we get the relation
(66) ${\displaystyle \varkappa _{\sigma }={\frac {\partial T_{\sigma }^{\nu }}{\partial x_{\nu }}}-{\frac {1}{2}}g^{\tau \mu }{\frac {\partial g_{\mu \nu }}{\partial x_{\sigma }}}T_{\tau }^{\nu }}$
where
(66a) ${\displaystyle T_{\sigma }^{\nu }=-F_{\sigma \alpha }F^{\nu \alpha }+{\frac {1}{4}}\delta _{\sigma }^{\nu }F_{\alpha \beta }F^{\alpha \beta }}$
On account of (30) the equation (66) becomes equivalent to (57) and (57a) when ${\displaystyle \varkappa _{\sigma }}$ vanishes. Thus ${\displaystyle T_{\sigma }^{\nu }}$ are the energy-components of the electro-magnetic field. With the help of (61) and (64) we can easily show that the energy-components of the electro-magnetic field, in the case of the special relativity theory, give rise to the well-known Maxwell-Poynting expressions.
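As a modern numerical aside (the random field and the diagonal metric below are my choices, not Einstein's text): with the special-relativity metric g = diag(-1,-1,-1,1), the mixed tensor of (66a) is traceless, and via the identifications (61) its 4-4 component is the familiar Maxwell energy density (e² + h²)/2.

```python
import numpy as np

rng = np.random.default_rng(1)
g = np.diag([-1.0, -1.0, -1.0, 1.0])          # equal to its own inverse

A = rng.normal(size=(4, 4))
F_low = A - A.T                                # an arbitrary antisymmetric F_{mu nu}
F_up = g @ F_low @ g                           # F^{mu nu} = g^{mu a} g^{nu b} F_{ab}

invariant = np.sum(F_low * F_up)               # F_{alpha beta} F^{alpha beta}
T = -F_low @ F_up.T + 0.25 * invariant * np.eye(4)   # T_sigma^nu of (66a)

e = np.array([F_low[0, 3], F_low[1, 3], F_low[2, 3]])   # (F_14, F_24, F_34), eq. (61)
h = np.array([F_low[1, 2], F_low[2, 0], F_low[0, 1]])   # (F_23, F_31, F_12), eq. (61)

print(np.trace(T))                      # ~0: the tensor is traceless
print(T[3, 3] - 0.5 * (e @ e + h @ h))  # ~0: 4-4 component = energy density
```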
We have now deduced the most general laws which the gravitation-field and matter satisfy when we use a co-ordinate system for which ${\displaystyle {\sqrt {-g}}=1}$. Thereby we achieve an important simplification in all our formulas and calculations, without renouncing the conditions of general covariance, as we have obtained the equations through a specialisation of the co-ordinate system from the generally covariant equations. Still the question is not without formal interest, whether, when the energy-components of the gravitation-field and matter are defined in a generalised manner without any specialisation of co-ordinates, the laws of conservation have the form of the equation (56), and the field-equations of gravitation hold in the form (52) or (52a); such that on the left-hand side we have a divergence in the usual sense, and on the right-hand side the sum of the energy-components of matter and gravitation. I have found out that this is indeed the case. But I am of opinion that the communication of my rather comprehensive work on this subject would not pay, for nothing essentially new comes out of it.
## E. § 21. Newton's Theory as a First Approximation.
We have already mentioned several times that the special relativity theory is to be looked upon as a special case of the general, in which ${\displaystyle g_{\mu \nu }}$ have constant values (4).
This signifies, according to what has been said before, a total neglect of the influence of gravitation. We get a more realistic approximation if we consider the case when ${\displaystyle g_{\mu \nu }}$ differ from (4) only by small magnitudes (compared to 1), where we can neglect small quantities of the second and higher orders (first point of view of the approximation).
Further it should be assumed that within the space-time region considered, ${\displaystyle g_{\mu \nu }}$ at infinite distances (using the word infinite in a spatial sense) can, by a suitable choice of co-ordinates, tend to the limiting values (4); i.e., we consider only those gravitational fields which can be regarded as produced by masses distributed over finite regions.
We can assume that this approximation should lead to Newton's theory. For it, however, it is necessary to treat the fundamental equations from another point of view. Let us consider the motion of a particle according to the equation (46). In the case of the special relativity theory, the components
${\displaystyle {\frac {dx_{1}}{ds}},\ {\frac {dx_{2}}{ds}},\ {\frac {dx_{3}}{ds}}}$
can take any values; this signifies that any velocity
${\displaystyle v={\sqrt {\left({\frac {dx_{1}}{dx_{4}}}\right)^{2}+\left({\frac {dx_{2}}{dx_{4}}}\right)^{2}+\left({\frac {dx_{3}}{dx_{4}}}\right)^{2}}}}$
can appear which is less than the velocity of light in vacuum (v<1). If we finally limit ourselves to the consideration of the case when v is small compared to the velocity of light, it signifies that the components
${\displaystyle {\frac {dx_{1}}{ds}},\ {\frac {dx_{2}}{ds}},\ {\frac {dx_{3}}{ds}}}$
can be treated as small quantities, whereas ${\displaystyle dx_{4}/ds}$ is equal to 1, up to the second-order magnitudes (the second point of view for approximation).
Now we see that, according to the first view of approximation, the magnitudes ${\displaystyle \Gamma _{\mu \nu }^{\tau }}$ are all small quantities of at least the first order. A glance at (46) will also show that in this equation, according to the second view of approximation, we are only to take into account those terms for which ${\displaystyle \mu =\nu =4}$.
By limiting ourselves only to terms of the lowest order we get instead of (46), first, the equations : —
${\displaystyle {\frac {d^{2}x_{\tau }}{dt^{2}}}=\Gamma _{44}^{\tau }}$
where ${\displaystyle ds=dx_{4}=dt}$, or by limiting ourselves only to those terms which according to the first stand-point are approximations of the first order,
${\displaystyle {\begin{array}{l}{\frac {d^{2}x_{\tau }}{dt^{2}}}=\left[{44 \atop \tau }\right]\ (\tau =1,2,3)\\\\{\frac {d^{2}x_{4}}{dt^{2}}}=-\left[{44 \atop 4}\right]\end{array}}}$
If we further assume that the gravitation-field is quasi-static, i.e., limited to the case when the matter producing the gravitation-field is moving slowly (relative to the velocity of light), we can, on the right-hand side, neglect differentiations with respect to time compared with those with respect to the space co-ordinates, so that we get
(67) ${\displaystyle {\frac {d^{2}x_{\tau }}{dt^{2}}}=-{\frac {1}{2}}{\frac {\partial g_{44}}{\partial x_{\tau }}}\ (\tau =1,2,3)}$
This is the equation of motion of a material point according to Newton's theory, where ${\displaystyle g_{44}/2}$ plays the part of the gravitational potential. The remarkable thing in this result is that, in the first approximation of the motion of the material point, only the component ${\displaystyle g_{44}}$ of the fundamental tensor appears.
Let us now turn to the field-equation (53). In this case we have to remember that the energy-tensor of matter is exclusively defined in a narrow sense by the density ${\displaystyle \varrho }$ of matter, i.e., by the second member on the right-hand side of (58) [or (58a), or (58b)]. If we make the necessary approximations, then all components vanish except
${\displaystyle T_{44}=\varrho =T}$
On the left-hand side of (53) the second term is an infinitesimal of the second order, so that the first leads to the following terms in the approximation, which are rather interesting for us
${\displaystyle +{\frac {\partial }{\partial x_{1}}}\left[{\mu \nu \atop 1}\right]+{\frac {\partial }{\partial x_{2}}}\left[{\mu \nu \atop 2}\right]+{\frac {\partial }{\partial x_{3}}}\left[{\mu \nu \atop 3}\right]-{\frac {\partial }{\partial x_{4}}}\left[{\mu \nu \atop 4}\right]}$
Neglecting all differentiations with regard to time, this leads, when ${\displaystyle \mu =\nu =4}$, to the expression
${\displaystyle -{\frac {1}{2}}\left({\frac {\partial ^{2}g_{44}}{\partial x_{1}^{2}}}+{\frac {\partial ^{2}g_{44}}{\partial x_{2}^{2}}}+{\frac {\partial ^{2}g_{44}}{\partial x_{3}^{2}}}\right)=-{\frac {1}{2}}\Delta g_{44}}$
The last of the equations (53) thus leads to
(68) ${\displaystyle \Delta g_{44}=\varkappa \varrho }$
The equations (67) and (68) together, are equivalent to Newton's law of gravitation.
For the gravitation-potential we get from (67) and (68) the expression
(68a) ${\displaystyle -{\frac {\varkappa }{8\pi }}\int {\frac {\varrho d\tau }{r}}}$
whereas the Newtonian theory for the chosen unit of time gives
${\displaystyle -{\frac {K}{c^{2}}}\int {\frac {\varrho d\tau }{r}}}$
where K denotes the usual gravitation-constant ${\displaystyle 6.7\cdot 10^{-8}}$; equating them we get

(69) ${\displaystyle \varkappa ={\frac {8\pi K}{c^{2}}}=1.87\cdot 10^{-27}}$
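A quick arithmetic check of (69): K = 6.7·10⁻⁸ is the CGS gravitation constant quoted in the text, while c = 3·10¹⁰ cm/s is my assumed value for the velocity of light, not stated here.

```python
import math

K = 6.7e-8          # CGS gravitation constant, as in the text
c = 3.0e10          # speed of light in cm/s (assumed value)
kappa = 8 * math.pi * K / c ** 2
print(kappa)        # about 1.87e-27 in CGS units
```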
### § 22. Behaviour of measuring rods and clocks in a statical gravitation-field. Curvature of light-rays. Perihelion-motion of the paths of the Planets.
In order to obtain Newton's theory as a first approximation we had to calculate only ${\displaystyle g_{44}}$ out of the 10 components of the gravitation-potential ${\displaystyle g_{\mu \nu }}$, for that is the only component which comes in the first approximate equations of motion of a material point in a gravitational field. We see, however, that the other components of ${\displaystyle g_{\mu \nu }}$ should also differ from the values given in (4), as required by the condition ${\displaystyle g=-1}$.
For a mass-point at the origin of co-ordinates and generating the gravitational field, we get, as a first approximation, the symmetrical solution of the equations: —
(70) ${\displaystyle {\begin{cases}g_{\varrho \sigma }=-\delta _{\varrho \sigma }-\alpha {\frac {x_{\varrho }x_{\sigma }}{r^{3}}}\ (\varrho \ \mathrm {and} \ \sigma \ \mathrm {between} \ 1\ \mathrm {and} \ 3)\\\\g_{\varrho 4}=g_{4\varrho }=0\ (\varrho \ \mathrm {between} \ 1\ \mathrm {and} \ 3)\\\\g_{44}=1-{\frac {\alpha }{r}}\end{cases}}}$
${\displaystyle \delta _{\varrho \sigma }}$ is 1 or 0, according as ${\displaystyle \varrho =\sigma }$ or ${\displaystyle \varrho \neq \sigma }$, and r is the quantity
${\displaystyle +{\sqrt {x_{1}^{2}+x_{2}^{2}+x_{3}^{2}}}}$
On account of (68a) we have
(70a) ${\displaystyle \alpha ={\frac {\varkappa M}{4\pi }}}$
where M denotes the mass generating the field. It is easy to verify that this solution satisfies approximately the field-equation outside the mass.
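That the solution just displayed does satisfy the field-equation outside the mass can be illustrated numerically (a modern sketch, mine): away from the origin, g_44 = 1 - α/r obeys the vacuum form of (68), Δg_44 = 0, and a central-difference Laplacian evaluated off the origin vanishes to stencil accuracy.

```python
def g44(x, y, z, alpha=1.0):
    # the 44-component of the first-approximation solution
    return 1.0 - alpha / (x * x + y * y + z * z) ** 0.5

def laplacian(f, x, y, z, h=1e-3):
    # standard 7-point central-difference Laplacian
    return (f(x + h, y, z) + f(x - h, y, z) + f(x, y + h, z) + f(x, y - h, z)
            + f(x, y, z + h) + f(x, y, z - h) - 6.0 * f(x, y, z)) / h ** 2

residual = abs(laplacian(g44, 1.0, 2.0, 2.0))   # a point at r = 3, off the mass
print(residual)   # tiny compared with 1: Delta g_44 = 0 outside the mass
```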
Let us now investigate the influences which the field of mass M will have upon the metrical properties of the field. Between the lengths and times ds measured "locally" (§ 4) on the one hand, and the differences in co-ordinates ${\displaystyle dx_{\nu }}$ on the other, we have the relation
${\displaystyle ds^{2}=g_{\mu \nu }dx_{\mu }dx_{\nu }}$
For a unit measuring rod, for example, placed "parallel" to the x-axis, we have to put
${\displaystyle ds^{2}=-1;\ dx_{2}=dx_{3}=dx_{4}=0}$
then
${\displaystyle -1=g_{11}dx_{1}^{2}}$
If the unit measuring rod lies on the x-axis, the first of the equations (70) gives
${\displaystyle g_{11}=-\left(1+{\frac {\alpha }{r}}\right)}$
From both these relations it follows as a first approximation that
(71) ${\displaystyle dx_{1}=1-{\frac {\alpha }{2r}}}$
The unit measuring rod appears, when referred to the co-ordinate-system, shortened by the calculated magnitude through the presence of the gravitational field, when we place it radially in the field.
Similarly we can get its co-ordinate-length in a tangential position, if we put for example
${\displaystyle ds^{2}=-1;\ dx_{1}=dx_{3}=dx_{4}=0;\ x_{1}=r,\ x_{2}=x_{3}=0}$
we then get
(71a) ${\displaystyle -1=g_{22}dx_{2}^{2}=-dx_{2}^{2}}$
The gravitational field has no influence upon the length of the rod, when we put it tangentially in the field.
Thus Euclidean geometry does not hold in the gravitational field even in the first approximation, if we conceive that one and the same rod, independent of its position and its orientation, can serve as the measure of the same extension. But a glance at (70a) and (69) shows that the expected difference is much too small to be noticeable in the measurement of the earth's surface.
We shall further investigate the rate of going of a unit-clock which is placed in a statical gravitational field. Here we have for a period of the clock
${\displaystyle ds=1;\ dx_{1}=dx_{2}=dx_{3}=0}$
then we have
${\displaystyle {\begin{array}{c}1=g_{44}dx_{4}^{2};\\\\dx_{4}={\frac {1}{\sqrt {g_{44}}}}={\frac {1}{\sqrt {1+\left(g_{44}-1\right)}}}=1-{\frac {g_{44}-1}{2}}\end{array}}}$
or
(72) ${\displaystyle dx_{4}=1+{\frac {\varkappa }{8\pi }}\int {\frac {\varrho d\tau }{r}}}$
Therefore the clock goes more slowly when it is placed in the neighbourhood of ponderable masses. It follows from this that the spectral lines in the light coming to us from the surfaces of big stars should appear shifted towards the red end of the spectrum.[15]
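As an illustrative order of magnitude (the solar mass and radius below are my assumed CGS inputs, not given in the text): by (72), a clock at the surface of a star of mass M and radius R runs slow by the fraction (κ/8π)·M/R, which is also the predicted fractional red-shift of its spectral lines; for the sun this comes out near two parts in a million.

```python
import math

K, c = 6.7e-8, 3.0e10               # CGS gravitation constant, speed of light (assumed)
kappa = 8 * math.pi * K / c ** 2    # equation (69)
M_sun, R_sun = 1.99e33, 6.96e10     # grams, centimetres (assumed solar values)
shift = kappa * M_sun / (8 * math.pi * R_sun)
print(shift)   # about 2e-6: two parts in a million
```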
Let us further investigate the path of light-rays in a statical gravitational field. According to the special relativity theory, the velocity of light is given by the equation
${\displaystyle -dx_{1}^{2}-dx_{2}^{2}-dx_{3}^{2}+dx_{4}^{2}=0}$
thus also according to the generalised relativity theory it is given by the equation
(73) ${\displaystyle ds^{2}=g_{\mu \nu }dx_{\mu }dx_{\nu }=0}$
If the direction, i.e., the ratio ${\displaystyle dx_{1}:dx_{2}:dx_{3}}$ is given, the equation (73) gives the magnitudes
${\displaystyle {\frac {dx_{1}}{dx_{4}}},\ {\frac {dx_{2}}{dx_{4}}},\ {\frac {dx_{3}}{dx_{4}}}}$
and with it the velocity,
Prachi Sharma
Sep 11, 2014
#### Define communalism. What role does it play in Indian politics?
Communalism implies a strong sense of belonging to a particular religious community to the exclusion of others. The concept of communalism holds that religious distinction is the most fundamental and overriding...
Aryan Rawat
Sep 17, 2015
#### What is the difference between CASTE IN POLITICS and POLITICS IN CASTE ?
caste in politics:
caste in politics explains how caste is an important factor in elections and also it refers to various forms caste can take in politics, and how this issue which is indeed a social fac...
Navya agarwal
May 16, 2014
#### Every social difference does not lead to social division –Explain with examples
Every social difference does not lead to social division; people belonging to different social groups also share certain similarities cutting across the boundaries of their groups.
Navya agarwal
May 30, 2014
#### What is Mandal Commission?
In 1979, a commission under the chairmanship of B.P. Mandal was set up to identify other backward classes (OBC) and make recommendations to the government of India for the welfare and development of the peo...
Harsha Bohra
Sep 8, 2014
#### What is the difference between social division and social difference?
Social division-
Social division is the result of the aggregation of social differences with other forms of differences. E.g. caste based division becoming a basis of economic stratificat...
Apr 25, 2015
#### Explain the ideas suggested by Johann Gottfried in promoting true spirit of a nation
German philosopher and romanticist Johann Gottfried Herder believed that true German culture can be discovered only among common people through their practices of folk traditions...
Arick bir Singh
Sep 2, 2015
#### Explain the three features of the model of the secular state in India?
(i) The Constitution of India does not give special recognition to any religion. (ii) All individuals and communities have been given freedom to practice, profess and propagate any religion. (iii) The Constit...
# Math Help - Vectors forming a basis, spanning

1. ## Vectors forming a basis, spanning
Do the following vectors form a basis of $R^3$, span $R^3$, or neither?
a1 = (1, 2, 1), a2 = (-1, 0, -1), a3 = (0, 0, 1)
Do I just check to see if they're linearly independent?
2. Do I just check to see if they're linearly independent?
Correct. Since the dimension of the space is 3, and you have 3 vectors, it follows that being a basis is equivalent to spanning the space.
3. Ok, so can I say this is true:
If the number of vectors equals the n in R^n, then they are spanning? So, in this case, if there are two vectors, they do not span? What if I have more than 3 vectors?
I can also say that if they span AND they are linearly independent, then the vectors form a basis?
4. If the number of vectors equals the n in R^n, then they are spanning?
Only if they are linearly independent. In that case, they are also a basis.
What if I have more than 3 vectors?
In three-dimensional space, more than 3 vectors could span the space, but they definitely won't be a basis, because they won't be linearly independent.
I can also say that if they span AND they are linearly independent, then the vectors form a basis?
Yes.
5. Thank you for your quick help.
For clarification:
If I have two vectors, for a 3-dimensional space, these vectors can not span or be a basis. This is true?
If I have four vectors, for a 3-d space, these vectors CAN span, but can not be a basis, since they will not be linearly independent. This is true?
How would you show that the 4 vectors span?
6. If I have two vectors, for a 3-dimensional space, these vectors can not span or be a basis. This is true?
True.
If I have four vectors, for a 3-d space, these vectors CAN span, but can not be a basis, since they will not be linearly independent. This is true?
True.
How would you show that the 4 vectors span?
Write an arbitrary vector $x$ in the space as a linear combination of the 4 vectors. That will generate a linear system of equations. If there are any solutions, then the vectors span. If there are no solutions, the vectors do not span. Make sense?
7. With the linear system of equations, what is the right hand equal to for the equations?
8. With the linear system of equations, what is the right hand equal to for the equations?
The components of your arbitrary vector $x$.
9. Hmm, I don't quite get it. I have four vectors: (-1, 2, 3), (0, 1, 0), (1, 2, 3), (-3, 2, 4).
Would I be solving for:
-1*X_1 + X_3 - 3*X_4 = Y_1
2*X_1 + X_2 + 2*X_3 + 2*X_4 = Y_2
3*X_1 + 3*X_3 + 4*X_4 = Y_3
I'm sure I didn't set that up right. There are 7 unknowns and 3 equations.
10. Your system is correct, but you're interpreting it incorrectly. In the context of solving that system, you don't treat your $Y_{j}$ as unknowns for which to solve. They are just arbitrary numbers. Treat them like you would treat the RHS of any system. You need to solve for the $X_{k}$'s. Make sense?
11. I can't solve this to get each X in terms of only Y's.
Does this mean it does not span?
12. You can't solve it exactly to get a unique solution. However, you're not after a unique solution (which, incidentally, would correspond with being a basis). You're after any solutions at all. Since the system is under-determined, if you have any solutions, you're going to have infinitely many solutions. You're checking to see whether you have zero solutions, or infinitely many solutions. Zero solutions means you don't have a spanning set. Infinitely many solutions means you have a spanning set, but it's not a basis. Exactly one solution means you have yourself a basis. Make sense?
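The zero/one/infinitely-many-solutions test described above is equivalent to a rank computation, which is easy to sketch for the four vectors from the thread (again with numpy; not part of the original exchange):

```python
import numpy as np

# The four vectors from the question, as columns of a 3x4 matrix.
vecs = [(-1, 2, 3), (0, 1, 0), (1, 2, 3), (-3, 2, 4)]
A = np.array(vecs).T

# Rank 3 means the columns span R^3; with 4 columns they cannot be
# linearly independent, so they span without being a basis.
print(np.linalg.matrix_rank(A))   # 3
```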
13. Very much so. Thank you for your help.
14. Great. You're welcome. Have a good one!
|
|
# Basics on Dirichlet Series 08/2008, Oliver Knill
The finite sum $\sum_{k=1}^{n} a_k$ is a discrete version of a definite integral, the difference $a_{n+1} - a_n$ a discrete version of the derivative. A discrete version of the partial integration formula $\int u \, dv = uv - \int v \, du$, with $A_n = \sum_{k=1}^{n} a_k$, is:
Lemma 1 (Abel's summation formula) $\sum_{n=1}^{N} a_n b_n = A_N b_N - \sum_{n=1}^{N-1} A_n (b_{n+1} - b_n)$.
Proof. The statement is true for $N = 1$. Induction with respect to $N$ allows to compare the left hand side with the right hand side: for the step from $N$ to $N+1$ the equation reads $a_{N+1} b_{N+1} = A_{N+1} b_{N+1} - A_N b_N - A_N (b_{N+1} - b_N)$, which holds because $A_{N+1} - A_N = a_{N+1}$.
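Abel's summation formula $\sum_{n=1}^{N} a_n b_n = A_N b_N - \sum_{n=1}^{N-1} A_n (b_{n+1} - b_n)$ is easy to sanity-check numerically for arbitrary sequences; a sketch (not part of the original notes):

```python
import random

random.seed(1)
N = 50
a = [random.uniform(-1, 1) for _ in range(N)]
b = [random.uniform(-1, 1) for _ in range(N)]

# Partial sums: A[n] = a[0] + ... + a[n] (0-indexed).
A = []
s = 0.0
for x in a:
    s += x
    A.append(s)

lhs = sum(an * bn for an, bn in zip(a, b))
rhs = A[-1] * b[-1] - sum(A[n] * (b[n + 1] - b[n]) for n in range(N - 1))
print(abs(lhs - rhs))   # of the order of machine precision
```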
Lemma 2 Let $b_n$ be a monotonic sequence. Then $|\sum_{n=1}^{N} a_n b_n| \le K (|b_1| + 2 |b_N|)$, if $|A_n| \le K$ for all $n$.
Proof. By Lemma 1, $|\sum_{n=1}^{N} a_n b_n| \le |A_N| |b_N| + \sum_{n=1}^{N-1} |A_n| |b_{n+1} - b_n| \le K |b_N| + K |b_N - b_1| \le K (|b_1| + 2 |b_N|)$, because the differences $b_{n+1} - b_n$ all have the same sign, so their absolute values telescope.
Theorem 3 (Jensen-Cahen) If the Dirichlet series $f(s) = \sum_{n=1}^{\infty} a_n n^{-s}$ is convergent for $s = s_0$, then it is convergent for $s$ in any cone $\{ s : |\arg(s - s_0)| \le \theta \}$ with $\theta < \pi/2$.
Proof.
A sequence of functions $f_n$ defined on a region $G$ converges uniformly to a function $f$ if $\sup_{s \in G} |f_n(s) - f(s)| \to 0$ for $n \to \infty$.
It follows from the Jensen-Cahen theorem that on every compact region in the cone of convergence, the series is uniformly convergent, as well as any of its derivatives $f^{(k)}(s) = \sum_{n=1}^{\infty} a_n (-\log n)^k n^{-s}$.
Theorem 4 In every cone intersected with a half plane $\mathrm{Re}(s) \ge \sigma$ on which the series converges, there are only finitely many roots of the Dirichlet series, unless the function is identically zero.
Proof. We show that there can not be any accumulation point of roots in any such intersection. Assume there were roots $s_j$ with $f(s_j) = 0$ and $\mathrm{Re}(s_j) \to \infty$. The series is uniformly convergent in the region and $f(s) \to a_1$ for $\mathrm{Re}(s) \to \infty$, uniformly along any path in the cone. Because $f(s_j) = 0$, we must have $a_1 = 0$. We can now continue in the same way with the series $2^{s} f(s) = a_2 + \sum_{n \ge 3} a_n (n/2)^{-s}$ to get $a_n = 0$ for any integer $n$.
The abscissa of simple convergence of a Dirichlet series $f(s) = \sum_n a_n n^{-s}$ is $\sigma_0 = \inf \{ \sigma : f$ converges for all $\mathrm{Re}(s) > \sigma \}$. The abscissa of absolute convergence of $f$ is $\sigma_a = \inf \{ \sigma : f$ converges absolutely for all $\mathrm{Re}(s) > \sigma \}$.
Example. The Dirichlet eta function $\eta(s) = \sum_{n=1}^{\infty} (-1)^{n-1} n^{-s}$ has the abscissa of convergence $\sigma_0 = 0$ and the absolute abscissa of convergence $\sigma_a = 1$.
Assume a Dirichlet series is not convergent for $s = 0$. In other words, the sequence of partial sums $A_n = \sum_{k=1}^{n} a_k$ does not converge. The following formula generalizes the formula $1/R = \limsup_n |a_n|^{1/n}$ for the radius of convergence of Taylor series $\sum_n a_n z^n$, where $z = e^{-s}$ and where the radius of convergence is related with the abscissa of convergence by $R = e^{-\sigma_0}$.
Theorem 5 (Cahen's formula) Assume the series $\sum_k a_k$ does not converge, then the abscissa of convergence of the Dirichlet series is $\sigma_0 = \limsup_{n \to \infty} \frac{\log |A_n|}{\log n}$.
Proof. Because the sequence $A_n$ does not converge and especially does not converge to 0, there is a constant $C > 0$ and infinitely many $n$ for which $|A_n| \ge C$. Therefore, $L := \limsup_{n \to \infty} \log |A_n| / \log n \ge 0$.
(i) Given $\sigma > L$, show that the series converges. Given $\epsilon > 0$ with $L + \epsilon < \sigma$, we have $|A_n| \le n^{L + \epsilon}$ for large enough $n$. Now use Abel's formula to show that the sum $\sum_n a_n n^{-\sigma}$ converges.
(ii) Assume $\sum_n a_n n^{-\sigma}$ converges. Now write $A_n = \sum_{k=1}^{n} (a_k k^{-\sigma}) k^{\sigma}$ with Abel,
showing that there exists a constant $C$ for which $|A_n| \le C n^{\sigma}$, so that $L \le \sigma$.
Similarly, there is a formula for the abscissa of absolute convergence: $\sigma_a = \limsup_{n \to \infty} \frac{\log B_n}{\log n}$, where $B_n = \sum_{k=1}^{n} |a_k|$.
Cahen's formula links the growth of the random walk $A_n = \sum_{k=1}^{n} a_k$ with the convergence properties of the zeta function $\sum_n a_n n^{-s}$.
Example: a random Dirichlet series with independent coefficients $a_n = \pm 1$ has partial sums $A_n$ growing like $\sqrt{n}$, so it has $\sigma_0 = 1/2$ and $\sigma_a = 1$.
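Cahen's formula $\sigma_0 = \limsup_{n} \log |A_n| / \log n$ can be illustrated numerically: for the alternating coefficients of the eta function the partial sums stay bounded, so the estimate is $0$, while a random $\pm 1$ walk has $A_n$ of size about $\sqrt{n}$. A sketch (the cutoff `n_min` suppresses small-$n$ noise and is my own choice):

```python
import math
import random

def cahen_estimate(a, n_min=100):
    """Estimate limsup_n log|A_n|/log n over the finite range given."""
    s, best = 0.0, float("-inf")
    for n, an in enumerate(a, start=1):
        s += an
        if n >= n_min and s != 0:
            best = max(best, math.log(abs(s)) / math.log(n))
    return best

# Alternating coefficients of eta: A_n is 0 or 1, so the estimate is 0.
eta = [(-1) ** (n - 1) for n in range(1, 100001)]
print(cahen_estimate(eta))    # 0.0

# A random +-1 walk: A_n is of size about sqrt(n).
random.seed(0)
walk = [random.choice((-1, 1)) for _ in range(100000)]
print(cahen_estimate(walk))   # roughly 0.5
```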
Source: [1].
## Bibliography
1. G.H. Hardy and M. Riesz. The general theory of Dirichlet's series. Hafner Publishing Company, 1972.
|
|
20.3: The pH Concept
Difficulty Level: At Grade Created by: CK-12
Lesson Objectives
The student will:
• calculate \begin{align*}[\mathrm{H}^+]\end{align*} and \begin{align*}[\mathrm{OH}^-]\end{align*} for a solution of acid or base.
• define autoionization.
• state the \begin{align*}[\mathrm{H}^+]\end{align*}, \begin{align*}[\mathrm{OH}^-]\end{align*}, and \begin{align*}K_w\end{align*} values for the autoionization of water.
• define pH and describe the pH scale.
• write the formulas for pH and pOH and express their values in a neutral solution at \begin{align*}25^\circ\mathrm{C}\end{align*}.
• explain the relationships among pH, pOH, and \begin{align*}K_w\end{align*}.
• calculate \begin{align*}[\mathrm{H}^+]\end{align*}, \begin{align*}[\mathrm{OH}^-]\end{align*}, pH, and pOH given the value of any one of the other values.
• explain the relationship between the acidity or basicity of a solution and the hydronium ion concentration, \begin{align*}[\mathrm{H}_3\mathrm{O}^+]\end{align*} , and the hydroxide ion concentration, \begin{align*}[\mathrm{OH}^-]\end{align*}, of the solution.
• predict whether an aqueous solution is acidic, basic, or neutral from \begin{align*}[\mathrm{H}_3\mathrm{O}^+]\end{align*}, \begin{align*}[\mathrm{OH}^-]\end{align*}, or the pH.
Vocabulary
• autoionization
• hydronium ion
• ion product constant for water (\begin{align*}K_w\end{align*})
• pH
• pOH
Introduction
We have learned many properties of water, such as the fact that pure water does not conduct electricity. Pure water does not conduct electricity because the concentration of ions present when water ionizes is small. In this lesson, we will look a little closer at this property of water and how it relates to acids and bases.
The Hydronium Ion
Recall that ions in solution are hydrated. That is, water molecules are loosely bound to the ions by the attraction between the charge on the ion and the oppositely charged end of the polar water molecules, as illustrated in the figure below. When we write the formula for these ions in solution, we do not show the attached water molecules. It is simply recognized by chemists that ions in solution are always hydrated.
As with any other ion, a hydrogen ion dissolved in water will be closely associated with one or more water molecules. This fact is sometimes indicated explicitly by writing the hydronium ion, \begin{align*}\mathrm{H}_3\mathrm{O}^+\end{align*}, in place of the hydrogen ion, \begin{align*}\mathrm{H}^+\end{align*}. Many chemists still use \begin{align*}\mathrm{H}^+_{(aq)}\end{align*} to represent this situation, but it is understood that this is just an abbreviation for what is really occurring in solution. You are likely to come across both, and it is important for you to understand that they are actually describing the same entity. When using the hydronium ion in a chemical equation, you may need to add a molecule of water to the other side so that the equation will be balanced. This is illustrated in the equations below. Note that you are not really adding anything to the reaction. The \begin{align*}(aq)\end{align*} symbol indicates that the various reaction components are dissolved in water, so writing one of these water molecules out explicitly in the equation does not change the reaction conditions.
\begin{align*}\mathrm{HCl}_{(aq)} \rightarrow \mathrm{H}^+_{(aq)} + \mathrm{Cl}^-_{(aq)}\end{align*} (not showing hydronium)
\begin{align*}\mathrm{HCl}_{(aq)} + \mathrm{H}_2\mathrm{O}_{(l)} \rightarrow \mathrm{H}_3\mathrm{O}^+_{(aq)} + \mathrm{Cl}^-_{(aq)}\end{align*} (showing hydronium)
Relationship Between [H+] and [OH-]
Even totally pure water will contain a small amount of \begin{align*}\mathrm{H}^+\end{align*} and \begin{align*}\mathrm{OH}^-\end{align*}. This is because water undergoes a process known as autoionization. Autoionization occurs when the same reactant acts as both the acid and the base. Look at the reaction below.
\begin{align*}\mathrm{H}_2\mathrm{O}_{(aq)} + \mathrm{H}_2\mathrm{O}_{(aq)} \rightarrow \mathrm{H}_3\mathrm{O}^+_{(aq)} + \mathrm{OH}^-_{(aq)}\end{align*}
The ionization of water is frequently written as:
\begin{align*}\mathrm{H}_2\mathrm{O}_{(l)} \rightarrow \mathrm{H}^+ + \mathrm{OH}^-\end{align*}.
The equilibrium constant expression for this dissociation would be \begin{align*}K_w = [\mathrm{H}^+][\mathrm{OH}^-]\end{align*}. From experimentation, chemists have determined that in pure water, \begin{align*}[\mathrm{H}^+] = 1 \times 10^{-7} \ \mathrm{mol/L}\end{align*} and \begin{align*}[\mathrm{OH}^-] = 1 \times 10^{-7} \ \mathrm{mol/L}\end{align*}.
Because this is a particularly important equilibrium, the equilibrium constant is given a subscript to differentiate it from other reactions. \begin{align*}K_w\end{align*}, also known as the ion product constant for water, always refers to the autoionization of water. We can then calculate \begin{align*}K_w\end{align*} because we know the values of \begin{align*}[\mathrm{H}^+]\end{align*} and \begin{align*}[\mathrm{OH}^-]\end{align*} for pure water at \begin{align*}25^\circ\mathrm{C}\end{align*}.
\begin{align*}K_w = [\mathrm{H}^+][\mathrm{OH}^-]\end{align*}
\begin{align*}K_w = (1 \times 10^{-7})(1 \times 10^{-7})\end{align*}
\begin{align*}K_w = 1 \times 10^{-14}\end{align*}
A further definition of acids and bases can now be made:
When \begin{align*}[\mathrm{H}_3\mathrm{O}^+] = [\mathrm{OH}^-]\end{align*} (as in pure water), the solution is neutral.
When \begin{align*}[\mathrm{H}_3\mathrm{O}^+] > [\mathrm{OH}^-]\end{align*}, the solution is an acid.
When \begin{align*}[\mathrm{H}_3\mathrm{O}^+] < [\mathrm{OH}^-]\end{align*}, the solution is a base.
Stated another way, an acid has a \begin{align*}[\mathrm{H}_3\mathrm{O}^+]\end{align*} that is greater than \begin{align*}1 \times 10^{-7}\end{align*} and a \begin{align*}[\mathrm{OH}^-]\end{align*} that is less than \begin{align*}1 \times 10^{-7}\end{align*}. A base has a \begin{align*}[\mathrm{OH}^-]\end{align*} that is greater than \begin{align*}1 \times 10^{-7}\end{align*} and a \begin{align*}[\mathrm{H}_3\mathrm{O}^+]\end{align*} that is less than \begin{align*}1 \times 10^{-7}\end{align*}.
The equilibrium between \begin{align*}\mathrm{H}^+\end{align*}, \begin{align*}\mathrm{OH}^-\end{align*}, and \begin{align*}\mathrm{H}_2\mathrm{O}\end{align*} will exist in all water solutions, regardless of anything else that may be present in the solution. Some substances that are placed in water may become involved with either the hydrogen or hydroxide ions and alter the equilibrium state. However, as long as the temperature is kept constant at \begin{align*}25^\circ\mathrm{C}\end{align*}, the equilibrium will shift to maintain the equilibrium constant, \begin{align*}K_w\end{align*}, at exactly \begin{align*}1 \times 10^{-14}\end{align*}.
For example, a sample of pure water at \begin{align*}25^\circ\mathrm{C}\end{align*} has \begin{align*}[\mathrm{H}^+]\end{align*} equal to \begin{align*}1 \times 10^{-7} \ \mathrm{M}\end{align*} and \begin{align*}[\mathrm{OH}^-] = 1 \times 10^{-7} \ \mathrm{M}\end{align*}. The \begin{align*}K_w\end{align*} for this solution, of course, will be \begin{align*}1 \times 10^{-14}\end{align*}. Suppose some \begin{align*}\mathrm{HCl}\end{align*} gas is added to this solution so that the \begin{align*}\mathrm{H}^+\end{align*} concentration increases. This is a stress to the equilibrium system. Since the concentration of a product is increased, the reverse reaction rate will increase and the equilibrium will shift toward the reactants. The concentrations of both ions will be reduced until equilibrium is re-established. If the final \begin{align*}[\mathrm{H}^+] = 1 \times 10^{-4} \ \mathrm{M}\end{align*}, we can calculate the \begin{align*}[\mathrm{OH}^-]\end{align*} because we know that the product of \begin{align*}[\mathrm{H}^+]\end{align*} and \begin{align*}[\mathrm{OH}^-]\end{align*} at equilibrium is always \begin{align*}1 \times 10^{-14}\end{align*}.
\begin{align*}K_w = [\mathrm{H}^+][\mathrm{OH}^-] = 1 \times 10^{-14}\end{align*}
\begin{align*}[\mathrm{OH}^-] = \frac {1 \times 10^{-14}} {[\mathrm{H}^+]} = \frac {1 \times 10^{-14}} {1 \times 10^{-4}} = 1 \times 10^{-10} \ \mathrm{M}\end{align*}
Suppose, on the other hand, something is added to the solution that reduces the hydrogen ion concentration. As soon as the hydrogen ion concentration begins to decrease, the reverse rate decreases and the forward rate will shift the equilibrium toward the products. The concentrations of both ions will be increased until equilibrium is re-established. If the final hydrogen ion concentration is \begin{align*}1 \times 10^{-12} \ \mathrm{M}\end{align*}, we can calculate the final hydroxide ion concentration.
\begin{align*}K_w = [\mathrm{H}^+][\mathrm{OH}^-] = 1 \times 10^{-14}\end{align*}
\begin{align*}[\mathrm{OH}^-] = \frac {1 \times 10^{-14}} {[\mathrm{H}^+]} = \frac {1 \times 10^{-14}} {1 \times 10^{-12}} = 1 \times 10^{-2} \ \mathrm{M}\end{align*}
Using the \begin{align*}K_w\end{align*} expression and our knowledge of the \begin{align*}K_w\end{align*} value, as long as we know either the \begin{align*}[\mathrm{H}^+]\end{align*} or the \begin{align*}[\mathrm{OH}^-]\end{align*} in a water solution, we can always calculate the value for the other one.
Example:
What would be the \begin{align*}[\mathrm{H}^+]\end{align*} for a grapefruit found to have a \begin{align*}[\mathrm{OH}^-]\end{align*} of \begin{align*}1.26 \times 10^{-11} \ \mathrm{mol/L}\end{align*}? Is the solution acidic, basic, or neutral?
Solution:
\begin{align*}K_w = [\mathrm{H}^+][\mathrm{OH}^-] = 1.00 \times 10^{-14}\end{align*}
\begin{align*}[\mathrm{H}^+] = \frac {1 \times 10^{-14}} {[\mathrm{OH}^-]} = \frac {1 \times 10^{-14}} {1.26 \times 10^{-11}} = 7.94 \times 10^{-4} \ \mathrm{M}\end{align*}
Since the \begin{align*}[\mathrm{H}^+]\end{align*} in this solution is greater than \begin{align*}1 \times 10^{-7} \ \mathrm{M}\end{align*}, the solution is acidic.
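The $K_w$ rearrangement used in this example is a one-liner to check numerically; a sketch (the function name is my own):

```python
KW = 1.0e-14  # ion product of water at 25 degrees C

def other_ion_concentration(conc):
    """Given [H+] (or [OH-]) in mol/L, return the other via Kw = [H+][OH-]."""
    return KW / conc

# Grapefruit example from the text: [OH-] = 1.26e-11 M
h = other_ion_concentration(1.26e-11)
print(h)   # about 7.94e-4 M; greater than 1e-7, so the solution is acidic
```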
pH and pOH
There are a few very concentrated acid and base solutions used in industrial chemistry and laboratory situations. For the most part, however, acid and base solutions that occur in nature, used in cleaning, and used in biochemistry applications are relatively dilute. Most of the acids and bases dealt with in laboratory situations have hydrogen ion concentrations between \begin{align*}1.0 \ \mathrm{M}\end{align*} and \begin{align*}1.0 \times 10^{-14} \ \mathrm{M}\end{align*}. Expressing hydrogen ion concentrations in exponential numbers can become tedious, so a Danish chemist named Søren Sørensen developed a shorter method for expressing acid strength or hydrogen ion concentration with a non-exponential number. This value is referred to as pH and is defined by the following equation:
\begin{align*}\mathrm{pH} = -\log [\mathrm{H}^+]\end{align*},
where \begin{align*}\mathrm{p} = -\log\end{align*} and \begin{align*}\mathrm{H}\end{align*} refers to the hydrogen ion concentration. The p from pH comes from the German word Potenz, meaning power or exponent. Rearranging this equation to solve for \begin{align*}[\mathrm{H}^+]\end{align*}, we get \begin{align*}[\mathrm{H}^+] = 10^{-\mathrm{pH}}\end{align*}. If the hydrogen ion concentration is between \begin{align*}1.0 \ \mathrm{M}\end{align*} and \begin{align*}1.0 \times 10^{-14} \ \mathrm{M}\end{align*}, the value of the pH will be between \begin{align*}0\end{align*} and \begin{align*}14\end{align*}.
Example:
Calculate the pH of a solution where \begin{align*}[\mathrm{H}^+] = 0.01 \ \mathrm{mol/L}\end{align*}.
Solution:
\begin{align*}\mathrm{pH} = -\log (0.01)\end{align*}
\begin{align*}\mathrm{pH} = -\log (1 \times 10^{-2})\end{align*}
\begin{align*}\mathrm{pH} = 2\end{align*}
Example:
Calculate the \begin{align*}[\mathrm{H}^+]\end{align*} if the pH is \begin{align*}4\end{align*}.
Solution:
\begin{align*}[\mathrm{H}^+] = 10^{-\mathrm{pH}}\end{align*}
\begin{align*}[\mathrm{H}^+] = 10^{-4}\end{align*}
\begin{align*}[\mathrm{H}^+] = 1 \times 10^{-4} \ \mathrm{mol/L}\end{align*}
Example:
Calculate the pH of saliva, where \begin{align*}[\mathrm{H}^+] = 1.58 \times 10^{-6} \ \mathrm{mol/L}\end{align*}.
Solution:
\begin{align*}\mathrm{pH} = -\log [\mathrm{H}^+] = -\log (1.58 \times 10^{-6})\end{align*}
\begin{align*}\mathrm{pH} = 5.8\end{align*}
Example:
Fill in the rest of Table below.
Hydrogen ion concentration and corresponding pH.
\begin{align*}[\mathrm{H}^+]\end{align*} in mol/L \begin{align*}-\log [\mathrm{H}^+]\end{align*} \begin{align*}\mathrm{pH}\end{align*}
\begin{align*}0.1\end{align*} \begin{align*}1.00\end{align*} \begin{align*}1.00\end{align*}
\begin{align*}0.2\end{align*} \begin{align*}0.70\end{align*} \begin{align*}0.70\end{align*}
\begin{align*}1 \times 10^{-5}\end{align*} ? ?
? ? \begin{align*}6.00\end{align*}
\begin{align*}0.065\end{align*} ? ?
? ? \begin{align*}9.00\end{align*}
Solution:
The completed table is shown below (Table below).
Hydrogen ion concentration and corresponding pH.
\begin{align*}[\mathrm{H}^+]\end{align*} in mol/L \begin{align*}-\log [\mathrm{H}^+]\end{align*} \begin{align*}\mathrm{pH}\end{align*}
\begin{align*}0.1\end{align*} \begin{align*}1.00\end{align*} \begin{align*}1.00\end{align*}
\begin{align*}0.2\end{align*} \begin{align*}0.70\end{align*} \begin{align*}0.70\end{align*}
\begin{align*}1.00 \times 10^{-5}\end{align*} \begin{align*}5.00\end{align*} \begin{align*}5.00\end{align*}
\begin{align*}1.00 \times 10^{-6}\end{align*} \begin{align*}6.00\end{align*} \begin{align*}6.00\end{align*}
\begin{align*}0.065\end{align*} \begin{align*}1.19\end{align*} \begin{align*}1.19\end{align*}
\begin{align*}1.00 \times 10^{-9}\end{align*} \begin{align*}9.00\end{align*} \begin{align*}9.00\end{align*}
An acid with pH = 1, then, is stronger than an acid with pH = 2 by a factor of 10. Simply put, lower pH values correspond to higher \begin{align*}\mathrm{H}^+\end{align*} concentrations and more acidic solutions, while higher pH values correspond to higher \begin{align*}\mathrm{OH}^-\end{align*} concentrations and more basic solutions. This is illustrated in the figure below. It should be pointed out that there are acids and bases that fall outside the pH range depicted. However, we will confine ourselves for now to those falling within the 0-14 range, which covers \begin{align*}[\mathrm{H}^+]\end{align*} values from \begin{align*}1.0 \ \mathrm{M}\end{align*} all the way down to \begin{align*}1 \times 10^{-14} \ \mathrm{M}\end{align*}.
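The pH definitions and the table above can be reproduced with a few lines of Python; a sketch (function names are my own):

```python
import math

def ph_from_h(h):
    """pH = -log10([H+])."""
    return -math.log10(h)

def h_from_ph(ph):
    """[H+] = 10^(-pH)."""
    return 10 ** (-ph)

# Reproduce the rows of the table, filling in the missing entries.
for h in [0.1, 0.2, 1e-5, h_from_ph(6.00), 0.065, h_from_ph(9.00)]:
    print(f"[H+] = {h:.3e} M  ->  pH = {ph_from_h(h):.2f}")
```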
pH versus Acidity
pH level Solution
\begin{align*}\mathrm{pH} < 7\end{align*} Acid
\begin{align*}\mathrm{pH} = 7\end{align*} Neutral
\begin{align*}\mathrm{pH} > 7\end{align*} Basic
Have you ever cut an onion and had your eyes water up? This is because of a compound with the formula \begin{align*}\mathrm{C}_3\mathrm{H}_6\mathrm{OS}\end{align*} that is found in onions. When you cut the onion, a variety of reactions occur that release a gas. This gas can diffuse into the air and eventually mix with the water found in your eyes to produce a dilute solution of sulfuric acid. This is what irritates your eyes and causes them to water. There are many common examples of acids and bases in our everyday lives. Look at the pH scale below to see how these common examples relate in terms of their pH.
Even though both acidic and basic solutions can be expressed by pH, an equivalent set of expressions exists for the concentration of the hydroxide ion in water. This value, referred to as pOH, is defined as:
\begin{align*}\mathrm{pOH} = -\log [\mathrm{OH}^-]\end{align*}
If the pOH is greater than \begin{align*}7\end{align*}, the solution is acidic. If the pOH is equal to \begin{align*}7\end{align*}, the solution is neutral. If the pOH is less than \begin{align*}7\end{align*}, the solution is basic.
If we take the negative log of the complete \begin{align*}K_w\end{align*} expression, we obtain:
\begin{align*}K_w = [\mathrm{H}^+][\mathrm{OH}^-]\end{align*}
\begin{align*}-\log K_w = (-\log [\mathrm{H}^+]) + (-\log [\mathrm{OH}^-])\end{align*}
\begin{align*}-\log (1 \times 10^{-14}) = (-\log [\mathrm{H}^+]) + (-\log [\mathrm{OH}^-])\end{align*}
\begin{align*}14 = \mathrm{pH} + \mathrm{pOH}\end{align*}
Therefore, the sum of the pH and the pOH is always equal to \begin{align*}14\end{align*} (at \begin{align*}25^\circ\mathrm{C}\end{align*}). Remember that the pH scale is written with values from \begin{align*}0\end{align*} to \begin{align*}14\end{align*} because many useful acid and base solutions fall within this range. Now let’s go through a few examples to see how this calculation works for problem-solving in solutions with an added acid or base.
Example:
What is the \begin{align*}[\mathrm{H}^+]\end{align*} for a solution of \begin{align*}\mathrm{NH}_3\end{align*} whose \begin{align*}[\mathrm{OH}^-] = 8.23 \times 10^{-6} \ \mathrm{mol/L}\end{align*}?
Solution:
\begin{align*}[\mathrm{H}_3\mathrm{O}^+][\mathrm{OH}^-] = 1.00 \times 10^{-14}\end{align*}
\begin{align*}[\mathrm{H}_3\mathrm{O}^+] = \frac {1.00 \times 10^{-14}} {[\mathrm{OH}^-]} = \frac {1.00 \times 10^{-14}} {8.23 \times 10^{-6}} = 1.26 \times 10^{-9} \ \mathrm{M}\end{align*}
Example:
Black coffee has a \begin{align*}[\mathrm{H}_3\mathrm{O}^+] = 1.26 \times 10^{-5} \ \mathrm{mol/L}\end{align*}. What is the pOH?
Solution:
\begin{align*}\mathrm{pH} = -\log [\mathrm{H}^+] = -\log 1.26 \times 10^{-5} = 4.90\end{align*}
\begin{align*}\mathrm{pH} + \mathrm{pOH} = 14\end{align*}
\begin{align*}\mathrm{pOH} = 14 - \mathrm{pH} = 14 - 4.90 = 9.10\end{align*}
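The coffee example can be checked with a short script combining $\mathrm{pH} = -\log [\mathrm{H}^+]$ and $\mathrm{pH} + \mathrm{pOH} = 14$; a sketch:

```python
import math

def poh_from_h(h):
    """pOH from [H+], using pH + pOH = 14 (valid at 25 degrees C)."""
    ph = -math.log10(h)
    return 14.0 - ph

# Black coffee: [H3O+] = 1.26e-5 mol/L
print(poh_from_h(1.26e-5))   # about 9.10
```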
For a classroom demonstration of pH calculations (5d, 5f; 1e I&E Stand.), see http://www.youtube.com/watch?v=lca_puB1R8k (9:45).
Lesson Summary
• Autoionization is the process where the same molecule acts as both an acid and a base.
• Water ionizes to a very slight degree according to the equation \begin{align*}\mathrm{H}_2\mathrm{O}_{(l)} \leftrightharpoons \mathrm{H}^+ + \mathrm{OH}^-\end{align*}.
• In pure water at \begin{align*}25^\circ\mathrm{C}\end{align*}, \begin{align*}[\mathrm{H}^+] = [\mathrm{OH}^-] = 1.00 \times 10^{-7} \ \mathrm{M}\end{align*}.
• The equilibrium constant for the dissociation of water, \begin{align*}K_w\end{align*}, is equal to \begin{align*}1.00 \times 10^{-14}\end{align*} at \begin{align*}25^\circ\mathrm{C}\end{align*}.
• \begin{align*}\mathrm{pH} = -\log [\mathrm{H}^+]\end{align*}
• \begin{align*}\mathrm{pOH} = -\log [\mathrm{OH}^-]\end{align*}
• \begin{align*}\mathrm{p}K_w = -\log K_w\end{align*}
• \begin{align*}\mathrm{pH} + \mathrm{pOH} = \mathrm{p}K_w = 14.0\end{align*}
Review Questions
1. What is the \begin{align*}[\mathrm{H}^+]\end{align*} ion concentration in a solution of \begin{align*}0.350 \ \mathrm{mol/L} \ \mathrm{H}_2\mathrm{SO}_4\end{align*}?
1. \begin{align*}0.175 \ \mathrm{mol/L}\end{align*}
2. \begin{align*}0.350 \ \mathrm{mol/L}\end{align*}
3. \begin{align*}0.700 \ \mathrm{mol/L}\end{align*}
4. \begin{align*}1.42 \times 10^{-14} \ \mathrm{mol/L}\end{align*}
2. A solution has a \begin{align*}\mathrm{pH}\end{align*} of \begin{align*}6.54\end{align*}. What is the concentration of hydronium ions in the solution?
1. \begin{align*}2.88 \times 10^{-7} \ \mathrm{mol/L}\end{align*}
2. \begin{align*}3.46 \times 10^{-8} \ \mathrm{mol/L}\end{align*}
3. \begin{align*}6.54 \ \mathrm{mol/L}\end{align*}
4. \begin{align*}7.46 \ \mathrm{mol/L}\end{align*}
3. A solution has a \begin{align*}\mathrm{pH}\end{align*} of \begin{align*}3.34\end{align*}. What is the concentration of hydroxide ions in the solution?
1. \begin{align*}4.57 \times 10^{-4} \ \mathrm{mol/L}\end{align*}
2. \begin{align*}2.19 \times 10^{-11} \ \mathrm{mol/L}\end{align*}
3. \begin{align*}3.34 \ \mathrm{mol/L}\end{align*}
4. \begin{align*}10.66 \ \mathrm{mol/L}\end{align*}
4. A solution contains \begin{align*}4.33 \times 10^{-8} \ \mathrm{M}\end{align*} hydroxide ions. What is the \begin{align*}\mathrm{pH}\end{align*} of the solution?
1. \begin{align*}4.33\end{align*}
2. \begin{align*}6.64\end{align*}
3. \begin{align*}7.36\end{align*}
4. \begin{align*}9.67\end{align*}
5. Fill in Table below and rank the solutions in terms of increasing acidity.
Table for Problem 5
Solutions \begin{align*}[\mathrm{H}^+] \ \mathrm{(mol/L)}\end{align*} \begin{align*}-\mathrm{log} \ [\mathrm{H}^+]\end{align*} \begin{align*}\mathrm{pH}\end{align*}
A \begin{align*}0.25\end{align*} \begin{align*}0.60\end{align*} \begin{align*}0.60\end{align*}
B ? \begin{align*}2.90\end{align*} ?
C \begin{align*}1.25 \times 10^{-8}\end{align*} ? ?
D \begin{align*}0.45 \times 10^{-3}\end{align*} ? ?
E ? \begin{align*}1.26\end{align*} ?
6. It has long been advocated that red wine is good for the heart. Wine is considered to be an acidic solution. Determine the concentration of hydronium ions in wine with \begin{align*}\mathrm{pH} \ 3.81\end{align*}.
7. What does the value of \begin{align*}K_w\end{align*} tell you about the autoionization of water?
8. If the \begin{align*}\mathrm{pH}\end{align*} of an unknown solution is \begin{align*}4.25\end{align*}, what is the \begin{align*}\mathrm{pOH}\end{align*}?
1. \begin{align*}10^{-4.25}\end{align*}
2. \begin{align*}10^{-9.75}\end{align*}
3. \begin{align*}9.75\end{align*}
4. \begin{align*}14.0 - 10^{-9.75}\end{align*}
9. A solution contains a hydronium ion concentration of \begin{align*}3.36 \times 10^{-4} \ \mathrm{mol/L}\end{align*}. What is the \begin{align*}\mathrm{pH}\end{align*} of the solution?
1. \begin{align*}3.36\end{align*}
2. \begin{align*} 3.47\end{align*}
3. \begin{align*}10.53\end{align*}
4. none of the above
10. A solution contains a hydroxide ion concentration of \begin{align*}6.43 \times 10^{-9} \ \mathrm{mol/L}\end{align*}. What is the \begin{align*}\mathrm{pH}\end{align*} of the solution?
1. \begin{align*}5.80\end{align*}
2. \begin{align*}6.48\end{align*}
3. \begin{align*}7.52\end{align*}
4. \begin{align*}8.19\end{align*}
11. An unknown solution was found in the lab. The \begin{align*}\mathrm{pH}\end{align*} of the solution was tested and found to be \begin{align*}3.98\end{align*}. What is the concentration of hydroxide ions in this solution?
1. \begin{align*}3.98 \ \mathrm{mol/L}\end{align*}
2. \begin{align*}0.67 \ \mathrm{mol/L}\end{align*}
3. \begin{align*}1.05 \times 10^{-4} \ \mathrm{mol/L}\end{align*}
4. \begin{align*}9.55 \times 10^{-11} \ \mathrm{mol/L}\end{align*}
|
|
# Talk:Bounded set (topological vector space)
WikiProject Mathematics (Rated Start-class, Mid-importance)
This article is within the scope of WikiProject Mathematics, a collaborative effort to improve the coverage of Mathematics on Wikipedia. If you would like to participate, please visit the project page, where you can join the discussion and see a list of open tasks.
Mathematics rating:
Start Class
Mid Importance
Field: Analysis
I'm moving some discussion from Talk:Bounded set to here since it clearly concerns only LCTVS. — MFH:Talk 21:31, 12 October 2006 (UTC)
## seminorms ?
I think the statement around "seminorms" is incorrect. If not, a proof or a link to a proof (at least a precise reference: Thm.X of book Y) is needed.
At least, p(S) should be defined. (a priori, for a map f:E→G, f(S)={ f(x); x∈S }; most probably, here the sup is meant.)
In the context of inductive limits, there is the notion of "regular limit" (bounded set = set contained in some space and bounded there), it seems to me that this would not make sense if the property holds.
The same (need for reference) applies to the statement "bounded lin. op. is continuous".
MFH: Talk 16:39, 19 Apr 2005 (UTC)
Locally convex (which you inserted) and the sup was missing, but now the statement about boundedness in terms of seminorms should be correct. I will try to get a proof or reference until tomorrow. MathMartin 17:39, 19 Apr 2005 (UTC)
I found a reference in the english translation of Bourbakis "Topological Vector Spaces". The statement is in the middle of chapter III page 2 (TVS III.2) MathMartin 20:51, 26 May 2005 (UTC)
## Set is bounded in locally convex space iff bounded under each semi norm
I found no reference but the proof is quite simple (the statement is from a script I am currently reading). My main aim is to make clear that boundedness for topological vector spaces does not necessarily coincide with boundedness for metric spaces (it does when the metric is given by a norm). I will try to make this more clear in the article, but perhaps it would be best to separate the articles. Anyway here is the proof:
Statement
Given a locally convex space (X,P) with P a family of semi norms, then a subset A of X is bounded iff ${\displaystyle \sup _{x\in A}p(x)<\infty }$ for all p in P.
Proof
${\displaystyle \Rightarrow :}$Let A be bounded. Using any semi norm p we can construct a neighbourhood ${\displaystyle N_{1}^{p}}$ for the zero vector with radius 1. As A is bounded there exists an ${\displaystyle \alpha >0}$ with ${\displaystyle A\subset \alpha N_{1}^{p}}$. This implies ${\displaystyle \sup _{x\in A}p(x)\leq \alpha <\infty }$.
${\displaystyle \Leftarrow :}$Now let ${\displaystyle \sup _{x\in A}p(x)<\infty }$ for all p in P. We have to show that for any zero vector neighbourhood ${\displaystyle N}$
${\displaystyle \exists \alpha :A\subset \alpha N}$.
It is sufficient to prove this for all neighbourhoods ${\displaystyle B}$ in a neighbourhood basis ${\displaystyle {\mathcal {B}}}$ of the zero vector. Using the family of semi norms to construct a locally convex neighbourhood basis ${\displaystyle {\mathcal {B}}}$, every neighbourhood ${\displaystyle B}$ in ${\displaystyle {\mathcal {B}}}$ can be written as
${\displaystyle B=\bigcap _{i=1}^{n}N_{\epsilon _{i}}^{p_{i}}\qquad n\in \mathbb {N} }$ with some ${\displaystyle \epsilon _{i}>0}$ and some ${\displaystyle p_{i}\in P}$.
For a given representation of ${\displaystyle B}$ we define
${\displaystyle \alpha _{i}:={\frac {1}{\epsilon _{i}}}\sup _{x\in A}p_{i}(x)\quad (i=1,\ldots ,n)}$
Then we can construct
${\displaystyle \alpha :=1+\max\{\alpha _{1},\ldots ,\alpha _{n}\}}$.
and thus for our chosen neighbourhood
${\displaystyle A\subset \alpha B}$.
MathMartin 10:21, 21 Apr 2005 (UTC)
#### still not completely convinced
Of course, if the topology of a locally convex space can be defined by a filtering family of seminorms, then your statement is correct (in some sense you cannot "make" the family of seminorms by taking the "gauge" of such neighborhoods). A concrete example is given (AFAIK) by Roumieu type ultradifferentiable functions, defined as inductive limit of semi-normed spaces. (Maybe also by hyperfunctions or something similar.) MFH: Talk 17:36, 21 Apr 2005 (UTC)
P.S.: The corresponding construction in locally convex space should maybe also be checked. MFH: Talk 17:36, 21 Apr 2005 (UTC)
I am not sure I understood your explanation. What is a filtering family of seminorms? Anyway, I made a mistake in the proof which I fixed. If you are still not convinced, could you give some more details in your argument using more basic counterexamples (I do not know what Roumieu type ultradifferentiable functions are). MathMartin 20:26, 22 Apr 2005 (UTC)
Filtering family means that for p, p' you can find p" such that p" is greater than or equal to both p and p' (aka a directed set).
Roumieu type spaces are inductive limit spaces. More generally, I think that any not (semi-)metrizable space, or any space which has not a bounded open neighbourhood of zero, will provide a counter example (but maybe this is not completely true). MFH: Talk 14:08, 27 May 2005 (UTC)
# KINETICS AND EQUILIBRIUM: Calculating equilibrium composition from an equilibrium constant

###### Question:

Suppose a 250 mL flask is filled with 0.30 mol of NO₂, 1.7 mol of CO, and 1.4 mol of NO. The following reaction becomes possible:

NO₂(g) + CO(g) ⇌ NO(g) + CO₂(g)

The equilibrium constant K for this reaction is 0.866 at the temperature of the flask. Calculate the equilibrium molarity of NO₂. Round your answer to two decimal places.
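A worked sketch of the standard ICE-table solution in Python. The species in the scraped question are partly garbled, so this assumes the reaction NO₂(g) + CO(g) ⇌ NO(g) + CO₂(g) with 0.30 mol NO₂, 1.7 mol CO, and 1.4 mol NO initially:

```python
import math

K = 0.866      # equilibrium constant at the flask temperature
V = 0.250      # flask volume in litres

# Initial molarities (ICE table "I" row), assuming the garbled species
# are NO2, CO, and NO with the stated mole amounts.
c0 = {"NO2": 0.30 / V, "CO": 1.7 / V, "NO": 1.4 / V, "CO2": 0.0}

# Let x = molarity of NO2 (and CO) consumed by the time equilibrium is reached:
#   K = (c0[NO] + x)(c0[CO2] + x) / ((c0[NO2] - x)(c0[CO] - x))
# Rearranged into a quadratic a*x^2 + b*x + c = 0:
a = 1.0 - K
b = c0["NO"] + c0["CO2"] + K * (c0["NO2"] + c0["CO"])
c = c0["NO"] * c0["CO2"] - K * c0["NO2"] * c0["CO"]

# Here a > 0 and c < 0, so the '+' root is the physically meaningful one.
x = (-b + math.sqrt(b * b - 4 * a * c)) / (2 * a)

eq = {"NO2": c0["NO2"] - x, "CO": c0["CO"] - x,
      "NO": c0["NO"] + x, "CO2": c0["CO2"] + x}
print({k: round(v, 2) for k, v in eq.items()})
# → {'NO2': 0.64, 'CO': 6.24, 'NO': 6.16, 'CO2': 0.56}
```

With these numbers x ≈ 0.56 M, so the equilibrium molarity of NO₂ rounds to 0.64 M.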
# An absolute convergence criterion in $\Bbb C$
Here's a problem I'm having trouble with:
Show that if $\sum u_k$ converges for $u_k\in\Bbb C$, and $|\arg(u_k)|\leq c<\pi/2$ for all $k$, then $\sum |u_k|$ converges too.
All I have after looking at it for an hour is that I kind of believe it could be true (I didn't at first), but I don't really know where to start. Could you give me a hint? I know of course that $$u_k=|u_k|(\cos(\arg(u_k))+i\sin(\arg(u_k))),$$ but I don't have any ideas. I've tried looking at it geometrically too, but I just don't see anything.
Let $S = \{z\in \mathbb{C} : |\arg(z)|\leq c\}$. This is a sector of the right half plane. We can make two immediate observations about points in this sector:
• If $z\in S$, then the real part of $z$ is nonnegative.
• There is a constant $A>0$ such that if $z = x + iy\in S$, then $|y|\leq Ax$. The constant $A$ is the absolute value of the slope of the bounding lines of the sector $S$.
Using the second bullet point, we see that if $z = x + iy\in S$, then $$|z|^2 = x^2 + y^2 \leq (1+A^2)x^2,$$ and hence $$|z|\leq \sqrt{1+A^2}x = \sqrt{1+A^2} Re(z).$$
We can then apply this to the $u_k$, to derive that $$\sum_{k=1}^N|u_k|\leq \sqrt{1+A^2}\sum_{k=1}^N Re(u_k)\leq \sqrt{1+A^2} Re\sum_{k=1}^Nu_k\leq \sqrt{1 + A^2}\left|\sum_{k=1}^N u_k\right|.$$ We know that $|\sum_{k=1}^N u_k|$ converges as $N\to \infty$, since $\sum u_k$ converges. Thus the right hand side is bounded. It follows that the sequence $\sum_{k=1}^N |u_k|$ is bounded in $N$. But this sum increases as $N\to \infty$, and all bounded increasing sequences converge. Thus $\sum |u_k|$ converges.
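The sector bound can be illustrated numerically. In this sketch the series and the sector half-angle $c=\pi/4$ (so $A=\tan c = 1$) are arbitrary choices, not taken from the question:

```python
import cmath
import math

c = math.pi / 4           # sector half-angle; |arg(u_k)| <= c < pi/2
A = math.tan(c)           # slope of the sector's bounding lines
C = math.sqrt(1 + A * A)  # the constant sqrt(1 + A^2) from the proof

N = 10_000
u = [cmath.exp(1j * ((-1) ** k) * c) / k**2 for k in range(1, N + 1)]

abs_sum = sum(abs(z) for z in u)   # partial sum of sum |u_k|
partial = abs(sum(u))              # |sum_{k<=N} u_k|

# The chain of inequalities gives: sum |u_k| <= sqrt(1+A^2) * |sum u_k|
print(abs_sum, C * partial, abs_sum <= C * partial)
```

For these 10,000 terms the left side is ≈ 1.645 (approaching $\pi^2/6$) and the bound is ≈ 1.84, so the inequality holds with room to spare.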
I think I have a counter example. Take $u_k=\frac{1}{k^2}+i\frac{(-1)^k}{k}$. Since $\Re{(u_k)}>0$ for all $k$, we satisfy $\vert \arg(u_k)\vert<\pi/2$. We also have that $\sum u_k$ converges. Yet $\vert u_k\vert=\sqrt{1/k^4+1/k^2}>\sqrt{1/k^2}=1/k$ and hence $\sum\vert u_k\vert$ diverges.
Edit: This is a faulty counter-example, since it doesn't satisfy the condition $\vert\arg(u_k)\vert\leq c<\pi/2$. In fact, $\vert\arg(u_k)\vert\rightarrow\pi/2$ as $k\rightarrow\infty$. @froggie's proof is correct.
The condition is stronger than just $|\arg(u_k)|<\pi/2$, and your sequence doesn't satisfy the stronger condition. – Jonas Meyer Dec 16 '12 at 0:11
Excellent! I knew something didn't quite seem right. – icurays1 Dec 16 '12 at 0:14
+1 for an excellent example of why $|\arg u_k|<\pi/2$ doesn't suffice. – Jonas Meyer Dec 16 '12 at 0:17
## November 10, 2008
### The NYT identifies Jamie Gorelick as potentially Obama's pick for Attorney General.
And provides this profile. Under the heading "Baggage":
Her work at Fannie Mae, which had to be bailed out by the government in September as part of a $200 billion deal. Ms. Gorelick left the company just as it was coming under attack for huge accounting failures.

She has also drawn criticism for her role at the Justice Department, in which she allegedly created an intelligence “wall” that hindered counterterrorism agents in the years before the Sept. 11 attacks. Conservatives called for her removal from the Sept. 11 commission, but her fellow members rallied around her and said critics were distorting her record. The criticism grew so heated that the F.B.I. investigated a death threat against her family, and President Bush had to intervene personally to stop the Justice Department from releasing sealed reports involving her.

Some conservative bloggers have already begun trying to derail Ms. Gorelick’s possible nomination as attorney general, pointing to her experiences at both Fannie Mae and the Sept. 11 commission.

Unbelievably ponderous baggage! Oh, but conservatives have attacked her. Does that somehow cancel the baggage? A better question: Why haven't liberals attacked her?

Beldar seethes:

Short of appointing an actual member of al Qaeda, I cannot imagine a more offensive symbolic repudiation of the Global War on Terror — nor a more enthusiastic embrace of the chronic mismanagement, cronyism, and graft which led to this fall's credit crisis — than the appointment of Jamie Gorelick as attorney general.

I voted for Obama, as I'm sure my commenters are about to remind me, and I'm hoping for the best. He told me to hope! Please don't crush my hope so early, Mr. Obama.

ADDED: "They put stickies on the face of Mohammed Atta on the chart that the military intelligence unit had completed, and they said you can't talk to Atta because he's here on a green card." Something I quoted on Instapundit back in August 2005, which got me accused of "enlisting Glenn" in a "smear campaign" against Jamie Gorelick.
Here's how Glenn responded at the time. #### 190 comments: Darcy said... Yeah, my heart sunk when I read that this morning. I thought it was a joke. Well, I guess we'll get a pretty clear indication of where Obama's going if he nominates her. Better to know what we're up against, if that's the case, early, I think. I wonder what John Stodder would think of this? And I'm not saying that to poke at him. I really want to know. MadisonMan said... Can I note the obvious? There is nothing in that article that says Obama is actually considering this pick. So when it says: Being considered for I have to ask By whom? Bissage said... Jamie Gorelick will not accept any appointment in the Obama administration, if she knows what’s good for her. She has already promised me she will star in our community theater company’s production of “Star Trek – The Musical.” She promised she would play the Vulcan Ambassador. I’ll sue! So help me, I’ll sue! TMink said... Madison, I hope you are correct. Trey BumperStickerist said... Ann Althouse imitates "The Onion" http://www.theonion.com/content/node/34198 - Ann Althouse said... The NYT has some basis for forefronting her name. Perhaps the transition team is trying to test the waters. In which case, I would like them to find out that the waters are fully aboil. reader_iam said... I totally have to credit Bill (of So Quoted) in an online comment to me early this morning for drawing the connection between Gorelick and Zelig (or Forrest Gump). Her name and face just keeps popping up in connection with costly disasters. Let's hope that with regard to the Obama administration, she only pops up in the NYT. Bushman of the Kohlrabi said... I think madison is right. As much as it pains me to give Barry the benefit of the doubt, this is the NYT we're talking about. Marcia said... A few details left out of her Fannie Mae involvement: like her$26 million salary, and her sweetheart Countrywide $1 million home refinancing. Big Mike said... Puh-leez folks. 
This is an old Washington game. Someone is pushing Jamie Gorelick and is using the MSM to float a trial balloon. Or, there is somebody else who has a nontrivial quantity of baggage, but who will look wonderful next to Ms. Gorelick and people are supposed to ignore this new person's issue because "at least it isn't Gorelick." There are only two ways she'll actually get nominated. First is a variation on the above, let her be a stalking horse and let Republicans take the heat for knocking her down, then put up the person President Obama really wants. Or, perhaps Mr. Obama is is simply overrated in the brains department. (Odds on the latter are pretty long -- I don't like his policies, but very few people think he's stupid. Just misguided.) Palladian said... "...forefronting..." Stop verbing nouns!! LarsPorsena said... "The NYT has some basis for forefronting her name. Perhaps the transition team is trying to test the waters. In which case, I would like them to find out that the waters are fully aboil." AA is right; this is a trial balloon. But it's still enough to gag a maggot. Dust Bunny Queen said... He told me to hope! Please don't crush my hope so early, Mr. Obama. And so the Kabuki theater ends earlier than expected. Is anyone seriously surprised that Obama would continue to surround himself with corruption and criminals? Its merely payback time to the people who put this empty shell in power. The Chicago political machine goes national. Salamandyr said... Based on some of the names people have floated for Obama's cabinet, Larry Summers and Rahm Emmanuel, I've been cautiously optimistic. Hearing Jamie Gorelick's name has pretty much driven a stake through the heart of that dream, burned it and spread its ashes at a crossroads. So I will cling instead to the wan hope that the NYT doesn't know what it's talking about. Palladian said... "As much as it pains me to give Barry the benefit of the doubt, this is the NYT we're talking about." 
I know and because of that, I automatically assume that they would never intentionally or even accidentally print anything about Barry that hasn't been confirmed and approved by Obama's office. Henry said... Michael Brown is working on his resume. Simon said... I can't for the life of me see why it wouldn't be Walter Dellinger for AG and Kathleen Sullivan for SG. SteveR said... I think Big Mike is right. Throw out a totally unacceptable name that gets trashed and anyone else looks good, except maybe Janet Reno (maybe). This is change we need, right, change we can believe in? Freeman Hunt said... Cannot be true. No way. Surely. Seriously. Please? Freeman Hunt said... How does someone like Gorelick continue to get high profile jobs? If everything you were associated with turned into a fiasco, it seems like people wouldn't want you. What gives? Simon said... I disagree with Beldar that Gorelick is worth filibustering, but I'm delighted to see that at least one other conservative has abandoned the view that filibustering Presidential nominees is unacceptable. downtownlad said... She worked at Fannie Mae until 2003. So. Freaking. What. Seriously - is that really going to be considered a disqualification? Why? On what grounds? You're a lawyer - I'd like to hear a logical explanation of how she's responsible for ANYTHING unethical Fannie Mae may have done. And nobody who knows anything about the current financial crisis actually blames Fannie or Freddie Mae. They were victims of it. downtownlad said... And 99% of the commenters on this thread didn't vote for Obama. Well guess what? He's not your President. You are unpatriotic enemies of the state and he doesn't need to listen to you. Zeb Quinn said... Please don't crush my hope so early, Mr. Obama What we seem to have here is a variation of the parable of the frog and the scorpion , except in this version the frogs have convinced themselves all on their own that the scorpion won't sting them. Roger J. said... 
As other posters have noted, this is the old trial balloon thing going on. And soon the sky will be filled with trial balloons for various cabinet positions. It happens every four or eight years. mjsharon said... Downtownlad, Your last sentance betrays you know nothing about the current financial crisis. Fanny and Freddie clearly share a very, very large part of the blame. Also, I doubt you'd let a potential Repub nominee off the hook so easily. The One would be a fool to consider Gorelick. Use Ann as your barometer on this one. jdeeripper said... The next AG will be Eric Holder or Larry Thompson. I'm pretty sure any Jewish anxieties about Barack Hussein Obama have been eliminated with the pick of Rahm Israel Emanuel as chief of staff and the talk of two establishment Jewish liberals Larry Summers as possible Treasury secretary and Jamie Gorelick as AG. Change, change, change from a guy who selected lifelong Senate hack Joe Biden as his running mate. The only surprise from Obama will be the selection of any person who isn't a long time member of the inside Washington crowd. The only change this guy brings is a black wife and kids. reader_iam said... While I'm here, may I quickly suggest that anyone unfamiliar with John Podesta's think tank consider boning up a bit? That background might come in useful. The think tank is The Center for American Progress (among other of its endeavours is the ThinkProgress). As you know, Podesta co-chairs President-elect Obama's transition team, and, of course, he was chief of staff under Clinton. Roger J. said... Why does Gorelick's name keep popping up? probably because she knows where the Janet Reno's justice department skeletons are buried. Just my guess. Lem said... Jamie Gorelick? The rains with the flue are coming on to ruin Obama’s honeymoon early. Please God let it be true ;) Roger J. said... Jamie G as AG, John "effen" Kerry as Sec State. Wow--the mind boggles at the possibilities of what an Obama Cabinet will look it. 
This is going to be a fun four years. Dust Bunny Queen said... She worked at Fannie Mae until 2003.. She didn't just "work" at Fannie. She was Vice Chair. The so what? The fact that the company was cooking the books under her watch doesn't bother you in the least? She is directly responsible for the current financial crisis. ALL board members of ANY board are directly responsible for the actions taken by their companies. The fraud perpetrated by companies like Fannie, Enron etc caused the passage of Sarbanes Oxley which was tailor made for hacks like Gorelick. "Boards of Directors, specifically Audit Committees, are charged with establishing oversight mechanisms for financial reporting in U.S. corporations on the behalf of investors. These scandals identified Board members who either did not exercise their responsibilities or did not have the expertise to understand the complexities of the businesses. In many cases, Audit Committee members were not truly independent of management." You really don't have a clue, do you? Kirk Parker said... "The NYT has some basis for forefronting her name. Perhaps the transition team is trying to test the waters. In which case, I would like them to find out that the waters are fully aboil." Ann, can you explain just why it was that this wasn't enough reason to vote against Obama? ("This" meaning that either O would actually consider Gorelick a viable AG canditate, or that he hangs out with/hires people who do?) Sal, Larry Summers, sure, I pretty much agree with your POV. But Emmanuel? One of the main guys who put the "personal" into "the politics of personal destruction"? What hopeful thing could his consideration possibly be a sign of? reader_iam said... There's nothing wrong with lobbing rotten tomatoes at floated names. Better--or at least more effective, to the degree that it effective, anyway--to do that beforehand than after the fact. downtownlad said... 
Please explain how Fannie and Freddie are a large part of the current crisis. And then explain how Jamie Gorelick contributed to that. I'm really interested in hearing your nonsensical babble. Fannie and Freddie were a quasi-governmental agency, doing exactly what they were chartered to do - securitize mortgages. They got burned when the housing marked collapsed as did almost every other financial institution that dealt with mortgages. Yes, they had some accounting irregularities in 2004, but that had nothing to do with the current financial crisis. MadisonMan said... And 99% of the commenters on this thread didn't vote for Obama. Work on your math skills. reader_iam said... And one can do that while still keeping in mind the phenomenon to which big mike aptly points. MadisonMan said... They put stickies on the face of Mohammed Atta I read that as They put pot stickers on the face of Mohammed Atta. Nonsense! (Why, yes, I did have chinese food for dinner last night, why do you ask?) MadisonMan said... reader, isn't that's how Simon's Hero was appointed! Darcy said... LOL, MadisonMan. And now I'm hungry! Thanks. :) garage mahal said... She is directly responsible for the current financial crisis. ALL board members of ANY board are directly responsible for the actions taken by their companies. ? 19 McCain fundraisers & advisers lobbied for Fannie Mae or Freddie Mac. Rick Davis was McCain's campaign manager who was paid 5 million dollars over 5 years by an advocacy group set up by Fannie Mae. He was being paid$15,000 month up until Sept for doing absolutely nothing aside being there for access to McCain. You are so full of shit on a consistent basis I have wonder where you find your "news".
I've seen ZERO evidence that she was involved in the scandal. If you have any, please provide it.
mjsharon said...
Dtl,
I'm sure others here can provide more detail, but here goes: Fannie and Freddie over time vastly expanded the scope of their mortgage underwriting operations and stoutly resisted any attempts to rein in their activities (by imposing some sort of reasonable capital requirements). They were far from passive in all this. And the top execs (hacks like Gorelick) made big $along the way. Do you read the papers? Paddy O. said... I'm not terribly surprised. Obama, when faced with his first choice for an appointment, picked Joe Biden as his running mate. When it comes down to it, hope and change are just words to bring back in the tired and same. Freeman Hunt said... They put pot stickers on the face of Mohammed Atta. Enhanced interrogation technique #358: "Talk, and we'll let you eat these." reader_iam said... DTL: Let's cut to the chase. You are unaware of the investigations into Fannie Mae and its governance? You don't think they matter? Putting aside our current financial crisis and various theories as to its cause, you don't think there were problems with Fannie Mae? The SEC and OFHEO charges mean nothing? There are no grounds to question the judgment of those who were on the board at relevant time periods? You wouldn't want to take such things in consideration in questioning someone's fitness as AG? (And, of course, that's not Gorelick's only problem... .) Roger J. said... Re filibustering Presidential cabinet nominees: Unless the nominee fails the Edwin Edwards test (live boy or dead girl) then the President should have whom he wants on his cabinet. This position, however, in no way applies to judicial nominees where a separate branch of government is involved. Dust Bunny Queen said... I'm really interested in hearing your nonsensical babble. You are too stupid for me to waste pixels on trying to explain the intricacies of derivatives trading, the role of the Democrats in Congress and the legal responsibilities of a Board of Directors. Google it. PoNyman said... 
And 99% of the commenters on this thread didn't vote for Obama. Work on your math skills. Probably just a question of semantics. Maybe those who voted for Obama and comment here are considered dissenters whereas those who did not vote for Obama are commenters. So by that logic, 100% against Obama - one for Obama (Althouse) divided by said total = 99% +- 2%. downtownlad said... MJsharon, Wrong. Fannie Mae and Freddie Mac have never underwritten ONE single mortgage. They securitize mortgages, so they act in the secondary market. That's also their charter as originally mandated by the government. That's a big difference. They guarantee half of the mortgages in this country. Do you really think there is anyway they could have NOT been involved in this crisis? TosaGuy said... DTL said: "You are unpatriotic enemies of the state" Isn't that a bad line in movies set in the USSR or Nazi Germany? downtownlad said... reader iam - I think individuals hold blame, not corporations. Despite my hatred of Bush, you will never hear me going on about how everyone at Enron was unethical and guilty. They weren't. Only a couple of people at Enron were - and they created a bad name for everyone else. The same is true at Lehman. Is there any evidence that anyone at Fannie and Freddie even broke the law, or was it just incompetence (as I suspect)? But it's a quasi-governmental entity. We shouldn't be surprised that there is incompetence there. Do I blame Gorelick for that incompetence? Well - show me some evidence. I'm willing to listen. She's a lawyer, not an accountant, so its hard for her to have oversight in that matter. Of course, I think she never should have been appointed to that role in the first place, but I can't blame her for taking it. Lots of people accept jobs they are unqualified for. For example - Palin. reader_iam said... I'm perfectly aware that elections have consequences and that administrations get to pick whom they like for various positions. 
Opponents suck it up, or whatever. However, I'm also of the opinion that the AG is a little different from other positions, and so--IMHO--that one is more worth fighting over. We all of us have more stake in that one, and rightly so. mjsharon said... Dtl, You have, predictably, ignored the essentials of my comment and focussed on trivialities. To answer your last question - yes - by acceding to reasonable limits on their activities, as suggested by some (such as McCain) from time to time. downtownlad said... My 99% comment was a joke. Please take a chill pill. chickenlittle said... What hopeful thing could his consideration possibly be a sign of? a 2010 check and balance I hope. DTL, you're ignoring people's perceptions of Fannie and Freddie as an enabler. PoNyman said... Ah, DTL, you are such a tease. integrity said... I support whatever Obama does and whomever he picks. The uber-corrupt dimwit boy king(dictator) got whatever he wanted(without one scintilla of punishment), so will Baracky. Tought shit. LOL. I LOVE how it's going so far. Keep the beautiful, chic photos of the future first couple coming. Love 'em. Why can't we white people generate urbane sophisticates like this? Those going to dinner pictures were amazing, fuck! I love their policies and their imagery. Yahoo! I'd comment on Professor Althouse's superbly inconsistent P.O.V.(just reading the weekend stuff alone is worthy of hours of posts), but she's got a business to run. This is what happens when you cultivate commenters that are so far right as to be virtual dead-enders. I've learned innumerable lessons from watching this scene play out over the last 8-9 months. In the period of time I have been at this site I've never seen one criticizm of Bush from the top, once the Professor made a comment about doing something competently or something. but that's it. We lefties have been given several gifts this past year from republicans and pseudo-republicans, the gifts will obviously keep coming. 
The righties and their water-carriers do not have the propensity for introspection and therefore can't self-correct. I think we may get an 8 year run here kids. I'm pushing Palin/Romney or Romney/Palin for 2012. They will crash and burn with either of them anywhere near the ticket. Do it, I dare you. Palladian said... integrity, no matter how much you snuffle and whimper, Michelle is not going to let you suck Barack's cock. 1jpb said... DBQ, Intricacies? Are you going to solve differential equations for us? Let's start with the basics: "The Black–Scholes PDE As per the model assumptions above, we assume that the underlying (typically the stock) follows a geometric Brownian motion. That is, dS_t = \mu S_t\,dt + \sigma S_t\,dW_t \, where Wt is Brownian. Now let V be some sort of option on S—mathematically V is a function of S and t. V(S, t) is the value of the option at time t if the price of the underlying stock at time t is S. The value of the option at the time that the option matures is known. To determine its value at an earlier time we need to know how the value evolves as we go backward in time. By Itō's lemma for two variables we have dV = \left(\mu S \frac{\partial V}{\partial S} + \frac{\partial V}{\partial t}+ \frac{1}{2}\sigma^2 S^2\frac{\partial^2 V}{\partial S^2}\right)dt + \sigma S \frac{\partial V}{\partial S}\,dW. Now consider a trading strategy under which one holds one option and continuously trades in the stock in order to hold - \frac{\partial V}{\partial S} shares. At time t, the value of these holdings will be \Pi = V - S\frac{\partial V}{\partial S}. The composition of this portfolio, called the delta-hedge portfolio, will vary from time-step to time-step. Let R denote the accumulated profit or loss from following this strategy. Then over the time period [t, t + dt], the instantaneous profit or loss is dR = dV - \frac{\partial V}{\partial S}\,dS. 
By substituting in the equations above we get dR = \left(\frac{\partial V}{\partial t} + \frac{1}{2}\sigma^2 S^2\frac{\partial^2 V}{\partial S^2}\right)dt. This equation contains no dW term. That is, it is entirely riskless (delta neutral). Thus, given that there is no arbitrage, the rate of return on this portfolio must be equal to the rate of return on any other riskless instrument. Now assuming the risk-free rate of return is r we must have over the time period [t, t + dt] r\Pi\,dt = dR = \left(\frac{\partial V}{\partial t} + \frac{1}{2}\sigma^2 S^2\frac{\partial^2 V}{\partial S^2}\right)dt. If we now substitute in for Π and divide through by dt we obtain the Black–Scholes PDE: \frac{\partial V}{\partial t} + \frac{1}{2}\sigma^2 S^2\frac{\partial^2 V}{\partial S^2} + rS\frac{\partial V}{\partial S} - rV = 0. This is the law of evolution of the value of the option. With the assumptions of the Black–Scholes model, this partial differential equation holds whenever V is twice differentiable with respect to S and once with respect to t." And, then you can teach us the tricky stuff. reader_iam said... I support whatever Obama does and whomever he picks. I can honestly say I've never thought or felt that way about any president or any politician, ever. Creepy. Palladian said... "This is what happens when you cultivate commenters that are so far right as to be virtual dead-enders." LOL! Funny stuff! Simon said... MadisonMan - the Scalia nomination came down to a choice between two luminaries, both doing penance in administrative law hell - the D.C. Circuit - at the time: Our Hero, and Robert Bork. As much as I'd like to tell you that they looked at both and concluded that Scalia was better, the record seems to establish that Scalia was simply younger and in better health, and therefore thought likely to serve longer. Matt Eckert said... Hey let Barack be Barack. He is entitled to be surrounded by people who think and act like he would. If he can't get Ms. 
Gorelick through I hope he nominates Lynn Stewart or Ron Kuby so he can have an attorney general who reflects his world view. Palladian said... "And, then you can teach us the tricky stuff." You mean like copying and pasting things from Wikipedia? Hoosier Daddy said... DTL And 99% of the commenters on this thread didn't vote for Obama. MM Work on your math skills. I think he'd be better served improving logic and interpersonal skills first. Lem said... Gorelick is Zoooe Baaaird for Obama ;) downtownlad said... Hindsight is easy guys. Remember - McCain was saying that the fundamentals of the economy were sound only two months ago - and every Republican believed it. Yet now you're blaming Fannie Mae and Freddie Mac for not anticipating the subprime collapse several years back. Hmm - very few economists predicted that. But Krugman was one of them. I didn't predict it. Although I was aware that we had a huge problem since August 2007 (as was most of Wall Street), when the credit crisis started. And I never gave a crap that McCain's campaign manager had connects to Freddie Mae. So what. Original Mike said... I sure hope MM and Big Mike are right and that there's nothing to this. Atta, Fannie; the choice would be appalling. Matt Eckert said... Let Barack be Barack. You know as a way to mend fences with women and to firmly demonstrate civilian control over the military he should nominate Cindy Sheehan to be his Secretary of Defense. The Drill SGT said... multiple comments below: 1. Ann missed the biggest giggle line from the NYT in this: Ms. Gorelick would also bring corporate experience to an Obama administration at a time of financial crisis. 2. Roger J said: Roger J. said... Re filibustering Presidential cabinet nominees: Unless the nominee fails the Edwin Edwards test (live boy or dead girl) then the President should have whom he wants on his cabinet. 
While I agree in general, I'd draw the line at past bad behavior that is directly germane to the office being appointed. Having said that, (and I know she's just acting for a client) some of Ms Gorelick's claims on behalf of Duke would appear to indicate that she fails to understand the fundamental rights of the accused and the rights to privacy accorded all Americans.
3. I would note that Rahm Emanuel was also a Fannie Director
4. I heard a rumor that Sarbanes-Oxley doesn't apply to the GSEs. Apparently the Dems put in an exemption so that the GSEs aren't required to comply with the Sarbanes-Oxley Act's provisions, including certification of financial statements, codes of ethics, loan prohibitions, and independent audit committees.
MadisonMan said...
Simon, I now see that my memory is failing me, and that Kennedy, not Scalia, was the post-Ginsburg nominee. Another great theory shot to pieces by actual data!
Hoosier Daddy said...
Why can't we white people generate urbane sophisticates like this?
Comment of the year.
Quayle said...
Gorelick's people are just pushing to get her the opportunity to prosecute Osama Bin Laden in US court for tortious conduct
Roger J. said...
"hindsight is easy...!" truer words were never spoken; e.g., the fabled August 2001 PDB which clearly foretold precisely the time and place of the 9/11 attacks!
Matt Eckert said...
If those nasty Republicans block Ms. Gorelick from Attorney General perhaps she can get to be in charge of Homeland Security. She can make sure that all those freedom fighters get out of Guantánamo Bay and that all terrorists all over the world have the same rights and privileges as American citizens. It is most important that everyone else in the world look favorably on America and that whole terror thing was really overblown anyway.
1jpb said...
palladian,
1) They don't hand out ChemE degrees w/o being able to solve differential equations.
Which has been helpful since I've gone through about a dozen text books from top graduate B school programs in the states. I know that my understanding of the "intricacies" of these instruments has helped me comprehend the interrelatedness, limits, and potential risks of these products.
2) I'm not claiming ownership of the formula. This seems to be a surprise to you. Which means that I've narrowed my identity, by eliminating two Nobel prize winners. [One side benefit is that you now know I'm not partially responsible for LTCM.]
Ignacio said...
Are we going to have to preface every remark from now on which touches on President Obama or his activities or associates by stating that we voted for him (if we did) or not? What if we lie? What if we voted for Obama and still see Jamie Gorelick as a dreadful pick for anything? Is this disloyal?
LarsPorsena said...
Palladian: ""And, then you can teach us the tricky stuff." You mean like copying and pasting things from Wikipedia?"
Great catch!! It should also be added that this bit of intimidating gibberish pushed LTCM and several other hedge funds into insolvency with the public paying to clean up the wreck.
Simon said...
Matt Eckert said... "Hey let Barack be Barack. He is entitled to be surrounded by people who think and act like he would."
That's right. "The Attorney General is the hand of the President in taking care that the laws of the United States in legal proceedings, and in the prosecution of offenses, be faithfully executed," United States v. Cox, 342 F.2d 167, 171 (5th Cir. 1965) (en banc). "Executive branch actors are intermediaries for the executive power, and surrogates for the President in whom that power is vested by the Constitution."
Lem said...
Of all the lawyers, in all the towns, in all the world, Gorelick walks into Obama's.
Dust Bunny Queen said...
It should also be added that this bit of intimidating gibberish pushed LTCM and several other hedge funds into insolvency with the public paying to clean up the wreck
Unfortunately, I actually had to learn to calculate this and other equations to pass the CFP exam. So DTL can cut and paste to his widdle heart's content. It means nothing since he has no understanding.
Lem said...
That's right. "The Attorney General is the hand of the President in taking care that the laws of the United States in legal proceedings, and in the prosecution of offenses, be faithfully executed,"
Why do I fear Simon's Constitutional instincts have never been this sharp ;)
The Drill SGT said...
Lem said... Of all the lawyers, in all the towns, in all the world, Gorelick walks into Obama's.
Does the AG have to be a lawyer? I mean the Surgeon General doesn't have to be a Surgeon or a General? Having said that, it could be worse: Lynne Stewart or Bernardine Dohrn come to mind :)
Simon said...
MadisonMan said... "Simon, I now see that my memory is failing me, and that Kennedy, not Scalia, was the post-Ginsburg nominee."
According to Greenburg (corroborated by Toobin, IIRC), Kennedy was picked out of sheer exhaustion. The White House was out of steam after Bork was rejected and Ginsburg withdrew, and took the path of least resistance. Kennedy seemed confirmable, so they went with him despite vigorous protests from several at DoJ who knew exactly what kind of Justice AMK would be. As a result, we've had to sit through two decades of mushy, foggy prose (and even mushier, foggier thinking) polluting the U.S. Reports. There must surely be liberal lawyers who, from time to time, faced with the task of deciphering one of Kennedy's orotund little missives, have privately lamented that in some ways, it would have been better to have Ginsburg - or even Bork - on the court after all.
Matt Eckert said...
It is a shame that Johnnie Cochran has passed away.
Then we would not have to worry about who the President Elect would pick as his first Supreme Court nominee.
reader_iam said...
It's so unreasonable to prefer an AG with a bit of independence and a little less baggage in at least three areas. Fine, whatever.
Lem said...
This is not fair. Lani Guinier was ahead of the line, she's female and black. a twofer.
Hoosier Daddy said...
She can make sure that all those freedom fighters get out of Guantánamo Bay and that all terrorists all over the world have the same rights and privileges as American citizens.
Already under way
Lem said...
Did they throw Anita Hill under the bus already?
Hoosier Daddy said...
Having said that, it could be worse: Lynne Stewart or Bernardine Dohrn come to mind :)
Not so loud. Sheesh... ;-)
bearbee said...
Talk about yer obvious payoff of political debt. And don't forget Franklin Delano Raines.
Hoosier Daddy said...
Maybe Tony Rezko will be pardoned and promoted to HUD Director.
AJ Lynch said...
Fannie Mae Compensation 1998-2003 per WSJ editorial on 6/11/2008:
Franklin Raines: $90,128,761
Timothy Howard: $30,155,029
Jamie Gorelick: $26,466,834
Jim Johnson: $21,000,000
And Downtown Lad responds "nothing to see here".
Roger J. said...
I would be much more interested to see how many sitting US Atty's Obama fires on taking office; and esp if he keeps Fitzgerald on in chicago.
Jake said...
They have to be floating this so that anyone later will seem OK.
Pastafarian said...
1jpb -- your level of condescension is particularly off-putting, given that you're presenting partial differential equations, cookbook math, as if it's the pinnacle of human knowledge.
I'm no world-class mathematician, but I taught a course in partial differential equations to undergraduates. Get over yourself, dude. Chemical Engineering -- the last resort for those who can't make it through Electrical.
walter neff said...
Since the Democrats are in, all of the attorneys general will be fired and that will be just great.
You see anything that the Republicans did that was a crime and a sin will be a-ok now. If you do not believe me, just wait to read about it in the New York Times.
The Drill SGT said...
Roger J. said...
I would be much more interested to see how many sitting US Atty's Obama fires on taking office; and esp if he keeps Fitzgerald on in chicago.
I predict he pulls a Clinton and fires every one.
And the press, without a whimper, will cite the fact that Clinton did it, so it's normal practice, but the way that Bush waited and only fired some was clearly "partisan" and illegal.
Actually I take that back, Jimmy Carter will get the HUD post.
Chemical Engineering -- the last resort for those who can't make it through Electrical.
No, that would be Industrial Engineering.
ElcubanitoKC said...
Hope and change and hope and...
And the press, without a whimper, will cite the fact that Clinton did it, so it's normal practice, but the way that Bush waited and only fired some was clearly "partisan" and illegal.
Drill Sgt, you know very well you are distorting history. Of course Bush fired them all (I think technically they all offered their resignations) at the start of his term. SOP. Where Bush went wrong, and how that lackey Gonzales found himself out of a job, was by having people fired who didn't pursue Democrats with enough aggression -- that is, by turning the US Atty office into a branch of the Republican Party. Now, I don't know if Bush had anything at all to do with the Gonzales imbroglio -- my gut says probably not. But he did hire his good friend to do all the shady shenanigans. This lousy mindset is, IMO, one reason the Republicans are soon to be on the outside looking in. No one likes an obviously and demonstrably corrupt party.
Rich B said...
Bring back Warren Christopher!
Rich B said...
He's still alive! I checked Wikipedia!
Matt Eckert said...
"No one likes an obviously and demonstrably corrupt party."
Thank God we have someone from the heart of the Chicago Daley Machine to stop that in its tracks.
1jpb said...
DBQ,
DBQ why would you encourage this "gibberish" ignorance if you are acknowledging that differential equations are important?
Great catch!! It should also be added that this bit of intimidating gibberish pushed LTCM and several other hedge funds into insolvency with the public paying to clean up the wreck.
I hope this isn't a sign of the right wing's new attack on intellect. Do you folks really think that you don't need differential equations? Is this related to creationism, i.e. since there are no differential equations in the Bible, we must not use them? [I attend an Assemblies of God church, so please save the claims that I'm anti-God.]
I'll put it in terms of the gun argument so you can understand: Differential equations don't destroy institutions, people destroy institutions.
Pastafarian,
I think I can hold my own regarding differential equations. For "fun" I took graduate level courses in Physics (among other programs that were totally unrelated to my degree.) While in these classes I was able to use computer systems to churn for days in order to determine molecular level structures based on quantum mechanic level energy formulas that I worked out. [And, BTW, the calculated structures for known molecules perfectly matched the known structures, which confirmed the stability of the designed structures that were based on my quantum mechanics formulas.]
I'll admit that I didn't find this especially taxing, but the professors were blown away (and many of the students for that program were unhappy because I was influencing the grading curve) so I must have been doing something notable, especially as an "outsider."
P.S.
I'm sure that the relative difficulty of E programs varies from school to school. I chose ChemE precisely because it was the most difficult E program at my school. It also had (has?) a multi-year run at being the highest paid non-professional degree straight out of school. Of course, I quickly realized that owning your own business, and then being an executive in corporate America are more profitable, so I've been an apostate ChemE for a long time.
"new attack on intellect."
No, it's a pretty old style attack on those who try to sound important by listing their self-impressive skills.
Ego assertion brings ego deflation.
I don't think Althouse is hiring, so really there's no need to offer your resume here. And it's kind of sad to do so merely to try to get folks to be impressed.
Surely, there are better places than blog comment sections to find identity confirmation.
I'm sure someone is out there who will more personally affirm the fact that you're a real person, no matter what your inner doubts might suggest.
1jpb said...
Yes, I'll check in with my crack dealer. Always a good listener.
Henry said...
Roger J. wrote: John "effen" Kerry as Sec State. Wow--the mind boggles at the possibilities of what an Obama Cabinet will look it.
Woodrow Wilson made William Jennings Bryan Secretary of State. Wilson owed Bryan for his convention support; Wilson also liked to be the smartest guy in the room.
The latter doesn't seem to be one of Obama's flaws. But he is now facing the onslaught of Democratic job seekers and must minimize the influx of stupidity into his cabinet.
Thus Rahm Emanuel. Really a great pick for Obama. Before he could deal with the pack of prima donnas at his door, he needed his own junkyard dog.
1970_baby said...
Empty suit, man-child, squirrel, all are apt.
But my personal favorite is "Nancy Pelosi's personal hand puppet".
Joan said...
I hope this isn't a sign of the right wing's new attack on intellect. Do you folks really think that you don't need differential equations?
When you cut-and-pasted the differential equation stuff, I thought you were just a tool. But now you've made it obvious that you are an idiot. To be perfectly clear: Althouse's comment section does not now, and will likely never, need differential equations.
Whether or not your particular occupation requires the use of differential equations is immaterial to the discussion at hand. (For the record, I took my required year of calculus at MIT and then never looked back. The physics, econ and finance courses I took back then didn't get into any real heavy lifting.)
Lem said...
This comment has been removed by the author.
Lem said...
Speaking of resume.
Gorelick’s bio runs like a CSI who’s who.
Gorelick always ends up near a crime scene ;)
Revenant said...
These stories are planted by administration insiders as trial balloons, to see how people will react. If the reaction isn't too terribly negative, then they'll officially announce something.
shorter 1jpb: I am really smart.
Really, I am!
See? See?
Freeman Hunt said...
You all should stop arguing with 1jpb. He is obviously super duper smart.
-------
Writing of odd ideas for appointments or advisers, what's the deal with Daley's brother being on the economic transition team? No matter how great the guy is, it seems like Obama wouldn't want the stink of infamous Chicago politics following him to the White House.
Edmund said...
It's interesting that Obama's team would float the name of someone hostile to civil rights for AG. How is she hostile to civil rights? She was the one flacking the "Clipper Chip" to Congress - it would have given the government keys to all cryptography performed by individuals and corporations in the US. None of your private transactions on the InterTubes would have been secure from government snooping. None.
chickenlittle said...
1jpb wrote: And, BTW, the calculated structures for known molecules perfectly matched the known structures, which confirmed the stability of the designed structures that were based on my quantum mechanics formulas.
I don't know how you got so far off topic, but nobody gives a shit whether calculated structures match known structures; that's just benchmarking. The sole utility of calculating chemical structures is predicting unknown chemical structures and transition states. And as you well know, that's a different story.
TitusGuessWho'sComingToDinner said...
My only hope in his cabinet is that they are hot.
Jamie is kind of hot so I approve.
thank you.
My morning loaf really smelled today.
No matter how great the guy is, it seems like Obama wouldn't want the stink of infamous Chicago politics following him to the White House.
Considering he represented the South Side of Chicago as a state rep, I think that's going to be hard to do.
M. Simon said...
I'm totally in favor of her as AG.
It will bring change much faster. Well that is my hope any way.
She has brought disaster every place she worked. I'm hoping she will do her magic on the Obama administration.
It is all good.
TitusGuessWho'sComingToDinner said...
Rahm is hot. I want a hot Obama staff.
I want lots of pecs and tits and asses that are hot.
And just for the record, all you guys talking differential equations and all this pointy head math stuff; you're obviously overcompensating for a lack of...something.
I'll just leave it at that.
TitusGuessWho'sComingToDinner said...
If we have to look at these people over the next 4 years they need to be hot.
Freeman Hunt said...
Considering he represented the South Side of Chicago as a state rep, I think that's going to be hard to do.
Agreed. That's why I would think he wouldn't bring anyone with him. Especially not Daley's brother!
Toby said...
"Why haven't liberals attacked her?"
Because the only sin liberals recognize among fellow believers is heresy.
AJ Lynch said...
Jamie Gorelick is hot?? I think Titus needs to get an eye exam.
walter neff said...
He thinks shes a guy.
Shes got a Janet Reno thing going on.
Freeman Hunt said...
A lot of these people who've been tapped or whose names have been floated seem like payback picks. I think that's a waste. Obama doesn't owe the Democratic establishment anything. Plus, he's going to be the President of the United States; he doesn't need these has-beens and hangers-on. He's all the rage and the media loves him. I don't know why he bothers with these people.
A lot of these people who've been tapped...
Freeman, please be careful with your choice of words. Being tapped may have a different connotation among those Chicago machine politicians.
Just sayin.
AlphaLiberal said...
Recycling old, discredited right wing falsehoods I see. Like the "Gorelick wall."
I could marshal facts and show how it's a pack of lies, but no mountain of facts would have any effect on the host or her right wing base. They will repeat the same falsehoods ad nauseam.
Oh, well. This crowd is irrelevant and the action has moved on. Me, too.
tjl said...
"She has brought disaster every place she worked. I'm hoping she will do her magic on the Obama administration"
Correct, in theory. But Gorelick as AG can wreak way too much havoc in the meantime. Let's hope O will put his lefties in highly visible, but relatively harmless offices instead, like Bill Ayers as Secretary of Education.
Freeman Hunt said...
Being tapped may have a different connotation among those Chicago machine politicians.
Heh. I guess we'd really be talking payback picks in that case.
Roger J. said...
Alpha--regrettably your promise to move on is in the same vein as the promises of Obama. Not to be taken seriously.
Lem said...
Obama doesn't owe the Democratic establishment anything. Plus, he's going to be the President of the United States; he doesn't need these has-beens and hangers-on. He's all the rage and the media loves him. I don't know why he bothers with these people.
Because the close Obama confidants could not pass security clearance, so he's stuck with the lifers and other people's confidants.
Trooper York said...
"Red Will" Danaher: Gorelick? Jamie Gorelick, father?
Father Peter Lonergan, Narrator: Well, I can't say it's true, and I won't say it's not, but there's been talk.
(The Quiet Man, 1952)
AllenS said...
Once I figured out how many beers were in a twelve pack, I pretty much gave up on dV = \left(\mu S \frac{\partial V}{\partial S} + \frac{\partial V}{\partial t}+ \frac{1}{2}\sigma^2 S^2\frac{\partial^2 V}{\partial S^2}\right)dt + \sigma S \frac{\partial V}{\partial S}\,dW.
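The formula AllenS gave up on is the Itô expansion from the Black–Scholes derivation pasted (from Wikipedia) earlier in the thread. For the curious, the PDE that expansion leads to has a well-known closed-form solution for a European call; the following is a minimal, purely illustrative sketch of that textbook formula (not something any commenter posted, and not how a real trading desk prices anything):

```python
# Illustrative sketch of the closed-form solution to the Black-Scholes PDE
# quoted in the thread, for a European call on a non-dividend-paying stock.
from math import log, sqrt, exp, erf

def norm_cdf(x):
    # Standard normal CDF, expressed via the error function
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_call(S, K, T, r, sigma):
    """Black-Scholes price of a European call.

    S: spot price, K: strike, T: years to expiry,
    r: risk-free rate, sigma: volatility (both annualized).
    """
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)
```

For example, an at-the-money one-year call with S = K = 100, r = 5%, and 20% vol prices at about 10.45 with these inputs, which matches the standard textbook value. As JoeShipman notes later in the thread, the delicate part is not this arithmetic but the model assumptions behind it.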
I could marshal facts and show how it's a pack of lies, but no mountain of facts would have any effect on the host or her right wing base. They will repeat the same falsehoods ad nauseam.
Translation: I have no way of demonstrating the crap I just threw on the wall.
Paul Zrimsek said...
Obama should be conserving his political capital to back up his Surgeon General nominee, Typhoid Mary. I'm pretty sure I'm right about this, because I know what a Riemann manifold is.
Lem said...
Translation: I have no way of demonstrating the crap I just threw on the wall.
The wall of opposition to Mrs Gorelick continues...
Lem said...
...not even a week in and I'm already having Obasams ;)
1jpb said...
chickenlittle,
Maybe I'm using "designed" where you're using "unknown." I used the testing to demonstrate that my modeling worked. Of course, this was only the first step but it justified applying my model to "unknown" structures.
I should clarify that I was using quantum mechanics to calculate precise three dimensional molecular level structures--not the simplistic stick and orb three dimensional models that folks usually think of when they picture molecular structures.
And, beyond simply determining the most energetically stable precise three dimensional molecules, I was able to test energy induced fluctuations in these three dimensional structures. And, my math for energy changes of the unknown structures was confirmed by testing with known structures, e.g. I was able to use quantum mechanics to determine the relationship between photon energy and the two precise three dimensional structures and energy levels of retinal.
That's cool, in a very geeky way.
With enough computer power it's possible to determine all molecular level characteristics and changes using quantum mechanics.
garage mahal said...
Nice to see conservatives start caring again about who gets appointed to key positions instead of being content with horse judges. That's change we can believe in my friends.
Lem said...
With enough computer power it's possible to determine all molecular level characteristics
Up until now, we've been hitting a wall ;)
With enough computer power it's possible to determine all molecular level characteristics and changes using quantum mechanics.
Yeah well with a big enough lever I can move the world.
Although it did move for Mrs. Hoosier last night but that's another story.
Dust Bunny Queen said...
With enough computer power it's possible to determine all molecular level characteristics and changes using quantum mechanics
Then of course we have the Infinite Monkey Theorem, which actually kind of correlates to the comments from certain posters ...Michael, Infinity, DTL, Doyle :-D
My #1 issue with Gorelick has to do with what is in my opinion a less than finely tuned ability to spot a conflict of interest, especially when it involves herself.
This is problematic in general, but with regard to the AG and other higher level law-enforcement positions, it is a big issue--again, for me, and in my opinion.
And garage mahal, I can assure you that I had problems with Bush's AG's as well, especially Gonzales.
Can you wrap your mind around the idea that this could be precisely why I'm reacting early to this float?
PatCA said...
"Death threats!" is the new black.
To be cool, and excused from all your failings, you will wear your alleged death threats proudly.
Then of course we have the Infinite Monkey Theorem, which actually kind of correlates to the comments from certain posters ...Michael, Infinity, DTL, Doyle :-D
Damn dirty apes...
holdfast said...
Gorelick didn't create the "wall" - it was built during the Carter Administration. What she did was make it higher and thicker, for no good reason but to appease her own notions of fair play for terrorists. In that she is a perfect fit for the Obama (PBUH) Administration. Maybe Bill Ayers can be her special advisor on terrorism!
Franklin Raines for SecTreas!
garage mahal said...
And garage mahal, I can assure you that I had problems with Bush's AG's as well, especially Gonzales.
Can you wrap your mind around the idea that this could be precisely why I'm reacting early to this float?
Sure, of course.
Cedarford said...
I'm pretty sure any Jewish anxieties about Barack Hussein Obama have been eliminated with the pick of Rahm Israel Emanuel as chief of staff and the talk of two establishment Jewish liberals Larry Summers as possible Treasury secretary and Jamie Gorelick as AG.
I don't think Gorelick is Jewish. She is a Clintonista with serious baggage and about the last sort Obama would want back at Justice, reporting to Bill and Hillary after damaging appointment hearings smearing President Obama with embracing the architect of terrorist-friendly rules and of Fannie Mae's lack of rules.
(Aside from the old palsied bulldyke herself).
I expect that Obama will have half his "power player" appointments be Jews, roughly what Clinton had, with fewer white male Gentiles, their diminished number taken up by more women and blacks.
That Obama is going to think of the rich and powerful Jews that mentored his career since Harvard, and gave him and his wife good jobs since law school is no surprise. He also has obligations to the Daley machine and certain black apparatchiks that mentored him.
All that said, Rahm Emanuel is no payback, or affirmative action for Jews, pick. He is an ideal pick.
"Oh, well. This crowd is irrelevant and the action has moved on. Me, too."
Oh please God let it be true this time.
Crack dealers, like prostitutes and blog commenters, only act like they're listening so that they can get on with their own business.
I suggest a mirror and digital recorder to really talk to someone with the insight and discernment to really get what's being said, and be suitably impressed.
Freeman Hunt said...
Cedarford, I hope that you someday write a comment describing the genesis of your fixation on Jews.
Kirk Parker said...
DTL,
"My 99% comment was a joke."
Ah, a humor joke, eh? What about your "unpatriotic enemies of the state" remark? Was that a joke, too, or do I still need to ask you when the shooting begins?
Kirk Parker said...
Freeman,
I'm going to have to disagree with you, just this once, and say I hope C4 never delivers that particular load of rubbish, or at least not to this esteemed forum.
Freeman Hunt said...
I hope C4 never delivers that particular load of rubbish, or at least not to this esteemed forum.
I see your point. But aren't you curious as to how someone becomes so fixated on something random, like Jewish people, and sees nearly everything in the world through what he perceives to be its connections to that random thing? I am.
Jerry said...
As the saying goes - "be careful what you wish for, you just might get it".
Lots of folks wanted 'change' - but there's a whole lot of ways to 'change' things politically - and most of them aren't what anyone would consider good.
But you wished for 'change'.
And now - you've got it. Might as well enjoy it - and remember the results NEXT time the Election Genie comes around and you tell him you want 'change'.
knox said...
You see anything that the Republicans did that was a crime and a sin will be a-ok now.
Clinton fired, what, 92? -- not a peep. Bush & Co. fired, like 8 and it was a scandal. Obama will turn around and do the same thing as Clinton and he'll skate.
Donn said...
FH:
Cedarford, I hope that you someday write a comment describing the genesis of your fixation on Jews.
knox said...
Nice to see conservatives start caring again about who gets appointed to key positions instead of being content with horse judges. That's change we can believe in my friends.
garage,
you should be as wary of Obama as the rest of us. He did exactly the same thing to Hillary as he did to McCain and, especially, Palin: he used the political equivalent of a hatchet on them, all while claiming to be engaging in a "new kind of politics."
Now he's got Rahm Emanuel and, possibly, Gorelick as close advisors. New politics, indeed.
Revenant said...
aren't you curious as to how someone becomes so fixated on something random, like Jewish people
I would be if the person in question was otherwise interesting or insightful, and just had the one weird blind spot. But in Cedarford's case that'd be a no.
blake said...
It's not hard to see how one could become fixated on Jews: They are influential on society disproportionate to their numbers.
Why is an interesting question. I tend to think it's due to a (relatively) stable and cohesive society that emphasizes scholarship and (relatively) free thought.
Others figure it's more about drinking the blood of Christian babies and pacts with Baphomet.
The latter group tend to attribute their power to more sinister forces, whether cabal or Kabbalah.
JoeShipman said...
Very interesting to see how differential equations have been raised here in the context of understanding the financial meltdown. I built models using this equation to evaluate CMOs for Bloomberg back in the 90's, and I eventually couldn't stand the abuse of mathematics anymore. (Don't get me started on the even more elaborate and ridiculous epicycles such as "GARCH".) Diff EQs are great for physics, but fictitious when applied to markets, and overreliance on pseudo-physical mathematical models is what allowed a bunch of people with very high IQs but no common sense to almost destroy the financial industry.
TheCrankyProfessor said...
My next four years are looking better all the time - next, a promise for Larry Summers at Treasury: a man MOST of Big O's voters already hate.
policraticus said...
Wait, Obama is thinking about nominating a Fanny/Freddie crony and a fellow liberal who would likely gut the War on Terror and ignore the lessons of 9/11??!?!
That's unpossible!
Ken said...
Gorelick is now involved in representing Duke University in a lawsuit filed in the rape-fraud case. She is as slimy as they get. If Obama wants a war with Republicans he couldn't find a better trigger than Gorelick.
mc said...
Ohh Anne,
I can't say "you poor girl" because you are too sharp for such patronizing sympathy...
And...yet...WTF could possibly be flirting with surprising you here?
I read and admire so much your blog that I must ask myself "WTF could possibly be flirting with surprising you here?" of myself.
Now that I am questioning your judgment I must question my own to be fair.
Hope. Change. Chicago. Politics.
Self-reproval.
I feel for us both. (what's a suitable emoticon to end on?)
Mister Snitch! said...
"A better question: Why haven't liberals attacked her?"
Fear.
Just a 4-letter word, like Hope. You're going to start finding them interchangeable, as the Chicago Gang takes hold of power.
"There is nothing in that article that says Obama is actually considering this pick. So when it says: Being considered for I have to ask By whom?"
Irrelevant. When a spectacularly bad choice like this is being bandied about, people of good conscience are obliged to speak out. After the fact (when we Know For Certain) is a bit late for debate.
Obama needs to know that the people who elected him for his ideals are now going to hold him to account. Unless, of course, that's really not why they elected him, and they aren't.
Ralph said...
in which she allegedly created an intelligence “wall”
"Allegedly"?--didn't the memo Ashcroft waved have her name on it?
The reaction of the other commissioners to her obvious conflict of interest (and bad policy) should have deflated the commission's credibility with the public. Why didn't it?
Richard said...
I'm still giving Obama the benefit of the doubt. I did not vote for him, and won't vote for him in 4 years, but as of Jan 20 he is my President.
OTOH, if he actually does nominate Gorelick, I'll be taking what little money I have left out of the stock market and buying shotguns and canned food.
Shy of announcing re-education camps for Republicans, I can't think of anything worse he could do.
AT90405 said...
"Please explain how Fannie and Freddie are a large part of the current crisis. And then explain how Jamie Gorelick contributed to that. I'm really interested in hearing your nonsensical babble."
DTL, you are an abject idiot. Jamie Gorelick was an executive officer at Fannie Mae from 1997 through 2003. She was personally responsible for Fannie Mae increasing the amount of sub-prime CRA mortgages that it acquired. Here is what this miscreant said in a press release published by BusinessWire in 2001:
""Our approach to our lenders is `CRA Your Way'," Gorelick said. "Fannie Mae will buy CRA loans from lenders' portfolios; we'll package them into securities; we'll purchase CRA mortgages at the point of origination; and we'll create customized CRA-targeted securities. This expanded approach has improved liquidity in the secondary market for CRA product, and has helped our lenders leverage even more CRA lending. Lenders now have the flexibility to use their own, customized loan products," Gorelick said. [http://findarticles.com/p/articles/mi_m0EIN/is_2001_May_7/ai_74223918]
Does this answer your question moron? She was promoting the CRA loans and increasing the number of these loans. The New York Times reported on this idiotic policy in 1999 and said, "In moving, even tentatively, into this new area of lending, Fannie Mae is taking on significantly more risk, which may not pose any difficulties during flush economic times. But the government-subsidized corporation may run into trouble in an economic downturn, prompting a government rescue similar to that of the savings and loan industry in the 1980's." [http://query.nytimes.com/gst/fullpage.html?res=9C0DE7DB153EF933A0575AC0A96F958260&scp=1&sq=september%201999%20fannie%20mae&st=cse].
According to Peter Wallison, by 2008, Fannie and Freddie held or had securitized over 1 Trillion dollars of sub-prime loans which constitutes one third of the entire sub-prime market. Those sub-prime mortgages were acquired pursuant to the idiotic policies of Gorelick, Raines and the Clinton Administration. This whole mess shows Gorelick is incompetent. But then again we already knew she was an idiot based on her role in hindering our intelligence agencies from sharing information about terrorists with the Justice Dept. All of that is separate from her role in cooking Fannie's books so that she could profit to the tune of 26 million in bonuses.
Democrats are hopeless. You know what you know and it just doesn't matter what the facts are. We are in serious trouble for the next four years, if not longer. Democrats screw up everything they touch and they are too stupid to understand what they have done.
Hucbald said...
How about appointing someone without a law degree as AG? In fact, how about a non-lawyer for SCOTUS?
Just a thought... but an excellent one.
amba said...
But aren't you curious as to how someone becomes so fixated on something random, like Jewish people
What's random about it?!! It's deeply traditional, a well-trodden rut that it takes no effort or thought to fall into. It's a strong attractor, with the force of immemorial habit even if it is nonsensical from the get-go. People are fixated on Jews because people before them were fixated on Jews, so Jews must be the ones to be fixated on.
It reminds me for some reason of all the exultation that we've overcome racism. When you think about it, racism is so astonishingly moronic in the first place that thinking overcoming it is a great achievement only makes humankind look really primitive and pathetic. Judging someone based on the color of their skin -- it's ludicrous!
Kevin said...
1jpb, I have a Master's degree in ChemE and have practiced in the field for 20 years. No ChemE I know brags about their intellect to strangers on the Internet. The satisfaction of doing the job well is sufficient. Perhaps you are an apostate ChemE for a reason.
Dody Jane said...
They need her; she has the blueprints for the wall they want to re-build...
Cato Renasci said...
This is just the beginning -- Ann, you were a fool to vote for Obama. Unfortunately, the rest of us who saw exactly what kind of thug Obama is will have to live with his "rule" (his word) by decree (executive order).
Next it will be his civilian "blueshirts" parallel paramilitary group silencing dissent.
A pox on everyone who voted for Obama!
Cato Renasci said...
Cedarford: Jaime Gorelick is Jewish - I knew her sister.
Rocker 419 said...
No surprises here. Obama ran as a blank slate and you could put whatever beliefs and values you wanted unto him. Now that he actually has to make decisions, most Americans will be shocked and stunned over the next few months and years. We did try to warn you Ann. All that talk about Ayers and Wright wasn't smear, dear. It was evidence of his thinking and his value system. Believe me, Ann, you have no idea what this con-man is about to do next. Stay tuned...
Roux said...
Hopey Hopey Change Change....blah blah blah
amba said...
Pat from Stubborn Facts, who has worked in local politics, has a very interesting take in a comment at my blog on what kind of play this could very well be.
amba said...
Hint: he thinks it could be "serious political hardball" aimed at the Clintons.
Cincinnatus said...
FM FM disaster: Nobody saw this coming? Google: Bush Fannie Mae 2003.
Patm said...
I see disaster in the making. A fast disaster.
Kirk Parker said...
Freeman,
"But aren't you curious as to how someone becomes so fixated on something random"
Heck no, that's why we have psychologists.
|
|
# Homework Help: Simple derivative question
1. Dec 4, 2007
### projection
hi. i need some help with a derivative question. i can get the answer and all but it takes a long time to do it.
i need to find the instantaneous rate of change expression (derivative), and i MUST use the first principle.
1/(25x+4)^4
i can do this with the chain rule method quite easily. the first principle method takes forever, as i use pascal's triangle to expand the brackets and all.
is there some sort of quicker method, maybe by substituting in some other variable or something? i really don't want to do ten or so of these that take 10 minutes each to get through.
2. Dec 4, 2007
### rs1n
Can you clarify (i.e. state) the first principle? This term is unfamiliar to some of us without further context. Do you mean using the limit definition of the derivative?
3. Dec 4, 2007
### projection
yes.
$$\lim_{h\to 0}\frac{f(x+h)-f(x)}{h}$$
4. Dec 4, 2007
### rs1n
I'll start off by saying: what an awful assignment. One can generally assess whether or not a student understands the limit definition without resorting to such tedious assignments in symbolic manipulation.
That said, you are probably doing it correctly; and yes, it is tedious. However, you may be able try the following (via the limit definition):
The chain rule can be derived from the limit definition as follows:
$$\lim_{h\to 0} \frac{f[g(x+h)] - f[g(x)]}{(x+h)-x} = \lim_{h\to 0} \left( \frac{f[g(x+h)] - f[g(x)]}{g(x+h)-g(x)} \cdot \frac{g(x+h)-g(x)}{(x+h)-x}\right)$$
Using the properties of products of limits, we obtain:
$$\left( \lim_{h\to 0} \frac{f[g(x+h)] - f[g(x)]}{g(x+h)-g(x)} \right) \cdot \left(\lim_{h\to 0} \frac{g(x+h)-g(x)}{(x+h)-x}\right) = f'[g(x)]\cdot g'(x)$$
In this last equation, the $$g(x+h)$$ and $$g(x)$$ are now the "x-coordinates" and the $$f[g(x+h)]$$ and $$f[g(x)]$$ are the corresponding "y-coordinates" (notice that the fraction is essentially the slope through the two "points"). Perhaps you may be allowed to tailor this derivation to your own problems in order to reduce the amount of symbolic manipulation.
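For instance, applied to the posted function (a sketch using the derivation above, with $g(x)=25x+4$ and $f(u)=u^{-4}$), the limit computation collapses to two short pieces:

```latex
% g(x) = 25x + 4, so
\lim_{h\to 0} \frac{g(x+h)-g(x)}{h} = \lim_{h\to 0}\frac{25h}{h} = 25
% Writing u = g(x) and k = g(x+h) - g(x) (so k -> 0 as h -> 0):
\lim_{k\to 0} \frac{\frac{1}{(u+k)^4}-\frac{1}{u^4}}{k}
  = \lim_{k\to 0} \frac{u^4-(u+k)^4}{k\,u^4 (u+k)^4}
  = \frac{-4u^3}{u^8} = -\frac{4}{u^5}
% Multiplying the two limits, as in the chain-rule derivation:
\frac{d}{dx}\,\frac{1}{(25x+4)^4} = -\frac{4}{(25x+4)^5}\cdot 25
  = -\frac{100}{(25x+4)^5}
```

Only one binomial expansion is ever needed this way, instead of expanding $(25(x+h)+4)^4$ in full.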
|
|
# Maximum principle of harmonic function without mean value formula
Is there any way to prove the maximum principle for harmonic functions without the mean value formula? In other words, I would like to show $$\max_{\overline{\Omega}}(f)=\max_{\partial \Omega}(f)$$ for a harmonic function $f$ on a bounded domain $\Omega$ without using the formula $$f(x)=\frac{1}{V(B(x,r))}\int_{B(x,r)}f(z)dz.$$
-
Yes, just think of the 1D case: $f''=0$ on $(a,b)$. The function $f$ attains a maximum in $[a,b]$, and you must rule out that this maximum point lies in $(a,b)$. You first treat the situation where $f''>0$ in $(a,b)$, and then perturb $f$, for example to $f(x)+\varepsilon x^2$.
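The same perturbation idea works in $\mathbb{R}^n$; a sketch (assuming $f \in C^2(\Omega)\cap C(\overline{\Omega})$ with $\Delta f = 0$):

```latex
% Perturb: for \varepsilon > 0 let
f_\varepsilon(x) = f(x) + \varepsilon |x|^2 ,\qquad
\Delta f_\varepsilon = \Delta f + 2n\varepsilon = 2n\varepsilon > 0 .
% At an interior maximum of f_\varepsilon the Hessian is negative
% semidefinite, so \Delta f_\varepsilon \le 0 there: a contradiction.
% Hence f_\varepsilon attains its maximum on \partial\Omega, and with
% C = \sup_{x \in \overline{\Omega}} |x|^2 < \infty (\Omega is bounded):
\max_{\overline{\Omega}} f \le \max_{\overline{\Omega}} f_\varepsilon
  \le \max_{\partial\Omega} f + \varepsilon C
  \longrightarrow \max_{\partial\Omega} f
  \quad (\varepsilon \to 0) .
```

No mean value property is used, only that a $C^2$ function has a negative semidefinite Hessian at an interior maximum.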
|
|
Homework Help: Number of electron states in 1Bz
1. Sep 20, 2007
malawi_glenn
1. The problem statement, all variables and given/known data
How many electron states are there in the irreducible part of the 1BZ? (One irreducible part is 1/48 of the 1BZ.)
2. Relevant equations
Volume of 1Bz:
$$\Omega = \dfrac{(2 \pi)^3}{V}$$
Volume of ordinary cell:
$$V = a^3$$
Density of states, I assume that the temperature is so low that fermi-dirac function does not play big part, hence max energy = fermi energy.
$$D(\epsilon ) = \dfrac{V}{2 \pi ^2} \left( \dfrac{2m}{\hbar ^2} \right)^{3/2} \sqrt{\epsilon }$$
3. The attempt at a solution
total number of states:
$$\int_0^{\epsilon _{F}} D(\epsilon ) d \epsilon$$
just gives me N = N
:S
What do I do wrong? My plan was first to calculate the number of states in ordinary space, then convert into reciprocal space...
2. Sep 20, 2007
malawi_glenn
Never mind, I will ask my teacher tomorrow.
3. Sep 21, 2007
malawi_glenn
First Brillouin zone, a very common short notation for it...
And I realized that there are N k-values in the 1 BZ, so there are N/24 electron states in the irreducible 1 BZ, since there are two electron states per k-value due to the Pauli principle.
|
|
lilypond-user
[Top][All Lists]
## Re: making staves full of rests disappear in a full score
From: Xavier Scheuer Subject: Re: making staves full of rests disappear in a full score Date: Tue, 28 Dec 2010 15:53:36 +0100
```On 28 December 2010 15:46, Frauke Jurgensen <address@hidden> wrote:
>
> Hello all!
>
> I have a large orchestral score, in which individual instruments at
> times are resting for many pages. I would like these instruments to
> disappear (those staves not to be shown) on systems where they have
> nothing but rests.
>
> The only way I can think of to do this, is to create a new staff at
> the point where the instrument appears, and to do so each time.
>
> Is there a less cumbersome way of doing this?
Hi!
Yes there is!
See Notation Reference manual:
NR 1.6.2 Modifying single staves > Hiding staves
version 2.12:
\layout {
  \context {
    \RemoveEmptyStaffContext
  }
}
http://lilypond.org/doc/v2.12/Documentation/user/lilypond/Modifying-single-staves#Hiding-staves
version 2.13:
\layout {
  \context {
    \Staff \RemoveEmptyStaves
  }
}
http://lilypond.org/doc/v2.13/Documentation/notation/modifying-single-staves.html#hiding-staves
Cheers,
Xavier
--
|
|
## Cheat Sheet: Algorithms for Supervised and Unsupervised Learning
A nice “cheat sheet,” more of a summary of key information on algorithms for supervised and unsupervised learning.
|
|
# Better handling of indentation and the TAB key when editing posts
This user script changes the behavior of a few keys (most notably the Tab key) within the post editor to behave more like it does in IDEs or text editors:
• When multiple lines are selected, Tab and Shift-Tab indent and dedent these lines
• When nothing is selected, Tab and Shift-Tab insert or remove whitespace to align the cursor on a tab boundary
• When the cursor is within the left margin of a line, Backspace removes whitespace to align the cursor on a tab boundary (in other words, it may delete more than just one space character)
• On indented lines, the Home key toggles the cursor between the actual beginning of the line and the beginning of the real content (in other words, it jumps back and forth to before and after the leading whitespace). This only happens on lines that are indented by at least four spaces or a tab, since it can be confusing for the following reason: When you press Home in the text editor, you expect the cursor to jump to the beginning of the line as it is displayed, which (due to wrapping) may be different from the actual previous newline character.
• So you don't have to reach for the mouse to tab out of the editor you can press and release the Ctrl key, and the next key press will not be intercepted; thus Tab takes you out of the editor. Pressing and releasing Ctrl will grey out the text editor until the next keystroke to clarify this. If you think this is too awkward, I'm open to other suggestions, but there should be some way to tab out of the editor .
Note that this will never insert TAB characters, only spaces. It does however handle already-present TABs, and it handles them the same way the Markdown converter does.
Feedback very welcome, except for discussions about a) tabs vs. spaces, and b) tab width :)
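The script itself runs as browser JavaScript, but the multi-line Tab/Shift-Tab behavior can be sketched language-agnostically. Here is a rough Python sketch; `indent_lines` and the tab width of 4 are illustrative assumptions, not the script's actual code:

```python
TAB = 4  # assumed tab width; the script only ever inserts spaces

def indent_lines(text, sel_start, sel_end, dedent=False):
    """Indent (or dedent) every line touched by the selection,
    as Tab / Shift-Tab do when multiple lines are selected."""
    # Expand the selection to whole lines.
    start = text.rfind("\n", 0, sel_start) + 1
    end = text.find("\n", sel_end)
    if end == -1:
        end = len(text)
    lines = text[start:end].split("\n")
    if dedent:
        # Remove up to TAB leading spaces from each line.
        new_lines = [ln[min(TAB, len(ln) - len(ln.lstrip(" "))):] for ln in lines]
    else:
        # Indent non-empty lines by TAB spaces.
        new_lines = [" " * TAB + ln if ln else ln for ln in lines]
    return text[:start] + "\n".join(new_lines) + text[end:]
```

A real implementation also has to restore the selection afterwards and round the cursor to a tab boundary in the single-cursor case.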
Use one of these two links to install:
-
Question: (awesome script by the way!) how does the auto-update feature work around the same-domain restriction for localStorage? – Nathan Osman♦ Mar 23 '12 at 22:20
It doesn't. It checks for updates on every SE site you visit. – balpha♦ Mar 23 '12 at 22:21
Great script, when creating questions. I tried to use its features when editing a question accessed through the review-beta system, with no success. Is this the right place to file this bug? – Spontifixus Sep 26 '12 at 15:17
This uses the jQuery .on(...) feature which is only available in 1.7 and up. Careers is still using an older version and you get an exception on every page right now with this installed.
|
|
# projectile help
• November 11th 2008, 09:48 AM
khuezy
projectile help
Hi, a projectile is fired with an initial speed of 500 m/s and angle of elevation 30 degrees.
Find the range of the projectile, the max height, and speed at impact
Thanks.
• November 11th 2008, 11:56 AM
Arch_Stanton
Firstly, we need to compute the component velocities.
$v_y=v_0 \sin \alpha$
$v_x=v_0 \cos \alpha$
and duration of this movement:
$v_y=gt \Rightarrow t=\frac{v_y}{g}$
Maximum height is: $h=\frac{gt^2}{2}=\frac{v_y^2}{2g}=\frac{v_0^2sin^2 \alpha}{2g}$
The range: $x=v_xt=v_0\cos\alpha \frac{v_0\sin \alpha}{g}=\frac{v_0^2\sin \alpha \cos\alpha}{g}$
The speed at impact is equal to the initial speed.
• November 12th 2008, 02:34 PM
khuezy
hmm
what am I doing wrong w/ the range?
I'm getting a 11km rather than 22km(book)
• November 12th 2008, 03:06 PM
skeeter
because the correct range equation is ...
$\frac{2v_0^2 \cdot \sin{\theta}\cos{\theta}}{g} = \frac{v_0^2 \cdot \sin(2\theta)}{g}$
• November 12th 2008, 03:33 PM
khuezy
thank you
!
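Plugging in the numbers with skeeter's corrected range formula (a quick check in Python, taking g = 9.8 m/s²):

```python
import math

v0, theta, g = 500.0, math.radians(30), 9.8

vy = v0 * math.sin(theta)      # vertical component, 250 m/s
vx = v0 * math.cos(theta)      # horizontal component

t_flight = 2 * vy / g          # up AND down: the factor 2 skeeter points out
rng = vx * t_flight            # = v0^2 sin(2*theta) / g
h_max = vy**2 / (2 * g)
v_impact = math.hypot(vx, vy)  # lands at launch height, so same speed

# range ~22 km (matches the book), max height ~3.19 km, impact speed 500 m/s
print(round(rng), round(h_max), round(v_impact))
```

Dropping the factor 2 gives exactly the ~11 km khuezy was getting.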
|
|
# [thelist] Is http://here.localhost/ possible ?
darren darren at web-bitch.co.uk
Wed Jun 26 16:12:01 CDT 2002
On Wednesday, June 26, 2002 at 19:28, Craig wrote:
C> Thanks for all your responses so quickly!
C> Unfortunately IIS5.1 under XP Pro only allows one website so I can't point
C> the different hostnames to different websites :(
have a look in the inetpub/adminscripts directory. there should be one
called something like mkw3site.vbs which will create a new top-level
website.
the syntax is something like:
mkw3site -r d:\path\to\site -h hostName --DontStart -t "Site Title" -v
the hostname is the same name that you would put in your hosts file, and is
shown under the hosts header name under iis.
you can only have one site running at once, but you can have several
websites pointing to different places.
hth,
darren
|
|
# Why only angular momentum is conserved and not Kinetic Energy in the given problem
1. Oct 8, 2012
### Tanya Sharma
1. The problem statement, all variables and given/known data
A cockroach with mass m rides on a disk of mass 6.00m and radius R. The disk rotates like a merry-go-round around its central axis at angular speed $ω_i$ = 1.50 rad/s. The cockroach is initially at radius r = 0.800R, but then it crawls out to the rim of the disk. Treat the cockroach as a particle. What then is the angular speed?
Using conservation of angular momentum the correct answer is obtained. But I am not clear on why we can't use conservation of energy in this problem, since no external forces are acting on the system of disk and cockroach.
2. Relevant equations
3. The attempt at a solution
2. Oct 8, 2012
### rcgldr
Re: Why only angular momentum is conserved and not Kinetic Energy in the given proble
In this case, the cockroach does "negative" work as it crawls towards the outside of the disk. The "negative" work done is the integral of the centripetal force (as a function of r) × Δr (change in radius) from .8 R to 1.0 R.
3. Oct 8, 2012
### Tanya Sharma
Re: Why only angular momentum is conserved and not Kinetic Energy in the given proble
rcgldr....Thank you for the response....Why centripetal force? Please can you elaborate...
4. Oct 8, 2012
### rcgldr
Re: Why only angular momentum is conserved and not Kinetic Energy in the given proble
Work = force x distance. In this case the force is the centripetal force, and the distance is radial, from .8 R to 1.0 R. The force varies as r goes from .8 R to 1.0 R, so using an integral would be the normal way to determine the work done. Since the movement is outwards, the work done is "negative". During its movement outwards, the path of the cockroach is a spiral, so there's a component of opposing force in the direction of the spiral path, decreasing the speed.
grinding out the math:
let r = position of bug from center
define unit of mass to be "1 bug", so that m = 1
define unit of length to be R so that R = 1
define unit of angular velocity to be 1 radian / second.
Angular momentum is conserved:
I_B = inertia of bug = m r² = r² {since m = 1}
I_D = inertia of disk = (1/2) 6 m R² = 3 {since m = 1 and R = 1}
Initial state:
r = .8
w(r) = w(.8) = 1.5
L(.8) = (I_B + I_D) w(.8) = (.64 + 3) 1.5 = 5.46
Angular momentum is conserved:
L(r) = (I_B + I_D) w(r) = 5.46
L(r) = (r² + 3) w(r) = 5.46
w(r) = 5.46 / (r² + 3)
w(1) = 5.46 / (1 + 3) = 1.365
energy versus r = e(r)
e(r) = 1/2 (I_B + I_D) w(r)²
e(r) = 1/2 (r² + 3) (5.46 / (r² + 3))²
e(r) = 14.9058 / (r² + 3)
e(0.8) = 14.9058 / (.64 + 3) = 4.09500
e(1.0) = 14.9058 / (1.0 + 3) = 3.72645
energy change = e(1) - e(.8) = -0.36855
velocity of bug versus r = v(r)
v(r) = w(r) r = 5.46 r / (r² + 3)
force on bug = f(r)
f(r) = -m v(r)² / r = -1 · (5.46 r / (r² + 3))² / r
f(r) = -29.8116 r / (r² + 3)²
work done by bug moving from .8 to 1 R
$$w = \int_{.8}^1 - 29.8116 \ r \ dr / (r^2 + 3)^2$$
$$w = \left. 14.9058 / (r^2 + 3) \right]_{.8}^{1}$$
(note this matches the formula for energy versus r)
w = 14.9058 (1/(1+3) - 1/(.64+3)) = -0.36855
so the negative work done by the bug matches the energy change and accounts for the loss in energy.
Last edited: Oct 8, 2012
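rcgldr's numbers can be reproduced directly; a quick Python check, using the same units as the post (m = 1 bug, R = 1), with the work integral done as a simple midpoint Riemann sum:

```python
# Units as in the post: m = 1 "bug", R = 1, angular speed in rad/s.
I_D = 3.0                  # disk inertia: (1/2)(6m)R^2
L = (0.8**2 + I_D) * 1.5   # initial angular momentum = 5.46

def w(r):                  # angular speed from conservation of L
    return L / (r**2 + I_D)

def e(r):                  # kinetic energy of bug + disk
    return 0.5 * (r**2 + I_D) * w(r)**2

# Work done by the centripetal force f(r) = -m v(r)^2 / r on the bug
# as it moves from r = 0.8 to 1.0, via a midpoint Riemann sum.
n = 100_000
dr = 0.2 / n
work = 0.0
for i in range(n):
    r = 0.8 + (i + 0.5) * dr
    v = w(r) * r
    work += (-v**2 / r) * dr

print(w(1.0))           # ~1.365 rad/s
print(e(1.0) - e(0.8))  # ~ -0.36855
print(work)             # matches the energy change
```

The numerically integrated work agrees with the kinetic-energy change, which is the whole point of post #4.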
5. Oct 8, 2012
### Tanya Sharma
Re: Why only angular momentum is conserved and not Kinetic Energy in the given proble
rcgldr...Thank you very much....
Plz can you explain it ...
Thanks
6. Oct 8, 2012
### rcgldr
Re: Why only angular momentum is conserved and not Kinetic Energy in the given proble
That's the centripetal force on the bug as a function of r (the radius). I used -m v(r)2 / r since the direction of centripetal force is inwards, and I choose the unit of mass to be "1 bug", so m = 1.
For a similar example, imagine that you are on a merry-go-round, that its mass is 6 times yours, and that its inertia is the same as a uniform disk of the same radius. This time, imagine that you start on the outside of the merry-go-round and pull yourself inwards. Angular momentum is conserved, and the work you exert in pulling yourself inwards increases the energy by the amount of work done. Using the original problem scenario, when you're at the outside edge, R, the rate of rotation is 1.365 rad / sec. At .8 R, it's 1.5 rad / sec, and when you reach the middle (0 R), it's 5.46 / (0 + 3) = 1.82 rad / sec, and the energy is 4.9686.
For another example, imagine a puck whirling around on a frictionless table, attached to a string that is being pulled at the center through a hole. All of the angular momentum and energy is in the puck, so the math is different. The math is shown in post #3 of this old thread:
Last edited: Oct 8, 2012
7. Oct 8, 2012
### Tanya Sharma
Re: Why only angular momentum is conserved and not Kinetic Energy in the given proble
Thanks for the wonderful explanation....
Somehow one thing is not very clear.... How have you written this force as a function of r? I want to understand why f(r) = -v(r)² / r ?
8. Oct 8, 2012
### rcgldr
Re: Why only angular momentum is conserved and not Kinetic Energy in the given proble
Centripetal force = m v² / r, and I defined the unit of mass to be "1 bug" so that m = 1. In this case the velocity v can be expressed as a function of the radius of the bug's current location on the disk, and I use v(r) to mean v as a function of r. Linear velocity for a point at r = ω r, where ω is the angular velocity. Since ω can also be expressed as a function of r, I used ω(r).
Last edited: Oct 8, 2012
9. Oct 8, 2012
### Tanya Sharma
Re: Why only angular momentum is conserved and not Kinetic Energy in the given proble
rcgldr.... Centripetal force is $\frac{mv^2}{r}$....why is m not included?
10. Oct 8, 2012
### ehild
Re: Why only angular momentum is conserved and not Kinetic Energy in the given proble
The absence of external forces does not imply conservation of kinetic energy for a system. Think of two colliding bodies: there is no external force, but kinetic energy is conserved only in the very special case of an elastic collision. If the internal forces, the forces of interaction, are not conservative, the energy of a system of bodies is not conserved.
ehild
11. Oct 8, 2012
### rcgldr
Re: Why only angular momentum is conserved and not Kinetic Energy in the given proble
Sorry, I didn't understand what you were asking. The mass of the bug was given as m, while the mass of the disk was given as 6 m. I just defined the unit of mass to be "1 bug", so m = 1, and the unit of length to be "R", so that R = 1 for this problem to keep the math simple. I also left out other units, such as radians / second, and the units for r. Centripetal acceleration is v² / r, while centripetal force = m v² / r. Since m = "1 bug", I left out the unit. I updated my previous posts to fix this, starting with post #4. Again, sorry for leaving out the mass term.
Last edited: Oct 8, 2012
12. Oct 8, 2012
### Tanya Sharma
Re: Why only angular momentum is conserved and not Kinetic Energy in the given proble
In elastic collisions the bodies do exert forces; say body 1 exerts force f1 on body 2. This force is doing work, as the bodies may move during the collision. Why is kinetic energy conserved here even though work is being done?
13. Oct 8, 2012
### rcgldr
Re: Why only angular momentum is conserved and not Kinetic Energy in the given proble
Consider the two bodies as a closed system with no external forces. The only forces are internal to this closed two body system, and assuming elastic collisions, no energy is lost, so the total energy of the system remains constant. During the collision, some or all of the energy is stored as potential energy related to deformation (compression) of the two bodies.
14. Oct 8, 2012
### Tanya Sharma
Re: Why only angular momentum is conserved and not Kinetic Energy in the given proble
rcgldr.... how is the original bug problem different from an elastic collision, in the sense that both are two-body systems having no external forces and one body is doing work? Work is being done in collisions also, isn't it?
15. Oct 8, 2012
### ehild
Re: Why only angular momentum is conserved and not Kinetic Energy in the given proble
Elastic collision means the transformation of kinetic energy of the bodies into some elastic energy for a short time, and then transformation back to kinetic energy.
ehild
16. Oct 8, 2012
### rcgldr
Re: Why only angular momentum is conserved and not Kinetic Energy in the given proble
Although the bug problem involves an internal force, ultimately the negative work done by the bug is being converted into heat. If the bug were moving inwards, then potential energy within the bug would be converted into kinetic energy. A similar situation occurs when an ice skater pulls in his/her arms during a spin. The skater uses chemical potential energy to perform "positive" work. The bug could be replaced by some type of mechanical device that converts the negative work into potential energy within the mechanical bug. Assuming that the mechanical bug was 100% efficient, the total energy (potential + kinetic) of the mechanical bug + disk closed system would remain constant.
For the elastic two body system, during the period of collision, the kinetic energy of the system is converted into potential energy within the two bodies, as if they were springs. Since the collision is elastic, once they separate, all of that potential energy is returned as kinetic energy.
17. Oct 8, 2012
### Tanya Sharma
Re: Why only angular momentum is conserved and not Kinetic Energy in the given proble
ehild..... rcgldr..... thank you very much for your time and energy.... wonderful insight into conservation of angular momentum and energy
18. Oct 8, 2012
### rcgldr
Re: Why only angular momentum is conserved and not Kinetic Energy in the given proble
The issue with the bug problem is that energy is lost, since a real bug can't increase its internal potential energy by doing negative work. Again, sorry for not originally including how I defined the units of mass and length in post #4 to keep the equations simple (it's fixed now).
|
|
#### Start
2020-02-10 04:15 AKST
## Kattis Set 05
#### End
2020-02-17 00:30 AKST
# Problem A: Short Sell
Simone is learning to trade crypto currencies. Right now, she is looking into a new currency called CryptoKattis. A CryptoKattis can be used to improve your Kattis ranklist score without having to solve problems.
Simone, unlike most competitive programmers, is very perceptive of competition trends. She has lately noticed many coders talking about a new online judge called Doggo, and suspect that using the Kattis judge will soon fall out of fashion. This, of course, will cause the CryptoKattis to rapidly fall in value once demand fades.
To take advantage of this, Simone has performed very careful market research. This has allowed her to estimate the price of a CryptoKattis in dollars during the next $N$ days. She intends to use this data to perform a short sell. This means she will at some day borrow $100$ CryptoKattis from a bank, immediately selling the currency. At some other day, she must purchase the same amount of CryptoKattis to repay her loan. For every day between (and including) these two days, she must pay $K$ dollars in interest. What is the maximum profit Simone can make by carefully choosing when to borrow and repay the $100$ CryptoKattis?
## Input
The first line of the input contains two integers $1 \le N \le 100\, 000$ and $1 \le K \le 100$. $N$ is the number of days you know the CryptoKattis price of, and $K$ is the cost in dollars of borrowing CryptoKattis per day.
The next line contains $N$ integers separated by spaces. These are the CryptoKattis prices during the $N$ days. Each CryptoKattis price is between $1$ and $100\, 000$ dollars.
## Output
Output a single decimal number; the maximum profit you can make performing a short sell. If you cannot make a profit no matter what short sell you make, output $0$.
Sample Input 1
5 10
1000 980 960 940 10

Sample Output 1
98950
Sample Input 2
5 100
100 100 100 103 100

Sample Output 2
100
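One possible O(N) approach (a sketch, not an official solution): rewrite the profit for borrowing on day i and repaying on day j so that a single running maximum over i suffices. Checked against both samples:

```python
def short_sell(prices, k):
    # Profit for borrowing on day i and repaying on day j > i (0-based):
    #   100*(p[i] - p[j]) - k*(j - i + 1)
    # = (100*p[i] + k*i) - (100*p[j] + k*(j + 1)),
    # so keep a running maximum of the first bracket over all i < j.
    best_borrow = None   # max of 100*p[i] + k*i over days seen so far
    best = 0             # never forced to trade at a loss
    for j, p in enumerate(prices):
        if best_borrow is not None:
            best = max(best, best_borrow - 100 * p - k * (j + 1))
        bid = 100 * p + k * j
        best_borrow = bid if best_borrow is None else max(best_borrow, bid)
    return best

print(short_sell([1000, 980, 960, 940, 10], 10))   # 98950
print(short_sell([100, 100, 100, 103, 100], 100))  # 100
```

In sample 2 the best play is to borrow on day 4 (price 103) and repay on day 5 (price 100): 100·3 − 100·2 = 100.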
|
|
# Mertens function limits using $\phi+2$n
As we can see in the plot below, Mertens function: $M(x)\equiv \sum_{n=1}^{x}\mu(n)$ has wild swings from positive to negative and back again.
When we use: $$x=\frac{1}{2+\frac{1}{\phi+2}}\text{, where }\phi \text{ is the golden ratio,}$$ as the power for each $n$ we get the blue line and using its negative we get the red line. Both lines hug the extremes of the sum quite nicely. I have checked to about $10^{10}$ without finding any exceptions.
I have found only one reference to $\phi+2$ in the literature, specifically in "Mathematical Constants" by Steven R. Finch, page 418, where it is shown as the Tutte-Beraha constant $B_{10}$.
My question is: could $\phi+2$ be used to explain the behavior of the Mertens function? Or would this be a case of the Law of Small Numbers?
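A small numerical sanity check in Python; this only re-verifies the claimed envelope $|M(x)| \le x^{1/(2+1/(\phi+2))}$ for $x \le 10{,}000$, far below the $10^{10}$ reported above:

```python
# Möbius function via a divisor-sum sieve: sum_{d|n} mu(d) = [n == 1].
N = 10_000
mu = [0] * (N + 1)
mu[1] = 1
for n in range(1, N + 1):
    for k in range(2 * n, N + 1, n):
        mu[k] -= mu[n]

phi = (1 + 5**0.5) / 2
expo = 1 / (2 + 1 / (phi + 2))   # ~0.43929

M, ok = 0, True                  # M accumulates the Mertens function
for x in range(1, N + 1):
    M += mu[x]
    if abs(M) > x**expo:
        ok = False
print(ok)                        # no exception below 10,000
```

The tightest small cases are x = 5 (|M| = 2 vs bound ≈ 2.03) and x = 13 (|M| = 3 vs ≈ 3.09), so the envelope only barely survives even at the start.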
-
The name of the mathematician behind this is Mertens; I thus changed Merten's to Mertens; if you would still like to have some possessive s please reintroduce it in a consistent way. – quid Mar 30 '13 at 16:13
I'm not entirely sure I understand the question. However, I believe you are asking if the inequality $|M(x)| \leq x^{\frac{1}{2+1/(\phi+2)}}$ might hold.
It is known that the Mertens function satisfies $M(x) \geq 1.2 \sqrt{x}$ (as well as $M(x) \leq - 1.2 \sqrt{x}$) infinitely often (see the paper "The Mertens Conjecture Revisited" by Tadej Kotnik and Herman te Riele, although this is classical with a constant smaller than 1).
This implies that your inequality is violated infinitely often for larger values of $x$.
-
While the specific question appears to be answered, I would like to add a more general one:
The study of the Mertens function $M(x)= \sum_{n \le x} \mu(n)$ is a notorious problem, and (thus) any new insight based on numerical investigations for very small values (in this context) seems most unlikely.
The function $M(x)$ is in a very vague sense about square-root-ish and thus one sometimes writes $q(x)=M(x)/\sqrt{x}$.
On the one hand, there used to be an old conjecture that $|q(x)|\lt 1$ (Mertens' conjecture), which was refuted by Odlyzko and te Riele (1985) and was already considered very unlikely to be true before that; Mark Lewko mentioned the current 'record' constants. But it is believed that $q(x)$ is in fact unbounded.
On the other hand, an estimate $q(x)= O(x^{\varepsilon})$ for every $\varepsilon > 0$ is equivalent to the Riemann Hypothesis. More precisely, by a recent result of Soundararajan it is known that under RH one has $$q(x)= O(\exp( \sqrt{\log x} (\log \log x)^{14})),$$ and Balazard and de Roton showed that $14$ can be optimized to $5/2 + \varepsilon$ for every $\varepsilon > 0$.
Yet, it seems to be not clear (even conjecturally) how $q(x)$ should actually behave.
Kotnik and van de Lune (Exp. Math. 13.4) made the conjecture that $$q(x)= \Omega_{\pm}(\sqrt{ \log \log \log x}),$$ and Kotnik and te Riele (mentioned in Mark Lewko's answer) discuss that extremal observed values of $q(x)$ are close to $\pm \frac{1}{2}(\sqrt{ \log \log \log x})$.
However, and as mentioned there, if this would remain (about) true 'forever' this would contradict Ng (2004) [and Gonek (unpublished)] conjecture that limes superior and limes inferior of $$\frac{q(x)}{( \log \log \log x)^{5/4}}$$ are in fact $\pm B$ for some positive and finite $B$.
Yet, in the 70's still other conjectures were made, namely that the limit of
And, there would still be different contributions to this. For example, Kaczorowski (Journal London Math. Soc. 2007) showed that a 'twisted' version of $M(x)$ is fairly large, namely $$\sum_{n \le x} \mu(n) (\cos (x/n) -1) = \Omega_{\pm}(\sqrt{x} \log \log \log x)$$ and he derives from this that for every real $a\neq 0$ $$|\sum_{n \le x} \mu(n)|+|\sum_{n \le x} \mu(n) \cos (ax/n)| = \Omega(\sqrt{x} \log \log \log x)$$ which would, if one could take $a=0$, imply $|q(x)| = \Omega (\log \log \log x)$. Or, put differently, shows that if the $|q(x)|$ is not as large, the sum with the cosine has to be large for every non-zero $a$. In contrast to the idea that $\frac{1}{2}(\sqrt{ \log \log \log x})$ might be about right.
In any case, this problem is complicated in that even detailed and recent investigations can arrive at different conclusions about what should or might be the right expectation regarding the behavior of $M(x)$.
-
|
|
Algebra
# Distributive Property - Multiple Terms
If $(x+5)(3x + 8) = Ax^2 + Bx + C$, what is the sum of $A$, $B$, and $C$?
What is the coefficient of $y$ in the expansion of the expression $(4x-y+4)(x+2y-1) ?$
Simplify $\left( 9xy - 6y^2 + 15y \right) \div 3y.$
If $a=-3$ and $b=2$, evaluate $\left(a^2 - 3ab \right) \times \frac{1}{3a} + \left(ab -\frac{b^2}{2} \right) \div 2b.$
Consider the expansion of $(8x+y) (14 y + x)$. After combining like terms, how many terms are there in the expression?
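The first exercise above can be checked mechanically: multiplying two polynomials is a convolution of their coefficient lists. A small stdlib-only sketch (the helper `poly_mul` is my own, introduced just for illustration):

```python
def poly_mul(p, q):
    """Multiply polynomials given as coefficient lists, lowest degree first."""
    out = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

# (x + 5)(3x + 8): coefficients lowest-degree first
C, B, A = poly_mul([5, 1], [8, 3])
print(A, B, C)      # 3 23 40
print(A + B + C)    # 66
```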
|
|
## Main Question or Discussion Point
I am a beginner in thermodynamics. I was going through an educational board's chapter on thermodynamics, and I want to know whether the statements below are accurate, along with any other comments. We also come across state variables in applications of other laws, so what is special about state variables, and how are they different from other variables?
1) The distinction between mechanics and thermodynamics is worth bearing in mind. In mechanics our interest is in the motion of particles or bodies under the action of forces and torques. Thermodynamics is not concerned with the motion of the system as a whole. It is concerned with the internal macroscopic state of the body. When a bullet is fired from a gun, what changes is the mechanical state of the bullet (its kinetic energy in particular), not its temperature. When the bullet pierces a wood and stops, the kinetic energy of the bullet gets converted into heat, raising the temperature of the bullet and the surrounding layers of wood. Temperature is related to the internal (disordered) motion of the bullet, not to the motion of the bullet as a whole.
………………
2) The concept of internal energy of a system is not difficult to understand. We know that every bulk system consists of a large number of molecules; internal energy is simply the sum of the kinetic energies and potential energies of these molecules. We remarked earlier that in thermodynamics the kinetic energy of the system as a whole is not relevant. Internal energy is thus the sum of molecular kinetic and potential energies in the frame of reference relative to which the centre of mass of the system is at rest. Thus, it includes only the (disordered) energy associated with the random motion of molecules of the system. We denote the internal energy of a system by U.
………………………………………
3) The notion of heat should be carefully distinguished from the notion of internal energy. Heat is certainly energy, but it is the energy in transit. This is not just a play of words. The distinction is of basic significance. The state of a thermodynamic system is characterized by its internal energy, not heat. A statement like “a gas in a given state has a certain amount of heat” is as meaningless as the statement that “ a gas in a given state has a certain amount of work”. In contrast, “a gas in a given state has a certain amount of internal energy” is a perfectly meaningful statement. Similarly, the statements “ a certain amount of heat is supplied to the system’ or ‘ certain amount of work was done by the system’ are perfectly meaningful.
4) To summarise, heat and work in thermodynamics are not state variables. They are modes of energy transfer to a system resulting in change in internal energy, which as already mentioned is a state variable. For proper understanding of thermodynamics, however, the distinction between heat and internal energy is crucial.
……………………………
5) Now, the system may go from an initial state to the final state in a number of ways. Since internal energy is a state variable, delta(U) depends only on the initial and final states and not on the path taken to go from initial to final states. However, delta(Q) and delta(W) will, in general, depend on the path taken to go from initial to final states. From delta(Q) – delta(W) = delta(U), it is clear that the combination delta(Q) – delta(W) is path independent.
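Point 5 is easy to check numerically. Below is a minimal sketch (my own example, not from the board's text): a monatomic ideal gas with U = (3/2)PV, in arbitrary units, taken from state A to state B along two different two-step paths. W and Q differ between the paths, but Q - W = delta(U) does not.

```python
def delta_U(P1, V1, P2, V2):
    """Internal energy change for a monatomic ideal gas, U = (3/2) P V."""
    return 1.5 * (P2 * V2 - P1 * V1)

# State A: (P=2, V=1)  ->  State B: (P=1, V=2); note U_A = U_B here.
# Path 1: isobaric expansion at P=2 (V: 1 -> 2), then isochoric drop (P: 2 -> 1)
W1 = 2 * (2 - 1)              # work done by the gas = P * dV (zero on the isochoric leg)
dU1 = delta_U(2, 1, 1, 2)
Q1 = dU1 + W1                 # first law: Q = dU + W

# Path 2: isochoric drop (P: 2 -> 1), then isobaric expansion at P=1 (V: 1 -> 2)
W2 = 1 * (2 - 1)
dU2 = delta_U(2, 1, 1, 2)
Q2 = dU2 + W2

print(W1, Q1)                 # 2 2.0
print(W2, Q2)                 # 1 1.0
print(Q1 - W1 == Q2 - W2)     # True: Q - W is path independent
```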
Danger
When a bullet is fired from a gun, what changes is the mechanical state of the bullet (its kinetic energy in particular), not its temperature. When the bullet pierces a wood and stops, the kinetic energy of the bullet gets converted into heat, changing the temperature of the bullet into the surrounding layers of wood.
While I don't know much about thermodynamics, I do know a bit about guns. The quoted statement is crap. A bullet absorbs tremendous heat from the combustion of the powder, and generates more through friction with the barrel on the way out. The impact does generate more for the aforementioned reasons, but it's not the only source of heat.
Andy Resnick
There's a lot here- let's break it down.
1) This is true, although the specific example is flawed as per Danger's observation. This is a key concept: thermodynamics does not require the existence of atoms to be correct, and leads to the next one:
2) I found the concept difficult to understand, but that's me. Remember, we don't need to resort to atoms to understand thermodynamics, so let's not even talk about them. I have come to understand the internal energy to be a 'configuration' energy. A chair within an empty room (with gravity) will have different internal energies depending on where the chair is- near the ceiling? On the floor? Is it intact or burned? Re-introducing atoms, a protein molecule will have different internal energies depending on its configuration: folded, denatured, etc.
3) Yes- another key concept. Heat is the flux of energy.
4) I think this is true as well- I don't fully understand what a state variable is, other than the definition that a state variable is independent of path. I think the $$\Delta(PV)$$ component of work (mechanical work) is a state variable.
5) That's true for equilibrium thermodynamics only, I think. I vaguely recall Jarzynski's equality pertaining to nonequilibrium thermodynamics...
|
|
# Best practice for checking if lookup matches element without toText()?
Hi, I keep running into situations where I need to determine which of a series of values another look-up column has selected. For example:
``````SwitchIf(
thisRow.Frequency.toText()="One-time", ...,
thisRow.Frequency.toText()="Monthly", ...,
thisRow.Frequency.toText()="Quarterly", ...,
thisRow.Frequency.toText()="Yearly", ...,
)
``````
Using the `toText()` method feels very brittle. Ideally I’d like the flexibility to check both by value (as I’m doing here) and by reference, so that if the option's label text changes, the linkage changes along with it (as is the case when columns are renamed).
Any guidance on the best practice for comparing by reference instead of by value?
In a pseudo-Coda-ese, I’d want a reference comparison akin to:
``````SwitchIf(
thisRow.Frequency=Frequency#[One-time], ...,
thisRow.Frequency.toText()=Frequency#Monthly, ...,
thisRow.Frequency.toText()=Frequency#Quarterly, ...,
thisRow.Frequency.toText()=Frequency#Yearly, ...,
)
``````
Where the # symbol reflects that I’m referring to the display value of a specific row in the Frequency table, and that if the display value changes this formula would update accordingly (like columns do) and won’t break.
@MrChrisRodriguez Where do you see the brittleness? The idea that comes up for me is to further specify your frequency by chaining into the next value, i.e. `thisrow.Frequency.[TimeFrame]="Monthly"`.
Also look into the ability to reference a singular row by typing `@[Display Column]` - this will link you directly to the row so it will work even if the column name changes.
Hope these help!
Hi @MrChrisRodriguez,
two things:
1. (The trivial one) for the sake of simplicity, if you test against the same condition you might use `Switch()` instead of `SwitchIf()`. Therefore
``````Switch(thisRow.Frequency.toText(),
"One-time", ...,
"Monthly", ...,
"Quarterly", ...,
"Yearly", ...,
)
``````
2. I concur with @Johg_Ananda’s idea; however, I’d ask whether you have a sample to share so we can dig into the data model: I tend to avoid having literals (i.e. explicit values) in formulas, as they represent a potential integrity threat. Maybe this is not the case here, but just in case…
|
|
# What is SSDT? Part 3 – an API for me, an API for you, an API for everyone!
Ed Elliott, 2016-01-07
In the final part of this 3 part series on what SSDT actually is, I am going to talk about the documented API. What I mean by 'documented' is that Microsoft have published the specification so that it is available to use, not that the documentation is particularly good. I warn you it isn't great, but there are some places to get help and I will point them out to you.
The first parts are available:
https://the.agilesql.club/blogs/Ed-Elliott/2016-01-05/What-Is-SSDT-Why-S…
and
https://the.agilesql.club/blog/Ed-Elliott/2015-01-06/What-Is-SSDT-Part-2…
Same as before, I'll give an overview and some links to more info where there are any 🙂
# Documented API
There are a number of different APIs, broadly split into two categories: the DacFx and the ScriptDom. The DacFx consists of everything in the diagram around the APIs circle except the ScriptDom, which is separate.
## ScriptDom
For me SSDT really sets SQL Server apart from any other RDBMS and makes development so much more professional. The main reason is the declarative approach (I realise this can be replicated to some extent), but also the API support. Name me one other RDBMS, or even NoSQL system, where you get an API to query and modify the language itself. Go on, think about it for a minute. Still thinking?
The ScriptDom has two ways to use it, the first is to pass it some T-SQL (be it DDL or DML) and it will return a representation of the T-SQL in objects which you can examine and do things to.
The second way it can be used is to take objects and create T-SQL.
I know what you are thinking: why would I bother? It seems pretty pointless. Let me assure you that it is not. The first time I used it for an actual issue was a deployment script with about 70 tables in it. For various reasons we couldn't guarantee that the tables existed (some tables were moved into another database). The answer would have been either to split the tables into 2 files or to manually wrap if exists around each table's deploy script. Neither of these options was particularly appealing at that point in the project with the time we had to deliver.
What I ended up doing was using the ScriptDom to parse the file and, for each statement (some were merge statements, some straight inserts, some inserts using outer joins back to the original table and an in-memory table), retrieve the name of the table affected and then generate an if exists and begin / end around the table's script. I also produced a nice little Excel document that showed what tables were there and what method was used to set up the data, so we could prioritise splitting the statements up and moving them towards merge statements when we had more time.
Doing this manually would have technically been possible but there are so many things to consider when writing a parser it really is not a very reliable thing to do, just consider these different ways to do the same thing:
select 1 a;
select 1 a
select /*hi there*/ 1 a
select * from (select 1 one) a
select 1 as a;
select 1 as a
select /*hi there*/ 1 as a
select 1 as [a];
select 1 as [a]
select /*hi there*/ 1 as a
select * from (select 1 one) a
;with a as (select 1 a) select * from a
;with a as (select 1 as a) select * from a
;with a as (select 1 a) select * from a
;with a(a) as (select 1) select * from a
;with a(a) as (select 1 a) select * from a
select 1 a into #t; select a from #t; drop table #t;
select 1 a into #t; select a a from #t; drop table #t;
select 1 a into #t; select a as a from #t; drop table #t;
I literally got bored thinking of more variations but I am pretty sure I could think of at least 100 ways to get a result set with a single column called a and a single row with a value of 1. If you think that parsing T-SQL is something that is simple then you should give it a go as you will learn a lot (mostly that you should use an API to do it).
One thing that causes some confusion when using the ScriptDom is that, to parse any T-SQL (unless you just want a stream of tokens), you need to use the visitor pattern and implement a class that inherits from TSqlFragmentVisitor – it is really simple to do and you can retrieve all the types of object that you like (CreateProcedure, AlterProcedure etc etc).
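The visitor pattern here is the same pattern Python exposes for its own syntax trees, so an analogy may help: in the sketch below, `ast.NodeVisitor` plays the role of `TSqlFragmentVisitor`; you subclass it, override one visit method per node type you care about, and walk the tree. This is only an illustration of the pattern, not the ScriptDom API itself.

```python
import ast

class NameCollector(ast.NodeVisitor):
    """Analogous to a TSqlFragmentVisitor overriding one Visit(...) overload."""
    def __init__(self):
        self.names = []

    def visit_Name(self, node):
        # called once per Name node, like ScriptDom calling Visit per fragment type
        self.names.append(node.id)
        self.generic_visit(node)   # keep walking child nodes

tree = ast.parse("total = price * qty + tax")
v = NameCollector()
v.visit(tree)
print(v.names)   # ['total', 'price', 'qty', 'tax']
```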
So if you have a need to parse T-SQL then use the ScriptDom, it is really simple what is not so simple is the other side of the coin, creating and modifying objects to create T-SQL.
If you need to do this then it is quite hard to work out the exact type of objects you need at the right point, for example if you take this query:
;with a as (select 1 a) select * from a
What you end up with is:
• SelectStatement that has…
• a list of CommonTableExpression that has…
• an ExpressionName which is of type Identifier with a value of “a”
• an empty list of Identifiers which are the columns
• a QueryExpression that is of type QuerySpecification which has…
• a single LiteralInteger as the expression on a SelectScalarExpression as the only element in a list of SelectElement’s
• the CommonTableExpression has no other specific properties
• the SelectStatement also has…
• a QueryExpression that is a QuerySpecification which contains….
• a list of SelectElement’s with one item, a SelectStarExpression
• a FromClause that has a list with one TableReference, a NamedTableReference that is…
• a SchemaObjectName that just has an Object name
If you think that it sounds confusing you would be right, but I do have some help for you in the ScriptDomVisualizer – if you give it a SQL statement it will parse it and show a tree of the objects you will get. If you do anything with the ScriptDom then use this as it will help a lot.
### ScriptDom Visualizer V2
https://the.agilesql.club/blog/Ed-Elliott/2015-11-06/Tidying-ScriptDom-V…
### Using the TransactSql.ScriptDOM parser to get statement counts
http://blogs.msdn.com/b/arvindsh/archive/2013/04/04/using-the-transactsq…
### MSDN forum post and a great demo of how to parse T-SQL from Gert Drapers
https://social.msdn.microsoft.com/Forums/sqlserver/en-US/24fd8fa5-b1af-4…
## TSql Model
The TSql Model is a query-able model of all the objects in an SSDT project, their properties and relationships. That sounds like a mouthful but consider this:
create view a_room_with_a_view as
select column_name from table_name;
If you just have this script without the model you can use the ScriptDom to find that there is a select statement and a table reference and you could probably also work out that there is a column name but how do you know that there is actually a table called table_name or a column on the table called column_name and also that there isn’t already a view or other object called a_room_with_a_view? The TSql Model is how you know!
The TSql Model is actually not that easy to query (there is help so fear not, I will tell you the hard way to do it and then show you the easy way). What you do is load a model from a dacpac (or you can create a brand new empty one if you like) and then query it for objects of specific types, with a specific name, or even just all objects.
So imagine you open a dacpac and want to find the a_room_with_a_view view you could do something like:
var model = new TSqlModel(@"C:\path\to\dacpac.dacpac", DacSchemaModelStorageType.File);
var view = model.GetObject(ModelSchema.View, new ObjectIdentifier("a_room_with_a_view"), DacQueryScopes.UserDefined);
If you then wanted to find all the tables that the view referenced, you could examine the properties and relationships to find what you want. It is confusing to get your head around but really useful, because if you know what type of object you are interested in you can tailor your calls to that, but if you just want to find all objects that reference a table (i.e. views, other tables via constraints, functions, procedures etc) you can really easily do that without having to say “get me all tables that reference this table, get me all functions that reference this table” etc.
The TSql Model API returns loosely typed objects so everything is a TSqlObject – this is good and bad but I will leave it as an exercise for you to find out why!
### DacFx Public Model Tutorial
This is what first allowed me to get into the DacFx, it is the only real documentation I have seen from Microsoft and invaluable to get started
http://blogs.msdn.com/b/ssdt/archive/2013/12/23/dacfx-public-model-tutor…
### Dacpac Explorer
https://sqlserverfunctions.wordpress.com/2014/09/26/dacpac-explorer/
I wrote DacPac Explorer to help teach myself about the DacFx and it turns out it is quite useful and has even been used within Microsoft as a training tool so there you go!
### Querying the DacFx API – Getting Column Type Information
https://sqlserverfunctions.wordpress.com/2014/09/27/querying-the-dacfx-A…
### DacExtensions
If you have tried to do something like get the data type of a column, you will appreciate how much work there is to do. Well, as a special treat there is a GitHub project called Microsoft/DacExtensions which was written by members of the SSDT team at Microsoft but is open source (I love that!). What it does is take the loosely typed TSqlModel objects and create strongly typed wrappers, so if you want to see what columns are on a table, you query the model for objects of type TSqlTable (or a version specific one if you want) and you get a list of columns as a property rather than having to traverse the relationships etc.
If you do any serious querying of the TSqlModel then look at this as it really will help!
### Microsoft/DacExtensions
https://github.com/Microsoft/DACExtensions/tree/master/DacFxStronglyType…
## Build Contributors
The last three items, the contributors, all let you inject your own code into something that SSDT does and change it. This really is huge: normally with tools you get the resulting output and that is it, you're stuck with it, but with SSDT you can completely control certain aspects of how it works.
When you build a project in SSDT a build contributor gets full access to the validated TSqlModel and any properties of the build task so if you wanted to do some validation or change the model when it had been built then you can use this.
### Customize Database Build and Deployment by Using Build and Deployment Contributors
https://msdn.microsoft.com/en-us/library/ee461505.aspx
## Deployment Plan Modifiers
When the DacServices have compared your dacpac to a database, deployment plan modifiers are called and can add or remove steps in the plan before the final deployment script is generated. Again this is huge; it is bigger than huge, it is massive. If you want to make sure a table is never dropped, or you don't like the SQL code that is generated, then you can write a utility to change it before it is created. Write the utility once and use it for every build.
wowsers….
### Inside an SSDT Deployment Contributor
https://the.agilesql.club/blog/Ed-Elliott/2015/09/23/Inside-A-SSDT-Deplo…
### Repository of sample deployment contributors
https://github.com/DacFxDeploymentContributors/Contributors
### Deployment Contributor that lets you filter deployments (don’t deploy x schema to y server etc)
http://agilesqlclub.codeplex.com/
## Deployment Plan Executor
Where deployment plan modifiers can change the plan and add, edit or remove steps, a plan executor gets read-only access to the plan and is called when the plan is actually executed. The example on MSDN shows a report of the deployment to give you some idea of what you can do with them.
### Walkthrough: Extend Database Project Deployment to Analyze the Deployment Plan
https://msdn.microsoft.com/en-us/library/dn268598.aspx
### Help and Support
I created a gitter room to answer questions and give advice on writing deployment contributors but I would be more than happy to help answer questions on them or any part of the DacFx so feel free to drop in:
https://gitter.im/DacFxDeploymentContributors/Contributors
All welcome 🙂
# Book Review: Big Red – Voyage of a Trident Submarine
I’ve grown up reading Tom Clancy and probably most of you have at least seen Red October, so this book caught my eye when browsing used books for a recent trip. It’s a fairly human look at what’s involved in sailing on a Trident missile submarine…
Andy Warren
2009-03-10
# Database Mirroring FAQ: Can a 2008 SQL instance be used as the witness for a 2005 database mirroring setup?
Question: Can a 2008 SQL instance be used as the witness for a 2005 database mirroring setup? This question was sent to me via email. My reply follows. Can a 2008 SQL instance be used as the witness for a 2005 database mirroring setup? Databases to be mirrored are currently running on 2005 SQL instances but will be upgraded to 2008 SQL in the near future.
Robert Davis
2009-02-23
# Inserting Markup into a String with SQL
In which Phil illustrates an old trick using STUFF to insert a number of substrings from a table into a string, and explains why the technique might speed up your code…
Phil Factor
2009-02-18
# Networking – Part 4
You may want to read Part 1 , Part 2 , and Part 3 before continuing. This time around I’d like to talk about social networking. Facebook, MySpace, and Twitter are all good examples of using technology to let…
Andy Warren
2009-02-17
|
|
# Partial fraction decomposition¶
The partial fraction decomposition of a univariate rational function:
$f(x) = \frac{p(x)}{q(x)}$
where $$p$$ and $$q$$ are co-prime and $$\deg(p) < \deg(q)$$, is an expression of the form:
$\sum_{i=1}^k \sum_{j=1}^{n_i} \frac{a_{ij}(x)}{q_i^j(x)}$
where $$q_i$$ for $$i=1 \ldots k$$ are factors (e.g. over rationals or Gaussian rationals) of $$q$$:
$q(x) = \prod_{i=1}^k q_i^{n_i}$
If $$p$$ and $$q$$ aren’t co-prime, we can use cancel() to remove common factors and, if $$\deg(p) \geq \deg(q)$$, then div() can be used to extract the polynomial part of $$f$$ and reduce the degree of $$p$$.
Suppose we would like to compute partial fraction decomposition of:
>>> f = 1/(x**2*(x**2 + 1))
>>> f
1
───────────
2 ⎛ 2 ⎞
x ⋅⎝x + 1⎠
This can be achieved with SymPy’s built-in function apart():
>>> apart(f)
1 1
- ────── + ──
2 2
x + 1 x
We can use together() to verify this result:
>>> together(_)
1
───────────
2 ⎛ 2 ⎞
x ⋅⎝x + 1⎠
Now we would like to compute this decomposition step-by-step. The rational function $$f$$ is already in factored form and has two factors $$x^2$$ and $$x^2 + 1$$. If $$f$$ was in expanded form, we could use factor() to obtain the desired factorization:
>>> numer(f)/expand(denom(f))
1
───────
4 2
x + x
>>> factor(_)
1
───────────
2 ⎛ 2 ⎞
x ⋅⎝x + 1⎠
Based on the definition, the partial fraction expansion of $$f$$ will be of the following form:
$\frac{A}{x} + \frac{B}{x^2} + \frac{C x + D}{x^2 + 1}$
Let’s do this with SymPy. We will use the method of undetermined coefficients to solve this problem. Let’s start by defining some symbols:
>>> var('A:D')
(A, B, C, D)
We use here the lexicographic syntax of var(). Next we can define three rational functions:
>>> p1 = A/x
>>> p2 = B/x**2
>>> p3 = (C*x + D)/(x**2 + 1)
>>> p1, p2, p3
⎛A B C⋅x + D⎞
⎜─, ──, ───────⎟
⎜x 2 2 ⎟
⎝ x x + 1⎠
Let’s add them together to get the desired form:
>>> h = sum(_)
>>> h
A B C⋅x + D
─ + ── + ───────
x 2 2
x x + 1
The next step is to rewrite this expression as rational function in $$x$$:
>>> together(h)
⎛ 2 ⎞ ⎛ 2 ⎞ 2
A⋅x⋅⎝x + 1⎠ + B⋅⎝x + 1⎠ + x ⋅(C⋅x + D)
────────────────────────────────────────
2 ⎛ 2 ⎞
x ⋅⎝x + 1⎠
>>> factor(_, x)
3 2
A⋅x + B + x ⋅(A + C) + x ⋅(B + D)
─────────────────────────────────
2 ⎛ 2 ⎞
x ⋅⎝x + 1⎠
Let’s now visually compare the last expression with $$f$$:
>>> Eq(_, f)
3 2
A⋅x  + B + x ⋅(A + C) + x ⋅(B + D)          1
───────────────────────────────── = ───────────
2 ⎛ 2 ⎞ 2 ⎛ 2 ⎞
x ⋅⎝x + 1⎠ x ⋅⎝x + 1⎠
Our task boils down to finding $$A$$, $$B$$, $$C$$ and $$D$$. We notice that denominators are equal so we will proceed only with numerators:
>>> eq = Eq(numer(_.lhs), numer(_.rhs))
>>> eq
3 2
A⋅x  + B + x ⋅(A + C) + x ⋅(B + D) = 1
To solve this equation, we use solve_undetermined_coeffs():
>>> solve_undetermined_coeffs(eq, [A, B, C, D], x)
{A: 0, B: 1, C: 0, D: -1}
This gave us values for our parameters, which now can be put into the initial expression:
>>> h.subs(_)
1 1
- ────── + ──
2 2
x + 1 x
This result is identical to the result we got from apart(f). Suppose however, we would like to see how undetermined coefficients method works. First we have to extract coefficients of $$x$$ of both sides of the equation:
>>> lhs, rhs = Poly(eq.lhs, x), Poly(eq.rhs, x)
>>> lhs
Poly((A + C)*x**3 + (B + D)*x**2 + A*x + B, x, domain='ZZ[A,B,C,D]')
>>> rhs
Poly(1, x, domain='ZZ')
Now we can use Poly.nth() to obtain coefficients of $$x$$:
>>> [ Eq(lhs.nth(i), rhs.nth(i)) for i in range(4) ]
[B = 1, A = 0, B + D = 0, A + C = 0]
Solving this system of linear equations gives the same solution set as previously:
>>> solve(_)
{A: 0, B: 1, C: 0, D: -1}
>>> h.subs(_)
1 1
- ────── + ──
2 2
x + 1 x
There are several other ways we can approach undetermined coefficients method. For example we could use collect() for this:
>>> collect(eq.lhs - eq.rhs, x, evaluate=False)
⎧ 2 3 ⎫
⎨1: B - 1, x: A, x : B + D, x : A + C⎬
⎩ ⎭
>>> solve(_.values())
{A: 0, B: 1, C: 0, D: -1}
Notice that even though the expressions were not Eq()‘s, this still worked. This is because SymPy assumes by default that expressions are identically equal to 0, so solve(Eq(expr, 0)) is the same as solve(expr).
This approach is even simpler than using Poly.nth(). Finally we use a little trick with Symbol to visually present the solution to the partial fraction decomposition of $$f$$:
>>> Eq(Symbol('apart')(f), f.subs(_))
⎛ 1 ⎞ 1 1
apart⎜───────────⎟ = - ────── + ──
⎜ 2 ⎛ 2 ⎞⎟ 2 2
⎝x ⋅⎝x + 1⎠⎠ x + 1 x
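The coefficient-matching step is, at bottom, a small linear system, so it can also be solved without SymPy. Below is a sketch using only the standard library: Gaussian elimination over exact rationals, applied to the system A + C = 0, B + D = 0, A = 0, B = 1 read off above.

```python
from fractions import Fraction

def solve_linear(M, b):
    """Solve M x = b by Gauss-Jordan elimination with exact rationals."""
    n = len(M)
    A = [[Fraction(v) for v in row] + [Fraction(rhs)] for row, rhs in zip(M, b)]
    for col in range(n):
        piv = next(r for r in range(col, n) if A[r][col] != 0)
        A[col], A[piv] = A[piv], A[col]
        A[col] = [v / A[col][col] for v in A[col]]       # normalize pivot row
        for r in range(n):
            if r != col and A[r][col] != 0:
                f = A[r][col]
                A[r] = [x - f * y for x, y in zip(A[r], A[col])]
    return [row[-1] for row in A]

# Rows match coefficients of x^3, x^2, x^1, x^0 in
# A*x*(x**2 + 1) + B*(x**2 + 1) + x**2*(C*x + D) = 1, unknowns (A, B, C, D)
M = [[1, 0, 1, 0],   # x^3: A + C = 0
     [0, 1, 0, 1],   # x^2: B + D = 0
     [1, 0, 0, 0],   # x^1: A     = 0
     [0, 1, 0, 0]]   # x^0: B     = 1
b = [0, 0, 0, 1]
print(solve_linear(M, b))   # [Fraction(0, 1), Fraction(1, 1), Fraction(0, 1), Fraction(-1, 1)]
```

This recovers A = 0, B = 1, C = 0, D = -1, matching apart(f).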
1. Compute partial fraction decomposition of:
• $$\frac{3 x + 5}{(2 x + 1)^2}$$
• $$\frac{3 x + 5}{(u x + v)^2}$$
• $$\frac{(3 x + 5)^2}{(2 x + 1)^2}$$
(solution)
2. Can you use Expr.coeff() in place of Poly.nth()?
#### Previous topic
Mathematical problem solving with SymPy
#### Next topic
Deriving trigonometric identities
|
|
## Journal of Symbolic Logic
### Saturating Ultrafilters on N
#### Abstract
We discuss saturating ultrafilters on $\mathbf{N}$, relating them to other types of nonprincipal ultrafilter. (a) There is an $(\omega,\mathfrak{c})$-saturating ultrafilter on $\mathbf{N}$ iff $2^\lambda \leq \mathfrak{c}$ for every $\lambda < \mathfrak{c}$ and there is no cover of $\mathbf{R}$ by fewer than $\mathfrak{c}$ nowhere dense sets. (b) Assume Martin's axiom. Then, for any cardinal $\kappa$, a nonprincipal ultrafilter on $\mathbf{N}$ is $(\omega,\kappa)$-saturating iff it is almost $\kappa$-good. In particular, (i) $p(\kappa)$-point ultrafilters are $(\omega,\kappa)$-saturating, and (ii) the set of $(\omega,\kappa)$-saturating ultrafilters is invariant under homeomorphisms of $\beta\mathbf{N}\backslash\mathbf{N}$. (c) It is relatively consistent with ZFC to suppose that there is a Ramsey $p(\mathfrak{c})$-point ultrafilter which is not $(\omega,\mathfrak{c})$-saturating.
#### Article information
Source
J. Symbolic Logic, Volume 54, Issue 3 (1989), 708-718.
Dates
First available in Project Euclid: 6 July 2007
https://projecteuclid.org/euclid.jsl/1183743010
Mathematical Reviews number (MathSciNet)
MR1011162
Zentralblatt MATH identifier
0686.03021
|
|
# Is this correct?
$\sqrt[\infty]{n} =1$ where $$n$$ belongs to Natural numbers .
6 months, 1 week ago
\begin{align} \sqrt[\infty]{n}&=\lim_{k \to \infty} n^{\frac{1}{k}}\\ &=n^{\lim_{k \to \infty} \frac{1}{k} } \\ &=n^0\\&=1 \quad \forall n \in \mathbb{N} \end{align}
Exactly !!! :)
Can you please suggest me some Number Theory books? I am a beginner and want to learn from the basics to advanced! @Chinmay Sangawadekar
Try Elementary Number Theory by David Burton.
OK, thank you very much. Is it good for the very basics through to advanced level, and for olympiads?
The same way I did it, it's correct!
I think it is true: $$n^{\frac{1}{\infty}}$$
Since $$\frac{1}{\infty}=0; n^0=\boxed 1.$$
Yup, the more standard solution is provided by Deepraj :)
Yup, he is far better than me!!
|
|
# When the principal quantum number is $n=4,$ how many different values of (a) $\ell$ and (b) $m_{\ell}$ are possible?
## a) 4, b) 7
Atomic Physics
Nuclear Physics
### Video Transcript
In this exercise we have an atom at the fourth energy level, n = 4. In question (a) we want to know how many different values of l, the orbital quantum number, are possible. Remember that for a given energy level n, the allowed orbital quantum numbers range from zero to n minus one. In this case the possible values of l are 0, 1, 2 and 3, so in total there are four allowed values of l. For question (b) we have to find how many allowed magnetic quantum numbers there are. For a given l, the allowed magnetic quantum numbers range from minus l to l. The largest possible l, as we saw in question (a), is l = 3. In that case m_l can be minus three, minus two, minus one, zero, one, two and three; notice that in total there are seven. So there are four allowed values of l and seven allowed values of m_l, and this concludes the exercise.
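The counting argument above is simple to verify programmatically; a quick sketch:

```python
n = 4
l_values = list(range(n))                                  # l = 0 .. n-1
m_values = list(range(-max(l_values), max(l_values) + 1))  # m_l for the largest l = 3

print(l_values)                        # [0, 1, 2, 3]
print(m_values)                        # [-3, -2, -1, 0, 1, 2, 3]
print(len(l_values), len(m_values))    # 4 7
```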
|
|
# In the composite geometric shape given, the semicircle has the same area as the right triangle, and the base of the right triangle is congruent to the radius of the semicircle. Calculate the value of the angle theta.
$\theta = {57.52}^{o}$
As the area of the triangle is $\frac{\pi {r}^{2}}{2}$ (equal to that of the semicircle) and its base is $r$, its height is given by $\frac{2 \times a r e a}{b a s e}$, i.e. $\frac{\pi {r}^{2}}{r} = \pi r$.
Now in ∆ABC, as $\tan \theta = \frac{\pi r}{2 r} = \frac{\pi}{2}$
$\theta = {\tan}^{- 1} \left(\frac{\pi}{2}\right) = {57.52}^{o}$
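A quick numerical check of the final step:

```python
import math

# theta = arctan(pi / 2), converted to degrees
theta = math.degrees(math.atan(math.pi / 2))
print(round(theta, 2))   # 57.52
```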
|
|
# Math Help - Infinite Series Help
1. Hello All,
I'm trying to figure out if I'm doing this problem correctly.
I've rewritten the top as the cubed root of 1 and am using the root test. Am I on the right track?
Also,
My answer is 1 / (n^2 + 1) and that converges.
2. Originally Posted by maxreality
Hello All,
I'm trying to figure out if I'm doing this problem correctly.
I've rewritten the top as the cubed root of 1 and am using the root test. Am I on the right track?
I would try the limit comparison test with the known divergent series $\sum \frac{1}{n^\frac{2}{3}}$
3. Thank you. I was going about it the more difficult way.
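The divergence of the comparison series $\sum \frac{1}{n^{2/3}}$ can also be seen numerically: its partial sums grow like $3N^{1/3}$ (compare with $\int_1^N x^{-2/3}\,dx = 3N^{1/3} - 3$). A quick sketch:

```python
def partial_sum(N):
    """Partial sum of sum_{n>=1} n^(-2/3)."""
    return sum(n ** (-2.0 / 3.0) for n in range(1, N + 1))

for N in (10**2, 10**4, 10**6):
    print(N, partial_sum(N) / N ** (1.0 / 3.0))
# the printed ratio climbs toward 3, matching S(N) ~ 3*N**(1/3):
# the partial sums are unbounded, so the series diverges
```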
|
|
# Ratio and Proportion - Important Formulas, Quantitative Aptitude Quant Notes | EduRev
Ratio and proportion are explained largely in terms of fractions. When a fraction is represented in the form a:b, it is a ratio, whereas a proportion states that two ratios are equal. Here, a and b are any two integers. Ratio and proportion are two important concepts, and they form the foundation for understanding various concepts in mathematics as well as in science.
Ratio and proportion are said to be two faces of the same coin. When two ratios are equal in value, they are said to be in proportion. In simple words, a proportion compares two ratios. Proportions are denoted by the symbol ‘::’ or ‘=’.
Ratio
• The ratio is the comparison between similar types of quantities, it is an abstract quantity and does not have any units.
• The ratio of two quantities a and b in the same units is the fraction a/b and we write it as a: b.
• In the ratio a : b, we call a the first term or antecedent and b the second term or consequent.
Example: The ratio 5 : 9 represents 5/9 with antecedent = 5, consequent = 9.
Note: The multiplication or division of each term of a ratio by the same non-zero number does not affect the ratio. Example: 4 : 5 = 8 : 10 = 12 : 15. Also, 4 : 6 = 2 : 3
Let's see how questions appear from this chapter in CAT 2019:
Try yourself:PYQ CAT 2019: The salaries of Ramesh, Ganesh and Rajesh were in the ratio 6:5:7 in 2010, and in the ratio 3:4:3 in 2015. If Ramesh’s salary increased by 25% during 2010-2015, then the percentage increase in Rajesh’s salary during this period is closest to?
Key Points to Remember:
• The ratio should exist between the quantities of the same kind.
• While comparing two things, the units should be similar.
• There should be significant order of terms.
• The comparison of two ratios can be performed, if the ratios are equivalent like the fractions.
Try yourself:A and B together have Rs. 1210. If (4 / 15) of A's amount is equal to (2 / 5) of B's amount, how much amount does B have?
Proportion
• The equality of two ratios is called proportion i.e. If a/b = c/d, then a, b, c, d are said to be in proportion.
• If a : b = c : d, we write a : b :: c : d and say that a, b, c, d are in proportion.
• Here a and d are called Extremes, while b and c are called Mean terms.
Product of means = Product of extremes
Thus, a : b :: c : d ⇔ (b x c) = (a x d)
Try yourself:Two numbers are respectively 20% and 50% more than a third number. The ratio of the two numbers is:
• Fourth Proportional
If a: b = c : d, then d is called the fourth proportional to a, b, c.
• Third Proportional
If a : b = b : c, then c is called the third proportional to a and b.
• Mean Proportional:
The mean proportional between a and b is √(ab)
Comparison of Ratios
We say that (a : b) > (c : d) ⇔ a/b > c/d.
➤ Compounded Ratio: The compounded ratio of the ratios:
(a : b), (c : d), (e : f) is (ace : bdf)
Try yourself:In a mixture of 60 litres, the ratio of milk and water 2 : 1. If this ratio is to be 1 : 2, then the quantity of water to be further added is:
Try yourself:Salaries of Ravi and Sumit are in the ratio 2 : 3. If the salary of each is increased by Rs. 4000, the new ratio becomes 40 : 57. What is Sumit's salary?
➤ Duplicate Ratios
• The duplicate ratio of (a : b) is (a² : b²)
• Sub-duplicate ratio of (a : b) is (√a : √b)
• Triplicate ratio of (a : b) is (a³ : b³)
• Sub-triplicate ratio of (a : b) is (a^(1/3) : b^(1/3))
If a/b = c/d, then (a + b)/(a - b) = (c + d)/(c - d) [componendo and dividendo]
Variations
• We say that x is directly proportional to y if x = ky for some constant k and we write x ∝ y.
• We say that x is inversely proportional to y, if xy = k for some constant K and we write, x ∝ 1/y.
Types of Variation:
(i) Direct Variation: If A is in direct variation with B, then an increase or decrease in A will lead to a proportionate increase or decrease in B.
• A ∝ B
• A = KB
(ii) Indirect Variation: If A is in inverse variation with B, then an increase in A will lead to a proportionate decrease in B and vice versa.
• A ∝ 1/B
• A = K/B
Try yourself:If ‘x’ and ‘y’ are in a direct proportion then which of the following is correct?
(iii) Joint Variation
• Let us consider the area of a triangle, which depends on both the height and the base of the triangle.
• When both dimensions of the triangle change, the area also changes.
• When the area of the triangle varies with the change in the base of the triangle: Area ∝ b
• When the area of the triangle varies with the change in the height of the triangle: Area ∝ h
• This is called the joint variation of the area of the triangle with respect to its base and height: Area ∝ b × h (in fact, Area = (1/2) × b × h)
Simple Method
The LCM process gets very cumbersome when we have to find the ratio out of multiple ratios.
We have the following simple method for that for a chain of ratios of any length.
Suppose you have the ratio train as follows
A : B = 1 : 2
► B : C = 2 : 3
► C : D = 5 : 6
► D : E = 7 : 8
If we were to find A : B : C : D : E, the LCM method would take quite a long time, which is infeasible in a time-limited examination.
The short cut is as follows:
A : B : C : D : E can be written directly as:
► 1 × 2 × 5 × 7 : 2 × 2 × 5 × 7 : 2 × 3 × 5 × 7 : 2 × 3 × 6 × 7 : 2 × 3 × 6 × 8
► 70 : 140 : 210 : 252 : 288
The thought algorithm for this case goes as:
To get the combined ratio of A : B : C : D : E, from A : B, B : C, C : D, and D : E
In the combined ratio of A : B : C : D : E.
• A will correspond to the product of all numerators (1 × 2 × 5 × 7).
• B will take the first denominator and the last 3 numerators (2 × 2 × 5 × 7).
• C, on the other hand, takes the first two denominators and the last 2 numerators (2 × 3 × 5 × 7).
• D takes the first 3 denominators and the last numerator (2 × 3 × 6 × 7) and E take all the four denominators (2 × 3 × 6 × 8).
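This shortcut is easy to mechanise. Here is a sketch in Python (the function name is mine, not from the source) that combines a chain of pairwise ratios:

```python
def combine_ratios(pairs):
    """Combine a chain of ratios [(a1, b1), (a2, b2), ...] meaning
    A:B = a1:b1, B:C = a2:b2, ... into one combined ratio A:B:C:..."""
    n = len(pairs)
    terms = []
    for k in range(n + 1):
        # k-th term: the first k denominators times the last (n - k) numerators
        value = 1
        for i in range(k):
            value *= pairs[i][1]
        for i in range(k, n):
            value *= pairs[i][0]
        terms.append(value)
    return terms

# A:B = 1:2, B:C = 2:3, C:D = 5:6, D:E = 7:8
print(combine_ratios([(1, 2), (2, 3), (5, 6), (7, 8)]))  # [70, 140, 210, 252, 288]
```

Each term takes the denominators of the ratios before it and the numerators of the ratios after it, exactly as described in the steps above.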
Product of Proportions
If a:b = c:d is a proportion, then:
• Product of extremes = product of means i.e., ad = bc
• Denominator addition/subtraction: a:a+b = c:c+d and a:a-b = c:c-d
• a, b, c, d,.... are in continued proportion means, a:b = b:c = c:d = ....
• a:b = b:c then b is called the mean proportional and b² = ac
• The third proportional of two numbers, a and b, is c, such that, a:b = b:c.
d is fourth proportional to numbers a, b, c if a:b = c:d.
Variations
• If a ∝ b when c is constant, and a ∝ c when b is constant, then a ∝ bc when both b and c vary.
• If A and B are in a business for the same time, then Profit distribution ∝ investment (Time is constant).
• If A and B are in a business with the same investment, then:
Profit distribution ∝ Time of investment (Investment is constant)
Profit Distribution ∝ Investment × Time
Ratio and Proportion Tricks
• If u/v = x/y, then uy = vx
• If u/v = x/y, then u/x = v/y
• If u/v = x/y, then v/u = y/x
• If u/v = x/y, then (u+v)/v = (x+y)/y
• If u/v = x/y, then (u-v)/v = (x-y)/y
• If u/v = x/y, then (u+v)/ (u-v) = (x+y)/(x-y), which is known as Componendo-Dividendo Rule
• If a/(b+c) = b/(c+a) = c/(a+b) and a+b+ c ≠0, then a =b = c
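These identities can be spot-checked with exact rational arithmetic; a small illustration (mine, not from the source) using Python's `fractions` module:

```python
from fractions import Fraction

u, v, x, y = 3, 4, 9, 12          # u/v == x/y == 3/4
assert Fraction(u, v) == Fraction(x, y)

assert u * y == v * x                                    # uy = vx
assert Fraction(u, x) == Fraction(v, y)                  # u/x = v/y
assert Fraction(u + v, v) == Fraction(x + y, y)          # componendo
assert Fraction(u - v, v) == Fraction(x - y, y)          # dividendo
assert Fraction(u + v, u - v) == Fraction(x + y, x - y)  # componendo-dividendo
print("all identities hold")
```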
|
|
# Quark Matter 2018
13-19 May 2018
Venice, Italy
Europe/Zurich timezone
The organisers warmly thank all participants for such a lively QM2018! See you in China in 2019!
## Studies of $\Lambda_{\rm c}^{+}\to p \rm K^{0}_{\rm S}$ in p-Pb collisions with the ALICE experiment at the LHC
15 May 2018, 17:00
2h 40m
First floor and third floor (Palazzo del Casinò)
Poster Open heavy flavour
### Speaker
Dr Elisa Meninno (Universita e INFN, Salerno (IT))
### Description
The ALICE (A Large Ion Collider Experiment) experiment at CERN mainly aims to study strongly-interacting matter under extreme conditions of temperature and energy density and, in particular, to verify the QCD predictions about the existence of a phase transition of hadronic matter to the Quark-Gluon Plasma (QGP).
Heavy quarks (charm and beauty) are a powerful tool to study the properties of the QGP. Indeed they are formed during the early stages of the collisions via hard scattering of high-energy partons, on a time scale generally shorter than the QGP thermalisation time. So they can traverse the QCD medium, interact with its constituents and experience the whole evolution of the medium.
The $\Lambda_{c}^{+}/\mathrm{D}^{0}$ ratio is sensitive to hadronisation mechanisms and offers a unique probe of the role of coalescence and of the predicted existence of diquark states in the QGP.
Measurements of charmed-baryon production in small system (pp and p-Pb) collisions are a fundamental reference for measurements in Pb-Pb collisions and allow studies of possible modifications of the production due to cold nuclear matter effects.
Moreover, the study of charm production as a function of the multiplicity of charged particles produced in the collision can give insight into multi-parton interactions and into the interplay between hard and soft processes.
The recent results for $\Lambda_{c}^{+}$ baryons reconstructed via their hadronic decay $\Lambda_{\rm c}^{+}\to p \rm K^{0}_{\rm S}$ at mid-rapidity in p-Pb collisions at $\sqrt{s_{NN}}$ = 5.02 TeV will be presented.
The analysis takes advantage of the high precision tracking, good vertexing capabilities and excellent particle identification offered by the ALICE detector.
Collaboration: ALICE
### Primary author
Dr Elisa Meninno (Universita e INFN, Salerno (IT))
|
|
Harmonic Functions on Trees and Buildings
Edited by: Adam Korányi, City University of New York, Herbert H. Lehman College, Bronx, NY
Contemporary Mathematics
1997; 181 pp; softcover
Volume: 206
ISBN-10: 0-8218-0605-X
ISBN-13: 978-0-8218-0605-0
List Price: US$43 Member Price: US$34.40
Order Code: CONM/206
This volume presents the proceedings of the workshop "Harmonic Functions on Graphs" held at the Graduate Center of CUNY in the fall of 1995. The main papers present material from four minicourses given by leading experts: D. Cartwright, A. Figà-Talamanca, S. Sawyer and T. Steger. These minicourses are introductions which gradually progress to deeper and less known branches of the subject. One of the topics treated is buildings, which are discrete analogues of symmetric spaces of arbitrary rank; buildings of rank one are trees. Harmonic analysis on buildings is a fairly new and important field of research. One of the minicourses discusses buildings from the combinatorial perspective and another examines them from the $$p$$-adic perspective. The third minicourse deals with the connections of trees with $$p$$-adic analysis. And the fourth deals with random walks, i.e., with the probabilistic side of harmonic functions on trees.
The book also contains the extended abstracts of 19 of the 20 lectures given by the participants on their recent results. These abstracts, well detailed and clearly understandable, give a good cross-section of the present state of research in the field.
Graduate students and research mathematicians interested in potential theory.
Part I. Minicourses
• A. Figà-Talamanca -- Local fields and trees
• S. A. Sawyer -- Martin boundaries and random walks
• D. I. Cartwright -- A brief introduction to buildings
• T. Steger -- Local fields and buildings
Part II. Abstracts of Lectures
• D. Bednarchak -- Heat kernel for regular trees
• D. I. Cartwright, M. G. Kuhn, and P. M. Soardi -- Tensor product of spherical representations of the group of automorphisms of a homogeneous tree
• E. C. Tarabusi -- The horocyclic Radon transform on trees
• J. M. Cohen and F. Colonna -- Eigenfunctions of the Laplacian on a homogeneous tree
• F. Di Biase -- Exotic convergence in theorems of Fatou type
• Y. Guivarc'h -- A spectral gap property for transfer operators
• V. A. Kaimanovich -- Harmonic functions on discrete subgroups of semi-simple Lie groups
• R. Lyons -- Biased random walks and harmonic functions on the Lamplighter group
• A. M. Mantero and A. Zappa -- Characterization of the eigenfunctions of the Laplace operators for an affine building of rank 2
• W. Młotkowski -- Free product of representations
• T. Nagnibeda -- The Jacobian of a finite graph
• S. Northshield -- Flows and harmonic functions on graphs
• M. Pagliacci -- Applications of diffusion processes on trees to mathematical finance
• M. A. Picardello -- Characterizing harmonic functions by mean value properties on trees and symmetric spaces
• J. Ramagge and G. Robertson -- Factors from buildings
• M. Rigoli, M. Salvatori, and M. Vignati -- Harnack and Liouville properties on graphs
• G. Robertson -- The spectrum of a directed Cayley graph of a free group
• M. H. Taibleson -- Factorization of the Green's kernel for non-nearest neighbor random walks
• W. Woess -- Harmonic functions for group-invariant random walks
|
|
Solution:
If the XOR sum of all the numbers is 0, then we can always do it: $a[1]\oplus a[2]\oplus...\oplus a[n-1]=a[n]$ holds exactly when the XOR sum is 0, so we can make the array $(a[n], a[n])$.
So the question is what to do when the XOR sum is not 0.
Let's say $a[1]\oplus a[2]\oplus...\oplus a[n-1]\oplus a[n]=x$. If an array matching the requirement can be made, the result must be of the form $(x,x,x,...,x)$ with an odd number of terms.
So there should exist an index $i$ such that $a[i]\oplus a[i+1]\oplus ... \oplus a[n-1]\oplus a[n]=x$ and $a[1]\oplus a[2]\oplus ... \oplus a[i-1]=0$.
The last thing to check is whether some suffix of $a[1], \dots, a[i-1]$ has XOR equal to $x$; since that whole prefix has XOR 0, the remaining part then also has XOR $x$, so we can make $(x,x,x)$. Otherwise, print "No".
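A sketch of this check in Python (my own code; it assumes the task is the usual one of deciding whether the array can be reduced to at least two equal numbers by splitting it into segments of equal XOR, and the function name is mine):

```python
def can_make_equal(a):
    total = 0
    for v in a:
        total ^= v
    if total == 0:
        return True  # split into two halves with equal XOR
    # Otherwise every kept value must equal x = total, and the number of
    # segments must be odd; three segments suffice, so greedily cut
    # whenever the running XOR reaches x.
    segments, cur = 0, 0
    for v in a:
        cur ^= v
        if cur == total:
            segments += 1
            cur = 0
    return segments >= 3 and cur == 0

print(can_make_equal([0, 2, 2]))      # True
print(can_make_equal([2, 3, 1, 10]))  # False
```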
|
|
• M M LATHA
Articles written in Pramana – Journal of Physics
• Influence of quadrupole–quadrupole-type interaction on the chaotic dynamics of $\alpha$-helical proteins
By proposing a model Hamiltonian in the first quantised form, we investigate the chaotic dynamics of $\alpha$-helical proteins by taking into account the quadrupole–quadrupole-type interaction. The dynamics is studied by deriving Hamilton’s equations of motion and by plotting the time-series evolution and phase-space trajectories. Chaotic trajectories are observed in the phase-space plots. The effect of the interaction parameters on the stability of proteins is also discussed.
• A chaotic study on Heisenberg ferromagnetic spin chain using Dzyaloshinski–Moriya interactions
The chaotic dynamics of a one-dimensional Heisenberg ferromagnetic spin chain incorporating Dzyaloshinski–Moriya (D–M) interaction, dipole–dipole and quadrupole–quadrupole interactions has been investigated. The studies are carried out by plotting phase diagrams and chaotic trajectories. We then analyse the stability of the system using the Lyapunov stability analysis.
|
|
How to obtain tail bounds for a linear combination of dependent and bounded random variables?
I am looking for tail bounds (preferably exponential) for a linear combination of dependent and bounded random variables.
Consider $$K_{ij}=\sum_{r=1}^N\sum_{c=1}^N W_{ir}C_{rc}W_{jc}$$ where $i \neq j$, the entries $W_{ir}\in \{+1, -1\}$ are i.i.d. with $\operatorname{Bernoulli}(0.5)$ signs (i.e. Rademacher), and $C=\operatorname{Toeplitz}(1, \rho, \rho^2, \ldots, \rho^{N-1})$ with $0 \leq \rho < 1$.
I would be very happy if you could give me any pointer on how to evaluate the moment generating function of $K_{ij}$, so as to bound $\Pr\{K_{ij} \geq \epsilon\}\leq \min_{s \geq 0}\exp(-s\epsilon)E[\exp(s K_{ij})]$ via the Chernoff bound.
For a hint, you can look at my other question, entitled ``How to obtain tail bounds for a sum of dependent and bounded random variables?'', which is a special case of this problem where $C_{rc}=1$ for all $1 \leq r, c \leq N$. Thanks a lot in advance.
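While waiting for a closed form, the tail can at least be explored numerically. A rough Monte Carlo sketch (all names and constants are mine, purely illustrative):

```python
import numpy as np

def sample_K(N, rho, trials, seed=0):
    """Draw samples of K_ij = sum_{r,c} W_ir C_rc W_jc for one fixed pair i != j."""
    rng = np.random.default_rng(seed)
    # Toeplitz matrix C with entries rho^|r-c|
    C = rho ** np.abs(np.subtract.outer(np.arange(N), np.arange(N)))
    Wi = rng.choice([-1.0, 1.0], size=(trials, N))  # row i of W (Rademacher)
    Wj = rng.choice([-1.0, 1.0], size=(trials, N))  # row j of W, independent of row i
    return np.einsum("tr,rc,tc->t", Wi, C, Wj)

K = sample_K(N=8, rho=0.5, trials=20000)
print(K.mean())          # close to 0, since E[K_ij] = 0 for i != j
print((K >= 10).mean())  # empirical tail estimate of P(K_ij >= 10)
```

The threshold 10 is arbitrary; such an estimate can at least sanity-check any candidate exponential bound.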
|
|
# Toronto Math Forum
## MAT334-2018F => MAT334--Tests => Quiz-6 => Topic started by: Victor Ivrii on November 17, 2018, 04:08:42 PM
Title: Q6 TUT 0201
Post by: Victor Ivrii on November 17, 2018, 04:08:42 PM
Find the Laurent series for the given function $f(z)$ about the indicated point. Also, give the residue of the function at the point.
$$f(z)=\frac{z^2}{z^2-1};\qquad z_0=1.$$
Title: Re: Q6 TUT 0201
Post by: Junya Zhang on November 17, 2018, 04:20:00 PM
This is question 8 from CH2.5 in the textbook.
Let $w = z - 1$. Then $z = w+ 1$.
$$\frac{z^2}{z^2 -1} = \frac{(w+1)^2}{(w+1)^2 -1} = \frac{w^2 + 2w + 1}{w^2 + 2w}= 1+ \frac{1}{w^2 + 2w}=1 + \frac{1}{2w} \cdot \frac{1}{1 + w/2} = 1 + \frac{1}{2w} \sum_{n = 0}^{\infty} \left(- \frac{w}{2}\right)^n$$ which is valid for $|w/2| < 1$, i.e. $|z-1|<2$. Then
$$\frac{z^2}{z^2 -1} = 1 + \frac{1}{2w}\left(1 - \frac{w}{2} + \sum_{n = 2}^{\infty} (-1)^n 2^{-n}w^n \right) = 1 + \frac{1}{2w} - \frac{1}{4} + \sum_{n = 2}^{\infty} (-1)^n 2^{-n-1}w^{n-1} = \frac{1}{2w} +\frac{3}{4} + \sum_{n = 1}^{\infty} (-1)^{n+1} 2^{-n-2}w^{n} = \frac{1}{2w} +\frac{3}{4} + \sum_{n = 1}^{\infty} (-1)^{n+1} \frac{w^{n}}{2^{n+2}}$$ Substitute $w = z-1$ back to the equation we get $$\frac{z^2}{z^2 -1} = \frac{1}{2(z-1)} +\frac{3}{4} + \sum_{n = 1}^{\infty} (-1)^{n+1} \frac{(z-1)^{n}}{2^{n+2}}$$
Residue of the function at $z = 1$ is the coefficient of $\frac{1}{z-1}$ in the Laurent series, which is $\frac{1}{2}$.
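The residue can be double-checked with a computer algebra system; a quick SymPy sketch (my addition, not part of the original post):

```python
import sympy as sp

z = sp.symbols("z")
f = z**2 / (z**2 - 1)

# residue at the simple pole z = 1, i.e. the coefficient of 1/(z-1)
print(sp.residue(f, z, 1))          # 1/2
# cross-check via the limit formula for a simple pole
print(sp.limit((z - 1) * f, z, 1))  # 1/2
```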
Title: Re: Q6 TUT 0201
Post by: Amy Zhao on November 17, 2018, 06:32:49 PM
$\frac{z^2}{z^2-1}$$=\frac{z^2-1+1}{z^2-1}$
$=1+\frac{1}{z^2-1}$
$=1+\frac{1}{(z-1)(z+1)}$
$=1+\frac{1}{2}\left(\frac{1}{z-1}-\frac{1}{z+1}\right)$, by partial fractions
$=1+\frac{1}{2}(\frac{1}{z-1}-\frac{1}{2+(z-1)})$
$=1+\frac{1}{2}(\frac{1}{z-1}-\frac{1}{2}\frac{1}{1+\frac{z-1}{2}})$
$=1+\frac{1}{2}((z-1)^{-1}-\frac{1}{2}\frac{1}{1-\frac{1-z}{2}})$
$=1+\frac{1}{2}((z-1)^{-1}-\frac{1}{2}\sum_{0}^{\infty} (\frac{1-z}{2})^n)$
Residue=$\frac{1}{2}$
Title: Re: Q6 TUT 0201
Post by: Victor Ivrii on November 28, 2018, 04:57:15 AM
Two rather different solutions leading to the same result.
|
|
# Integrate the function$\int\frac{x^3}{\sqrt{1-x^8}}$
$\begin{array}{1 1} \large \frac{1}{4} \sin^{-1}(x^4)+c \\ \large \frac{1}{4} \cos^{-1}(x^4)+c \\ \large \frac{1}{4} \sin(x^4)+c \\ \large \frac{1}{4} \cos (x^4)+c\end{array}$
Toolbox:
• (i) If in a function $\int f(x)dx,\;let\;f(x)=t,\;then\;f'(x)=dt,\;then \;\int f(x)dx=\int tdt$
• (ii)$\int \large\frac{dx}{\sqrt {a^2-x^2}}=\sin^{-1}(x/a)+c$
Given $I=\large\int\frac{x^3}{\sqrt{1-x^8}}dx$
Let $x^4=t;$ on differentiating we get
$4x^3dx=dt$
$=>x^3dx=\frac{dt}{4}$
Hence $I=\int \frac{dt/4}{\sqrt {1-t^2}}=\frac{1}{4}\int \frac{dt}{\sqrt{1-t^2}}$
On integrating, since it is of the form $\int \frac{dx}{\sqrt {a^2-x^2}}$, we get
$I=\frac{1}{4}\int \frac{dt}{\sqrt{1-t^2}}=\frac{1}{4}\sin^{-1}(t)+c$
Substituting for t we get
$I=\frac{1}{4} \sin^{-1}(x^4)+c$
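As a sanity check (my own addition), differentiating the result with SymPy recovers the integrand:

```python
import sympy as sp

x = sp.symbols("x")
antiderivative = sp.asin(x**4) / 4
integrand = x**3 / sp.sqrt(1 - x**8)

# d/dx [ (1/4) asin(x^4) ] = x^3 / sqrt(1 - x^8)
assert sp.simplify(sp.diff(antiderivative, x) - integrand) == 0
print("check passed")
```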
|
|
# Learn a Programming Language by Diving into Source Code
## 1 Master Core Functions
If you're a beginner learning a programming language, mastering some core, frequently used functions is really helpful. This lets you write better code.
This idea comes from the book <Clojure Standard Library: An Annotated Reference>. I got a free chance to choose an ebook after filling out a form from Manning Publications. (I chose this book because I'm learning Clojure and need to master the basics as a beginner.)
## 2 Dive into Language Source Code
Programming languages usually have a mechanism to check out source code. For example, in Emacs Lisp you can use M-. to jump to a function definition. Clojure is the same. For other languages like Python, Ruby, etc., Emacs can also use M-. or Emacs extensions to support this.
## 3 Search code examples on public source code hosting websites
Learning with examples is helpful for beginners; you can search for code on public source-code hosting websites. Here is my collected list:
Rosetta Code
Rosetta Code is a programming chrestomathy site. The idea is to present solutions to the same task in as many different languages as possible, to demonstrate how languages are similar and different, and to aid a person with a grounding in one approach to a problem in learning another. Rosetta Code currently has 874 tasks, 223 draft tasks, and is aware of 687 languages, though we do not (and cannot) have solutions to every task in every language.
CoCycles
search for open source code by functionality.
Sourcegraph
search for code, docs, and usage examples.
GitHub code search
https://github.com/search
Bitbucket
https://bitbucket.org/
CodePlex
https://www.codeplex.com/
Gitorious
https://gitorious.org/
SourceForge
The Complete Open-Source Software Platform. Create, collaborate & distribute to over 33 million users worldwide.
Freecode
http://freecode.com/
GitLab
https://gitlab.com/public
Open Hub
https://www.openhub.net/
Debian Sources
http://sources.debian.net/
Debian Code Search
https://codesearch.debian.net/
### 3.1 Emacs package engine-mode
Emacs has a package called "engine-mode" which can define a search engine based on a URL by replacing the query keyword with "%s". It is quite quick to use it to open a search query in a web browser. You can even use thing-at-point to grab the word or symbol at point to avoid manual input.
#### 3.1.1 how to define engine
1. simple
(defengine github
"https://github.com/search?ref=simplesearch&q=%s")
2. specify key
(defengine duckduckgo
"https://duckduckgo.com/?q=%s"
:keybinding "d")
3. specify browser function
(defengine github
"https://github.com/search?ref=simplesearch&q=%s"
:browser 'eww-browse-url)
4. specify docstring
(defengine ctan
"http://www.ctan.org/search/?x=1&PORTAL=on&phrase=%s"
:docstring "Search the Comprehensive TeX Archive Network (ctan.org)")
#### 3.1.2 my config part
;; general search engines
(defengine google  ; engine name and URL reconstructed -- the original definition was truncated
"https://www.google.com/search?q=%s"
:keybinding "g")
(defengine duckduckgo
"https://duckduckgo.com/?q=%s"
:docstring "DuckDuckGo"
:keybinding "d")
(defengine blekko
"https://blekko.com/#?q=%s"
:docstring "Blekko"
;; :keybinding "B"
)
(defengine bing
"http://cn.bing.com/search?q=%s"
:docstring "Bing")
(defengine baidu
"http://www.baidu.com/s?wd=%s"
:docstring "Baidu"
:keybinding "b")
;; Wikipedia
(defengine wikipedia
"http://www.wikipedia.org/search-redirect.php?language=en&go=Go&search=%s"
:docstring "Wikipedia"
:keybinding "w")
(defengine baidu_baike
"http://baike.baidu.com/search/none?word=%s"
:docstring "Baidu Baike"
:keybinding "W")
(defengine wolfram-alpha
"http://www.wolframalpha.com/input/?i=%s"
:docstring "Wolfram Alpha"
:keybinding "A")
;; programming
;; Docs: API
(defengine APIs
"http://apis.io/?search=%s"
:docstring "APIs"
:keybinding "a")
(defengine mozilla-developer
"https://developer.mozilla.org/en-US/search?q=%s"
:docstring "Mozilla Developer"
:keybinding "m")
(defengine rfcs
"http://pretty-rfc.herokuapp.com/search?q=%s"
;; "https://www.rfc-editor.org/search/rfc_search_detail.php?rfc=%s"
:docstring "RFC"
:keybinding "R")
(defengine emacswiki
"https://www.emacswiki.org/emacs?search=%s"
:docstring "Emacs Wiki"
:keybinding "e")
## 4 More tips will be updated in this post
If anyone suggests more tips about learning from diving into source code, I will add them to this post.
|
|
# The square of a positive number is 56 more than the number itself. What is the number?
Jan 9, 2017
The number is $8$
#### Explanation:
We need to take this one phrase at a time to develop our equation.
First, the square of a positive number can be written as:
${x}^{2}$
In mathematics the word "is" means "=", so we can now write:
${x}^{2} =$
and "56 more than the number itself" finishes the equation as:
${x}^{2} = 56 + x$
We can now transform this into a quadratic:
${x}^{2} - \textcolor{red}{56 - x} = 56 + x - \textcolor{red}{56 - x}$
${x}^{2} - x - 56 = 0$
We can now factor the quadratic:
$\left(x - 8\right) \left(x + 7\right) = 0$
Now we can solve each term for $0$
$x + 7 = 0$
$x + 7 - 7 = 0 - 7$
$x + 0 = - 7$
$x = - 7$ - this cannot be the answer because the question asked for a positive number.
$x - 8 = 0$
$x - 8 + 8 = 0 + 8$
$x - 0 = 8$
$x = 8$
The number is $8$
Jan 9, 2017
$8$
#### Explanation:
Let the unknown value be $x$
This is a quadratic in disguise.
${x}^{2} = x + 56 \text{ "=>" } {x}^{2} \textcolor{red}{- x} - 56 = 0$
The $\textcolor{red}{x}$ has the coefficient of -1. This means that the whole number factors of 56 have a difference of -1.
$\sqrt{56} \approx 7.5$
Try $\left(- 8\right) \times \left(+ 7\right) = - 56 \text{ and } 7 - 8 = - 1$ so we have found the factors
${x}^{2} - x - 56 = \left(x - 8\right) \left(x + 7\right) = 0$
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The question stipulates that the number is positive so we select $x = + 8$
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
$\textcolor{b l u e}{\text{Check}}$
${x}^{2} = x + 56 \text{ "->" } {8}^{2} \to 8 + 56$
$\text{ } 64 \to 64$
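A quick numerical confirmation (my own addition) via the quadratic formula:

```python
import math

# x^2 - x - 56 = 0  =>  x = (1 ± sqrt(1 + 224)) / 2
a, b, c = 1, -1, -56
disc = b * b - 4 * a * c
roots = [(-b + s * math.sqrt(disc)) / (2 * a) for s in (+1, -1)]
print(roots)  # [8.0, -7.0]

positive = max(roots)
print(positive**2 - positive - 56)  # 0.0
```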
|
|
# soliton
A soliton is a non-linear object which moves through space without dispersion at constant speed. Solitons occur naturally as solutions to the Korteweg–de Vries equation. They were first observed by John Scott Russell in the 19th century and then by Martin Kruskal and Norman Zabusky (who coined the term soliton) in a famous computer simulation in 1965. Insight into solitons can be obtained by noting that the wave equation admits d’Alembert’s solution:
$u(x,t)=f(x-ct)+g(x+ct)$
We see at once that this satisfies two important criteria: it has a constant velocity $c$, and it can also be shown that the two functions $f$ and $g$ can collide without altering shape. Solitons also occur in non-linear optics and as solutions to field equations in quantum field theory.
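A concrete example (my own addition, not part of the entry): the standard one-soliton solution $u(x,t)=\frac{c}{2}\,\mathrm{sech}^2\!\left(\frac{\sqrt{c}}{2}(x-ct)\right)$ of the KdV equation $u_t+6uu_x+u_{xxx}=0$ can be checked symbolically:

```python
import sympy as sp

x, t = sp.symbols("x t", real=True)
c = sp.symbols("c", positive=True)

# one-soliton solution of the KdV equation u_t + 6 u u_x + u_xxx = 0
u = c / 2 * sp.sech(sp.sqrt(c) / 2 * (x - c * t)) ** 2

residual = sp.diff(u, t) + 6 * u * sp.diff(u, x) + sp.diff(u, x, 3)

# numeric spot check of the residual at an arbitrary point, with c = 1
val = sp.N(residual.subs({c: 1, x: sp.Rational(3, 10), t: sp.Rational(7, 10)}))
print(val)
```

The residual evaluates to (numerically) zero, confirming that this wave travels at speed $c$ without changing shape.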
Title: soliton
Canonical name: Soliton
Date of creation: 2013-03-22 17:47:50
Last modified on: 2013-03-22 17:47:50
Owner: invisiblerhino (19637)
Last modified by: invisiblerhino (19637)
Numerical id: 10
Entry type: Definition
Classification: msc 35Q51, msc 37K40
Synonym: solitary wave
|
|
11 HL Induction
Complete the following questions for our next class.
Exercise 9B.1 questions 2b and 3b
Here is a template you can use for proof by induction. (Note that the template is set up for a proof that involves a claim concerning all natural numbers. If the claim concerns, for example, all positive integers, you need to adjust the base case and other remarks accordingly.)
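In case the linked template is unavailable, the skeleton of such a proof (a generic sketch, not the actual linked template) looks like this:

```latex
\textbf{Claim.} $P(n)$ holds for all $n \in \mathbb{N}$.

\textbf{Base case.} Show that $P(1)$ is true.

\textbf{Inductive step.} Assume $P(k)$ is true for some $k \in \mathbb{N}$
(the inductive hypothesis). Using this assumption, show that $P(k+1)$ is true.

\textbf{Conclusion.} Since $P(1)$ holds and $P(k) \Rightarrow P(k+1)$,
by the principle of mathematical induction $P(n)$ holds for all $n \in \mathbb{N}$.
```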
11 HL Induction and the Binomial Theorem Test [Updated]
On Tuesday, October 12th [note the revised date + content] we’ll have a test on induction and the binomial theorem (including combinations and permutations). Graphing calculators will be required for this test.
In order to prepare for this test, have a look at the questions listed below.
Review Set 8A (all)
Review Set 8B 2, 8–10
Review Set 8C 1, 2, 7–10
Review Set 9A 1–7
Review Set 9B 2
Review Set 9C 1, 2, 5, 7
You may also find some of the additional resources on the HL Resources page useful.
We’ll have a test on the material from Chapter 1 of the textbook on Friday, September 29th.
In order to prepare for this test, have a look at the questions below, as well as the sample questions document on the SL Resources page.
Review Set 1A—complete any 8 questions
Review Set 1B—complete any 8 questions
Review Set 1C—complete any 8 questions
12 SL Derivatives of Logarithmic and Trigonometric Functions
We’ve now covered the derivatives of logarithmic and trigonometric functions, and the questions below involve applications of those derivative results.
For logarithmic functions, you may find it easier to simplify some expressions using the properties of logarithms before you try to differentiate. See the list of properties of logarithms at the bottom of page 376, and you can see an example of how these can simplify your calculations in Example 12 on page 377.
Exercise 15F 1ghk, 2adeh, 3abegi, 5
Exercise 15G 1adgh (see page 379 for more about the derivative of $$\tan x$$), 2adgk, 3bek, 4b
12 SL Derivatives Test [updated]
To give you all more time to prepare, our test on derivatives (Chapters 14 and 15) will be during class on Monday, October 2nd.
To help you prepare, Mr. Prior has shared the following document with plenty of practice questions. (Note that we won’t cover some of this material until next week.) Try some of those questions, and I’ll make the solutions available here next week.
Update: Solutions to Mr. Prior’s questions can be found here.
12 SL Exponential Functions and the Quotient Rule
Complete the following questions before our next lesson.
Exercise 15D (the quotient rule) questions 1abf, 2ad, and 4.
Exercise 15E (the derivative of the exponential function) questions 1ijno, 2acg, 3a, and 5.
11 HL The Binomial Theorem and Philosophy
After trying question Exercise 8G question 14, have a look at the quotation from Wittgenstein’s Tractatus Logico-Philosophicus.
With regard to the existence of $$n$$ atomic facts there are $$K_n=\sum_{r=0}^n \left(\begin{array}{c}n\\r\end{array}\right)$$ possibilities.
Show that Wittgenstein could equally have written “…there are $$K_n=2^n$$ possibilities.”
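One way to see the equality (sketch): apply the binomial theorem \((x+y)^n=\sum_{r=0}^n\binom{n}{r}x^r y^{n-r}\) with \(x=y=1\):

```latex
K_n \;=\; \sum_{r=0}^{n} \binom{n}{r}
    \;=\; \sum_{r=0}^{n} \binom{n}{r}\, 1^{r}\, 1^{\,n-r}
    \;=\; (1+1)^{n} \;=\; 2^{n}.
```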
Make sure you complete the questions here before our next lesson.
Once you’ve completed those questions, work on the following questions.
Exercise 1F question 15
Review Set 1A questions 1, 2b, 4, 6, 7, 9, 11–13
Make sure you complete at least up to question 6 in Review Set 1A before our next class.
11 HL The Binomial Theorem
Complete the following questions before tomorrow’s class.
Exercise 8G questions 1, 2, 3, 4, 6ab, 10, 15
12 SL The Chain Rule
Complete the following questions for our next class.
Exercise 15B.2 questions 1ad, 2abcdfi, 3acef, 4, 5
|
|
# n-gram, entropy and entropy rate
n-gram models find use in many areas of computer science, but are often only explained in the context of natural language processing (NLP). In this post, I will show the relation to information theory by explicitly calculating the entropy and entropy rate of unigrams, bigrams and general $n$-grams.
## Overview
$n$-grams split texts into contiguous sequences of $n$ words or letters and assign a probability to each sequence. When $n \geq 2$, we obtain Markov models of order $n-1$.
In NLP, $n$-grams are used to predict the next letter/word based on the previous $n-1$ letters/words. More formally, we can write
where each $X_i$ indicates a letter, $\boldsymbol{\theta}$ is a fixed parameter and $\mathbf{x} \in \{0, 1\}^k$. In practice, $\boldsymbol{\theta}$ is not known and has to be estimated. We will use here the maximum likelihood estimate, which is simply $\frac{\text{frequency letter}}{\text{all letters}}$.
Example: Let the text be “abac”. Given that the last letter was “a”, what is the probability that the next letter is “b”?
The parameter is $\boldsymbol{\theta}_a = \left(\tfrac{1}{2}, \tfrac{1}{2}\right)$ (the estimated probabilities of $b$ and $c$ after an $a$) and the input is $\mathbf{x} = (1, 0)$.
Then the probability is $p_{X_2 \mid X_{1}}(\mathbf{x} ; \boldsymbol{\theta}_a) = \left(\frac{1}{2}\right)^{1} \cdot \left(\frac{1}{2}\right)^0 = \frac{1}{2}$.
This formal notation based on the categorical distribution can get a bit cumbersome. From now on, I will write the input to the PMFs simply as letters e.g. $p_{X_2 \mid X_{1}}(b \mid a) = \frac{1}{2}$.
## Unigram
### Unigram
Unigrams (also called $1$-grams or bag-of-words models) are a special case of $n$-grams where each sequence consists of a single word or letter. Unigrams depend only on the current letter.
A unigram does the following approximation:

$$p(x_1, \dots, x_n) \approx \prod_{i=1}^{n} p(x_i)$$

Hence, it is assumed that $X_1, \dots, X_n$ are all i.i.d random variables.
Example: umulmundumulmum$

Letter Frequency Probability
u 6 6/16
m 5 5/16
l 2 2/16
n 1 1/16
d 1 1/16
$ 1 1/16
In the example, we calculated the probability by dividing the frequency of a single letter by the length of the text.
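The table can be reproduced with a few lines of Python (my own sketch, not from the original post):

```python
from collections import Counter
from fractions import Fraction

text = "umulmundumulmum$"
counts = Counter(text)
# maximum likelihood estimate: frequency of each letter / length of the text
probs = {ch: Fraction(n, len(text)) for ch, n in counts.items()}

for ch, p in sorted(probs.items(), key=lambda kv: -kv[1]):
    print(ch, counts[ch], p)
# the first printed line is the most frequent letter: u 6 3/8
```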
The unigrams can be used to calculate the probability of certain letters occurring:
Note that i.i.d implies stationarity.
### Entropy
The most common definition of entropy is the following:

$$H(X) = -\sum_{x} p_X(x) \log_2 p_X(x)$$

For example, the entropy of the word "umulmundumulmum$" is:

$$H(X) = -\left(\frac{6}{16}\log_2\frac{6}{16} + \frac{5}{16}\log_2\frac{5}{16} + \frac{2}{16}\log_2\frac{2}{16} + 3\cdot\frac{1}{16}\log_2\frac{1}{16}\right) \approx 2.18$$

In data compression, there exists another definition, called the $0$-th order empirical entropy:

$$H_0(T) = \sum_{c} \frac{n_c}{n} \log_2 \frac{n}{n_c}$$

where $n$ is the length of the text $T$ and $n_c$ is the number of occurrences of the letter $c$ in $T$. Empirical entropy makes no direct probabilistic assumptions and requires a text-like structure (i.e. where frequency estimates make sense). $H(X)$ is in comparison more general and one needs to specify the probability distribution. In our example, $n = 16$ and the $n_c$ are the frequencies from the unigram table. Both definitions give the same result.

### Entropy rate

For a unigram, the entropy rate is defined as follows:

$$H(\mathcal{X}) = \lim_{n \to \infty} \frac{1}{n} H(X_1, \dots, X_n)$$

Since $X_1, \dots, X_n$ are i.i.d random variables, it suffices to calculate the entropy of one $X_i$. The same is true for our example. We can simply calculate one of the random variables, i.e. $H(X_{16}) = \cdots = H(X_1)$.

## Bigram

### Bigram

In this section, we will look at bigrams, which are $n$-grams with $n=2$. Bigrams depend on the previous letter.

A bigram does the following approximation:

$$p(x_1, \dots, x_n) \approx p(x_1) \prod_{i=2}^{n} p(x_i \mid x_{i-1})$$

Hence, bigrams satisfy the Markov property.

We previously assumed i.i.d random variables. Now, we need stationarity (also known as time-homogeneity in the context of Markov chains):

$$p_{X_{i+1} \mid X_i}(b \mid a) = p_{X_{j+1} \mid X_j}(b \mid a) \quad \text{for all } i, j$$

Example: umulmundumulmum$
Letters Frequency Conditional probability
mu 4 4/5
um 3 3/6
ul 2 2/6
lm 2 2/2
un 1 1/6
nd 1 1/1
du 1 1/1
m$ 1 1/5

In the example, the frequency was calculated by generating all possible contiguous sequences of two in the text. All sequences with the same initial letter in the table (e.g. “m” of “mu”) form a probability distribution.

Let us calculate the probabilities of some letters:

$$p_{X_2 \mid X_1}(u \mid m) = \frac{4}{5}, \quad p_{X_2 \mid X_1}(m \mid u) = \frac{3}{6}$$

Compare this with the i.i.d assumption from the unigram. By using bigrams, we can take into account the context in which the letter occurs. As a result, unigrams and bigrams have different probabilities.

Since we satisfy the Markov property, it is easy to visualize the bigrams. Let us draw the Markov chain of the underlying stochastic process. The code will output the following graph:

(graph not shown)

When a Markov chain is time-homogeneous, irreducible and aperiodic, there exists a unique stationary distribution $\boldsymbol{\pi}$. The chain in the graph is aperiodic, because we satisfy $\text{gcd}(2, 3) = 1$. However, it is not yet irreducible. By adding a transition from “$” to “U”, we can also make it irreducible.
### Entropy
The $1$st order entropy can be defined as follows:

$$H(X_2 \mid X_1) = -\sum_{a, b} p_{X_1, X_2}(a, b) \log_2 p_{X_2 \mid X_1}(b \mid a)$$
We can apply this formula on our example:
In data compression, the following definition is more common:

$$H_1(T) = \frac{1}{|T|} \sum_{w \in \Sigma} |T_w| \, H_0(T_w)$$

where $T_w$ is the concatenation of the letters that follow an occurrence of $w$ in $T$.
For example, when $w = u$:
Again both definitions give the same result.
### Entropy rate
The entropy rate of a bigram is:

$$H(\mathcal{X}) = H(X_2 \mid X_1) = -\sum_{i} \pi_i \sum_{j} P_{ij} \log_2 P_{ij}$$
where $\mathbf{P}$ is the transition matrix of the time-homogeneous Markov chain and $\boldsymbol{\pi}$ is the corresponding stationary distribution.
Since the stochastic process is a Markov chain, the entropy rate is again just the entropy. In the example, $H(X_2 \mid X_{1}) \approx 0.7728$.
To verify this result, let us add the transition $(\$, U, 1)$ to our Markov chain. Now, $\boldsymbol{\pi}$ can be calculated:
The code calculates the left eigenvectors and chooses the one with the highest eigenvalue as $\boldsymbol{\pi}$. This highest eigenvalue will be $1,$ because the matrix $\mathbf{P}$ is stochastic.
We obtain

$$\boldsymbol{\pi} = (\pi_u, \pi_m, \pi_l, \pi_n, \pi_d, \pi_{\$}) = (0.375, 0.3125, 0.125, 0.0625, 0.0625, 0.0625)$$
Note that $0.375 = \frac{6}{16}$ and $0.3125 = \frac{5}{16}$. Compare these values with our previous entropy calculation. Hence, we found the same result.
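To double-check these numbers without a linear-algebra library, the stationary distribution can also be approximated by power iteration. The following sketch (in Java; the original post's eigenvector code is not shown, so this is an independent reimplementation with made-up names) encodes the bigram transition matrix, iterates $\boldsymbol{\pi} \leftarrow \boldsymbol{\pi} \mathbf{P}$, and evaluates the entropy rate:

```java
public class BigramEntropyRate {
    // States in order: u, m, l, n, d, $. Rows hold the conditional
    // probabilities from the bigram table, plus the added ($ -> u)
    // transition that makes the chain irreducible.
    public static final double[][] P = {
        {0, 3.0 / 6, 2.0 / 6, 1.0 / 6, 0, 0}, // from u
        {4.0 / 5, 0, 0, 0, 0, 1.0 / 5},       // from m
        {0, 1, 0, 0, 0, 0},                   // from l
        {0, 0, 0, 0, 1, 0},                   // from n
        {1, 0, 0, 0, 0, 0},                   // from d
        {1, 0, 0, 0, 0, 0},                   // from $
    };

    // Approximate the stationary distribution by power iteration: pi <- pi P.
    public static double[] stationary(double[][] P, int iters) {
        int k = P.length;
        double[] pi = new double[k];
        java.util.Arrays.fill(pi, 1.0 / k);
        for (int it = 0; it < iters; it++) {
            double[] next = new double[k];
            for (int i = 0; i < k; i++)
                for (int j = 0; j < k; j++)
                    next[j] += pi[i] * P[i][j];
            pi = next;
        }
        return pi;
    }

    // Entropy rate: H = -sum_i pi_i sum_j P_ij log2 P_ij
    public static double entropyRate(double[][] P, double[] pi) {
        double h = 0;
        for (int i = 0; i < P.length; i++)
            for (int j = 0; j < P.length; j++)
                if (P[i][j] > 0)
                    h -= pi[i] * P[i][j] * Math.log(P[i][j]) / Math.log(2);
        return h;
    }
}
```

The iteration recovers $\pi_u = 0.375$ and $\pi_m = 0.3125$, and the entropy rate comes out at roughly $0.7728$, matching the value above.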
## n-gram
### n-gram
Before, we only considered the case of $n$-grams with $n = 1$ or $n = 2$, but $n$ can be any number:

$$p(x_i \mid x_1, \dots, x_{i-1}) \approx p(x_i \mid x_{i-n+1}, \dots, x_{i-1})$$
For example, we can condition on $X_1, \dots, X_{15}$ and predict the $16$th letter:
The choice of $n$ depends on many factors, but in general low values like $n = 2$, $n = 3$ or $n = 4$ are more popular. When we have a vocabulary of $k$ unique words, then there are $k^n$ possibilities to combine these words. Hence, higher values of $n$ require bigger corpora.
### Entropy
To calculate the $n$-th order entropy, one simply conditions on $n-1$ random variables:

$$H(X_n \mid X_1, \dots, X_{n-1}) = -\sum_{x_1, \dots, x_n} p(x_1, \dots, x_n) \log_2 p(x_n \mid x_1, \dots, x_{n-1})$$
The definition for data compression is similar to the previous one:
Instead of manually calculating the $n$-th order entropy for all $n$, we can also use a suffix tree (see this article).
The following code constructs a compressed suffix tree by using the library sdsl-lite and then calculates the entropy for $0 \leq n \leq 10$:
Compile the code with g++ -std=c++11 -O3 -DNDEBUG -I ~/include -L ~/lib entropy.cpp -o entropy -lsdsl -ldivsufsort -ldivsufsort64
We obtain as result:
We can see that conditioning reduces entropy, thus $H_{k+1}(T) \leq H_k(T)$.
### Entropy rate
The entropy rate is:

$$H(\mathcal{X}) = \lim_{n \to \infty} \frac{1}{n} H(X_1, \dots, X_n) = \lim_{n \to \infty} H(X_n \mid X_1, \dots, X_{n-1})$$
The equality follows from stationarity.
When $n$ is finite, calculating the entropy rate does not make a lot of sense. We saw before that, for i.i.d random variables and a specific $n$, it is just the entropy.
However, when the stochastic process is not stationary, or when we take $n$-grams with $n \to \infty$, we can get different results (as long as the limit exists).
## References
[1] T. Cover and J. Thomas, Elements of Information Theory. Hoboken, N.J: Wiley-Interscience, 2006.
|
|
# Documentation
Lean.PrettyPrinter.Delaborator.TopDownAnalyze
The top-down analyzer is an optional preprocessor to the delaborator that aims to determine the minimal annotations necessary to ensure that the delaborated expression can be re-elaborated correctly. Currently, the top-down analyzer is neither sound nor complete: there may be edge-cases in which the expression can still not be re-elaborated correctly, and it may also add many annotations that are not strictly necessary.
|
|
5 Answered Questions for the topic self
05/20/20
#### Write an example of Pygmalion effect from personal experience.
I'm from 2nd semester. The example should be a bit detailed and real. Thankyou.
05/03/19
#### Does self-compassion have negative consequences?
Self-compassion is often compared with self-esteem and it seems that self-compassion is more effective and positive than self-esteem. A lot of studies have shown that excessive levels of... more
03/29/19
#### What is the purpose of self?
What is the purpose of the self word in Python? I understand it refers to the specific object created from that class, but I can't see why it explicitly needs to be added to every function as a... more
|
|
Java
Which version of Java must I use?
Any version of Java 8 to Java 11 should be fine, provided you don’t use any exotic features. Our autograder uses OpenJDK 11.
Which Java programming environment should I use?
We recommend our customized IntelliJ programming environment for Mac OS X, Windows, and Linux. If you followed our step-by-step instructions, then everything should be configured and ready to go.
If you prefer to use another environment (such as Eclipse or Sublime), that’s perfectly fine—just be sure that you know how to do the following:
• Add algs4.jar to your Java classpath.
• Enter command-line arguments.
• Use standard input and standard output (and, ideally, redirect them to or from a file).
How do I configure IntelliJ to access the libraries in algs4.jar?
If you use the provided IntelliJ project, you should be ready to go. Note that the provided IntelliJ project is configured to automatically add (and remove) import statements as needed, so you won’t type import statements explicitly.
The assignment zip file contains an IntelliJ project that you can use to develop your program. It may also contain supplemental test clients and data files (that you can use even if you do not use IntelliJ).
I haven’t programmed in Java in a while. Which material do I need to remember?
For a review of our Java programming model (including our input and output libraries), read Sections 1.1 and 1.2 of Algorithms, 4th Edition.
I’ve never programmed in Java before. Should I continue with the course?
That depends. You will need to write your programs in Java to receive feedback from the autograder. Perhaps you can use this course as an opportunity to learn Java.
Where can I find the Java source code and documentation for the algorithms, data structures, and I/O libraries from lecture and the textbook?
They are in algs4.jar. Here are links to the source code and Javadoc.
Submission and Feedback
How can I receive personalized feedback on this assignment?
• This service includes feedback on correctness (e.g. to help you pinpoint why you are failing a test, and how to fix the problem) and code quality (e.g. to help you write more human-readable and maintainable code) from experienced reviewers who are trained specifically to provide feedback on assignments in this course.
• The goals of the service are to help you better understand the assignment and develop the skills necessary to work as a professional software engineer.
• codePost is a service, started by former Princeton students, that connects qualified code reviewers with Coursera students.
How do I create a zip file for submission to Coursera?
Here are three approaches:
• Mac OS X.
• Select the required files in the Finder.
• Right-click and select Compress 5 Items.
• Rename the resulting file to percolation.zip.
• Windows.
• Select the required files in Windows Explorer.
• Right-click and select Send to -> Compressed (zipped) folder.
• Rename the resulting file to percolation (the .zip extension is automatic).
• Command line (Linux or Mac OS X or Windows Git Bash).
• Change to the directory containing the required .java files.
• Execute the command: zip percolation.zip *.java.
Can I add (or remove) methods to (or from) the prescribed APIs?
No. You must implement each API exactly as specified, with the identical set of public methods and signatures or your assignment will not be graded. However, you are encouraged to add private methods that enhance the readability, maintainability, and modularity of your program. The one exception is main()—you are always permitted to add this method to test your code, but we will not call it unless we specify it in our API.
Why is it so important to implement the prescribed API?
Writing to an API is an important skill to master because it is an essential component of modular programming, whether you are developing software by yourself or as part of a group. When you develop a module that properly implements an API, anyone using that module (including yourself, perhaps at some later time) does not need to revisit the details of the code for that module when using it. This approach greatly simplifies writing large programs, developing software as part of a group, or developing software for use by others.
Most important, when you properly implement an API, others can write software to use your module or to test it. We do this regularly when grading your programs. For example, your PercolationStats client should work with our Percolation data type and vice versa. If you add an extra public method to Percolation and call them from PercolationStats, then your client won’t work with our Percolation data type. Conversely, our PercolationStats client may not work with your Percolation data type if you remove a public method.
Can my public constructor and methods have side effects that are not described in the API?
No. For example, the PercolationStats constructor must not print to standard output (or a client would not be able to use it as a module). We usually allow public constructors and methods to have benign side effects, such as changing the state of a random number generator.
Which style and bug checkers does the autograder use? How can I configure my system to use them?
The autograder uses the following tools: Checkstyle and SpotBugs.
Here is some guidance for installing on your system.
Note that Checkstyle inspects the source code; SpotBugs inspects the compiled code.
Will I receive a deduction if I don’t adhere to the course rules for formatting and commenting my code?
The autograder (and IntelliJ) provide style feedback to help you become a better programmer. The autograder, however, does not typically deduct for stylistic flaws.
Percolation
Can my Percolation data type assume the row and column indices are between 0 and n−1?
No. The API specifies that valid row and column indices are between 1 and n.
How do I throw an IndexOutOfBoundsException?
Use a throw statement such as the following:
if (i <= 0 || i > n) throw new IndexOutOfBoundsException("row index i out of bounds");
Your code should not attempt to catch any exceptions—this will interfere with our grading scripts.
How many lines of code should my program be?
You should strive for clarity and efficiency. Our reference solution for Percolation.java is about 70 lines, plus a test client. If you are re-implementing the union–find data structure (instead of reusing the implementations provided), you are on the wrong track.
After the system has percolated, my PercolationVisualizer colors in light blue all sites connected to open sites on the bottom (in addition to those connected to open sites on the top). Is this “backwash” acceptable?
No, this is likely a bug in Percolation. It is only a minor deduction (because it impacts only the visualizer and not the experiment to estimate the percolation threshold), so don't go crazy trying to get this detail right. However, many students consider this to be the most challenging and creative part of the assignment (especially if you limit yourself to one union–find object).
~/Desktop/percolation> java-algs4 PercolationVisualizer input10.txt
PercolationStats
What should stddev() return if trials equals 1?
The sample standard deviation is undefined. We recommend returning Double.NaN.
How do I generate a site uniformly at random among all blocked sites?
Pick a site at random (by using StdRandom to generate two integers between 1 and n) and use this site if it is blocked; if not, repeat.
How precisely must the output in main() match the format given?
It should match exactly except that the autograder ignores whitespace and allows fewer digits of precision after the decimal point.
I don’t get reliable timing information when n = 200. What should I do?
Increase the size of n (say to 400, 800, and 1600), until the mean running time exceeds its standard deviation.
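The sample statistics that PercolationStats reports can be sketched as plain helpers (the class and method names here are hypothetical, not part of the prescribed API; the endpoints mean ± 1.96·s/√T are the usual 95% confidence interval):

```java
public class StatsSketch {
    // Sample mean of the trial results.
    public static double mean(double[] x) {
        double s = 0;
        for (double v : x) s += v;
        return s / x.length;
    }

    // Sample standard deviation; undefined (NaN) when there is a single trial.
    public static double stddev(double[] x) {
        if (x.length < 2) return Double.NaN;
        double m = mean(x), s = 0;
        for (double v : x) s += (v - m) * (v - m);
        return Math.sqrt(s / (x.length - 1));
    }

    // Endpoints [lo, hi] of the 95% confidence interval: mean +- 1.96*s/sqrt(T).
    public static double[] confidence95(double[] x) {
        double m = mean(x), half = 1.96 * stddev(x) / Math.sqrt(x.length);
        return new double[] { m - half, m + half };
    }
}
```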
## Testing
Testing. We provide two clients that serve as large-scale visual traces. We highly recommend using them for testing and debugging your Percolation implementation.
Visualization client. PercolationVisualizer.java animates the results of opening sites in a percolation system specified by a file by performing the following steps:
• Read the grid size n from the file.
• Create an n-by-n grid of sites (initially all blocked).
• Read in a sequence of sites (row i, column j) to open from the file. After each site is opened, draw full sites in light blue, open sites (that aren’t full) in white, and blocked sites in black using standard draw, with site (1, 1) in the upper left-hand corner.
The program should behave as in this movie and the following snapshots when used with input20.txt.
% java PercolationVisualizer input20.txt
(snapshots after 50, 100, 150, 204, and 250 open sites)
Sample data files. The zip file percolation.zip contains some sample files for use with the visualization client. Associated with each input .txt file is an output .png file that contains the desired graphical output at the end of the animation.
Interactive visualization client. InteractivePercolationVisualizer.java is similar to the first test client except that the input comes from a mouse (instead of from a file). It takes an integer command-line argument n that specifies the lattice size. As a bonus, it writes to standard output the sequence of sites opened in the same format used by PercolationVisualizer, so you can use it to prepare interesting files for testing. If you design an interesting data file, feel free to share it with us and your classmates by posting it in the discussion forums.
## Possible Progress Steps
These are purely suggestions for how you might make progress. You do not have to follow these steps.
1. Consider not worrying about backwash for your first attempt. If you’re feeling overwhelmed, don’t worry about backwash when following the possible progress steps below. You can revise your implementation once you have a better handle on the problem and have solved the problem without handling backwash.
2. For each method in Percolation that you must implement (open(), percolates(), etc.), make a list of which WeightedQuickUnionUF methods might be useful for implementing that method. This should help solidify what you’re attempting to accomplish.
3. Using the list of methods above as a guide, choose instance variables that you’ll need to solve the problem. Don't overthink this, you can always change them later. Instead, use your list of instance variables to guide your thinking as you follow the steps below, and make changes to your instance variables as you go. Hint: At minimum, you'll need to store the grid size, which sites are open, and which sites are connected to which other sites. The last of these is exactly what the union–find data structure is designed for.
4. Plan how you're going to map from a 2-dimensional (row, column) pair to a 1-dimensional union find object index. You will need to come up with a scheme for uniquely mapping 2D coordinates to 1D coordinates. We recommend writing a private method with a signature along the lines of int xyTo1D(int, int) that performs this conversion. You will need to utilize the percolation grid size when writing this method. Writing such a private method (instead of copying and pasting a conversion formula multiple times throughout your code) will greatly improve the readability and maintainability of your code. In general, we encourage you to write such modules wherever possible. Directly test this method using the main() function of Percolation.
5. Write a private method for validating indices. Since each method is supposed to throw an exception for invalid indices, you should write a private method which performs this validation process.
6. Write the open() method and the Percolation() constructor. The open() method should do three things. First, it should validate the indices of the site that it receives. Second, it should somehow mark the site as open. Third, it should perform some sequence of WeightedQuickUnionUF operations that links the site in question to its open neighbors. The constructor and instance variables should facilitate the open() method's ability to do its job.
7. Test the open() method and the Percolation() constructor. These tests should be in main(). An example of a simple test is to call open(1, 1) and open(1, 2), and then to ensure that the two corresponding entries are connected (using two calls to find() in WeightedQuickUnionUF).
8. Write the percolates(), isOpen(), and isFull() methods. These should be very simple methods.
9. Test your complete implementation using the visualization clients.
10. Write and test the PercolationStats class.
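Steps 4 and 5 above can be sketched as follows. This is a standalone illustration with made-up names; in your submission, these would be private helpers inside Percolation:

```java
public class GridIndex {
    private final int n;

    public GridIndex(int n) { this.n = n; }

    // Step 5: throw for indices outside 1..n, as the API requires.
    public void validate(int row, int col) {
        if (row < 1 || row > n || col < 1 || col > n)
            throw new IndexOutOfBoundsException("site (" + row + ", " + col + ") out of bounds");
    }

    // Step 4: map a 1-based (row, col) pair to a 0-based union-find index,
    // laying the grid out row by row (row-major order).
    public int xyTo1D(int row, int col) {
        validate(row, col);
        return (row - 1) * n + (col - 1);
    }
}
```

For example, with n = 5, site (1, 1) maps to index 0 and site (5, 5) maps to index 24.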
## Programming Tricks and Common Pitfalls
1. Do not write your own union–find data structure. Use WeightedQuickUnionUF instead.
2. Your Percolation class must use WeightedQuickUnionUF. Otherwise, it will fail the timing tests, as the autograder intercepts and counts calls to methods in WeightedQuickUnionUF.
3. It's OK to use an extra row and/or column to deal with the 1-based indexing of the percolation grid. Though it is slightly inefficient, it's fine to use arrays or union–find objects that are slightly larger than strictly necessary. Doing this results in cleaner code at the cost of slightly greater memory usage.
4. Each of the methods (except the constructor) in Percolation must use a constant number of union–find operations. If you have a for loop inside of one of your Percolation methods, you're probably doing it wrong. Don't forget about the virtual-top / virtual-bottom trick described in lecture.
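To illustrate pitfalls 3 and 4, here is a minimal sketch of the virtual-top/virtual-bottom wiring. The nested UF class is a deliberately naive quick-find stand-in so the snippet compiles on its own; in your submission you must use WeightedQuickUnionUF (pitfall 1), and this sketch ignores backwash and input validation:

```java
public class VirtualSiteSketch {
    // Naive quick-find stand-in for WeightedQuickUnionUF, only so the
    // sketch is self-contained; use the algs4 class in the assignment.
    static class UF {
        int[] id;
        UF(int n) { id = new int[n]; for (int i = 0; i < n; i++) id[i] = i; }
        void union(int p, int q) {
            int a = id[p], b = id[q];
            for (int i = 0; i < id.length; i++) if (id[i] == a) id[i] = b;
        }
        boolean connected(int p, int q) { return id[p] == id[q]; }
    }

    private final int n, top, bottom; // indices n*n and n*n+1 are the virtual sites
    private final boolean[] isOpen;
    private final UF uf;

    public VirtualSiteSketch(int n) {
        this.n = n;
        top = n * n;
        bottom = n * n + 1;
        isOpen = new boolean[n * n];
        uf = new UF(n * n + 2); // all grid sites plus the two virtual sites
    }

    private int idx(int row, int col) { return (row - 1) * n + (col - 1); }

    public void open(int row, int col) { // 1-based indices, validation omitted
        int i = idx(row, col);
        isOpen[i] = true;
        if (row == 1) uf.union(top, i);    // first row touches the virtual top
        if (row == n) uf.union(bottom, i); // last row touches the virtual bottom
        int[][] nbrs = {{row - 1, col}, {row + 1, col}, {row, col - 1}, {row, col + 1}};
        for (int[] p : nbrs)
            if (p[0] >= 1 && p[0] <= n && p[1] >= 1 && p[1] <= n && isOpen[idx(p[0], p[1])])
                uf.union(i, idx(p[0], p[1]));
    }

    // With virtual sites, percolation is a single connectivity query.
    public boolean percolates() { return uf.connected(top, bottom); }
}
```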
|
|
# Regularity of infinite concatenation
It is well-known that an infinite union of regular languages is not necessarily regular, since every language can be written as a union of singletons. What about infinite concatenations?
Let $$\{ L_z : z \in \mathbb{Z} \}$$ be a family of languages, cofinitely many of which contain $$\epsilon$$. We can define the infinite concatenation of the family as the collection of all words of the form $$w_{i_1} \ldots w_{i_n}$$, where $$i_1 < \cdots < i_n$$ and $$w_{i_j} \in L_{i_j}$$. (The definition makes sense if we replace $$\mathbb{Z}$$ with any linear order.)
Is the infinite concatenation of a family of regular languages necessarily regular?
The answer is negative. Let $$L_n = \{\epsilon, a^{3^n}\}$$ for $$n \geq 0$$, and let $$L_n = \{\epsilon\}$$ for $$n < 0$$. The infinite concatenation of the collection $$\{ L_n : n \in \mathbb{Z} \}$$ is $$L = \{ a^m : \text{m is the sum of distinct powers of 3} \}.$$ Since $$L$$ is unary, in order to show that $$L$$ is not regular, we need to show that the set $$S = \{ m : \text{m is the sum of distinct powers of 3} \}$$ is not eventually periodic. We can see this in many ways. For example, $$S$$ is infinite but has density $$0$$ in the limit.
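The density argument is easy to check computationally. In the sketch below (my own addition, not part of the original answer), membership in $S$ is tested via base-3 digits: $m \in S$ exactly when the base-3 representation of $m$ uses only the digits 0 and 1, so only $2^k$ of the first $3^k$ natural numbers lie in $S$:

```java
public class DistinctPowersOf3 {
    // m is a sum of distinct powers of 3 exactly when every base-3 digit of m
    // is 0 or 1 (a digit 2 would require using some power of 3 twice).
    public static boolean inS(long m) {
        for (; m > 0; m /= 3)
            if (m % 3 == 2) return false;
        return true;
    }

    // Members of S below 3^k are the subset sums of {3^0, ..., 3^(k-1)}
    // (including 0, the empty sum), so there are exactly 2^k of them; the
    // density 2^k / 3^k tends to 0, and an infinite set of density 0 cannot
    // be eventually periodic.
    public static long countUpTo(long limit) {
        long c = 0;
        for (long m = 0; m < limit; m++) if (inS(m)) c++;
        return c;
    }
}
```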
|
|
# What is the practical benefit of a function being injective? surjective?
I have learned that an injective function is one such that no two elements of the domain map to the same value in the codomain.
Example: The function $x \mapsto x^2$ is injective when the domain is positive numbers but is not injective when the domain is all numbers because both $(-2)^2$ and $2^2$ map to the same value, $4$.
Is there an advantage of an injective function over a non-injective function? What is the practical benefit of injective functions? Or perhaps there is an advantage to a function not being injective?
I have also learned about surjective functions: a surjective function is one such that for each element in the codomain there is at least one element in the domain that maps to it.
What is the benefit of a function being surjective? Is there a danger to using functions that are not surjective?
I'd like to have a deeper understanding of injective and surjective than simply be able to parrot back their definitions. Please help.
-
An advantage? In what context? A lot of the time you don't use functions, you are given functions and you have to deal with them. Understanding the properties that a function has (such as injectivity or surjectivity) means that you can better understand what it does. For example, if a function is injective and surjective, it is invertible. This breaks testing for invertibility into two easier tests. – Aaron Aug 4 '12 at 14:36
It's not about benefits or dangers. The words "injective" and "surjective" are useful words for describing a mathematical situation. – Qiaochu Yuan Aug 4 '12 at 14:44
Careless use of non-surjective functions can lead to your presence on certain government watch lists... ;)
There is no "danger" to a function that is or isn't injective or surjective. However, in certain cases, you need to be certain that the function (morphism) has these properties, otherwise the property you are trying to prove would not be proven.
For example, one type of function is a homomorphism from one group to another. A special type of homomorphism is an isomorphism; existence of isomorphisms between groups are extremely strong, and indicate that one group is essentially the same as another, which is a very powerful statement indeed. For a homomorphism to be an isomorphism, it must be both injective and surjective -- called bijective.
Another example is the invertibility of a function. A function $f$ has an inverse function $f^{-1}$ if and only if it is bijective. This is why $x^2$ has no inverse (this is really an incomplete statement about which a lot more can be said; I am trying to paint a broad picture here).
So the advantages of a function having these properties depends a lot on your context; but as soon as you require these properties, the ability to demonstrate them for your function becomes critically important, and can be very powerful.
-
Ed and Aaron, thank you very much for explaining the practical benefits of injectivity and surjectivity.
I have taken what I learned from you and cast it into my own thoughts and experiences. Please let me know of any errors.
## Motivation
When I go hiking, I want to be able to retrace my steps and get back to my starting point.
When I go to a store, I want to be able to return home.
When I swim out from the shore, I want to be able to get back to the shore.
Going somewhere and then coming back to where one started is important in life.
And it is important in mathematics.
And it is important in functional programming.
## Domain, Codomain, and Inverse
If a function maps a set of elements (the domain) to a set of values (the codomain) then it is often useful to have another function that can take the elements in the codomain and send them back to their original domain values. The latter is called the inverse function.
In order for a function to have an inverse function, it must possess two important properties, which I explain now.
## The Injective Property
Let the domain be the set of days of the week. In Haskell one can create the set using a data type definition such as this:
data Day = Monday | Tuesday | Wednesday | Thursday | Friday | Saturday | Sunday
Let the codomain be the set of breakfasts. One can create this set using a data type definition such as this:
data Breakfast = Eggs | Cereal | Toast | Oatmeal | Pastry | Ham | Grits | Sausage
Now I create a function that maps each element of the domain to a value in the codomain.
Here is one such function. The first line is the function signature and the following lines are the function definition:
f :: Day -> Breakfast
f Monday = Eggs
f Tuesday = Cereal
f Wednesday = Toast
f Thursday = Oatmeal
f Friday = Pastry
f Saturday = Ham
f Sunday = Grits
An important thing to observe about the function is that no two elements in the domain map to the same codomain value. This function is called an injective function.
[Definition] An injective function is one such that no two elements in the domain map to the same value in the codomain.
Contrast with the following function, where two elements from the domain -- Monday and Tuesday -- both map to the same codomain value -- Eggs.
g :: Day -> Breakfast
g Monday = Eggs
g Tuesday = Eggs
g Wednesday = Toast
g Thursday = Oatmeal
g Friday = Pastry
g Saturday = Ham
g Sunday = Grits
The function is not injective.
Can you see a problem with creating an inverse function for g :: Day -> Breakfast?
Specifically, what would an inverse function do with Eggs? Map it to Monday? Or map it to Tuesday? That is a problem.
[Important] If a function does not have the injective property then it cannot have an inverse function.
In other words, I can't find my way back home.
## The Surjective Property
There is a second property that a function must possess in order for it to have an inverse function. I explain that next.
Did you notice in the codomain that there are 8 values:
data Breakfast = Eggs | Cereal | Toast | Oatmeal | Pastry | Ham | Grits | Sausage
So there are more values in the codomain than in the domain.
In function f :: Day -> Breakfast there is no domain element that mapped to the codomain value Sausage.
So what would an inverse function do with Sausage? Map it to Monday? Tuesday? What?
The function is not surjective.
[Definition] A surjective function is one such that for each element in the codomain there is at least one element in the domain that maps to it.
[Important] If a function does not have the surjective property, then it does not have an inverse function.
[Important] In order for a function to have an inverse function, it must be both injective and surjective.
## Injective + Surjective = Bijective
One final piece of terminology: a function that is both injective and surjective is said to be bijective. So, in order for a function to have an inverse function, it must be bijective.
## Recap
If you want to be able to come back home after your function has taken you somewhere, then design your function to possess the properties of injectivity and surjectivity.
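The same idea can be expressed outside Haskell. In the small sketch below (Java, with hypothetical names), building an inverse lookup table from a finite map fails precisely when the forward map is not injective, and when the map is not surjective onto the intended codomain the inverse is merely partial (lookups of unhit values return null):

```java
import java.util.HashMap;
import java.util.Map;

public class Invert {
    // Build an inverse lookup table. Throws exactly when the forward map is
    // not injective, i.e. when two keys share a value.
    public static <K, V> Map<V, K> invert(Map<K, V> f) {
        Map<V, K> inv = new HashMap<>();
        for (Map.Entry<K, V> e : f.entrySet())
            if (inv.put(e.getValue(), e.getKey()) != null)
                throw new IllegalArgumentException("not injective at value " + e.getValue());
        return inv;
    }
}
```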
-
Yes, this is a nice summary overview of the properties. Another example I have seen used: say you have a function mapping archers' arrows onto foot soldiers of an enemy's army. You surely want this function to be injective! – Emily Aug 4 '12 at 22:49
Actually, you only need the function to be injective in order to "take you back home" (that is, to allow a pseudoinverse). – F M Aug 5 '12 at 17:37
If you are studying algebraic structures (let's say rings) an injective homomorphism $\phi: R \longrightarrow S$ provides you with a way to interpret one ring as a subring of the other. After all, this just means that $R$ will be isomorphic to $\mathrm{Im}(\phi)$. For example, let $\phi: \mathbb{Q} \longrightarrow \mathbb{Q}[X]$ be defined by $\phi(a) = a + 0x + 0x^{2} + \cdots$. One can then check (you check!) that $\phi$ is injective and a homomorphism.
And indeed, $\mathbb{Q}$ can be interpreted as a subsystem of the ring of polynomials with coefficients in $\mathbb{Q}$, just by viewing each rational as a constant polynomial.
-
|
|
# Will a ZEBRA battery retain its charge when cooled down?
A ZEBRA battery needs a minimum temperature to operate. That is because only then the ions move freely enough.
The electro-chemical reaction is as follows:
$$NiCl_2 + 2Na \longrightarrow Ni + 2NaCl + e$$
So what if I cooled a charged battery down to room temperature?
Since the one electrode will still consist of $NiCl_2$, the charge should be "frozen" in.
|
|
# Why can't I write down tilde? Is there any MathJax syntax?
In an answer, I wanted to write ∼, but it wasn’t rendering; I think Markdown was swallowing it.
So I tried \tilde, but the tilde ended up on top of the next letter. How can I write a tilde in MathJax?
In (La)TeX, `~` produces a (non-breaking) space rather than a tilde character, while `\tilde{}` places a tilde accent over the next character. Use e.g. `\sim` instead. More generally, for such LaTeX questions, check out Detexify.
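For reference, a minimal comparison of the commands:

```latex
$x \sim y$     % binary relation symbol between x and y
$x {\sim} y$   % same symbol with tighter, ordinary-symbol spacing
$\tilde{x}$    % accent: a small tilde placed over the x
```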
• Minor note: if the spacings don't look right, use {\sim}. The bare \sim is a binary operator and is spaced accordingly, which might be too much space for several common applications. Feb 21 at 9:55
• $\uparrow$ Good point.
|
|
# AES-128 as compression function in Merkle-Damgard construction
Consider a compression function $$f : A \times A \to A$$. A basic iterated construction is given by:
$$W_0 = IV$$
$$W_1 = f(W_0, m_1)$$
$$W_2 = f(W_1, m_2)$$
...
$$W_n = f(W_{n-1}, m_n)$$
$$W_n$$ is the output of the hash function, $$m_1,m_2 . . . m_n$$ is the message and $$IV$$ is a constant.
What would be the simplest way to implement AES-128 as the compression function? And would it be one way?
Excuse my ignorance, I am very new to the topic. My very wild stab in the dark is that AES-128 can be used in a way that feeds its own produced ciphertext blocks back into itself as the key.
• This question is basically covered by this more general question – rmalayter Sep 27 '18 at 16:24
• @rmalayter Not sure that addresses the main issue of AES being a random permutation rather than a random function. What is expected of a truly compressive function? Perhaps crypto.stackexchange.com/q/15579/23115? – Paul Uszak Sep 27 '18 at 19:30
• You can use Merkle-Damgård as stated. However, keep in mind that the block size of AES is too small for secure hash sizes. – kelalaka Sep 30 '18 at 21:47
This short thesis describes some of the basic ways to build hash functions from block ciphers via the Merkle-Damgård construction. The construction commonly used in MD5 and related designs is in fact an instance of the Davies-Meyer construction. It can be applied directly to your scheme (you have to add length padding and a length field to get a secure hash function, of course): let $$E_k(m)$$ be a block cipher (like AES) with key $$k$$ and message block $$m$$; then take $$W_i = f(W_{i-1}, m_i) = E_{m_i}(W_{i-1}) \oplus W_{i-1}$$ for $$i=1,\ldots,n$$. This yields a secure hash function, provided $$E$$ is a secure block cipher in the appropriate sense (essentially indistinguishable from a random permutation).
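A minimal sketch of this Davies-Meyer iteration in Python. Note the stand-in: instead of real AES-128, a toy Feistel permutation keyed by the message block plays the role of $E$, so this illustrates the structure only and is not a secure hash. With a real AES implementation you would swap `_feistel_encrypt` for AES encryption; everything else stays the same.

```python
import hashlib

BLOCK = 16  # 128-bit blocks, as in AES-128

def _feistel_encrypt(key: bytes, block: bytes, rounds: int = 8) -> bytes:
    """Toy keyed permutation on 16-byte blocks (a stand-in for AES).
    A Feistel network is invertible, so this really is a block cipher,
    just not a cryptographically serious one."""
    l, r = block[:8], block[8:]
    for i in range(rounds):
        f = hashlib.sha256(key + bytes([i]) + r).digest()[:8]
        l, r = r, bytes(a ^ b for a, b in zip(l, f))
    return l + r

def davies_meyer_hash(message: bytes, iv: bytes = b"\x00" * BLOCK) -> bytes:
    """W_i = E_{m_i}(W_{i-1}) XOR W_{i-1}, with Merkle-Damgard
    strengthening (0x80 padding plus an 8-byte length field)."""
    padded = message + b"\x80"
    padded += b"\x00" * ((-len(padded) - 8) % BLOCK)
    padded += len(message).to_bytes(8, "big")
    w = iv
    for i in range(0, len(padded), BLOCK):
        m_i = padded[i:i + BLOCK]
        e = _feistel_encrypt(m_i, w)      # E keyed by the message block
        w = bytes(a ^ b for a, b in zip(e, w))  # feed-forward XOR
    return w
```

The feed-forward XOR is what makes the compression function one-way even though the block cipher itself is invertible: without it, anyone knowing $m_i$ could decrypt and walk the chain backwards.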
|
|
# Population models of Sources
## Ground-based detectors sources
• Black Hole Binary Mergers
• The merger rate density is expressed as: $$\dot{n}(z, m_1,m_2,\chi)=\mathcal{R}(z)f(m_1)\pi(q)P(\chi)/m_1,$$ where $f(m_1)$ is the mass function of the primary black hole, and $\pi(q)$ and $P(\chi)$ are the probability distributions of the mass ratio $q\equiv m_2/m_1$ and the effective spin $\chi$ respectively. $\mathcal{R}(z)$ is often referred to as the cosmic merger rate density: $$\mathcal{R}(z_m)=\mathcal{R}_n\int^\infty_{z_m}\psi(z_f)P(z_m|z_f,\tau)dz_f,$$ where $\psi(z)$ is the Madau-Dickinson star formation rate: $$\psi(z)=\frac{(1+z)^\alpha}{1+(\frac{1+z}{C})^\beta},$$ with $\alpha=2.7$, $\beta=5.6$, $C=2.9$ (Madau & Dickinson 2014), and $P(z_m|z_f,\tau)$ is the probability that the BBH merges at $z_m$ given that the binary formed at $z_f$, which we refer to as the distribution of delay time, with the form: $$P(z_m|z_f,\tau)=\frac{1}{\tau}\exp{\left[-\frac{t_f(z_f)-t_m(z_m)}{\tau}\right]}\frac{dt}{dz}.$$ In the above equation, $t_f$ and $t_m$ are the lookback times as functions of $z_f$ and $z_m$ respectively.
Therefore, $\mathcal{R}_n$ and $\tau$ are the pair of parameters that define the $z$ dependence of the merger rate density of the population. In the figure below, we plot $\mathcal{R}(z)$ corresponding to different pairs of $\mathcal{R}_n$ and $\tau$:
The default parameters for BBH are set to $\mathcal{R}_0=13\,$Gpc$^{-3}$ yr$^{-1}$ and $\tau=3$ Gyr, which are compatible with the local merger rate of BBH found with GWTC-2.
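The convolution above can be evaluated numerically. The sketch below assumes flat ΛCDM parameters ($H_0 = 67.7$ km/s/Mpc, $\Omega_m = 0.31$; my choice, not stated on this page) and returns $\mathcal{R}(z_m)$ up to the overall normalisation $\mathcal{R}_n$:

```python
import math

H0 = 67.7 / 3.086e19           # Hubble constant in s^-1 (67.7 km/s/Mpc, assumed)
OMEGA_M, OMEGA_L = 0.31, 0.69  # flat LCDM density parameters (assumed)
GYR = 3.156e16                 # seconds per Gyr

def hubble(z):
    return H0 * math.sqrt(OMEGA_M * (1 + z) ** 3 + OMEGA_L)

def dtdz(z):
    """Lookback-time element dt/dz, in Gyr per unit redshift."""
    return 1.0 / ((1 + z) * hubble(z)) / GYR

def lookback_time(z, n=2000):
    """Lookback time t(z) in Gyr (trapezoidal rule)."""
    if z == 0:
        return 0.0
    h = z / n
    s = 0.5 * (dtdz(0.0) + dtdz(z)) + sum(dtdz(i * h) for i in range(1, n))
    return s * h

def psi(z, alpha=2.7, beta=5.6, c=2.9):
    """Madau-Dickinson star formation rate (arbitrary normalisation)."""
    return (1 + z) ** alpha / (1 + ((1 + z) / c) ** beta)

def merger_rate(z_m, tau=3.0, z_max=10.0, n=200):
    """R(z_m) up to the normalisation R_n: the SFR convolved with an
    exponential delay-time distribution (integral truncated at z_max)."""
    t_m = lookback_time(z_m)
    h = (z_max - z_m) / n
    def w(zf):
        delay = lookback_time(zf) - t_m  # t_f - t_m in Gyr
        return psi(zf) * math.exp(-delay / tau) / tau * dtdz(zf)
    s = 0.5 * (w(z_m) + w(z_max)) + sum(w(z_m + i * h) for i in range(1, n))
    return s * h
```

With $\tau = 3$ Gyr the merger rate tracks the star formation history with a lag, peaking at a lower redshift than the SFR itself.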
We use a uniform distribution between [$q_{\rm cut}$, 1] for $\pi(q)$ and assume $\chi$ follows a Gaussian distribution centered at zero with standard deviation $\sigma_{\chi}$.
For the mass function, we provide two parameterizations:
parameterization I:
\begin{eqnarray} p(m_1) \propto & \begin{cases} \exp\left(-\frac{c}{m_1-\mu}\right)(m_1-\mu)^{-\gamma}, & m_1\le m_{\text{cut}} \\ 0, & m_1>m_{\text{cut}}\\ \end{cases} \label{eqn:BBH-Pop2-mass} \end{eqnarray} The distribution $p(m_1)$ is defined for $m_1>\mu$; it has a power-law tail of index $-\gamma$ and a cut-off above $m_{\text{cut}}$. The normalization of $p(m_1)$ is $$c^{1-\gamma}\Gamma\left(\gamma-1, \frac{c}{m_{\text{cut}}-\mu}\right),$$ where $\Gamma(a,b)$ is the upper incomplete gamma function. The plot below is an illustration of the mass function. We set $\mu=3$, $\gamma=2.5$, $c=6$, $m_{\text{cut}}=95\,M_\odot$ as default.
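The stated normalisation can be checked numerically: substituting $u=c/(m_1-\mu)$ turns the integral of $p(m_1)$ into $c^{1-\gamma}\int_{c/(m_{\text{cut}}-\mu)}^{\infty}u^{\gamma-2}e^{-u}\,du$. A stdlib-only check with the default parameters (trapezoidal integration; the tail truncation point $u=60$ is my choice):

```python
import math

MU, GAMMA, C_PAR, M_CUT = 3.0, 2.5, 6.0, 95.0  # default parameters

def p_unnorm(m1):
    """Unnormalised p(m1) of parameterization I (zero outside support)."""
    if m1 <= MU or m1 > M_CUT:
        return 0.0
    return math.exp(-C_PAR / (m1 - MU)) * (m1 - MU) ** (-GAMMA)

def trapz(f, a, b, n=50000):
    h = (b - a) / n
    return h * (0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n)))

# Direct numerical integral of p(m1) over its support.
numeric = trapz(p_unnorm, MU, M_CUT)

# Closed form c^{1-gamma} * Gamma(gamma-1, c/(m_cut-mu)), with the upper
# incomplete gamma function itself evaluated numerically.
u0 = C_PAR / (M_CUT - MU)
closed = C_PAR ** (1 - GAMMA) * trapz(lambda u: u ** (GAMMA - 2) * math.exp(-u), u0, 60.0)
```

The two numbers agree to a few parts in ten thousand, confirming the substitution.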
parameterization II:
$f(m_1)$ in this parameterization has an extra Gaussian peak component $p_{\rm{peak}}(m_1)$ on top of that in parameterization I: $$p_{\rm{peak}}(m_1)=A_{\rm{peak}}\exp\left[-\frac{(m_1-m_{\rm{peak}})^2}{2\sigma_{\rm{peak}}^2}\right];$$ the normalization of the alternative $p(m_1)$ is: $$c^{1-\gamma}\Gamma\left(\gamma-1, \frac{c}{m_{\text{cut}}-\mu}\right)+\sqrt{2\pi}\sigma_{\rm{peak}}A_{\rm{peak}}.$$ Our default parameters for the peak component are $A_{\rm{peak}}=0.002$, $m_{\rm{peak}}=40\,M_\odot$, $\sigma_{\rm{peak}}=1\,M_\odot$.
• Double Neutron star Mergers
• The merger rate density is expressed as: $$\dot{n}(z, m_1, m_2,\chi)=\mathcal{R}(z)p(m_1)p(m_2)P(\chi),$$ where $\mathcal{R}(z)$ takes the same form as in the BBH population model, but with a different default setting: $\mathcal{R}_0=400\,$Gpc$^{-3}$ yr$^{-1}$ and $\tau=3$ Gyr, which are compatible with the local merger rate of DNS found with GWTC-2;
$p(m_{1,2})$, the mass function of the neutron star, is a truncated Gaussian with upper and lower cuts. The parameters defining the mass function are therefore:
the peak of the Gaussian $\overline{m}$, the dispersion $\sigma_m$, the upper mass cut $m_{\rm cut, high}$ and the lower mass cut $m_{\rm cut, low}$.
The default parameters are: $\overline{m}=1.4\,M_\odot$, $\sigma_m=0.5\,M_\odot$, $m_{\rm cut, low}=1.1\,M_\odot$, $m_{\rm cut, high}=2.5\,M_\odot$; we use the same form of $\chi$ distribution as for Black Hole Binary Mergers.
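A simple way to draw masses from this truncated Gaussian is rejection sampling (a sketch; the function name is mine):

```python
import random

def sample_ns_mass(mean=1.4, sigma=0.5, lo=1.1, hi=2.5):
    """Draw one NS mass (solar masses) from the truncated Gaussian:
    sample the untruncated Gaussian and reject draws outside the cuts."""
    while True:
        m = random.gauss(mean, sigma)
        if lo <= m <= hi:
            return m
```

With these defaults roughly 70% of proposals are accepted, so the loop terminates quickly; for much narrower cuts an inverse-CDF sampler would be preferable.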
• Black Hole-Neutron star Mergers
• For the population model of BHNS, we assume the merger rate density to be: $$\dot{n}(z, m_1, m_2, \chi)=\mathcal{R}(z)f(m_\bullet)p(m_{\rm n})P(\chi),$$ where $\mathcal{R}(z)$ takes the same form as in BBH and DNS, with a different default setting: $\mathcal{R}_0=5\,$Gpc$^{-3}$ yr$^{-1}$ and $\tau=3$ Gyr; $f(m_\bullet)$ is the mass function of the BH, which is the same as in BBH (we denote the population model without/with the peak component in $f(m_\bullet)$ as parameterization I/II); $p(m_{\rm n})$ is the mass function of the NS, which is the same as in DNS. We use the same form of $\chi$ distribution as for Black Hole Binary Mergers.
## Space-Borne detectors sources
• Massive Black Hole Binary Mergers
• We use the MBHB catalogues from Klein et al. (2016). There are 3 population models, namely pop3, Q3_nodelays and Q3_delays; a description of each can be found in Klein et al.'s paper. For each population model there are 10 catalogues, each corresponding to one realisation of all sources in the Universe within one year. In the figure below, we plot $M_{\rm tot}$ (total mass) and $z$ of MBHB in the Universe within one year for the three population models as a direct illustration of the populations.
• Galactic White Dwarf Binaries
• Nelemans et al. (2001):
We use the synthetic catalogue of DWD in the whole Galaxy from Nelemans et al. (2001). In the figure below, we plot the joint distribution of the frequencies $f_{\rm s}=2/P$ (where $P$ is the orbital period of the binary) and the intrinsic amplitudes $A=2\left(G\mathcal{M}\right)^{5/3}\left(\pi f\right)^{2/3}/(c^4d)$ of the GWs emitted by the binaries in the catalogue.
Verification Binaries: Besides the synthetic catalogue, we also include a catalogue of 81 previously known DWDs (Huang et al. 2020) as verification binaries (VBs). These VBs are plotted in the above figure with star markers.
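For reference, the intrinsic amplitude formula can be evaluated directly (a sketch; the unit choices, solar masses, Hz and parsecs, are mine):

```python
import math

G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
C_LIGHT = 2.998e8   # speed of light, m/s
MSUN = 1.989e30     # solar mass, kg
PC = 3.086e16       # parsec, m

def chirp_mass(m1, m2):
    """Chirp mass M_c = (m1 m2)^{3/5} / (m1 + m2)^{1/5}, same units as inputs."""
    return (m1 * m2) ** 0.6 / (m1 + m2) ** 0.2

def intrinsic_amplitude(m1_sun, m2_sun, f_hz, d_pc):
    """A = 2 (G M_c)^{5/3} (pi f)^{2/3} / (c^4 d), with f = 2/P the GW frequency."""
    mc = chirp_mass(m1_sun, m2_sun) * MSUN
    return 2 * (G * mc) ** (5 / 3) * (math.pi * f_hz) ** (2 / 3) / (C_LIGHT ** 4 * d_pc * PC)
```

For a typical 0.6+0.6 solar-mass DWD at 1 kpc emitting at 2 mHz this gives an amplitude of order $10^{-22}$, consistent with the strain scale of the figure.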
• Extreme Mass Ratio Inspiral
• In the Toolbox, we use the simulated catalogues corresponding to population models M1-M12 in Babak et al. (2017). For each population model there are ten realizations of catalogues, which contain the detectable EMRIs within one year under their assumptions on the LISA noise properties. The distributions of $\mu$, $M$ and $D$ in the catalogues are plotted in the following figure. We exclude M7 and M12 from the Toolbox, because those population models ignore direct plunges, so their total numbers of EMRIs are about one order of magnitude larger than the others, which would make the computational workload about ten times bigger.
|
|
2021-11-05, 22:17 #2355
techn1ciaN
Oct 2021
U.S. / Maine
2048 Posts
Quote:
Originally Posted by techn1ciaN If you check out exponents for PRP-CF on the manual assignment page (or re-load any of your automatically fetched PRP-CF work from the replacement lines in https://www.mersenne.org/workload/), the work lines given look like ...
I have apparently misstated the issue. What I said initially is only true for the replacement work lines in your Assignments. The lines given by the manual assignment page actually look like:
Code:
PRP=[AID],1,2,[exponent],-1,99,0,"[known factor(s)]"
If you load this, Prime95 sets residue type 5 automatically. So, there is no problem (not even with the P-1 tests_saved value) unless you are specifically trying to use the replacement work lines tool.
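For reference, such a line is a comma-separated record in which the quoted known-factor list may itself contain commas; a quick sketch of splitting it (the field handling is informal, based on this thread rather than official documentation):

```python
def parse_prp_line(line: str):
    """Split a PRP worktodo line into its fields; a quoted known-factor
    list is kept together as a single field."""
    assert line.startswith("PRP=")
    body = line[len("PRP="):]
    fields, cur, in_quotes = [], "", False
    for ch in body:
        if ch == '"':
            in_quotes = not in_quotes  # toggle; quotes are not kept
        elif ch == "," and not in_quotes:
            fields.append(cur)
            cur = ""
        else:
            cur += ch
    fields.append(cur)
    return fields
```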
2021-11-05, 22:55 #2356
James Heinrich
"James Heinrich"
May 2004
ex-Northern Ontario
5×719 Posts
Quote:
Originally Posted by techn1ciaN only true for the replacement work lines in your Assignments
OK, that I can fix. I can either hardcode them to show base,type as 3,5, or just leave those two fields out entirely and let Prime95 act on its default behavior. I have opted for the latter (do not include these fields), same as manual assignment.
2021-11-10, 09:24 #2357
S485122
"Jacob"
Sep 2006
Brussels, Belgium
32·197 Posts
www.mersenne.org down ?
It worked until about 09:10 UTC. The server responds to pings and FTP but not to HTTP(S) requests.
Last fiddled with by S485122 on 2021-11-10 at 09:33 Reason: FTP works
2021-11-10, 10:05 #2358
S485122
"Jacob"
Sep 2006
Brussels, Belgium
33558 Posts
Quote:
Originally Posted by S485122 It worked until about 09:10 UTC. The server responds to pings and FTP but not to HTTP(S) requests.
OK again.
Thanks to whoever restored the service.
Last fiddled with by S485122 on 2021-11-10 at 10:07 Reason: thanks
2021-11-10, 17:29 #2359
techn1ciaN
Oct 2021
U.S. / Maine
8416 Posts
Quote:
Originally Posted by James Heinrich OK, that I can fix. I can either hardcode them to show base,type as 3,5, or just leave those two fields out entirely and let Prime95 act on its default behavior. I have opted for the latter (do not include these fields), same as manual assignment.
Just tested by checking out a PRP-CF assignment and loading it from my replacement work lines. Starts as intended now — thanks.
Might you also change tests_saved in these lines to 0, for full consistency with what the manual assignment page outputs?
2021-11-11, 00:41 #2360
James Heinrich
"James Heinrich"
May 2004
ex-Northern Ontario
5×719 Posts
Quote:
Originally Posted by techn1ciaN Might you also change tests_saved in these lines to 0, for full consistency with what the manual assignment page outputs?
As best I can see what the page is trying to do, it shows 0 if a decent amount of P-1 has already been done; if it hasn't then it shows 1 for PRP-DC and 2 for PRP. Whether this makes sense or not I don't know, I didn't write the logic and I'm not that knowledgeable on PRP assignments.
2021-11-11, 05:54 #2361
slandrum
Jan 2021
California
22·3·52 Posts
Quote:
Originally Posted by James Heinrich As best I can see what the page is trying to do, it shows 0 if a decent amount of P-1 has already been done; if it hasn't then it shows 1 for PRP-DC and 2 for PRP. Whether this makes sense or not I don't know, I didn't write the logic and I'm not that knowledgeable on PRP assignments.
I think what was being asked for - if the exponent has already been proven composite, then tests saved cannot be more than 0.
2021-11-11, 11:13 #2362
techn1ciaN
Oct 2021
U.S. / Maine
22·3·11 Posts
Quote:
Originally Posted by slandrum I think what was being asked for - if the exponent has already been proven composite, then tests saved cannot be more than 0.
This is essentially the idea. If someone is checking out an exponent for PRP-CF, then a full-length primality test will be run, by definition.
Of course, with how_far_factored=99, the issue should be purely cosmetic in all practical cases.
2021-11-11, 11:37 #2363
S485122
"Jacob"
Sep 2006
Brussels, Belgium
33558 Posts
Exponent Status Distribution errors
The Exponent Status Distribution (the menu item "Current Progress / Work Distribution Map") has some wrong totals, for instance in the PrimeNet Activity Summary dated 2021-11-11 10:00 UTC. The 10M range has a spurious exponent counted as having only one erroneous test. That is wrong, since all exponents of that range have long been verified or factored.
There is some logic error in the counting; I will illustrate it with the 105M-106M range.
- The number of untested exponents is 4, which is indeed the number of assigned first-time tests. But the NO-LL count in the table is 3, one too low!
- The number of factored Mersennes is correct in the table.
- The table has the correct total number of exponents (sum of the number of exponents for which the corresponding Mersenne number is prime, factored, verified composite, tested once, got an erroneous result and untested.)
- I counted 4775 exponents with unverified test(s) (some with a mix of LL and PRP without Cert). The table has only 4769; 6 are missing.
- I counted 14281 verified exponents (LL or PRP double-checked or certified PRP.) The table counts 14288, 7 too many.
The differences do add up to 0 in this range. Other ranges have too few assigned and available exponents: 3M is missing 1, 23M : 1, 30M : 1, 59M : 3, 60M : 2, 61M : 16, 62M : 1, 63M : 1, 64M : 1, 104M : 1, 106M : 13, 107M : 33, 108M : 30, 109M : 47, 110M : 27, 111M : 11, 112M : 13, 113M : 12, 114M : 7, 115M : 1, 122M : 1, 124M : 1, 126M : 29, 149M : 1, 150M : 1, 160M : 3, 164M : 1, 165M : 2, 166M : 5, 172M : 1, 177M : 1, 184M : 2, 185M : 2, 188M : 1, 190M : 1, 332M : 2, 333M : 1, 371M : 1, 385M : 1, 623M : 1 and 800M : 1. Some of those differences are quite persistent over months (others, more transient, might be due to cut-off issues.)
There is at the moment no range with more assigned than available exponents (the ECM range is a special case), there have been some in the past.
2021-11-16, 12:00 #2364 sdbardwick Aug 2002 North San Diego County 19×37 Posts DB server might be having problems. Homepage opens, but anything involving DB seems to stall, including login. EDIT: Appears to be back online. Last fiddled with by sdbardwick on 2021-11-16 at 12:07
2021-12-09, 19:51 #2365 slandrum Jan 2021 California 22·3·52 Posts Problem with P-1 GHz days Definitely need new accounting for the new P-1 processes being tested in prime95/mprime. Attached Thumbnails
|
|
2 editions of Polarization studies in D-P elastic scattering using a polarized proton target found in the catalog.
Polarization studies in D-P elastic scattering using a polarized proton target.
Richard Lawrence Maughan
# Polarization studies in D-P elastic scattering using a polarized proton target.
Written in English
Edition Notes
Thesis (Ph.D.) - Univ. of Birmingham, Dept. of Physics.
ID Numbers
Open LibraryOL19886391M
We describe the first spin-transfer experiment performed for the π to pp reaction. Three spin-transfer parameters were measured, K'_{LS'}, K'_{SS'} and K'_{NN}, each at a single angle, for a number of energies spanning the Δ resonance of this system. The apparatus employed in this experiment consisted of established systems, including a dynamically polarized deuteron target and a proton polarimeter.
The incorporation of polymers or smaller complex molecules into lipid membranes allows for property modifications or the introduction of new functional elements. The corresponding molecular-scale details, such as changes in dynamics or features of potential supramolecular structures, can be studied by a variety of solid-state NMR methods.
of the 3He(d,p)4He reaction. Developments on sources of polarized ions (atomic physics, radiofrequency transitions, ion optics, accelerator physics). Development of a polarized 3He target (with optical pumping, using metastability exchange). Development of optical methods for the measurement of the 3He polarization. OHIO STATE UNIVERSITY.
The MONDO detector, which exploits the tracking of the recoil protons produced in single and double elastic neutron scattering interactions to measure the neutron kinetic energy and its incoming direction, is a matrix of scintillating fibres arranged in x-y oriented layers (total active volume 10×10×20 cm^3, filled with squared μm fibres).
Facility for Rare Isotope Beams at Michigan State University.
(d, p) Reaction Theories. 28 February, AM. FRIB Laboratory.
Prospects for Measuring Coherent Elastic Neutrino-Nucleus Scattering. 16 October, PM. FRIB Laboratory. Kate Scholberg.
You might also like
Applied physiology.
The Future of our religious past
Petersons education portal
tale of Beatrix Potter
Quam oblationem of the Roman Canon
City of Belfast centenary, 1888-1988.
Children and money
Theory of liquids.
Sardar Patel souvenir, October, 1974.
distribution of planktonic diatoms in Yaquina Estuary, Oregon
lost world of the East
paleolimnology of the alkaline, Saline lakes on the Mt. Meru Lahar
Model form of conditions of contract for process plant.
Three Little Chaperones
### Polarization studies in D-P elastic scattering using a polarized proton target. by Richard Lawrence Maughan
The tensor polarization of the recoil deuteron in elastic electron-deuteron scattering has been measured at the Bates Linear Accelerator Center at three values of four-momentum transfer Q = , and fm^{-1}, corresponding to incident electron energies of , and MeV.
Polarized electron beams elastically scattered by atoms as a tool for testing fundamental predictions of quantum mechanics. The final polarization after an elastic scattering collision is a
A polarized, internal electron target gradually polarizes a proton beam in a storage ring. Here, we derive the spin-transfer cross section for $\vec e\,(p,\vec p\,)e$ scattering.
The polarization studies in N-N scattering at and below 50 MeV have provided specific and significant improvements in the phase-shift parameters. High-energy investigations with both polarized proton beams and targets have shown unexpectedly large spin effects, and this provides a challenge for theoretical efforts to explain them.
A polarized deuterium target was developed for the use of the first measurement of the so-called complete spin dependent cross-sections of the D→(d→,p)T nuclear reaction at 20MeV. Future Activities of the COMPASS Polarized Target (N Doshita et al.) Distillation and Polarization of HD (S Bouchigny et al.) Optical Pumping Method: A New 3 He Polarization for Fundamental Neutron Physics (Y Masuda et al.) Polarized Proton Beams in RHIC (A Zelenski) Nuclear Reaction Method: Focal Plane; Polarimeter for a Test of EPR Paradox (K.
Proceedings of the Fourth International Symposium on Polarization Phenomena in Nuclear Reactions High Energy Experiments with a Polarized Proton Beam and a Polarized Proton Target.
Pages Krisch, A. Analyzing Power for Proton-Elastic Scattering from Pb Near the Low-Lying Isobaric Analog Resonances. Pages Brand: Birkhäuser Basel. It is proposed to measure scattering parameters for the p-n system using a polarized proton beam and a polarized neutron (helium – three) target.
The target will be 3 He gas, polarized by optical pumping at room temperature and compressed to a pressure of about 1 At. It will be a modification of one at present in its final stages of Author: Malcolm McMillan. Polarization target for using hadron experiment is developing and studying in RCNP, Osaka University.
LEPS group have studied hadron photo-production experiment of the \phi, K, \eta, and \pi^0 mesons by using linear polarized Back Scattering Compton (BCS) \gamma-ray and no polarized target with energies of E\gamma= ∼ GeV.
also K‘‘, the transfer of polarization from the target to the antineutron. This means that one can produce polarized antineutrons by scattering antiprotons on a longitudinally polarized proton target.
Amplitude analysis. Decades of efforts have been necessary to. The comprehensive coverage also includes polarized proton and electron acceleration and storage, as well as polarized ion sources and targets.
Many significant new results and achievements on the different topics considered at the symposium are presented in this book for the first time. Sec. IVB. Thus, in order to keep the proton polarization for a su ciently long time period, it is important to maintain the target at a su ciently low temperature.
A beam irradiation test of the polarized target was conducted with a MeV 12C beam at the PSI. The polarized target was operated at T to avoid the rapid spin relaxation at.
measurements [3] using both a polarized neutron beam and a polarized proton target. For the first time we measured the energy dependence of the Δσ L (np), neutron-proton total cross section difference for the pure longitudinal (L) spin states for parallel and antiparallel (np) spins, over a new kinetic energy range of GeV for a quasi.
Polarized sources and targets: proceedings of the ninth international workshop; Nashville, Indiana, USA, 30 September-4 October | Vladimir P. Derenchuk, Barbara Von Przewoski.
A better approximation to the interaction potential is obtained by allowing for charge polarization of the target molecule by the reactive partner or scattering particle. In this chapter we focus on this charge-polarization aspect, and, in particular, we study how adiabatic charge polarization affects the interaction potentials for electron scattering.
This book will be of value to graduate students and researchers working in all areas of quantum physics and particularly in elementary particle and high energy physics.
It is suitable as a supplementary text for graduate courses in theoretical and experimental particle physics. Backward angle T20 in $$\vec d - p$$ elastic scattering and ΔΔ component of the deuteron wave function.
Polarized target double and triple spin correlation parameters in elastic proton-deuteron scattering. Book Title The Three-Body Force in the Three-Nucleon System. where A denotes the target nucleus. For A = p, this is a standard exercise in relativistic kinematics to demonstrate that the kinetic energy of the incoming proton should be higher than 6 m, where m is the proton mass, and c = 1.
This threshold decreases if the target A is more massive. The Bevatron was completed inand the antiproton was discovered in by a team lead by Chamberlain. Scattering by hydrogenic systems has been carried out using various approximations. Among them is the method of polarized orbitals of Temkin () [], which takes into account the distortion produced in the target by the incident particle in the ansatz for the wave function for the scattering r, this method is not variational and does not provide any bounds on the calculated Cited by: 2.
A Study of 27Mg with (d,p&gamma) Angular Correlation. Fred O. Purser, Jr. PhD () H. Newson: Neutron Depolarization in Neutron Proton Scattering and Neutron Polarization from the D(d,n)3He Reaction.
Robert Coleman Richardson: PhD () :. "Analyzing Power for Proton Elastic Scattering from Pb near the Low Lying Isobaric Analog Resonances", Proceedings of the 4th International Symposium on Polarization Phenomena in Nuclear Reactions, Zürich, 9/75, p. L19, Eds.: W.
Gruebler and W. Koenig; Birkhaeuser Verlag, Basel (). New studies include work on multi-electron electrode reactions, nonaqueous polaography of metals, electrolytic and foam chromatography. Nuclear Chemistry (Inorganic) Group.
Research efforts are reported on physico-chemical studies of ion-exchange and solvent extraction more» systems, and nuclear chemical studies of the fission process and.
Murad Ahmed, “β-γ spectroscopy of neutron-rich nucleus Os”, PhD Thesis, University of Tsukuba, Mar. Mukai, “In-gas-cell laser resonance ionization spectroscopy of Ir”, PhD Thesis, University of Tsukuba, Feb. Full text of "Polarization Spectroscopy: Principles, Theory, Techniques and Applications" See other formats.
Pisano et al., "Single and double spin asymmetries for deeply virtual Compton scattering measured with CLAS and a longitudinally polarized proton target", D91(5); K. Park et al., "Measurements of ep → e'π+n at W = - GeV and extraction of nucleon resonance electrocouplings at CLAS", C.
The three-body force in the three-nucleon system: proceedings of the international symposium held at the George Washington University, Washington, D.C., April [B L Berman; B F Gibson].
As Figure in Chapter 1 shows, this is also the laser field corresponding to an atomic unit, where the laser field exceeds the coulomb field binding the electron to the proton, so that the standard atomic physics.
On-line experimental results of argon gas cell based laser ion source (KEK Isotope Separation System) Y. Hirayama et al. EMIS, Grand Rapids, May 11 - 15, There have been many publications on neutron diffraction and spectroscopy, 5, 7 including books on fundamental physics of neutron scattering by Furrer and co‐authors and by Squires, an earlier book on neutron scattering in chemistry by Bacon and a more recent review on the subject by Pusztai, a book on single‐crystal neutron diffraction by Cited by: 4.
Old Dominion University, located in the coastal city of Norfolk, is Virginia's entrepreneurial-minded doctoral research university with more t students, rigorous academics, an energetic residential community, and initiatives that contribute $billion annually to Virginia's economy. The authors then discuss the resolution function and focusing effects. Simple examples of phonon and magnon measurements are presented. Important chapters cover spurious effects in inelastic and elastic measurements, and how to avoid them. The last chapter covers techniques for, and applications of, polarization analysis. Publications Diploma Thesis Zielinski, M. Feasibility study of the$\eta' \rightarrow \pi^{+} \pi^{-} \pi^{0}\$ decay using WASA-at-COSY apparatus Jülich: Forschungszentrum, Zentralbibliothek, Berichte des Forschungszentrums Jülich69 p () = Krakau, Univ., Dipl., Files Fulltext by OpenAccess repository BibTeX | EndNote: XML, Text | RIS.
focus of these studies have been photodisintegration reactions on light nuclei (A=) at low energies, and Compton scattering reactions on deuteron, helium, and lithium nu-clei at higher photon energies but below the pion production threshold.
These studies probe the nature of the strong nuclear force as manifested by the macroscopic properties. Full text of "Introduction to High Energy Physics" See other formats. Preface. This book is intended as a textbook for a course in radiation physics in academic medical physics graduate programs.
The book may also be of interest to the large number of professionals. NRC,Connecting Quarks with the Cosmos: Eleven Science Questions for the New Century, The National Academies Press, Washington, D.C.
NRC,Frontiers in High Energy Density Physics: The X-Games of Contemporary Science, The National Academies Press, Washington, D.C. This agrees with the observation that domain structures are often correlated across grain boundaries (presumably due to long‐range elastic and dipole interactions) as well as with earlier reports on correlated switching via electron microscopy studies.
Recently, similar behavior was observed by SS‐PFM mapping in capacitor and tip. Table of contents for issues of Computer Physics Communications Last update: Fri Oct 13 MDT Volume 1, Number 3, January, Volume 1, Number 4, April, Volume 1, Number 5, September, Volume 1, Number 6, December, Volume 2, Number 1, January, Volume 2, Number 2, February / March, Elastic J/psi photoproduction cross section measurement at Fermilab E Proc.
of the Division of Particles and Fields, American Physical Society. Young, K. Coulter, R. Gilman, R. Holt and others. Development of a polarized deuterium target to measure T 20 in electron storage rings. Proc. of the Topical Conference on Electronuclear. Furthermore, geometrical optimization and frequency analysis was carried out using density functional theory (DFT) calculations with B3LYP hybrid functionals and G(d), G(d,p) and G(d,p) basis sets.
The theoretical and experimental structures were found to .Full text of "SCATTERING, REACTIONS AND DECAY IN NONRELATIVISTIC QUANTUM MECHANICS" See other formats.D. Abbott et al., Production of Highly Polarized Positrons Using Polarized Electrons at MeV Energies, PRL() S.
N. Nakamura et al., Observation of the Helium 7 Lambda hypernucleus by the (e,e’K+) reaction, Phys. Rev. Lett.() P.
Guèye et al., Dispersive effects by a comparison of electron and positron scattering.
|
|
# Finding the unique weak solution of Non-linear boundary problem
We are given the equation \begin{cases} -\Delta u=f&x\in \Omega\\ \partial_\nu u+\beta(u)=0&x\in\partial\Omega \end{cases} where $\Omega$ is a bounded domain with smooth boundary, $f$ is a given function on $\Omega$, and $\beta:\mathbb{R}\to\mathbb{R}$ satisfies $0<a\leq \beta'(z)\leq b<\infty$.
The question asks me to show that there is a unique weak solution $u\in H^1(\Omega)$. To do so, I first write the weak formulation of this problem: take an arbitrary $v\in H^1(\Omega)$ and integrate by parts to obtain $$\int_\Omega \nabla u\cdot\nabla v\,dx+\int_{\partial \Omega}\beta(u)v\,dH^{N-1}=\int_\Omega fv\,dx$$
Update: based on @Behaviour's suggestion, I made the following progress.
Define $B$ to be an antiderivative of the $\beta$ above; the two-sided bound on $\beta'$ gives $$\frac{1}{2}ax^2+k_1x+k_2\leq B(x)\leq \frac{1}{2}bx^2+k_3x+k_4 \,\,\,\,\,\,\,\,\,\,\,\,\,\,(1)$$ for some constants $k_1,\ldots,k_4\in \mathbb{R}$
Hence, I conclude that $B$ is bounded below.
Now define the energy functional $$E[u]:=\frac{1}{2}\int_\Omega |\nabla u|^2dx+\int_{\partial \Omega}B(u)dH^{N-1}-\int_\Omega fu\,dx$$ So, if we find a minimizer in $H^1(\Omega)$, we are done.
Now the problem is how to prove the existence of minimizer. This should be a standard direct method in CoV.
The key is to show that $E[u]$ is bounded below and coercive. Usually, one uses the first term to control the last term in $E[u]$ via the Poincaré inequality. However, in this question there is no boundary condition that provides a Poincaré inequality, so the last term cannot be controlled by the first. I then assume that $f=0$, since the homogeneous problem should be easier to handle. Now I have the new energy functional
$$E[u]:=\frac{1}{2}\int_\Omega |\nabla u|^2dx+\int_{\partial \Omega}B(u)dH^{N-1}$$
This energy functional is bounded below, so I can take a minimizing sequence $(u_n)\subset H^1(\Omega)$ such that $$E[u_n]\to \alpha:=\inf_{H^1(\Omega)}E$$
Next I try to prove that $$\|u_n\|_{H^1(\Omega)}\leq C<\infty\,\,\,\,\,\,\,\,\,\,\,\,\,\,(2)$$ for some constant $C>0$, and I got stuck here...
From $(1)$ and the trace estimate, we have $$\int_{\partial \Omega}B(u)\,dH^{N-1}\leq C_1\|u\|_{H^1(\Omega)}^2+C_2\|u\|_{H^1(\Omega)}+C_3,$$ but this is an upper bound on the boundary term, so the best lower bound I can extract is $$E[u_n]\geq \Big(\frac{1}{2}-C_1\Big)\|\nabla u_n\|_{L^2}^2-C_1\|u_n\|_{L^2}^2 -C_2\|u_n\|_{H^1}-C_3,$$ which does not let me conclude $(2)$ as I want...
So how can I conclude $(2)$?
The variational approach is the way to go. You know that the Laplacian corresponds to minimizing the Dirichlet energy $\int_\Omega |\nabla u|^2$; the question is how to fit $\int_{\partial \Omega}\beta(u)v\,dH^{N-1}$ into the variational problem. Let's try a generic boundary energy term: $$\int_{\partial \Omega}B(u) \,dH^{N-1}$$ with $B$ to be determined. Substitute $u$ with $u+v$ and keep only the first-order term in $v$, i.e., the first variation: $$\int_{\partial \Omega}B'(u)v \,dH^{N-1}$$ This looks good. You just need $B'(u)=\beta(u)$. The condition that $\beta$ is increasing gives you convexity of $B$. Furthermore, the two-sided bound on $\beta'$ says that $B$ is strongly convex with quadratic growth at infinity, i.e., the nicest kind of convex function. It remains to study the functional $$E(u)=\frac12\int_\Omega |\nabla u|^2+\int_{\partial \Omega}B(u) \,dH^{N-1}$$ which, by the standard trace theorem, is well-defined on $H^1(\Omega)$ (there is a bounded trace operator $H^1(\Omega)\to L^2(\partial \Omega)$).
Take a sequence $(u_n)$ such that $E(u_n)\to\inf_{H^1} E$. Let $\mu_n$ be the mean of $u_n$ on $\Omega$. Since $B$ is bounded below, the sequence $(\nabla u_n)$ is bounded in $L^2(\Omega)$, so by the Poincaré inequality for mean-zero functions the sequence $(u_n-\mu_n)$ is bounded in $H^1(\Omega)$. It remains to show that $(\mu_n)$ is a bounded sequence.
Let $T:H^1(\Omega)\to L^2(\partial\Omega)$ be the trace operator. Since it is a bounded operator, the sequence of functions $T(u_n-\mu_n)=T(u_n)-\mu_n$ is bounded in $L^2(\partial\Omega)$. On the other hand, the $T(u_n)$ are themselves bounded in $L^2(\partial\Omega)$, since by the quadratic lower bound on $B$ their norms are controlled by $E(u_n)$. Hence $(\mu_n)$ is bounded.
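To spell out why $(\mu_n)$ is bounded (notation mine, with $|\partial\Omega|$ the surface measure of the boundary): since $\mu_n$ is constant on $\partial\Omega$,

$$|\mu_n|\,|\partial\Omega|^{1/2}=\|\mu_n\|_{L^2(\partial\Omega)}\leq \|T(u_n)-\mu_n\|_{L^2(\partial\Omega)}+\|T(u_n)\|_{L^2(\partial\Omega)}\leq C.$$

The first term is bounded because the trace operator is bounded and $(u_n-\mu_n)$ is bounded in $H^1(\Omega)$; the second is bounded because the lower bound $B(x)\geq \frac{1}{2}a x^2+k_1x+k_2$ turns the boundedness of $E(u_n)$ along a minimizing sequence into a bound on $\|T(u_n)\|_{L^2(\partial\Omega)}^2$.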
|
|
Accelerating the pace of engineering and science
# Robust Control Toolbox
## Getting Reliable Estimates of Robustness Margins
This example shows how to modify the uncertainty description to avoid discontinuities and get reliable estimates of the margin of uncertainty.
Computing the smallest perturbation that causes instability at a given frequency is the cornerstone of most robustness analysis algorithms in Robust Control Toolbox™. To estimate, for example, the robust stability margin over the entire frequency range, the function robuststab performs this basic computation for a finite set of frequencies and looks at the worst case over this set.
Under most conditions, the robust stability margin is continuous with respect to frequency, so this approach gives good estimates provided you use a sufficiently dense frequency grid. However, in some problems with only real parameter uncertainty (ureal), the migration of poles from stable to unstable can occur at isolated frequencies (generally unknown to you). As a result, any frequency grid that excludes these particular frequencies fails to detect the worst-case perturbations and gives over-optimistic stability margins.
How Discontinuities Can Hide Robustness Issues
In this example, we consider a spring-mass-damper system with 100% parameter uncertainty in the damping coefficient and 0% uncertainty in the spring coefficient. Note that all uncertainty here is of ureal type:
m = 1;
k = 1;
c = ureal('c',1,'plusminus',1);
sys = tf(1,[m c k]);
As the uncertain element c varies, the only place where the poles can migrate from stable to unstable is at s = j*1 (1 rad/sec). No amount of variation in c can cause them to migrate across the jw-axis at any other frequency.
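This claim about where the poles can cross the imaginary axis is easy to verify with the quadratic formula, independently of the toolbox. The following short Python sketch (not part of the original MATLAB example) checks it:

```python
import cmath

def poles(m, c, k):
    """Roots of the characteristic polynomial m*s**2 + c*s + k."""
    disc = cmath.sqrt(c * c - 4 * m * k)
    return ((-c + disc) / (2 * m), (-c - disc) / (2 * m))

# With m = k = 1 the roots are s = (-c +/- sqrt(c^2 - 4))/2, so they can lie
# on the imaginary axis only when c = 0, and there they sit at s = +/-1j,
# i.e. the only possible stable/unstable crossing is at 1 rad/s.
print(poles(1, 0, 1))    # roots at +/-1j
print(poles(1, 0.5, 1))  # strictly stable for any c > 0
```

A frequency grid that misses 1 rad/s therefore cannot see the crossing, which is exactly the discontinuity this example exploits.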
When computing the robust stability margins with the robuststab function, almost any frequency grid will exclude f=1, leading to the incorrect conclusion that the margin of uncertainty on c is infinite.
omega = logspace(-1,1,40); % one possible grid
sysg = ufrd(sys,omega);
[stabmarg,du,rep,info] = robuststab(sysg);
stabmarg
stabmarg =
LowerBound: 5.0348e+03
UpperBound: Inf
DestabilizingFrequency: 0.1000
When we look at the mussv bounds computed by the robuststab function, the structured singular value mu is zero at all test frequencies in omega:
opt = bodeoptions;
opt.MagUnits = 'Abs';
opt.PhaseVisible = 'off';
opt.YLim = [0 1];
bodeplot(info.MussvBnds(:,1),'b*',info.MussvBnds(:,2),'g*',opt)
title('Original \mu bounds, 40 frequency points')
legend('Upper bound','Lower bound')
Figure 1: Original mu bounds, 40 frequency points.
Note that making the grid denser would not help. Only by adding f=1 to the grid will we find the true margin.
f = 1;
stabmarg = robuststab(ufrd(sys,f))
stabmarg =
LowerBound: 1.0000
UpperBound: 1
DestabilizingFrequency: 1
Modifying the Uncertainty Model to Eliminate Discontinuities
The example above shows that the robust stability margin can be a discontinuous function of frequency. In other words, it can have jumps. We can eliminate such jumps by adding a small amount of uncertain dynamics to every uncertain real parameter. This amounts to adding some dynamics to pure gains. Importantly, as the size of the added dynamics goes to zero, the estimated margin for the modified problem converges to the true margin for the original problem.
In the spring-mass-damper example, we model c as a ureal with the range [0.05,1.95] rather than [0,2], and add a ultidyn perturbation with gain bounded by 0.05. This combination covers the original uncertainty in c and introduces only 5% conservatism.
cc = ureal('cReal',1,'plusminus',0.95) + ultidyn('cUlti',[1 1],'Bound',0.05);
sysreg = usubs(sys,'c',cc);
Now let's recompute the robust stability margin:
[stabmarg,du,report,info] = robuststab(ufrd(sysreg,omega));
stabmarg
report
stabmarg =
LowerBound: 2.0016
UpperBound: 2.3630
DestabilizingFrequency: 1.0608
report =
Assuming nominal UFRD system is stable...
Uncertain system is robustly stable to modeled uncertainty.
-- It can tolerate up to 200% of the modeled uncertainty.
-- A destabilizing combination of 236% of the modeled uncertainty was found.
-- This combination causes an instability at 1.06 rad/seconds.
-- Sensitivity with respect to the uncertain elements are:
'cReal' is 35%. Increasing 'cReal' by 25% leads to a 9% decrease in the margin.
'cUlti' is 63%. Increasing 'cUlti' by 25% leads to a 16% decrease in the margin.
Now the calculation determines that the margin is not infinite. The value 2.36 is still greater than 1 (the true margin) because the density of frequency points is not high enough.
If we plot the mu bounds (reciprocal of robust stability margin) as a function of frequency, a peak clearly appears around 1 rad/s:
bodeplot(info.MussvBnds(:,1),info.MussvBnds(:,2),opt)
title('Regularized \mu, 40 frequency points, 5% added dynamics');
legend('Upper bound','Lower bound')
Figure 2: Regularized mu, 40 frequency points, 5% added dynamics
Increasing the Frequency Density for Sharper Estimates.
The sharpness of the peak of the mussv plot as well as its shape suggests that a denser frequency grid may be necessary. We'll increase the frequency-grid density by a factor of 5, and recompute:
OmegaDense = logspace(-1,1,200);
[stabmarg,du,report,info] = robuststab(ufrd(sysreg,OmegaDense));
stabmarg
report
stabmarg =
LowerBound: 1.0026
UpperBound: 1.0056
DestabilizingFrequency: 0.9885
report =
Assuming nominal UFRD system is stable...
Uncertain system is robustly stable to modeled uncertainty.
-- It can tolerate up to 100% of the modeled uncertainty.
-- A destabilizing combination of 101% of the modeled uncertainty was found.
-- This combination causes an instability at 0.988 rad/seconds.
-- Sensitivity with respect to the uncertain elements are:
'cReal' is 95%. Increasing 'cReal' by 25% leads to a 24% decrease in the margin.
'cUlti' is 6%. Increasing 'cUlti' by 25% leads to a 2% decrease in the margin.
The computed robust stability margin has decreased significantly. It helped to use a denser grid, and the margin estimate is now close to the true value 1. The mu plot shows a pronounced peak at 1 rad/sec:
bodeplot(info.MussvBnds(:,1),info.MussvBnds(:,2),opt)
title('Regularized \mu, 200 frequency points, 5% added dynamics');
legend('Upper bound','Lower bound')
Figure 3: Regularized mu, 200 frequency points, 5% added dynamics
For the modified uncertainty model, the robust stability margin is 1, which is equal to that of the original problem. In general, the stability margin of the modified problem is less than or equal to that of the original problem. If it is significantly less, then the answer to the question "What is the stability margin?" is very sensitive to the uncertainty model. In this case, we put more faith in the value that allows for a few percent of unmodeled dynamics. Either way, the stability margin for the modified problem is the most trustworthy.
Automating Substitution of Uncertainty Models
The command complexify automates the procedure of replacing a ureal with the sum of a ureal and an ultidyn. The analysis above can be repeated using complexify, obtaining the same results.
sysreg = complexify(sys,0.05,'ultidyn');
[stabmargc,duc,reportc,infoc] = robuststab(ufrd(sysreg,OmegaDense));
stabmargc
reportc
bodeplot(info.MussvBnds(:,1),'b+',info.MussvBnds(:,2),'b+',...
infoc.MussvBnds(:,1),'r',infoc.MussvBnds(:,2),'r',opt)
title('Regularized \mu, 200 frequency points, 5% added dynamics');
legend('Upper bound','Lower bound','(complexify) Upper bound',...
'(complexify) Lower bound','Location','NorthEast');
stabmargc =
LowerBound: 1.0026
UpperBound: 1.0056
DestabilizingFrequency: 0.9885
reportc =
Assuming nominal UFRD system is stable...
Uncertain system is robustly stable to modeled uncertainty.
-- It can tolerate up to 100% of the modeled uncertainty.
-- A destabilizing combination of 101% of the modeled uncertainty was found.
-- This combination causes an instability at 0.988 rad/seconds.
-- Sensitivity with respect to the uncertain elements are:
'c' is 95%. Increasing 'c' by 25% leads to a 24% decrease in the margin.
'c_cmpxfy' is 6%. Increasing 'c_cmpxfy' by 25% leads to a 2% decrease in the margin.
Figure 4: Regularized mu, 200 frequency points, 5% added dynamics
The possible discontinuity of the robust stability margin, and the computational and interpretational difficulties it raises, are considered in [1]. The consequences and interpretations of the regularization illustrated in this small example are described in [2], which also gives an extensive analysis of regularization for a two-parameter example.
References
[1] Barmish, B.R., Khargonekar, P.P, Shi, Z.C., and R. Tempo, "Robustness margin need not be a continuous function of the problem data," Systems & Control Letters, Vol. 15, No. 2, 1990, pp. 91-98.
[2] Packard, A., and P. Pandey, "Continuity properties of the real/complex structured singular value," IEEE Transactions on Automatic Control, Vol. 38, No. 3, 1993, pp. 415-428.
|
|
# Higgs couplings in the MSSM at large tan β
Beneke, M; Ruiz-Femenía, P; Spinrath, M (2009). Higgs couplings in the MSSM at large tan β. Journal of High Energy Physics, (1):031.
## Abstract
We consider tan β-enhanced quantum effects in the minimal supersymmetric standard model (MSSM) including those from the Higgs sector. To this end, we match the MSSM to an effective two-Higgs doublet model (2HDM), assuming that all SUSY particles are heavy, and calculate the coefficients of the operators that vanish or are suppressed in the MSSM at tree-level. Our result clarifies the dependence of the large-tan β resummation on the renormalization convention for tan β, and provides analytic expressions for the Yukawa and trilinear Higgs interactions. The numerical effect is analyzed by means of a parameter scan, and we find that the Higgs-sector effects, where present, are typically larger than those from the "wrong-Higgs" Yukawa couplings in the 2HDM.
## Statistics
### Citations
5 citations in Web of Science®
6 citations in Scopus®
Other titles: Higgs couplings in the MSSM at large tan(beta)
Item type: Journal Article, refereed, original work
Communities & Collections: 07 Faculty of Science > Institute for Computational Science
Dewey Decimal Classification: 530 Physics
Uncontrolled Keywords: Higgs Physics; Supersymmetric Standard Model
Language: English
Date: 2009
Deposited On: 22 Feb 2010 11:14
Last Modified: 05 Apr 2016 13:56
Publisher: Institute of Physics Publishing
ISSN: 1029-8479
Publisher DOI: https://doi.org/10.1088/1126-6708/2009/01/031
Related URL: http://arxiv.org/abs/0810.3768
|
|
To find the Lowest Common Multiple (LCM) of several numbers, we first express each number as a product of its prime factors.

For example, if we wish to find the LCM of 60, 12 and 102 we write

$$60=2^2 \cdot 3 \cdot 5,\qquad 12=2^2 \cdot 3,\qquad 102=2 \cdot 3 \cdot 17$$

The product of the highest power of each different factor appearing is the LCM.

For example, in this case, $2^2\cdot3\cdot5\cdot17=1020$. You can see that 1020 is a multiple of 12, 60 and 102 ... the lowest common multiple of all three numbers.
Another example: What is the LCM of 36, 45, and 27?

Solution: Factorise each of the numbers:

$$36=2^2\cdot3^2,\qquad 45=5\cdot3^2,\qquad 27=3^3$$

The product of the highest power of each different factor appearing is the LCM, i.e.

$$2^2\cdot5\cdot3^3=540$$
If the LCM of a set of numbers is found and 1 is subtracted from it, then dividing the result by each of the numbers whose LCM was found leaves a remainder that is 1 less than that divisor.

For example, the LCM of the two numbers 10 and 9 is 90.

Then $90-1=89$, and

89 divided by 10 leaves a remainder of 9, and the same number divided by 9 leaves a remainder of 8.
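The whole procedure, and the remainder observation, can be sketched in Python (the helper names here are mine, not from the article):

```python
from collections import Counter
from functools import reduce

def prime_factors(n):
    """Trial division: return a Counter mapping each prime factor to its exponent."""
    factors = Counter()
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors[d] += 1
            n //= d
        d += 1
    if n > 1:
        factors[n] += 1
    return factors

def lcm(*nums):
    """LCM = product of the highest power of each prime appearing in any number."""
    # Counter union (|) keeps the larger exponent for each prime.
    highest = reduce(lambda a, b: a | b, (prime_factors(n) for n in nums))
    result = 1
    for p, e in highest.items():
        result *= p ** e
    return result

print(lcm(60, 12, 102))  # 1020
print(lcm(36, 45, 27))   # 540
# The remainder observation: LCM - 1 leaves remainder (divisor - 1).
print((lcm(10, 9) - 1) % 10, (lcm(10, 9) - 1) % 9)  # 9 8
```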
|
|
Graph homomorphism
In the mathematical field of graph theory, a graph homomorphism is a mapping between two graphs that respects their structure. More concretely, it maps adjacent vertices to adjacent vertices.
Definitions
A graph homomorphism f from a graph G = (V,E) to a graph G' = (V',E'), written $f:G \rightarrow G'$, is a mapping $f:V \rightarrow V'$ from the vertex set of G to the vertex set of G' such that $\{u,v\}\in E$ implies $\{f(u),f(v)\}\in E'$.
The above definition is extended to directed graphs. Then, for a homomorphism $f:G \rightarrow G'$, (f(u),f(v)) is an arc of G' if (u,v) is an arc of G.
If there exists a homomorphism $f:G\rightarrow H$ we shall write $G\rightarrow H$, and $G\not\rightarrow H$ otherwise. If $G\rightarrow H$, G is said to be homomorphic to H or H-colourable.
If the homomorphism $f:G\rightarrow G'$ is a bijection whose inverse function is also a graph homomorphism, then f is a graph isomorphism.
Two graphs G and G' are homomorphically equivalent if $G\rightarrow G'$ and $G'\rightarrow G$.
A retract of a graph G is a subgraph H of G such that there exists a homomorphism $r:G\rightarrow H$, called retraction with r(x) = x for any vertex x of H. A core is a graph which does not retract to a proper subgraph. Any graph is homomorphically equivalent to a unique core.
Properties
The composition of two homomorphisms is again a homomorphism.
Graph homomorphism preserves connectedness.
The tensor product of graphs is the category-theoretic product for the category of graphs and graph homomorphisms.
Connection to coloring and girth
A graph coloring is an assignment of one of k colors to a graph G so that the endpoints of each edge have different colors, for some number k. Any coloring corresponds to a homomorphism $f:G\rightarrow K_k$ from G to a complete graph Kk: the vertices of Kk correspond to the colors of G, and f maps each vertex of G with color c to the vertex of Kk that corresponds to c. This is a valid homomorphism because the endpoints of each edge of G are mapped to distinct vertices of Kk, and every two distinct vertices of Kk are connected by an edge, so every edge in G is mapped to an adjacent pair of vertices in Kk. Conversely if f is a homomorphism from G to Kk, then one can color G by using the same color for two vertices in G whenever they are both mapped to the same vertex in Kk. Because Kk has no edges that connect a vertex to itself, it is not possible for two adjacent vertices in G to both be mapped to the same vertex in Kk, so this gives a valid coloring. That is, G has a k-coloring if and only if it has a homomorphism to Kk.
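This correspondence can be checked mechanically. Here is a small Python sketch (the helper names are my own) verifying that a proper 3-coloring of the 5-cycle is exactly a homomorphism into $K_3$:

```python
from itertools import combinations

def is_homomorphism(f, edges_G, edges_H):
    """True iff f maps every edge {u, v} of G to an edge {f(u), f(v)} of H."""
    return all(frozenset((f[u], f[v])) in edges_H for u, v in edges_G)

# G = C5, the 5-cycle; H = K3, the complete graph on the three "colors" 0, 1, 2.
C5 = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
K3 = {frozenset(e) for e in combinations(range(3), 2)}

proper = {0: 0, 1: 1, 2: 0, 3: 1, 4: 2}     # a proper 3-coloring of C5
improper = {0: 0, 1: 0, 2: 1, 3: 0, 4: 1}   # vertices 0 and 1 share a color

print(is_homomorphism(proper, C5, K3))    # True: a 3-coloring is a hom into K3
print(is_homomorphism(improper, C5, K3))  # False: a monochromatic edge has no image
```

Note that a monochromatic edge fails because `frozenset((c, c))` is a singleton, and $K_3$ has no loops.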
If there are two homomorphisms $H\rightarrow G\rightarrow K_k$, then their composition $H\rightarrow K_k$ is also a homomorphism. In other words, if a graph G can be colored with k colors, and there is a homomorphism $H\rightarrow G$, then H can also be k-colored. Therefore, whenever a homomorphism $H\rightarrow G$ exists, the chromatic number of H is less than or equal to the chromatic number of G.
Homomorphisms can also be used very similarly to characterize the girth of a graph G, the length of its shortest cycle, and the odd girth, the length of the shortest odd cycle. The girth is, equivalently, the smallest number g such that a cycle graph Cg has a homomorphism $C_g\rightarrow G$, and the odd girth is the smallest odd number g for which there exists a homomorphism $C_g\rightarrow G$. For this reason, if $G\rightarrow H$, then the girth and odd girth of G are both greater than or equal to the corresponding invariants of H.
Complexity
The associated decision problem, i.e. deciding whether there exists a homomorphism from one graph to another, is NP-complete. Determining whether there is an isomorphism between two graphs is also an important problem in computational complexity theory; see graph isomorphism problem.
References
• Hell, Pavol; Jaroslav Nešetřil (2004). Graphs and Homomorphisms (Oxford Lecture Series in Mathematics and Its Applications). Oxford University Press. ISBN 0-19-852817-5.
|
|
Introduction
Downy mildew (DM) is a destructive disease caused by obligate parasitic oomycetes from the Peronosporaceae family1. It has been a serious challenge for a wide range of cultivated crops including row crops, vegetables, fruits, and ornamental plants. DM is globally distributed and has high adaptability to new and changing environmental conditions2. Most DM pathogens can infect their host plant at the seedling stage, causing systemic shoot infection, whereas infection at a more mature stage may develop into localized infection patches3. DM can affect the leaves, flowers, fruits, and shoots of hosts and cause great economic losses. It may lead to yield losses of up to 40–80% for different crops4,5. Many fungicides have been developed to manage DM pathogens; however, due to genetic recombination, frequent mutations, and asexual reproduction, new DM pathogen races with higher virulence levels emerge constantly, resulting in fungicide resistance in DM pathogens and thus severely hindering the effectiveness of fungicides whose development could take many years and cost hundreds of millions of dollars6,7.
DM pathogens are composed of at least 300 species belonging to different genera, such as Peronospora, Pseudoperonospora, and Plasmopara, among which Peronospora is the largest genus containing more than 260 species8. The common DM species infecting horticultural crops include Peronospora destructor (onion), Peronospora belbahrii (basil), Plasmopara viticola (grape), Pseudoperonospora cubensis (cucurbits), Plasmopara halstedii (sunflower), Peronospora effusa (spinach), and Bremia lactucae (lettuce). To combat this disease, host resistance to DM has been identified in several crops and a few resistance genes have been cloned. For example, the sunflower genome contains more than 30 DM resistance genes distributed in the domesticated and wild species9. In lettuce, over 50 DM resistance genes have been identified and genetically characterized, among which at least 28 genes can provide high levels of resistance against DM10. In grapevine, 27 quantitative trait loci (QTLs) for DM resistance have been identified from various Vitis species, of which the locus Rpv3 is a major determinant for DM resistance11. DM pathogens secrete apoplastic and cytoplasmic effector molecules upon infection that can be recognized by the proteins encoded by plant disease-resistance genes (R-genes), which are primarily comprised of nucleotide-binding site leucine-rich repeat (NBS-LRR) genes. Many NBS-LRR clusters have been identified in sunflower and lettuce genomic regions involved in DM resistance12,13. In Arabidopsis, some Toll/interleukin-1 receptor NBS-LRR (TIR-NBS-LRR) genes such as RPP1 confer organ-specific resistance to downy mildew14. In spinach, NBS genes present at the RPF1 locus contribute to resistance to P. effusa15.
Impatiens are one of the top-selling annual bedding flowers in the United States. The genus Impatiens (family Balsaminaceae) contains >1000 species that are widely distributed in different geographic and climatic regions, including tropical Africa, Southeast Asia, parts of Europe, and North America16. Among these species, Impatiens walleriana and Impatiens hawkeri are the most commonly cultivated in the world. The popularity of impatiens in the floriculture industry is attributed to the flower color diversity, profuse flowering nature, and ease of growing17,18. In 2018 alone, impatiens contributed a wholesale value of more than $109 million19. Impatiens downy mildew (IDM) caused by Plasmopara obducens is currently a huge threat to the impatiens industry20. Severe outbreaks of IDM were reported in Europe21, Australia22, and North America23,24, causing significant economic losses. The outbreak of IDM in the USA has caused a significant decrease of the wholesale values of impatiens from ~$150 million in 2005 down to ~\$65 million in 201525. IDM caused by P. obducens has become a major disease of I. walleriana. The infected plants exhibit downward leaf curling, chlorotic and downy leaves, and leaves and flowers drop, all of which may result in complete losses of the aesthetic value of impatiens cultivars24. Several studies reported the morphology, transmission and hosts of P. obducens. This pathogen develops hyaline and monopodial sporangiophores with apical branches that can produce ovoid and hyaline sporangia26. The pathogen is readily transmitted by wind-blown or water-splashed sporangia from which the zoospores can be released and infect impatiens under suitable temperature and relative humidity27. Usually, 5–14 days after pathogen infection, visible white downy symptoms could be observed on the lower leaf surfaces27,28. The oospores could not be observed in fresh leaves and may survive overwinter in plant debris26,27,29. 
Plasmopara obducens can infect a number of cultivated and wild Impatiens spp., including I. walleriana, I. balsamina, I. pallida, I. carpensis, and I. glandulifera28,30,31,32,33. However, I. hawkeri appears to be highly resistant to this disease22.
Management of IDM can be achieved by using preventive fungicides. Several fungicides have been used to manage this disease in impatiens production facilities. Frequent applications of these fungicides have significantly increased the production costs and caused serious concerns over pesticide pollution of the environment. Moreover, few fungicides are available for use to manage this disease in the landscape (public or residential) and indoor exhibitions where impatiens are grown34,35. Developing and using disease-resistant cultivars have proven to be an effective, economic, and sustainable approach to managing devastating diseases in crops if genetic disease resistance can be found or developed. For example, disease-resistant cultivars have played an essential and critical role in controlling grapevine DM caused by P. viticola, which is in the same genus with the IDM pathogen P. obducens36. To develop disease-resistant cultivars, disease screening is essential and most critical. First, germplasm accessions, as many as possible, need to be screened to discover useful sources of disease resistance. Then, large breeding populations, generation after generation, need to be screened to identify the resistant progeny. Thus, effective and efficient disease screening or resistance phenotyping techniques frequently determine the success of plant disease-resistance breeding in many crops. IDM resistance has become the most important breeding objective in impatiens in the world; the development of effective and efficient IDM screening techniques would be of tremendous value to this important crop.
RNA-sequencing (RNA-Seq) technology has been used to identify genes potentially involved in DM resistance in horticultural plants, including lettuce, grapevine, spinach, and impatiens37,38,39,40,41,42. Previously, two de novo comparative RNA-Seq analyses of impatiens identified some differentially expressed genes, including a couple of NBS-LRR genes for IDM resistance and candidates for IDM susceptibility41,42. However, in the absence of a reference genome, an accurate transcriptome and the full-length sequences of candidate genes are difficult to obtain from short Illumina reads. Isoform sequencing (Iso-Seq), an advanced technique based on the single-molecule real-time (SMRT) sequencing platform and long RNA reads, has facilitated retrieval of full-length transcripts, assembly of high-quality reference transcriptomes, and discovery of splicing events and novel transcripts43. With Iso-Seq, each mRNA-derived cDNA molecule is sequenced over multiple passes, yielding high-quality full-length cDNA, and thus mRNA, sequences. Currently, a reference-level high-quality transcriptome for impatiens is not available. Studying gene expression profiles at different developmental stages in resistant and susceptible plants can serve as a model for investigating the DM–plant interaction at the transcriptome level and for uncovering plant–pathogen dynamics during the development of resistance.
In this study, the disease responses of 32 impatiens cultivars, including 16 I. walleriana and 16 I. hawkeri cultivars, were investigated at the cotyledon, first/second pair of true leaf, and mature plant stages, aiming to establish a system for early and rapid screening and phenotyping of impatiens for IDM resistance. DM pathogen growth and development in cotyledons and leaves were examined histologically, revealing the IDM-resistance mechanisms in I. hawkeri. Moreover, full-length transcriptome sequencing combined with RNA-Seq was applied to investigate transcriptome dynamics for three representative cultivars showing different resistance and susceptibility at the cotyledon and true leaf stages. The transcriptome comparisons between IDM-resistant and susceptible cultivars and tissues revealed a core set of genes, including three R-genes potentially involved in IDM resistance in impatiens. Results from this study provide valuable genomic resources and lay a solid foundation for future efforts to implement genomics-assisted breeding of impatiens for IDM resistance and to identify and clone the IDM-resistance genes.
Results
Responses of 32 cultivars to natural downy mildew pathogen infection
In total, 16 cultivars of I. walleriana and 16 cultivars of I. hawkeri (Table 1) were evaluated in the field for their response to IDM at the mature stage. On December 28, 2014 (206 days after planting (DAP)) (average temperature 20.51 °C, relative humidity 88%, rainfall 0 cm)44, “Balance Orange” (BO) and “Super Elfin Pink” (SEP) of I. walleriana first showed white IDM sporulation on the abaxial side of the foliage. Within 3 days, all I. walleriana plants had shown similar IDM symptoms (Table 1). Infected impatiens plants showed chlorotic and downward-curling leaves, followed by leaf and flower drop, complete defoliation, and plant collapse within a 7-week period. All plants died before February 16, 2015 (256 DAP), indicating that all these I. walleriana cultivars are highly susceptible to IDM. By contrast, no plants of the I. hawkeri cultivars showed any IDM symptoms throughout the field experiment, suggesting that they possess strong resistance to IDM at the mature plant stage (Table 1).
All infected I. walleriana plants in the field showed chlorotic and downward-curling leaves with white downy mildew sporulation (growth) on the lower leaf surface at the early infection stage, followed by leaf and flower drop and plant collapse within a seven-week period. No disease symptoms were observed on I. hawkeri plants during the field experiments. Details of disease incidence for the inoculation experiments are described in Supplementary Table S1. “S” indicates susceptibility to impatiens downy mildew; “R” indicates resistance to impatiens downy mildew.
Phenotyping for downy mildew resistance at the earliest plant growth stages
To develop an effective early and rapid phenotyping system, we inoculated young plants of these 32 cultivars at their earliest growth stages (cotyledon and first/second pair of true leaf stages) as well as mature leaves using two inoculation methods (P. obducens spores applied to the abaxial or adaxial side of the cotyledons, first/second pair of true leaves, and mature leaves). Results showed that all I. walleriana cultivars were highly susceptible to IDM at all these stages (Table 1 and Supplementary Table S1). Typical white downy mildew sporulation was evident and profuse on the abaxial side of cotyledons and true leaves for all I. walleriana cultivars (Fig. 1A, D). All 16 cultivars of I. hawkeri showed resistance to IDM at the first/second pair of true leaf stage (Fig. 1E, F), consistent with typical plant responses of these cultivars to IDM at the mature stage. These results indicate that young impatiens plants at their first true leaf stage have developed resistance to IDM and are ready for IDM disease screening or phenotyping for IDM resistance.
Interestingly, I. hawkeri plants at the cotyledon stage exhibited different responses to inoculated P. obducens spores (P < 0.05). When inoculated on the abaxial side of cotyledons, “Divine Orange Bronze Leaf” (DOB) (Fig. 1B), “Divine Burgundy” (DB), “Divine Orange” (DO), and “Florific Violet” (FV) were susceptible to IDM, with disease incidence indices of 0.67, 0.61, 0.56, and 0.50, respectively, at 10 days post inoculation (dpi) (Supplementary Table S1). “Florific Sweet Orange” (FSO), “Divine White Blush” (DWB), “Florific White” (FW), and “Divine Violet” (DV) were also susceptible to IDM, but with a lower disease incidence index (≤0.14). Most importantly, “Florific Lavender” (FLR) (Fig. 1C) and “Divine Lavender” (DL) showed strong resistance to IDM even at this early stage. There were no significant differences between 10 and 20 dpi across all cultivars, except that “Divine Pink” (DP) and DO showed significant differences when inoculated on the abaxial and adaxial side, respectively (Supplementary Table S1). When the adaxial side was inoculated, the disease incidence index was lower than that of the abaxial side (P < 0.05) for most cultivars, except DV and FW, which had higher indices on the adaxial side. Again, DL and FLR showed strong resistance to IDM after their cotyledons were inoculated on either side, with zero disease incidence (Supplementary Table S1). On the other hand, DOB and DB consistently showed susceptibility to IDM when either side of the cotyledon was inoculated. These results indicated that 14 I. hawkeri cultivars were susceptible to IDM at the cotyledon stage and became resistant beginning at the true leaf stage, while two I. hawkeri cultivars (DL and FLR) expressed strong resistance to IDM starting at the cotyledon stage.
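The disease incidence index is not formally defined in this excerpt; a minimal sketch, assuming the index is simply the fraction of inoculated units (e.g., cotyledons) that develop IDM sporulation, illustrates how values like those above could be computed. The counts used here are hypothetical.

```python
def disease_incidence_index(n_symptomatic: int, n_inoculated: int) -> float:
    """Fraction of inoculated units (e.g., cotyledons) showing IDM sporulation.

    Assumed definition for illustration; the study's exact formula is not
    given in this excerpt.
    """
    if n_inoculated <= 0:
        raise ValueError("number of inoculated units must be positive")
    return n_symptomatic / n_inoculated

# Hypothetical counts giving an index of ~0.67, similar to DOB (abaxial, 10 dpi)
print(round(disease_incidence_index(12, 18), 2))  # 0.67
```

Indices computed this way are bounded between 0 (fully resistant, as for DL and FLR) and 1 (all inoculated cotyledons symptomatic).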
An interesting feature was observed on the adaxial and abaxial surfaces of inoculated cotyledons of I. hawkeri, but not on cotyledons of I. walleriana cultivars. During incubation after inoculation with P. obducens spores, irregular black “spots” and “specks” began to develop on the cotyledon surfaces. Their occurrence varied among I. hawkeri cultivars but appeared to result from necrotic cells. For simplicity and convenience, we tentatively termed them “black spots”. To quantify the severity of black spots on cotyledons, we developed a black spot severity scale and calculated a black spot severity index (Fig. 2 and Supplementary Table S2). The black spot severity index for all I. hawkeri cultivars, except “Divine Blue Pearl” (DBP), seemed to remain unchanged from 10 to 20 dpi. The cultivars DOB and DWB exhibited a higher black spot severity index than other cultivars at 10 dpi, with average indices of 2.18 and 2.08, respectively (Supplementary Table S2). Five cultivars, including FSO, FW, FLR, “Divine Cherry Red” (DCR), and DP, showed a lower black spot severity index (<0.60). A similar trend was observed on the adaxial side of inoculated cotyledons, except that black spot severity was generally lower. The black spot severity index at 10 dpi appeared to be less than that at 20 dpi, but significant differences were not detected. At the first and second pair of true leaf stages, all I. hawkeri cultivars showed resistance to IDM and had no black spots, except DWB, which displayed small black spots on the leaf surface (Fig. 1G). When the IDM disease incidence indices (Supplementary Table S1) and the black spot severity indices of the 16 I. hawkeri cultivars (Supplementary Table S2) were examined, the Pearson correlation coefficient was 0.66 (abaxial, 10 dpi), 0.53 (abaxial, 20 dpi), 0.50 (adaxial, 10 dpi), and 0.51 (adaxial, 20 dpi), respectively, with an average of 0.55.
Therefore, in general, the IDM disease incidence index showed a moderate positive correlation with black spot severity. For example, the cotyledons of DOB exhibited high IDM incidence and severe black spotting after inoculation with P. obducens. However, there were some exceptions; for example, the cotyledons of DWB showed low IDM incidence yet severe black spotting. These results show that impatiens plants at the cotyledon stage may not express their typical mature-plant resistance to P. obducens. However, if they do express resistance at this stage, they retain the resistance to IDM throughout their subsequent growth and developmental stages.
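The Pearson correlation between the two indices can be computed directly from the per-cultivar values; a minimal sketch in plain Python follows. The example index values are hypothetical illustrations, not the actual Supplementary Table data (which covered 16 cultivars per comparison).

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical per-cultivar values (incidence index vs black spot severity index)
incidence = [0.67, 0.61, 0.56, 0.50, 0.14, 0.0]
black_spot = [2.18, 2.08, 1.50, 1.20, 0.60, 0.40]
r = pearson_r(incidence, black_spot)
```

One coefficient would be computed per side/time-point combination (abaxial vs adaxial, 10 vs 20 dpi), matching the four values reported above.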
Histological characterization of the disease-resistance response
Three cultivars, I. walleriana SER and I. hawkeri DOB and FLR, were selected for detailed histological characterization. In the phenotyping experiments described above, they showed contrasting IDM-resistance responses: SER was susceptible at the cotyledon, first/second pair of true leaf, and mature plant stages; DOB was susceptible at the cotyledon stage but resistant at the first/second pair of true leaf and mature plant stages; and FLR was resistant at the cotyledon, first/second pair of true leaf, and mature plant stages. Their cotyledons and true leaves were excised, inoculated with P. obducens sporangia, and cultured on 1% water agar. White downy mildew sporulation was evident on the abaxial surface of cotyledons of SER and DOB at 4 dpi and became progressively more profuse at 6, 8, and 10 dpi. By contrast, only minute mildew sporulation was observed on cotyledons of FLR, and only at 8 or 10 dpi; the affected area was very limited and did not enlarge. When true leaves were inoculated, white mildew sporulation was observed only on the abaxial leaf surface of SER at 6 dpi, not on leaf surfaces of DOB or FLR. These results confirmed that the cotyledons of SER and DOB and the true leaves of SER were susceptible to IDM, while the cotyledons of FLR and the true leaves of DOB and FLR were resistant to IDM.
The sporangia density on the cotyledons and true leaves of SER, DOB, and FLR at 4, 6, 8, and 10 dpi was determined (Table 2). No P. obducens sporangia were observed on the leaves of DOB and FLR, and only a small number of sporangia could be counted on the cotyledons of FLR at 8 and 10 dpi. On the other hand, the sporangia density on the cotyledons of SER reached 3.03 × 10³ sporangia cm⁻² at 4 dpi, approximately three times higher than that of DOB, and at this time point no white mildew growth could yet be observed on the leaves of SER. The sporangia densities on cotyledons and leaves of SER at 10 dpi were 480 × 10³ and 404 × 10³ sporangia cm⁻², respectively, almost two times greater than that on the cotyledons of DOB. These results indicate that the cotyledons and leaves of SER were more susceptible to P. obducens than the cotyledons of DOB.
To assess P. obducens development on impatiens, inoculated cotyledon and true leaf segments of SER, DOB, and FLR were stained with trypan blue at 1, 2, 3, 4, 5, and 6 dpi and examined microscopically. On cotyledons of SER and DOB and on true leaves of SER, similar P. obducens development was observed. Plasmopara obducens sporangia first penetrated the adaxial leaf surface (Fig. 3B, C) and then formed vesicles, intercellular hyphae, and haustoria (Fig. 3A, E, F, G) at 1 or 2 dpi. Vesicle development in cotyledons of SER occurred earlier than in cotyledons of DOB and true leaves of SER. Evident hyphal and haustorial growth was seen at 4 dpi (Fig. 3I, K) and 6 dpi (Fig. 3M–O). Monopodially branched sporangiophores first emerged from stomata at 4 dpi on cotyledons of SER and DOB, and profuse sporangiophores and sporulation were then seen on cotyledons of SER and DOB and on true leaves of SER at 6 dpi (Fig. 3Q–S and Table 2). On cotyledons of DOB, an apparent cell death response could be observed (Fig. 3R). On the true leaves of FLR and DOB, inoculated sporangia were observed on the adaxial leaf surface (Fig. 3D), but the development of new vesicles, hyphae, or haustoria was not seen (Fig. 3H, L, P). Therefore, the lifecycle of P. obducens was not initiated in the true leaves of FLR and DOB. In the cotyledons of FLR, although hyphae and haustoria could occasionally be observed, the extension of hyphae was greatly limited.
Single-molecule real-time sequencing of transcriptomes and alternative splicing
To identify the genes controlling IDM resistance in Impatiens, we selected DOB (susceptible to IDM at the cotyledon stage, but resistant at the true leaf stage and thereafter) for full-length transcriptome characterization using the PacBio Iso-Seq protocol. In addition, SER (susceptible to IDM at the cotyledon, true leaf, and mature plant stages), FLR (resistant to IDM at the cotyledon, true leaf, and mature plant stages), and DOB were also selected for transcriptome profiling using the Illumina-based RNA-Seq methodology. To generate the full-length transcriptome of DOB, the RNA samples of cotyledon and true leaf tissues were pooled for sequencing on the PacBio Sequel platform. A total of 716,913 raw polymerase reads (34.4 Gb) with an N50 length of 93,681 bp were generated on one SMRT cell (Supplementary Table S3). After running the bioinformatics pipeline, 464,436 circular consensus sequences (CCSs) were obtained, among which 418,648 were full-length non-chimeric (FLNC) reads containing the 5′ and 3′ primers and poly-A tails. After the clustering step, 36,954 high-quality (HQ) isoforms were generated; the small number of low-quality (LQ) isoforms was excluded from further analysis. The HQ isoforms were then error-corrected using LoRDEC with the trimmed RNA-Seq short reads described below, yielding 36,954 error-corrected HQ isoforms with an N50 length of 3010 bp (Table 3).
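N50 is the summary statistic used throughout these read and assembly comparisons; a minimal sketch of its computation follows. This is the standard definition, not code from the study.

```python
def n50(lengths):
    """N50: the length L such that sequences of length >= L together
    contain at least half of the total bases."""
    total = sum(lengths)
    running = 0
    for length in sorted(lengths, reverse=True):
        running += length
        if running * 2 >= total:
            return length
    return 0

# Toy example: total = 100 bp; cumulative sums (desc) are 40, 70, ...
# so the N50 is reached at the 30 bp sequence.
print(n50([10, 20, 30, 40]))  # 30
```

The same function applies whether the input lengths are polymerase reads, HQ isoforms, or assembled contigs, which is why N50 values are directly comparable across Table 3.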
To discover alternative splicing (AS) events in Impatiens, the error-corrected and non-redundant (redundancy removed using CD-HIT-EST) HQ isoforms were partitioned into transcript families by the Coding GENome reconstruction Tool (Cogent) to reconstruct full-length unique transcript models (UniTransModels). A total of 11,763 full-length UniTransModels were obtained. Based on these UniTransModels, the HQ isoforms were further collapsed using Cupcake to obtain a set of 16,752 collapsed isoforms with an N50 length of 2992 bp (Table 3). Most of these UniTransModels (8,923, 75.7%) had one isoform, while 2862 (24.3%) UniTransModels had at least two isoforms (Fig. 4A, B). Based on these UniTransModels, there were six types of AS events observed, including retained intron (RI), alternative 5′ splice-site (A5), alternative 3′ splice-site (A3), skipping exon (SE), alternative first exon (AF), and alternative last exon (AL) (Fig. 4B). Among these AS events, RI type was the most predominant (984, 64.0%), followed by A3 (286, 18.6%) and A5 (13.7%). These three types of AS events accounted for >96% of detected events. By mapping the Illumina short reads to these UniTransModels, the reliability of detected AS events was confirmed (Fig. 4C).
The first reference transcriptome of impatiens
For each of the three cultivars (SER, FLR, and DOB), the cotyledon and true leaf tissues were also subjected to Illumina short-read sequencing (RNA-Seq). A total of 18 samples (3 cultivars × 2 tissue types × 3 replicates = 18) were sequenced. An average of 14.5 million 150-bp cleaned read pairs was obtained for each sample (Supplementary Table S4). A de novo assembly was performed for each cultivar by pooling reads of the six samples and using Trinity. A total of 118,919, 120,416, and 95,837 contigs were obtained for DOB, FLR, and SER, with N50 lengths of 2112, 2164, and 2242 bp, respectively (Table 3). The three RNA-Seq assemblies were merged using TGICL, with redundancy removed using CD-HIT-EST, yielding 100,049 unique transcript sequences with an N50 length of 2277 bp. The unique transcript sequences obtained from RNA-Seq were mapped to the Iso-Seq isoforms. For downstream functional annotation and investigation of gene expression, a reference transcriptome for Impatiens was constructed by combining the longest collapsed isoforms from Iso-Seq and the unmapped transcript sequences from RNA-Seq. Finally, a total of 48,758 reference transcript sequences with an N50 length of 2060 bp were obtained to represent the reference transcriptome of Impatiens (Table 3). To estimate the completeness of this reference transcriptome, we compared these sequences to the BUSCO embryophyta_odb9 dataset and obtained a completeness score of 85.2%.
For functional annotation, the reference transcriptome was compared to several major public databases. The majority of the sequences (36,978; 75.8%) had hits to the non-redundant protein (NR) database, followed by Swiss-Prot (29,731; 61.0%) and the non-redundant nucleotide (NT) database (22,616; 46.4%) (Supplementary Table S5). A total of 22,699 (46.6%) transcripts were annotated with gene ontology (GO) terms, with an average of four GO terms per transcript. In addition, 11,537 (23.7%) sequences were assigned Kyoto Encyclopedia of Genes and Genomes (KEGG) Orthology (KO) terms. By mining the reference transcriptome against the PlantTFDB v4.0 database, a small portion of sequences (1165; 2.4%) was predicted to encode transcription factors (TFs) and assigned to TF families (Supplementary Table S6). Among the 54 TF families, the most predominant was the bHLH family (122; 10.5%), followed by bZIP (77; 6.6%) and MYB-related (62; 5.3%). By running the TransDecoder pipeline, coding regions and protein sequences were successfully predicted for a total of 34,359 (70.5%) sequences, among which 27,515 (56.4%) contained a complete open reading frame (ORF). Based on the proteins predicted from the reference transcriptome, we identified 45 NBS-containing genes and 246 leucine-rich repeat receptor-like kinase (LRR-RLK) genes (Supplementary Table S7). Among the 45 predicted NBS genes, 33 (73.3%) contained a complete ORF. These NBS genes were further classified into four types: NBS-LRR (15), NBS (13), coiled-coil (CC)-NBS-LRR (10), and CC-NBS (7). The TIR domain was not identified in any of these predicted NBS genes. A phylogenetic tree was constructed based on the NBS domain sequences, revealing two major clusters of Impatiens NBS genes (Fig. 5).
Identification of genes and R-genes potentially involved in downy mildew resistance
To identify impatiens R-genes, we first downloaded the 152 reference Pathogen Receptor Genes maintained at the PRGdb that have been cloned and well characterized in other plant species. We also obtained 1678 proteins from NCBI and 37 Arabidopsis genes from the UniProt database based on their reported roles in downy mildew resistance. Through gene family analysis, we identified 683 impatiens genes (81 gene families) orthologous to the PRGdb reference R-genes or “downy mildew”-associated genes (Supplementary Table S8). These impatiens orthologs and the predicted NBS and LRR-RLK genes were prioritized for downstream evaluation of gene expression in the five pairwise comparisons made possible by the three cultivars (DOB, FLR, and SER) and two organ types (cotyledon and true leaf) with different resistance or susceptibility to IDM (Fig. 6).
The clean reads from the 18 impatiens RNA-Seq samples were mapped to the reference transcriptome to investigate the gene expression profiles of cotyledon and true leaf tissues of impatiens. Setting a transcripts per million (TPM) cutoff of 0.5 (in at least one replicate) for a transcript to be considered expressed, a range of 24,716–31,178 transcripts were expressed in the cotyledons and true leaves of these 18 samples (Fig. 6A and Supplementary Table S9). Notably, more transcripts were expressed at the true leaf stage than at the cotyledon stage for all three cultivars. Interestingly, DOB had the smallest number of expressed transcripts at the cotyledon stage but the highest number at the true leaf stage. There were 345 transcripts expressed only at the true leaf stage in all three cultivars. A total of 1245 transcripts were expressed only at the true leaf stage in the I. hawkeri samples but were not expressed in the I. walleriana samples. A much smaller number of transcripts were expressed only at the cotyledon stage. Through differential gene expression analysis using DESeq2, DOB had a much larger number of differentially expressed genes (DEGs) when transitioning from the cotyledon stage to the true leaf stage than the other two cultivars (Table 4 and Supplementary Table S10). Further principal component analysis (PCA) using DESeq2 also revealed that DOB had very different expression profiles between its cotyledons and true leaves. As shown in Fig. 6B, the expression profiles of true leaves were very similar between DOB and FLR, which belong to the same species. However, the cotyledon expression profiles of these two cultivars were clearly separated. These unique features may correspond to DOB’s different responses to IDM at the cotyledon versus the true leaf stage compared with other I. hawkeri cultivars resistant to IDM at both stages.
As expected, the two species were separated along the first principal component, which explained 85% of the variance.
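The expression call used above (TPM ≥ 0.5 in at least one replicate) reduces to a simple per-transcript test; a minimal sketch follows, with hypothetical transcript IDs and TPM values used purely for illustration.

```python
def expressed(tpm_replicates, cutoff=0.5):
    """A transcript is called 'expressed' if any replicate reaches the TPM cutoff."""
    return any(t >= cutoff for t in tpm_replicates)

# Hypothetical TPM table: transcript ID -> three biological replicates
tpm = {
    "PB.2448.1": [0.0, 0.7, 0.3],   # passes (replicate 2 >= 0.5)
    "PB.11744.1": [0.1, 0.2, 0.4],  # fails (all replicates < 0.5)
}
n_expressed = sum(expressed(values) for values in tpm.values())
print(n_expressed)  # 1
```

Applying this call per sample, and then intersecting or differencing the resulting sets across cultivars and stages, yields counts such as the 345 and 1245 true-leaf-only transcripts reported above.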
Given that DOB transitioned from IDM susceptibility (S) on cotyledons to IDM resistance (R) on true leaves, it was expected that genes associated with resistance to IDM would be expressed differently in true leaves compared with cotyledons in DOB. Thus, candidate genes were first mined based on the following criteria: (1) differentially expressed (FDR < 0.05, fold change ≥2) for DOB cotyledon (S) vs DOB true leaf (R); (2) within the same tissue type, for the genes upregulated in DOB true leaf (R) compared with DOB cotyledon (S), we looked for those that were also expressed at higher levels (FDR < 0.05, fold change ≥2) in IDM-resistant cultivars than in susceptible cultivars; (3) similarly, for the genes downregulated in DOB true leaf (R) compared with DOB cotyledon (S), we looked for those that were also expressed at lower levels (FDR < 0.05, fold change ≥2) in resistant cultivars than in susceptible cultivars (Table 4 and Supplementary Table S10). By applying these criteria, we identified 241 transcripts upregulated and 112 transcripts downregulated across all S vs R comparisons (Fig. 6C, D and Supplementary Table S11). Importantly, three NBS genes orthologous to cloned and characterized R-genes and to genes associated with DM resistance were among the 241 upregulated transcripts (Fig. 7A–C and Supplementary Table S8). These three NBS genes were expressed at significantly higher levels in all IDM-resistant samples compared with susceptible samples. For further verification, we also analyzed the gene expression data from another independent study on mature leaves (three replicates pooled for sequencing) of IDM-resistant and susceptible Impatiens cultivars41. We observed that these NBS genes were also expressed at much higher levels in the resistant Impatiens “SunPatiens® Compact Royal Magenta” (SPR) than in the susceptible sample SEP.
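At its core, the candidate-mining logic above is a set intersection across the resistant-vs-susceptible comparisons; a minimal sketch with hypothetical transcript IDs follows (the real analysis used DESeq2 results for all five comparisons, at FDR < 0.05 and fold change ≥2).

```python
# Each comparison yields the set of transcripts upregulated in the resistant (R)
# member relative to the susceptible (S) member. Transcript IDs are hypothetical.
up_by_comparison = [
    {"t1", "t2", "t3"},  # DOB true leaf (R) vs DOB cotyledon (S)
    {"t1", "t3", "t4"},  # a resistant cultivar vs a susceptible cultivar
    {"t1", "t3", "t5"},  # ... remaining R-vs-S comparisons
]

# Candidates must be upregulated in the resistant member of every comparison.
candidates_up = set.intersection(*up_by_comparison)
print(sorted(candidates_up))  # ['t1', 't3']
```

The analogous intersection over the downregulated sets yields the down candidates; in the study these intersections produced the 241 upregulated and 112 downregulated transcripts.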
The three NBS genes were orthologous to two genes (ACY69609.1/RGC203 and ADX86902.1) in common sunflower that have been associated with resistance to Plasmopara halstedii, the causal agent of sunflower DM45,46, and were also orthologous to two genes (Rpi-blb1 and RB) conferring resistance to potato late blight in Solanum bulbocastanum, a potato relative47,48. Considering this evidence, these NBS genes are good candidates for future efforts to mine Impatiens genes conferring IDM resistance. In addition, we identified two LRR-RLK genes significantly upregulated in all IDM-resistant samples (Fig. 7D, E), which may also be candidates potentially associated with IDM resistance. The three candidate NBS genes and the two candidate LRR-RLK genes were selected for qRT-PCR validation of gene expression in I. hawkeri and I. walleriana samples. Several pairs of primers were designed for candidate gene PB.2459.1, but none of them worked properly. The qRT-PCR results supported that the expression levels of PB.2448.1, PB.11744.1, PB.11524.1, and CL41296Contig1 were much higher (fold change ≥2) in the resistant samples than in the susceptible samples of I. hawkeri (Fig. 8A–D). When I. walleriana samples were included in the qRT-PCR comparison, only PB.11744.1 (CC-NBS-LRR) and CL41296Contig1 (LRR-RLK) showed significantly higher expression levels in the resistant samples than in the susceptible samples. Since only a single reference gene (GAPDH) was used for normalization in these qRT-PCR analyses, caution might be warranted when comparing gene expression levels between I. hawkeri and I. walleriana.
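For qRT-PCR comparisons normalized to a single reference gene such as GAPDH, relative expression is typically computed with the standard 2^(−ΔΔCt) (Livak) method; a minimal sketch with hypothetical Ct values follows (the study's actual Ct data are not shown in this excerpt).

```python
def fold_change_ddct(ct_target_r, ct_ref_r, ct_target_s, ct_ref_s):
    """Relative expression (resistant vs susceptible) by the 2^(-ddCt) method,
    normalized to a single reference gene such as GAPDH."""
    d_ct_r = ct_target_r - ct_ref_r  # delta-Ct in the resistant sample
    d_ct_s = ct_target_s - ct_ref_s  # delta-Ct in the susceptible sample
    return 2 ** -(d_ct_r - d_ct_s)

# Hypothetical Ct values: the target amplifies 2 cycles earlier (relative to
# GAPDH) in the resistant sample, i.e., 4-fold higher expression.
print(fold_change_ddct(24.0, 20.0, 28.0, 22.0))  # 4.0
```

Under this method, the fold change ≥2 criterion used above corresponds to a ΔΔCt of at most −1 cycle; it also assumes near-100% amplification efficiency for both target and reference genes.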
Discussion
As a worldwide challenge, DM has devastated many crops, including the ornamental crop garden impatiens, I. walleriana. Impatiens hawkeri has been reported to be generally resistant to IDM. However, to date, little research has been reported on developing disease screening or resistance phenotyping methodologies in Impatiens or on understanding the IDM resistance in I. hawkeri. The lack of information on resistance phenotyping and resistance mechanisms has severely hindered efforts to develop new IDM-resistant cultivars. Moreover, the interaction between P. obducens and Impatiens remains to be elucidated. Therefore, to fill these gaps, we have established a rapid, efficient, and effective system to assess IDM susceptibility and resistance at very early growth stages (the first and second pair of true leaf stages as well as the cotyledon stage) and histologically characterized the pathogen development inside impatiens cotyledons and leaves. Using this newly developed method, we discovered that two cultivars (DL and FLR) possess strong IDM resistance starting at the earliest growth stage (the cotyledon stage) and that a number of other cultivars, including DOB, make a dramatic transition within a short time and a very short distance, from susceptibility at the cotyledon stage to complete resistance at the first/second pair of true leaf stage. These cultivars and growth stages with different levels of resistance to IDM created an excellent opportunity to investigate host–pathogen (impatiens–P. obducens) interactions and discover genes potentially involved in this dramatic transition. In this study, we seized this opportunity and applied full-length transcriptome sequencing (Iso-Seq) and RNA-Seq to three cultivars (SER, DOB, and FLR) with contrasting IDM phenotypes and made five pairs of transcriptome comparisons between IDM-resistant and susceptible cultivars and tissues.
These comparisons enabled us to identify 241 transcripts upregulated in resistant cultivars and resistant tissues and three R-genes potentially involved in IDM resistance. Results and genomic resources from this study will help better understand IDM resistance in impatiens, develop molecular markers, implement genomics-assisted breeding, accelerate the development of new IDM-resistant cultivars, and provide genomic resources for cloning of IDM-resistance genes.
All I. walleriana cultivars tested in this study exhibited susceptibility to IDM throughout all plant developmental stages, from the cotyledon stage to mature, flowering plants. While I. hawkeri showed general resistance to IDM, many cultivars were susceptible to IDM at the cotyledon stage, indicating important influences of plant developmental stage (or tissue type) on IDM resistance or susceptibility. A similar phenomenon has been observed in certain other plant–pathogen interactions. For example, broccoli (Brassica oleracea) lines “PCB21.32” and “OL87123-2” were fully susceptible to DM (Hyaloperonospora parasitica) at the cotyledon stage but were resistant to the pathogen at 6 weeks old49. Therefore, for these plants, DM resistance cannot be predicted from cotyledon resistance. By contrast, cotyledons and true leaves in basil (Ocimum basilicum) exhibited similar DM responses, indicating that early inoculation could be used in DM resistance evaluation50. In our study, we observed that different I. hawkeri cultivars had different responses to IDM at the cotyledon stage, and some were even highly susceptible to IDM at this early stage. Therefore, to prevent potential damage by IDM to these cultivars during young plant production, fungicide protection in the production facility is required at the cotyledon stage. As for breeding impatiens for IDM resistance, we recommend screening impatiens breeding populations beginning at the cotyledon stage so that newly developed resistant cultivars will have resistance throughout all plant developmental stages. Delayed IDM disease screening may result in new cultivars that are susceptible to this fast-acting and destructive disease at their early growth stages, resulting in crop failure in large-scale young plant production.
This very early-stage disease screening should be invaluable to impatiens breeding: it identifies new breeding lines, and ultimately new cultivars, with life-long resistance to IDM, and it should also greatly reduce the space, time, labor, and costs associated with screening large numbers of impatiens breeding populations.
Hypersensitive response (HR) involves the rapid death of host cells that can limit the progress of infection. It is a plant resistance response that can be used to differentiate between resistant and susceptible plants. In different plants, varied HR symptoms have been reported when resistant hosts were infected by the DM pathogen. For instance, resistant Arabidopsis plants showed different HR symptoms against DM, such as flecking necrosis, necrosis, pitting necrosis, or trailing necrosis, depending on the strength and timing of the cell death response51. By contrast, susceptible accessions displayed heavy conidiophore sporulation without visible cell death. Compared with susceptible individuals, which did not show any visible reactions except sporulation, resistant grapevine showed HR with isolated necrosis, resulting in a significant reduction of pathogen expansion and disease symptoms52. In this study, “black spots” (and “black specks”) were observed on inoculated cotyledons of I. hawkeri. There was no simple linear correlation between disease incidence of IDM and black spot. This raised an interesting question as for what roles these black spots and specks may play in I. hawkeri’s resistance to IDM.
Although plants defend themselves against pathogens in different ways, successful host defenses disrupt the disease cycle primarily in the pre-penetration, penetration, or infection phase53. The host defense to DM in resistant plants has been studied in several plant species, such as Arabidopsis, lettuce, and grapevine52,54,55,56,57. Arabidopsis C24 appeared to develop an HR upon infection, with visible cell death, and the elongation of hyphae branching into the intercellular space was restricted54. In resistant transgenic lettuce, the growth of the pathogen Bremia lactucae was retarded, and no sporophores were observed at any time point55. In these two examples, the lifecycle of the DM pathogens in resistant plants was not completed. By contrast, in DM-resistant grapevines, P. viticola could complete its life cycle in leaf tissues, but its hyphal growth and sporangia formation were inhibited, resulting in no visible symptoms or sporulation52,56,57. These three types of host defense to DM all occur in the infection phase. Compared with the above scenarios, the resistance mechanism of I. hawkeri to P. obducens may be most similar to that of grapevines against P. viticola, because in cotyledons of I. hawkeri FLR the lifecycle of P. obducens could also occasionally be completed, while hyphal growth, haustoria development, and sporulation were greatly restricted.
RNA-Seq has been a powerful approach to understanding the transcriptional regulations associated with the disease response to DM in many crops, such as grapevine57,58, spinach40, lima bean59, and pearl millet60. However, the short reads from RNA-Seq are usually insufficient to reconstruct an accurate transcriptome, especially for species without a publicly available reference genome. Our study combined the strengths of third-generation sequencing technology, with its much longer read length, and RNA-Seq in reconstructing the transcriptome for Impatiens spp. with limited genome and transcriptome resources. As revealed in this study, the N50 length of Iso-seq isoforms (2992 bp) is much longer than that of the final assembly from RNA-Seq (2277 bp). The full-length transcriptome not only provided full-length transcript sequences for genes but also provided insights into the AS events in Impatiens spp. Consistent with other crops such as cotton61, rice62, and Italian ryegrass63, the retained intron type contributed the majority of AS events in Impatiens. On the basis of Iso-seq isoforms and in combination with RNA-Seq assemblies, we constructed a reference transcriptome for Impatiens spp. with a BUSCO score of 85.2%. Since we only sequenced tissues collected from earlier developmental stages (cotyledon and true leaf stages), not all genes are expected to be expressed; this level of completeness is comparable to that reported in other plant species64,65.
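The N50 comparison above can be made concrete with a small sketch. The function below implements the standard N50 definition (the length L such that transcripts of length ≥ L cover at least half of the total assembly); the transcript lengths in the example are made-up toy values, not the Impatiens data.

```python
def n50(lengths):
    """Return the N50 of an assembly: the length L such that transcripts
    of length >= L together cover at least half of the total bases."""
    total = sum(lengths)
    running = 0
    for length in sorted(lengths, reverse=True):
        running += length
        if running * 2 >= total:
            return length
    return 0  # empty input

# Toy example: total 10,000 bp; 4000 + 3000 covers half, so N50 = 3000
print(n50([2000, 3000, 4000, 1000]))  # 3000
```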
Improving disease resistance is an important objective of impatiens breeding. Efficient screening of breeding populations to identify breeding lines with disease resistance is critical for a breeding effort toward such a goal. Development and application of molecular markers have been proposed as an effective approach to increasing the screening efficiency in impatiens disease-resistance breeding. However, few molecular markers have been reported for disease-resistance traits in impatiens, primarily due to the lack of genomic resources. The transcriptome sequences assembled in this study can play an important role in the future development of molecular markers for disease-resistance traits (and other traits) in impatiens. The 45 impatiens NBS and 246 LRR-RLK gene sequences identified in this study can be particularly valuable for this effort. It has been shown that NBS genes constitute a large family of R-genes conferring plant resistance to diverse bacterial, fungal, oomycete, and viral pathogens and nematodes, and even insects in some cases66. LRR-RLK genes also play important roles in plant disease resistance, functioning as R-genes or as members of plant defense signaling pathways. In other plants, NBS and LRR-RLK sequences often co-localize with, or are linked or associated with, disease-resistance loci or QTLs, which has allowed rapid development of new molecular markers10. Thus, these impatiens NBS and LRR-RLK sequences can serve as an excellent starting point in future efforts toward developing molecular markers for disease-resistance traits in impatiens. The full-length coding region sequences of these genes from the assembled reference impatiens transcriptome can speed up the cloning and functional characterization of the identified impatiens disease-resistance genes.
One limitation of this study is that P. obducens-inoculated samples were not available for transcriptome sequencing due to the lack of viable pathogen inoculum when this part of the study was initiated. Without P. obducens-inoculated samples, transcripts induced by pathogen infection could not be captured. To overcome this limitation, we made use of the unique impatiens genotype (DOB) discovered in this study and made comparisons between IDM-resistant and susceptible tissues and between IDM-resistant and susceptible cultivars. These comparisons enabled us to identify 241 and 112 transcripts upregulated and downregulated in IDM-resistant cultivars/tissues, respectively. These differentially expressed transcripts can be very valuable for further dissection of the interactions between impatiens and P. obducens at the molecular level. The three NBS and two LRR-RLK transcripts that were upregulated in the IDM-resistant cultivars/tissues may be of particular value because they were also expressed at higher levels in another IDM-resistant I. hawkeri cultivar in a previous study41 and are potentially involved in IDM resistance. Several approaches can be used in future experiments to test the roles of these transcripts in IDM resistance, including genetic segregation analysis, genetic mapping, gene expression analysis, genetic transformation and overexpression, and/or knockout with RNAi or gene editing67. The full-length coding region sequences of these transcripts can facilitate the initiation of all these important analyses.
For all three cultivars (DOB, FLR, and SER), a higher number of transcripts were expressed at the true leaf stage compared with the cotyledon stage, indicating that more genes are needed and induced as impatiens plants begin to grow and develop. However, a smaller number of transcripts appear to be newly induced in I. walleriana than in I. hawkeri. The I. hawkeri cultivar DOB seems to be a special case, since the number of DEGs between the cotyledon and true leaf stages (5703) was at least twice that in the other two cultivars (2163 or 1850). According to the PCA analysis, the high number of DEGs in DOB is less likely due to a distinct expression profile in the true leaf, since DOB and FLR (both belonging to I. hawkeri) had very similar expression profiles in the true leaf. Instead, it is more likely explained by the distinct expression profile of the cotyledon of DOB, as DOB and FLR had relatively dissimilar expression profiles in the cotyledon. Moreover, the total number of transcripts expressed in cotyledons of DOB was only 24,716, the lowest among the three cultivars. Therefore, it is possible that some molecular or transcriptional regulation related to IDM resistance may be missing or undermined in cotyledons of DOB, but later returns to a level similar to that of FLR at the true leaf stage. We found these DEGs from DOB to be of particular interest, since the genes associated with susceptibility (at the cotyledon stage) or resistance (at the true leaf stage) to IDM are likely among them. Subsequently, we further identified DEGs shared by all possible S vs R comparisons within the same tissue types, which could represent the transcriptional differences associated with susceptibility/resistance to IDM. The differential expression analysis, combined with large-scale identification of NBS genes, LRR-RLK genes, and orthologs of public R-genes and genes associated with DM, finally led to a few candidate genes, including three NBS genes.
Currently, >30 resistance genes against DM, designated Pl genes, have been identified and extensively studied in sunflower68,69, and NBS genes have played an important role among them. In sunflower, two types of DM resistance have been reported: type I, which restricts pathogen growth in hypocotyls, and type II, which allows the pathogen to reach hypocotyls and cotyledons45. The type II resistance (Pl14) was reported to be controlled by CC-NBS-LRR genes, while type I resistance (PlARG) is likely controlled by TIR-NBS-LRR genes. In addition, the type II resistance gene (Pl14) was reported to be in close proximity to several clusters of non-TIR type NBS-LRR genes that appeared to be tandemly duplicated in the sunflower genome46. In comparison with the two types of resistance in sunflower, the resistance to IDM conferred by I. hawkeri DOB may be similar to the type II resistance. First, the cotyledons of DOB can be invaded by IDM. Second, the TIR domain was not identified in the NBS genes of impatiens, indicating that most NBS genes of impatiens could be of a non-TIR type. Moreover, the three NBS genes identified in this study were assigned to the same gene family as RGC203 (resistance type II) in sunflower, and two of these NBS genes are of a CC-NBS-LRR type. As a total of 20 impatiens NBS genes are assigned to this gene family, they may also belong to duplicated clusters, which needs further confirmation based on genome sequences. Future experiments can be designed to examine the temporal expression of these candidate genes during IDM infection and to investigate their functions. Since both susceptibility and resistance to IDM can be observed on the same plant at different growth stages, cultivars like DOB would be an excellent plant material and model to further clarify the molecular mechanisms of IDM resistance and susceptibility in impatiens.
Conclusion
In summary, our study investigated the resistance and susceptibility of I. walleriana and I. hawkeri cultivars to P. obducens at different plant growth stages. By artificial inoculation and histological characterization of pathogen development inside inoculated tissues, we established an effective early and rapid system to screen impatiens cultivars and breeding populations for IDM resistance and to study plant–pathogen interactions. Using this system, we discovered two cultivars with strong resistance to IDM from their cotyledon stage on and additional cultivars that expressed, at different growth stages, dramatically different levels of resistance to P. obducens. We took advantage of these newly discovered disease responses and further characterized the expression profiles of cotyledons and true leaves of Impatiens. Our study has provided a comprehensive data source for mining disease-resistance genes in Impatiens, including transcriptome-wide identified NBS genes, LRR-RLK genes, genes orthologous to public R-genes and downy mildew associated genes, and DEGs differentially regulated between resistant and susceptible cultivars and tissues. Our results have laid a solid foundation for further research to understand and improve DM resistance in impatiens and have good potential to be applied to other crops.
Materials and methods
Impatiens walleriana and I. hawkeri cultivars and seedlings
Sixteen cultivars of I. walleriana from Accent Premium, Xtreme, Super Elfin, and Balance series and 16 I. hawkeri from Florific and Divine series (Table 1 and Supplementary Table S1) were evaluated for their response to P. obducens infection at the cotyledon, first/second pair of true leaf, and mature plant stages. Seeds of these 32 cultivars were sown on 20-rowed germination trays (model P-SEED20; Landmark Plastic Co., Orlando, FL) filled with Fafard germination Mix (Conrad Fafard, Inc., Agawam, MA). The trays were covered with plastic lids to retain moisture in a growth room at temperatures between 22 and 25 °C and a photoperiod of 16 h light/8 h dark. Seedlings with cotyledons (about two weeks old for I. walleriana and three weeks old for I. hawkeri) were transferred, one plant per cell, into 128-cell trays (model TR128D; Speedling Inc., Sun City, FL) filled with the commercial potting mix Fafard 3B mix (Conrad Fafard, Inc.). Seedlings were grown in a DM-free greenhouse with the temperature controlled between 25 and 30 °C. A liquid fertilizer containing 20% (w/w) nitrogen, 20% (w/w) phosphate (P2O5), and 20% (w/w) potassium (K2O) (Southern Agricultural Insecticides Inc., Palmetto, FL) was applied to the seedlings at 75 ppm twice a week following the irrigation program. All seedlings used in different experiments were grown using this method and all experiments were conducted at the University of Florida’s Gulf Coast Research and Education Center (UF/GCREC) (lat. 27°45’36” N, long. 82°13’45” W; AHS Heat Zone 10; USDA Cold Hardiness Zone 9A) in Wimauma, FL, USA.
Plant growth and field disease evaluation
On April 11, 2014 (47 days after seeds were sown), seedlings with four pairs of true leaves were transplanted into 72-cell trays (model TR72D; Speedling Inc., Sun City, FL) filled with the commercial potting mix Fafard 3B mix and kept in the greenhouse. The same liquid fertilizer was applied to the seedlings at 75 ppm once a day following the irrigation program. On June 5, 2014 (0 DAP), plants were transplanted onto the 20-cm-high, 81-cm-wide raised ground beds of EauGallie fine sand covered with white-on-black plastic mulch in the experimental field of UF/GCREC. An overhead shade cloth was set up over the beds to create a partially shady environment (~40% shade). All cultivars were planted following a randomized complete block design with three blocks. For each cultivar, two biological replications in each block were grown 112.5 cm apart from each other. Drip irrigation was used, and regular fertilizer and insecticide programs were followed. Plants were checked visually every 2 days for white sporulation on the abaxial side of leaves as an indication of IDM. Diseased leaves were sampled and observed under a bright-field microscope (BH-2) to confirm the pathogen identity.
In vivo preservation of DM pathogen
Plasmopara obducens sporangia were obtained from I. walleriana Accent Premium Rose (APR) during a field trial in March 2015 and then used to inoculate a susceptible I. walleriana APR stock plant maintained in an isolated growth room. First, P. obducens was identified as the cause of DM based on plant symptoms and the morphology of sporangiophores and sporangia described by Palmateer et al.24. A sporangia solution (1 × 105 sporangia ml−1) was prepared as described by Pyne et al.50. Fresh sporulating leaves of APR were dipped into distilled water and gently agitated for 5 min. The P. obducens sporangia suspension was filtered through a 40-μm nylon mesh cell strainer (Thermo Fisher Scientific, Bridgewater, NJ) and then centrifuged at 3000×g for 10 min. The mesh cell strainer was used to remove debris and produce a cleaner sporangia suspension; since P. obducens sporangia are ovoid and 12.7–25.0 × 10.0–17.7 µm in dimension24,29, they were expected to pass through the strainer easily. The supernatant was discarded, and the pellet was re-suspended in 10 ml of distilled water. The sporangia density in the suspension was adjusted to a final density of 1 × 105 sporangia ml−1 using a Reichert Bright-Line hemocytometer (Hausser Scientific, Horsham, PA) and a BH-2 microscope (Olympus America Inc., Melville, NY). The prepared sporangia suspension was finely sprayed onto the adaxial leaf surface of I. walleriana APR plants (60 days old). The inoculated plants were kept in closed plastic bags on a metal bench in the growth room with the air temperature maintained at 21 ± 1 °C, a light intensity of 160 µmol m–2 s–1, and 16-h light/8-h dark. The humidity inside the plastic bags was 100%, measured with hygrometers. At 7 days post inoculation (dpi), white downy growth was visible on the abaxial leaf surface. Disease symptoms and the morphology of the sporangiophores and sporangia were compared to the control plants to verify the pathogen.
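The density adjustment described above amounts to a standard C1V1 = C2V2 dilution. As a hedged illustration (the stock concentration below is a hypothetical hemocytometer estimate, not a value from the study), a minimal Python sketch:

```python
def dilution_volume(stock_conc, target_conc, final_volume_ml):
    """C1*V1 = C2*V2: return (mL of stock needed, mL of diluent to add)
    to reach target_conc in final_volume_ml."""
    if stock_conc < target_conc:
        raise ValueError("stock is already below the target concentration")
    v_stock = target_conc * final_volume_ml / stock_conc
    return v_stock, final_volume_ml - v_stock

# Hypothetical stock of 4e5 sporangia/mL, target 1e5 sporangia/mL in 10 mL:
v_stock, v_water = dilution_volume(4e5, 1e5, 10.0)
print(v_stock, v_water)  # 2.5 mL stock + 7.5 mL distilled water
```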
Inoculating I. walleriana and I. hawkeri seedlings at three growth stages
Seedlings of 16 I. walleriana and 16 I. hawkeri cultivars were individually inoculated with P. obducens sporangia at the cotyledon stage or the first/second pair of true leaf stage in 128-cell trays. One droplet (~20 µL) of sporangia suspension (1 × 105 sporangia mL−1) was applied to each of the adaxial and abaxial sides of each cotyledon. To inoculate the first/second pair of true leaves, five droplets were applied to each of the adaxial and abaxial sides of each leaf. Inoculated seedlings were immediately enclosed inside a polythene bag for 20 days. In the control treatment (non-inoculated), the cotyledons and first/second pair of true leaves were mock-inoculated with the same numbers of droplets of distilled water, and these seedlings were kept in a separate growth room under the same growing conditions. Inoculated seedlings were evaluated for IDM disease symptoms at 10 and 20 dpi. The disease incidence index was defined as the mean rating for downy mildew incidence on a binary scale, in which 0 equaled no visible sporulation and 1 equaled visible sporulation on the abaxial side of the cotyledon. A randomized complete block design with three replicates was used as the experimental design. For each replicate, 12 or 16 cotyledons or leaves were sampled per cultivar. Data analysis was performed in SAS version 8.1 (SAS Institute Inc., Cary, NC).
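The binary disease incidence index described above is simply the mean of the 0/1 ratings within a replicate. A minimal sketch (the ratings below are invented for illustration):

```python
def disease_incidence(ratings):
    """Mean of binary ratings: 0 = no visible sporulation, 1 = visible
    sporulation on the abaxial side of the cotyledon or leaf."""
    if not all(r in (0, 1) for r in ratings):
        raise ValueError("ratings must be 0 or 1")
    return sum(ratings) / len(ratings)

# Hypothetical replicate of 16 cotyledons, 4 of which show sporulation:
print(disease_incidence([1] * 4 + [0] * 12))  # 0.25
```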
Observation of pathogen development in inoculated cotyledons and true leaves
Based on in vivo inoculation results, I. walleriana SER, I. hawkeri DOB, and FLR were selected for microscopic observation. Infected cotyledon and true leaf segments (~5 × 10 mm) were collected, washed with autoclaved distilled water three times, and placed on 1% autoclaved water agar in plastic disposable Petri dishes (9.5 cm in diameter; 20 mL per dish). Cotyledon and leaf segments were inoculated with one 10-µL droplet at 1 × 105 sporangia mL−1 on the adaxial surface. The inoculated cotyledon and true leaf segments were incubated for 24 h under the above-described conditions. Thereafter, the sporangia suspension droplets were blotted with autoclaved filter papers. The cotyledon and true leaf segments were kept on the water agar with the abaxial surface up to observe disease symptoms.
The development of P. obducens in impatiens cotyledons or true leaves was examined by microscopic observation of trypan blue-stained impatiens tissues. Five inoculated cotyledon or leaf segments per treatment were removed from the Petri dishes at 1, 2, 3, 4, 5, and 6 dpi and fixed by soaking them in 5 mL of the clearing solution A (acetic acid:ethanol = 1:3, v/v) in a 50-mL tube (one or two segments per tube). Tubes were shaken at a low speed (80 rpm) overnight. Subsequently, the clearing solution A was removed and replaced with 5 mL of the clearing solution B (acetic acid:ethanol:glycerol = 1:5:1, v/v/v). The tissue samples were shaken for at least 3 h and then treated with 5 mL of 0.01% trypan blue (Sigma-Aldrich) staining solution (trypan blue:lactic acid:phenol:distilled water = 0.003:1:1:1, w/v/v/v). Impatiens tissue samples were stained overnight on a shaker at a low speed (80 rpm). Stained cotyledon or leaf tissues were rinsed with a small amount of autoclaved 60% glycerol to remove the staining solution, immersed in 5 mL of autoclaved 60% glycerol, and shaken at 80 rpm for at least 2 h. Finally, the well-stained impatiens tissue samples were placed on a clean glass slide in a drop of 60% glycerol, covered with a coverslip, and observed under a microscope (BX41) equipped with an Olympus Q-color 5 camera (Olympus America Inc., Melville, NY).
Determination of sporangia densities on inoculated cotyledons and true leaves
Small pieces (5 × 5 mm) of tissue from inoculated cotyledon and leaf segments were cut and immersed in 200 µL distilled water amended with Tween 20 (0.05%; Sigma-Aldrich, St. Louis, MO) in a 1.5-mL microcentrifuge tube. The tubes were vortexed on a mini-shaker (Vortex-Genie; Fisher Scientific, Waltham, MA) for 5–10 s to dislodge the sporangia from cotyledon or leaf surfaces. Sporangia in the suspension were counted using a hemocytometer under a bright-field microscope (BH-2). Sporangia counts were converted into sporangia densities (total number of sporangia/area of the cotyledon or leaf segment sampled). Sporangia counting was performed every two days from 4 dpi to 10 dpi. For each time point, eight pieces of cotyledon or leaf segments were sampled. This experiment was repeated three times.
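The count-to-density conversion above can be sketched in a few lines. Note the 1e4 hemocytometer factor below is the conventional value for a standard chamber (count per large square × 10^4 ≈ sporangia per mL) and is an assumption, not stated in the protocol; the 200-µL wash volume and 5 × 5 mm (25 mm²) segment follow the text.

```python
def sporangia_density(count_per_square, suspension_ul=200, area_mm2=25):
    """Convert a hemocytometer count into sporangia per mm^2 of tissue.

    Assumes a standard chamber where count_per_square * 1e4 approximates
    sporangia per mL (an assumption for illustration).
    """
    per_ml = count_per_square * 1e4          # suspension concentration
    total = per_ml * suspension_ul / 1000.0  # sporangia in the wash volume
    return total / area_mm2                  # per mm^2 of sampled segment

# 5 sporangia per large square -> 5e4/mL in 0.2 mL over 25 mm^2:
print(sporangia_density(5))  # 400.0 sporangia per mm^2
```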
Library preparation and sequencing
To investigate the normal transcriptome profiles of IDM-resistant and susceptible cultivars at cotyledon and true leaf stages, three representative cultivars were selected, including DOB (susceptible to IDM at the cotyledon stage; resistant to IDM at the true leaf stage and thereafter), SER (susceptible to IDM at all stages), and FLR (resistant to IDM at all stages). Seeds were planted on a commercial potting mix in containers and germinated in a greenhouse facility at UF/GCREC, USA. Cotyledons and first true leaves were collected at the cotyledon stage and true leaf stage, respectively, without P. obducens inoculation. For each tissue type/cultivar, samples were collected from three biological replicates. Collected samples were immediately frozen in liquid nitrogen for RNA extraction. RNA samples were extracted using the RNeasy Plus Mini kit (Qiagen, CA, USA). RNA quantity and quality were evaluated using a Qubit 2.0 fluorometer (Thermo Fisher Scientific, Waltham, USA) and an Agilent 2100 Bioanalyzer (Agilent Technologies, CA, USA), respectively. The RNA samples of cotyledons and true leaves of DOB were pooled in equal amounts for PacBio Iso-seq. The SMARTer PCR cDNA Synthesis Kit (Clontech, CA, USA) was used for full-length cDNA synthesis. Two size bins (<3 Kb and >3 Kb) were used for cDNA fractionation and Iso-Seq library construction, and the library was sequenced on one SMRT cell of the PacBio Sequel system (PacBio, CA, USA) at the Interdisciplinary Center for Biotechnology Research, University of Florida, Gainesville, FL, USA. In addition, the RNA samples from the above three cultivars were sent to the University of California Davis Genome Center for Illumina HiSeq4000 and NovaSeq6000 sequencing (150 bp paired-end reads).
Iso-Seq and RNA-Seq data analysis
The Iso-Seq raw data were processed following the PacBio Iso-Seq pipeline using SMRT Link v8.0 and Iso-Seq3 (https://github.com/PacificBiosciences/IsoSeq_SA3nUP). Only high-quality (HQ) consensus sequences were used for further analysis. The trimmed Illumina short reads (described in the next paragraph) were used to correct errors in the HQ consensus sequences using LoRDEC70. Redundancy was removed using CD-HIT-EST71 (-c 0.99 -n 10 -T 0 -M 0 -r 1). Cogent v2.1 was used to reconstruct the unique transcript models (UniTransModels) (https://github.com/Magdoll/Cogent). The error-corrected and non-redundant HQ consensus sequences were mapped to the UniTransModels using GMAP and further collapsed using Cupcake (https://github.com/Magdoll/cDNA_Cupcake). Alternative splicing (AS) events were identified using SUPPA72 with default settings. For visualization of AS events, sashimi plots were generated using the Integrative Genomics Viewer (IGV); the bam file was obtained by aligning the Illumina short reads of DOB to the UniTransModels using TopHat273.
The raw Illumina reads (HiSeq and NovaSeq) were trimmed using Trimmomatic74. The trimmed reads belonging to the same cultivar were pooled for a de novo assembly using Trinity75 (--min_kmer_cov 2), one assembly per cultivar. The three resulting assemblies were merged using TGICL v2.1 with default options76. Redundancy was removed using CD-HIT-EST (-c 0.95 -n 9 -T 0 -M 0 -r 1). To construct a reference transcriptome for downstream annotation and gene expression analyses, the final transcripts from RNA-Seq were mapped to the Iso-seq isoforms using BWA-MEM77. The longest isoforms from Iso-seq and the unmapped transcripts from the RNA-Seq assembly were combined to represent the reference transcriptome of Impatiens spp. The completeness of the reference transcriptome was assessed against the BUSCO OrthoDB9 embryophyta dataset.
Functional annotation and prediction of coding sequences
The reference transcriptome was compared to the non-redundant protein (nr), non-redundant nucleotide (nt) databases from NCBI (https://www.ncbi.nlm.nih.gov/), Swiss-Prot database (https://www.uniprot.org/), and Kyoto Encyclopedia of Genes and Genomes (KEGG) database (http://www.genome.jp/kaas-bin/kaas_main) using Blast (E-value ≤1e-05). Gene ontology (GO) terms were assigned using Blast2Go78 (-v -annot -dat -img -ips ipsr -annex -goslim). The Plant Transcription Factor Database (PlantTFDB) v4.0 (http://planttfdb.cbi.pku.edu.cn/prediction.php) was used to predict transcription factors (TFs). The coding sequences (CDS) and protein sequences were predicted following the TransDecoder pipeline (https://github.com/TransDecoder/TransDecoder) integrating the Blast (Swiss-Prot) and Pfam search results.
The NBS-containing genes were predicted by searching (hmmsearch) the predicted protein sequences with the hidden Markov model (HMM) profile of the NBS domain (PF00931)79 under an E-value cutoff of 1 × 10−4. PfamScan and the NCBI Conserved Domain Search were used to confirm the NBS domain. The classification of NBS genes based on TIR, LRR, and CC domains was performed using the NCBI Conserved Domains tool and Marcoil80 (probability > 90%). The NBS domain sequences were retrieved to construct a phylogenetic tree using RAxML under the ‘PROTGAMMAJTTF’ model with 1000 bootstraps81. The identification of LRR-RLK genes followed the same method described previously82.
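The E-value filtering step above can be sketched as a small parser over hmmsearch's tabular (--tblout) output, where the target name is the first whitespace-separated field and the full-sequence E-value is the fifth. This is an illustration, not the study's actual script; the two records are invented:

```python
def parse_tblout(lines, evalue_cutoff=1e-4):
    """Collect target names from hmmsearch --tblout output that pass
    the E-value cutoff (HMMER puts the full-sequence E-value in field 5)."""
    hits = []
    for line in lines:
        if line.startswith("#") or not line.strip():
            continue
        fields = line.split()
        if float(fields[4]) <= evalue_cutoff:
            hits.append(fields[0])
    return hits

toy = [
    "# target name  accession  query name  accession  E-value ...",
    "transcript_1 - NB-ARC PF00931.23 1.2e-30 100.5 0.0 ...",
    "transcript_2 - NB-ARC PF00931.23 0.5 10.1 0.0 ...",
]
print(parse_tblout(toy))  # ['transcript_1']
```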
Comprehensive search for downy mildew associated genes and gene family analysis
To collect publicly available plant proteins associated with DM, the keyword “downy mildew” was first searched at NCBI and 1678 proteins were obtained. The keyword was also searched at UniProt and 37 proteins from Arabidopsis were obtained. In addition, the 152 reference Pathogen Receptor Genes maintained at PRGdb (http://prgdb.org/prgdb/) were also included. To identify Impatiens orthologs, gene family analysis was performed for the above-collected proteins and the predicted proteins from Impatiens using all-against-all BLAST (E-value 1 × 10−5) and OrthoMCL83.
Differential expression analysis
The clean reads for each replicate were aligned to the reference transcriptome using BWA-mem. Only uniquely mapped reads were considered for further analysis. Read counts were obtained using HTSeq84. Differentially expressed genes (DEGs) were identified using DESeq285 under the cutoff of false discovery rate (FDR) < 0.05 and fold change ≥2. The transcripts per million (TPM) values were calculated using TPMCalculator86.
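The DEG cutoffs above (FDR < 0.05, fold change ≥ 2) can be sketched as a simple filter over (transcript, log2 fold change, adjusted p-value) tuples. This illustrates the criteria only, not the DESeq2 implementation itself, and the values below are invented:

```python
import math

def call_degs(results, fdr_cutoff=0.05, min_fold_change=2.0):
    """Split transcripts into (up, down) lists using FDR < cutoff and
    |fold change| >= min_fold_change (i.e. |log2FC| >= log2(threshold))."""
    threshold = math.log2(min_fold_change)  # 1.0 for a 2-fold change
    up, down = [], []
    for name, log2fc, padj in results:
        if padj is None or padj >= fdr_cutoff:
            continue  # not significant (DESeq2 reports padj=NA as None here)
        if log2fc >= threshold:
            up.append(name)
        elif log2fc <= -threshold:
            down.append(name)
    return up, down

toy = [("t1", 2.3, 0.001), ("t2", -1.5, 0.01), ("t3", 0.4, 0.001), ("t4", 3.0, 0.2)]
print(call_degs(toy))  # (['t1'], ['t2'])
```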
qRT-PCR validation
Two NBS genes and two LRR-RLK genes were selected for validation of gene expression in I. hawkeri samples using qRT-PCR. Primers were designed using BatchPrimer3 v1.0 (http://probes.pw.usda.gov/batchprimer3/). The RNA samples of cotyledon and true leaf tissues for DOB and FLR were used for cDNA synthesis with the SuperScript® III First-Strand Synthesis System for RT-PCR kit (Invitrogen, CA, USA). qRT-PCR was carried out with three biological replicates and each containing two technical replicates for each tissue type using the Power SYBR® Green PCR Master Mix kit (Applied Biosystems, USA). The cDNA levels of selected genes were normalized to the reference gene GAPDH.
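Normalization of qRT-PCR data to a reference gene such as GAPDH is commonly done with the 2^-ΔΔCt (Livak) method. The paper does not state which quantification method was used, so treating it as ΔΔCt is an assumption for illustration; the Ct values below are hypothetical:

```python
def relative_expression(ct_target, ct_ref, ct_target_cal, ct_ref_cal):
    """2^-ΔΔCt relative quantification: target Ct is normalized to the
    reference gene (e.g. GAPDH) in both the sample and the calibrator."""
    delta_sample = ct_target - ct_ref            # ΔCt, sample
    delta_calibrator = ct_target_cal - ct_ref_cal  # ΔCt, calibrator
    return 2 ** -(delta_sample - delta_calibrator)

# Hypothetical Ct values: the target crosses threshold 2 cycles earlier
# in the sample than in the calibrator, i.e. ~4-fold higher expression.
print(relative_expression(22.0, 18.0, 24.0, 18.0))  # 4.0
```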
# Equivalence Relations and Equivalence Classes
We will define the relation ~ on $\mathbb N \times \mathbb N$ by $(a,b)\sim (c,d)$ iff $a + d = b + c$.
Prove that the operation given by: $[(a,b)][(c,d)] \stackrel{\text{def}}= [(ac + bd, ad + bc)]$ is well-defined.
My attempt at this proof:
Let $(a,b)$ and $(a',b')$ be elements of $[(a,b)]$, and similarly, $(c,d)$ and $(c',d')$ be elements of $[(c,d)]$.
My guess is that we need to show that:
$(a'c' + b'd', a'd' + b'c')$ is an element of $[(ac + bd, ad + bc)]$.
So we know the following:
$$aa' + bb' = ab' + ba' \\ cc' + dd' = cd' + bc'$$
But now I'm wondering what to work with. It would be great if someone could give some assistance. Thank you!
Edit: I am also seeking assistance on the following problems:
Q: Prove that $\mathbb N \times \mathbb N$/~ contains an additive identity, i.e. find an element [(i, j)] ∈ $\mathbb N \times \mathbb N$/~ with the property that
$$[(i,j)] + [(c,d)] = [(c,d)]$$
Q: Prove that every element of $\mathbb N \times \mathbb N$/~ has an additive inverse, i.e. for any $[(a,b)] \in \mathbb N \times \mathbb N$/~, show that there exists $[(c,d)] \in \mathbb N \times \mathbb N$/~ such that
$$[(a,b)] + [(c,d)] = [(i,j)]$$ where [(i,j)] is the additive identity.
What is it all about? We want to introduce the negative numbers, constructed using only the naturals. An integer is represented by $(a,b)$ and it wants to be $a-b$, which does not necessarily exist yet (in $\mathbb N$), and for now we are only allowed to use $+$ and natural numbers.
How did you get the last 2 equations? They should be $a+b' = b+a'$ and $c+d' = d+c'$..
When $(a,b)\sim(a',b')$, either $a\le a'$ or $a'<a$, because of symmetry, we may also assume the former, if it makes you more comfortable. Then $a'=a+n$ for some $n\in\mathbb N$, and hence, using the defn. of $\sim$, we also have $b'=b+n$. Similarly, we can assume that $c\le c'$ and hence $c'=c+m$ and $d'=d+m$. And.. probably the best is to make one step at one time:
First assume that $C:=(c,d)$ is fixed, and $A:=(a,b)\sim (a',b')=:A'$, then show that $AC\sim A'C$. Finally this same step applies for $A'C\sim A'C'$.
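To make the first step concrete (a sketch, using the substitution $a'=a+n$, $b'=b+n$ from above): with $A=(a,b)$, $A'=(a',b')$, $C=(c,d)$,
$$A'C = \bigl((a+n)c+(b+n)d,\;(a+n)d+(b+n)c\bigr) = \bigl(ac+bd+n(c+d),\;ad+bc+n(c+d)\bigr),$$
so $AC\sim A'C$ amounts to $(ac+bd)+\bigl(ad+bc+n(c+d)\bigr) = (ad+bc)+\bigl(ac+bd+n(c+d)\bigr)$, and both sides equal $ac+bd+ad+bc+n(c+d)$.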
For the remaining two questions, guess what integers are represented by the following pairs: $(149,149)$, $(2,7)$, $(7,2)$.. hope it helps
Makes sense, thank you. Would you be able to assist on the other 2 questions? – Julian Park Oct 7 '12 at 16:17
just post them.. – Berci Oct 7 '12 at 16:17
Dear Berci, they're in this thread. (under the edit). Thank you. – Julian Park Oct 7 '12 at 16:21
..well.. I would suggest to start them by yourself. Have any ideas? Guess, what could be an $(i,j)$ for the unit of +? What could be the additive inverse of an $(a,b)$? – Berci Oct 7 '12 at 16:23
Yes. But, what $(i,j)$ pairs represent zero, and, for $(a,b)$, what pair represents "$-(a,b)$"? Note that we still don't have $-$ yet. For example, what integer does $(2,7)$ represent? – Berci Oct 7 '12 at 16:54
# How to Parse a Geographic Position
You can extend the rules that Henry uses to detect a geographic position (latitude and longitude) when importing plain text (using Edit | "Paste into new Overlay", File | "Create Overlays from a Text File...", or the Henry Overlay Tool).
This is an advanced option. You should make a copy of the SupportedCharacters.xml file (referred to below) so that you can re-instate it if you make a mistake.
Henry detects a geographic position in text by looking for patterns. For example, it knows that "E" and "W" are used to indicate that the preceding numbers are a longitude.
You can add to some of these patterns by editing the SupportedCharacters.xml file.
Note: The location of SupportedCharacters.xml is indicated in Help | About. You can change it using the CherSoft Registry Configuration tool (CRE.exe), which is found in the same directory as Henry.exe (typically c:\program files (x86)\CherSoft\Henry\). The default location, c:\program files (x86)\CherSoft\Henry\, is a read-only location on disk; you will need to copy the file to a writable location, such as c:\programdata\chersoft\henry\, and update Henry's knowledge of the new location using CRE.
The following elements of the geographic position can be configured. Each is represented by an XML structure consisting of tags in angle brackets:
| Element | Type | Comment |
|---------|------|---------|
| Degrees | `<Value>` | The Unicode character values that represent a degree symbol |
| Minutes | `<Value>` | The Unicode character values that represent a minute symbol |
| Seconds | `<Value>` | The Unicode character values that represent a seconds symbol |
| Decimal | `<Value>` | The Unicode character values that represent a decimal point |
| LatLonSeparator | `<Value>` | The Unicode character values that are allowed between the latitude and longitude parts |
| AngleComponentSeparator | `<Value>` | The Unicode character values that are allowed between the degrees and minutes and between the minutes and seconds of either latitude or longitude |
| LatPrefix | `<String>` | The text that is allowed (but only required if the LonPrefix was found) before the latitude part |
| LonPrefix | `<String>` | The text that is allowed (but only required if the LatPrefix was found) before the longitude part |
| Minus | `<Value>` | The Unicode character values that represent a negative (only used when not using the cardinal values N, S, E, W) |
| N | `<Value>` | The Unicode character values that represent the north cardinal indicator |
| S | `<Value>` | The Unicode character values that represent the south cardinal indicator |
| E | `<Value>` | The Unicode character values that represent the east cardinal indicator |
| W | `<Value>` | The Unicode character values that represent the west cardinal indicator |
| Space | `<Value>` | The Unicode character values that are treated as white space between parts of the position |
<Value>s need to be entered as Unicode values which can be obtained from [1]
<String>s need to be entered as text.
An example of the characters that will be interpreted as a degree symbol:
``` <SupportedCharacter>
<CharacterType>Degrees</CharacterType>
<UnicodeValues>
<Value>00BA</Value>
<Value>00B0</Value>
<Value>002A</Value>
<Value>002D</Value>
</UnicodeValues>
</SupportedCharacter>
```
An example of the characters that will be interpreted as a north cardinal indicator:
``` <SupportedCharacter>
<CharacterType>N</CharacterType>
<UnicodeValues>
<Value>006E</Value>
</UnicodeValues>
</SupportedCharacter>
```
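By analogy with the entries above, a hypothetical fragment (not taken from the shipped file) that would allow a space (Unicode 0020) or a comma (Unicode 002C) between the latitude and longitude parts:
``` <SupportedCharacter>
 <CharacterType>LatLonSeparator</CharacterType>
 <UnicodeValues>
 <Value>0020</Value>
 <Value>002C</Value>
 </UnicodeValues>
 </SupportedCharacter>
```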
|
|
# cirq.LineQid¶
class cirq.LineQid(x: int, dimension: int)[source]
A qid on a 1d lattice with nearest-neighbor connectivity.
LineQids have a single attribute, an integer coordinate ‘x’, which
identifies the qid's location on the line. LineQids are ordered by
this integer.
One can construct new LineQids by adding or subtracting integers:
>>> cirq.LineQid(1, dimension=2) + 3
cirq.LineQid(4, dimension=2)
>>> cirq.LineQid(2, dimension=3) - 1
cirq.LineQid(1, dimension=3)
__init__(x: int, dimension: int) → None[source]
Initializes a line qid at the given x coordinate.
Parameters
• x – The x coordinate.
• dimension – The dimension of the qid, e.g. the number of quantum levels.
Methods
- for_gate(val[, start, step]): Returns a range of line qids with the same qid shape as the gate.
- for_qid_shape(qid_shape[, start, step]): Returns a range of line qids for each entry in qid_shape with
- is_adjacent(other): Determines if two qubits are adjacent line qubits.
- neighbors([qids]): Returns qubits that are potential neighbors to this LineQubit
- range(*range_args, dimension): Returns a range of line qids.
- validate_dimension(dimension): Raises an exception if dimension is not positive.
- with_dimension(dimension): Returns a new qid with a different dimension.
Attributes
- dimension: Returns the dimension or the number of quantum levels this qid has.
- x
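For readers without cirq at hand, the behavior documented above can be sketched as a minimal pure-Python stand-in (illustrative only; the real cirq.LineQid also validates its dimension and implements the full Qid protocol):
```python
from dataclasses import dataclass


@dataclass(frozen=True, order=True)
class LineQid:
    """Minimal stand-in for the documented behavior (not the real class)."""

    # Ordered by the integer coordinate x (dimension breaks ties).
    x: int
    dimension: int = 2

    def __add__(self, k: int) -> "LineQid":
        return LineQid(self.x + k, self.dimension)

    def __sub__(self, k: int) -> "LineQid":
        return LineQid(self.x - k, self.dimension)

    def is_adjacent(self, other: "LineQid") -> bool:
        # Nearest-neighbor connectivity on the 1d lattice.
        return abs(self.x - other.x) == 1


print((LineQid(1) + 3).x)                        # 4
print(LineQid(2, 3).is_adjacent(LineQid(1, 3)))  # True
```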
|
|
# Tag Info
11
Take a path property that is not first-order expressible, e.g. $$\nu x.p\wedge\Diamond\Diamond x$$ which says that there exists a path where the atomic proposition $p$ holds at every even position, and any valuation can be used on odd positions.
6
If the formula is satisfiable you cannot construct an interpolant. If the formula is unsatisfiable, you know that the final states are not reachable in at most $k$ steps. You do not know about reachability of final states in $k+1$ or more steps. If the formula you suggest is unsatisfiable, you can replace the sub-formula $I$ of $A$ with the interpolant and ...
6
The formula in your statement "Obviously E(A(F1)) with F1=E(A(F1)) is not well-defined for some Büchi automata. So what does inductively defined exactly mean in this case?" cannot arise if you built ECTL* inductively. This means, in standard academic parlance, we would present a syntax definition of the form below. Let $Prop$ be a set of propositions and $...
5
To expand on pedagand's answer: productivity is the term used for the computational dual (in some precise sense) of termination. Formally, a program f : CoData is productive if running the computation f eventually produces a constructor of CoData, and every (recursive) sub-term of that constructor is also productive. For example primes = filterBy prime [0.....
5
In general, CTL model checking is P-complete. Since we think that $L\neq P$ (and moreover $NL\neq P$), it is unlikely that an algorithm with logarithmic space exists. It is also unlikely that a sub-polynomial space algorithm exists, for similar reasons of common belief. I don't know of exact space-optimizations for the problem, but in general - yes, you ...
5
I think the key problem here is not understanding how inductive definitions of syntax work. Here are three approaches to understanding what a BNF grammar means. Consider a simple grammar: $$t ::= \mathtt{true} ~~|~~ \mathtt{false} ~~|~~ 0 ~~|~~ \mathtt{succ}\ t ~~|~~ \mathtt{if}\ t\ \mathtt{then}\ t\ \mathtt{else}\ t$$ Following Pierce's Types and ...
4
I think the simpler example is your property, which can be written for instance $E(((a+b)a)^\omega)$. A simple way to show that it is not in CTL* is to show that this would imply that the word language $((a+b)a)^\omega$ is in LTL (because CTL* on linear structures is LTL). This fact is a classical result. To show it, it suffices (for instance) to use the ...
4
The answer to (1) is no, even for deterministic transducers. The reason is that we can encode configurations (tape contents + head position + machine state) of Turing machines into words such that the configuration changes made by any machine $M$ can be represented by a transducer $T_M$, and then decidability of (1) would imply decidability of the halting ...
3
Your first question is answered in this paper: https://www.cs.cornell.edu/fbs/publications/RecSafeLive.pdf Given an LTL formula, translate it into a Büchi automaton, and remove states that have no path to an accepting state. Then, change all states to be accepting. If the language of the automaton does not change, then the property is a safety property. ...
3
Safety properties are closed under finite intersection. This can be seen by following Alpern and Schneider's characterisation which showed that safety properties are limit-closed when viewed topologically. Liveness properties as defined by Alpern and Schneider are dense. They are not closed under intersection as soon as there are two elements in the state ...
3
In general, the technique used is known as "fuzzing". Not all errors are equally likely. Let's consider two hypothetical errors: System A incorrectly rejects a filename if it contains an | anywhere. System A incorrectly rejects a filename if it contains a prime number of b characters. Errors of the second type are much, much rarer, but this is not ...
3
As time can be a resource, it is a bit unclear to me what you seek. Nevertheless, you might want to look at weighted extensions of LTL, like Metric Temporal Logic first defined here. (Specifying real-time properties with metric temporal logic)
3
You could find a few examples in Danielsson's papers, such as: Total parser combinators, Operational Semantics Using the Partiality Monad. The key idea is to use the productivity of greatest fixpoints to guarantee liveness ("eventually something happens").
3
Intuitively, what happens here is that for $AFGp$, you check each individual path for whether after some point, $p$ will always be true - no matter what other choices are available in a given state. In particular, for the path which always stays in the first state, this is true even though a $\neg p$-state is reachable. On all other paths it is true because ...
3
One thing we have to be clear on is the kind of property we are talking about: CTL and CTL* are branching-time logics, used to talk about tree languages, whereas LTL is a linear-time logic, which per se talks about words, but can be applied to trees by requiring all branches to satisfy the formula. This already gives you a hint for some CTL properties ...
3
I'm not answering the full question but only part of it (I have no interest in branching time). Your definition of $\mathit{even}$ would better read $\mathit{even}(p) \equiv \exists q.(q \land \Box ( q \leftrightarrow \mathsf{X} \lnot q ) \land \Box ( q \rightarrow p ))$. You are introducing $q$ only to remember if you are on an odd or even position, but ...
3
You can find a formal model and proof of Paxos and Byzantine Paxos written by L. Lamport et al at http://research.microsoft.com/en-us/um/people/lamport/tla/byzpaxos.html. The model can be checked using the TLA+ toolbox. Notice that the author of the Paxos algorithm, the formal model above, and even the TLA+ modeling language is the same person:)
3
As it turns out, there is some fascinating work done in this direction. In particular, in 2003, Michael Howard, Jon Pincus, and Jeannette M. Wing's Measuring Relative Attack Surfaces in proceedings of Workshop on Advanced Developments in Software and Systems Security, Taipei, December 2003. Further work by the same authors over the years is quite ...
3
The "probabilistic" element in probabilistic model checking is that the system being checked is probabilistic, not that we add probabilities to an existing deterministic or non-deterministic system. Thus, what you are checking is whether a probabilistic system satisfies some property. For example "is it true that with probability at least 0.5, the system ...
2
I think that software model checking, in the vein of Alloy, is probably related to what you're looking for. You write a model, and also a specification that the model should satisfy, and check if they're related appropriately.
2
If your background is CTL interpreted over Kripke structures and you look for something similar interpreted over LTSs, then ACTL (action-based CTL) could be interesting. Back in 1990, R. De Nicola and F. Vaandrager introduced ACTL as an action-based CTL (Action versus state based logics for transition systems, Semantics of Systems of Concurrent Processes (...
2
First question: A set $M$ is decidable if there is a Turing Machine which halts on all inputs and accepts all inputs $x$ with $x \in M$. We try to encode $\bigwedge_{\phi \in X} \phi$ for arbitrary sets of $\mathsf{FO}[\tau]$-formulas $X$. Since $\mathcal{P}(X)$ is uncountable there can be no code with finite alphabet. Hence, there can be no Turing ...
2
For the concrete case of a specification of a regular language, there is the Java String Analyzer which roughly is able to compute a finite state automaton (i.e. regular expression) of the set of strings accepted by a Java method, using various techniques in static analysis. While the paper deals directly with the set of strings generated by a piece of Java ...
2
Your construction for bad prefixes is not correct on NBA's. For instance take the NBA on alphabet $A=\{a,b\}$ with two initial states $q_a$ and $q_b$ where for both $x\in A$, $q_x$ goes to an accepting sink if the first letter is $x$ and to a rejecting sink if the first letter is not $x$. Then the language recognized is $A^\omega$, but the set of "bad ...
2
Pierre Wolper defined in 1983 extended temporal logic (ETL, in Information and Computation 56, 72–99, doi:10.1016/S0019-9958(83)80051-5), where a temporal operator $\mathcal A(\varphi_1,\dots,\varphi_n)$ can be introduced for a finite-state automaton $\mathcal A$. The formula is satisfied in an infinite word $u$ at position $i$, i.e. $u,i\models\mathcal A(\...
1
To answer your second question: there is one property that is both safety and liveness: True. With this exception, however, it is fair to say that a property is either safety or liveness or neither. "Most" properties (like yours) are actually neither, but every property can be represented by the intersection of a safety and a liveness property. I think ...
1
What you wrote is correct. The actions $\textit{coin}$ and $\textit{ret_coin}$ do not change the values of the variables $\textit{Var} = \{\textit{nsoda}, \textit{nbeer}\}$, i.e., the number of soda and beer cans in the machine. Inserting a coin is only possible in the $\textit{start}$ location, and with this action the location changes to $\textit{select}$....
1
I'm assuming that you're really asking, "how do I do something useful with modelling and theory." The easiest answer is to work in a modelling and simulation field that makes useful products. Computational Electromagnetics is used a lot in RF, Finite Element Analysis is used in mechanical product design. The broken part of your reasoning is that "more ...
1
Maybe take a look at http://www.syntcomp.org/ This is a competition of tools solving the LTL synthesis problem (and some related problems).
1
I think it depends on what you mean by linear-time temporal logics. If you mean temporal logics that have linear time semantics (i.e. cannot distinguish more than trace equivalence, a la van Glabbeek) then there are indeed logics that require counter examples that are not just lassos. HyperLTL is an example: https://www.react.uni-saarland.de/publications/...
Only top voted, non community-wiki answers of a minimum length are eligible
|
|
# Separated varieties and uniqueness of extension
I would like to know if my solution to the following exercise is correct. If not, then I will be grateful for a correct argument. (I am working with varieties over an algebraically closed field, not schemes)
Exercise: Suppose that $f,g:X \to Y$ are two morphisms, where Y is separated. If $f$ and $g$ agree on some dense open subset $U \subset X$ then $f = g$.
My argument: Let $W = \{x \in X\:|\:f(x) = g(x)\}$. I know by assumption that $U \subset W$, and wish to show that $W = X$. Since $Y$ is separated, the graph of $f$ (call it $\Gamma_f$) is closed in $X \times Y$. So by continuity, the preimage $(\mathrm{id},g)^{-1}(\Gamma_f) = W$ is closed in $X$. But then $X = Cl_X(U) \subset Cl_X(W) = W \Rightarrow W = X$ as required.
-
Are there non-separated varieties? – Matt Oct 23 '12 at 16:03
Yes - one example is 'the affine line with two origins'. The underlying set of this is constructed as follows: you take the disjoint union of two copies of $\mathbb{A}^1$ (call the coordinate on the first one $x$ and the coordinate on the second $y$) and quotient out by the relation $x$ ~ $y$ if and only if $x \neq 0$ and $y \neq 0$ and $x = y$. You give it the variety structure such that the quotient map is a morphism of varieties. – AKr Oct 23 '12 at 16:37
However, every (quasi-)affine and every (quasi-)projective variety is separated I believe. – AKr Oct 23 '12 at 16:40
Hmm...then maybe you should give us your definition of variety. If I'm not mistaken Hartshorne's definition is an integral, separated, scheme of finite type (over a field). It isn't that relevant to the question. I was just being annoying because you claimed to be working with varieties and not schemes, but then included the hypothesis separated for $Y$. – Matt Oct 23 '12 at 17:24
I mean 'variety' in the sense of Kempf's book: a space with functions, $(X,\mathcal{O}_X)$ over a field $k = \overline{k}$, with the property that there exists a finite covering by open sets $X = \bigcup_1^n{U}_i$ and each $(U_i,\mathcal{O}_X|U_i)$ is isomorphic to an affine variety ($\mathrm{maxSpec}(A)$ for some finitely generated $k$-algebra $A$, which has no nilpotents). I believe such objects are sometimes called 'algebraic sets'. I really should have clarified this, since the word 'variety' means different things to different people. – AKr Oct 23 '12 at 18:51
You proved correctly that $f=g$ as maps of the underlying spaces of $X, Y$. That they are equal as morphisms either follows from the definition in Kempf's book (I can't check) or has to be proved (using the fact that $\mathcal{O}_X(U)$ is reduced).
-
I have added this comment as an answer, since the text keeps overflowing. My apologies, I am new to this. – AKr Oct 24 '12 at 8:46
@Akr, no problem. You are welcome. – user18119 Oct 25 '12 at 22:02
In response to the answer by QiL (it was too long for a comment):
Kempf's definition of a morphism is: a continuous map of the underlying topological spaces, $f:X\to Y$ such that regular functions on Y pull back to regular functions on X: for any $V\subset Y$ open, $r\in \mathcal{O}_Y(U)\Rightarrow f^{∗}r \in \mathcal{O}_X(f^{−1}(U))$.
So my understanding is that I need to add the following extra line:
Moreover, $f,g$ induce the same pullback on regular functions. Indeed, for any open $U \subset Y$ and any $r \in \mathcal{O}_Y(U)$, consider the pullbacks $f^{*}r, g^{*}r$ in $\mathcal{O}_X(f^{-1}(U)) = \mathcal{O}_X(g^{-1}(U))$. Both these regular functions take the same value in $k$ at every point of $f^{-1}(U)$. Since $\mathcal{O}_X(f^{-1}(U))$ is reduced (implying regular functions are determined by their values) it follows that $f^{*}r=g^{*}r$. Since $f,g$ are equal as functions, and induce the same pullback, they are equal as morphisms.
Finally, just to be clear: in Kempf's book, the structure sheaf $\mathcal{O}_X$ of a variety $X$ assigns, by definition, to every open set $U \subset X$, a $k$-algebra of functions $U \to k$. So it will always be reduced (have no nonzero nilpotents).
-
|
|
# [NTG-context] picture before \chapter breaking \header \footer setup
Thu Oct 23 15:40:22 CEST 2008
On Thu, 23 Oct 2008, Jelle Huisman wrote:
> Hi all,
>
> Another question, this time about \headers and \footers. On the first
> page of the chapter I have the page number in the footer and on all
> other pages they are in the header. This works fine, as expected.
> However I have to include an \externalfigure *before* the \chapter
> command and I discovered that I have to use \startcombination because
> otherwise the \chapter moves to the next page. However, when I use
> \startcombination ... etc, \chapter is no longer the first thing on the
> page and the special header/footer setup is not triggered (apparently
> \chapter has to come first for that to happen). See sample code:
You can try to play around with \setuphead[chapter][command=\something]
and make sure that \something puts a figure where you want. You will need
to specify the figure by some other mechanism (define a command, pass two
arguments through one argument, etc.).
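A sketch along these lines (untested; \MyChapterCommand and \ChapterFigure are made-up names): let the chapter command itself place the figure, so \chapter stays the first thing on the page and the special header/footer setup is still triggered.
```
% holds the figure for the next chapter; empty by default
\def\ChapterFigure{}

% #1 is the chapter number, #2 the title
\def\MyChapterCommand#1#2%
  {\vbox{\ChapterFigure\blank #1\blank #2}%
   \global\def\ChapterFigure{}}% reset after use

\setuphead[chapter][command=\MyChapterCommand]

\def\ChapterFigure{\externalfigure[yourfigure]}
\chapter{first}
```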
> % sample
> \definetext[mychapter][footer][pagenumber]
> \showframe
> \starttext
>
> %\startcombination[1*1]{\externalfigure}{ % uncomment to see problem
> \chapter{first}
> %}\stopcombination % uncomment to see problem
>
> \dorecurse{20}{\input tufte \relax}
> \chapter{second} \dorecurse{20}{\input tufte \relax} % this works as
> expected
> \stoptext
> % sample
>
> Is it possible to make this work? (maybe an extra \chapter command
> before the combination that triggers the header-footer setup, but that
> is not printed after all?)
>
> Thanks for any help.
>
> Jelle
> ___________________________________________________________________________________
> If your question is of interest to others as well, please add an entry to the Wiki!
>
> maillist : ntg-context at ntg.nl / http://www.ntg.nl/mailman/listinfo/ntg-context
> webpage : http://www.pragma-ade.nl / http://tex.aanhet.net
> archive : https://foundry.supelec.fr/projects/contextrev/
> wiki : http://contextgarden.net
> ___________________________________________________________________________________
>
>
>
|
|
Biswajit Banerjee
### Material and spatial incremental constitutive equations
An answer to a common question on objectivity
#### The question
A colleague asked a question on objectivity a few days ago that had me going back to Ray Ogden’s book on nonlinear elastic deformations. The question was on incremental stresses and their material and spatial descriptions.
To be more specific, the question was about incremental stress-strain equations expressed in rate form, and why the instantaneous moduli for material and spatial stress measures are different.
###### Instantaneous moduli for PK-2 stress and Green strain
Consider the relation between the second Piola-Kirchhoff (PK-2) stress ($\boldsymbol{S}$) and the Lagrangian Green strain ($\boldsymbol{E}$):
$$\dot{\boldsymbol{S}} = \mathsf{C}^{SE} : \dot{\boldsymbol{E}}$$
where $\mathsf{C}^{SE}$ is the first-order instantaneous modulus tensor (a rank-4 tensor).
Note that the first-order instantaneous modulus tensor is neither constant nor independent of the state of deformation.
Many books use an additional linearization assumption to simplify the instantaneous modulus tensor. In particular, for an isotropic material, the linearized instantaneous modulus tensor can be expressed as
$$\mathsf{C}^{SE} = \kappa \mathbf{I}\otimes\mathbf{I} + 2\mu\left(\mathsf{I}^{(4s)} - \tfrac{1}{3}\mathbf{I}\otimes\mathbf{I}\right)$$
where $\kappa$ and $\mu$ are material constants, $\mathbf{I}$ is the rank-2 identity tensor, and $\mathsf{I}^{(4s)}$ is the symmetrized rank-4 identity tensor.
In our earlier article on closest-point return we saw that we can express the above modulus tensor in the terms of projection tensors:
$$\mathsf{C}^{SE} = 3\kappa \mathsf{P}^{\text{iso}} + 2\mu\mathsf{P}^{\text{symdev}} \quad \quad \text{(1)}$$
###### Instantaneous moduli for Kirchhoff stress
My colleague wanted to use the expression in equation (1) above to determine the instantaneous moduli for a system that uses the Kirchhoff stress measure ($\boldsymbol{\tau}$)
$$\boldsymbol{\tau} = J \boldsymbol{\sigma} = \boldsymbol{F}\cdot\boldsymbol{S}\cdot\boldsymbol{F}^T$$
where $\boldsymbol{F}$ is the deformation gradient, $J = \det\boldsymbol{F}$, and $\boldsymbol{\sigma}$ is the Cauchy stress.
The time derivative of this stress measure is
\begin{align} \dot{\boldsymbol{\tau}} & = \frac{d}{dt}(\boldsymbol{F}\cdot\boldsymbol{S}\cdot\boldsymbol{F}^T) \\ & = \dot{\boldsymbol{F}}\cdot\boldsymbol{S}\cdot\boldsymbol{F}^T + \boldsymbol{F}\cdot\dot{\boldsymbol{S}}\cdot\boldsymbol{F}^T + \boldsymbol{F}\cdot\boldsymbol{S}\cdot\dot{\boldsymbol{F}^T} \\ & = \boldsymbol{l}\cdot\boldsymbol{F}\cdot\boldsymbol{S}\cdot\boldsymbol{F}^T + \boldsymbol{F}\cdot(\mathsf{C}^{SE}:\dot{\boldsymbol{E}})\cdot\boldsymbol{F}^T + \boldsymbol{F}\cdot\boldsymbol{S}\cdot\boldsymbol{F}^T\cdot\boldsymbol{l}^T \end{align}
where we have used the rate-form expression for $\dot{\boldsymbol{S}}$ and the relationship between the velocity gradient ($\boldsymbol{l}$) and time derivative of the deformation gradient. Also,
\begin{align} \dot{\boldsymbol{E}} & = \tfrac{1}{2}(\dot{\boldsymbol{F}^T}\cdot\boldsymbol{F} + \boldsymbol{F}^T\cdot\dot{\boldsymbol{F}}) \\ & = \tfrac{1}{2}(\boldsymbol{F}^T\cdot\boldsymbol{l}^T\cdot\boldsymbol{F} + \boldsymbol{F}^T\cdot\boldsymbol{l}\cdot\boldsymbol{F}) \\ & = \boldsymbol{F}^T\cdot\boldsymbol{d}\cdot\boldsymbol{F} \end{align}
where $\boldsymbol{d}$ is the symmetric part of the velocity gradient tensor. Therefore, defining the spin tensor ($\boldsymbol{w}$) via $\boldsymbol{l} = \boldsymbol{d} + \boldsymbol{w}$, we have
\begin{align} \dot{\boldsymbol{\tau}} & = \boldsymbol{l}\cdot\boldsymbol{\tau} + \boldsymbol{F}\cdot\mathsf{C}^{SE}:(\boldsymbol{F}^T\cdot\boldsymbol{d}\cdot\boldsymbol{F})\cdot\boldsymbol{F}^T + \boldsymbol{\tau}\cdot\boldsymbol{l}^T \\ & = \boldsymbol{d}\cdot\boldsymbol{\tau} + \boldsymbol{w}\cdot\boldsymbol{\tau} + \boldsymbol{F}\cdot\mathsf{C}^{SE}:(\boldsymbol{F}^T\cdot\boldsymbol{d}\cdot\boldsymbol{F})\cdot\boldsymbol{F}^T + \boldsymbol{\tau}\cdot\boldsymbol{d}^T + \boldsymbol{\tau}\cdot\boldsymbol{w}^T \end{align}
If we define the Jaumann rate of the Kirchhoff stress as
$$\overset{\triangle J}{\boldsymbol{\tau}} = \dot{\boldsymbol{\tau}} - \boldsymbol{w}\cdot\boldsymbol{\tau} -\boldsymbol{\tau}\cdot\boldsymbol{w}^T$$
and the Jaumann modulus using
$$\mathsf{C}^{\tau J}:\boldsymbol{d} = \boldsymbol{F}\cdot\mathsf{C}^{SE}:(\boldsymbol{F}^T\cdot\boldsymbol{d}\cdot\boldsymbol{F})\cdot\boldsymbol{F}^T$$
we have, assuming that the stress is always significantly smaller than the modulus,
$$\overset{\triangle J}{\boldsymbol{\tau}} = \boldsymbol{d}\cdot\boldsymbol{\tau} + \boldsymbol{\tau}\cdot\boldsymbol{d}^T + \mathsf{C}^{\tau J}:\boldsymbol{d} \approx \mathsf{C}^{\tau J}:\boldsymbol{d}$$
Now, some authors express the Jaumann modulus as
$$\mathsf{C}^{\tau J} = 3\kappa \mathsf{P}^{\text{iso}} + 2\mu\mathsf{P}^{\text{symdev}} \quad \quad \text{(2)}$$
Where has the dependence on the deformation gradient gone? This is a common source of confusion.
#### The reason for the inconsistency
The main reason for this inconsistency is
• the use of the same symbols for two quite different sets of moduli and basis tensors, and
• ignoring the fact that a linearization operation has been performed to get the simplified modulus tensors.
In other words, most importantly,
1. the quantities $\kappa, \mu$ are not necessarily the same for the two cases,
2. the basis tensors $\mathsf{P}^{\text{iso}}$ and $\mathsf{P}^{\text{symdev}}$ are not identical for the PK-2 case and the Kirchhoff stress case. One has been rotated and stretched relative to the other.
A detailed discussion of these issues can be found in Ogden's book (chapter 6.1.4). A shorter discussion can be found in Computational Inelasticity by Simo and Hughes (sections 7.1.5.3 - 7.1.5.5).
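The push-forward that defines $\mathsf{C}^{\tau J}$ can be checked numerically. A sketch using NumPy ($\kappa$, $\mu$ and $\boldsymbol{F}$ are arbitrary test values, not from any specific material): for a non-identity deformation gradient the pushed-forward modulus no longer equals the reference isotropic modulus, which is exactly why equation (2) cannot hold with the same constants and basis tensors as equation (1).
```python
import numpy as np

# Rank-2 identity and symmetrized rank-4 identity tensors
I2 = np.eye(3)
I4s = 0.5 * (np.einsum('ik,jl->ijkl', I2, I2) + np.einsum('il,jk->ijkl', I2, I2))

# Linearized isotropic modulus C^SE; kappa and mu are arbitrary test values
kappa, mu = 10.0, 4.0
II = np.einsum('ij,kl->ijkl', I2, I2)
C_SE = kappa * II + 2.0 * mu * (I4s - II / 3.0)

# An arbitrary non-identity deformation gradient (stretch plus shear)
F = np.array([[1.2, 0.1, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 0.9]])

# Push-forward implied by the definition of C^{tau J}:
#   C^{tauJ}_{ijkl} = F_iI F_jJ F_kK F_lL C^{SE}_{IJKL}
C_tauJ = np.einsum('iI,jJ,kK,lL,IJKL->ijkl', F, F, F, F, C_SE)

# For F = identity the two moduli coincide; for this F they do not.
print(np.allclose(C_tauJ, C_SE))   # False
```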
|
|
Page d'accueil du laboratoire > Équipe de Probabilités > Page principale > MetaFont
# MetaFoundry
## Licence: standard LaTeX licence
The following fonts have the standard LaTeX licence. The former send-me-a-postcard licence was not understood, and there are too few people who are really interested in MetaFont design and programming. None of these fonts is frozen; I change things from time to time. Conversion to other formats (PostScript, Type 1) does not concern me: most of these fonts would lose most of their interest (if any) in such conversions without hints. Furthermore, I am not concerned with installation.
## Mbb series, aka mbboard
and even much more!
Those blackboard fonts are at the origin of my MetaFont programming. They received very special care and should be of quite high quality. They are quite stable (never say never). The two main series are mbb (blackboard light fonts) and mbbx (blackboard bold extended fonts). The other fonts are by now oddities.
mbboard0.0 old space-time initial distribution, january 2000
mbboard0.1 old april 2000
mbboard0.2 old december 2000
mbboard0.3 old june 2001
mbboard0.4 last distribution by now, october 2001 (posted on CTAN)
mbbtest compressed postscript documentation, october 2001
On a teTeX distribution one may drop every source file (.mf) in a subdirectory of \$TEXMF/fonts/source/public. I've never tried... For individual use on a Unix system, create \$HOME/metafont/pk, \$HOME/metafont/tfm, \$HOME/metafont/source. Move then the whole content of archive's source directory in \$HOME/metafont/source. In \$HOME/metafont/source, execute COMPILE. Move texinput files where TeX can find them. The last thing to do is to set up environment variables (see at the bottom of this page).
(Index)
## Mathabx series
Here is one of the largest sets of mathematical symbols ever programmed in MetaFont. These fonts are supposed to be very high quality fonts even if some symbols may have to be designed anew. The encoding is not quite stable, but input packages free people of this matter. We decline any responsibility for Type 1 versions and the like: conversions certainly don't take care of rasterization, i.e. they may not be hinted. By the way, look at Kohsaku Hotta's web page about mathabx for up-to-date conversions.
mathabx-1.0 1.0 distribution (may 2011). It should work. You can complain if you need (only about TeX programming, METAFONT programming, fonts' design; installation is not my problem at all). (The CTAN's archive has been updated at may 15, 2005...)
mathtest compressed postscript documentation, may 2011
(Index)
## Mcalligra, aka Calligra iMproved
The Calligra font has long been available on CTAN, but it seems that nobody can use it properly as a true TeX font. I changed some things in the code of this font in order to obtain the following:
mcalligra first unstable distribution, november 2001. Provided with no TeX or LaTeX input file (not so complicated). Well, I've seen many things that could be improved since I put this stuff on the web, so changes will be made before the end of the year.
mcaltest compressed postscript documentation, november 2001
## Mxy series
The mxy series are very simple fonts designed quite rapidly. It's just fun to use them. They will be extended one day.
mxy first unstable distribution, november 2002. Provided with no LaTeX input file (not so complicated).
mxytest compressed postscript documentation, november 2002
(Index)
## Mgrey font
That's not a tremendous font. Just grey squares made for drawing pixmaps...
(Index)
## How-to: don't ask me more!
### Basic information
A MetaFont program is a collection of files whose suffix is generally mf. These files are called the source of the MetaFont fonts. They contain the description of the drawings and of the size of every character in the related fonts. These are human-written programs whose syntax is quite close to Pascal (hence not so complicated).
The interpreter of MetaFont programs is METAFONT (which is also the name of the language). It is invoked via the mf command. There are 3 kinds of output files: the well-known log-files, and the not-so-well-known tfm and gf-files.
tfm-files (Tex Font Metrics)
describe the metrics of the fonts: bounding box of every character, ligaturing and kerning properties, relative dimensions. They are directly used by TeX when it formats a document, and they are mode- or resolution-independent.
gf-files (Generic Fonts)
are the bitmap pictures corresponding to every character. In the MetaFont program, characters have an almost resolution-free description; thus their description leads to a digitized output (resolution- or mode-dependent) that can be visualized on the screen or printed.
pk-files (PacKed fonts)
gf-files are non-compressed bitmap images. That's why there are pk-files (Packed Fonts). Both can be used for printing or visualizing, but pk-files are preferred. Conversion is done via the gftopk command (there is also a pktogf...). Of course, these are also mode- or resolution-dependent.
Take a look at the METAFONTbook to get deeper but clearer information.
### Modes
Since the META-description of a font within a MetaFont program is almost resolution (mode)-independent, one has to specify a mode to produce bitmap fonts adapted to a specific printer device or screen. Those modes are defined by aware users and usually defined in the TeXMF distribution. A typical invocation is
``` mf "\mode=mymode; mag:=mymag; input myfontXX.mf" ```
where `mymode` is a pre-defined mode (typically `cx` for 300dpi-laser printer, or `ljfour` for 600dpi-laser printer), `mymag` is the magnification or magstep (typically `mag:=1` , or `mag:=1.2`, or `mag:=1.44`), `myfontXX` is the name of the font (typically `cmr10`, ...) and `myfontXX.mf` the corresponding MetaFont source file.
The former command will produce 3 files: `myfontXX.log`, `myfontXX.tfm` and `myfontXX.YYYgf`, where `YYY` is a number equal to the resolution times magnification (typically 300, 360, 432, or 600, 720, 864). To generate pk-file just type
``` gftopk myfontXX.YYYgf ```
which produces `myfontXX.YYYpk`. What remains to do is just to clean garbage (`myfontXX.log` and `myfontXX.YYYgf`) and to move the other files to some place where TeX-related programs can find them.
More to come...
### Installation with SuperUser's privileges
We will deal only with standard TeX distributions such as the ones found on the TeXlive cd-rom, so this will apply to teTeX (Unix and Linux), MikTeX (W.nd.w$), and OzTeX (MacOS). Here are the things to do:
1. Get the archive of the font you want to install, unpack it in a temporary directory.
2. Find the root of the TeXMF tree in your system: it is a directory that should be named `texmf` and whose subdirectories are for instance `tex`, `metafont`, `metapost`, `fonts`, etc. The TeXMF root will be denoted by `../texmf`.
3. In the directory `../texmf/fonts/source/public/` create a new directory whose name matches the name of the fonts you want to install, say `../texmf/fonts/source/public/mathabx` or `../texmf/fonts/source/public/mbboard`.
4. Move all the `xxx.mf` files that are in your temporary directory (or in its subdirectories) to the newly created one.
5. If the archive contains TeX or LaTeX style or input files, move them into the directory `../texmf/tex/generic/misc` (for instance).
6. Job is almost completed!
• Linux' users and such. In a terminal window execute `texhash`.
• MikTeX' users. In a DOS window execute `initexmf -u`.
This will update the database of your TeXMF distribution to take into account the newly installed files.
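The steps above can be sketched as a small shell script. Every path here is a stand-in: a scratch directory replaces the real TeXMF root so the sketch can be tried safely, and the final database update is left commented out.
```shell
# Sketch of the superuser installation above; all paths are stand-ins.
TEXMF="${TEXMF:-/tmp/texmf-demo}"
SRC="${SRC:-/tmp/font-unpacked}"

# 1. stand-in for the unpacked archive
mkdir -p "$SRC"
touch "$SRC/mathabx.mf"

# 2.-3. create the per-font source directory under the TeXMF root
mkdir -p "$TEXMF/fonts/source/public/mathabx"
mkdir -p "$TEXMF/tex/generic/misc"

# 4.-5. move the .mf files (and any TeX input files) into place
mv "$SRC"/*.mf "$TEXMF/fonts/source/public/mathabx/"

# 6. update the filename database
# texhash            # Linux and such
# initexmf -u        # MikTeX, in a DOS window
echo "installed into $TEXMF"
```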
### Installation as a poor simple user
This how-to is intended for a user of a Unix-like system with no root privilege. Such a person may want to install new TeX fonts on his own account. What is described here is what I've done myself.
More to come...
Copyright: HTML's texts or graphics are free of any copyright, they are copyleft. TeX programs are also copyleft but one can send a postcard. MetaPost programs have just a feel-free-to-send-me-a-postcard licence. MetaFont programs have the standard LaTeX licence.
Anthony Phan,
Département de Mathématiques, SP2MI,
Boulevard Marie et Pierre Curie, Téléport 2,
BP 30179, F-86962 Futuroscope-Chasseneuil cedex
# Tomcat HotDeploy from ZK-GUI
IngoB
Hi,
I want an "Update Feature" in my GUI. So I thought to upload a WAR-File via ZK Fileupload and then replace the "old" WAR File on-the-fly in the Tomcat Webapps directory to deploy it (and restart the server).
The upload is working, but the copying of the file to the webapps folder is not working. If I do it manually it works ( "mv new_app.war app.war" ).
I tried it with "apache.common.io" and "1.7 java", same result: No error, nevertheless no copied file!
String realPath = Executions.getCurrent().getDesktop().getWebApp().getRealPath("/");
File destFile = new File(realPath.substring(1, realPath.lastIndexOf("/")) + "/app.war");
...
Any Idea?
Linux / ZK 7.0.2 EE / Mozilla FF 30.0
http://www.oxitec.de/
I would NOT do that. Tomcat is not built for hot deployment!!
This should work 3 or 4 times depending on the memory that Tomcat uses. By the next 'hot deployment' you will get a 'PermGen' error, 100% guaranteed.
best Stephan
IngoB
Yes, I know. But! uploading an update and telling the customer to reboot is MUCH easier, than to discuss how to shutdown/deploy/startup on linux :p
http://www.oxitec.de/
Maybe it's possible to write a little Java command-line tool as an observer that watches whether a new WAR has been uploaded, and then calls an sh script?
best Stephan
IngoB
The upload is working fine. I save the uploaded file in a tmp directory. The only problem is that I can't get the file copied to my webapps folder. There is no error, nothing, but the file is still not there :<
jimmyshiau
http://www.zkoss.org/
You can try to invoke ant build to move the file, or restart server with ant.
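For what it's worth, here is a minimal self-contained sketch of the copy step with java.nio (the class, method and file names are made up for illustration). Resolving the destination to an absolute path and passing REPLACE_EXISTING avoids the two usual silent-failure causes: a relative destination path and a pre-existing target file.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

public class WarDeploy {
    // Copy an uploaded WAR into the webapps directory, overwriting any
    // previous version so Tomcat's auto-deploy can pick it up.
    static void deploy(Path uploadedWar, Path webappsDir) throws IOException {
        Path dest = webappsDir.resolve("app.war").toAbsolutePath();
        Files.copy(uploadedWar.toAbsolutePath(), dest,
                   StandardCopyOption.REPLACE_EXISTING);
    }

    public static void main(String[] args) throws IOException {
        // Demonstrate with temporary directories standing in for the
        // upload folder and Tomcat's webapps folder.
        Path tmp = Files.createTempDirectory("upload");
        Path webapps = Files.createTempDirectory("webapps");
        Path war = Files.write(tmp.resolve("new_app.war"), new byte[]{1, 2, 3});
        deploy(war, webapps);
        System.out.println(Files.exists(webapps.resolve("app.war")));
    }
}
```

If the copy succeeds but nothing deploys, also check the Host's autoDeploy setting in server.xml.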
IngoB
There was a problem with my tomcat configuration and auto-deploy. Now its working, thanks for your input :)
The Permuted Striped Block Model and its Factorization – Algorithms with Recovery Guarantees
We introduce a novel class of matrices which are defined by the factorization Y := AX, where A is an m × n wide sparse binary matrix with a fixed number d of nonzeros per column and X is an n × N sparse real matrix whose columns have at most k nonzeros and are dissociated. Matrices defined by this factorization can be expressed as a sum of n rank one sparse matrices, whose nonzero entries, under the appropriate permutations, form striped blocks - we therefore refer to them as Permuted Striped Block (PSB) matrices. We define the PSB data model as a particular distribution over this class of matrices, motivated by its implications for community detection, provable binary dictionary learning with real valued sparse coding, and blind combinatorial compressed sensing. For data matrices drawn from the PSB data model, we provide computationally efficient factorization algorithms which recover the generating factors with high probability from as few as N = O((n/k)log^2(n)) data vectors, where k, m and n scale proportionally. Notably, these algorithms achieve optimal sample complexity up to logarithmic factors.
1 Introduction
In many data science contexts, data is represented as a matrix and is often factorized into the product of two or more structured matrices so as to reveal important information. Perhaps the most famous of these factorizations is principal component analysis (PCA) [22], in which the unitary factors represent dominant correlations within the data. Dictionary learning [35] is another prominent matrix factorization in which the data matrix is viewed to lie, at least approximately, on a union of low rank subspaces. These subspaces are represented as the product of an overcomplete matrix, known as a dictionary, and a sparse matrix. More generally a wide variety of matrix factorizations have been studied to solve a broad range of problems, for example missing data in recommender systems [27], nonnegative matrix factorization [23] and automatic separation of outliers from a low rank model via sparse PCA [14].
In this paper we introduce a new data matrix class which permits a particular factorization of interest. The members of this class are composed of a sum of rank one matrices of the form a x^T, where a is a binary column vector with exactly d non-zeros and x is a real row vector. Unlike PCA, and more analogous to dictionary learning, we typically consider the wide setting m < n. An example of a matrix in this class is the sum of rank one matrices whose supports are striped blocks of d rows, which may or may not overlap. Here 'striped' refers to the entries in any given column of an associated block having the same coefficient value. More generally, data matrices in this class can be expressed as the sum of independently permuted striped blocks and as a result we will refer to them as Permuted Striped Block (PSB) matrices. We provide a visualization of such matrices in Figure 1.
1.1 Data Model and Problem Definition
A PSB matrix can be defined as Y := AX, where A is an m × n sparse binary matrix with exactly d nonzeros per column and X is an n × N column sparse real matrix. In Definition 1 we present the PSB data model, which defines a particular distribution over the set of PSB matrices. The focus of this paper is to show that, under certain conditions and with high probability, data sampled from the PSB data model has a unique (up to permutation) factorization of the form discussed which can be computed efficiently and in dimension scalings that are near optimal. The details of the PSB data model are given below in Definition 1. As a quick point to clarify our notation, bold upper case letters will be used to refer to deterministic matrices while nonbolded upper case letters will be used to refer to random matrices, i.e., a matrix drawn from a particular distribution (from the context it should be clear which distribution is being referred to).
Definition 1.
Permuted Striped Block (PSB) data model: given m, n, d, k, N ∈ ℕ, with d dividing m and m/d dividing n, define
• the set of binary vectors of dimension m with exactly d nonzeros, and the set of m × n binary matrices all of whose columns lie in this set.
• the set of real, dissociated (see Definition 2) and at most k-sparse n dimensional vectors, and the set of n × N matrices all of whose columns lie in this set.
We now define the following random matrices; note that the randomness is over their supports only.
• A is a random binary matrix of size m × n whose columns each contain exactly d nonzeros. The distribution over the supports of these random columns is defined as follows. The first m/d columns are formed by dividing a random permutation of [m] into m/d disjoint sets of size d and assigning each disjoint set as the support of one column. This process is repeated with independent permutations until n columns are formed. In this construction there are a fixed number d of nonzeros per column and a maximum of nd/m nonzeros per row.
• X is a random real matrix of size n × N whose distribution is defined by concatenating N mutually independent and identically distributed random vectors from the set above; that is, the support of each column is chosen uniformly at random across all possible supports of size at most k.
The PSB data model is the product of the aforementioned factors, generating the random matrix
Y := AX.
The columns comprising X in the PSB data model have nonzeros drawn so that partial sums of the nonzeros give unique nonzero values - a property which is referred to as dissociated.
Definition 2.
A vector x is said to be dissociated iff for any two distinct subsets S1, S2 ⊆ supp(x) the corresponding partial sums differ, i.e., ∑_{i∈S1} x_i ≠ ∑_{i∈S2} x_i.
The concept of dissociation comes from the field of additive combinatorics (Definition 4.32 in [33]). Although at first glance this condition appears restrictive, it is fulfilled almost surely for isotropic vectors and more generally for any random vector whose nonzeros are drawn from a continuous distribution.
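To make the sampling procedure concrete, the following is our own illustrative sketch of the PSB data model (the divisibility assumptions d | m and (m/d) | n, and the choice of Gaussian nonzeros, are simplifications; Gaussian nonzeros are dissociated with probability one):

```python
import numpy as np

def sample_psb(m, n, d, k, N, rng=None):
    """Sample Y = A @ X from a sketch of the PSB data model.

    A: m x n binary, exactly d nonzeros per column, built by splitting
       independent random permutations of [m] into disjoint size-d supports.
    X: n x N, at most k nonzeros per column, Gaussian nonzero values.
    """
    assert m % d == 0 and n % (m // d) == 0
    rng = np.random.default_rng(rng)
    cols_per_perm = m // d
    A = np.zeros((m, n), dtype=int)
    for batch in range(n // cols_per_perm):
        perm = rng.permutation(m)
        for j in range(cols_per_perm):
            A[perm[j * d:(j + 1) * d], batch * cols_per_perm + j] = 1
    X = np.zeros((n, N))
    for t in range(N):
        supp = rng.choice(n, size=k, replace=False)
        X[supp, t] = rng.standard_normal(k)
    return A, X, A @ X
```

Note that each permutation assigns every row to exactly one column of its batch, so each row of A receives exactly nd/m nonzeros, matching the bound in Definition 1.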
1.2 Motivation and Related Work
The PSB data model and the associated factorization task can be interpreted and motivated from a number of perspectives, three of which we now highlight.
• A generative model for studying community detection and clustering: consider the general problem of partitioning the nodes of a graph into clusters so that intra-cluster connectivity is high relative to inter-cluster connectivity. Given such a graph, two basic questions of interest are 1) do these clusters or communities of nodes exist (community detection) and 2) can we actually recover them (clustering)? To study these questions researchers use various generative models for random graphs, one of the most popular (particularly in machine learning and the network sciences) being the stochastic block model (for a recent survey see [1]). In this setting the observed data matrix is the adjacency matrix of a graph, generated by first sampling the cluster to which each node belongs and then sampling edges based on whether or not the relevant pair of nodes belong to the same cluster. The weighted stochastic block model, Aicher et al. [3], generalizes this idea, allowing the weights of the adjacency matrix to be non-binary. The PSB data model can be viewed as an alternative data model for studying community detection and clustering. Indeed, the PSB data model can be interpreted as the adjacency matrix of a weighted bipartite graph. Recovering the factors A and X from Y is valuable as A encodes clusters or groups of the m row objects (where each group has a fixed size d) and each column of X encodes a soft clustering of one of the N data objects into at most k of the n groups. The nonzero coefficients in a given column of X represent the strength of association between that object and the clusters defined by A.
• Dictionary learning with a sparse, binary dictionary: for classes of commonly used data it is typically the case that there are known representations in which the object can be represented using few components, e.g. images are well represented using few coefficients in wavelet or discrete cosine representations. That is, there is a known "dictionary" D for which data y from a specific class has the property that ‖y − Dx‖ is small even while the number of nonzeros in x is limited, meaning ‖x‖_0 ≤ k. Dictionary learning allows one to tailor a dictionary to a specific data set by minimizing ‖Y − DX‖ subject to ‖x_i‖_0 ≤ k for all i. Alternatively, dictionary learning applied to a data matrix without prior knowledge of an initial dictionary reveals properties of the data through the learned dictionary. Factorizing a matrix drawn from the PSB data model can be viewed as a specific instance of dictionary learning in which the dictionary is restricted to be in the class of overcomplete sparse, binary dictionaries. While this restricted class of dictionaries limits the type of data which the PSB data model can be used to describe, as we will show in Section 3 it does allow for a rigorous proof that with high probability it is possible to efficiently learn the dictionary and sparse code. This extends the growing literature on provable dictionary learning - see Table 1 for a summary of some recent results.
• Learned combinatorial compressed sensing: the field of compressed sensing [13, 17] studies how to efficiently reconstruct a sparse vector from only a few linear measurements. These linear measurements are generated by multiplying the sparse vector in question by a matrix, referred to as the sensing matrix. The sensing matrices most commonly studied are random and drawn from one of the following distributions: Gaussian, uniform projections over the unit sphere and partial Fourier. Combinatorial compressed sensing [10] instead studies the compressed sensing problem in the context of a random, sparse, binary sensing matrix. The problem of factorizing PSB matrices can therefore also be interpreted as recovering the sparse coding matrix X from the compressed measurements Y without access to the sensing matrix A. Indeed, the factorization problem we study here can be motivated by the conjecture that, as combinatorial compressed sensing algorithms are so effective, the sparse code can be recovered even without knowledge of the sensing matrix.
The PSB data model, Definition 1, and the associated factorization task are most closely related to the literature on subspace clustering, multiple measurement combinatorial compressed sensing, and more generally the literature on provable dictionary learning. Each of these topics studies the model Y = AX, where X is assumed to be a sparse matrix with up to k nonzeros per column. These topics differ in terms of what further assumptions are imposed on the dictionary A and the sparse coding matrix X. The most general setup is that of dictionary learning, in which typically no additional structure is imposed on A. In the literature on provable dictionary learning however, structure is often imposed on the factor matrices so as to facilitate the development of theoretical guarantees even if this comes at the expense of model expressiveness. Two popular structural assumptions are that the dictionary is complete (square and full rank) or that its columns are uncorrelated. Furthermore, as is the case for the PSB data model, in provable dictionary learning it is common to assume that the factor matrices are drawn from a particular distribution, e.g. Bernoulli-Subgaussian, so as to understand how difficult the factorization typically is to perform, rather than in the worst case. Table 1 lists some recent results and summarizes their modelling assumptions. Subspace clustering considers the further structural assumption that the columns of the sparse coding matrix can be partitioned into disjoint sets such that each corresponding submatrix of X has few nonzeros per row; in this setting, the data is contained on a union of subspaces which are of lower dimension than the ambient dimension [19].
The PSB data model differs from the above literature most notably in the construction of A, which is sparse, binary and has exactly d nonzeros per column. The case of A being sparse and binary has been studied previously in [5], where, guided by the notion of reversible neural networks (that a neural network can be run in reverse, acting as a generative model for the observed data), the authors consider learning a deep network as a layerwise nonlinear dictionary learning problem. This work differs substantially from [5] in a number of respects: first, the authors consider Y = H(AX), where H is the elementwise unit step function; this removes information about the nonzero coefficients of X and means that the observed data is binary. As a result of this, and because the authors are in the setup where each layer feeds into the next, X is also assumed to be binary. The factors generating this thresholded model are challenging to recover due to the limited information available, and as a consequence this model is also less descriptive for nonbinary data. In particular, in the three motivating examples previously covered: the communities would be unweighted, the dictionary learning data would be binary, and the learned combinatorial compressed sensing would only be for binary signals. Third and finally, in [5] the nonzeros of A are drawn independently of one another; the distribution defined in the PSB data model is a significant departure from this, with both inter and intra column nonzero dependencies. We also emphasize that in terms of method the approach we take to factorize a matrix drawn from the PSB data model differs markedly from that adopted in [5]. In this prior work A is reconstructed (up to permutation) using a non-iterative approach, involving first the computation of all row-wise correlations of Y, from which pairs of nonzero entries are recovered. Using a clustering technique adapted from the graph square root problem, these pairs of nonzeros, which can be thought of as partial supports of the columns of A of length 2, are then combined to recover the columns in question. In contrast, our method iteratively generates large partial supports directly from the columns of the residual, which can readily be clustered while simultaneously revealing nonzero entries in X. This process is then repeated on the residual, which is what remains of Y after the approximations of A and X at the current iteration have been removed.
1.3 Main Contributions
The main contributions of this paper are: 1) the introduction of the Permuted Striped Block (PSB) data model, 2) an algorithm, Expander Based Factorization (EBF), for computing the factorization of PSB matrices and 3) recovery guarantees for EBF under the PSB data model, which we summarize in Theorem 1. Central to our method of proof is the observation that the random construction of A as in Definition 1 is with high probability the adjacency matrix of a left d-regular (k, ϵ, d) bipartite expander graph. We define such graphs in Definition 3 and discuss their properties in Section 2.2. In what follows, the set of left d-regular (k, ϵ, d) bipartite expander graph adjacency matrices of dimension m × n is denoted ℰ.
Theorem 1.
Let Y be drawn from the PSB data model (Definition 1) under the assumption that A is the adjacency matrix of a (k, ϵ, d) expander graph. If EBF, Algorithm 1, terminates at an iteration τ then the following statements are true.
1. EBF only identifies correct entries: for all iterations up to τ there exists a permutation P such that supp(Â) ⊆ supp(AP) and supp(X̂) ⊆ supp(Pᵀ X), and all nonzeros in Â and X̂ are equal to the corresponding entries in AP and Pᵀ X.
2. Uniqueness of factorization: if the residual at termination is zero, so that Y = Â X̂, then this factorization is unique up to permutation.
3. Probability that EBF is successful: suppose k, m and n scale proportionally. If N ≥ C(n/k)log^2(n), where C is a constant (see Lemma 8 for a full definition), then the probability that EBF recovers A and X up to permutation tends to one as n grows.
As each column of Y is composed of at most k columns of A, which itself has n columns, a minimum of N ≥ n/k columns is necessary in order for Y to have at least one contribution from each column in A. To be clear, this is a necessary condition on N to identify all columns in A. Theorem 1 states that N = O((n/k)log^2(n)) is sufficient to recover both A and X from Y with high probability under the PSB data model. We believe this result is likely to be a sharp lower bound, as one log(n) factor arises from a coupon collector argument inherent to the way the supports of the columns of X are sampled, and the second log(n) factor is needed to achieve the stated rate of convergence. The per-iteration cost of EBF in terms of the input data dimensions is discussed in more detail in Section 4.2. The rest of this paper is structured as follows: in Section 2 we provide the algorithmic details of EBF, in Section 3 we prove Theorem 1 and in Section 4 we provide a first step improvement of EBF along with numerical simulations, demonstrating the efficacy of the algorithms in practice. The proofs of the key supporting lemmas are mostly deferred to the Appendix.
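The coupon collector effect behind one of the logarithmic factors can be illustrated numerically. The simulation below (our own idealization: uniform size-k draws, ignoring the dependencies of Definition 1) counts how many data-column supports must be drawn before every column of A has appeared at least once:

```python
import numpy as np

def coupon_collector_draws(n, k, rng=None):
    """Count how many uniform size-k supports must be drawn before all
    n columns of A have appeared in at least one support -- the
    coupon-collector effect behind one log(n) factor in N."""
    rng = np.random.default_rng(rng)
    seen = np.zeros(n, dtype=bool)
    draws = 0
    while not seen.all():
        seen[rng.choice(n, size=k, replace=False)] = True
        draws += 1
    return draws
```

Averaged over many trials, the count grows like (n/k) log(n), consistent with the necessity discussion above.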
2 Algorithmic Ideas
In this section we present a simple algorithm which leverages the properties of expander graphs to try and compute the factorization Y = AX of a PSB matrix, where A is the adjacency matrix of a (k, ϵ, d) expander graph and X is column sparse. We call this algorithm Expander Based Factorization (EBF). To describe and define this algorithm we adopt the following notational conventions. Note that all variables referenced here will be defined properly in subsequent sections.
• For any p ∈ ℕ we define [p] as the set of natural numbers less than or equal to p; for example, [3] = {1, 2, 3}.
• If M is an m × n matrix, R ⊆ [m] a subset of the row indices and C ⊆ [n] a subset of the column indices, then M_{R,C} is the submatrix of M formed with only the rows in R and the columns in C.
• τ will be used as the iteration index for the algorithms presented.
• Â^(τ) is the estimate of A, up to column permutation, at iteration τ.
• X̂^(τ) is the estimate of X, up to row permutation, at iteration τ.
• R^(τ) is the residual of the data matrix at iteration τ.
• The number of partial supports extracted from R^(τ) at iteration τ, and the matrix whose columns are these partial supports, will be referred to below.
• The set of singleton values (see Definition 4) extracted from a column of R^(τ) which appear sufficiently often, and the set of partial supports (see Definition 5) extracted from R^(τ) used to update a given column of the estimate Â^(τ), will likewise be referred to below.
• A column of A is said to have been recovered iff there is some iteration for which, for all subsequent iterates, there exists a column of the estimate whose support and nonzero entries match it exactly.
• A column of A is said to be complete at iteration τ iff the corresponding column of Â^(τ) equals it.
• The elementwise unit step function maps an entry v to 1 for v > 0 and to 0 for v ≤ 0.
• The set of column indices of R^(τ) which still contain nonzeros will be used to restrict computation to the active columns.
• For consistency and clarity we will typically adopt fixed index letters: one for the columns of Y and R^(τ), one for their rows, one for the columns of A (equivalently the rows of X), one for the columns of Â^(τ) (equivalently the rows of X̂^(τ)), and one for the columns of the matrix of partial supports.
2.1 The Expander Based Factorization Algorithm (EBF)
The key idea behind EBF is that if the binary columns of A are sufficiently sparse and have nearly disjoint supports, then certain entries of X, which we will term singleton values (see Definition 4), can be directly identified from entries of Y. Furthermore, it will become apparent that singleton values not only provide entries of X but also entries of A via the construction of partial supports. By iteratively combining the information gained about A and X from the singleton values and partial supports and then removing it from the residual, EBF can iteratively recover parts of A and X until either no further progress can be made or the factors have been fully recovered (up to permutation).
The remainder of Section 2 is structured as follows: in Section 2.2 we review the definition and properties of expander graphs so as to formalize a notion of the columns of A being sufficiently disjoint in support, in Section 2.3 we define and prove certain key properties of singleton values and partial supports, in Section 2.4 we prove that EBF never introduces erroneous nonzeros into the estimates of either A or X and review each step of Algorithm 1 in more detail. Finally, in Section 2.5, for completeness we provide a sketch of a proof that in a certain parameter regime the matrix A in the PSB data model is with high probability the adjacency matrix of a (k, ϵ, d) expander graph.
2.2 Background on Expander Graphs
The PSB data model in Definition 1 is chosen so as to leverage certain properties of expander graphs. Such graphs are both highly connected yet sparse and are an interesting object both from a practical perspective, playing a key role in applications, e.g. error correcting codes, as well as in areas of pure maths (for a survey see Lubotzky [24]). We consider only left d-regular bipartite expander graphs, and for ease we will typically refer to these graphs as expanders.
Definition 3 ((k, ϵ, d) Expander Graph [28]).
Consider a left d-regular bipartite graph G = ([n], [m], E); for any S ⊆ [n] let N(S) be the set of nodes in [m] connected to a node in S. G is a (k, ϵ, d) expander iff
|N(S)| > (1 − ϵ)d|S| ∀ S ∈ [n]^(≤k). (2.1)
Here [n]^(≤k) denotes the set of subsets of [n] with cardinality at most k. A key property of such graphs is the Unique Neighbour Property.
Theorem 2 (The Unique Neighbour Property [15, Lemma 1.1]).
Suppose that G is an unbalanced, left d-regular bipartite graph G = ([n], [m], E). Let S be any subset of nodes and define
N1(S) = {j ∈ N(S) : |N(j) ∩ S| = 1},
here N(j) is the set of nodes of [n] connected to node j. If G is a (k, ϵ, d) expander graph then
|N1(S)| > (1 − 2ϵ)d|S| ∀ S ∈ [n]^(≤k). (2.2)
A proof of Theorem 2 in the notation used here is available in [25, Appendix A]. For our purposes it will prove more relevant to describe expander graphs using matrices. The adjacency matrix of a (k, ϵ, d) expander graph is an m × n binary matrix A in which A_{j,i} = 1 if there is an edge between node i ∈ [n] and node j ∈ [m], and A_{j,i} = 0 otherwise. (We note that adjacency matrices of graphs are often defined to describe the edges present between all nodes in the graph; however, we emphasize that in the definition adopted here the entries between nodes in the same group are not set to zero but rather are omitted entirely.) Applying Definition 3 and Theorem 2 we make the following observations.
Corollary 1 (Adjacency matrix of a (k, ϵ, d) Expander Graph).
If A is the adjacency matrix of a (k, ϵ, d) Expander Graph then any submatrix A_{[m],S} of A, where S ∈ [n]^(≤k), satisfies the following properties.
1. By Definition 3 there are more than (1 − ϵ)d|S| rows in A_{[m],S} that have at least one non-zero.
2. By Theorem 2 there are more than (1 − 2ϵ)d|S| rows in A_{[m],S} that have only one non-zero.
We will use ℰ to denote the set of (k, ϵ, d) expander graph adjacency matrices of dimension m × n.
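As a sanity check on these definitions, expansion can be verified by brute force for small matrices; the check below is exponential in k, so it is illustrative only, and its name and interface are our own:

```python
import numpy as np
from itertools import combinations

def is_expander(A, k, eps):
    """Brute-force check that the binary matrix A (m rows, n columns,
    d nonzeros per column) is the adjacency matrix of a (k, eps, d)
    expander: every set S of at most k columns must satisfy
    |N(S)| > (1 - eps) * d * |S|."""
    m, n = A.shape
    d = int(A[:, 0].sum())
    for s in range(1, k + 1):
        for S in combinations(range(n), s):
            n_S = np.count_nonzero(A[:, list(S)].sum(axis=1))
            if n_S <= (1 - eps) * d * s:
                return False
    return True
```

For example, a matrix with disjoint column supports expands maximally, while duplicating a column breaks the property for k ≥ 2.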
2.3 Singleton values and partial supports
For now we will consider a generic vector b = Az such that A is the adjacency matrix of a (k, ϵ, d) expander graph and z is dissociated with at most k nonzeros. We will later apply the theory we develop here to the columns of the residual at any iteration τ. Letting A_j denote the jth row of A, any given entry b_j = A_j z of b is a sum of some subset of the entries of z. We now introduce two concepts which underpin much of what follows.
Definition 4 (Singleton Value).
Consider a vector b = Az where A is the adjacency matrix of a (k, ϵ, d) expander graph and z is dissociated with at most k nonzeros; a singleton value of b is an entry b_j whose row A_j intersects supp(z) in exactly one location, hence b_j = z_l for some l ∈ supp(z).
Definition 5 (Partial Support).
A partial support of a column a_i of A is a binary vector w satisfying supp(w) ⊆ supp(a_i).
Singleton values are of interest in the context of factorizing a matrix drawn from the PSB data model as once identified their contribution can be removed from the residual. Furthermore, using a singleton value one can construct a partial support by creating a binary vector with nonzeros where the singleton value appears. Therefore identifying singleton values allows for the recovery of nonzeros in both A and X. To leverage this fact however we need a criterion or certificate for identifying them. Under the assumption that A is a (k, ϵ, d) expander and that z is dissociated (see Definition 2), it is possible to derive a certificate based on a lower bound on the mode (or frequency) with which a value appears in b.
Lemma 1 (Identification of singleton values).
Consider a vector b = Az where A is the adjacency matrix of a (k, ϵ, d) expander graph and z is dissociated with at most k nonzeros; then any value appearing in b with frequency at least 2ϵd is a singleton value. To be precise, for any value v,
|{j ∈ supp(b) : b_j = v}| ≥ 2ϵd ⟹ ∃ l ∈ supp(z) s.t. v = z_l.
A proof of Lemma 1 is provided in Appendix A.1. This Lemma provides a sufficient condition for testing whether a value is a singleton or not. However, it does not provide any guarantees that in b there will be any singleton values. In Lemma 2, adapted from [25, Theorem 4.6], we show, under certain assumptions, that there always exists a positive number of singleton values which appear sufficiently often in b.
Lemma 2 (Existence of singleton values, adapted from [25, Theorem 4.6]).
Consider a vector b = Az where A is the adjacency matrix of a (k, ϵ, d) expander graph and z is dissociated with at most k nonzeros. For l ∈ supp(z), let the frequency of z_l be the number of row indices j for which b_j = z_l. Defining T as the set of singleton values which appear more than (1 + 2ϵ)d/2 times in b, then
|T| ≥ |supp(z)| / ((1 + 2ϵ)d).
Therefore, so long as z is nonzero it is always possible to extract at least 1 partial support of size greater than (1 + 2ϵ)d/2.
A proof of Lemma 2 is given in Appendix A.2.
Step 5 of Algorithm 1 relies on us being able to accurately and efficiently sort the partial supports extracted by their column of A. Corollary 2 states that if A is a (k, ϵ, d) expander with k ≥ 2 and if the partial supports are sufficiently large, then clustering can be achieved without error simply by computing pairwise inner products.
Corollary 2 (Clustering partial supports).
Any pair of partial supports w1, w2 satisfying |supp(w1)|, |supp(w2)| > (1 + 2ϵ)d/2 originate from the same column of A iff ⟨w1, w2⟩ > 2ϵd.
A proof of Corollary 2, which follows directly from results derived in the proof of Lemma 2, is provided in Appendix A.3.
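A sketch of the extraction and clustering steps is given below; the thresholds follow our reading of the (partially garbled) statements of Lemmas 1-2 and Corollary 2, so treat the exact constants as assumptions:

```python
import numpy as np
from collections import Counter

def extract_partial_supports(b, eps, d):
    """Return (value, partial_support) pairs from b = A z: nonzero
    values whose frequency exceeds the clustering threshold
    (1 + 2*eps)*d/2 are treated as certified singleton values, and the
    rows where they occur form a partial support of a column of A."""
    threshold = (1 + 2 * eps) * d / 2
    counts = Counter(v for v in b if v != 0)
    out = []
    for value, freq in counts.items():
        if freq > threshold:
            out.append((value, (b == value).astype(int)))
    return out

def same_column(w1, w2, eps, d):
    """Corollary-2-style test: two large partial supports come from the
    same column of A iff their inner product exceeds 2*eps*d."""
    return int(w1 @ w2) > 2 * eps * d
```

On a toy vector whose first column contributes the value 5 on four rows (d = 4, eps = 0.25), the value 5 clears the threshold while a value seen once does not.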
2.4 Algorithm 1 accuracy guarantees and summary
Based on the results of Section 2.3 it is now possible to provide the following accuracy guarantees for EBF.
Lemma 3 (EBF only identifies correct entries).
Let Y = AX, where A is the adjacency matrix of a (k, ϵ, d) expander graph and X has dissociated columns with at most k nonzeros each. Suppose that EBF terminates at iteration τ, then the following statements are true.
1. For all iterations up to τ there exists a permutation P such that supp(Â) ⊆ supp(AP) and supp(X̂) ⊆ supp(Pᵀ X), and all nonzeros in Â and X̂ are equal to the corresponding entries in AP and Pᵀ X.
2. EBF fails only if the residual at termination is nonzero.
For a proof of Lemma 3 see Appendix A.4. Lemma 3 states that EBF never makes a mistake in terms of introducing erroneous nonzeros to either Â or X̂, and hence fails to factorize Y only by failing to recover certain nonzeros in A or X. We are now in a position to be able to summarize and justify how and why each step of Algorithm 1 works.
• Steps 1, 2 and 3 are self-explanatory: 1 being the initialization of certain key variables, 2 defining a while loop which terminates only when neither of the estimates Â nor X̂ is updated from the previous iterate, and 3 being the update of the iteration index.
• Steps 4 and 5 are in fact conducted simultaneously, with each column of the residual being processed in parallel. The frequency of each nonzero value in a column is calculated and the value checked against singleton values identified in previous iterates. If a nonzero value appears more than (1 + 2ϵ)d/2 times then its associated partial support is constructed. Note that although it may be possible to identify singleton values which appear at least 2ϵd times, unless they appear more than (1 + 2ϵ)d/2 times it is not possible to guarantee that their associated partial support can be clustered correctly. Therefore, such a singleton value cannot be placed in a row of X̂ consistent with the other singleton values extracted, and its associated partial support cannot be used to augment Â. If any nonzero value of a column of the residual matches a previously identified singleton value from the same column then it can also be used to augment X̂: by Lemma 3 and the fact that the columns of X are dissociated, such a match must correspond to the same entry of X.
• Steps 6, 7 and 8 can be conducted on the basis of Corollary 2. A naive method would be to simply take each of the partial supports extracted from the residual in turn and compute its inner product with the nonzero columns of Â. By Lemma 3 and the fact that each partial support has cardinality larger than (1 + 2ϵ)d/2, either a partial support will match with an existing column of Â or otherwise it can be used as the starting estimate for a new column of Â. Once a partial support, for which we know the column of the residual from which it was extracted, is assigned to a column of Â, then the corresponding location in X̂ can be updated using its associated singleton value.
• Step 9 - once all singleton values and partial supports extracted from the residual have been used to augment Â and X̂, the residual can be updated and the algorithm loops.
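The steps above can be condensed into a heavily simplified sketch (our own illustration, not the paper's Algorithm 1: it uses a single frequency threshold and clusters partial supports by any positive overlap, which is only safe when the columns of A are well separated):

```python
import numpy as np
from collections import Counter

def ebf_sketch(Y, d, eps, max_iter=50):
    """Simplified Expander Based Factorization: repeatedly find values
    whose within-column frequency exceeds 2*eps*d (treated as singleton
    values), form their partial supports, attach each support to a
    column estimate of A by overlap, record the coefficient in X, and
    peel the identified contribution off the residual."""
    m, N = Y.shape
    R = Y.astype(float).copy()
    A_cols = []      # recovered (possibly partial) columns of A
    coeffs = {}      # (column of A, data column) -> coefficient
    for _ in range(max_iter):
        updated = False
        for t in range(N):
            for value, freq in list(Counter(v for v in R[:, t] if v != 0).items()):
                if freq <= 2 * eps * d:
                    continue
                support = (R[:, t] == value).astype(int)
                if support.sum() == 0:   # already peeled this sweep
                    continue
                match = next((i for i, a in enumerate(A_cols)
                              if int(support @ a) > 0), None)
                if match is None:
                    A_cols.append(support)
                    match = len(A_cols) - 1
                else:
                    A_cols[match] = np.maximum(A_cols[match], support)
                coeffs[(match, t)] = coeffs.get((match, t), 0.0) + value
                R[:, t] -= value * support   # peel off identified entries
                updated = True
        if not updated:
            break
    A_hat = np.array(A_cols).T if A_cols else np.zeros((m, 0), dtype=int)
    X_hat = np.zeros((len(A_cols), N))
    for (i, t), v in coeffs.items():
        X_hat[i, t] = v
    return A_hat, X_hat, R
```

On an easy instance with disjoint column supports the sketch recovers the factorization exactly up to permutation; the paper's algorithm handles the much harder overlapping case.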
2.5 The construction of A in the PSB data model: motivation and connections to (k,ϵ,d) expander graphs
The dictionary model in Definition 1 is chosen so as to satisfy two properties that facilitate the identification of its columns from the data Y. First, the number of nonzeros per row of A is bounded so that each entry can be readily guaranteed to be recovered, and second, the supports of the columns are drawn so that the probability of substantial overlap is bounded. The remainder of this section is a further discussion of how these properties motivate the construction of A and give insight into other choices of A for which the methods presented in this paper could be extended. Some explicit examples of such extensions are given in Section 5.
In order that an entire column of A can be identified, the union of the associated partial supports extracted by EBF needs to be equal to the entire support of the column. That sufficiently many partial supports of a column will be observed at some iteration of EBF is achieved by ensuring that there are enough nonzeros per row of X; this is analyzed in Lemma 9 in Section 3. The inability to identify all entries in a column of A only occurs if the random combinations of columns of A have a consistent overlap, resulting in an entry from the column in question not being present in any of the partial supports. The probability of such an overlap is analyzed in the proof of Lemma 8, where we show that bounding the maximum number of nonzeros per row of A is sufficient to ensure that such a consistent overlap has a probability that goes to zero exponentially fast in the number of partial supports available. This motivates the random construction of A in Definition 1, where the maximum number of nonzeros per row of A is given by nd/m.
The algorithmic approach of EBF, i.e., the extraction and clustering of singleton values and partial supports, follows from A being a (k, ϵ, d) expander. In [7] it was shown that if A were constructed such that the column supports have fixed cardinality d and were drawn independently and uniformly at random, then A would be a (k, ϵ, d) expander with high probability. More precisely, [7, 8] considered d and ϵ to be fixed with k, m and n growing proportionally and showed that the probability that A would not be the adjacency matrix of a (k, ϵ, d) expander graph is bounded by a given function which decays exponentially in n. Without modification, the aforementioned proof and bounds in [7] prove that the construction of A in the PSB data model, Definition 1, is with high probability a (k, ϵ, d) expander graph. For brevity we do not review this proof in detail and instead only sketch why the proof is equally valid here. In [7] the large deviation probability bounds are derived by considering the submatrix consisting of |S| ≤ k columns from A. The support S is repeatedly divided into two disjoint supports, say S1 and S2, each of size approximately |S|/2, and then the number of neighbours N(S) is bounded in terms of N(S1) and N(S2). This process is repeated until the finest level partitions contain a single column, in which case |N({i})| = d for each singleton {i}. Bounds on the probability of the number of neighbours are computed using the mutual independence of the columns. These bounds also hold for the construction in Definition 1 since the columns either have independently drawn supports or, if they are dependent, then they are disjoint by construction.
3 Theoretical Guarantees
Analyzing and proving theoretical guarantees for EBF directly is challenging, so instead our approach is to study a simpler surrogate algorithm which we can use to lower bound the performance of EBF. To this end, we introduce the Naive Expander Based Factorization Algorithm (NEBF) which, despite being suboptimal from a practical perspective, is still sufficiently effective for us to prove Theorem 1.
3.1 The Naive Expander Based Factorization Algorithm (NEBF)
NEBF, given in Algorithm 2, is based on the same principles as EBF developed in Section 2: in short, the extraction and clustering of partial supports and singleton values. However, a number of significant restrictions are included in order to allow us to determine when and with what probability it will succeed. At each iteration of the while loop, lines 2-14, NEBF attempts to recover a column of ; if it fails to do so then the algorithm terminates. Therefore, by construction, at the start of the th iteration the first of columns are complete. On line 3 the subroutine PeelRes (for more details see Algorithm 4 in Appendix B.1) iteratively removes the contributions of complete columns of from the residual until none of the partial supports extracted match a complete column. This means that the clusters of partial supports returned by PeelRes all correspond to as yet incomplete columns of . To define PeelRes we need to introduce the notion of the visible set , which is the set of column indices of for which there exists at least one partial support extracted from . In step 4 NEBF identifies the cluster of partial supports with the largest cardinality and then in steps 6 and 7 attempts to use this cluster to compute and complete the th column of . If this step is successful then in lines 8-9 the residual and iteration index are updated and the algorithm repeats. If the construction of the th column is unsuccessful, i.e., it is missing at least one entry, then the algorithm terminates.
We now present a few properties of NEBF. First, and analogous to Lemma 3, NEBF does not make mistakes.
Lemma 4 (NEBF only identifies correct entries).
Let , where with and . Suppose NEBF terminates at an iteration , then the following statements are true.
1. For all there exists a permutation matrix such that
1. where .
2. From any nonzero , at least singleton values and associated partial supports, each with more than nonzeros, can be extracted and clustered without error.
3. , and any nonzero in is equal to its corresponding entry in .
2. NEBF fails only if .
A proof of Lemma 4 can be found in Appendix A.5. One of the key differences between NEBF and EBF is that at every iteration the residual is of the form where and : as a result Lemma 2 applies at every iteration. This is covered in more detail in Lemma 4 above.
3.2 Key supporting Lemmas
Before we can proceed to prove Theorem 1 it is necessary for us to introduce a number of key supporting lemmas. First, and taking inspiration from algorithms in the combinatorial compressed sensing literature (in particular Expander Recovery [21]), it holds for both EBF and NEBF that recovery of is sufficient for the recovery of . This result will prove useful in what follows as it allows us to study the recovery of only rather than both and .
Lemma 5 (Recovery of A is sufficient to guarantee success).
Let , where with and . If either EBF or NEBF recover up to permutation at some iteration , then both are guaranteed to recover by iteration . Therefore, for both algorithms recovery of is both a necessary and sufficient condition for success and hence under the PSB data model the probability that is successfully factorized is equal to the probability that is recovered.
For a proof of Lemma 5 see Appendix A.6. Second, using Lemmas 3 and 4, we have the following uniqueness result concerning the factorization calculated by EBF and NEBF.
Lemma 6 (Uniqueness of factorization).
Let , where with and . If either EBF or NEBF terminates at an iteration such that then the factorization is not only successful, but is also unique, up to permutation, in terms of factorizations of this form.
For a proof of Lemma 6 see Appendix A.7. The next Lemma states that if NEBF succeeds in computing the factors of then so will EBF. The key implication of this Lemma is that it is sufficient to study NEBF in order to lower bound the probability that EBF is successful.
Lemma 7 (NEBF can be used to lower bound the performance of EBF).
Assuming that with and , then if NEBF successfully computes the factorization of up to some permutation of the columns and rows of and respectively, then so will EBF. Furthermore, if the probability that NEBF successfully factorizes , drawn from the PSB data model, is greater than , then the probability that EBF successfully factorizes is also greater than .
A proof of Lemma 7 can be found in Appendix A.8. Lemma 8 below provides a lower bound on the probability that a column of the random matrix from the PSB data model can be recovered from of its partial supports.
Lemma 8 (Column recovery from L partial supports).
Under the construction of in Definition 1, consider any . Let be a set of partial supports associated with . Let , where is the unit step function applied elementwise, be the reconstruction of based on these partial supports. With , then
$$P(\hat{A}_l \neq A_l) \le d\left(1 - \binom{n-\zeta}{k-1}\binom{n-1}{k-1}^{-1}\right)^{L}.$$
Furthermore, if and , where are constants and , then this upper bound can be simplified as follows,
$$P(\hat{A}_l \neq A_l) \le d\,e^{-\tau(n)L},$$
where is .
For a proof see Appendix A.9. The key takeaway of this lemma is that the probability that NEBF fails to recover a column decreases exponentially in , the number of partial supports available to reconstruct it. Finally, Lemma 9 concerns the number of data points required so that each column of is seen sufficiently many times. To be clear, we require large enough so that the number of nonzeros per row of is at least as large as some lower bound with high probability.
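Taking the Lemma 8 bound to be of the form $d\,(1 - \binom{n-\zeta}{k-1}\binom{n-1}{k-1}^{-1})^{L}$, its exponential decay in $L$ is easy to check numerically; the parameter values below are purely illustrative and not taken from the paper:

```python
from math import comb

def failure_bound(d, n, k, zeta, L):
    """Evaluate d * (1 - C(n - zeta, k - 1) / C(n - 1, k - 1))**L."""
    p = comb(n - zeta, k - 1) / comb(n - 1, k - 1)
    return d * (1 - p) ** L

# Illustrative parameters: the bound shrinks geometrically as the
# number of available partial supports L grows.
for L in (1, 5, 10, 20):
    print(L, failure_bound(d=10, n=1000, k=20, zeta=50, L=L))
```

The loop prints a strictly decreasing sequence, matching the takeaway that recovery failure becomes exponentially unlikely in $L$.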
Lemma 9 (Nonzeros per row in X).
Under the construction of in Definition 1, with for some , then the probability that has at least non-zeros per row is at least .
For a proof of Lemma 9 see Appendix A.10. With these Lemmas in place we are ready to proceed to the proof of Theorem 1.
3.3 Proof of Theorem 1
Statements 1 and 2 of Theorem 1 are immediate consequences of Lemmas 3 and 6 respectively, so all that is left is to prove statement 3. To quickly recap, our objective is to recover up to permutation the random factor matrices and , as defined in the PSB data model in Definition 1, from the random matrix . Our strategy at a high level is as follows: using Lemmas 5 and 7 we lower bound the probability that EBF factorizes by lower bounding the probability that NEBF recovers . NEBF recovers up to permutation iff at each iteration of the while loop, lines 2-14 of Algorithm 2, a new column of is recovered. We lower bound the probability of this using Lemma 8 by first conditioning on there being a certain number of nonzeros per row of using Lemma 9, and then using a pigeonhole principle argument to ensure that . Here is chosen to ensure the desired rate. In what follows we adopt the following notation.
• and are the events that EBF and NEBF respectively recover and up to permutation, meaning there exists an iteration and a permutation such that and .
• is the event that is the adjacency matrix of a expander graph with expansion parameter .
• For let denote the event that a column of is recovered at the th iterate of the while loop on lines 2-14 of Algorithm 2 at some iteration of NEBF. Note that by construction where is the index of the last column completed before NEBF terminates.
• is the event that each row of has at least some quantity (yet to be specified) nonzeros per row which is a function of .
Proof.
As NEBF recovers iff at every iteration of the while loop (lines 2-14 of Algorithm 2) a column of is recovered, then
$$P(\Lambda^*_{\mathrm{NEBF}} \mid \Lambda_0) = P\Big(\bigcap_{h=1}^{n} \Lambda_h \,\Big|\, \Lambda_0\Big).$$
We now apply Bayes' Theorem and condition on ,
$$P(\Lambda^*_{\mathrm{NEBF}} \mid \Lambda_0) = P\Big(\bigcap_{h=1}^{n} \Lambda_h \,\Big|\, \cdots\Big)$$
# ODE problem: 3x^2 y dx + (x^3 + 2y)dy = 0
I have two ODEs which I cannot solve.

The first is
$$3x^2 y\,dx + (x^3 + 2y)\,dy = 0.$$
I tried the change of variable $y = vx$, but I still cannot find a way to solve it.

The second is
$$(e^x \sin y - 2y\sin x)\,dx + (e^x \cos y + 2\cos x)\,dy = 0.$$
Here I have no idea.

Thank you
B
## Answers and Replies
quasar987
Science Advisor
Homework Helper
Gold Member
They are exact differential equations. Have you not seen how to solve such ODEs?
Hurkyl
Staff Emeritus
Science Advisor
Gold Member
Surely you know other techniques for solving ODE? This problem is tailor-made for one of them.
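To make the exactness hint concrete for the first equation: a plain-Python finite-difference check (illustrative, no libraries) confirms that $M_y = N_x$ and that $F(x,y) = x^3 y + y^2$ is a potential function, so the general solution is $x^3 y + y^2 = C$. The same check applies to the second equation once its second differential is read as $dy$.

```python
def partial(f, x, y, var, h=1e-6):
    """Central-difference partial derivative of f at (x, y)."""
    if var == 'x':
        return (f(x + h, y) - f(x - h, y)) / (2 * h)
    return (f(x, y + h) - f(x, y - h)) / (2 * h)

# First equation: M dx + N dy = 0 with M = 3x^2 y and N = x^3 + 2y.
M = lambda x, y: 3 * x**2 * y
N = lambda x, y: x**3 + 2 * y

# Exactness (dM/dy == dN/dx) guarantees a potential F with
# F_x = M and F_y = N; here F(x, y) = x^3 y + y^2.
F = lambda x, y: x**3 * y + y**2

x0, y0 = 1.3, 0.7
assert abs(partial(M, x0, y0, 'y') - partial(N, x0, y0, 'x')) < 1e-5
assert abs(partial(F, x0, y0, 'x') - M(x0, y0)) < 1e-5
assert abs(partial(F, x0, y0, 'y') - N(x0, y0)) < 1e-5
print("exact; implicit solution x^3*y + y^2 = C")
```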
# Tag Info
37
Sage is an open source computer algebra system. Let's see if it can handle your basic example:

    sage: sqrt(3) * (4/sqrt(3) - sqrt(3))
    1

What is happening under the hood? Sage is storing everything as a symbolic expression, which it is able to manipulate and simplify using some basic rules. Here is another example:

    sage: 1 + exp(pi*i)
    0

So sage can also ...
22
There is no way to represent all real numbers without errors if each number is to have a finite representation. There are uncountably many real numbers but only countably many finite strings of 1's and 0's that you could use to represent them with.
20
It all depends what you want to do. For example, what you show is a great way of representing rational numbers. But it still can't represent something like $\pi$ or $e$ perfectly. In fact, many languages such as Haskell and Scheme have built in support for rational numbers, storing them in the form $\frac{a}{b}$ where $a,b$ are integers. The main reason ...
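Python takes the same approach in its standard library: the `fractions` module stores numbers as exact integer pairs, so the familiar rounding artifacts of binary floats disappear for rational arithmetic (while $\pi$ and $e$ remain out of reach):

```python
from fractions import Fraction

# Binary floating point cannot represent 0.1 or 0.2 exactly...
print(0.1 + 0.2 == 0.3)  # → False

# ...but exact rational arithmetic has no such problem.
print(Fraction(1, 10) + Fraction(2, 10) == Fraction(3, 10))  # → True
```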
18
Computer algebra is a huge area, probably at least a semester-long university-level course to get most of the basics. However, I think we can cover some of the flavour of it here. Your case is the easy case, because it is entirely within the language of algebraic numbers (i.e. roots of polynomials), and manipulating polynomials is really the foundation of ...
9
Any machine model in which a machine can be described by a string over a fixed alphabet can only compute countably many things. Since there are uncountably many real numbers, all of these machine models fail to compute almost all real numbers. In fact, the alphabet need not be fixed. It is enough that the alphabet is taken from some countable set of finite ...
8
A real number cannot be input into a Turing machine, since it is an infinite object. There are various models for providing a Turing machine with oracle access to real numbers, but these are by definition computable. You could imagine a Turing machine given a ZFC definition of a real number, and in that case it's undecidable. For example, given a Turing ...
8
Yes. There are. There is the real-RAM/BSS model mentioned in the other answer. The model has some issues and AFAIK there is not much research activity about it. Arguably, it is not a realistic model of computation. The more active notion of real computability is that of higher type computation model. The basic idea is that you define complexity for higher ...
8
The model you describe is known as the Blum-Shub-Smale (BSS) model (also Real RAM model) and indeed used to define complexity classes. Some interesting problems in this domain are the classes $P_R$, $NP_R$, and of course the question of whether $P_R$ = $NP_R$. By $P_R$ we mean the problem is polynomially decidable, $NP_R$ is the problem is polynomially ...
8
Yes, Rice's theorem for reals holds in every reasonable version of computable reals. I will first prove a certain theorem and a corollary, and explain what it has to do with computability later. Theorem: Suppose $p : \mathbb{R} \to \{0,1\}$ is a map and $a, b \in \mathbb{R}$ two reals such that $p(a) = 0$ and $p(b) = 1$. Then there exists a Cauchy sequence ...
7
Your idea does not work because a number represented in base $b$ with mantissa $m$ and exponent $e$ is the rational number $m \cdot b^{-e}$, thus your representation works precisely for rational numbers and no others. You cannot represent $\sqrt{2}$ for instance. There is a whole branch of computable mathematics which deals with exact real arithmetic. Many ...
7
There are many effective Rational Number implementations but one that has been proposed many times and can even handle some irrationals quite well is Continued Fractions. Quote from Continued Fractions by Darren C. Collins: Theorem 5-1. - The continued fraction expression of a real number is finite if and only if the real number is rational. Quote ...
7
Type-2 Turing machines are not more powerful than ordinary Turing machines in the sense that any map $\mathbb{N} \to \mathbb{N}$ that can be computed by a type-2 machine can also be computed by an ordinary machine. To see this, suppose a type-2 Turing machine $T$ computes a function $f : \mathbb{N} \to \mathbb{N}$. We can convert $T$ to an ordinary machine ...
6
Let's assume that the numbers $a_1,\ldots,a_n$ are integers, so that the problem is in NP for any fixed $f$. We construct a polynomial $f$ so that the problem is NP-complete, by reduction from vertex cover in cubic graphs ($3$-regular graphs). Let the instance of cubic vertex cover consist of a cubic graph $G=(V,E)$ and an integer $m$, and let $|V| = n$. ...
6
Here is the question: You are given a list of length $n+1$ which contains the numbers $1,\ldots,n$, one of them appearing twice (and the rest appearing once). Find the number which appears twice. The sum of numbers from $1$ to $n$ is $\frac{n(n+1)}{2}$, so if you subtract that from the sum of the list you get the number appearing twice.
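That argument transcribes directly into code (names are mine):

```python
def find_duplicate(lst):
    """lst has length n + 1 and contains 1..n with one value repeated.

    Subtracting the known sum n(n+1)/2 from the list's sum leaves
    exactly the repeated value.
    """
    n = len(lst) - 1
    return sum(lst) - n * (n + 1) // 2

print(find_duplicate([1, 2, 3, 2, 4]))  # → 2
```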
5
There are a number of "exact real" suggestions in the comments (e.g. continued fractions, linear fractional transformations, etc). The typical catch is that while you can compute answers to a formula, equality is often undecidable. However, if you're just interested in algebraic numbers, then you're in luck: The theory of real closed fields is complete, o-...
5
In a nutshell: Printing a random non-computable real is a meaningless task, for precise technical reasons. The meaningful problem is to print non-computable numbers precisely identified by some unique property. But these cannot be printed by any program precisely because they are not computable. Using randomness in the hope of printing by chance the ...
5
Turing machines, in the classical sense, decide languages of finite strings over finite alphabets. Your logical language has uncountably many constant symbols so you can't even write down all the formulas as strings, let alone ask a Turing machine to decide things about the collection of formulas.
5
The continued fraction algorithm is easy enough to implement. The first step is to compute the continued fraction of the input $x = [c_0;c_1,\ldots]$. You start with $x_0 = x$, and use the recurrence $c_i = \lfloor x_i \rfloor$, $x_{i+1} = 1/(x_i - c_i)$. You stop when $x_i - c_i$ is "small enough". The next step is to compute the convergent of the continued ...
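A minimal sketch of both steps as described above (the stopping tolerance `eps` stands in for "small enough"):

```python
from math import floor, sqrt

def continued_fraction(x, n_terms=10, eps=1e-9):
    """Expand x as [c0; c1, ...] via c_i = floor(x_i), x_{i+1} = 1/(x_i - c_i)."""
    terms = []
    for _ in range(n_terms):
        c = floor(x)
        terms.append(c)
        frac = x - c
        if frac < eps:  # remainder "small enough": stop
            break
        x = 1 / frac
    return terms

def convergent(terms):
    """Collapse [c0; c1, ...] back into a single fraction, folding from the right."""
    num, den = terms[-1], 1
    for c in reversed(terms[:-1]):
        num, den = c * num + den, num
    return num, den

print(continued_fraction(sqrt(2), 6))   # → [1, 2, 2, 2, 2, 2]
print(convergent([1, 2, 2, 2, 2, 2]))  # → (99, 70), close to sqrt(2)
```

Note the floating-point caveat from the answer: each division amplifies rounding error, so only the first several terms of an expansion computed from a float are trustworthy.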
5
We can compute $$f(\theta) = -N\log (2\pi)-\frac{1}{2}\sum_{i=1}^N (\langle x_i,\theta \rangle -y_i)^2.$$ Expanding this out, we get that $f(\theta)$ is some quadratic form: $$f(\theta) = \theta' A \theta + v'\theta + C,$$ where $A$ is symmetric. The next step is to get rid of the linear term. Let $\theta = \psi + \epsilon$. Then $$\theta' A \theta + \...

4

If you use the operators $\{+,-,\times,/\}$ (i.e., you don't include the power operator), then all of your problems are likely decidable. Testing equality with zero: for instance, let's consider $L = \mathbb{Z} \cup \{\pi\}$. Then you can treat $\pi$ as a formal symbol, so that each leaf is a polynomial in $\mathbb{Z}[\pi]$ (e.g., the integer 5 is the ...

4

This is a rather tricky question! As you seem to understand, the real issue is the presence of $\hat{}$. It is intimately related to a well known conjecture: Schanuel's conjecture, which states that, essentially, there are no non-trivial algebraic relationships between $\pi$ and $e$. The (expected) positive answer to this conjecture would give you a ...

4

The set of higher-order primitive recursive reals is essentially the class of functions $\mathbb{N}\rightarrow\mathbb{N}$ which can be represented by a term $\mathrm{Nat}\rightarrow\mathrm{Nat}$ in Gödel's system T. Since every such function is total, and every well-typed term in the system can be enumerated effectively, there is a relatively easy proof by ...

4

There are several issues with your question, but perhaps I can clarify some of them. First off, you assume $f(1) = 1.999...$ and also that no $x \in \mathbb{N}$ exists such that $f(x) = 2$, but that's a contradiction in terms because $1.999... = 2$ and thus $f(1) = 2$. Why does $1.999... = 2$? Well, there's an easy but not fulfilling answer and a more ...

3

Consider the following reasonable definition for a Turing machine computing an irrational number in $[0,1]$. A Turing machine computes an irrational $r \in [0,1]$ if, on input $n$, it outputs the first $n$ digits (after the decimal) of the binary representation of $r$. One can think of many extensions of this definition for probabilistic Turing machines. ...

3

You are looking for an optimal 1-dimensional k-means algorithm. The k-means objective function for partitioning the data $x_1, \ldots, x_n$ into $k$ sets $S = \{S_1, \ldots, S_k\}$ is $$\sum\limits_{i=1}^k \sum\limits_{x \in S_i}\lVert x - \mu_i \rVert^2$$ where $\mu_i$ is the mean of $S_i$ [1]. You can apply a dynamic programming algorithm to the ...
3
It's pretty clearly an algorithm according to my definition in the linked question. I think your real question is "what is the problem, if any, for which this algorithm is correct?" The answer would be something like "given some stuff and a number of iterations, output a value satisfying some condition with an accuracy related to the iteration count."
3
Original formulation (Originally, the OP was interested in the minimum absolute difference rather than the minimum non-zero absolute difference.) You are asking two different questions. For the first, consider the restricted version in which all numbers are rational, and the decision variant in which the problem is to decide whether your expression is at ...
3
Check this out! http://coq.io/opam/coq-markov.8.5.0.html. A library for Markov's inequality built on mathematical probability theory.
3
You can run any standard quantum algorithm on a real-amplitude quantum computer with one additional qubit and only constant-factor slowdown (or perhaps linear-factor slowdown considering loss of parallelism) by replacing each $a{+}bi$ in your unitary matrices by $\big(\begin{smallmatrix}a&-b\\b&a\end{smallmatrix}\big)$. Likewise, you can simulate a ...
Lossless CNN Channel Pruning via Gradient Resetting and Convolutional Re-parameterization2020-07-07 ${\displaystyle \cong }$ Channel pruning (a.k.a. filter pruning) aims to slim down a convolutional neural network (CNN) by reducing the width (i.e., numbers of output channels) of convolutional layers. However, as CNN's representational capacity depends on the width, doing so tends to degrade the performance. A traditional learning-based channel pruning paradigm applies a penalty on parameters to improve the robustness to pruning, but such a penalty may degrade the performance even before pruning. Inspired by the neurobiology research about the independence of remembering and forgetting, we propose to re-parameterize a CNN into the remembering parts and forgetting parts, where the former learn to maintain the performance and the latter learn for efficiency. By training the re-parameterized model using regular SGD on the former but a novel update rule with penalty gradients on the latter, we achieve structured sparsity, enabling us to equivalently convert the re-parameterized model into the original architecture with narrower layers. With our method, we can slim down a standard ResNet-50 with 76.15\% top-1 accuracy on ImageNet to a narrower one with only 43.9\% FLOPs and no accuracy drop. Code and models are released at https://github.com/DingXiaoH/ResRep. DMCP: Differentiable Markov Channel Pruning for Neural Networks2020-05-07 ${\displaystyle \cong }$ Recent works imply that the channel pruning can be regarded as searching optimal sub-structure from unpruned networks. However, existing works based on this observation require training and evaluating a large number of structures, which limits their application. In this paper, we propose a novel differentiable method for channel pruning, named Differentiable Markov Channel Pruning (DMCP), to efficiently search the optimal sub-structure. 
Our method is differentiable and can be directly optimized by gradient descent with respect to standard task loss and budget regularization (e.g. FLOPs constraint). In DMCP, we model the channel pruning as a Markov process, in which each state represents for retaining the corresponding channel during pruning, and transitions between states denote the pruning process. In the end, our method is able to implicitly select the proper number of channels in each layer by the Markov process with optimized transitions. To validate the effectiveness of our method, we perform extensive experiments on Imagenet with ResNet and MobilenetV2. Results show our method can achieve consistent improvement than state-of-the-art pruning methods in various FLOPs settings. The code is available at https://github.com/zx55/dmcp Lookahead: A Far-Sighted Alternative of Magnitude-based Pruning2020-02-12 ${\displaystyle \cong }$ Magnitude-based pruning is one of the simplest methods for pruning neural networks. Despite its simplicity, magnitude-based pruning and its variants demonstrated remarkable performances for pruning modern architectures. Based on the observation that magnitude-based pruning indeed minimizes the Frobenius distortion of a linear operator corresponding to a single layer, we develop a simple pruning method, coined lookahead pruning, by extending the single layer optimization to a multi-layer optimization. Our experimental results demonstrate that the proposed method consistently outperforms magnitude-based pruning on various networks, including VGG and ResNet, particularly in the high-sparsity regime. See https://github.com/alinlab/lookahead_pruning for codes. PruneNet: Channel Pruning via Global Importance2020-05-22 ${\displaystyle \cong }$ Channel pruning is one of the predominant approaches for accelerating deep neural networks. 
Most existing pruning methods either train from scratch with a sparsity inducing term such as group lasso, or prune redundant channels in a pretrained network and then fine tune the network. Both strategies suffer from some limitations: the use of group lasso is computationally expensive, difficult to converge and often suffers from worse behavior due to the regularization bias. The methods that start with a pretrained network either prune channels uniformly across the layers or prune channels based on the basic statistics of the network parameters. These approaches either ignore the fact that some CNN layers are more redundant than others or fail to adequately identify the level of redundancy in different layers. In this work, we investigate a simple-yet-effective method for pruning channels based on a computationally light-weight yet effective data driven optimization step that discovers the necessary width per layer. Experiments conducted on ILSVRC-$12$ confirm effectiveness of our approach. With non-uniform pruning across the layers on ResNet-$50$, we are able to match the FLOP reduction of state-of-the-art channel pruning results while achieving a $0.98\%$ higher accuracy. Further, we show that our pruned ResNet-$50$ network outperforms ResNet-$34$ and ResNet-$18$ networks, and that our pruned ResNet-$101$ outperforms ResNet-$50$. EZCrop: Energy-Zoned Channels for Robust Output Pruning2021-05-08 ${\displaystyle \cong }$ Recent results have revealed an interesting observation in a trained convolutional neural network (CNN), namely, the rank of a feature map channel matrix remains surprisingly constant despite the input images. This has led to an effective rank-based channel pruning algorithm, yet the constant rank phenomenon remains mysterious and unexplained. 
This work aims at demystifying and interpreting such rank behavior from a frequency-domain perspective, which as a bonus suggests an extremely efficient Fast Fourier Transform (FFT)-based metric for measuring channel importance without explicitly computing its rank. We achieve remarkable CNN channel pruning based on this analytically sound and computationally efficient metric and adopt it for repetitive pruning to demonstrate robustness via our scheme named Energy-Zoned Channels for Robust Output Pruning (EZCrop), which shows consistently better results than other state-of-the-art channel pruning methods. Neural Pruning via Growing Regularization2020-12-16 ${\displaystyle \cong }$ Regularization has long been utilized to learn sparsity in deep neural network pruning. However, its role is mainly explored in the small penalty strength regime. In this work, we extend its application to a new scenario where the regularization grows large gradually to tackle two central problems of pruning: pruning schedule and weight importance scoring. (1) The former topic is newly brought up in this work, which we find critical to the pruning performance while receives little research attention. Specifically, we propose an L2 regularization variant with rising penalty factors and show it can bring significant accuracy gains compared with its one-shot counterpart, even when the same weights are removed. (2) The growing penalty scheme also brings us an approach to exploit the Hessian information for more accurate pruning without knowing their specific values, thus not bothered by the common Hessian approximation problems. Empirically, the proposed algorithms are easy to implement and scalable to large datasets and networks in both structured and unstructured pruning. Their effectiveness is demonstrated with modern deep neural networks on the CIFAR and ImageNet datasets, achieving competitive results compared to many state-of-the-art algorithms. 
Our code and trained models are publicly available at https://github.com/mingsuntse/regularization-pruning. Play and Prune: Adaptive Filter Pruning for Deep Model Compression2019-05-11 ${\displaystyle \cong }$ While convolutional neural networks (CNN) have achieved impressive performance on various classification/recognition tasks, they typically consist of a massive number of parameters. This results in significant memory requirement as well as computational overheads. Consequently, there is a growing need for filter-level pruning approaches for compressing CNN based models that not only reduce the total number of parameters but reduce the overall computation as well. We present a new min-max framework for filter-level pruning of CNNs. Our framework, called Play and Prune (PP), jointly prunes and fine-tunes CNN model parameters, with an adaptive pruning rate, while maintaining the model's predictive performance. Our framework consists of two modules: (1) An adaptive filter pruning (AFP) module, which minimizes the number of filters in the model; and (2) A pruning rate controller (PRC) module, which maximizes the accuracy during pruning. Moreover, unlike most previous approaches, our approach allows directly specifying the desired error tolerance instead of pruning level. Our compressed models can be deployed at run-time, without requiring any special libraries or hardware. Our approach reduces the number of parameters of VGG-16 by an impressive factor of 17.5X, and number of FLOPS by 6.43X, with no loss of accuracy, significantly outperforming other state-of-the-art filter pruning methods. 
Exploiting Channel Similarity for Accelerating Deep Convolutional Neural Networks2019-08-06 ${\displaystyle \cong }$ To address the limitations of existing magnitude-based pruning algorithms in cases where model weights or activations are of large and similar magnitude, we propose a novel perspective to discover parameter redundancy among channels and accelerate deep CNNs via channel pruning. Precisely, we argue that channels revealing similar feature information have functional overlap and that most channels within each such similarity group can be removed without compromising model's representational power. After deriving an effective metric for evaluating channel similarity through probabilistic modeling, we introduce a pruning algorithm via hierarchical clustering of channels. In particular, the proposed algorithm does not rely on sparsity training techniques or complex data-driven optimization and can be directly applied to pre-trained models. Extensive experiments on benchmark datasets strongly demonstrate the superior acceleration performance of our approach over prior arts. On ImageNet, our pruned ResNet-50 with 30% FLOPs reduced outperforms the baseline model. Deep Model Compression via Deep Reinforcement Learning2019-12-04 ${\displaystyle \cong }$ Besides accuracy, the storage of convolutional neural networks (CNN) models is another important factor considering limited hardware resources in practical applications. For example, autonomous driving requires the design of accurate yet fast CNN for low latency in object detection and classification. To fulfill the need, we aim at obtaining CNN models with both high testing accuracy and small size/storage to address resource constraints in many embedded systems. In particular, this paper focuses on proposing a generic reinforcement learning based model compression approach in a two-stage compression pipeline: pruning and quantization. 
The first stage of compression, i.e., pruning, is achieved via exploiting deep reinforcement learning (DRL) to co-learn the accuracy of CNN models updated after layer-wise channel pruning on a testing dataset and the FLOPs, number of floating point operations in each layer, updated after kernel-wise variational pruning using information dropout. Layer-wise channel pruning is to remove unimportant kernels from the input channel dimension while kernel-wise variational pruning is to remove unimportant kernels from the 2D-kernel dimensions, namely, height and width. The second stage, i.e., quantization, is achieved via a similar DRL approach but focuses on obtaining the optimal weight bits for individual layers. We further conduct experimental results on CIFAR-10 and ImageNet datasets. For the CIFAR-10 dataset, the proposed method can reduce the size of VGGNet by 9x from 20.04MB to 2.2MB with 0.2% accuracy increase. For the ImageNet dataset, the proposed method can reduce the size of VGG-16 by 33x from 138MB to 4.14MB with no accuracy loss. Gradual Channel Pruning while Training using Feature Relevance Scores for Convolutional Neural Networks2020-04-29 ${\displaystyle \cong }$ The enormous inference cost of deep neural networks can be scaled down by network compression. Pruning is one of the predominant approaches used for deep network compression. However, existing pruning techniques have one or more of the following limitations: 1) Additional energy cost on top of the compute heavy training stage due to pruning and fine-tuning stages, 2) Layer-wise pruning based on the statistics of a particular layer, ignoring the effect of error propagation in the network, 3) Lack of an efficient estimate for determining the important channels globally, 4) Unstructured pruning requires specialized hardware for effective use.
To address all the above issues, we present a simple-yet-effective gradual channel pruning while training methodology using a novel data-driven metric referred to as feature relevance score. The proposed technique gets rid of the additional retraining cycles by pruning the least important channels in a structured fashion at fixed intervals during the actual training phase. Feature relevance scores help in efficiently evaluating the contribution of each channel towards the discriminative power of the network. We demonstrate the effectiveness of the proposed methodology on architectures such as VGG and ResNet using datasets such as CIFAR-10, CIFAR-100 and ImageNet, and successfully achieve significant model compression while trading off less than $1\%$ accuracy. Notably on CIFAR-10 dataset trained on ResNet-110, our approach achieves $2.4\times$ compression and a $56\%$ reduction in FLOPs with an accuracy drop of $0.01\%$ compared to the unpruned network. Network Pruning via Annealing and Direct Sparsity Control2020-07-26 ${\displaystyle \cong }$ Artificial neural networks (ANNs) especially deep convolutional networks are very popular these days and have been proved to successfully offer quite reliable solutions to many vision problems. However, the use of deep neural networks is widely impeded by their intensive computational and memory cost. In this paper, we propose a novel efficient network pruning method that is suitable for both non-structured and structured channel-level pruning. Our proposed method tightens a sparsity constraint by gradually removing network parameters or filter channels based on a criterion and a schedule. The attractive fact that the network size keeps dropping throughout the iterations makes it suitable for the pruning of any untrained or pre-trained network. Because our method uses a $L_0$ constraint instead of the $L_1$ penalty, it does not introduce any bias in the training parameters or filter channels. 
Furthermore, the $L_0$ constraint makes it easy to directly specify the desired sparsity level during the network pruning process. Finally, experimental validation on extensive synthetic and real vision datasets shows that the proposed method obtains better or competitive performance compared to other state-of-the-art network pruning methods. BWCP: Probabilistic Learning-to-Prune Channels for ConvNets via Batch Whitening2021-05-13 ${\displaystyle \cong }$ This work presents a probabilistic channel pruning method to accelerate Convolutional Neural Networks (CNNs). Previous pruning methods often zero out unimportant channels in training in a deterministic manner, which reduces CNN's learning capacity and results in suboptimal performance. To address this problem, we develop a probability-based pruning algorithm, called batch whitening channel pruning (BWCP), which can stochastically discard unimportant channels by modeling the probability of a channel being activated. BWCP has several merits. (1) It simultaneously trains and prunes CNNs from scratch in a probabilistic way, exploring larger network space than deterministic methods. (2) BWCP is empowered by the proposed batch whitening tool, which is able to empirically and theoretically increase the activation probability of useful channels while keeping unimportant channels unchanged without adding any extra parameters and computational cost in inference. (3) Extensive experiments on CIFAR-10, CIFAR-100, and ImageNet with various network architectures show that BWCP outperforms its counterparts by achieving better accuracy given limited computational budgets. For example, ResNet50 pruned by BWCP has only 0.70\% Top-1 accuracy drop on ImageNet, while reducing 43.1\% FLOPs of the plain ResNet50. C2S2: Cost-aware Channel Sparse Selection for Progressive Network Pruning2019-04-06 ${\displaystyle \cong }$ This paper describes a channel-selection approach for simplifying deep neural networks.
Specifically, we propose a new type of generic network layer, called pruning layer, to seamlessly augment a given pre-trained model for compression. Each pruning layer, comprising $1 \times 1$ depth-wise kernels, is represented with a dual format: one is real-valued and the other is binary. The former enables a two-phase optimization process of network pruning to operate with an end-to-end differentiable network, and the latter yields the mask information for channel selection. Our method progressively performs the pruning task layer-wise, and achieves channel selection according to a sparsity criterion to favor pruning more channels. We also develop a cost-aware mechanism to prevent the compression from sacrificing the expected network performance. Our results for compressing several benchmark deep networks on image classification and semantic segmentation are comparable to those by state-of-the-art. PCONV: The Missing but Desirable Sparsity in DNN Weight Pruning for Real-time Execution on Mobile Devices2020-03-04 ${\displaystyle \cong }$ Model compression techniques on Deep Neural Network (DNN) have been widely acknowledged as an effective way to achieve acceleration on a variety of platforms, and DNN weight pruning is a straightforward and effective method. There are currently two mainstreams of pruning methods representing two extremes of pruning regularity: non-structured, fine-grained pruning can achieve high sparsity and accuracy, but is not hardware friendly; structured, coarse-grained pruning exploits hardware-efficient structures in pruning, but suffers from accuracy drop when the pruning rate is high. In this paper, we introduce PCONV, comprising a new sparsity dimension, -- fine-grained pruning patterns inside the coarse-grained structures. PCONV comprises two types of sparsities, Sparse Convolution Patterns (SCP) which is generated from intra-convolution kernel pruning and connectivity sparsity generated from inter-convolution kernel pruning. 
Essentially, SCP enhances accuracy due to its special vision properties, and connectivity sparsity increases pruning rate while maintaining balanced workload on filter computation. To deploy PCONV, we develop a novel compiler-assisted DNN inference framework and execute PCONV models in real-time without accuracy compromise, which cannot be achieved in prior work. Our experimental results show that, PCONV outperforms three state-of-art end-to-end DNN frameworks, TensorFlow-Lite, TVM, and Alibaba Mobile Neural Network with speedup up to 39.2x, 11.4x, and 6.3x, respectively, with no accuracy loss. Mobile devices can achieve real-time inference on large-scale DNNs. DARB: A Density-Aware Regular-Block Pruning for Deep Neural Networks2019-11-20 ${\displaystyle \cong }$ The rapidly growing parameter volume of deep neural networks (DNNs) hinders the artificial intelligence applications on resource constrained devices, such as mobile and wearable devices. Neural network pruning, as one of the mainstream model compression techniques, is under extensive study to reduce the number of parameters and computations. In contrast to irregular pruning that incurs high index storage and decoding overhead, structured pruning techniques have been proposed as the promising solutions. However, prior studies on structured pruning tackle the problem mainly from the perspective of facilitating hardware implementation, without analyzing the characteristics of sparse neural networks. The neglect on the study of sparse neural networks causes inefficient trade-off between regularity and pruning ratio. Consequently, the potential of structurally pruning neural networks is not sufficiently mined. In this work, we examine the structural characteristics of the irregularly pruned weight matrices, such as the diverse redundancy of different rows, the sensitivity of different rows to pruning, and the positional characteristics of retained weights. 
By leveraging the gained insights as a guidance, we first propose the novel block-max weight masking (BMWM) method, which can effectively retain the salient weights while imposing high regularity to the weight matrix. As a further optimization, we propose a density-adaptive regular-block (DARB) pruning that outperforms prior structured pruning work with high pruning ratio and decoding efficiency. Our experimental results show that DARB can achieve 13$\times$ to 25$\times$ pruning ratio, which are 2.8$\times$ to 4.3$\times$ improvements than the state-of-the-art counterparts on multiple neural network models and tasks. Moreover, DARB can achieve 14.3$\times$ decoding efficiency than block pruning with higher pruning ratio. Pruning Filters while Training for Efficiently Optimizing Deep Learning Networks2020-03-05 ${\displaystyle \cong }$ Modern deep networks have millions to billions of parameters, which leads to high memory and energy requirements during training as well as during inference on resource-constrained edge devices. Consequently, pruning techniques have been proposed that remove less significant weights in deep networks, thereby reducing their memory and computational requirements. Pruning is usually performed after training the original network, and is followed by further retraining to compensate for the accuracy loss incurred during pruning. The prune-and-retrain procedure is repeated iteratively until an optimum tradeoff between accuracy and efficiency is reached. However, such iterative retraining adds to the overall training complexity of the network. In this work, we propose a dynamic pruning-while-training procedure, wherein we prune filters of the convolutional layers of a deep network during training itself, thereby precluding the need for separate retraining. We evaluate our dynamic pruning-while-training approach with three different pre-existing pruning strategies, viz. 
mean activation-based pruning, random pruning, and L1 normalization-based pruning. Our results for VGG-16 trained on CIFAR10 shows that L1 normalization provides the best performance among all the techniques explored in this work with less than 1% drop in accuracy after pruning 80% of the filters compared to the original network. We further evaluated the L1 normalization based pruning mechanism on CIFAR100. Results indicate that pruning while training yields a compressed network with almost no accuracy loss after pruning 50% of the filters compared to the original network and ~5% loss for high pruning rates (>80%). The proposed pruning methodology yields 41% reduction in the number of computations and memory accesses during training for CIFAR10, CIFAR100 and ImageNet compared to training with retraining for 10 epochs . Exploring Weight Importance and Hessian Bias in Model Pruning2020-06-18 ${\displaystyle \cong }$ Model pruning is an essential procedure for building compact and computationally-efficient machine learning models. A key feature of a good pruning algorithm is that it accurately quantifies the relative importance of the model weights. While model pruning has a rich history, we still don't have a full grasp of the pruning mechanics even for relatively simple problems involving linear models or shallow neural nets. In this work, we provide a principled exploration of pruning by building on a natural notion of importance. For linear models, we show that this notion of importance is captured by covariance scaling which connects to the well-known Hessian-based pruning. We then derive asymptotic formulas that allow us to precisely compare the performance of different pruning methods. For neural networks, we demonstrate that the importance can be at odds with larger magnitudes and proper initialization is critical for magnitude-based pruning. 
Specifically, we identify settings in which weights become more important despite becoming smaller, which in turn leads to a catastrophic failure of magnitude-based pruning. Our results also elucidate that implicit regularization in the form of Hessian structure has a catalytic role in identifying the important weights, which dictate the pruning performance. Paying more attention to snapshots of Iterative Pruning: Improving Model Compression via Ensemble Distillation2020-06-19 ${\displaystyle \cong }$ Network pruning is one of the most dominant methods for reducing the heavy inference cost of deep neural networks. Existing methods often iteratively prune networks to attain high compression ratio without incurring significant loss in performance. However, we argue that conventional methods for retraining pruned networks (i.e., using small, fixed learning rate) are inadequate as they completely ignore the benefits from snapshots of iterative pruning. In this work, we show that strong ensembles can be constructed from snapshots of iterative pruning, which achieve competitive performance and vary in network structure. Furthermore, we present simple, general and effective pipeline that generates strong ensembles of networks during pruning with large learning rate restarting, and utilizes knowledge distillation with those ensembles to improve the predictive power of compact models. In standard image classification benchmarks such as CIFAR and Tiny-Imagenet, we advance state-of-the-art pruning ratio of structured pruning by integrating simple l1-norm filters pruning into our pipeline. Specifically, we reduce 75-80% of total parameters and 65-70% MACs of numerous variants of ResNet architectures while having comparable or better performance than that of original networks. Code associate with this paper is made publicly available at https://github.com/lehduong/ginp. 
Gate Decorator: Global Filter Pruning Method for Accelerating Deep Convolutional Neural Networks2019-09-17 ${\displaystyle \cong }$ Filter pruning is one of the most effective ways to accelerate and compress convolutional neural networks (CNNs). In this work, we propose a global filter pruning algorithm called Gate Decorator, which transforms a vanilla CNN module by multiplying its output by the channel-wise scaling factors, i.e. gate. When the scaling factor is set to zero, it is equivalent to removing the corresponding filter. We use Taylor expansion to estimate the change in the loss function caused by setting the scaling factor to zero and use the estimation for the global filter importance ranking. Then we prune the network by removing those unimportant filters. After pruning, we merge all the scaling factors into its original module, so no special operations or structures are introduced. Moreover, we propose an iterative pruning framework called Tick-Tock to improve pruning accuracy. The extensive experiments demonstrate the effectiveness of our approaches. For example, we achieve the state-of-the-art pruning ratio on ResNet-56 by reducing 70% FLOPs without noticeable loss in accuracy. For ResNet-50 on ImageNet, our pruned model with 40% FLOPs reduction outperforms the baseline model by 0.31% in top-1 accuracy. Various datasets are used, including CIFAR-10, CIFAR-100, CUB-200, ImageNet ILSVRC-12 and PASCAL VOC 2011. Code is available at github.com/youzhonghui/gate-decorator-pruning Movement Pruning: Adaptive Sparsity by Fine-Tuning2020-05-15 ${\displaystyle \cong }$ Magnitude pruning is a widely used strategy for reducing model size in pure supervised learning; however, it is less effective in the transfer learning regime that has become standard for state-of-the-art natural language processing applications. We propose the use of movement pruning, a simple, deterministic first-order weight pruning method that is more adaptive to pretrained model fine-tuning. 
We give mathematical foundations to the method and compare it to existing zeroth- and first-order pruning methods. Experiments show that when pruning large pretrained language models, movement pruning shows significant improvements in high-sparsity regimes. When combined with distillation, the approach achieves minimal accuracy loss with down to only 3% of the model parameters.
|
|
# Question on definition of Dirichlet to Neumann operator
Assume $$\Omega$$ is an open, bounded subset of $$\mathbb R^3$$ with a $$C^2$$ boundary $$\partial \Omega = \Gamma$$. For $$f \in H^{1/2}(\Gamma)$$, let $$F \in H^1(\Omega)$$ denote the weak solution of the Dirichlet problem:
$$\begin{cases} \Delta F=0 & \text{in } \Omega\\ F=f & \text{on } \Gamma \\ \end{cases}$$
and let $$Nf:=\frac{\partial F}{\partial \eta}$$ be the Neumann data on $$\Gamma$$. This defines a continuous map $$N: H^{1/2}(\Gamma) \to H_{*}^{-1/2}(\Gamma) \equiv \{g \in H^{-1/2}(\Gamma) \vert \int_{\Gamma} g=0\}$$
I have trouble understanding why $$Nf \in H_{*}^{-1/2}(\Gamma)$$. What I can see is that, $$F \in H^1(\Omega)$$ implies $$F \in H^{1/2}(\Gamma)$$ and moreover $$\int_{\Gamma} Nf=\int_{\Gamma} \frac{\partial F}{\partial \eta}=\int_{\Omega} \Delta F =0$$. But I don't understand why $$\frac{\partial F}{\partial \eta} \in H^{-1/2}(\Gamma)$$
EDIT: An idea just popped up inside my head, but I'd like if someone could confirm if it's correct. Applying Green's formula we find:
$$\int_{\Omega} {\vert \nabla F \vert }^2= \int_{\Gamma} F\frac{\partial F}{\partial \eta}=\int_{\Gamma} f Nf$$ for every $$f\in H^{1/2}(\Gamma)$$. Now since $$\nabla F \in L^2(\Omega)$$ we deduce that $$Nf \in H^{-1/2}(\Gamma)$$ by duality.
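To push this duality idea through, one can test against an arbitrary $$g \in H^{1/2}(\Gamma)$$ rather than only against $$f$$ itself (a sketch; here $$G \in H^1(\Omega)$$ denotes the harmonic extension of $$g$$, and $$C$$ is the constant from the trace/extension theorem):

```latex
\langle Nf, g \rangle := \int_{\Omega} \nabla F \cdot \nabla G,
\qquad
\vert \langle Nf, g \rangle \vert
\le \Vert \nabla F \Vert_{L^2(\Omega)} \, \Vert \nabla G \Vert_{L^2(\Omega)}
\le C \, \Vert f \Vert_{H^{1/2}(\Gamma)} \, \Vert g \Vert_{H^{1/2}(\Gamma)},
```

so $$g \mapsto \langle Nf, g \rangle$$ is a bounded linear functional on $$H^{1/2}(\Gamma)$$, which is exactly the statement $$Nf \in H^{-1/2}(\Gamma)$$.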
I'm not familiar with this operator, so I apologize in advance if I missed something from the definition. Any help or hint is much appreciated.
Thanks!
$$N$$ is a first order pseudo-differential operator. Hence it maps Sobolev space $$H^s(\Gamma)$$ into $$H^{s-1}(\Gamma)$$. Furthermore, $$N$$ is elliptic, its principal symbol is the square root of the principal symbol of the Laplacian. A reference is Appendix C in PDE II of Michael E. Taylor.
|
|
# Uniform convergence of infinite series
Suppose $f$ is a holomorphic function (not necessarily bounded) on $\mathbb{D}$ such that $f(0) = 0$. Prove that the infinite series $\sum_{n=1}^\infty f(z^n)$ converges uniformly on compact subsets of $\mathbb{D}$.
I met this problem on today's qual. Here is what I have so far: since $f(0) = 0$, we can write $f(z) = z^m h(z)$ for some positive integer $m$ and some holomorphic $h$. Then $f(z^n) = z^{nm}h(z^n)$. We might then use Cauchy's criterion for uniform convergence to finish the proof.
-
What you did seems correct. – Davide Giraudo Aug 10 '12 at 20:27
An alternative way is the following: let $M_R:=\sup_{|z|\leq R}|f(z)|$. For a fixed $R<1$, define $g(z):=\frac 1{1+M_R}f(Rz)$, from the open unit disk to itself. Then by Schwarz lemma, we have $|g(z)|\leq |z|$ for all $z\in D$, hence $|f(Rz)|\leq (1+M_R)|z|$ for all $z\in D$. We get that $|f(z)|\leq \frac{1+M_R}R|z|$ for all $z$ in the closed ball of center $0$ and radius $R$, which shows that the series $\sum_nf(z^n)$ is normally convergent on this set.
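Spelling out the last step of that argument (a sketch; write $C_R := \frac{1+M_R}{R}$ for the constant obtained above, so that $|f(w)| \le C_R |w|$ whenever $|w| \le R$):

```latex
\sup_{|z|\le R} \bigl\vert f(z^n) \bigr\vert \;\le\; C_R \sup_{|z|\le R} |z|^n \;=\; C_R R^n,
\qquad
\sum_{n=1}^{\infty} C_R R^n \;=\; \frac{C_R R}{1-R} \;<\; \infty,
```

so the series converges uniformly on $\{|z| \le R\}$ by the Weierstrass M-test, and every compact subset of $\mathbb{D}$ is contained in some such closed disk.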
More simply, for a fixed $R\in (0,1)$, we have $$\sup_{|z|\leq R}|f(z^n)|\leq \sup_{|z|\leq R^n}|f(z)|.$$ Then we use continuity at $0$ of $f$.
Thanks Davide, this is a really nice idea. Although $f$ is not necessarily bounded on $D$, it is bounded on every compact subset of $D$. – Hongshan Li Aug 11 '12 at 3:26
|
|
# vector and iterators problem
## Recommended Posts
Hello, I'm trying to solve a problem in my text Accelerated C++ (chapter 5, exercise 6) but having no luck. Can anyone help out? (Sorry, but the book has no associated forum.)

I have a program that allows a user to input student names and their associated grades. This is stored in a vector of Student_info, where Student_info is a structure containing the necessary elements to store the name etc. The exercise asks me to try different ways of removing those students who failed from the vector. One way is to inspect and test each element in the vector and to insert (vector.insert(...)) all the pass structures at the beginning. Then the vector is resized to the number of pass structures copied, which ultimately results in all the fail grades being chopped off. Whether this is an efficient method or not is the aim of the exercise.

I tried to implement this algorithm, however the results were different. As a result I decided not to resize the vector and instead simply view the result of the inserting as a way to test the algorithm. This is my code for the extract function:
```cpp
using std::vector;

// students is a record of students and their grades.
void extract_fails(vector<Student_info>& students)
{
    std::vector<Student_info>::iterator iter = students.begin();
    // counts how many pass grades have been inserted at the front
    std::vector<Student_info>::size_type counter = 0;

    while (iter != students.end())
    {
        if (!fgrade(*iter)) // fgrade simply tests if a grade is less than 70 percent
        {
            // if this is a pass grade, insert it before the beginning
            students.insert(students.begin(), iter, iter);
            counter++;
        }
        iter++; // go to the next student record in the vector
    }

    // resize
    //students.resize(counter); // removed for testing purposes
}
```
If I enter the following student data:

Adam fail
Barry fail
Charlie pass

it should result in the following vector:

Charlie pass
Adam fail
Barry fail

but it shows:

Adam fail
Charlie pass
Barry fail

Could anyone comment as to why it's doing this? Surely if I use students.insert(students.begin(), ...) it should insert the requested elements BEFORE the beginning of the vector. But it doesn't. Any help would be greatly appreciated.
##### Share on other sites
You have two problems. First of all:
```cpp
students.insert(students.begin(), iter, iter);
```
Will not insert anything into the vector. The iterators should form a half-open range, which means the first iterator points to the first element in the range and the second iterator points to one element beyond the last element in the range. By specifying that the first element is also one element beyond the last element you are specifying an empty range (all elements following iter (first iterator passed) except for those elements from iter (second iterator passed) onwards). You should probably use the single element insert, which requires you to pass a value rather than two iterators.
Secondly if your insert did ever insert anything into the vector it would invalidate iter.
Σnigma
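Putting the two fixes above together (a single-element insert, and not trusting iterators across an insert) the loop might look like this. This is only a sketch: the Student_info struct and fgrade predicate below are minimal stand-ins for the book's versions, not the originals.

```cpp
#include <string>
#include <vector>

// Minimal stand-in for the book's Student_info record (hypothetical fields).
struct Student_info {
    std::string name;
    double grade;
};

// Stand-in for the book's fgrade: true if the student failed.
bool fgrade(const Student_info& s) { return s.grade < 70; }

void extract_fails(std::vector<Student_info>& students) {
    std::vector<Student_info>::size_type count = 0;  // passing records copied
    // Index-based traversal: insert() may invalidate iterators, but an
    // index stays meaningful as long as we account for the shift.
    for (std::vector<Student_info>::size_type i = 0; i != students.size(); ++i) {
        if (!fgrade(students[i])) {
            Student_info passing = students[i];  // copy before insert reallocates
            // Single-element insert: one passing record goes to the front.
            students.insert(students.begin(), passing);
            ++count;
            ++i;  // everything shifted right by one; skip the element just seen
        }
    }
    students.resize(count);  // keep only the passing copies at the front
}
```

With the input from the original post (Adam fail, Barry fail, Charlie pass) this leaves a vector containing only Charlie. The inner `++i` matters: after each front insert, every remaining element has shifted right by one position.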
##### Share on other sites
better solution: pointer
iterator = vector.insert()
Kuphryn
##### Share on other sites
Indeed, the insert() member function will invalidate iterators when something actually gets inserted. That's because everything gets conceptually "shifted over" to make room for the inserted element(s). The container makes no guarantees what will happen when you try to use an invalidated iterator - welcome to undefined behaviour land. For a std::vector in particular, it will often seem to be OK - it will pick up the data that was newly shifted into the same spot - but will sometimes explode (the insert may trigger a resize, which would mean the whole shebang gets copied off to somewhere else in memory, and the old chunk of memory no longer necessarily even belongs to your process).
You are encouraged to think about the performance ramifications of that movement of elements, since "whether this is an efficient method or not is the aim of the exercise".
The standard library provides algorithms std::remove and std::remove_if, which can be used for a task like this. If you "remove" failing grades, these algorithms will put (copies of) the passing grades at the start of the vector, but will not change the vector's size, and make no guarantees about the contents of the elements past that point. (You are encouraged to think about how it may help performance to not make that guarantee; specifically about how the algorithm might be implemented similar to what you are currently doing and in light of the above discussion).
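A version built on std::remove_if, as the post above suggests, might look like this (a sketch; the Student_info struct and fgrade predicate are again minimal stand-ins for the book's versions):

```cpp
#include <algorithm>
#include <string>
#include <vector>

// Minimal stand-in for the book's Student_info record (hypothetical fields).
struct Student_info {
    std::string name;
    double grade;
};

// Stand-in for the book's fgrade: true if the student failed.
bool fgrade(const Student_info& s) { return s.grade < 70; }

void extract_fails(std::vector<Student_info>& students) {
    // remove_if shifts the passing records to the front and returns an
    // iterator one past the last of them; it does NOT shrink the vector.
    // erase() then trims the unspecified leftovers in one shot.
    students.erase(std::remove_if(students.begin(), students.end(), fgrade),
                   students.end());
}
```

Unlike the insert-at-front approach, this keeps the passing records in their original relative order, and each surviving element is moved at most once.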
|
|
# $\tan{(30^°)}$ value
The exact value of the tan function when the angle of a right triangle is $30$ degrees is called the tan of $30$ degrees. It’s written as $\tan{(30^°)}$ in mathematical form according to the sexagesimal system.
$\tan{(30^°)} \,=\, \dfrac{1}{\sqrt{3}}$
The exact value of tan of angle $30$ degrees is $\dfrac{1}{\sqrt{3}}$ in fraction form. It is an irrational number and is equal to $0.5773502691\ldots$ in decimal form.
## Alternative form
$\tan{(30^°)}$ is written as $\tan{\Big(\dfrac{\pi}{6}\Big)}$ in circular system and also written as $\tan{\Bigg(33\dfrac{1}{3}^g\Bigg)}$ in centesimal system alternatively.
$(1) \,\,\,$ $\tan{\Big(\dfrac{\pi}{6}\Big)} \,=\, \dfrac{1}{\sqrt{3}}$
$(2) \,\,\,$ $\tan{\Bigg(33\dfrac{1}{3}^g\Bigg)} \,=\, \dfrac{1}{\sqrt{3}}$
### Proof
You have learned the $\tan{\Big(\dfrac{\pi}{6}\Big)}$ value, and now it’s time to learn how the $\tan{(30^°)}$ value is derived in trigonometry.
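As a quick sketch of that derivation: in a right triangle with angles $30^°$, $60^°$ and $90^°$ (one half of an equilateral triangle with side length $2$), the side opposite the $30^°$ angle has length $1$ and the adjacent side has length $\sqrt{3}$. Therefore

```latex
\tan{(30^°)} \,=\, \dfrac{\text{length of opposite side}}{\text{length of adjacent side}} \,=\, \dfrac{1}{\sqrt{3}}
```

and rationalizing the denominator gives the equivalent form $\dfrac{\sqrt{3}}{3}$.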
|
|
Lugaru's Epsilon Programmer's Editor 14.04
Previous Up Next Epsilon Command Line Flags Getting Started General Concepts
### File Inventory
Epsilon consists of the following files:
epsilon.exe
The Epsilon for Windows executable program.
epsilonc.exe
The Epsilon executable program for Windows Console mode.
eel.exe
Epsilon's compiler. You need this program if you wish to add new commands to Epsilon or modify existing ones.
eel_lib.dll
Under Windows, Epsilon's compiler eel.exe requires this file. Epsilon itself also uses this file when you compile from within the editor.
icu*
These files help provide Unicode support under Windows.
inherit.exe and inherit.pif
Epsilon for Windows uses these files to execute another program and capture its output.
edoc.hlp
This Windows help file provides help on Epsilon.
epshlp.dll
Epsilon's WinHelp file communicates with a running copy of 32-bit Epsilon so it can display current key bindings or variable values and let you modify variables from the help file. It uses this file to do that.
sendeps.exe
The Epsilon for Windows installer uses this file to help create desktop shortcuts and Send To menu entries. See Running Epsilon via a Shortcut.
VisEpsil.dll
Epsilon for Windows includes this Developer Studio extension that lets Developer Studio pass all file-opening requests to Epsilon.
mspellcmd.exe
Epsilon's speller uses this helper program to get suggestions from the MicroSpell speller.
econfig.exe
Epsilon for Windows runs this program when you use the configure-epsilon command.
mshelp2.vbs
Epsilon for Windows uses this script to display MS Help 2 files.
fixpath2.exe
Epsilon's Windows installer and configuration program use this program to add or remove directories from the system's PATH.
The secure shell (ssh) and secure file transfer (scp) features in Epsilon for Windows use these helper programs to interact with Cygwin's ssh program.
bscquery.exe
Epsilon for Windows uses this program to help support .bsc source code browser files.
owitheps.dll
This shell extension can be used to put an Open With Epsilon menu item on File Explorer's context menu, but by default Epsilon does this in a way that requires only setting registry entries, so this DLL is unused.
The installation program puts the following files in the main Epsilon directory, normally \Program Files\Eps14 under Windows and /opt/epsilon14.04 under Unix. (Epsilon for macOS keeps these files in various directories located within its app bundle, following Apple's requirements.)
epsilon-v14.sta
This file contains all of Epsilon's commands. Epsilon needs this file in order to run. If you customize Epsilon, this file changes. The name includes Epsilon's major version.
original.sta
This file contains a copy of the original version of epsilon-v14.sta at the time of installation.
edoc
Epsilon's on-line documentation file. Without this file, Epsilon can't provide basic help on commands and variables.
info\epsilon.inf
Epsilon's on-line manual, in Info format.
info\dir
A default top-level Info directory, for non-Unix systems that may lack one. See Info mode for details.
lhelp\*
This directory contains files for the HTML version of Epsilon's documentation. The lhelp helper program reads them.
eteach
Epsilon's tutorial. Epsilon needs this file to give the tutorial (see Epsilon Tutorial). Otherwise, Epsilon does not need this file to run.
keychart.pdf
A printable sheet listing most of Epsilon's default key assignments.
colclass.txt
One-line descriptions of each of the different color classes in Epsilon. The set-color command reads this file.
brief.kbd
The brief-keyboard command loads this file. It contains the bindings of all the keys used in Brief emulation, written in Epsilon's command file format.
epsilon.kbd
The epsilon-keyboard command loads this file. It contains the standard Epsilon key bindings for all the keys that are different under Brief emulation, written in Epsilon's command file format.
epsilon.mnu, brief.mnu
Non-GUI versions of Epsilon use one of these files to construct the menu bar.
gui.mnu, cua.mnu
GUI versions of Epsilon use one of these files to construct the menu bar.
latex.env
The tex-environment command in LaTeX mode (Alt-Shift-E) gets its list of environments from this file. You can add new environments by editing this file.
filter.txt
A file defining the options for the filter control of Epsilon's Common File Open/Save dialogs under Windows.
This file describes changes in recent versions of Epsilon. You can use the Alt-x release-notes command to read it.
epsout.e
The import-customizations command uses this file.
uninstall.exe
If you used the Windows-based installer, you can uninstall Epsilon by running this program.
install.log
The Windows-based installer creates this file to indicate which files it installed.
*.h
The installation program copies a number of "include files" to the subdirectory "include" within Epsilon's main directory. These header files are used if you decide to compile an Epsilon extension or add-on written in its EEL extension language.
eel.h
Epsilon's standard header file, for use with the EEL compiler.
codes.h
Another standard header file, with numeric codes. The eel.h file includes this one automatically.
*.e
These files contain source code in EEL to all Epsilon's commands. The installation program copies them to the subdirectory "source" within Epsilon's main directory.
epsilon.e
This file loads all the other files and sets up Epsilon.
samplemode.e
This example file demonstrates various aspects of how to define a new mode.
makefile
You can use this file, along with a "make" utility program, to help recompile the above Epsilon source files. It lists the source files and provides command lines to compile them.
The directory "changes" within Epsilon's main directory contains files that document new features added in Epsilon 9 and earlier versions. See the online documentation for details on changes in more recent versions. Other files in this directory may be used to help incorporate old customizations, when updating from Epsilon 7 or earlier. See Updating from an Old Version for information on updating to a new version of Epsilon.
# Western Australia Studentized Range Distribution Table Pdf
## The Distribution of the Ratio in a Single Normal Sample
### Harter Tables of Range and Studentized Range
Studentized Range Distribution Table Guide (Peng Zeng). A Table of Upper Quantile Points of the Studentized Range Distribution: this is a table of upper quantile points of the studentized range distribution, values that are required by several multiple-comparison procedures. • Table A.4 Critical Values for the χ² Distribution • Table A.5 Critical Values for the F Distribution • Table A.6 Critical Values for the Studentized Range.
### Studentized Range Distribution CDF · Issue #158
7.4.7.1. Tukey's method (itl.nist.gov). The Studentized Range Distribution is a function of q, k, and df, where k is the number of groups of means, and df is the degrees of freedom. If $\phi(z)$ is the standard normal PDF, and $\Phi(z)$ is the standard normal CDF. The studentized range test is defined, and the level and the power of the proposed range test are calculated. In Section 4, the numerical calculation of the level and the power is discussed.
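The CDF mentioned above can be made concrete in the df → ∞ limit (known σ), where the studentized range reduces to the plain range of k standard normals: P(W ≤ q) = k ∫ φ(z) [Φ(z) − Φ(z − q)]^{k−1} dz. A minimal pure-Python sketch using Simpson's rule; the integration bounds ±8 and the step count are arbitrary choices for illustration, not from the source:

```python
import math

def phi(z):
    """Standard normal PDF."""
    return math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)

def Phi(z):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def range_cdf(q, k, n=4000):
    """P(range of k iid standard normals <= q): the df -> infinity limit
    of the studentized range CDF.  Simpson's rule on [-8, 8] (n even)."""
    if q <= 0:
        return 0.0
    lo, hi = -8.0, 8.0
    h = (hi - lo) / n
    total = 0.0
    for i in range(n + 1):
        z = lo + i * h
        w = 1.0 if i in (0, n) else (4.0 if i % 2 else 2.0)
        total += w * phi(z) * (Phi(z) - Phi(z - q)) ** (k - 1)
    return k * total * h / 3.0
```

For k = 2 this agrees with the closed form 2Φ(q/√2) − 1, which makes a handy self-check.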
Critical Values of the Studentized Range Statistic (Tukey's HSD Critical Values). A table of α-percentage points of the studentized range statistic with v = k(n−1) d.f., arising from k normal populations with unknown variance σ² and with non-identical means, is given in Table 2.
Does that mean that I can somehow use the good ol' t-distribution and modify it to get the studentized range distribution? In general: yes. MathCAD doesn't attempt to be a full-up, fancy stats package; rather, it provides all the basics so that you can create all the special variants you need. A description is given of the computation of tables of percentage points of the range, moments of the range, and percentage points of the studentized range for samples from a normal population.
A general-purpose distribution with a variety of shapes controlled by a shape parameter. The R-distribution with parameter c is the distribution of the correlation coefficient of a random sample drawn from a bivariate normal distribution. The mean of the standard distribution is always zero and, as the sample size grows, the distribution's mass concentrates more closely about zero. Tukey's procedure is based on the studentized range distribution: if $W_1, \ldots, W_k$ is a normal random sample and $s$ is an independent estimate of $\sigma$, then $q = (\max_i W_i - \min_i W_i)/s$ follows the studentized range distribution.
Given an accurate quantile from the student t distribution, only a few arithmetic operations yield a studentized range quantile with accuracy sufficient for most data analytic and other practical purposes; in fact, the accuracy is nearly as good as that of the studentized range table that has been in use since 1960. This approach also yields methods for interpolating studentized range.
PDF: Microsoft Excel has some functionality in terms of basic statistics; however, it lacks distribution functions built around the studentized range (Q). The developed Excel add-in introduces two such functions.
658 Journal of the American Statistical Association, September 1978: Upper .01 Points of the Studentized Augmented Range Distribution with k and ν Degrees of Freedom. How to Calculate the Score for a T Distribution: when you look at the t-distribution tables, you'll see that you need to know the "df." This means "degrees of freedom" and is just the sample size minus one.
Studentized Range Distribution: the studentized range $$q$$. The Tukey method uses the studentized range distribution. Suppose we have $$r$$ independent observations $$y_1, \, \ldots, \, y_r$$ from a normal distribution with mean $$\mu$$ and variance $$\sigma^2$$. Let $$w$$ be the range for this set, i.e., the maximum minus the minimum. Now suppose that we have an estimate $$s^2$$ of the variance $$\sigma^2$$. RANGE applies to the distribution of the studentized range for n group means. PARTRANGE applies to the distribution of the partitioned studentized range.
A numerical example is given of the analysis of variance applied to yields per cabbage. After having concluded from an F-test that the varieties show significant differences, a discussion is given of a new method to decide which varieties are different. The t-test, though in frequent use, gives wrong results. Euphytica 1 (1952): 112-122, "The Use of the Studentized Range in Connection with an Analysis of Variance", M. Keuls, Institute of Horticultural Plant Breeding, Wageningen.
View Test Prep - Studentized Range Distribution Table Guide from ISYE 2027 at Georgia Institute Of Technology. Peng Zeng @ Auburn University, April 03, 2009: Upper Percentiles of Studentized Range
Title: Statistics Plain and Simple, 3rd ed. Author: Sherri L. Jackson. Table: Q scores for Tukey's method, α = 0.05:

| df \ k | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 1 | 18.0 | 27.0 | 32.8 | 37.1 | 40.4 | 43.1 | 45.4 | 47.4 | 49.1 |
| 2 | 6.08 | 8.33 | 9.80 | 10.88 | 11.73 | 12.43 | 13.03 | 13.54 | 13.99 |
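To illustrate how a q critical value from a table like this is used, here is a hedged sketch of Tukey's HSD on made-up data (three groups of four observations each); the critical value q(0.05; k = 3, df = 9) ≈ 3.95 is taken from a fuller published table and should be checked against your own table:

```python
import math
from itertools import combinations

# Hypothetical data: k = 3 groups, n = 4 observations each
groups = {
    "A": [21.0, 23.5, 22.1, 24.0],
    "B": [25.2, 26.1, 24.8, 25.9],
    "C": [21.5, 22.0, 23.1, 21.8],
}
k = len(groups)
n = 4
N = k * n
means = {g: sum(v) / n for g, v in groups.items()}
# within-group mean square (MSE) with N - k degrees of freedom
sse = sum((x - means[g]) ** 2 for g, v in groups.items() for x in v)
mse = sse / (N - k)
q_crit = 3.95  # q(0.05; 3, 9), looked up from a studentized range table
hsd = q_crit * math.sqrt(mse / n)  # honest significant difference
for a, b in combinations(groups, 2):
    diff = abs(means[a] - means[b])
    verdict = "significant" if diff > hsd else "not significant"
    print(f"{a} vs {b}: |diff| = {diff:.2f}, HSD = {hsd:.2f} -> {verdict}")
```

Pairs whose mean difference exceeds HSD are declared different at the 5% level; with equal group sizes this is the classic Tukey procedure.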
### Table: Q scores for Tukey's method (math.ucalgary.ca)
Continuous Statistical Distributions — SciPy v0.14.0.
### Studentized range distribution Revolvy
Harter Tables of Range and Studentized Range. The test statistic, the studentized range, is a distribution of the range(s) of a varying number, p, of normally distributed items, where the range is the difference between the highest and lowest values of the items and s is an independent estimate of the standard deviation. Excel support for the studentized range (Q) distribution does not allow for the Tukey honest significant difference, Student-Newman-Keuls (S-N-K), or Ryan-Einot-Gabriel-Welsch Q (REGWQ) tests to be carried out.
Peng Zeng @ Auburn University, April 03, 2009: Upper Percentiles of Studentized Range Distribution. The upper percentile $q_{m,d,\alpha}$ satisfies $P(q_{m,d} \ge q_{m,d,\alpha}) = \alpha$.
The Studentized Range Distribution Description. Functions of the distribution of the studentized range, R/s, where R is the range of a standard normal sample and df*s^2 is independently distributed as chi-squared with df degrees of freedom, see pchisq.
The distribution of q, the "studentized range." The "studentized range" with k and r degrees of freedom is the range (i.e. maximum − minimum) of a set of k independent observations. "The Distribution of the Ratio, in a Single Normal Sample, of Range to Standard Deviation", by H. A. David, H. O. Hartley and E. S. Pearson.
QPROB(q, k, df, tails, iter, interp) = estimated p-value for the studentized range q distribution at q for the distribution with k groups, degrees of freedom df; iter is the number of iterations used to calculate the p-value from the table of critical values (default 40).
Tabled are the values q(L; k; ν), below which lies a proportion L of the studentized range distribution based on k populations and ν degrees of freedom. There are three tables, one for each of L = 0.90, 0.95, and 0.99.
## Studentized range Wikipedia
Continuous Statistical Distributions — SciPy v0.14.0. Tukey: where Q = (1 − α) percentile of the studentized range distribution, with r the number of factor levels and n_T − r degrees of freedom. Fisher: where t = (1 − α/2) percentile of the Student's t-distribution with n_T − r degrees of freedom.
### 1 Overview The University of Texas at Dallas
Studentized range (revolvy.com). Differences are tested using $q_{\alpha,k-1,\nu}$, the studentized range distribution quantile for k − 1 means, instead of $q_{\alpha,k,\nu}$. If the F-test is not significant, make no comparisons and no pairwise differences can be declared significant. The distribution function of the maximum of c statistics having studentized range distributions of r sample means, obtained from a random sample of size n from normal homoscedastic distributions, was adapted and implemented in Pascal.
The proposed method is to use the studentized range distribution in conjunction with a pairwise test statistic for which the two-sided test using the actual null distribution has P-values
Critical Values $\chi^2_{\nu,\alpha}$ for the Chi-Square Distribution. Table C.4. Critical Values $F_{\nu_1,\nu_2,\alpha}$ for the F-Distribution. Table C.5. Critical Values $q_{k,\nu,\alpha}$ for the Studentized Range Distribution. Table C.6. One-Sided Multivariate t Critical Values $t_{k,\nu,\rho,\alpha}$ for Common Correlation ρ = 0.5. Table C.7. Two-Sided Multivariate t Critical Values $|t|_{k,\nu,\rho,\alpha}$ for Common Correlation ρ = 0.5. Table C.8. Studentized Maximum
Studentized t (Also called Q). Number of means, df: p (two tailed). This program calculates areas in the tails of the Studentized Range Distribution. First specify the value of the studentized range statistic (Q). Q is computed as follows: Some programs compute the value of the studentized t by including a √2 in the denominator. If so, then you should multiply the t by 2.77 to convert it to Q.
Newman-Keuls Test and Tukey Test. This distribution, called the Studentized Range or Student's q, is similar to a t-distribution. It corresponds to the sampling distribution of the largest difference between two means coming from a set of A means (when A = 2 the q distribution corresponds to the usual Student's t). In practice, one computes a criterion denoted $q_{observed}$ which is evaluated against a critical value.
Table 6: Values That Capture Specified Upper-Tail F Curve Areas. Table 7: Critical Values of q for the Studentized Range Distribution. Table 8: Upper-Tail Areas for Chi-Square Distributions.
For a given distribution and sample size, the likely studentized range (shown in the last three columns of Table 1) can be calculated using $(F^{-1}_{\mathrm{upper}} - F^{-1}_{\mathrm{lower}})/\sigma$, where $F^{-1}_{\mathrm{upper}}$ and $F^{-1}_{\mathrm{lower}}$ are the relevant percentile points. Note that the likely studentized range increases with sample size and as the distribution becomes more leptokurtic. The studentized range was used to assess for significant differences in kurtosis on each independent variable among the ethnicities (p > .001), indicating that there was not a violation of
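The likely-studentized-range computation quoted above, $(F^{-1}_{\mathrm{upper}} - F^{-1}_{\mathrm{lower}})/\sigma$, can be sketched for a standard normal parent (σ = 1). The choice of the 0.5% and 99.5% points as the "relevant percentile points" is an assumption for illustration, and the inverse CDF is obtained by simple bisection:

```python
import math

def Phi(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def Phi_inv(p, lo=-10.0, hi=10.0):
    """Inverse standard normal CDF by bisection (p strictly in (0, 1))."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if Phi(mid) < p:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

sigma = 1.0  # standard normal parent
likely_range = (Phi_inv(0.995) - Phi_inv(0.005)) / sigma
```

For a normal parent this gives roughly 5.15, consistent with the remark that heavier-tailed (leptokurtic) parents yield larger likely studentized ranges.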
### Critical Values of the Studentized Range (David Lane)
The use of the „studentized range” in connection with an analysis of variance.
Statistics with JMP Hypothesis Tests ANOVA and. Critical Values of the Studentized Range. Click on the appropriate degrees of freedom.
### R The Studentized Range Distribution ETH Zurich
In statistics, the studentized range is the difference between the largest and smallest data in a sample measured in units of sample standard deviations, so long as …
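The definition just quoted, the range measured in units of the sample standard deviation, is a one-liner; a small sketch in plain Python (Bessel-corrected s):

```python
import math

def studentized_range(xs):
    """q = (max - min) / s, with s the sample standard deviation (n - 1)."""
    n = len(xs)
    mean = sum(xs) / n
    s = math.sqrt(sum((x - mean) ** 2 for x in xs) / (n - 1))
    return (max(xs) - min(xs)) / s
```

For [1, 2, 3, 4, 5], s = √2.5 and the range is 4, so q = 4/√2.5 ≈ 2.53; the statistic is invariant under shifting and rescaling of the data.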
tables of the inverse Studentized Range distribution, such as this table at Duke University. Next, we establish a Tukey test statistic from our sample columns to compare with the appropriate critical value of
This table contains critical values $Q_{\alpha,k,v}$ for the Studentized Range distribution defined by $P(Q \ge Q_{\alpha,k,v}) = \alpha$, where k is the number of degrees of freedom in the numerator (the number of treatment groups) and v is the number of degrees of freedom in the denominator ($s^2$). The ANOVA procedure is designed to handle balanced data (that is, data with equal numbers of observations for every combination of the classification factors), whereas the GLM procedure can analyze both balanced and unbalanced data.
Journal ArticleDOI
# Determination of the π-charge distribution of the DMe-DCNQI molecule in (DMe-DCNQI)2M, M=Li, Ag, and Cu
12 Apr 2006, Journal of Low Temperature Physics (Kluwer Academic Publishers-Plenum Publishers), Vol. 142, Iss. 3, pp. 633-637
Abstract: Solid state high-resolution NMR of 1H and 13C along with 15N is analyzed to investigate the electronic states of the charge transfer salts (DMe-DCNQI)2M (M = Li, Ag, and Cu). We determined the spin/charge distribution in a DMe-DCNQI molecule of the Li-salt from the Knight shifts at each atom on the molecule. It is found that the obtained charge distribution is similar to the theoretical prediction. The charge density on the DCNQI molecules of the Ag-salt is found to be smaller by 20% than in the Li-salt, which could be an origin of its differences from the Li-salt. This result is consistent with first-principles calculations (Miyazaki and Terakura, Phys. Rev. B 54, 10452, 1996).
Topics: Charge density (56%), Knight shift (51%)
##### Citations
Journal ArticleDOI
Abstract: We review recent progress in understanding the different spatial broken symmetries that occur in the normal states of the family of charge-transfer solids (CTS) that exhibit superconductivity (SC), and discuss how this knowledge gives insight to the mechanism of the unconventional SC in these systems. A great variety of spatial broken symmetries occur in the semiconducting states proximate to SC in the CTS, including charge ordering, antiferromagnetism and spin-density wave, spin-Peierls state and the quantum spin liquid. We show that a unified theory of the diverse broken symmetry states necessarily requires explicit incorporation of strong electron–electron interactions and lattice discreteness, and most importantly, the correct bandfilling of one-quarter, as opposed to the effective half-filled band picture that is often employed. Uniquely in the quarter-filled band, there is a very strong tendency to form nearest neighbor spin–singlets, in both one and two dimensions. The spin–singlet in the quarter-filled band is necessarily charge-disproportionated, with charge-rich pairs of nearest neighbor sites separated by charge-poor pairs of sites in the insulating state. Thus the tendency to spin–singlets, a quantum effect, drives a commensurate charge-order in the correlated quarter-filled band. This charge-ordered spin–singlet, which we label as a paired-electron crystal (PEC), is different from and competes with both the antiferromagnetic (AFM) state and the Wigner crystal (WC) of single electrons. Further, unlike these classical broken symmetries, the PEC is characterized by a spin gap. The tendency to the PEC in two dimensions is enhanced by lattice frustration. The concept of the PEC mirrors parallel development of the idea of a density wave of Cooper pairs in the superconducting high $T_c$ cuprates, where also the existence of a charge-ordered state in between the antiferromagnetic and the superconducting phase has now been confirmed.
Following this characterization of the spatial broken symmetries, we critically reexamine spin-fluctuation and resonating valence bond theories of frustration-driven SC within half-filled band Hubbard and Hubbard–Heisenberg Hamiltonians for the superconducting CTS. We present numerical evidence for the absence of SC within the half-filled band correlated-electron Hamiltonians for any degree of frustration. We then develop a valence-bond theory of SC within which the superconducting state is reached by the destabilization of the PEC by additional pressure-induced lattice frustration that makes the spin–singlets mobile. We present limited but accurate numerical evidence for the existence of such a charge order–SC duality. Our proposed mechanism for SC is the same for CTS in which the proximate semiconducting state is antiferromagnetic instead of charge-ordered, with the only difference that SC in the former is generated via a fluctuating spin–singlet state as opposed to static PEC. In Appendix B we point out that several classes of unconventional superconductors share the same band-filling of one-quarter with the superconducting CTS. In many of these materials there are also indications of similar intertwined charge order and SC. We discuss the transferability of our valence-bond theory of SC to these systems.
26 citations
Journal ArticleDOI
Abstract: We review recent progress in understanding the different spatial broken symmetries that occur in the normal states of the family of charge-transfer solids (CTS) that exhibit superconductivity (SC), and discuss how this knowledge gives insight to the mechanism of the unconventional SC in these systems. We show that a unified theory of the diverse broken symmetry states necessarily requires explicit incorporation of strong electron-electron interactions and lattice discreteness, and most importantly, the correct bandfilling of one-quarter. Uniquely in the quarter-filled band, there is a very strong tendency to form nearest neighbor spin-singlets, in both one and two dimensions. The tendency to spin-singlets, a quantum effect, drives a commensurate charge-order in the correlated quarter-filled band. This charge-ordered spin-singlet, which we label as a paired-electron crystal (PEC), is different from and competes with both the antiferromagnetic state and the Wigner crystal of single electrons. Further, unlike these classical broken symmetries, the PEC is characterized by a spin gap. The tendency to the PEC in two dimensions is enhanced by lattice frustration. Following this characterization of the spatial broken symmetries, we critically reexamine spin-fluctuation and resonating valence bond theories of frustration-driven SC within half-filled band Hubbard and Hubbard-Heisenberg Hamiltonians for the superconducting CTS. We develop a valence-bond theory of SC within which the superconducting state is reached by the destabilization of the PEC by additional pressure-induced lattice frustration that makes the spin-singlets mobile. Our proposed mechanism for SC is the same for CTS in which the proximate semiconducting state is antiferromagnetic instead of charge-ordered, with the only difference that SC in the former is generated via a fluctuating spin-singlet state as opposed to static PEC.
18 citations
##### References
Journal ArticleDOI
Abstract: The paramagnetic susceptibility $\chi_p^e$ of conduction electron spins is isolated experimentally from the total magnetic susceptibility in metallic lithium and sodium by studying the intensity of the conduction-electron spin resonances. The absolute intensity of absorption is calibrated by comparison with the nuclear resonance of the metal nuclei in the same sample and at the same frequency, the two resonances being observed merely by changing the static magnetic field. In this manner $\chi_p^e$ is measured in terms of the nuclear static susceptibility, $\chi_p^n$, which in turn can be calculated accurately from the Langevin-Debye formula. A narrow band modulation technique gives improved signal to noise over our earlier work. The values of $\chi_p^e$ are $(2.08 \pm 0.1) \times 10^{-6}$ cgs volume units for lithium at 300 °K and $(0.95 \pm 0.1) \times 10^{-6}$ cgs volume units for sodium at 79 °K, in rather good agreement with the theory of Pines and Bohm, but in substantial disagreement with the simple Pauli model, or the results of Sampson and Seitz. Experimental precision does not permit conclusions to be drawn about the diamagnetism of conduction electrons.
115 citations
Journal ArticleDOI
Abstract: Structural and physical properties of the Cu salts of a series of π-acceptors N,N′-dicyanobenzoquinonediimines (DCNQIs) are described. The most notable feature of this system is that 3d electrons in Cu interact with pπ electrons in DCNQI near the Fermi level. This unique feature has provided a lot of interesting solid state properties: the Mott transition triggered by the Peierls transition, the pressure-induced metal-insulator transition, the metal-insulator-metal (reentrant) transition, the three-dimensional Fermi surface, the anomalous isotope effects, the antiferromagnetic transition, the weak ferromagnetism, and the electron mass enhancement. The aim of this account is to give an overview of this unique pπ-d system.
100 citations
Journal ArticleDOI
Abstract: We have studied the structural instabilities of 2,5-(MR-DCNQI)2Ag (R = CH3, Cl or Br; M = CH3) using low-temperature X-ray diffuse scattering and diffraction techniques. In (DM-DCNQI)2Ag (DM = dimethyl) we observe quasi-1D 4k_F diffuse scattering at room temperature, which transforms into satellite reflections at Q_4kF = (0, 0, 1/2) below T_4kF ≃ 100 K. A second transition with limited quasi-1D precursor scattering occurs at T_2kF ≃ 83 K, with satellite reflections at Q_2kF = (0, 0, 1/4). In relation with the transport and magnetic properties it is suggested that the upper transition localizes the charge carriers while the lower one leads to a spin-Peierls ground state. In (MCl-DCNQI)2Ag and (MBr-DCNQI)2Ag the quasi-1D 4k_F and 2k_F diffuse scatterings do not condense and there is no transition, probably because of the inherent disorder caused by the MCl and MBr substituents. Supplementary diffuse scattering features and irradiation effects are also reported.
40m Log, Page 167 of 344
ID | Date | Author | Type | Category | Subject
10977 | Wed Feb 4 21:05:24 2015 | manasa | Update | General | RF amplifier for ALS
The components of the RF amplifier box are in place. The RF amplifier box has been mounted on the IOO rack and the front panel connections have been labeled. Attached is the photo of how things look in the inside for future reference.
Sometime in the next few days the box will be pulled out to replace the panel mount SMA barrels in the front with insulated ones.
Attachment 1: RFampBox.png
10978 | Wed Feb 4 21:09:43 2015 | manasa | Update | General | X endtable work scheduled tomorrow
The X end fiber setup will be put together tomorrow morning. Let me know if there are any concerns.
Quote: It is certain that we have space issues at the X end that have been preventing us from sticking in a lens to couple light into the fiber. The only way out is to install a platform on the table where we can mount the lens. I have attached a photo of how things look at the X end (attachment 1) and also a drawing of the platform which can hold the lens (attachment 2). Additional support to the raised platform will be added depending on how much space we can clear up on the table by moving the clamping forks of the doubler. Steve and I have been able to gather parts that can be put together into something similar to what is shown in the drawing. Proposed modifications to the X end table: 1. The side panels of the table enclosure will come out while putting in the new platform. 2. The clamping forks for the doubling crystal will be moved. Let me know of any concerns about the proposed solution.
10979 | Thu Feb 5 04:35:14 2015 | diego | Update | LSC | CARM Transition to REFL11 using CM_SLOW Path
[Diego, Eric]
Tonight was a sad night... We continued to pursue our strategy, but with very poor results:
• before doing anything, we made sure we had a good initial configuration: we renormalized the arm powers, retuned the X arm green beatnote, did extensive ASS alignment;
• since the beginning of the night we faced a very uncooperative PRMI, which caused a huge number of locklosses, often just by itself, without even managing to reduce the MICH offset before reducing the CARM one;
• we had to reduce the PRCL gain to -0.002 in order to acquire PRMI lock, but neither keeping it there nor restoring it to -0.004 once lock was acquired improved the PRMI stability at all;
• we also tweaked a bit the PRCL and MICH UGF servos (namely, their frequencies to ~80 Hz and ~40 Hz respectively) and that seemed to help earlier during the night, but not much longer;
• we only managed to transition CARM to REFL11 via CM SLOW twice;
• the first time we lost lock almost immediately, probably because of a non-optimal offset between CARM A and B;
• the second time we managed to stay there a little longer, but then some spike in the PRCL loop and/or the MICH loop hitting the rails threw us out of lock (see the lockloss plot);
• both times we transitioned at arm power ~18;
• during the night we used an increased analog ASDC whitening gain, as from Eric's elog here http://nodus.ligo.caltech.edu:8080/40m/10972 ; even with this fix, though, MICH is still often hitting the rails and causing the lock losses;
• the conclusion for tonight is that we need to figure what is going on with the PRMI...
Attachment 1: 4Feb2015_Transition_CARM_REFL11_CM_SLOW_AP_18.png
10981 | Thu Feb 5 15:21:25 2015 | manasa | Update | General | X endtable work
[EricG, Manasa]
We were at the X end today trying to couple AUX X light into the fiber.
The proposed plan still did not give a good beampath. The last steering mirror before the fiber coupler was sticking out of the table enclosure. I tried a few other options and the maximum coupling that I could get was ~10%.
I am working on plan C now; which would be to use fixed mount mirrors and steer the beam to the space created by Koji near the IR trans path and use a set of lenses instead of a single lens. I will elog more details after some modematching calculations.
We moved one of the clamps for the doubling crystal to make space. Also, the NPRO current was reduced during this work.
I reset things to how they were before I touched the table. I ensured that the green power was still the same (~3mW) after the doubler and that it could lock to the arm in TEM00.
10982 | Fri Feb 6 03:21:17 2015 | diego | Update | LSC | CARM Transition to REFL11 using CM_SLOW Path
[Diego, Jenne]
We kept struggling with the PRMI, although it was a little better than yesterday:
• we retuned the X Green beatnote;
• we managed to reach lower CARM offsets than yesterday night, but we still can't keep lock long enough to perform a smooth transition to CM SLOW/REFL11;
• we tweaked MICH a bit:
• the ELP in FM8 now is always on, because it seems to help;
• we tried using a new FM1 1,1:0,0 instead of FM2 1:0 because we felt we needed a little more gain at low frequencies, but unfortunately this didn't change MICH's behaviour much;
• now, after catching PRMI lock, the MICH limiter is raised to 30k (in the script), as a possible solution for the railing problem; the down/relock scripts take care of resetting it to 10k while not locked/locking;
So, still no exciting news, but PRMI lock seems to be improving a little.
10983 | Fri Feb 6 10:03:23 2015 | Steve | Update | PEM | dusty days
Quote: Yesterday morning was dusty. I wonder why? The PRM sus damping was restored this morning.
Yesterday afternoon at 4 the dust count peaked at 70,000 counts.
Manasa's allergy was bad at the X-end yesterday. What is going on?
There was no wind and CES neighbors did not do anything.
Attachment 1: duststorm.png
10984 | Fri Feb 6 15:29:29 2015 | ericq | Update | CDS | Network Shenanigans
Just now we had another EPICS freeze. The network was still up; i.e. I could ssh to chiara and fb, who both seemed to be working fine.
I could ping c1lsc successfully, but ssh just hung. fb's dmesg had some daqd segfault messages, so I telnet'ed to daqd and shut it down. Soon after, EPICS came back, but this is not necessarily because of the daqd restart...
10985 | Fri Feb 6 18:06:18 2015 | manasa | Update | General | Fiber Optic module for FOL
I pulled out the Fiber Optic Module for FOL from the rack inside the PSL table enclosure and modified it. The beat PDs were moved into the box to avoid breaking the fiber pigtail input to the PD.
The box has 3 input FC/APC connectors (PSL and AUX lasers) and 2 output FC/APC connectors (10% of the beatnote for the AUX lasers).
Attachment shows what is inside the box. The box will again go back on the rack inside the PSL enclosure.
Attachment 1: FOLfiberModule.png
10987 | Sat Feb 7 21:30:45 2015 | Jenne | Update | LSC | PRC aligned
I'm leaving the PRC aligned and locked. Feel free to unlock it, or do whatever with the IFO.
10989 | Mon Feb 9 08:40:49 2015 | Steve | Update | SUS | recent earthquake 4.9
A Baja 4.9 M earthquake tripped the suspensions, except ETMX. Sus damping recovered. MC is locking.
Attachment 1: M4.9Baja.png
10990 | Mon Feb 9 17:23:17 2015 | diego | Update | Computer Scripts / Programs | New laptops
I forgot to elog about these ones, my bad... The new/updated laptops are giada, viviana and paola; paola is already in the lab, while giada and viviana are in the control room waiting for a new home. The Pool of Names Wiki page has already been updated to reflect the changes.
10991 | Mon Feb 9 17:47:17 2015 | diego | Update | LSC | CM servo & AO path status
I wrote the script with the recipe we used, using the Yarm and AS55 on the IN2 of the CM board; however, the steps where the offset should be reduced are not completely deterministic, as we saw that the initial offset (and, therefore, the following ones) could change because of different states we were in. In the script I tried to "servo" the offset using C1:LSC-POY11_I_MON as the reference, but in the comments I wrote the actual values we used during our best test; the main points of the recipe are:
• misalign the Xarm and the recycling mirrors;
• setting up CARM_B for POY11 locking and enabling it;
• setting up CARM_A for CM_SLOW;
• setting up the CM_SLOW filter bank, with only FM1 and FM4 enabled;
• setting up the CARM filter bank: FM1 FM2 FM6 triggered, only FM3 and FM5 on; usual CARM gain = 0.006;
• setting up CARM actuating on MC2;
• turn off the violin filter FM6 for MC2;
• setting up the default configuration for the Common Mode Servo and the Mode Cleaner Servo; along with all the initial parameters, here is where the initial offset is set;
• turn on the CARM output and, then, enable LSC mode;
• wait until usual POY11 lock is acquired and, a bit later, transition from CARM_B to CARM_A;
• then, the actual CM_SLOW recipe:
• CM_AO_GAIN = 6 dB;
• SUS-MC2_LSC FM6 on (the 300:80 filter);
• CM_REFL2_GAIN = 18 dB;
• servo CM_REFL_OFFSET;
• CM_AO_GAIN = 9 dB;
• CM_REFL2_GAIN = 21 dB;
• servo CM_REFL_OFFSET;
• CM_REFL2_GAIN = 24 dB;
• servo CM_REFL_OFFSET;
• CM_REFL2_GAIN = 27 dB;
• servo CM_REFL_OFFSET;
• CM_REFL2_GAIN = 31 dB;
• servo CM_REFL_OFFSET;
• CM_AO_GAIN = 6 dB;
• SUS-MC2_LSC FM7 on (the :300 compensating filter);
I tried the procedure and it seems fine, as it did during the tries Q and I made; however, since it touches many things in many places, one should be careful about which state the IFO is into, before trying it.
The script is in scripts/CM/CM_Servo_OneArm_CARM_ON.py and in the SVN.
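The stepped sequence above lends itself to being written as data rather than copy-pasted lines. Here is a minimal sketch of that portion only, with caput() as a stand-in for real EPICS writes (e.g. via the ezca library) and servo_offset() a hypothetical helper that would trim C1:LSC-CM_REFL_OFFSET against C1:LSC-POY11_I_MON; this is an illustration, not the actual CM_Servo_OneArm_CARM_ON.py:

```python
# Sketch of the CM_SLOW gain-stepping sequence from the recipe above.
# caput is a stand-in for real EPICS access; servo_offset() is a
# hypothetical helper that would average C1:LSC-POY11_I_MON and step
# C1:LSC-CM_REFL_OFFSET to null it.

channels = {}  # fake EPICS database, for this sketch only

def caput(chan, value):
    channels[chan] = value

def servo_offset():
    # placeholder for the offset-servoing step in the recipe
    pass

# (AO gain dB, REFL2 gain dB) pairs, in the order listed above;
# None means "leave the AO gain where it is"
steps = [(6, 18), (9, 21), (None, 24), (None, 27), (None, 31)]

for ao_gain, refl2_gain in steps:
    if ao_gain is not None:
        caput("C1:LSC-CM_AO_GAIN", ao_gain)
    caput("C1:LSC-CM_REFL2_GAIN", refl2_gain)
    servo_offset()

caput("C1:LSC-CM_AO_GAIN", 6)  # final step before the :300 filter

print(channels["C1:LSC-CM_REFL2_GAIN"])  # -> 31, last REFL2 gain applied
```

Keeping the gain pairs in a list makes the ramp easy to tweak without touching the control flow.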
10992 | Tue Feb 10 02:40:54 2015 | Jenne | Update | LSC | Some locking thoughts on PRMI
[EricQ, Jenne]
We wanted to make the PRMI lock more stable tonight, which would hopefully allow us to hold lock much longer. Some success, but nothing out-of-this-world.
We realized late last week that we haven't been using the whitening for the ASDC and POPDC signals, which are combined to make the MICH error signal. ASDC whitening is on, and seems great. POPDC whitening (even if turned on after lock is acquired) seems to make the PRMI lock more fussy. I need to look at this tomorrow, to see if we're saturating anything when the whitening is engaged for POPDC.
One of the annoying things about losing the PRMI lock (when CARM and DARM have kept ALS lock) is that the UGF servos wander off, so you can't just reacquire the lock. I have added triggering to the UGF servo input, so that if the cavity is unlocked (really, untriggered), the UGF servo input gets a zero, and so isn't integrating up to infinity. It might need a brief "wait" in there, since any flashes allow signal through, and those can add up over time if it takes a while for the PRMI to relock. UGF screens reflect this new change.
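The input-zeroing idea can be sketched in a few lines. This is a toy model only, not the actual RCG implementation; the gain, cycle structure, and flag name are illustrative:

```python
# Sketch of UGF-servo input gating: when the cavity is untriggered,
# feed the servo a zero so its integrator does not wind up while
# waiting for the PRMI to relock. Names are illustrative, not the
# actual RCG block names.

class GatedIntegrator:
    def __init__(self, gain=1.0):
        self.gain = gain
        self.state = 0.0

    def step(self, error, triggered):
        # zero the input (not the state) when untriggered, so the
        # servo holds its last value instead of integrating noise
        if not triggered:
            error = 0.0
        self.state += self.gain * error
        return self.state

servo = GatedIntegrator(gain=0.1)
# locked for 3 cycles, unlocked (large flash signals) for 3, locked again
for err, trig in [(1, True), (1, True), (1, True),
                  (5, False), (5, False), (5, False),
                  (1, True)]:
    out = servo.step(err, trig)
print(out)  # ~0.4: the integrator ignored the unlocked flashes
```

The brief "wait" mentioned above would map to requiring the trigger to stay true for some number of cycles before un-gating the input.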
10994 | Tue Feb 10 03:09:02 2015 | ericq | Update | LSC | Some locking thoughts on PRMI
Unfortunately, we only had one good CARM offset reduction to powers of about 25, but then my QPD loop blew it. We spent the vast majority of the night dealing with headaches and annoyances.
Things that were a pain:
• If TRX is showing large excursions after finding resonance, there is no hope. These translate into large impulses while reducing the CARM offset, which the PRMI has no chance of handling. The first time aligning the green beat did not help this. For some reason, the second time did, though the beatnote amplitude wasn't increased noticeably.
• NOTICE: We should re-align the X green beatnote every night, after a solid ASS run, before any serious locking work.
• Afterwards, phase tracker UGFs (which depend on beatnote amplitude, and thereby frequency) should be frequently checked.
• We suffered some amount from ETMX wandering. Not only for realigning between lock attempts, but on one occasion, with CARM held off, GTRX wandered to half its nominal value, leading to a huge effective DARM offset, which made it impossible to lock MICH with any reasonable power in the arms. Other times, simply turning off POX/POY locking, after setting up the beatnotes, was enough to significantly change the alignment.
• IMC was mildly temperamental, at its worst refusing to lock for ~20min. One suspicion I have is that when the PMC PZT is nearing its rail, things go bad. The PZT voltage was above 200 when this was happening; after relocking the PMC to ~150, it seems ok. I think I've also had this problem at PZT voltages of ~50. Something to look out for.
Other stuff:
• We are excited for the prospect of the FOL system, as chasing the FSS temperature around is no fun.
• UGF servo triggering greatly helps the PRMI reacquire if it briefly flashes out, since the multipliers don't run away. This exacerbated the ALS excursion problem.
• Using POPDC whitening made it very tough to hold the PRMI. Maybe because we didn't reset the dark offset...?
10995 | Tue Feb 10 13:48:58 2015 | manasa | Update | LSC | Probable cause for headaches last night
I found the PSL enclosure open (about a foot wide) on the north side this morning. I am assuming that whoever did the X beatnote alignment last night forgot to close the door to the enclosure before locking attempts.
Quote: (entry 10994, quoted in full above)
10996 | Tue Feb 10 16:01:21 2015 | manasa | Update | General | AUX X fiber coupled 72%
Plan C finally worked. We have 1.454mW of AUX X light at the PSL table (2mW incident on the fiber coupler).
Attached is the layout of the telescope.
What I did:
I stuck in Lens 1 (f=200mm) and measured the beam width after the focus of the lens at several points. I fit the data and calculated the beam waist and its position after this lens.
I used the calculated waist and matched it with an appropriate lens and target (fiber coupler) distance. I calculated the maximum coupling efficiency to be 77% for Lens 2 with f=50mm and the fiber coupler placed at 20cm from the waist of Lens1. I was able to obtain 72% coupling after putting the telescope together.
I locked the arms, ran ASS and brought back GTRX to its usual optimum value of ~0.5 counts after closing. We also have the X arm beatnote on the spectrum analyzer.
Notes:
There are still a couple of things to fix. The rejected beam from the beam sampler has to be dumped using a razor blade.
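For sanity-checking numbers like the 77% figure, the power coupling between two aligned fundamental Gaussian modes follows the standard overlap formula. A sketch (the real figure came from the mode-matching calculation with the actual telescope; the waists and wavelength below are only illustrative):

```python
import math

def gaussian_coupling(w1, w2, dz=0.0, wavelength=1064e-9):
    """Power coupling between two aligned Gaussian beams with waist
    radii w1, w2 (m) and axial waist separation dz (m). This is the
    standard overlap-integral result for fundamental modes."""
    mismatch = (w1 / w2 + w2 / w1) ** 2
    axial = (dz * wavelength / (math.pi * w1 * w2)) ** 2
    return 4.0 / (mismatch + axial)

# perfectly matched beams couple fully
print(gaussian_coupling(100e-6, 100e-6))         # -> 1.0
# a modest waist mismatch already costs a few percent
print(round(gaussian_coupling(100e-6, 80e-6), 3))  # -> 0.952
```

Angular and lateral misalignment add further loss terms not included in this sketch, which is consistent with getting 72% in practice against a 77% ideal.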
10997 | Tue Feb 10 18:37:17 2015 | ericq | Update | ASC | QPD ASC ready to go
I've remeasured the QPD ASC sensing coefficients, and figured out what I did weird with the actuation coefficients. I've rearranged the controller filters to be able to turn on the boost in a triggered way, and written Up/Down scripts that I've tested numerous times, and Jenne has used as well; they are exposed on the ASC screen.
All four loops (2 arms * pit,yaw) have their gains set for 8Hz UGF, and have 60 degrees of phase margin. The loop shape is the same as the previous ELOG post. Here is the current on/off performance. The PDH signals (not shown, but in attached xml) show no extra noise, and the low frequency RIN goes down a bit, which is good. The oplev error signals are a bit noisier, but I suppose that's unavoidable. The Y-arm performs a bit better than the X-arm.
The up/down scripts don't touch the filters' trigger settings at all, just handles switching the input and output and clearing history. FM1 contains the boost which is intended to have a longer trigger delay than the filters themselves.
Attachment 1: Feb10_loops_offOn.png
Attachment 2: Feb10_newLoops_offOn.xml.zip
10998 | Wed Feb 11 00:07:54 2015 | rana | Update | LSC | Lock Loss plot
Here is a lock loss from around 11 PM tonight. Might be due to poor PRC signals. $\oint {\frac{\partial PRCL}{\partial x}}$
This is with arm powers of ~6-10. You can see that with such a large MICH offset, the POP22 signal has gone down to zero. Perhaps this is why the optical gain for PRCL has also dropped by a factor of 30.
This seems untenable. We must try this whole thing with less MICH offset so that we can have a reasonable PRCL signal.
Attachment 1: 1107673198.png
10999 | Wed Feb 11 02:42:05 2015 | Jenne | Update | LSC | PRC error signal RF spectra
Since we're having trouble keeping the PRC locked as we reduce the CARM offset, and we saw that the POP22 power is significantly lower in the 25% MICH offset case than without a MICH offset, Rana suggested having a look at the RF spectra of the REFL33 photodiode, to see what's going on.
The Agilent is hooked up to the RF monitor on the REFL33 demod board. The REFL33 PD has a notch at 11MHz and another at 55MHz, and a peak at 33MHz.
We took a set of spectra with MICH at 25% offset, and another set with MICH at 15% offset. Each of these sets has 4 traces, each at a different CARM offset. Out at high CARM offset, the arm power vs. CARM offset is pretty much independent of MICH offset, so the CARM offsets are roughly the same between the 2 MICH offset plots.
What we see is that for MICH offset of 25%, the REFL33 signal is getting smaller with smaller CARM offset!! This means, as Rana mentioned earlier this evening, that there's no way we can hold the PRC locked if we reduce the CARM offset any more.
However, for the MICH offset 15% case, the REFL 33 signal is getting bigger, which indicates that we should be able to hold the PRC. We are still losing PRC lock, but perhaps it's back to mundane things like actuator saturation, etc.
The moral of the story is that the 3f locking seems to not be as good with large MICH offsets. We need a quick Mist simulation to reproduce the plots below, to make sure this all jives with what we expect from simulation.
For the plots, the blue trace has the true frequency, and each successive trace is offset in frequency by a factor of 1MHz from the last, just so that it's easier to see the individual peak heights.
Here is the plot with MICH at 25% offset:
And here is the plot with MICH at 15% offset:
Note that the analyzer was in "spectrum" mode, so the peak heights are the true rms values. These spectra are from the monitor point, which is 1/10th the value that is actually used. So, these peak heights (mVrms level) times 10 is what we're sending into the mixer. These are pretty reasonable levels, and it's likely that we aren't saturating things in the PD head with these levels.
The peaks at 100MHz, 130MHz and 170MHz that do not change height with CARM offset or MICH offset, we assume are some electronics noise, and not a true optical signal.
Also, a note to Q, the new netgpib scripts didn't write data in a format that was back-compatible with the old netgpib stuff, so Rana reverted a bunch of things in that directory back to the most recent version that was working with his plotting scripts. sorry.
Attachment 1: REFL33_25.pdf
Attachment 2: REFL33_15.pdf
11000 | Wed Feb 11 03:41:12 2015 | Koji | Update | LSC | PRC error signal RF spectra
As the measurements have been done under feedback control, the lower RF peak height does not necessarily mean lower optical gain, although it may be the case this time.
These non-33MHz signals are embarrassingly high!
We also need to check how these non-primary RF signals may cause spurious contributions in the error signals, including the other PDs.
11001 | Wed Feb 11 04:08:53 2015 | Jenne | Update | LSC | New Locking Paradigm?
[Rana, Jenne]
While meditating over what to do about the fact that we can't seem to hold PRMI lock while reducing the CARM offset, we have started to nucleate a different idea for locking.
We aren't sure if perhaps there is some obvious flaw (other than it may be tricky to implement) that we're not thinking about, so we invite comments. I'll make a cartoon and post it tomorrow, but the idea goes like this.....
Can we use ALS to hold both CARM and DARM by actuating on the ETMs, and sit at (nominally) zero offset for all degrees of freedom? PRMI would need to be stably held with 3f signals throughout this process.
1) Once we're close to zero offset, we should see some PDH signal in REFL11. With appropriate triggering (REFLDC goes low, and REFL11I crosses zero), catch the zero crossing of REFL11I, and feed it back to MC2. We may want to use REFL11 normalized by the sum of the arm transmissions to some power (1, 0.5, or somewhere in between may maximize the linear range even more, according to Kiwamu). The idea (very similar to the philosophy of CESAR) is that we're using ALS to start the stabilization, so that we can catch the REFL11 zero crossing.
2) Now, the problem with doing the above is that actuating on the mode cleaner length will change the laser frequency. But, we know how much we are actuating, so we can feed forward the control signal from the REFL11 carm loop to the ALS carm loop. The goal is to change the laser frequency to lock it to the arms, without affecting the ALS lock. This is the part where we assume we might be sleepy, and missing out on some obvious reason why this won't work.
3) Once we have CARM doubly locked (ALS pushing on ETMs, REFL11 pushing on MC/laser frequency), we can turn off the ALS system. Once we have the linear REFL11 error signal, we know that we have enough digital gain and bandwidth to hold CARM locked, and we should be able to eke out a slightly higher UGF since there won't be as many digital hops for the error signal to traverse.
4) The next step is to turn on the high bandwidth common mode servo. If ALS is still on at this point, it will get drowned out by the high gain CM servo, so it will be effectively off.
5) Somewhere in here we need to transition DARM to AS55Q. Probably that can happen after we've turned on the digital REFL11 path, but it can also probably wait until after the CM board is on.
The potential show-stoppers:
Are we double counting frequency cancellation or something somewhere? Is it actually possible to change the laser frequency without affecting the ALS system?
Can we hold PRMI lock on 3f even at zero CARM offset? Anecdotally from a few trials in the last hour or so, it seems like coming in from negative carm offset is more successful - we get to slightly higher arm powers before the PRMI loses lock. We should check if we think this will work in principle and we're just saturating something somewhere, or if 3f can't hold us to zero carm offset no matter what.
A note on technique: We should be able to get the transfer function between MC2 actuation and ALS frequency by either a direct measurement, or Wiener filtering. We need this in order to get the frequency subtraction to work in the correct units.
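For the units in that transfer function, the scale of the effect in point 2 can be estimated from the fractional relation dnu/nu = -dL/L for a laser locked to the IMC. A back-of-envelope sketch, assuming a nominal ~27 m round-trip IMC length and 1064 nm (values used only for scale, not a measurement):

```python
# Back-of-envelope for the MC2 -> laser frequency coupling: pushing
# MC2 changes the IMC length, and the laser locked to it follows with
# dnu = -nu * dL / L (magnitude computed below). The ~27 m round-trip
# length and 1064 nm are assumed nominal values.

c = 299_792_458.0        # m/s
wavelength = 1064e-9     # m
L_imc = 27.0             # m, assumed IMC round-trip length

nu = c / wavelength      # laser frequency, ~282 THz

def freq_shift(dL):
    """Magnitude of laser frequency shift (Hz) for an IMC length change dL (m)."""
    return nu * dL / L_imc

# a 1 pm length change already moves the frequency by ~10 Hz
print(round(freq_shift(1e-12)))  # -> 10
```

This is the conversion factor the feedforward path has to get right for the ALS cancellation to work in consistent units.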
11002 | Wed Feb 11 16:49:41 2015 | rana | Update | elog | ELOGD restarted
No elog response from outside and no elogd process on nodus, so I restarted it using 'start-elog.csh'.
11003 | Wed Feb 11 17:31:11 2015 | ericq | Update | LSC | RFPD spectra
For future reference, I've taken spectra of our various RFPDs while the PRMI was sideband locked on REFL33, using a 20dB RF coupler at the RF input of the demodulator boards. The 20dB coupling loss has been added back in on the plots. Data files are attached in a zip.
Exceptions:
• The REFL165 trace was taken at the input of the amplifier that immediately precedes the demod board.
• The 'POPBB' trace was taken with the coupler at the input of the bias tee, that leads to an amplifier, then splitter, then the 110 and 22 demod boards.
I also completely removed the cabling for REFLDC -> CM board, since it doesn't look like we plan on using it anytime in the immediate future.
Attachment 1: REFL.png
Attachment 2: AS.png
Attachment 3: POP.png
Attachment 4: 2015-02-PDSpectra.zip
11004 | Wed Feb 11 18:07:42 2015 | ericq | Update | LSC | RFPD spectra
After some discussion with Koji, I've asked Steve to order some SBP-30+ bandpass filters as a quick and cheap way to help out REFL33. (Also some SBP-60+ for 55MHz, since we only have 1*fmod and 2*fmod bandpasses here in the lab).
11006 | Wed Feb 11 18:44:53 2015 | manasa | Update | General | Optical fiber module
I have moved the optical fiber module for FOL to the PSL table. It is setup on the optical table right now for testing.
Once tests are done, the box will move to the rack inside the PSL enclosure.
While doing any beat note alignment, please watch out for the loose fibers at the north side of the PSL enclosure until they are shielded securely (probably tomorrow morning).
11007 | Wed Feb 11 22:13:44 2015 | Jenne | Update | LSC | New Locking Paradigm - LSC model changes
In order to try out the new locking scheme tonight, I have modified the LSC model. Screens have not yet been made.
It's a bit of a special case, so you must use the appropriate filter banks:
CARM filter bank should be used for ALS lock. MC filter bank should be used for the REFL1f signal.
The output of the MC filter bank is fed to a new filter bank (C1:LSC-MC_CTRL_FF). The output of this new filter bank is summed with the error point of the CARM filter bank (after the CARM triggered switch).
The MC triggering situation is now a little more sophisticated than it was. The old trigger is still there (which will be used for something like indicating when the REFL DC has dipped). That trigger is now AND-ed with a new zero crossing trigger, to make the final trigger decision. For the zero crossing triggering, there is a small matrix (C1:LSC-ZERO_CROSS_MTRX) to choose what REFL 1f signal you'd like to use (in order, REFL11I, REFL11Q, REFL55I, REFL55Q). The absolute value of this is compared to a threshold, which is set with the epics value C1:LSC-ZERO_CROSS_THRESH. So, if the absolute value of your chosen RF signal is lower than the threshold, this outputs a 1, which is AND-ed by the usual Schmitt trigger.
At this moment, the input and output switches of the new filter bank are off, and the gain is set to zero. Also, the zero crossing selection matrix is all zeros, and the threshold is set to 1e9, so it is always triggered, which means that effectively the MC filter bank just has its usual, old triggering situation.
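The AND-ed trigger can be sketched behaviorally as follows; this is an illustration only, with the Schmitt-trigger half reduced to a precomputed boolean and the signal ordering taken from the entry above (the sample values are invented):

```python
# Sketch of the AND-ed trigger: the REFL 1f zero-crossing condition
# (|selected signal| below the C1:LSC-ZERO_CROSS_THRESH value) gated
# by the usual Schmitt trigger, here taken as a ready-made boolean.

def zero_cross_trigger(refl_signals, selection, threshold, schmitt_ok):
    """refl_signals: values of (REFL11I, REFL11Q, REFL55I, REFL55Q);
    selection: matrix row picking one (or a combination) of them."""
    chosen = sum(s * r for s, r in zip(selection, refl_signals))
    return (abs(chosen) < threshold) and schmitt_ok

sel = [1, 0, 0, 0]  # pick REFL11I; threshold of 50 counts (invented)
print(zero_cross_trigger([12.0, 400.0, -3.0, 7.0], sel, 50.0, True))   # near a crossing -> True
print(zero_cross_trigger([120.0, 400.0, -3.0, 7.0], sel, 50.0, True))  # far from a crossing -> False
print(zero_cross_trigger([12.0, 400.0, -3.0, 7.0], sel, 50.0, False))  # REFLDC not yet dipped -> False
```

Setting the threshold to 1e9, as in the current configuration, makes the first condition always true, recovering the old behavior.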
11008 | Thu Feb 12 01:00:18 2015 | rana | Update | LSC | RFPD spectra
The nonlinearity in the LSC detection chain (cf T050268) comes from the photodetector and not the demod board. The demod board has low pass or band pass filters which Suresh installed a long time ago (we should check out what's in REFL33 demod board).
Inside the photodetector the nonlinearity comes about because of photodiode bias modulation (aka the Grote effect) and slew rate limited distortion in the MAX4107 preamp.
11009 | Thu Feb 12 01:43:09 2015 | rana | Update | LSC | New Locking Paradigm - LSC model changes
With the Y Arm locked, we checked that we indeed can get loop decoupling using this technique.
The guess filter that we plugged in is a complex pole pair at 1 Hz. We guessed that the DC gain should be ~4.5 nm/count. We then converted this number into Hz and then into deg(?) using some of Jenne's secret numbers. Then after measuring, we had to increase this number by 14.3 dB to an overall filter module gain of +9.3.
The RED trace is the usual 'open loop gain' measurement we make, but this time just on the LSC-MC path (which is the POY11_I -> ETMY path).
The BLUE trace is the TF between the ALS-Y phase tracker output and the FF cancellation signal. Ideally, we want these to be equal.
The GREEN trace is after the summing point of the ALS and the FF. So this would go to zero when the cancellation is perfect.
So, not bad for a first try. Looks like its good at DC and worse near the red loop UGF. It doesn't change much if I turn off the ALS loop (which I was running with ~10-15x lower than nominal gain just to keep it out of the picture). We need Jenne to think about the loop algebra a little more and give us our next filter shape iteration and then we should be good.
Attachment 1: TF.gif
11010 | Thu Feb 12 03:43:54 2015 | ericq | Update | LSC | 3F PRMI at zero ALS CARM
I have been able to recover the ability to sit at zero CARM offset while the PRMI is locked on REFL33 and CARM/DARM are on ALS, effectively indefinitely. However, I feel like the transmon QPDs are not behaving ideally, because the reported arm powers frequently go negative as the interferometer is "buzzing" through resonance, so I'm not sure how useful they'll be as normalizing signals for REFL11. I tried tweaking the DARM offset to help the buildup, since ALS is only roughly centered on zero for both CARM and DARM, but didn't have much luck.
Example:
Turning off the whitening on the QPD segments seems to make everything saturate, so some thinking with daytime brain is in order.
How I got there:
It turns out triggering is more important than the phase margin story I had been telling myself. Also, I lost a lot of time to a needed demod angle change in REFL33. Maybe I somehow caused this when I was all up on the LSC rack today?
We have previously put TRX and TRY triggering elements into the PRCL and MICH rows, to guard against temporary POP22 dips, because if arm powers are greater than 1, power recycling is happening, so we should keep the loops engaged. However, since TRX and TRY are going negative when we buzz back and forth through the resonance, the trigger row sums to a negative value, and the PRMI loops give up.
Instead, we can use the fortuitously unwhitened POPDC, which can serve the same function, and does not have the tendency to go negative. Once I enabled this, I was able to just sit there as the IFO angrily buzzed at me.
Here are my PRMI settings
REFL33 - Rotation 140.2 Degrees, -89.794 measured diff
PRCL = 1 x REFL33 I; G = -0.03; Acquire FMs 4,5; Trigger FMs 2, 9; Limit: 15k; Actuate 1 x PRM
MICH = 1 x REFL33 Q, G= 3.0, Acquire FMs 4,5,8; Trigger FM 2, 3; Limit: 30k; Actuate -0.2625 x PRM + 0.5 x BS
Triggers = 1 x POP22 I + 0.1 * POPDC, 50 up 5 down
Just for kicks, here's a video of the buzzing as experienced in the control room
Attachment 1: Feb12_negativeTR.png
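The trigger settings above amount to a hysteresis comparator on 1 x POP22_I + 0.1 x POPDC. A minimal sketch (the sample signal values are invented; the 50/5 thresholds are the ones from the settings):

```python
# Sketch of the PRMI trigger from the settings above: trigger signal
# 1 x POP22_I + 0.1 x POPDC, with hysteresis (engage above 50, release
# below 5) so brief dips while buzzing through resonance don't drop
# the loops. Unlike TRX/TRY, POPDC does not go negative, so the sum
# stays a faithful "are we locked" indicator.

class HysteresisTrigger:
    def __init__(self, up=50.0, down=5.0):
        self.up, self.down = up, down
        self.on = False

    def step(self, pop22_i, popdc):
        value = 1.0 * pop22_i + 0.1 * popdc
        if not self.on and value > self.up:
            self.on = True
        elif self.on and value < self.down:
            self.on = False
        return self.on

trig = HysteresisTrigger()
history = [trig.step(p22, pdc) for p22, pdc in
           [(60, 100), (10, 100), (2, 10), (0, 20)]]
print(history)  # -> [True, True, False, False]: holds through the dip
```

The second sample shows the point of the hysteresis: POP22 has dipped well below the engage threshold, but the trigger stays on until the combined signal falls below the release threshold.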
11011 | Thu Feb 12 11:14:29 2015 | Jenne | Update | LSC | New Locking Paradigm - Loop-gebra
I have calculated the response of this new 2.5 loop system.
The first attachment is my block diagram of the system. In the bottom left corner are the one-hop responses from each green-colored point to the next. I use the same matrix formalism that we use for Optickle, which Rana described in the loop-ology context in http://nodus.ligo.caltech.edu:8080/40m/10899
In the bottom right corner is the closed loop response of the whole system.
Also attached is a zipped version of the mathematica notebook used to do the calculation.
EDIT, JCD, 17Feb2015: Updated loop diagram and calculation: http://131.215.115.52:8080/40m/11043
Attachment 1: ALS_REFL_comboLockingCartoon_11Feb2015.PDF
Attachment 2: ALS_REFL_comboLocking_11Feb2015.zip
11012 | Thu Feb 12 11:59:58 2015 | Koji | Update | LSC | New Locking Paradigm - Loop-gebra
The goals are:
- When the REFL path is dead (e.g. S_REFL = 0), the system goes back to the ordinary ALS loop. => True (Good)
- When the REFL path is working, the system becomes insensitive to the ALS loop
(i.e. The ALS loop is inactivated without turning off the loop.) => True when (...) = 0
Are they correct?
Then I just repeat the same question as yesterday:
S is a constant, and the Ps are cavity poles. So, roughly speaking, (...) = 0 is realized by making D = 1/G_REFL.
In fact, if we tap the D-path before the G_REFL, we remove this G_REFL from (...). (=simpler)
But then, this means that the method is cancellation between the error signals rather than
cancellation between the actuations. Is this intuitively reasonable? Or is my goal above wrong?
11013 | Thu Feb 12 12:16:04 2015 | manasa | Update | General | Fiber shielding
[Steve, Manasa]
The fibers around the PSL table were shielded to avoid any tampering.
11014 | Thu Feb 12 12:23:21 2015 | manasa | Update | General | Waking up the PDFR measurement system
[EricG, Manasa]
We woke up the PDFR measurement setup that has been sleeping since summer. We ran a check for the laser module and the multiplexer module. We tried setting things up for measuring frequency response of AS55.
We could not repeat Nichin's measurements because the gpib scripts are outdated and need to be revised.
PDFR diode laser was shutdown after this job.
11015 | Thu Feb 12 15:21:37 2015 | ericq | Update | Computer Scripts / Programs | netgpib updates
I've fixed the gpib scripts for the SR785 and AG4395A to output data in the same format as expected by older scripts when called by them. In addition, there are now some easier modes of operation through the measurement scripts SRmeasure and AGmeasure. These are on the $PATH for the main control room machines, and live in scripts/general/netgpib
Case 1: I manually set up a measurement on the analyzer, and just want to download / plot the data.
Make sure you have a yellow prologix box plugged in, and can ping the address it is labeled with. (i.e. 'vanna'). Then, in the directory you want to save the data, run:
SRmeasure -i vanna -f mydata --getdata --plot
This saves mydata_(datetime).txt and mydata_(datetime).pdf in the current directory.
In all cases, AGmeasure has the identical syntax. If the GPIB address is something other than 10, specify it with -a, but this is rarely the case.
Case 2: I want to remotely specify a measurement
Rather than a series of command line arguments, which may get lost to the mists of time, I've set the scripts up to use parameter files that serve as arguments to the scripts.
Get the templates for spectrum and TF measurements in your current directory by running
SRmeasure --template
Set the parameters with your text editor of choice, such as frequency span, filename output, whether to create a plot or not, then run the measurement:
SRmeasure SR785template.yml
Case 3: I want to compare my data with previous measurements
In the template parameter files, there is an option 'plotRefs', that will automatically plot the data from files whose filenames start with the same string as the current measurement.
If, in the "#" commented out header of the data file, there is a line that contains "memo:" or "timestamp:", it will include the text that follows in the plot legend.
There are also methods to remotely trigger an already configured measurement, or remotely reset an unresponsive instrument. Options can be perused by looking at the help in SRmeasure -h
I've tested, debugged, and used them for a bit, but wrinkles may remain. They've been svn40m committed, and I also set up a separate git repository for them at github.com/e-q/netgpibdata
11016 | Thu Feb 12 19:18:49 2015 | Jenne | Update | LSC | New Locking Paradigm - Loop-gebra
EDIT, JCD, 17Feb2015: Updated loop diagram and calculation: http://131.215.115.52:8080/40m/11043
Okay, Koji and I talked (after he talked to Rana), and I re-looked at the original cartoon from when Rana and I were thinking about this the other day.
The original idea was to be able to actuate on the MC frequency (using REFL as the sensor), without affecting the ALS loop. Since actuating on the MC will move the PSL frequency around, we need to tell the ALS error signal how much the PSL moved in order to subtract away this effect. (In reality, it doesn't matter if we're actuating on the MC or the ETMs, but it's easier for me to think about it this way around). This means that we want to be able to actuate from point 10 in the diagram, and not feel anything at point 4 in the diagram (diagram from http://131.215.115.52:8080/40m/11011)
This is the same as saying that we wanted the green trace in http://131.215.115.52:8080/40m/11009 to be zero.
So. What is the total TF from 10 to 4?
${\rm TF}_{\rm (10 \ to \ 4)} = \frac{D_{\rm cpl} + {\color{DarkRed} A_{\rm refl}} {\color{DarkGreen} P_{\rm als}}}{1-{\color{DarkRed} A_{\rm refl} G_{\rm refl} S_{\rm refl} P_{\rm refl}} - {\color{DarkGreen} A_{\rm als} G_{\rm als} S_{\rm als}} ({\color{DarkGreen} P_{\rm als}} + D_{\rm cpl} {\color{DarkRed} G_{\rm refl} P_{\rm refl} S_{\rm refl}})}$
So, to set this equal to zero (ALS is immune to any REFL loop actuation), we need $D_{\rm cpl} = - {\color{DarkRed} A_{\rm refl}} {\color{DarkGreen} P_{\rm als}}$.
Next up, we want to see what this means for the closed loop gain of the whole system. For simplicity, let's let $H_* = A_* G_* S_* P_*$, where * can be either REFL or ALS.
Recall that the closed loop gain of the system (from point 1 to point 2) is
${\rm TF}_{\rm (1 \ to \ 2)} = \frac{1}{1-{\color{DarkRed} A_{\rm refl} G_{\rm refl} S_{\rm refl} P_{\rm refl}} - {\color{DarkGreen} A_{\rm als} G_{\rm als} S_{\rm als}} ({\color{DarkGreen} P_{\rm als}} + D_{\rm cpl} {\color{DarkRed} G_{\rm refl} P_{\rm refl} S_{\rm refl}})}$ , so if we let $D_{\rm cpl} = - {\color{DarkRed} A_{\rm refl}} {\color{DarkGreen} P_{\rm als}}$ and simplify, we get
${\rm TF}_{\rm (1 \ to \ 2)} = \frac{1}{1-{\color{DarkRed} H_{\rm refl}} - {\color{DarkGreen} H_{\rm als}} + {\color{DarkRed} H_{\rm refl}}{\color{DarkGreen} H_{\rm als}}}$
This seems a little scary, in that maybe we have to be careful about keeping the system stable. Hmmmm. Note to self: more brain energy here.
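That denominator is less scary than it looks once factored: $1 - H_{\rm refl} - H_{\rm als} + H_{\rm refl}H_{\rm als} = (1-H_{\rm refl})(1-H_{\rm als})$, so the closed-loop poles are just the union of the two individual loops' poles, and the combined system is stable whenever each loop is separately stable. A quick numeric spot-check (my own, with random complex numbers standing in for the frequency responses):

```python
# Spot-check that with D_cpl = -A_refl * P_als, the closed-loop denominator
# 1 - A_r*G_r*S_r*P_r - A_a*G_a*S_a*(P_a + D_cpl*G_r*P_r*S_r)
# factors as (1 - H_refl)(1 - H_als).
import random

random.seed(0)
for _ in range(100):
    # random complex stand-ins for the frequency responses at one frequency
    A_r, G_r, S_r, P_r, A_a, G_a, S_a, P_a = (
        complex(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(8)
    )
    D_cpl = -A_r * P_a  # the decoupling choice from the text

    denom = 1 - A_r*G_r*S_r*P_r - A_a*G_a*S_a*(P_a + D_cpl*G_r*P_r*S_r)
    H_r, H_a = A_r*G_r*S_r*P_r, A_a*G_a*S_a*P_a

    assert abs(denom - (1 - H_r)*(1 - H_a)) < 1e-12
```

Since the characteristic equation factorizes, no new instability can be created by the decoupling path itself.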
Also, this means that I cannot explain why the filter wasn't working last night, with the guess of a complex pole pair at 1Hz for the MC actuator. The ALS plant has a cavity pole at ~80kHz, so for our purposes is totally flat. The only other thing that comes to mind is the delays that exist because the ALS signals have to hop from computer to computer. But, as Rana points out, this isn't really all that much phase delay below 100Hz where we want the cancellation to be awesome.
I propose that we just measure and vectfit the transfer function that we need, since that seems less time consuming than iteratively tweaking and checking.
Also, I just now looked at the wiki, and the MC2 suspension resonance for pos is at 0.97Hz, although I don't suspect that that will have changed anything significantly above a few Hz. Maybe it makes the cancellation right near 1Hz a little worse, but not well above the resonance.
11017 | Thu Feb 12 22:28:16 2015 | Jenne | Update | LSC | New Locking Paradigm - LSC model changes, screens modified
I have modified the LSC trigger matrix screen, as well as the LSC overview screen, to reflect the modifications to the model from yesterday.
Also, I decided that we probably won't ever want to trigger the zero crossing on the Q phase signals of REFL. Instead, we may want to try it out with the single arms, so the zero crossing selection matrix is now REFL11I, REFL55I, POX11I, POY11I, in that order.
11018 | Thu Feb 12 23:40:12 2015 | rana | Update | IOO | Bounce / Roll stopband filters added to MC ASC filter banks
The filters were already in the damping loops but missing the MC WFS path. I checked that these accurately cover the peaks at 16.5 Hz and 23.90 and 24.06 Hz.
Attachment 1: 59.png
11019 | Thu Feb 12 23:47:45 2015 | Koji | Update | LSC | 3f modulation cancellation
- I built another beat setup on the PSL table at the South East side of the table.
- The main beam is not touched, no RF signal is touched, but note that I was present at the PSL table.
- The beat note is found. The 3rd order sideband was not seen so far.
- A PLL will be built tomorrow. The amplifier box Manasa made will be inspected tomorrow.
- One of the two beams from the picked-off beam from the main beam line was introduced to the beat setup.
(The other beam is used for the beam pointing monitors.)
There is another laser at that corner, and the output from this laser is introduced into the beat setup.
The combined beam is introduced to PDA10CF (~150MHz BW).
- The matching of the beam there is poor. But without much effort I found the beat note.
The PSL laser had 31.33 deg Xtal temp. When the beat was found, the aux laser had the Xtal temp of 40.88.
- I could observe the sidebands easily; with a narrower BW on the RF analyzer I could see the sidebands up to the 2nd order.
The 3rd order was not seen at all.
- The beat note had an amplitude of about -30dBm. One possibility is to amplify the signal. I wanted to use a spare channel
of the ALS/FOL amplifier box, but it gave me attenuation rather than any amplification.
I'll look at the box tomorrow.
- Also, the matching of the two beams is not great. The PD also has clipping, I guess. These will also be improved tomorrow.
- Then the beat note will be locked at a certain frequency using PLL so that we can reduce the measurement BW more.
11020 | Fri Feb 13 03:28:34 2015 | rana | Update | LSC | New Locking Paradigm - Loop-gebra
Not so fast!
In the drawing, the FF path should actually be summed in after the Phase Tracker (i.e. after S_ALS). This means that the slow response of the phase tracker needs to be taken into account in the FF cancellation filter. i.e. D = -A_REFL * P_ALS * S_ALS. Since the Phase Tracker is a 1/f loop with a 1 kHz UGF, at 100 Hz, we can only get a cancellation factor of ~10.
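As a sanity check on that factor-of-~10 estimate (my numbers; the phase tracker is modeled as an ideal 1/f loop with a 1 kHz UGF):

```python
# If the cancellation filter ignores the phase tracker's closed-loop
# response S = G/(1+G), the leftover coupling is |1 - S| = |1/(1+G)|.
ugf = 1e3   # Hz, phase tracker unity-gain frequency
f   = 100.0 # Hz, where we want the cancellation to be good

G = ugf / (1j * f)            # ideal 1/f open-loop gain
residual = abs(1 / (1 + G))   # fraction of the actuation that leaks through

# residual is ~0.1 at 100 Hz, i.e. cancellation limited to a factor of ~10
```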
So, tonight we added a 666:55 boost filter into the phase tracker filter bank. I think this might even make the ALS locking loops less laggy. The boost is made to give us better tracking below ~200 Hz where we want better phase performance in the ALS and more cancellation of the ALS-Fool. If it seems to work out well we can keep it. If it makes ALS more buggy, we can just shut it off.
Its time to take this loop cartoon into OmniGraffle.
11021 | Fri Feb 13 03:44:56 2015 | Jenne | Update | LSC | Held using ALS for a while at "0" CARM offset with PRMI
[Jenne, Rana]
We wanted to jump right in and see if we were ready to try the new "ALS fool" loop decoupling scheme, so we spent some time with CARM and DARM at "0" offset, held on ALS, with PRMI locked on REFL33I&Q (no offsets). Spoiler alert: we weren't ready for the jump.
The REFL11 and AS55 PDs had 0dB analog whitening gain, which means that we weren't well-matching our noise levels between the PD noise and the ADC noise. The photodiodes have noise at the nanovolt level, while the ADC has noise at the microvolt level. So, we expect to need an analog gain of 1000 somewhere to make these match up. Anyhow, we have set both REFL11 and AS55 to 18dB gain.
On a related note, it seems not so great for the POX and POY ADC channels to be constantly saturated when we have some recycling gain, so we turned their analog gains down from 45dB to 0dB. After we finished with full IFO locking, they were returned to their nominal 45dB levels.
We also checked the REFL33 demod phase at a variety of CARM offsets, and we see that perhaps it changes by one or two degrees for optimal rotation, but it's not changing drastically. So, we can set the REFL33 demod phase at large CARM offset, and trust it at small CARM offset.
We then had a look at the transmon QPD inputs (before the dewhitening) for each quadrant. They are super-duper saturating, which is not so excellent.
QPDsaturation_12Feb2015.pdf
We think that we want to undo the permanently-on whitening situation. We want to make the second stage of whitening back to being switchable. This means taking out the little u-shaped wires that are pulling the logic input of the switches to ground. We think that we should be okay with one always on, and one switchable. After the modification, we must check to make sure that the switching behaves as expected. Also, I need to figure out what the current situation is for the end QPDs, and make sure that the DCC document tree matches reality. In particular, the Yend DCC leaf doesn't include the gain changes, and the Xend leaf which does show those changes has the wrong value for the gain resistor.
After this, we started re-looking at the single arm cancellation, as Rana elogged about separately.
ALSfool_12Feb2015.pdf
Attachment 1: QPDsaturation_12Feb2015.pdf
Attachment 2: ALSfool_12Feb2015.pdf
11022 | Fri Feb 13 14:27:30 2015 | ericq | Update | Electronics | Second QPD Whitening Switch enabled
I have re-enabled the second whitening stage switching on each quadrant of each end's QPD whitening board, to try and avoid saturations at full power. Looking at the spectra while single arm locked, I confirmed that the FM2 whitening switch works as expected. FM1 should be left on, as it is still hard-wired to whiten.
The oscillations in the Y QPD still exist. Jenne is updating the schematics on the DCC.
11023 | Fri Feb 13 14:59:13 2015 | ericq | Update | Electronics | Second QPD Whitening Switch enabled
Went to zero CARM offset on ALS; transmission QPDs are still saturating :(
Maybe we need to switch off all whitening.
11024 | Fri Feb 13 17:07:51 2015 | Jenne | Update | Electronics | Second QPD Whitening Switch enabled
I first updated the DCC branches for the Xend and Yend to reflect the as-built situation from December 2014, and then I updated the drawings after Q's modifications today.
11025 | Fri Feb 13 18:56:44 2015 | rana | Update | Electronics | Second QPD Whitening Switch enabled
Depends on the plots of the whitening, I guess; if it's low-frequency saturation, then we lower the light level with ND filters. If it's happening above 10 Hz, then we switch off the whitening.
Quote: Went to zero CARM offset on ALS; transmission QPDs are still saturating :( Maybe we need to switch off all whitening.
11026 | Fri Feb 13 19:41:13 2015 | ericq | Update | General | RF amplifier for ALS
As Koji found one of the spare channels of the ALS/FOL RF amplifier box nonfunctional yesterday, I pulled it out to fix it. I found that one of the sma cables did not conduct.
It was replaced with a new cable from Koji. Also, I rearranged the ports to be consistent across the box, and re-labeled with the gains I observed.
It has been reinstalled, and the Y frequency counter that is using one of the channels shows a steady beat freq.
I cannot test the amplitude of the green X beat at this time, as Koji is on the PSL table with the PSL shutter closed, and is using the control room spectrum analyzer. However, the dataviewer trace for the fine_phase_out_Hz looks like free-swinging cavity motion, so it's probably OK.
11027 | Sat Feb 14 00:42:02 2015 | Koji | Update | General | RF amplifier for ALS
The RF analyzer was returned to the control room. There are two beat notes from X/Y confirmed.
I locked the arms and aligned them with ASS.
When the end greens are locked on TEM00, the X/Y beat amplitudes were ~33dBm and ~17dBm, respectively.
I don't judge if they are OK or not, as I don't recall the nominal values.
11028 | Sat Feb 14 00:48:13 2015 | Koji | Update | LSC | 3f modulation cancellation
[SUCCESS] The 3f sideband cancellation seemed worked nicely.
- Beat efficiency improved: ~30% contrast (no need for amplification)
- PLL locked
- 3f modulation sideband was seen
- The attenuation of the 55MHz modulation and the delay time between the modulation sources were adjusted to
give the maximum reduction of the 3f sidebands allowed in the setup. This adjustment was done
at the frequency generation box at the 1X2 rack.
- The measurement and recipe for the sideband cancellation will come later.
- This means that I jiggled the modulation setup at the 1X2 rack. The modulation setup has now been reverted to the original,
but just be careful about any change in the sensing behavior.
- The RF analyzer was returned to the control room.
- The HEPA speed was reduced from 100% (during the action on the table) to 40%.
11030 | Sat Feb 14 20:20:24 2015 | Jenne | Update | LSC | ALS fool cartoon
The ALS fool scheme is now diagrammed up in OmniGraffle, including its new official icon. The mathematica notebook has not yet been updated.
EDIT, JCD, 17Feb2015: Updated cartoon and calculation: http://131.215.115.52:8080/40m/11043
Attachment 1: ALSfool_LoopDiagram.png
Attachment 2: ALSfool_LoopDiagram.graffle.zip
11037 | Mon Feb 16 02:49:57 2015 | Jenne | Update | LSC | ALS fool measured decoupling TF
I have measured very, very carefully the transfer function from pushing on MC2 to the Yarm ALS beatnote. In the newest loop diagram in http://nodus.ligo.caltech.edu:8080/40m/11030, this is pushing at point 10 and sensing at point 4.
Since it's a bunch of different transfer functions (to get the high coherence that we need for good cancellation to be possible), I attach my Matlab figure that includes only the useful data points. I put a coherence cutoff of 0.99, so that (assuming the fit were perfect, which it won't be), we would be able to get a maximum cancellation of a factor of 100.
This plot also includes the vectfit to the data, which you can see is pretty good, although I need to separately plot the residuals (since the magnitude data is so small, the residuals for the mag don't show up in the auto plot that vectfit gives).
If you recall from http://nodus.ligo.caltech.edu:8080/40m/11020, we are expecting this transfer function to consist of the suspension actuator (pendulum with complex pole pair around 1Hz), the ALS plant (single pole at 80kHz) and the ALS sensor shape (the phase tracker is an integrator, with a boost consisting of a zero at 666Hz and a pole at 55Hz). That expected transfer function does not multiply up to give me this wonky shape. Brain power is needed here.
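For reference, here is a sketch of the "expected" transfer function assembled from the pieces listed above (the pendulum Q and the boost orientation are my guesses, not measured values); this is the shape the data refuses to match:

```python
import numpy as np

f = np.logspace(-1, 3, 400)       # 0.1 Hz to 1 kHz
s = 2j * np.pi * f

# suspension actuator: complex pole pair at f0 ~ 1 Hz (Q is a guess)
f0, Q = 1.0, 5.0
w0 = 2 * np.pi * f0
pend = w0**2 / (s**2 + w0 * s / Q + w0**2)

# phase tracker: 1/f loop with 1 kHz UGF, 666:55 boost in the open loop
boost = (1 + s / (2 * np.pi * 666)) / (1 + s / (2 * np.pi * 55))
G = (2 * np.pi * 1e3 / s) * boost
S_als = G / (1 + G)               # closed-loop (tracking) response

# ALS plant: 80 kHz cavity pole, i.e. flat in this band, so omitted
expected = pend * S_als
```

Plotting `abs(expected)` and comparing against the measured points would make the discrepancy explicit.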
Just in case you were wondering if this depends on the actuator used (ETM vs MC2), or IFO configuration (single arm vs. PRFPMI), it doesn't. The only discrepancy between these transfer functions is the expected sign flip between the MC2 and ETMY actuators. So, they're all pretty consistent.
Before locking the PRFPMI, I copied the boost filter (666:55) from the YARM ALS over to Xarm ALS, so now both arms have the same boost.
YARM_actTF_compareActuators.pdf
Things to do for ALSfool:
• Put fitted TF into the MC_CTRL_FF filter bank, and try to measure the expected cancellation, a la http://nodus.ligo.caltech.edu:8080/40m/11009
• Quick test with single arm, ALS locked using full loop (high gain, all boosts), since the previous versions were with ALS very loosely locked.
• Does this measured transfer function actually give us good cancellation?
• Think. Why should the transfer function look like this??
• Try it on the full PRFPMI
Attachment 1: ALSfool_measuredActuatorTF_YarmOnly_15Feb2015.png
Attachment 2: YARM_actTF_compareActuators.pdf
11038 | Mon Feb 16 03:10:42 2015 | Koji | Update | LSC | ALS fool measured decoupling TF
Wonky shape: looks like loop suppression. Your http://nodus.ligo.caltech.edu:8080/40m/11016 also suggests it, doesn't it?
|
|
# General restriction for covariance matrix in multivariate normal distribution
Suppose we look at the following model
$$\vec y_i=\vec\mu_i + \vec\epsilon_i, \qquad \vec\epsilon_i\sim N(\vec0, \Sigma)$$
where the $\vec y_i$s are observed, the $\vec\mu_i$s are known, and the $\vec\epsilon_i$s are unknown and iid. Following
Pinheiro, J. C., & Bates, D. M. (2000). Mixed-effects models in S and S-PLUS. New York: Springer.
then we can separate $\Sigma$ into a standard deviation factor and a correlation matrix factor
$$\Sigma =VCV, \quad V = \begin{pmatrix} \sigma_1 & 0 & \cdots & 0 \\ 0 & \sigma_2 & \ddots & 0 \\ \vdots & \ddots & \ddots & \vdots \\ 0 & \dots & 0 & \sigma_p \end{pmatrix}, \quad C = \begin{pmatrix} 1 & \rho_{21} & \cdots & \rho_{p1} \\ \rho_{21} & 1 & \ddots & \rho_{p2} \\ \vdots & \ddots & \ddots & \vdots \\ \rho_{p1} & \dots & \rho_{p,p-1} & 1 \end{pmatrix}$$
and restrict the $p(p+1)/2$ parameters above. In particular, say that we model
\begin{align*} \sigma_{i} &= \exp(s_i), \quad (s_1,s_2,\dots, s_p)^\top=K\vec\psi \\ \rho_{ij}&= \frac{2}{1 + \exp(-q_{ij})} - 1, \quad (q_{21},q_{31},\dots,q_{p1},q_{32},\dots,q_{p,p-1})^\top=L\vec\varphi \end{align*}
where $K\in\mathbb{R}^{p\times k}$, $L\in\mathbb{R}^{p(p-1)/2\times l}$, both are known, $k\leq p$ and $l\leq p(p-1)/2$. An issue with this approach is that $C$ may not be a valid correlation matrix: each $\rho_{ij}$ lies in $(-1,1)$, but positive semidefiniteness is not guaranteed. However, computing the log-likelihood is straightforward and so is computing the gradient. It also allows for quite general covariance matrices.
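To make that caveat concrete, a small numeric illustration (mine, not from the model above): entrywise-valid correlations need not give a positive semidefinite $C$.

```python
# Mapping each off-diagonal entry into (-1, 1) with a logistic function
# keeps |rho_ij| < 1, but the resulting matrix can still fail to be a
# valid (positive semidefinite) correlation matrix.
import numpy as np

C = np.array([
    [ 1.0, -0.9, -0.9],
    [-0.9,  1.0, -0.9],
    [-0.9, -0.9,  1.0],
])  # every entry is a legal pairwise correlation...

eigvals = np.linalg.eigvalsh(C)
# ...yet the smallest eigenvalue is negative (1 + 2*(-0.9) = -0.8),
# so C is not positive semidefinite and no data can have this correlation.
```

A numerical optimizer exploring the unconstrained $(\vec\psi, \vec\varphi)$ space can therefore wander into regions where the likelihood is undefined.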
On the flip side, one can implement a series of different models as in the nlme package, some of which are not included in the above. This will take time, but one can write better optimized functions for each specific case.
My questions are:
• Is the above a bad idea?
• Is the above expected to fail often when using a numerical optimizer? Can one do better in a general model?
• Is there a smarter and more general way to restrict the parameters and still have quite general models?
I am considering the above to allow the end user to restrict parameters in an R package (which is not a pure multivariate normal model as above), and the above seems easy for the end user to specify. An obvious restriction for the end user would be to set some of the correlations to zero. References will be appreciated.
# Update
It is not clear from the above in what context I use the model. It is used in a hidden Markov model as part of a Monte Carlo expectation conditional maximization algorithm. See the example here where I have implemented the above in one of the conditional maximization steps. $l$ and $k$ are typically small and $p$ may be small or moderate.
Here is an example to make it more concrete. First I show the output and then the two function definitions (get_covar and ex_func)
######
# first example
K <- matrix(1, 4, 1)
psi <- log(2)
L <- matrix(0, 4 * (4 - 1) / 2, 1)
varphi <- log(- (.5 + 1) / (.5 - 1))
L[2, 1] <- 1
get_covar(K, L, psi, varphi)$Q
#R      [,1] [,2] [,3] [,4]
#R [1,]    4    0    2    0
#R [2,]    0    4    0    0
#R [3,]    2    0    4    0
#R [4,]    0    0    0    4
set.seed(69832532)
out <- ex_func(K, L, psi, varphi)
do.call(rbind, lapply(out, "[[", "par"))
#R         [,1]  [,2]
#R wo_gr 0.6849 1.025
#R w_gr  0.6849 1.025
c(psi, varphi) # actual values
#R [1] 0.6931 1.0986
#####
# second example
K <- matrix(0, 4, 2)
K[1:2, 1] <- K[3:4, 2] <- 1
psi <- log(c(2, 5))
L <- matrix(0, 4 * (4 - 1) / 2, 2)
L[c(1, 2, 4), 1] <- 1
L[6, 2] <- 1
varphi <- log(- (c(.8, .4) + 1) / (c(.8, .4) - 1))
get_covar(K, L, psi, varphi)$Q
#R [,1] [,2] [,3] [,4]
#R [1,] 4.0 3.2 8 0
#R [2,] 3.2 4.0 8 0
#R [3,] 8.0 8.0 25 10
#R [4,] 0.0 0.0 10 25
set.seed(93900343)
out <- ex_func(K, L, psi, varphi)
do.call(rbind, lapply(out, "[[", "par"))
#R [,1] [,2] [,3] [,4]
#R wo_gr 0.7069 1.613 2.154 0.8533
#R w_gr 0.7070 1.613 2.154 0.8533
c(psi, varphi) # actual values
#R [1] 0.6931 1.6094 2.1972 0.8473
Here are the two function definitions
# function to get the covariance matrix
get_covar <- function(K, L, psi, varphi){
V <- diag(exp(drop(K %*% psi)))
C <- diag(1, ncol(V))
C[lower.tri(C)] <- 2/(1 + exp(-drop(L %*% varphi))) - 1
C[upper.tri(C)] <- t(C)[upper.tri(C)]
list(Q = V %*% C %*% V, V = V, C = C)
}
# function to simulate data and find MLE estimates
ex_func <- function(K, L, psi, varphi, nobs = 100){
# get covariance matrix
Q <- get_covar(K, L, psi, varphi)$Q
p <- ncol(Q)
# simulate some mus... Though, does not matter
mus <- matrix(rnorm(nobs * p), ncol = p)
Y <- mus + matrix(rnorm(nobs * p), ncol = p) %*% chol(Q)
# ... since we do
Z <- crossprod(Y - mus)
# assign log-likelihood function
ll <- function(par, k = length(psi)){
idx <- 1:k
psi <- par[ idx]
varphi <- par[-idx]
Q <- get_covar(K, L, psi, varphi)$Q
Q_qr <- qr(Q)
deter <- determinant(Q, logarithm = TRUE)
if(deter$sign < 0 || Q_qr$rank < ncol(Q))
return(NA_real_)
-(nobs * deter$modulus + sum(diag(solve(Q_qr, Z)))) / 2
}
# assign gradient function
gr <- function(par, k = length(psi)){
idx <- 1:k
psi <- par[ idx]
varphi <- par[-idx]
tmp <- get_covar(K, L, psi, varphi)
list2env(tmp, environment())
# Computations could be done a lot smarter...
Q_qr <- qr(Q)
if(Q_qr$rank < ncol(Q))
return(NA_real_)
fac <- solve(Q_qr, Z) - diag(nobs, ncol(Q))
fac <- solve(Q_qr, t(fac)) / 2
d_V <-
diag(fac %*% V %*% C + C %*% V %*% fac) %*%
K %*% diag(exp(psi), length(psi))
d_C <- tcrossprod(V, V %*% fac)
exp_varphi <- exp(varphi)
d_C <- as.vector(d_C)[lower.tri(d_C, diag = FALSE)] %*% L %*%
diag((4 * exp_varphi / (1 + exp_varphi)^2), length(varphi))
c(drop(d_V), drop(d_C))
}
# use optim
# scale with full covariance matrix log-likelihood
Q_full <- Z / nobs
deter <- determinant(Q_full, logarithm = TRUE)
|
|
Semi-direct product isomorphic to direct product
I would like some help on the following problem from anyone who would like to help.
Let $f: H \to G$ be a group homomorphism. For $h \in H$, define $\rho(h) = \phi_{f(h)} \in Aut(G)$.
The situation being as described, prove that the semi-direct product $G\rtimes_{\rho} H$ is isomorphic to the direct product $G \times H$.
Help will be greatly appreciated!
-
What is $\phi_{f(h)}$? Is it the inner automorphism induced by conjugating by $f(h)$? – JSchlather Dec 1 '12 at 21:15
It is the conjugation by $f(h)$, i.e. $\phi_{f(h)}(g) = f(h)gf(h)^{-1}, \forall g \in G$ – user44069 Dec 1 '12 at 21:18
Look at the subgroup $(f(h),h^{-1})$ in the semidirect product. what does conjugation by $g\in G$ do? – user641 Dec 1 '12 at 21:24
@SteveD: Dear Steve, is the OP speaking about the internal semidirect product when he works on $G\times H$? Because, as a fact, D. Robinson noted in his book that we can find two subgroups $N^*, H^*$ with $N^*\cong G$, $H^*\cong H$ and $N^*\cap H^*=\{1\}$ such that $G\rtimes_{\rho} H\cong N^*\times H^*$. Thanks. – Babak S. Dec 2 '12 at 15:44
@Stefan: See this math.stackexchange.com/q/201710/8581. – Babak S. Dec 2 '12 at 19:29
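For reference, a standard argument along the lines hinted at in the comments (my own write-up, assuming $\phi_{f(h)}$ is conjugation by $f(h)$): define $\theta\colon G \rtimes_{\rho} H \to G \times H$ by $\theta(g,h) = (g\,f(h),\, h)$.

```latex
\begin{align*}
(g_1,h_1)(g_2,h_2) &= \bigl(g_1\,\rho(h_1)(g_2),\, h_1 h_2\bigr)
                    = \bigl(g_1 f(h_1)\, g_2\, f(h_1)^{-1},\, h_1 h_2\bigr),\\
\theta\bigl((g_1,h_1)(g_2,h_2)\bigr)
  &= \bigl(g_1 f(h_1)\, g_2\, f(h_1)^{-1} f(h_1 h_2),\, h_1 h_2\bigr)\\
  &= \bigl(g_1 f(h_1)\, g_2 f(h_2),\, h_1 h_2\bigr)
   = \theta(g_1,h_1)\,\theta(g_2,h_2),
\end{align*}
```

using $f(h_1 h_2) = f(h_1) f(h_2)$ in the second step. Since $(g,h) \mapsto (g\,f(h)^{-1},\, h)$ is a two-sided inverse, $\theta$ is an isomorphism.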
|
|
# Angle measurement actually starts from a vertical 270deg?
I think the angle measurement described here (as well as on w3schools) is confusing/wrong. To determine the angle parameter, you should measure starting from a vertical vector pointing down at 270deg, counter-clockwise, to the gradient angle you want. See https://www.w3.org/TR/SVG/pservers.html#LinearGradients for a good diagram.
31st Jul 2016, 1:09 AM
Troy Scott
1 Answer
The general explanation is a little bit confusing but the examples are correct - see my code example https://code.sololearn.com/Wt3RL9A6r5v2
4th Nov 2016, 8:34 AM
Mojmír Volf
|
|
OpenGL My OGL Render Class
Here we go! Its the birth of a render class, very incredibly simple, only sends a quad to the renderer and then, renders it. Renderer.h
#include <vector>
#include "Vector3.h"
using namespace std;
struct BasicQuad
{
Vector3 P1, P2, P3, P4;
int Texture;//Index into gpTextureManager
};
vector<BasicQuad*> gBasicQuads;
class GLRenderer
{
public:
GLRenderer()
{
}
void AddBasicQuadToRenderer(float P1X,float P1Y,float P1Z,
float P2X,float P2Y,float P2Z,
float P3X,float P3Y,float P3Z,
float P4X,float P4Y,float P4Z,
int Texture)
{
BasicQuad *pData = new BasicQuad;
pData->P1 = Vector3(P1X,P1Y,P1Z);
pData->P2 = Vector3(P2X,P2Y,P2Z);
pData->P3 = Vector3(P3X,P3Y,P3Z);
pData->P4 = Vector3(P4X,P4Y,P4Z);
pData->Texture = Texture;
gBasicQuads.push_back(pData);
}
};
Renderer.cpp
#include <windows.h>
#include <gl\gl.h> // Header File For The OpenGL32 Library
#include <gl\glu.h> // Header File For The GLu32 Library
#include <gl\glaux.h> // Header File For The Glaux Library
#include "Renderer.h"
void GLRenderer::RenderBasicQuads()
{
if(gBasicQuads.empty())
return;
glDisable(GL_LIGHTING);
glDisable(GL_BLEND);
BasicQuad *pTemp;
for(vector<BasicQuad*>::iterator ptr = gBasicQuads.begin(); ptr != gBasicQuads.end(); ++ptr)
{
pTemp = *ptr;
glBegin(GL_QUADS);//Cannot be brought out of loop due to bindtexture call
glVertex3f(pTemp->P1.x,pTemp->P1.y,pTemp->P1.z);
glVertex3f(pTemp->P2.x,pTemp->P2.y,pTemp->P2.z);
glVertex3f(pTemp->P3.x,pTemp->P3.y,pTemp->P3.z);
glVertex3f(pTemp->P4.x,pTemp->P4.y,pTemp->P4.z);
glEnd();
}
delete pTemp;
}
and my lil Vector3 class, which proves very useful :D Vector3.h
class Vector3
{
public:
float x, y, z;
Vector3()
{
}
Vector3(float x, float y, float z);
// And so on
};
Share on other sites
• Use triangles because you can draw anything with triangles, and you'll have to add it eventually. So, you might as well just have it draw only triangles.
• If you're changing it to triangles, you'll need to add texture coords to your class.
• Initialize 'pTemp' to NULL inside "RenderBasicQuads". It's not strictly necessary for this code since you assign it at the beginning of every loop iteration, but get into the habit, else you'll forget and wonder why there's an invalid memory access at 0x92739838 instead of 0x00000000, which is much easier to debug.
• You haven't included a call to glTexCoords (or whatever its called) to specify the texture coords for each vertex.
• Have the function "AddBasicQuadToRenderer" actually take a BasicQuad object, or 3 Vector3 objects and the texID, or whatever object type you'll be using, else you'll accidentally swap a single parameter and be looking for it for hours.
• With the "delete pTemp;" statment inside "RenderBasicQuads", it will effectively delete the memory allocated for BasicQuad object for the last element in the list, but won't delete the list element, therefore causing memory access problems the next time you try to access the list element, because the pointer in the list element is still pointing to what is now an invalid memory address to access. I'm guessing that it's not what you're trying to do.
• Your list of things to be rendered should be kept inside the GLRenderer class. If a class has functionality, but no data members, make the functions static so you don't have to bother actually creating an object which has no data, just to run the functions.
• If you're not turning features like lighting and blending off/on, just put the glDisable function calls in the same function where you initialize OpenGL. Calling these two every time you render is wasteful and just might make your rendering slower (don't quote me on this).
It's late here. Please let me know if I got any of this wrong.
[Edited by - Endar on November 24, 2005 6:49:37 AM]
Share on other sites
I also plan on adding operator overloads to the Vector3 class so I can work with them a bit more naturally :)
Share on other sites
Did you also know that class and struct are almost exactly the same thing?
So when you have a struct, you can add constructors, functions and overloaded operators to it. I have a couple of things like this, where, even though all the data members are all public, I have functions and operators to make things simpler and the code more readable.
That's really all the suggestions I have for the moment. I hope I helped.
Share on other sites
Quote:
I have no idea what you mean.
struct BasicQuad
{
    Vector3 P1, P2, P3, P4;
    int Texture;

    BasicQuad(const BasicQuad& b)
    {
        P1 = b.P1; P2 = b.P2; P3 = b.P3; P4 = b.P4;
        Texture = b.Texture;
    }

    BasicQuad& operator=(const BasicQuad& b)
    {
        if( &b != this ){
            P1 = b.P1; P2 = b.P2; P3 = b.P3; P4 = b.P4;
            Texture = b.Texture;
        }
        return *this;
    }
};

// blah
void AddBasicQuadToRenderer(const BasicQuad& b)
{
    // this
    BasicQuad *pData = new BasicQuad;
    (*pData) = b;
    // OR this
    BasicQuad *pData = new BasicQuad(b);

    // then add to the vector
    mBasicQuads.push_back(pData);
}
See? Also, don't overload operators if it doesn't make sense. I mean, if you can look at an operator and the types on either side and instantly know what it does, even if you haven't seen the code before, then it's right. If you have to think for a second, chances are that it doens't belong and should have a proper function.
Share on other sites
Maybe I said overload and shouldn't have; that's exactly what I meant....
Throw the object in the parameters instead of having 30 million floats.... Looks much more efficient than my code... But I don't have much use for my Vector3 class now, do I?
Share on other sites
In the code that you have there? Not really. But keep it in there and keep building it up, because for anything more complex you will use it.
You'll need it to do the dot product, cross product, have all the common overloaded operators (+,-,/,+=,-=,/= etc). It'll probably end up hidden inside a vertex class, which would be something like this:
class Vertex
{
public:
    Vector3 pos;     // position
    Vector3 normal;  // normal
    float u, v;      // texture coords

    // functions
};
So don't throw it away, or even consider stopping using it. As you get onto more complex things, pretty much anything more complex than what you have right now, you'll need it. It's one of the major building blocks of anything 3d graphics related.
Share on other sites
I saw a whole bunch of premade engines using it, but I didn't want to use someone else's engine, so I just invented my own Vector3.
Share on other sites
I did the same. In fact, it was the first thing that I wrote that I still use.
It's small enough to be mostly finished in a couple of days/week. If you don't know about vector math you learn a lot by reading the resources that you use to write it.
|
|
# TTP02-26 The Impact of $\sigma(e^+e^-\to \mbox{hadrons})$ Measurements at Intermediate Energies on the Parameters of the Standard Model
We discuss the impact of precision measurements of $\sigma(e^+e^-\to \mbox{hadrons})$ in the center-of-mass range between 3 and 12~GeV, including improvements in the electronic widths of the narrow charmonium and bottomonium resonances, on the determination of parameters of the Standard Model. In particular we discuss the impact of potential improvements on the extraction of the strong coupling constant $\alpha_s$, on the evaluation of the hadronic contributions to the electromagnetic coupling $\alpha(M_Z)$, and on the determination of the charm and bottom quark masses.
J.H. Kühn and M. Steinhauser, JHEP 0210 (2002) 018. PDF PostScript arXiv
|
|
# How are you supposed to get the East/West variation on this question in the FAA PPL written exam?
While taking a practice test, I needed to calculate the magnetic heading between two airports. Most of the time when I run into a problem, the map provides isogonic lines with the East/West variation visible. But in at least one or more questions, the isogonic line is either not shown or the numbers are not visible.
Using the protractor to calculate the angle on the isogonic line using the latitude/longitude lines doesn't seem to give me the correct number either.
Does anyone know what I am doing wrong or how I can come to a correct answer?
Problem
(Refer to figure 23.) What is the estimated time en route for a flight from Claxton-Evans County Airport (area 2) to Hampton Varnville Airport (area 1)? The wind is from 290° at 18 knots and the true airspeed is 85 knots. Add 2 minutes for climb-out.
1. 44
2. 39
3. 35
Update: I added an actual sample problem. Assuming the only information I am able to work with is the map, the question and my E6B (manual or electronic).
Update I found the original question that got this thread started:
(Refer to figure 24.) Determine the magnetic heading for a flight from Allendale County Airport (area 1) to Claxton-Evans County Airport (area 2). The wind is from 090° at 16 knots, and the true airspeed is 90 knots.
The solution given in study mode: With a plotter, measure the true course from Allendale to Claxton-Evans as 212°. Use a flight computer to find the true heading of 203°. Add variation of 6°W. 203°+6°=209°.
• VORs are aligned to magnetic north when they are commissioned and then periodically adjusted. You should be able to use the compass rose to determine offset. Also, I see the line on the east side of the map you posted, but don't see the printed offset. – Ron Beyer Oct 3 '16 at 4:43
• By the way, I've seen this map "excerpt" before and there is an isogonic line just to the west of the Claxton airport with a 6°W variation marking. – Ron Beyer Oct 3 '16 at 14:23
• @Ron Beyer Also, when you say the VOR declination is "periodically adjusted", I think that period is probably pretty long, since all the airways, fixes, procedures, etc. would have to be changed to compensate. I doubt they do that often – TomMcW Oct 3 '16 at 19:49
• @David I can see the predicament. I was just pointing out that the solution of using the VOR's doesn't work. I'm watching this question to see if someone comes up with an answer because I'm wanting to know myself – TomMcW Oct 3 '16 at 23:02
• The angle that the isogonic line makes to the meridians is not related to the variation. – pericynthion Oct 3 '16 at 23:57
Real answer: The FAA screwed up in preparation of that exam question (this sometimes happens; they aren't perfect) and cropped the sectional to where the isogonal lines and or isogonal values are not visible. As it happens I know that area of the country pretty well.
It works out that Claxton has about a 6.5 deg W magnetic variation (get the full VFR sectionals for that area) and Varnville is just shy of 7deg W variation; I'd use 6.75 deg W as your number for mag variation over the duration of the flight. That works out to a course of 051 deg magnetic, 56.5 NM along the route. With winds at 290 at 18 kts cruising at a true airspeed of 85 kts, this gives an aircraft heading of 041 magnetic in cruise with a ground speed of 93 kts.
Now here's the thing: what is your groundspeed during climb out? That's going to also be necessary to calculate how much ground you cover during your 2 min climb to cruise. And critical to answering the question correctly.
Now, IF WE ASSUME a constant groundspeed of 93 kts, from point to point, that gives a flight time of 36.5 minutes. If we further assume the two minutes of climb are not enroute and just maneuvering from takeoff to on course right over the Claxton airport, and we descend into Varnville at 93 kts as well, we get a figure of 38.5 minutes total flight time. Therefore, I'd select 'B' as my answer.
As a side note, I landed an LSA out at Varnville only once and that place is virtually abandoned; it was difficult to identify as an airport from the air, the runway is anemic and derelict, there are virtually no buildings there and all I could find for traffic there was a decrepit C-150 rusting in the sun. I took off again - and very glad my engine didn't quit, leaving me stranded there.
• I think this answer is misleading because during the pre-flight phase, you would be acquiring your wind information from a briefing weather source, which is given in true rather than magnetic. The question doesn't ask for the Magnetic (course) Heading, so variation is not needed. Since the variation is so small near Savannah and the trip is so short though, it doesn't significantly affect the answer even if you incorporate it. – Daniel Oct 4 '16 at 4:02
• You should differentiate between "critical to answering this question correctly" and knowing the answer for your actual XC flight (ie not the written exam). These kinds of questions seem to only involve the distance between two points, a simple, constant wind, a simple, constant speed, and a small 'rule of thumb' adder (in this case, 2 minutes). The ground speed during climb is factored in for you via the 'rule of thumb' number -- it's not any more complicated than that. – Daniel Oct 4 '16 at 4:07
• The reason for trying to get to the bottom of this answer is I want to do well on the written exam. I know the questions isn't exactly a real scenario, but in order to pass the test, you have to get the answers correct. Several questions in the Sporty's practice exam use that map. The solutions they provide show you including the variation in the calculation, but the map does not provide that information. I was wondering if they messed up or I was blind. As a new pilot, I couldn't say for certain. Thanks for all the feedback. – David Oct 4 '16 at 5:32
• These are surprisingly close to a real scenario though! My point about the rule of thumb is that you'd need take into account your aircraft climb performance, etc in the real world, but on the test they make it easy with the 'add 2 minutes for climb' clause. A possible reason for this figure weirdness is that this question references 'Figure 24'... that's obviously not right, so perhaps it refers to an old edition of the Airman Testing Supplement... On the real test it seems unlikely that you'll run into this bad of a whopper! – Daniel Oct 4 '16 at 6:33
• That's my point, David. The guy who wrote that test question screwed up and cropped the image such that you can't see the isogonal lines or numbers. – Carlo Felicione Oct 4 '16 at 15:01
The answer is that magnetic variation is not a factor in this problem.
See this answer: When are winds given with respect to true vs. magnetic north?
Another potential non-real-world issue with this problem might come from this:
NOTE: Chart is not to scale and should not be used for navigation. Use associated scale.
So, use another piece of paper to figure out how far the two airports are, then use your plotter to find the True Heading-- do not use your plotter to measure the distance!
Here is my solution to your original problem. I'm not going to work out the second one since you already have the solution, and a real question should include the variation as clearly readable information somewhere, either on the chart or in an AFD entry.
I am using a method that doesn't require writing on the chart, which is not allowed on the FAA tests. In real life, you would draw your course on the map for reference during your flight.
First, find the distance between the two points. Mark it on the edge of one of your blank sheets:
I got 57 nautical miles, from the center of Claxton to the center of Hampton.
Estimate the true bearing from Claxton to Hampton. This is a northeast course, so we know the true course will be around 40-50 degrees.
Now put the Course Arrow on the two airports, and the hole of the plotter on the latitude line:
From your plotter, read off exactly 45 degrees from the third scale. You know to ignore the others because you expect the answer to be near 40 or 50 degrees.
Now pull out the whiz-wheel and follow the directions on it!
1. Set wind direction opposite TRUE INDEX
2. Mark W up from the Grommet (add 18kts)
1. Place true course under TRUE INDEX
2. Slide true airspeed under W (85kts)
3. Read Ground Speed under the Grommet
I get 91.5 kts GS.
Now you have all the information you need to do the calculation:
Remembering that a knot is a nautical mile per hour,
$${ 57nm \over 91.5kts } \times {60\;min \over hr} = 37.4 \; minutes$$
Now add two minutes for climb as directed in the problem, and you get a little over 39 minutes.
Note that you can ignore variation in this problem because both wind and course are calculated as true bearings. Since the problem doesn't ask what the Magnetic Heading or Course Heading are, that information isn't required.
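The plotter-and-whiz-wheel steps above can be cross-checked numerically. Here is a minimal Python sketch of the same wind triangle (my own illustration; the function and variable names are not from any navigation library), using the 57 nm distance and 045° true course measured above:

```python
import math

def wind_triangle(course_deg, tas, wind_from_deg, wind_speed):
    """Return (true heading, ground speed) needed to hold a desired true course."""
    a = math.radians(wind_from_deg - course_deg)
    # Wind correction angle: crab into the wind so the track stays on course.
    wca = math.asin(wind_speed * math.sin(a) / tas)
    heading = (course_deg + math.degrees(wca)) % 360
    # Ground speed = airspeed component along track minus the headwind component.
    gs = tas * math.cos(wca) - wind_speed * math.cos(a)
    return heading, gs

heading, gs = wind_triangle(45, 85, 290, 18)
ete = 57 / gs * 60 + 2  # minutes, including the 2-minute climb allowance
```

With these inputs the sketch gives a ground speed close to the 91.5 kt read off the whiz-wheel, and a bit under 40 minutes total, consistent with answer B.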
• I added more detail at the bottom of my question to show an example of where the map was used and variation was required. – David Oct 4 '16 at 16:43
• @David It is highly unlikely that you'll encounter that in the exam, where basically a typo or figure update has invalidated a question. – Daniel Oct 4 '16 at 19:28
• thanks, it's good to know. It's hard enough to learn without confusing stuff like that making you think you are doing something wrong. – David Oct 5 '16 at 0:50
• Thanks for the detailed answer. I think your comment that the problem is not likely to show up on the test makes the most sense. – David Oct 7 '16 at 22:26
|
|
# How many ways are there to place the ball in N different bags under given condition? [duplicate]
Possible Duplicate:
In how many ways we can put $r$ distinct objects into $n$ baskets?
I have a doubt about the following combinatorics question: There are N bags, and there are several balls of 4 colors (R, G, B, Y). Each bag should be filled with exactly one ball, subject to the following conditions:
- The colors of the balls in consecutive bags must be different.
- The colors of the balls in the first and last bags must be distinct too.
How many ways are there?
For N=2:
B-1|B-2
R G
R B
R Y
G R
G B
G Y
B R
B G
B Y
Y R
Y G
Y B
For N=3
B-1 B-2 B-3
R G B
R G Y
R B G
R B Y
R Y G
R Y B
G R B
G R Y
G B R
G B Y
G Y R
G Y B
B R G
B R Y
B G R
B G Y
B Y R
B Y G
Y R G
Y R B
Y G R
Y G B
Y B R
Y B G
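The listings above can be checked by brute force. A small Python sketch (my own verification code, not part of the original question); the counts also match the closed form (k−1)^N + (−1)^N·(k−1) for proper colorings of a cycle with k = 4 colors:

```python
from itertools import product

def count_fillings(n, k=4):
    # Count color sequences where consecutive bags differ, and first != last.
    return sum(
        1
        for seq in product(range(k), repeat=n)
        if all(seq[i] != seq[(i + 1) % n] for i in range(n))
    )

def closed_form(n, k=4):
    # Chromatic polynomial of the n-cycle evaluated at k colors.
    return (k - 1) ** n + (-1) ** n * (k - 1)
```

count_fillings(2) returns 12 and count_fillings(3) returns 24, matching the tables above.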
|
|
# How do you write 28/19 in simplest form?
Nov 16, 2016
$\frac{28}{19}$ is in the simplest form.
#### Explanation:
$\frac{28}{19}$ is in its simplest form
19 is a prime number and is not a factor of 28.
$\frac{28}{19}$ can be written as a mixed number as $1 \frac{9}{19}$
However, this is not really in a simpler form, it is just in a different form.
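As a quick check (my own illustration, not part of the original answer): a fraction is in simplest form exactly when the greatest common divisor of its numerator and denominator is 1.

```python
from math import gcd

# 19 is prime and does not divide 28, so the only common factor is 1
assert gcd(28, 19) == 1  # 28/19 is already fully reduced
```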
|
|
# Electric Flux through a hemisphere
1. Aug 30, 2007
### xaer04
1. The problem statement, all variables and given/known data
"What is the flux through the hemispherical open surface of radius R? The uniform field has magnitude E. Hint: Don't use a messy integral!"
$$\mid \vec{E} \mid= E$$
radius = $R$
2. Relevant equations
Electric Flux over a surface (in general)
$$\Phi = \int \vec{E} \cdot d\vec{A} = \int E \, dA \cos\theta$$
Surface area of a hemisphere
$$A = 2\pi r^2$$
3. The attempt at a solution
If it were a point charge at the center (the origin of the radius, $R$), all of the $\cos \theta$ values would be 1, making this as simple as multiplication by the surface area. The only thing that comes to mind for this, however, is somehow integrating in terms of $d\theta$ and using the angle values on both axes of the hemisphere:
$$\left( \frac{\pi}{2}, \frac{-\pi}{2}\right)$$
But i can't just stick an integral in there like that... can I? I'm really lost on this one...
2. Aug 30, 2007
### learningphysics
You don't need integrals... think of the flux lines... think of another area through which all the flux lines go, that are going through the hemisphere... flux = number of flux lines...
so if the flux lines that go through the hemisphere are the same flux lines that go through this other area, then the flux through that other area is the same as the flux through the hemisphere.
3. Aug 30, 2007
### xaer04
so... you're basically saying that shape doesn't matter and the answer is:
$$2\pi R^2 E$$
and was ridiculously simple, and my real flaw is I was trying to overcomplicate??? which I tend to do a lot...
and as suggested, I still want to think it has to be more complicated than this since it's not an enclosed surface, and it's not dealing with a point charge... and the flux at the center of the hemisphere's surface would be $E \cdot \,dA$ where the cosine would equal 1... whereas at the edge the cosine would be 0... and I would still have to account for everything in between... right? which is where I start thinking I would need to integrate. but maybe I'm missing the point here...
or maybe that doesn't matter because since it's a hemisphere and the electric field is going straight into it, meaning the y values cancel out in pairs so I shouldn't worry about cosine... but it's hitting the inside of the surface... blah, I just rambled a bunch, feel free to disregard.
Last edited: Aug 30, 2007
4. Aug 30, 2007
### learningphysics
No that isn't the answer... visualize all the flux lines going through the hemisphere... now imagine a flat surface through which those same flux lines go (and only those same flux lines)... what is the shape of this flat surface? What is its area? What is the flux through this new flat surface?
5. Aug 30, 2007
### learningphysics
I'm referring to the base of the hemisphere.
Assuming the field is oriented vertically, and the hemisphere is oriented in the same direction... the flux through the base (which is just a circle of radius R) is the flux through the hemisphere itself.
But is the field oriented in the same direction as the height of the hemisphere? The question doesn't say. I'm assuming it is... if it isn't then the problem is more complicated.
So it's $$\pi{R}^2E$$
Last edited: Aug 30, 2007
6. Aug 30, 2007
### proton
it seems to me to be 2πR²E IF the field lines are directed radially (spherically)
if the field lines are all vertical, then yes, the flux is πR²E
7. Aug 30, 2007
### genneth
Uniform electric field usually means a field that does not vary with position.
8. Aug 30, 2007
### xaer04
ah, I got it now... I didn't understand the concept of flux. Thanks much, everyone :)
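For anyone who wants to see the "messy integral" confirm the shortcut, here is a small numerical check (my own sketch, assuming the uniform field is aligned with the hemisphere's axis):

```python
import math

def hemisphere_flux(E, R, n=2000):
    # Midpoint-rule surface integral of E.n over the curved surface:
    # dA = R^2 sin(theta) dtheta dphi, and E.n = E cos(theta).
    dtheta = (math.pi / 2) / n
    total = 0.0
    for i in range(n):
        theta = (i + 0.5) * dtheta
        total += E * math.cos(theta) * R**2 * math.sin(theta) * dtheta * 2 * math.pi
    return total
```

The sum converges to πR²E, i.e. E times the area of the circular base, exactly as learningphysics argued.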
9. Nov 15, 2010
### antimatt3r
Hello, I'm antimatt3r, as you can see. What I understood is that you are trying to calculate the flux of a point charge through a hemisphere, with the point charge at O, the center of the hemisphere.
1. I don't want to apply Gauss's law because I'm not sure that I have a symmetric case, so I will use the classic integral method.
2. Φ = Φ(disc) + Φ(hemisphere)
3. Now, calculating the first flux, through the disc: Φ(disc) = ∬ E · dS. Over the whole disc, the unit vector u along E is perpendicular to the unit vector n, so cos(π/2) = 0, and therefore Φ(disc) = 0.
4. Now the second flux works out to q divided by epsilon zero.
So in the end we have a symmetric case, and voila, this is the flux; I will write down the demonstration later on.
But my real problem is: what is the flux of a positive point charge q placed somewhere on the axis of the hemisphere,
not just at the point O, but at a point R or R/2 or wherever? Please, I need the answer soon, and thanks.
10. Nov 15, 2010
### antimatt3r
I'm sorry about the 7th line; it should be a double integral (a surface integral).
|
|
# Homework Help: Simple harmonic motion of a body question
1. Jun 30, 2007
### srj200
1. The problem statement, all variables and given/known data
A body oscillates with simple harmonic motion along the x-axis. Its displacement varies with time according to the equation
A=Ai * sin(wt+ (pi/3)) ,
Where w = pi radians per second, t is in seconds, and Ai = 2.4m.
What is the phase of motion at t = 9.4 seconds? Answer in units of radians.
2. Relevant equations
A is the amplitude.
Ai is the initial amplitude.
w is actually "omega" but I didn't know how to enter that. That is the given angular velocity in rad/s.
Pi is 3.14.....
3. The attempt at a solution
I honestly don't know where to start. I just plugged into the equation with the given data and got
-1.78355 meters.
The answer wants radians. Also, it asks for the "phase of motion". The answer I got is just the final amplitude at the given time.
Any help would be appreciated.
Thanks.
2. Jun 30, 2007
### nrqed
The phase is simply the argument of the sine function, namely the $\omega t + \frac{\pi}{3}$ That's all there is to it.
3. Jun 30, 2007
### jamesrc
I think the "phase of motion" is the argument of the sine function (=ωt+φ)
So at time t=0, the phase of motion would just be the phase constant (in your problem, π/3). And I think your answer should be between 0 and 2π, so if you compute something larger than 2π, you should subtract multiples of 2π until you are in that range.
E.T.A.: Looks like I was too slow...and Greek letters don't work the way they used to...
4. Jun 30, 2007
### Staff: Mentor
You can trace the SHM motion through $2\pi$ radians of "phase" as the body moves past the origin, goes to maximum + displacement, returns to the origin, goes to maximum - displacement, and then back where it started. When the body crosses the origin, consider its phase to be 0; when it reaches maximum amplitude, phase = $\pi/2$; back to the origin, phase = $\pi$. Etc.
Hint: Consider the argument of the sine function.
(Looks like nrqed and jamesrc both beat me to it!)
5. Jun 30, 2007
### srj200
Thanks for the help. I got it.
Last edited: Jun 30, 2007
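Putting the replies together, the whole computation is one line. A short Python sketch (my own illustration, not from the thread):

```python
import math

w = math.pi          # angular frequency, rad/s
phi = math.pi / 3    # phase constant, rad
t = 9.4              # s

# Phase of motion = argument of the sine, reduced to [0, 2*pi)
phase = (w * t + phi) % (2 * math.pi)
```

The phase comes out to about 5.45 radians, which is 9.4π + π/3 reduced by four full turns of 2π.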
|
|
Home » 2011 » June » 17
# Daily Archives: June 17, 2011
## Learn Integral Calculus from Scratch
“Where can I learn integration fundamentals”
Just find below one of the best choice for you.
http://www.intmath.com/integration/integration-intro.php
Visit http://www.intmath.com/ for learning the fundamentals of Mathematics
## IIT JEE Physics Syllabus
“Can you tell me the IIT Physics Syllabus and how I should proceed to crack competitive exams, assuming that I am starting from square one?”
The IIT JEE Syllabus for Physics is available at http://jee.iitd.ac.in/physics.php
For preparation, the most important books you must study are the NCERT Physics texts for class XI and XII. Don't be misled by anyone saying that the NCERT books are not worthy. Almost all the entrance examinations depend on the NCERT syllabus and books to set the standard. The NCERT text book has evolved as a result of the strenuous effort of a group of hundreds of teachers from different parts of India and has been revised and updated after consultation and review workshops including teachers from every state. So, it is better to depend on it.
If you have mastered the book and can solve each and every exercise and additional exercise from the NCERT text book, then be glad that you have reached the level expected of you.
The next book you should take for reference is Fundamentals of Physics by Resnick and Hallidey. (This is the book many of the teachers and authors used to follow before the present revised edition of NCERT text book)
While preparing for IIT or any other competitive exams, it is important to understand the concept basically rather than to practice a large number of numerical problems.
A Chinese proverb says:
“Don’t give fish to a hungry man, teach him how to fish. He will not be hungry again!”
So, our aim is to practice the tactics and methodology with a strong foundation of the subject so that we can solve any type of conceptual or numerical problem.
Dare ask doubts to your teachers. (If you don’t dare, just post them here at www.askphysics.com ). Think and analyse facts yourselves and never depend on mere rot learning. Doubts pop up when you start thinking on what you learn. What you learn must be a part of your brain and life.
Wish you good luck
## Projectile motion on an inclined plane
“Can you tell me about projectile motion on an inclined plane?”
Dear Sourav,
The question is already discussed in detail in the following sites. Please go through them. If you still have doubts, I can explain them in simpler terms.
1. http://www.goiit.com/posts/list/mechanics-projectile-motion-in-inclined-plane-263.htm
2. http://cnx.org/content/m14614/latest/
## Mass of Proton
“how can we say that charge and mass of a proton has a constant value if it depends on the nature of the gas taken”
Answer: The mass and charge of the proton do not depend on the nature of the gas or material. The charge-to-mass ratio measured for positive rays varies with the gas only because different gases produce different ions; the proton itself always has the same mass and charge.
## Factors affecting frequency of sound produced by a stretched string
Study how the frequency of the sound produced will change in each case with the following strings of length 15 cm, when the strings are tied between two ends:
• aluminium string
• copper string
• cotton string
• metallic string
• jute string
Also study how the pitch changes when the strings are made taut and loose. Study how the frequency of sound changes with the thickness of the following strings:
• cotton strings
• copper strings
This seems to be a homework question or a project question. Therefore I am not giving a detailed answer, so as not to tamper with the basic aim of assigning a project.
The frequency of sound produced by a stretched string depends on the following factors:
1. the length of the string
2. the linear mass density (i.e; the mass per unit length) of the string
3. the tension in the string
When you are using strings of different materials, the factor which changes is the mass per unit length and the same is true when you are changing the thickness.
When you make the string more taut, the tension increases and vice versa.
The question is given for a constant length. Therefore the case of effect of changing length does not come into picture.
The formula showing the relationship is $f=\frac{1}{2L}\sqrt{\frac{T}{m}}$
it is evident from the formula that the frequency of sound is
• inversely proportional to the length
• directly proportional to the square root of tension in the string and
• inversely proportional to the square root of linear density of the string.
on proper substitution, the formula can be recast as
$f=\frac{1}{Ld}\sqrt{\frac{T}{\pi \rho }}$
and this will be more convenient for you to answer the questions.
I recommend that you try to explore by actually performing the experiments.
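To experiment on paper before touching real strings, the recast formula can be evaluated directly. A hedged Python sketch (the copper-string numbers for length, diameter, tension and density below are my own illustrative assumptions, not values from the project):

```python
import math

def string_frequency(L, d, T, rho):
    # f = (1 / (L * d)) * sqrt(T / (pi * rho))  -- the recast formula above
    return (1.0 / (L * d)) * math.sqrt(T / (math.pi * rho))

# Illustrative values: a 15 cm copper string, 0.5 mm diameter, 10 N tension,
# density of copper about 8960 kg/m^3 (all assumed for the example)
f = string_frequency(L=0.15, d=0.5e-3, T=10.0, rho=8960.0)
```

Doubling T raises f by a factor of √2, while a thicker (larger d) or denser (larger ρ) string lowers it, exactly the proportionalities listed above.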
|
|
# Downsampling
Member Posts: 50
edited July 2012
Dear all,
I would like to ask you about some inconsistencies (maybe!) that I am observing regarding the downsampling of reads in GATK.
This is the first time I am dealing with real high coverage (several 1000/sample) and it starts to matter for me to know how to adjust these parameters.
All of the below is from GenomeAnalysisTK-1.6.596/GenomeAnalysisTKLite.jar.
1. I was anticipating the need for some tuning, so I read the CommandLine GATK documentation, where three options related to this issue are listed:
--downsample_to_coverage
--downsample_to_fraction
--downsampling_type
All of them have NA as default value, so I assumed, that all my reads would count for the variant calling.
2. In the documentation for the UnifiedGenotyper, there are no options related to downsampling.
My command:
java -jar GenomeAnalysisTKLite.jar -T UnifiedGenotyper -R ref.fa -I 1.bam -I 2.bam -o GATK.vcf -glm BOTH -stand_call_conf 1 -stand_emit_conf 1
3. Neither are in the VariantAnnotator which I used subsequently on the UG output.
My command:
java -jar GenomeAnalysisTKLite.jar -T VariantAnnotator -R ref.fa -I 1.bam -I 2.bam -o GATK_anno.vcf --variant GATK.vcf -A DepthPerAlleleBySample -A BaseCounts -A AlleleBalanceBySample -A AlleleBalance -A DepthOfCoverage -A SampleList
Nevertheless, my variant-called and annotated vcf file features the following:
1. In the header, I see the full set of options used for the UG and the annotations:
UnifiedGenotyper="analysis_type=UnifiedGenotyper input_file=[F323/lib.218C/C0TV2ACXX_7_12_1.fastq_process_MQ35.bam, F325/lib.219C/C0TV2ACXX_7_13_1.fastq_process_MQ35.bam] read_buffer_size=null phone_home=STANDA
RD gatk_key=null read_filter=[] intervals=null excludeIntervals=null interval_set_rule=UNION interval_merging=ALL interval_padding=0 reference_sequence=/project/production/Indexes/samtools/hsapiens_v37_chrM.fa no
nDeterministicRandomSeed=false downsampling_type=BY_SAMPLE downsample_to_fraction=null downsample_to_coverage=250 baq=OFF baqGapOpenPenalty=40.0 performanceLog=null useOriginalQualities=false BQSR=null
[etc]
VariantAnnotator="analysis_type=VariantAnnotator input_file=[F323/lib.218C/C0TV2ACXX_7_12_1.fastq_process_MQ35.bam, F325/lib.219C/C0TV2ACXX_7_13_1.fastq_process_MQ35.bam] read_buffer_size=null phone_home=STANDA
RD gatk_key=null read_filter=[] intervals=null excludeIntervals=null interval_set_rule=UNION interval_merging=ALL interval_padding=0 reference_sequence=/project/production/Indexes/samtools/hsapiens_v37_chrM.fa no
nDeterministicRandomSeed=false downsampling_type=BY_SAMPLE downsample_to_fraction=null downsample_to_coverage=1000 baq=OFF baqGapOpenPenalty=40.0 performanceLog=null useOriginalQualities=false BQSR=null
[etc]
2. My INFO and FORMAT fields for one position look like this:
ABHom=0.997;AC=4;AF=1.00;AN=4;BaseCounts=1990,4,5,0;BaseQRankSum=-0.059;DP=2000;DS;Dels=0.00;FS=0.000;HaplotypeScore=6.1837;MLEAC=4;MLEAF=1.00;MQ=35
1/1:1,998:250:99:8592,700,0
1/1:4,992:250:99:8405,719,0
Due to the different thresholds applied by UG and VA, I now have these inconsistent values for per-sample AD and DP.
Is this supposed to work like this, or am I using a "non-canonical" sequence of operations by following up the UG with the VA?
Is there a way to switch OFF downsampling?
Many thanks as always for your comments!
cheers,
Sophia
Hi Sophia,
Downsampling is performed by the GATK engine, so the command-line argument that controls it lives in the
CommandLineGATK
The reason you're seeing different behaviors is that some walkers like the UG override the engine's default downsampling setting; for example the UG sets downsampling to 250 by default (sorry that wasn't specified, we'll make sure to add a note about that in the UG docs in the future). But you can manually override this by using the -dcov CommandLineGATK argument (as documented in the link above).
Geraldine Van der Auwera, PhD
Turning off downsampling exposes you to major performance problems in regions of excess coverage. We don't recommend you play with this parameter
--
Mark A. DePristo, Ph.D.
Co-Director, Medical and Population Genetics
Broad Institute of MIT and Harvard
• Member Posts: 50
edited August 2012
I tried the following two settings now for both the UG and the VA:
--downsampling_type BY_SAMPLE --downsample_to_coverage 1000
This produced the expected outcome, namely a global DP of 1000xn(samples) in the INFO field, 1000 in the per-sample DP field and AD per sample numbers that added up to 1000.
--downsampling_type ALL_READS --downsample_to_coverage 10000
Here, I was expecting GATK to downsample all reads from all samples to a total of 10,000 per position. Instead, only positions where any single sample reached a depth of 10,000 were printed, e.g.:
AC=2;AF=1.00;AN=2;BaseQRankSum=6.858;DP=170360;DS;Dels=0.00;FS=8.789;HaplotypeScore=6809.3095;MLEAC=2;MLEAF=1.00;MQ=35.00;MQ0=0;MQRankSum=1.569;QD=0
.19;ReadPosRankSum=1.513;SB=-3.277e+04 GT:AD:DP:GQ:PL ** ./. ** 1/1:63,170165:170355:99:32767,32767,0 ./.
@Mark_DePristo: I am aware of the problems excess coverage can cause; in this case, though, I am dealing with a very short reference sequence, and am interested in mosaic-like variant configurations.
|
|
Varying Force
1. Mar 15, 2015
brycenrg
1. The problem statement, all variables and given/known data
A 230kg crate hangs from the end of a 12.0 m rope. You push horizontally on the crate with a varying force F to move it 4.00m to the side.
What is the magnitude of F when the crate in the final position?
During the displacement, what are the work done on it, the work done by the weight of the crate, and the work done by the pull on the crate from the rope?
Knowing that the crate is motionless before and after displacement, use the answers to find the work your force does on the crate.
Why is the work of your force not equal to the product of the horizontal displacement and the initial magnitude of F?
2. Relevant equations
F = ma
W = Fd·cosθ
3. The attempt at a solution
Magnitude of F is Fx = Tsin(Θ) = Fp
Work done by tension on the crate changes with (theta)
Work by weight = 0 because F is parallel to displacement
Work = ∫ from 0 to 4 of (T sin(Θ)) dΘ. How do I do this? Is this correct?
The work of your force, is not equal to the product of horizontal displacement and the initial magnitude of F because F is varying. Also It is the sum of all the F*d from 0 to 4 m
2. Mar 15, 2015
Suraj M
When you are integrating for work, you're saying that it runs from 0 to 4, which is the displacement, but the integrand has a $d\theta$, so you can't integrate that directly. You should either express $\theta$ in terms of the displacement or, more easily, find the angle corresponding to a 4 m displacement and then integrate from 0 to that angle.
3. Mar 15, 2015
brycenrg
Thank you, that makes sense. So ∫ from 0 to 18.4 degrees.
One extra question: is it right for me to assume the function for the force is T sin(θ)?
4. Mar 17, 2015
Suraj M
I'm not 100% sure. It should work, but what's the value of your T (tension)? Is it constant? I doubt it.
5. Mar 17, 2015
PhanthomJay
I don't think you calculated the angle correctly. When the crate is displaced 4 m horizontally, the cord length is still 12 m.
There is more than one force acting on the crate.
Which of those forces do work?
What is the total net work done by all forces? (HINT: use work-energy theorem.)
You don't have to use calculus.
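PhanthomJay's energy approach can be checked with a few lines of arithmetic. A quick sketch (assuming g = 9.8 m/s²; note that sin θ = 4/12 gives θ ≈ 19.5°, whereas tan θ = 4/12 would give the 18.4° quoted earlier in the thread):

```python
import math

m, g = 230.0, 9.8   # crate mass (kg), gravitational acceleration (m/s^2)
L, x = 12.0, 4.0    # rope length (m), horizontal displacement (m)

theta = math.asin(x / L)        # rope angle in the final position (~19.47 deg)

# Static equilibrium in the final position: F = T sin(theta), m*g = T cos(theta)
F = m * g * math.tan(theta)     # ~797 N

h = L * (1 - math.cos(theta))   # height the crate rises along the arc (~0.69 m)
W_gravity = -m * g * h          # weight does negative work as the crate rises
W_tension = 0.0                 # tension stays perpendicular to the motion

# The crate is at rest before and after, so the net work on it is zero:
W_push = -(W_gravity + W_tension)   # ~1547 J

print(round(F, 1), round(W_push, 1))
```

The push's work (about 1547 J) is neither 0 (the initial F, which is zero, times the displacement) nor about 3188 J (the final F times the displacement), because F grows from zero to roughly 797 N as the angle increases.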
|
|
# Law of Reciprocal Proportion
## Introduction to Law of Reciprocal Proportion
Last updated date: 28th Mar 2023
In the late 18th century, the German chemist Jeremias Richter devised a simple method for comparing compounds and predicting how two elements will combine to form another chemical.
Law of Reciprocal Proportion
Jeremias Richter put forward the law of reciprocal proportions in 1792. According to the law, we may calculate the proportion of elements in compound AC if we know the proportions of elements in compounds AB and BC. This law aided our understanding of stoichiometry, the calculation of the amounts of reactants and products involved in chemical reactions.
## History of The Scientist
Jeremias Benjamin Richter (1762 - 1807)
Name: Jeremias Benjamin Richter
Born: 10 March 1762
Died: 4 May 1807
Field: Chemist
Nationality: German
## What is the Law of Reciprocal Proportion?
The law of reciprocal proportions is also referred to as the law of equivalent proportions or the law of permanent ratios.
According to Richter, when two elements each combine separately with a fixed mass of a third element, the ratio of the masses in which they do so is either the same as, or a simple whole-number multiple of, the ratio in which they combine with each other.
## Law of Reciprocal Proportion Examples
| S. No | Compound | Combining Elements | Combining Weights |
|-------|----------|--------------------|-------------------|
| 1 | CH₄ | C, H | 12, 4 |
| 2 | CO₂ | C, O | 12, 32 |
• Take methane first. Hydrogen has an atomic weight of 1 g/mol and carbon 12 g/mol, and there are four hydrogen atoms for every carbon atom, so 12 g of carbon combines with 4 g of hydrogen, a 3:1 ratio. In carbon dioxide, 12 g of carbon combines with 32 g of oxygen, a 3:8 ratio.
• Hydrogen and oxygen therefore each combine with the same fixed mass of carbon (12 g), in the ratio 4 : 32, that is, 1 : 8. The law predicts that when hydrogen and oxygen combine directly with each other, their masses should be in this ratio or a simple whole-number multiple of it.
• In water, 2 g of hydrogen combines with 16 g of oxygen, which is again 1 : 8, so the prediction holds.
• For another illustration, start with sodium chloride. The atomic weights of sodium and chlorine are 23 and 35.5 g/mol, so sodium and chlorine combine in the ratio 23 : 35.5.
• Now consider hydrochloric acid, in which hydrogen and chlorine combine in the ratio 1 : 35.5. Chlorine is the common element, so the law predicts that sodium and hydrogen should combine in the ratio 23 : 1, and indeed they do, forming sodium hydride (NaH).
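These ratios take only a few lines of arithmetic to verify. A minimal sketch of the classic carbon/hydrogen/oxygen check, using the rounded atomic weights from the text (carbon is the common element, with which both hydrogen and oxygen combine):

```python
# Masses of H and O that each combine with a fixed 12 g of carbon:
h_with_c = 4.0    # g of hydrogen in CH4 (12 g C : 4 g H)
o_with_c = 32.0   # g of oxygen in CO2 (12 g C : 32 g O)

# Prediction of the law: H and O should combine directly with each other
# in this ratio, or a simple whole-number multiple of it.
predicted = h_with_c / o_with_c      # 4/32 = 0.125, i.e. 1:8

# Observed directly in water (H2O): 2 g of H per 16 g of O.
observed = 2.0 / 16.0                # 0.125, i.e. 1:8

print(predicted == observed)         # True: the ratios agree
```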
## Limitations of Law of Reciprocal Proportion
• As with the law of constant proportions, an element's isotopes produce deviations from the law. The compounds being compared should therefore be prepared from the same isotope, or from the same mixture of isotopes.
• The law applies only to the limited set of cases in which the two elements combine both with the third element and with each other.
## Applications of Law of Reciprocal Proportion
This law underpins our understanding of stoichiometry, the calculation of the quantities of reactants and products involved in a reaction. The law makes sense in modern terms because every element has a definite atomic weight, and elements combine in whole-number ratios of atoms to form compounds.
## Solved Examples
1. What made the law of reciprocal proportions significant?
1. It aided in the discovery of novel chemicals
2. It contributed to our current understanding of stoichiometry.
3. It assisted scientists in determining complex sizes.
4. The periodic table was made possible by it.
Ans: The correct answer is option B. The law of reciprocal proportions plays an important role in understanding the basic rules of stoichiometry.
2. Carbon dioxide contains 27.27% carbon, carbon disulfide contains 15.79% carbon, and sulphur dioxide contains 50% sulphur. Show clearly how these data exemplify the law of reciprocal proportions.
Ans:
Let us take Carbon dioxide.
The percentage of carbon is 27.27
Percentage of oxygen (100 – 27.27) is 72.73
27.27 g of carbon reacts with 72.73 g of oxygen.
Hence, 1 g of carbon combines with $\frac{72.73}{27.27}$ = 2.67 g of oxygen.
Let us take Carbon disulfide.
Percentage of carbon = 15.79
Percentage of sulphur (100 – 15.79) is 84.21
15.79 g of carbon reacts with 84.21 g of sulphur.
Hence, 1 g of carbon reacts with $\frac{84.21}{15.79}$ = 5.33 g of sulphur.
The ratio of different masses of sulphur and oxygen joining with a fixed mass of carbon is 5.33: 2.67.
That is, 2: 1. —> [1]
Let us take Sulphur Dioxide,
Percentage of sulphur = 50
Percentage of oxygen = 100 – 50 = 50
50 g of sulphur reacts with 50 g of oxygen.
The ratio of the mass of sulphur to oxygen is 50: 50,
That is, 1:1 —> [2]
The first ratio (2 : 1) is a simple whole-number multiple of the second (1 : 1), so the data illustrate the law of reciprocal proportions.
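The same bookkeeping can be scripted. A small sketch that reproduces the numbers in the worked example above:

```python
# Mass percentages from the problem statement:
c_in_co2 = 27.27      # % carbon in carbon dioxide
c_in_cs2 = 15.79      # % carbon in carbon disulfide
s_in_so2 = 50.0       # % sulphur in sulphur dioxide

# Mass of O and S combining with 1 g of carbon:
o_per_c = (100 - c_in_co2) / c_in_co2      # ~2.67 g of O per g of C
s_per_c = (100 - c_in_cs2) / c_in_cs2      # ~5.33 g of S per g of C
ratio_via_carbon = s_per_c / o_per_c       # ~2.0, i.e. 2:1

# Direct S:O ratio in sulphur dioxide:
ratio_direct = s_in_so2 / (100 - s_in_so2) # 1.0, i.e. 1:1

# 2:1 is a whole-number multiple of 1:1, as the law requires.
print(round(ratio_via_carbon, 2), ratio_direct)
```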
3. Different oxygen content ratios in the various nitrogen oxides demonstrate the following law:
1. The Law of reciprocal proportions
2. The Law of multiple proportions
3. The Law of constant proportions
4. The Law of conservation of mass
Ans: Option B offers the right response. The different proportions of oxygen in the various oxides of nitrogen demonstrate the law of multiple proportions, which states that when two elements combine in more than one proportion to form two or more compounds, the weights of one element that combine with a fixed weight of the other are in a ratio of small whole numbers.
## Important Points to Remember
• The law of reciprocal proportions is also referred to as the law of equivalent proportions or the law of permanent ratios.
• It is one of the fundamental laws of stoichiometry, along with the laws of definite and multiple proportions.
• The German chemist Jeremias Richter proposed the law in 1792.
## Conclusion
With the help of this law, tables of equivalent weights of the elements could be drawn up, and nineteenth-century chemists used these equivalent weights extensively. Two further stoichiometric laws are the law of definite proportions, which fixes the composition of any compound formed between elements A and B, and the law of multiple proportions.
## FAQs on Law of Reciprocal Proportion
1. What is the historical background behind the law of reciprocal proportions?
Richter developed the law of reciprocal proportions while studying the proportions in which acids neutralise bases. In the early 19th century, Berzelius investigated the principle that when two substances A and B each have affinities for two other substances C and D, the amounts of C and D that saturate a given quantity of A are in the same ratio as the amounts of C and D that saturate the same quantity of B. Later, Jean Stas demonstrated that these stoichiometric laws hold within the bounds of experimental error.
2. Is the law of reciprocal proportions a rule for combining chemicals?
Jeremias Richter formulated the law of reciprocal proportions in 1792. It states that if two different elements each combine with the same fixed weight of a third element, the ratio of their masses in doing so is either equal to, or a simple whole-number multiple of, the ratio in which they combine with each other. According to Dalton's atomic theory, the atoms of each element are identical, and compounds are formed by joining atoms of different elements in whole-number ratios. The weights of two elements that combine with a fixed weight of a third element should therefore stand in a simple ratio to the weights in which those elements combine with each other.
3. What is the atomic hypothesis of Dalton?
John Dalton, an English physicist and chemist, proposed the atomic hypothesis in 1808 as a scientific theory about the composition of matter. It held that all substances are made up of tiny, indivisible particles called atoms. Dalton argued that the idea of atoms explains the law of definite proportions and the law of conservation of mass. He described atoms as solid, massy, hard, impenetrable, moving particles.