If $f(x) = \Omega(n)$ and $g(x) = O(n)$, what would be the order of growth of $f(x) \cdot g(x)$? First I figured it should be $\Theta(n)$, as the two extremes would cancel each other and the order of growth would be the same as $n$. But where I came across this question, the answer given was $\Omega(n)$, and no proof was mentioned. Well, I didn't understand why, but intuitively I convinced myself as follows: "you can't know for sure the upper limit of growth for $f(x) \cdot g(x)$, so you can't say it's $O(n)$, but you can be sure that it won't grow slower than $n$, so it's $\Omega(n)$." Can someone help me in understanding this, in a more believable way?
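One way to see why $\Theta(n)$ cannot be concluded is a concrete counterexample (assuming, as is usual for running times, that $g(n) \ge 1$ for the lower-bound part):

```latex
\text{Take } f(n) = n^2 \text{ and } g(n) = n.\quad
\text{Then } f(n) = \Omega(n) \text{ and } g(n) = O(n), \text{ yet}
\[
  f(n)\,g(n) = n^3 = \Omega(n) \quad\text{but}\quad n^3 \neq O(n),
\]
\text{so the product need not be } O(n).
\text{For the lower bound: if } f(n) \ge c\,n \text{ for } n \ge n_0
\text{ and } g(n) \ge 1, \text{ then } f(n)\,g(n) \ge c\,n,
\text{ i.e. } f(n)\,g(n) = \Omega(n).
\]
```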
Polarized Light

Earlier, we discussed polarization as the direction of a wave's displacement with respect to its motion. We mentioned that an electromagnetic wave is transversely polarized, which is fairly unambiguous because both the electric and magnetic fields oscillate perpendicular to the direction the wave travels. From here on, "polarization" may also mean the orientation of the plane of polarization. Recall that the plane of polarization of the wave is the plane containing the direction of motion and the direction of electric field oscillation. Light that is "polarized" in this sense is composed of electromagnetic waves that all oscillate in the same direction. For example, vertically polarized light would have its electric field oscillating up and down. On one level we can just take this as a definition, but it is useful to note why we chose to define the polarization this way instead of using the magnetic field. For a wave oscillating in free space it does not make much difference whether we define the plane of polarization as the plane of oscillation of the electric field or the magnetic field. In fact, for an electromagnetic wave traveling in a vacuum it does not matter much what the polarization is at all. It is when this wave interacts with matter (when it's absorbed, scattered, reflected, or refracted) that the polarization becomes important. So why choose the electric field over the magnetic field for defining the polarization? When a light ray hits an object at rest, to a good approximation the charges in it are at rest themselves, so the magnetic field from the wave exerts essentially no force on them. The electric field, however, can exert a force on the charges and get them to move. Even once the charges are moving, the magnetic field does not play an important part, because the particles must be traveling slower than the speed of light \(c\), so the magnetic force is smaller than the electric force by roughly a factor of \(v/c\).
The magnitudes of the fields in an electromagnetic wave are related by \(E_0 = c B_0\), so we see that the magnitude of the electric force on the charges is much larger than that of the magnetic force on the charges! Furthermore, we can easily figure out the direction of the electric force on particles: the force on a charge is in the direction of the field for a positive charge, or against the field for a negative charge. Compare this to the fifty-nine (or so) rules that we needed to learn for magnetic force! We choose to talk about the polarization in terms of the electric field because the electric force is much more convenient to discuss.

Polarizers

The polarization of a wave depends on how the wave was produced and what it interacted with. Lasers typically (but not always) produce polarized beams. Light from thermal sources (such as an incandescent light bulb or the sun) is produced by the random vibrations of atoms, so the light is polarized completely differently at different times. We call these sources unpolarized. Some polarizations are preferentially reflected or absorbed by matter, so it's possible to polarize light by shining it onto a special kind of material called a polarizer. There are many materials which polarize light. Synthetic plastics (called polaroids) and natural crystals are common examples. The common feature among these materials is that they all have long linear chains of atoms which are oriented in one direction. Electrons in these media can travel more easily along the direction of the atomic chains. This allows electric fields oriented along the atomic chains to transfer their energy to the electrons in the medium. The component of the electric field which is perpendicular to the atomic chains cannot give energy to the electrons, because the electrons cannot move in that direction. This means the wave component aligned with the atomic chains is absorbed, while the perpendicular component is transmitted.
If we are given any polarization of light we can break it into components: a component for which the polarization of the wave is parallel to the chains of molecules (which will be absorbed by the electrons) and a component which is perpendicular to the chains of molecules (which will pass through, as it cannot be absorbed). We call the parallel axis the absorption axis, because this component of the light is absorbed. We call the perpendicular direction the transmission axis or polarizer axis, because light that passes through the polarizer is polarized in this direction. The image above is a diagram of a polarizer. Because the electrons cannot oscillate in the direction of the polarizer axis, any light that passes through this polarizer will come out polarized in that direction (horizontally, in this diagram). In general we may be interested in how much light gets through a given polarizer if the incident light has a given intensity \(I_0\). If the electric field is \(E_0\) and the angle between the polarization plane of the light and the polarizer axis is \(\theta\), then we can break the field into two parts: \[E_{absorbed} = E_0 \sin \theta\] \[E_{trans} = E_0 \cos \theta\] The example below shows how these quantities affect transmitted intensity.

Example #1

Light polarized along a vertical axis, traveling into the page, hits a polarizer. The polarizing axis is 30° from the vertical. What is the intensity of the light coming out of the polarizer as a fraction of the intensity of incoming light? Which way is the light polarized?

Solution

We will call the amplitude of the electric field of the original wave \(E_0\), and we know that the initial electric field oscillates vertically. The polarization axis and the electric field are shown in the picture above. First we break the electric field into a part along the polarizer axis (which will be transmitted) and a part along the atomic chains of the material (which will be absorbed).
The electric field that makes it through the polarizer has a magnitude \(E_{trans} = E_0 \cos 30°\). The intensity of the light that makes it through the polarizer is \[I = \dfrac{1}{2 \mu_0 c} (E_{trans})^2 = \dfrac{1}{2 \mu_0 c} {E_0}^2 \cos^2 30° = \left( \dfrac{1}{2 \mu_0 c} {E_0}^2 \right) \left( \dfrac{3}{4} \right) \] \(E_0\) is the initial magnitude of the field, so \({E_0}^2/(2 \mu_0 c)\) is the initial intensity \(I_0\). Therefore we have \[I = \dfrac{3}{4} I_0\] So the outgoing light is polarized 30° from the vertical and has 3/4 the initial intensity. As you just saw, the intensity of the light coming out is proportional to the square of the transmitted electric field. We then have \[I_{out} = I_0 \cos^2 \theta\] This equation is sometimes referred to as Malus's law. If unpolarized light travels through a polarizer, \(I_{out}\) is always half the initial intensity, because the average of \(\cos^2 \theta\) over all angles is 1/2.

Contributors: Authors of Phys7C (UC Davis Physics Department)
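Malus's law is simple enough to put into a few lines of code; here is a minimal sketch (the function name is ours, not from the text):

```python
import math

def transmitted_intensity(I0, theta_deg):
    """Malus's law: I_out = I0 * cos^2(theta), for polarized light hitting
    an ideal polarizer whose axis is theta degrees from the polarization."""
    return I0 * math.cos(math.radians(theta_deg)) ** 2
```

For the 30° example above this gives \(0.75\,I_0\), matching the worked solution; averaging \(\cos^2\theta\) over all angles gives the factor 1/2 quoted for unpolarized light.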
We shall show that, if at least one of $|a_1|,\ldots,|a_k|$ is larger than one, then the sequence $\displaystyle b_n=\frac{1}{n}(a_1^n+\cdots+a_k^n)$ is not bounded, and hence $\displaystyle\sum_{n=1}^\infty\frac{1}{n}(a_1^n+\cdots+a_k^n)$ diverges. Assume, without loss of generality, that$$|a_1|=\cdots=|a_m|=r>s=|a_{m+1}|\ge |a_{m+2}|\ge\cdots\ge|a_k|,$$where $r>1$. We shall use the following lemma (its proof is postponed).

Lemma. If $w_1,\ldots,w_m\in \mathbb C$, with $|w_1|=\cdots=|w_m|=1$, then the sequence $z_n=w_1^n+\cdots+w_m^n$ does not tend to zero. Hence, there exists an $\eta>0$ such that $|z_n|\ge \eta$ for infinitely many $n$'s.

So, writing $a_1=rw_1,\ldots,a_m=rw_m$, the Lemma provides an $\eta>0$ and infinitely many $n$'s such that$$|a_1^n+\cdots+a_m^n|=r^n|w_1^n+\cdots+w_m^n|\ge \eta r^n,$$and hence, for infinitely many $n$'s,$$|a_1^n+\cdots+a_k^n|\ge |a_1^n+\cdots+a_m^n|-|a_{m+1}^n+\cdots+a_k^n|\ge r^n|w_1^n+\cdots+w_m^n|-(k-m)s^n \\\ge \eta r^n-(k-m)s^n=\eta r^n \left(1-\frac{k-m}{\eta}\Big(\frac{s}{r}\Big)^n\right)>\frac{\eta}{2}r^n,$$where the last inequality holds for sufficiently large $n$, as $s/r<1$. Since $r>1$, we have $\frac{\eta}{2}\frac{r^n}{n}\to\infty$, and therefore $b_n$ is unbounded.

Proof of the Lemma. Assume that $z_n\to 0$ as $n\to\infty$. The terms $w_1,\ldots,w_m$ do not have to be distinct. Say there are only $\ell$ different complex numbers in the set $\{w_1,\ldots,w_m\}$; without loss of generality $w_1,\ldots,w_\ell$ are different from each other, and they appear $j_1,\ldots,j_\ell$ times, respectively (with $j_1+\cdots+j_\ell=m$).
We have$$j_1w_1^n+\cdots+j_\ell w_\ell^n=z_n,\\w_1 j_1w_1^n+\cdots+w_\ell j_\ell w_\ell^n=z_{n+1},\\ \cdots\\w_1^{\ell-1} j_1w_1^n+\cdots+w_\ell^{\ell-1} j_\ell w_\ell^n=z_{n+\ell-1}.$$ We view the above as an $\ell\times\ell$ linear system with unknowns $j_1 w_1^n,\ldots,j_\ell w_\ell^n$; the system matrix $A=(w_i^{j-1})_{i,j=1,\ldots,\ell}$ is a Vandermonde matrix, which is invertible since $w_1,\ldots,w_\ell$ are different from each other. Hence$$(j_1w_1^n,\ldots,j_\ell w_\ell^n)^T=A^{-1}(z_n,\ldots,z_{n+\ell-1})^T.$$So, if $z_n\to 0$, then so does the right-hand side of the above, and hence also the left-hand side. But $|j_iw_i^n|=j_i|w_i|^n=j_i\ne 0$ for all $i=1,\ldots,\ell$, so the left-hand side cannot tend to zero. Contradiction. This concludes the proof of the Lemma.
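A quick numerical illustration of the Lemma's conclusion (an example, not part of the proof): for $w_1 = 1$, $w_2 = -1$ the sequence $z_n = w_1^n + w_2^n$ alternates between $0$ and $2$, so it does not tend to zero, yet $|z_n| \ge \eta$ holds only for infinitely many $n$, not for all $n$:

```python
# w1 = 1 and w2 = -1 lie on the unit circle; z_n = w1**n + w2**n.
w = [1.0, -1.0]
z = [sum(wi**n for wi in w) for n in range(1, 11)]
# z alternates 0, 2, 0, 2, ...: it never converges to 0, but it is not
# bounded away from 0 for *all* n -- only for infinitely many n,
# which is exactly what the Lemma asserts.
```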
My understanding is that a KDF is a function that takes a master secret and generates multiple keys, and that it is secure as long as the keys are "independent". If this is true, the following definition would give a secure KDF, right? Assuming we have access to a completely random function $R: \mathcal{K} \rightarrow \mathcal{K}$, we can define a $KDF : \mathcal{K} \rightarrow \mathcal{K}^n$ that on input $S \in \mathcal{K}$ performs the following: $$K_1 = R(S \oplus 1)$$ $$K_2 = R(S \oplus 2)$$ $$\vdots$$ $$K_n = R(S \oplus n)$$ Output $(K_1, K_2, \ldots, K_n)$. Now of course, we cannot construct random functions, so what we do instead is replace $R$ by a pseudorandom function that no "efficient" adversary can distinguish from random.

Questions: Does this mean that replacing $R$ by, e.g., $AES(k, \cdot)$, where $k$ is fixed, would also give a secure KDF? Does $k$ need to be chosen with some care, or can it be any value? Would $AES(\cdot, m)$ for a fixed $m$ be equivalent? Does this mean that replacing $R$ by any secure hash function would also give a secure KDF? If that is the case, why do some KDF proposals use HMAC instead? Is this only to get a larger "security margin", and to be less dependent on the security of the used hash function? Would replacing XOR above with concatenation (which is possible for the hash-function case but not for the block cipher case) affect security? If I want to implement a secure KDF, and already have access to AES, SHA1-512, and HMAC, how would I do it such that it is simple but yet secure?
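As a concrete illustration of the counter-based construction in the question (a sketch, not a recommendation): here HMAC-SHA256 plays the role of the PRF, keyed by the master secret $S$, with the counter fed in as the message. Note this uses concatenation of the counter rather than XOR, in the spirit of the last question; the function name is ours.

```python
import hmac
import hashlib

def derive_keys(master_secret: bytes, n: int) -> list:
    """Counter-based KDF sketch: K_i = PRF(S, i), i = 1..n, with
    HMAC-SHA256 as the PRF keyed by the master secret S.
    (The counter is passed as the PRF input rather than XORed into S.)"""
    return [
        hmac.new(master_secret, i.to_bytes(4, "big"), hashlib.sha256).digest()
        for i in range(1, n + 1)
    ]

keys = derive_keys(b"master secret", 3)
```

Each derived key is a 32-byte HMAC output, and the whole derivation is deterministic in the master secret, mirroring the $K_i = R(S \oplus i)$ construction above.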
Bravais Lattices

ATK has built-in support for all 14 three-dimensional Bravais lattices, along with an additional possibility to specify the unit cell directly. To allow QuantumATK to take advantage of the symmetries of the lattice and define the relevant high-symmetry points in the Brillouin zone, e.g. for band structure calculations, the lattice must be constructed using the correct Bravais lattice class, cf. the list below. If the lattice is defined using the UnitCell class, it is internally categorized as triclinic. None of the parameters have any default value, and they must all be specified with units (see the examples below).

Lattice types

class SimpleCubic(a)
    A simple cubic Bravais lattice.
    Parameters:
        a (PhysicalQuantity of type length) – Lattice parameter a.

class BodyCenteredCubic(a)
    A body centered cubic Bravais lattice.
    Parameters:
        a (PhysicalQuantity of type length) – Lattice parameter a.

class FaceCenteredCubic(a)
    A face centered cubic Bravais lattice.
    Parameters:
        a (PhysicalQuantity of type length) – Lattice parameter a.

class Rhombohedral(a, alpha)
    A rhombohedral Bravais lattice.
    Parameters:
        a (PhysicalQuantity of type length) – Lattice parameter a.
        alpha (PhysicalQuantity of type degree) – Lattice parameter alpha.

class Hexagonal(a, c)
    A hexagonal Bravais lattice.
    Parameters:
        a (PhysicalQuantity of type length) – Lattice parameter a.
        c (PhysicalQuantity of type length) – Lattice parameter c.

class SimpleTetragonal(a, c)
    A simple tetragonal Bravais lattice.
    Parameters:
        a (PhysicalQuantity of type length) – Lattice parameter a.
        c (PhysicalQuantity of type length) – Lattice parameter c.

class BodyCenteredTetragonal(a, c)
    A body centered tetragonal Bravais lattice.
    Parameters:
        a (PhysicalQuantity of type length) – Lattice parameter a.
        c (PhysicalQuantity of type length) – Lattice parameter c.

class SimpleOrthorhombic(a, b, c)
    A simple orthorhombic Bravais lattice.
    Parameters:
        a (PhysicalQuantity of type length) – Lattice parameter a.
        b (PhysicalQuantity of type length) – Lattice parameter b.
        c (PhysicalQuantity of type length) – Lattice parameter c.

class BodyCenteredOrthorhombic(a, b, c)
    A body centered orthorhombic Bravais lattice.
    Parameters:
        a (PhysicalQuantity of type length) – Lattice parameter a.
        b (PhysicalQuantity of type length) – Lattice parameter b.
        c (PhysicalQuantity of type length) – Lattice parameter c.

class FaceCenteredOrthorhombic(a, b, c)
    A face centered orthorhombic Bravais lattice.
    Parameters:
        a (PhysicalQuantity of type length) – Lattice parameter a.
        b (PhysicalQuantity of type length) – Lattice parameter b.
        c (PhysicalQuantity of type length) – Lattice parameter c.

class BaseCenteredOrthorhombic(a, b, c)
    A base centered orthorhombic Bravais lattice.
    Parameters:
        a (PhysicalQuantity of type length) – Lattice parameter a.
        b (PhysicalQuantity of type length) – Lattice parameter b.
        c (PhysicalQuantity of type length) – Lattice parameter c.

class SimpleMonoclinic(a, b, c, beta)
    A simple monoclinic Bravais lattice.
    Parameters:
        a (PhysicalQuantity of type length) – Lattice parameter a.
        b (PhysicalQuantity of type length) – Lattice parameter b.
        c (PhysicalQuantity of type length) – Lattice parameter c.
        beta (PhysicalQuantity of type degree) – Lattice parameter beta.

class BaseCenteredMonoclinic(a, b, c, beta)
    A base centered monoclinic Bravais lattice.
    Parameters:
        a (PhysicalQuantity of type length) – Lattice parameter a.
        b (PhysicalQuantity of type length) – Lattice parameter b.
        c (PhysicalQuantity of type length) – Lattice parameter c.
        beta (PhysicalQuantity of type degree) – Lattice parameter beta.

class Triclinic(a, b, c, alpha, beta, gamma)
    A triclinic Bravais lattice.
    Parameters:
        a (PhysicalQuantity of type length) – Lattice parameter a.
        b (PhysicalQuantity of type length) – Lattice parameter b.
        c (PhysicalQuantity of type length) – Lattice parameter c.
        alpha (PhysicalQuantity of type degree) – Lattice parameter alpha.
        beta (PhysicalQuantity of type degree) – Lattice parameter beta.
        gamma (PhysicalQuantity of type degree) – Lattice parameter gamma.

class UnitCell(vector_a, vector_b, vector_c, origin=None)
    Class for representing a generic unit cell.
    Parameters:

Common methods

All Bravais lattice classes share the following member functions:

class BravaisLattice
    Base class; cannot be constructed.

conventionalVectors()
    Get the vectors of the conventional unit cell.
    Returns: The conventional lattice vectors.
    Return type: PhysicalQuantity of type length

convertFractionalKPoint(kpoint=None)
    Method for converting a k-point from fractional to Cartesian coordinates.
    Parameters: kpoint (array of floats) – The k-point in fractional coordinates.
    Returns: The k-point in Cartesian coordinates.
    Return type: PhysicalQuantity of type inverse length

isOrthorhombic()
    Determine whether the lattice unit cell satisfies the orthorhombic constraints, that is, the cell vectors are orthogonal.
    Returns: Whether the cell is orthorhombic.
    Return type: bool

monoclinicAxes()
    Determine along each axis whether the lattice unit cell is monoclinic.
    Returns: The result of the check for each axis.
    Return type: list of bools

nlprint(stream=sys.stdout, name='Bravais lattice')
    Print a string containing an ASCII description of the BravaisLattice.
    Parameters:
        stream (stream-based object) – The io to write to. Default: sys.stdout
        name (string) – The name of the configuration being printed. Default: 'Bravais lattice'

origin()
    Get the origin of the unit cell.
    Returns: The origin vector of the unit cell.
    Return type: PhysicalQuantity of type length

primitiveVectors()
    Return the primitive vectors.
    Returns: A list with the three vectors of the primitive unit cell.
    Return type: PhysicalQuantity of type length

reciprocalVectors()
    Get the reciprocal lattice vectors.
    Returns: The reciprocal lattice vectors.
    Return type: PhysicalQuantity of type inverse length

symmetryPoints2D()
    Get the high symmetry points in the reciprocal XY-plane.
    Returns: Mapping between symmetry label and position.
    Return type: dict

unitCellDirectionalVolume()
    Calculate the directional volume of the lattice unit cell. The directional volume is a volume whose sign depends on the handedness of the primitive vectors.
    Returns: The directional volume of the cell.
    Return type: PhysicalQuantity of type volume

The Bravais lattices are classes, and their constructors take at most 6 arguments, corresponding to the lattice parameters \(a\), \(b\), \(c\), \(\alpha\), \(\beta\), \(\gamma\). In all cases, these refer to the corresponding conventional cell — not the primitive one. That is, the parameter a for the face-centered cubic cell is not the length of the first primitive lattice vector, but the side length of the corresponding conventional cubic cell. The cell created by QuantumATK and used in the calculations is, however, the primitive cell. All seven crystal systems, except the triclinic, have a higher degree of symmetry. This means that the lattice parameters are constrained to certain values or relationships. For example, in a simple cubic lattice, \(a = b = c\) and \(\alpha = \beta = \gamma = 90^\circ\). In those cases, the constructors only accept the free parameters as arguments (a, in the cubic case). For a complete list of the free parameters of each lattice, see the table below.
The free parameters of each Bravais lattice type are:

Bravais lattice   Free parameters
Cubic             \(a\)
Hexagonal         \(a\), \(c\)
Rhombohedral      \(a\), \(\alpha\)
Tetragonal        \(a\), \(c\)
Orthorhombic      \(a\), \(b\), \(c\)
Monoclinic        \(a\), \(b\), \(c\), \(\beta\)
Triclinic         \(a\), \(b\), \(c\), \(\alpha\), \(\beta\), \(\gamma\)

The UnitCell class behaves differently. Instead of providing the lattice parameters, you directly specify the three lattice vectors, each one provided as an array with three components (see the example below). For this reason, this class does not have query methods for the lattice parameters.

Usage examples

Specify a face-centered cubic lattice with lattice constant \(a\) = 5.1 Å:

lattice = FaceCenteredCubic(5.1*Ang)

Specify a base-centered monoclinic lattice with lattice constants \(a\) = 4.07 Å, \(b\) = 8.02 Å, \(c\) = 2.04 Å, and angle \(\beta\) = 56\(^{\circ}\):

lattice = BaseCenteredMonoclinic(
    a = 4.07*Angstrom,
    b = 8.02*Angstrom,
    c = 2.04*Angstrom,
    beta = 56*Degrees
)

Specify a lattice by providing the unit cell vectors:

lattice = UnitCell(
    [19.3,  1.0, 0.0]*Angstrom,
    [-2.0, 15.0, 1.0]*Angstrom,
    [ 0.2,  0.4, 4.9]*Angstrom
)

Print a in Bohr and the c/a ratio for a hexagonal lattice:

lattice = Hexagonal(
    a = 4.07*Angstrom,
    c = 2.04*Angstrom
)
print("a = %g Bohr" % (lattice.a().inUnitsOf(Bohr)))
print("c/a = %g" % (lattice.c()/lattice.a()))

Function that prints the lattice vectors, in Angstrom, of a given lattice:

def printLatticeVectors(lattice):
    for vector in lattice.primitiveVectors():
        for i in range(3):
            print(vector[i].inUnitsOf(Ang), ' ')
        print()

Print the primitive vectors of a Bravais lattice and the x-coordinate of the first and third primitive vector:

lattice = FaceCenteredCubic(2.5*Angstrom)
vectors = lattice.primitiveVectors()
for vector in vectors:
    print(vector)
print(vectors[0][0])  # x-coordinate of the first vector
print(vectors[2][0])  # x-coordinate of the third vector

gives the output

[ 0.    1.25  1.25] Ang
[ 1.25  0.    1.25] Ang
[ 1.25  1.25  0.  ] Ang
0.0 Ang
1.25 Ang
Journé's theorem for $C^{n,\omega}$ regularity

Department of Mathematics, West Chester University, West Chester, PA 19383

Mathematics Subject Classification: Primary: 37D20, 58A05; Secondary: 41A0.

Citation: V. Niţicâ. Journé's theorem for $C^{n,\omega}$ regularity. Discrete & Continuous Dynamical Systems - A, 2008, 22 (1&2): 413-425. doi: 10.3934/dcds.2008.22.413
There is a theorem which says that if $V$ is a vector space over a field $\mathbb{K}$ and $V$ admits an infinite linearly independent set, then any two bases $B$ and $B'$ have the same cardinality. So I wanted to prove this theorem. The question is followed by a hint which says that the following lemma can be applied to prove this theorem.

Lemma: Let $B\ne \emptyset \ne C$ be two sets. A map $\phi: B\to C$ is countable-to-one if for every $c\in C$, $\phi^{-1}(\{c\})$ is countable. If such a map exists then $|B| \le \aleph_0|C|$.

The idea was to apply this lemma to the collections of finite subsets of $B$ and $B'$, i.e. $\mathcal{F}(B)$ and $\mathcal{F}(B')$, which is quite straightforward, except that $\mathcal{F}(B)$ may well be uncountable. So I asked my professor for a hint and here's what he said:

Given $E\in\mathcal{F}(B)$, there is a minimal finite set $f(E)$ of elements of $B'$ for which $E$ is contained in span$(f(E))$.

He also added that this is a simple fact from linear algebra. But I didn't recognize this fact. Can someone please explain the concept from linear algebra above in simpler terms? Namely, am I correct in re-interpreting this statement as follows?: Given $E\in\mathcal{F}(B)$, there is a bijective function $f$ on $E$ which takes elements of $B$ into elements of $B'$ such that $E$ is spanned by $f(E)$ (that is, by a minimal finite subset of $B'$). Anyway, this statement doesn't seem to be a trivial concept from linear algebra, and I don't think I've ever heard of it.
In statistics and probability theory, the covariance matrix is a matrix of covariances between elements of a vector. It is the natural generalization to higher dimensions of the concept of the variance of a scalar-valued random variable.

Definition

If the entries in the column vector $ X = \begin{bmatrix}X_1 \\ \vdots \\ X_n \end{bmatrix} $ are random variables, each with finite variance, then the covariance matrix Σ is the matrix whose $(i, j)$ entry is the covariance $ \Sigma_{ij} =\mathrm{E}\left[ (X_i - \mu_i)(X_j - \mu_j) \right] $ where $ \mu_i = \mathrm{E}(X_i)\, $ is the expected value of the $i$th entry in the vector X. In other words, we have $ \Sigma = \begin{bmatrix} \mathrm{E}[(X_1 - \mu_1)(X_1 - \mu_1)] & \mathrm{E}[(X_1 - \mu_1)(X_2 - \mu_2)] & \cdots & \mathrm{E}[(X_1 - \mu_1)(X_n - \mu_n)] \\ \\ \mathrm{E}[(X_2 - \mu_2)(X_1 - \mu_1)] & \mathrm{E}[(X_2 - \mu_2)(X_2 - \mu_2)] & \cdots & \mathrm{E}[(X_2 - \mu_2)(X_n - \mu_n)] \\ \\ \vdots & \vdots & \ddots & \vdots \\ \\ \mathrm{E}[(X_n - \mu_n)(X_1 - \mu_1)] & \mathrm{E}[(X_n - \mu_n)(X_2 - \mu_2)] & \cdots & \mathrm{E}[(X_n - \mu_n)(X_n - \mu_n)] \end{bmatrix}. $

As a generalization of the variance

The definition above is equivalent to the matrix equality $ \Sigma=\mathrm{E} \left[ \left( \textbf{X} - \mathrm{E}[\textbf{X}] \right) \left( \textbf{X} - \mathrm{E}[\textbf{X}] \right)^\top \right] $ This form can be seen as a generalization of the scalar-valued variance to higher dimensions.
Recall that for a scalar-valued random variable X $ \sigma^2 = \mathrm{var}(X) = \mathrm{E}[(X-\mu)^2], \, $ where $ \mu = \mathrm{E}(X).\, $ The matrix $ \Sigma $ is also often called the variance-covariance matrix since the diagonal terms are in fact variances.

Conflicting nomenclatures and notations

Nomenclatures differ. Some statisticians, following the probabilist William Feller, call this matrix the variance of the random vector $ X $, because it is the natural generalization to higher dimensions of the 1-dimensional variance. Others call it the covariance matrix, because it is the matrix of covariances between the scalar components of the vector $ X $. Thus $ \operatorname{var}(\textbf{X}) = \operatorname{cov}(\textbf{X}) = \mathrm{E} \left[ (\textbf{X} - \mathrm{E} [\textbf{X}]) (\textbf{X} - \mathrm{E} [\textbf{X}])^\top \right] $ However, the notation for the "cross-covariance" between two vectors is standard: $ \operatorname{cov}(\textbf{X},\textbf{Y}) = \mathrm{E} \left[ (\textbf{X} - \mathrm{E}[\textbf{X}]) (\textbf{Y} - \mathrm{E}[\textbf{Y}])^\top \right] $ The $ var $ notation is found in William Feller's two-volume book An Introduction to Probability Theory and Its Applications, but both forms are quite standard and there is no ambiguity between them.
Properties

For $ \Sigma=\mathrm{E} \left[ \left( \textbf{X} - \mathrm{E}[\textbf{X}] \right) \left( \textbf{X} - \mathrm{E}[\textbf{X}] \right)^\top \right] $ and $ \mu = \mathrm{E}(\textbf{X}) $ the following basic properties apply: $ \Sigma = \mathrm{E}(\mathbf{X X^\top}) - \mathbf{\mu}\mathbf{\mu^\top} $ $ \mathbf{\Sigma} $ is positive semi-definite $ \operatorname{var}(\mathbf{A X} + \mathbf{a}) = \mathbf{A}\, \operatorname{var}(\mathbf{X})\, \mathbf{A^\top} $ $ \operatorname{cov}(\mathbf{X},\mathbf{Y}) = \operatorname{cov}(\mathbf{Y},\mathbf{X})^\top $ $ \operatorname{cov}(\mathbf{X_1} + \mathbf{X_2},\mathbf{Y}) = \operatorname{cov}(\mathbf{X_1},\mathbf{Y}) + \operatorname{cov}(\mathbf{X_2}, \mathbf{Y}) $ If $p = q$, then $ \operatorname{var}(\mathbf{X} + \mathbf{Y}) = \operatorname{var}(\mathbf{X}) + \operatorname{cov}(\mathbf{X},\mathbf{Y}) + \operatorname{cov}(\mathbf{Y}, \mathbf{X}) + \operatorname{var}(\mathbf{Y}) $ $ \operatorname{cov}(\mathbf{AX}, \mathbf{BY}) = \mathbf{A}\, \operatorname{cov}(\mathbf{X}, \mathbf{Y}) \,\mathbf{B}^\top $ If $ \mathbf{X} $ and $ \mathbf{Y} $ are independent, then $ \operatorname{cov}(\mathbf{X}, \mathbf{Y}) = 0 $ where $ \mathbf{X}, \mathbf{X_1} $ and $ \mathbf{X_2} $ are random $ \mathbf{(p \times 1)} $ vectors, $ \mathbf{Y} $ is a random $ \mathbf{(q \times 1)} $ vector, $ \mathbf{a} $ is a $ \mathbf{(p \times 1)} $ vector, and $ \mathbf{A} $ and $ \mathbf{B} $ are constant matrices of conformable dimensions. This covariance matrix (though very simple) is a very useful tool in many very different areas. From it a transformation matrix can be derived that allows one to completely decorrelate the data or, from a different point of view, to find an optimal basis for representing the data in a compact way (see Rayleigh quotient for a formal proof and additional properties of covariance matrices). This is called principal components analysis (PCA) in statistics and Karhunen-Loève transform (KL-transform) in image processing.
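The first identity above, $\Sigma = \mathrm{E}(\mathbf{XX^\top}) - \mathbf{\mu}\mathbf{\mu^\top}$, can be checked numerically on a small made-up dataset (the data values here are arbitrary, chosen only for illustration):

```python
import numpy as np

# Small deterministic dataset: rows are observations of a 3-dimensional X.
X = np.array([[1.0, 2.0, 0.5],
              [2.0, 1.0, 1.5],
              [3.0, 4.0, 2.5],
              [4.0, 3.0, 3.5]])

mu = X.mean(axis=0)                          # sample analogue of E[X]
# Definition: Sigma = E[(X - mu)(X - mu)^T]
sigma_def = (X - mu).T @ (X - mu) / len(X)
# Identity:   Sigma = E[X X^T] - mu mu^T
sigma_id = X.T @ X / len(X) - np.outer(mu, mu)
```

Both routes give the same matrix, which is also symmetric and positive semi-definite, as the properties above require.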
Which matrices are covariance matrices

From the identity $ \operatorname{var}(\mathbf{a^\top}\mathbf{X}) = \mathbf{a^\top} \operatorname{var}(\mathbf{X}) \mathbf{a}\, $ and the fact that the variance of any real-valued random variable is nonnegative, it follows immediately that only a nonnegative-definite matrix can be a covariance matrix. The converse question is whether every nonnegative-definite symmetric matrix is a covariance matrix. The answer is "yes". To see this, suppose M is a p × p nonnegative-definite symmetric matrix. From the finite-dimensional case of the spectral theorem, it follows that M has a nonnegative symmetric square root, which we call M 1/2. Let $ \mathbf{X} $ be any p × 1 column-vector-valued random variable whose covariance matrix is the p × p identity matrix. Then $ \operatorname{var}(M^{1/2}\mathbf{X}) = M^{1/2} (\operatorname{var}(\mathbf{X})) M^{1/2} = M.\, $

Complex random vectors

For a complex scalar-valued random variable $ z $ with expected value $ \mu $, the variance is defined as $ \operatorname{var}(z) = \operatorname{E} \left[ (z-\mu)(z-\mu)^{*} \right] $ where the complex conjugate of a complex number $ z $ is denoted $ z^{*} $. If $ Z $ is a column vector of complex-valued random variables, then we take the conjugate transpose by both transposing and conjugating, getting a square matrix: $ \operatorname{E} \left[ (Z-\mu)(Z-\mu)^{*} \right] $ where $ Z^{*} $ denotes the conjugate transpose. (This convention also covers the scalar case, since the transpose of a scalar is the scalar itself.)

Estimation

The derivation of the maximum-likelihood estimator of the covariance matrix of a multivariate normal distribution is perhaps surprisingly subtle. It involves the spectral theorem and the reason why it can be better to view a scalar as the trace of a 1 × 1 matrix than as a mere scalar. See estimation of covariance matrices.

Further reading

Covariance Matrix at MathWorld
van Kampen, N. G. Stochastic Processes in Physics and Chemistry. New York: North-Holland, 1981.
This page uses Creative Commons Licensed content from Wikipedia.
I am interested in data analysis. While my working data (actually a shopping mall's daily sales) is accumulating, I wish to find some statistical laws underlying the business phenomena. I left school more than 10 years ago, and since I am working in a business environment, I have no choice but to study by myself from level 0. I am currently learning state space methods following the book of Durbin & Koopman, and I can understand the basic process underlying the Kalman filter, that is, the process starts from an initial state, then goes on to prediction and updating, and so on. But it is very difficult for me to understand the math derivation. In section 4.3 of the book, the linear model is stated as: $$y_t = Z_t\alpha_t + \varepsilon_t$$ $$\alpha_{t+1} = T_t\alpha_t + R_t\eta_t$$ with $\varepsilon_t\sim N(0,H_t)$ and $\eta_t\sim N(0,Q_t)$. A further assumption is $\alpha_1\sim N(a_1,P_1)$, where $a_1$ and $P_1$ are known. Let $Y_{t-1}$ denote $(y_1,\ldots,y_{t-1})$ for $t=2,\ldots,n$; thus $$v_t=y_t-E(y_t|Y_{t-1})=y_t-Z_ta_t$$ is the one-step-ahead prediction error of $y_t$ given $Y_{t-1}$. The author says that when $Y_{t-1}$ and $v_t$ are fixed then $Y_t$ is fixed, and vice versa. Thus $E(\alpha_t|Y_t)=E(\alpha_t|Y_{t-1},v_t)$, and $E(v_t|Y_{t-1})=0$. My questions are about the following text of that section:

1. Why is $E(v_t)=0$? Is it calculated by the law of iterated expectations? $$E(v_t)=E(E(v_t|Y_{t-1}))=E(0)=0$$

2. Why is $Cov(y_j,v_t)=E(y_jE(v_t|Y_{t-1})^\prime)$ for $j=1,\ldots,t-1$? Why does the conditional mean appear in the equation? As far as I know, by the definition of covariance, it should be calculated directly without using the conditional mean: $$Cov(y_j,v_t) = E(y_jv_t^\prime)-E(y_j)E(v_t)^\prime = E(y_jv_t^\prime)$$

3. The updated state is $a_{t|t} = E(\alpha_t|Y_t)=E(\alpha_t|Y_{t-1},v_t)$, and the result is shown in the book as: $$a_{t|t} = E(\alpha_t|Y_{t-1}) + Cov(\alpha_t,v_t)[Var(v_t)]^{-1}v_t$$ How can I get this equation?
I know how to calculate the conditional distribution of $X|Y$ for a jointly normally distributed $(X,Y)$ according to Wikipedia. In my case, the conditional mean of $\alpha_t$ given $Y_{t-1}$ and $v_t$ is a little confusing, since there are two conditions: $Y_{t-1}$ and $v_t$. Stacking $Y_{t-1}$ and $v_t$ into a random vector $A = (Y_{t-1}^\prime,v_t^\prime)^\prime$, my result is: $$a_{t|t} = E(\alpha_t|Y_{t-1},v_t)=E(\alpha_t) + Cov(\alpha_t,A)[Var(A)]^{-1}(A-E(A))$$ It is not the same as the equation given by the book. Sad.
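To make the update equation concrete, here is a minimal numerical sketch of one Kalman predict/update cycle for the state space model above. The matrix names follow the book's notation, but the code (and the local-level example values) are only an illustration, not the book's implementation:

```python
import numpy as np

def kalman_step(y, a, P, Z, T, R, H, Q):
    """One filter cycle for y_t = Z a_t + eps_t, a_{t+1} = T a_t + R eta_t."""
    v = y - Z @ a                     # v_t = y_t - Z_t a_t (prediction error)
    F = Z @ P @ Z.T + H               # F_t = Var(v_t | Y_{t-1})
    K = P @ Z.T @ np.linalg.inv(F)    # Cov(alpha_t, v_t) [Var(v_t)]^{-1}
    a_filt = a + K @ v                # a_{t|t} = E(alpha_t | Y_t): the update equation
    P_filt = P - K @ Z @ P
    a_next = T @ a_filt               # prediction of the state at t+1
    P_next = T @ P_filt @ T.T + R @ Q @ R.T
    return a_filt, a_next, P_next

# Local-level example: every system matrix is 1x1.
Z = T = R = np.eye(1)
H = np.array([[0.5]])
Q = np.array([[0.1]])
a, P = np.zeros(1), np.eye(1)
a_filt, a_next, P_next = kalman_step(np.array([1.0]), a, P, Z, T, R, H, Q)
```

With these numbers the filtered state moves from the prior mean 0 toward the observation 1 by the gain $P/(P+H) = 2/3$, which is exactly $Cov(\alpha_t, v_t)[Var(v_t)]^{-1} v_t$ in the book's formula.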
I am trying to solve Exercise 3 a) given here. The problem states:

Let $\mathcal{M}$ be an infinite $\sigma$-algebra. Prove that $\mathcal{M}$ contains an infinite sequence of nonempty, disjoint sets. (Hint: if $\mathcal{M}$ contains an infinite sequence of strictly nested sets, then we're done, so assume that no such sequence exists. Next, use this assumption to find a nonempty set in $\mathcal{M}$ with no nonempty proper subsets in $\mathcal{M}$. Finally, show that this can be done infinitely many times.)

Here's my idea: Say $\mathcal{M}$ is a $\sigma$-algebra on the set $X$. If $\mathcal{M}$ does not contain an infinite sequence of strictly nested sets, then given any $A\in\mathcal{M}$, every strict chain starting with $A$ must terminate after finitely many steps: i.e. $A\subsetneq E_{1}\subsetneq E_{2}\subsetneq \cdots \subsetneq E_{k}$ and there is no set $B\in\mathcal{M}$ with $B\neq X$ such that $E_{k}\subsetneq B$. But then the complement $E_k^{c}$ has no nonempty proper subsets in $\mathcal{M}$. However, I am having trouble showing that the process can be repeated infinitely many times. I have been stuck with this for the whole day, and it is driving me crazy! Thanks for the help!
Force in the Direction of Displacement The work done by a constant force is equal to the force applied times the displacement of the object. learning objectives Contrast displacement and distance in constant force situations Work Done by a Constant Force When a force acts on an object over a distance, it is said to have done work on the object. Physically, the work done on an object is the change in kinetic energy that the object experiences. We will rigorously prove both of these claims. The term work was introduced in 1826 by the French mathematician Gaspard-Gustave Coriolis as “weight lifted through a height,” which is based on the use of early steam engines to lift buckets of water out of flooded ore mines. The SI unit of work is the newton-meter, or joule (J). Units One way to check whether an expression is correct is to perform dimensional analysis. We have claimed that work is the change in kinetic energy of an object and that it is also equal to the force times the distance. The units of these two should agree. Kinetic energy – and all forms of energy – have units of joules (J). Likewise, force has units of newtons (N) and distance has units of meters (m). If the two statements are equivalent, their units must agree: \[\mathrm{N⋅m=kg\dfrac{m}{s^2}⋅m=kg\dfrac{m^2}{s^2}=J}\] Displacement versus Distance Often we will be asked to calculate the work done by a force on an object. As we have shown, this is equal to the force times the distance the object is displaced, not the total distance it moves. We will investigate two examples of a box being moved to illustrate this. Example Problems Here are a few example problems: (1.a) Consider a constant force of two newtons (F = 2 N) acting on a box of mass three kilograms (M = 3 kg). Calculate the work done on the box if the box is displaced 5 meters. (1.b) Since the box is displaced 5 meters and the force is 2 N, we multiply the two quantities together: W = Fd = (2 N)(5 m) = 10 J.
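The arithmetic in example (1) can be checked with a couple of lines of code (Python, purely for illustration):

```python
# Work done by a constant 2 N force over a 5 m displacement (example 1):
F = 2.0        # force in newtons
d = 5.0        # displacement in metres
W = F * d      # work in joules
```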
The object’s mass will dictate how fast it accelerates under the force, and thus the time it takes to move the object from point a to point b. Regardless of how long it takes, the object will have the same displacement and thus the same work done on it. (2.a) Consider the same box (M = 3 kg) being pushed by a constant force of four newtons (F = 4 N). It begins at rest and is pushed for five meters (d = 5 m). Assuming a frictionless surface, calculate the velocity of the box at 5 meters. (2.b) We now understand that the work equals the change in kinetic energy, and from this we can calculate the final velocity. What do we know so far? We know that the block begins at rest, so the initial kinetic energy must be zero. From this we algebraically isolate and solve for the final velocity. \[\mathrm{Fd=ΔKE=KE_f−0=\dfrac{1}{2}mv^2_f}\] \[\mathrm{v_f=\sqrt{2\dfrac{Fd}{m}}=\sqrt{2⋅\dfrac{4\,N⋅5\,m}{3\,kg}}=\sqrt{\dfrac{40}{3}}\,m/s}\] We see that the final velocity of the block is approximately 3.65 m/s. Force at an Angle to Displacement A force does not have to, and rarely does, act on an object parallel to the direction of motion. learning objectives Infer how to adjust one-dimensional motion for our three-dimensional world The Fundamentals Up until now, we have assumed that any force acting on an object has been parallel to the direction of motion. We have considered our motion to be one dimensional, acting only along the x or y axis. To best examine and understand how nature operates in our three-dimensional world, we will first discuss work in two dimensions in order to build our intuition. A force does not have to, and rarely does, act on an object parallel to the direction of motion. In the past, we derived that \(\mathrm{W = Fd}\); such that the work done on an object is the force acting on the object multiplied by the displacement. But this is not the whole story.
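Example (2) can likewise be verified numerically (using the stated mass of 3 kg):

```python
import math

# Final speed of the box from the work-energy theorem: F*d = (1/2) m v_f^2.
F, d, m = 4.0, 5.0, 3.0          # newtons, metres, kilograms
v_f = math.sqrt(2 * F * d / m)   # metres per second
```

This check uses the parallel-force form W = Fd discussed above.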
This expression contains an assumed cosine term, which we do not write out for forces parallel to the direction of motion. “Why would we do such a thing? ” you may ask. We do this because the two are equivalent. If the angle of the force along the direction of motion is zero, such that the force is parallel to the direction of motion, then the cosine term equals one and does not change the expression. As we increase the force’s angle with respect to the direction of motion, less and less work is done along the direction that we are considering; and more and more work is being done in another, perpendicular, direction of motion. This process continues until we are perpendicular to our original direction of motion, such that the angle is 90 degrees and the cosine term equals zero; resulting in zero work being done along our original direction. Instead, we are doing work in another direction! Angle: Recall that both the force and direction of motion are vectors. When the angle between them is 90 degrees, the cosine term goes to zero; when they point along the same direction, it equals one. Let’s show this explicitly and then look at this phenomenon in terms of a box moving along the x and y directions. We have said that work is the integral of the dot product of the force with the displacement. In fact, the dot product of the force with a very small displacement is equal to the product of their magnitudes times the cosine of the angle between them: \(\mathrm{F \cdot dx = F\,dx \cos( \theta )}\). Explicitly, for a constant force, \[\mathrm{\int_{x_1}^{x_2} F⋅dx= \int_{x_1}^{x_2} F \cos θ \,dx=Fd \cos θ}\] A Box Being Pushed Consider a coordinate system such that we have x as the abscissa and y as the ordinate. Now consider a box being pushed along the x direction. What happens in the following scenarios? The box is being pushed parallel to the x direction? The box is being pushed at an angle of 45 degrees to the x direction? The box is being pushed at an angle of 60 degrees to the x direction?
The box is being pushed at an angle of 90 degrees to the x direction? In the first scenario, we know that all of the force is acting on the box along the x-direction, which means that work will only be done along the x-direction. Moreover, from a vertical perspective the box is not moving – it is unchanged in the y direction. Since the force is acting parallel to the direction of motion, the angle is equal to zero and our total work is simply the force times the displacement in the x-direction. In the second scenario, the box is being pushed at an angle of 45 degrees to the x-direction, and thus also at a 45 degree angle to the y-direction. When evaluated, the cosine of 45 degrees is equal to \(\mathrm{\frac{1}{\sqrt{2}}}\), or approximately 0.71. This means that 71% of the force is contributing to the work along the x-direction; since sin 45° = cos 45°, the component along the y-direction is likewise 71% of the force. (The percentages do not add to 100% because components combine in quadrature: 0.71² + 0.71² = 1.) In the third scenario, the force is acting at a 60 degree angle to the x-direction, and thus also at a 30 degree angle to the y-direction. When evaluated, the cosine of 60 degrees is equal to 1/2, so only half of the force’s magnitude contributes to the work along the x-direction, while sin 60° ≈ 0.87 of it acts along the y-direction. In the last scenario, the box is being pushed at an angle perpendicular to the x direction. In other words, we are pushing the box in the y-direction! Thus, the box’s position will be unchanged along the x-axis and it experiences no displacement there. The work done in the x direction will be zero. Key Points Understanding work is essential to understanding systems in terms of their energy, which is necessary for higher level physics. Work is equivalent to the change in kinetic energy of a system. Distance is not the same as displacement. If a box is moved 3 meters forward and then 4 meters to the left, the total displacement is 5 meters, not 7 meters. Work done on an object along a given direction of motion is equal to the force times the displacement times the cosine of the angle between them.
No work is done along a direction of motion if the force is perpendicular to it. When considering a force parallel to the direction of motion, we omit the cosine term because it equals 1, which does not change the expression. Key Terms work: A measure of energy expended in moving an object; most commonly, force times displacement. No work is done if the object does not move. dot product: A scalar product. LICENSES AND ATTRIBUTIONS CC LICENSED CONTENT, SHARED PREVIOUSLY Curation and Revision. Provided by: Boundless.com. License: CC BY-SA: Attribution-ShareAlike CC LICENSED CONTENT, SPECIFIC ATTRIBUTION work. Provided by: Wiktionary. Located at: http://en.wiktionary.org/wiki/work License: CC BY-SA: Attribution-ShareAlike Sunil Kumar Singh, Work and Energy. September 17, 2013. Provided by: OpenStax CNX. Located at: http://cnx.org/content/m14098/latest/ License: CC BY: Attribution Work (physics). Provided by: Wikipedia. Located at: http://en.wikipedia.org/wiki/Work_(physics) License: CC BY-SA: Attribution-ShareAlike Sunil Kumar Singh, Scalar (Dot) Product. September 17, 2013. Provided by: OpenStax CNX. Located at: http://cnx.org/content/m14513/latest/ License: CC BY: Attribution dot product. Provided by: Wiktionary. Located at: http://en.wiktionary.org/wiki/dot_product License: CC BY-SA: Attribution-ShareAlike Sunil Kumar Singh, Scalar (Dot) Product. January 3, 2013. Provided by: OpenStax CNX. Located at: http://cnx.org/content/m14513/latest/ License: CC BY: Attribution
To summarise the predictive power of a classifier for end users, I'm using some metrics. However, as the users input data themselves, the amount of data and the class distribution vary a lot. So to inform users about the strength of the metrics, I'd like to include confidence intervals. Background on the metrics Suppose a binary classifier is to classify 1000 items. Of those items, 700 belong to A, and 300 to B. The results are as follows:

Predicted | # A | # B
----------+-----+-----
True A    | 550 | 150
True B    |  50 | 250

We'll call class B a positive result (1) and class A a negative one (0). So there were 550 true negatives, 150 false positives, 50 false negatives and 250 true positives. There are some metrics defined for this classification: $$\text{Recall} = \frac{TP}{TP+FN} = 0.833$$ $$\text{Precision} = \frac{TP}{TP + FP} = 0.625$$ $$\text{F1 score} = \frac{2}{1/\text{recall} + 1/\text{precision}} = 0.714$$ Suggested approach This Stack Overflow question addresses confidence intervals for recall and precision. It suggests using an adjusted version of recall, $\text{recall} = (TP+2) / (TP+FN+4)$, and the Wilson score interval. The final formula would be $$p \pm Z_\alpha \cdot \text{std\_error}$$ where $$\text{std\_error} = \sqrt{\frac{\text{recall}\cdot(1-\text{recall})}{N+4}}$$ There's something I don't understand about this approach. The formula resembles the confidence interval for a binomial distribution, but it's not quite the same. Neither is it the Wilson formulation. Where does that additional $+4$ come from? Maybe it is from the recall formula. He mentions that $p$ is calculated using the adjusted recall. That would imply $\text{recall} = \hat{p}$, which is reinforced by the error formula. So let's use the Wilson formulation for $p$.
With $n=TP+FN=300$, the final calculation at significance level $\alpha=0.05$ yields $z=1.96$ and: $$p \pm Z_\alpha\cdot\text{std\_err} = \frac{\hat{p} + z^2/(2n)}{1+z^2/n} \pm z\cdot\sqrt{\frac{\hat{p}\cdot(1-\hat{p})}{n+4}} = 0.825 \pm 0.042$$ Pondering While this does seem like a sensible result, I still wonder about the formula and the differences between the formulae presented. What could be the basis for the $\text{std\_err}$ formula? Is it better to use the Wilson formula instead? A similar formulation could be used for calculating the confidence interval for precision, using false positives instead of false negatives. How would this idea carry over to the F1 score, which is a combination of the two, if at all? My statistics skills and intuition aren't so strong yet, so any help or insight is greatly appreciated! Edit The different approaches have little effect, at least in this case: $$\text{Normal approach: } \hat{p} \pm Z_\alpha\sqrt{\frac{\hat{p}\cdot(1-\hat{p})}{n}} = 0.829 \pm 0.043$$ $$\text{Wilson approach: } \frac{\hat{p} + z^2/(2n)}{1+z^2/n} \pm Z_\alpha\sqrt{\frac{\hat{p}\cdot(1-\hat{p})}{n} + \frac{z^2}{4n^2}} = 0.825 \pm 0.043$$ $$\text{Above approach: } \frac{\hat{p} + z^2/(2n)}{1+z^2/n} \pm Z_\alpha\sqrt{\frac{\hat{p}\cdot(1-\hat{p})}{n+4}} = 0.825 \pm 0.042$$ For smaller samples they start to vary a bit more, especially in the interval widths. The Wilson approach seems to give the largest intervals.
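For reference, here is how I compute the textbook Wilson interval for the recall above (95% confidence) in Python; note that it keeps $\hat{p}(1-\hat{p})/n + z^2/(4n^2)$ under the root and applies the $1+z^2/n$ factor to the half-width as well, rather than using the $n+4$ variant:

```python
import math

# Wilson score interval for recall = TP / (TP + FN), 95% confidence.
TP, FN = 250, 50
n = TP + FN
p_hat = TP / n                          # 0.833...
z = 1.959963984540054                   # Phi^{-1}(0.975)

denom = 1 + z**2 / n
center = (p_hat + z**2 / (2 * n)) / denom
half = (z / denom) * math.sqrt(p_hat * (1 - p_hat) / n + z**2 / (4 * n**2))
lo, hi = center - half, center + half   # roughly (0.787, 0.871)
```

Swapping FN for FP gives the analogous interval for precision.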
Mathematics can be used to describe processes we see everywhere! We can use maths to model how cancerous tumours form, how diseases spread through populations and how the patterns on animals' coats form. My favourite application of such mathematical models is in Mathematical Ecology, where we use mathematics to describe processes in ecology. These models help us understand more about how animals and plants interact. In this post I am going to describe how an animal might move across its landscape using mathematics. To start, the most important thing about mathematical modelling is to know enough about the ecology we are trying to model! For example, if we are modelling how something grows in biology we need to understand the factors which drive growth, such as speed and the initial size. Once we know enough, we can then think about writing down a suitable model. We're going to look at modelling the movement of a Jaguar in the Amazon rainforest. The Earth's rainforests are disappearing at an alarming rate; at the current pace they could all be gone within a century. Understanding how a top predator uses its space is important when finding ways to save it! When we are looking at how to model an animal moving about its landscape we need some information about the places it goes and the steps it takes. We will start by following a Jaguar and writing down its position every minute for a few hours. We can think about the Jaguar's landscape as 2D space and its positions as (x,y) coordinates. Looking at the coordinates as time goes on, we have an estimate of the movement path of the Jaguar (the arrows in Figure 1 show the path). This is only an estimate, as we only wrote the position down every minute and the Jaguar moves in a smooth motion (a continuous path). From these GPS positions we can create two histograms: one for the step lengths – that is, the distance travelled each minute – and one for the angle turned through from the previous step to the next step (shown above).
This gives the following histograms. The reason we create these histograms is to build something called probability distributions. These give us a way to choose the most likely step and turning angle for the Jaguar, for each simulated step in our model. Once the histograms have been created we 'fit' them to a continuous probability distribution; these are the curves shown. Fitting means we find a curve which is most like the shape of our histogram. This curve will help us choose probabilities. Here we fit something called the Normal Distribution, but many other distributions can be tried to find the best one. These give us a mathematical form for the probabilities (like an equation). These probability distribution curves are going to help us model the Jaguar's steps: our model will simulate each step and turning angle to draw a path. To understand the purpose of these probability distributions we can think of them in terms of a box of coloured counters: the probability of picking each colour of counter approximately represents the probability of picking each step length above in Figure 2. Here there are 27 dark green counters out of a total of 108. 27/108 = 0.25, which if you look at the step-length distribution is around the same. We can use the box of counters to think about how we will ask a computer to use the curve of the probability distribution to choose a step length for our model. Imagine mixing all the counters together, closing your eyes and picking one out at random. The most likely counter is dark green, but any colour is possible. This is what the graphs represent. The computer will choose a step length (metres) and a turning angle (degrees), just like picking a coloured counter. Some are more likely than others, and that's what the graph shows. Look at the graphs in Figure 2 again: the most likely step is 0 metres, meaning the Jaguar is most likely to take a small step.
The most likely angle is 0 degrees, meaning the Jaguar is most likely to continue the way it was going. Let's now think about what else could influence the movement of a Jaguar. This is where the real biology comes in! Although, as mathematicians, we like to start simple. Jaguars are elusive creatures, so let's propose that they prefer to be underneath trees rather than out in the open. Now let's look at the entire landscape where we previously recorded the Jaguar's steps. Here we can see the whole view. So let's say we know a position of the Jaguar and we are considering where he might travel to next, in a time-step of 1 minute. Here are four options (arrows): Moving to the space west could be considered unlikely as there is no tree cover. The closest north-east movement could be likely as it is moving towards more tree cover. The movement into the sea is very unlikely, or maybe even impossible for the Jaguar to move that far in one minute. The movement south seems likely over time, but is maybe too far to travel in 1 minute. We have only looked at four possibilities here. How many are there in this rectangular domain? Infinitely many! Infinity is a hard number to work with, so instead we consider discretising the space. We have already discretised time, as we are recording the position only every minute. We will also do this over space by drawing a grid. Now movement is only from somewhere in one square to somewhere in another. Realistically we are saying that the Jaguar could be anywhere in each cell, but we only care about moving from one cell to the next. This gives us a finite number of movement possibilities, 576 to be exact. So let's say the yellow square is the last place we saw the Jaguar, and propose a model. We propose that the Jaguar's movement is determined by: the distance to another cell, the angle turned through to get to that cell, and the amount of tree cover the cell has. We've already determined how to find the distance and angle. What about the tree cover?
Simple: we go to each cell and give it a value between 0 and 1 based on how covered it is by trees. For example, for the cells shown above, if a cell is half covered by trees it is given a value of 0.5. The cell completely covered by trees is given a value of 1. The cell in the sea has zero trees, but zero wouldn't be a great number to use (we are going to multiply by it), so we allocate a really small number instead. Now let's propose our model! To understand this I want to draw your attention back to basic probability. What is the probability of rolling a 5 on a normal dice? It's 1 in 6 of course, or $$\frac{1}{6}$$, so $$ P(\text{5 on dice}) = \frac{1}{6}.$$ What about the probability of choosing the Queen of Hearts at random from an ordinary pack of cards? $$P(\text{Queen of Hearts}) = \frac{1}{52}$$ So the dice has 6 possible outcomes and the cards have 52 possible outcomes. What about rolling a 5 and picking the Queen of Hearts? $$P(\text{both}) = \frac{1}{6} \times\frac{1}{52}=\frac{1}{312}$$ So there are 312 equally likely outcomes here! Our model is similar to this. Here's the important part, the mathematical form! For each cell in our landscape, we will assign a probability of being chosen, based on the Jaguar's current position. This is the formula we will use for each cell: $$P(\text{Moving to cell}) = \frac{P(\text{Distance})\times P(\text{Angle})\times (\text{Tree})}{\sum(\text{Probabilities of all cells})}$$ where $$\sum$$ means to add up all the things after it, and Distance = the distance to travel to the cell, Angle = the angle turned through to travel to the cell, Tree = the value based on tree coverage. So every time the Jaguar takes a step, we calculate this for every cell in the landscape. This gives us a two-dimensional probability distribution for the computer to choose cells from. To understand this better, let's look at a small part of the landscape. Here we can see the values of each cell based on the formula above and the current location of the Jaguar.
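To see the formula in action, here is a small Python sketch with invented numbers: four candidate cells with assumed distances, turning angles and tree values, and normal densities standing in for the fitted curves:

```python
import math
import random

random.seed(42)

def normal_pdf(x, mu=0.0, sigma=1.0):
    """Density of a normal distribution; stands in for the fitted curves."""
    return math.exp(-((x - mu) ** 2) / (2 * sigma**2)) / (sigma * math.sqrt(2 * math.pi))

# (distance in metres, turning angle in degrees, tree-cover value) per cell:
cells = [(5, 0, 1.0), (5, 45, 0.5), (12, 0, 0.8), (20, 120, 0.001)]

# Weight = P(distance) * P(angle) * tree, then divide by the sum over all cells:
weights = [normal_pdf(d, 0, 10) * normal_pdf(a, 0, 60) * tree for d, a, tree in cells]
probs = [w / sum(weights) for w in weights]   # a proper probability distribution

next_cell = random.choices(cells, weights=probs, k=1)[0]  # like drawing a counter
```

The weights are normalised so they sum to 1, and `random.choices` then picks a cell exactly like drawing a counter from the box.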
We ask the computer to choose a cell; the highest probability is the most likely, but all cells are possible. Just like choosing a coloured counter from the box! But instead we are choosing a cell, and some are more likely than others. Once we have chosen a cell the Jaguar moves, and the probabilities are all calculated again. So each movement requires 576 calculations, one for each cell. Using a computer to calculate these is quite necessary! Okay, let's take a look at the model in action and simulate the Jaguar's path! It looks good, but what's the problem? The walls of our rectangle are restricting the path, but those walls aren't really there. How can we write a better model? More information! What else could be driving the movement of our Jaguar? Let's say she has a den and a litter of cubs, and we know the location! We'd expect the Jaguar to be more likely to move to a cell nearer to the den. Now we will incorporate this into our old model to form a new model and see if it's any better! $$P(\text{Moving to cell}) = \frac{P(\text{Distance})\times P(\text{Angle})\times (\text{Tree})\times (\text{Nest})}{\sum(\text{Probabilities of all cells})}$$ where Distance = the distance to travel to the cell, Angle = the angle turned through to travel to the cell, Tree = the value based on tree coverage, Nest = a value which is larger the nearer the cell is to the nest. Okay, following the same process as before but using the new formula, let's simulate the Jaguar's movement again. There we go! We can clearly see the Jaguar's usual space use forming! We'd call this the animal's home range. Modelling home ranges is the subject of my PhD research at the University of Sheffield. I hope I've managed to give you an insight into the basic ideas of Mathematical Movement Ecology! Extensions of the model and other points: This model has been introduced as the simplest model, to introduce the ideas to everyone!
There are lots more aspects of a Jaguar's behaviour we could include, such as territoriality and the movement of prey. In real life Jaguars may move away from places where they have detected other Jaguars, and move towards prey. It's complicated, but we could incorporate this too! It may not be completely obvious how our formula relates to the data. In this example the link is made by creating the probability distributions for the steps and angles. Hidden in the models we would have parameters (constants, such as speed) associated with each part of the model. These parameters can be tuned using a special algorithm to find the most likely parameters for our data! For any mathematicians reading, in part 2 I write about a more general form for these models, with the use of parameters and a way to find the best parameters for the data. These are called 'Step-Selection Models' in the literature.
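If you'd like to experiment, the step-and-turn idea from the start of the post can be sketched in a few lines of Python. Everything here is made up for illustration (a half-normal for step lengths, a normal around 0 degrees for turning angles, and no landscape yet):

```python
import math
import random

random.seed(1)

x, y, heading = 0.0, 0.0, 0.0   # start at the origin, facing east
path = [(x, y)]
for _ in range(100):                               # 100 one-minute steps
    step = abs(random.gauss(0, 15))                # step length in metres
    heading += math.radians(random.gauss(0, 40))   # turn relative to last heading
    x += step * math.cos(heading)
    y += step * math.sin(heading)
    path.append((x, y))
```

Plotting `path` gives a wiggly track much like the arrows in Figure 1; adding the tree-cover and den factors turns this into the full model.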
Let $K$ be a maximal subfield of $\mathbb C$ which doesn't contain $\sqrt{2}$ (one exists by Zorn's Lemma). Then $\mathbb C$ is algebraic over $K$: if a complex number $\alpha$ were transcendental over $K$, then adjoining $\alpha$ could not make $\sqrt{2}$ appear, since squaring any expression of $\sqrt{2}$ as a rational function in $K(\alpha)$ gives a polynomial with $\alpha$ as a root. The next part of this problem asks to prove that any finite extension of $K$ is Galois and cyclic. We can assume that the extension $L$ is Galois, since once we prove that the Galois group is cyclic, every subextension is automatically Galois because all subgroups of cyclic groups are normal. So $Gal(L/K)$ contains an element $\sigma$ that doesn't fix $\sqrt{2}$; but then the fixed field of $\langle\sigma\rangle$ is an extension of $K$ which doesn't contain $\sqrt{2}$, and thus must be $K$ itself. Thus the Galois group is cyclic, and it's of order $2^n$, since there is no nontrivial odd extension of $K$. Now, the final part of the problem is to prove that the degree $[\mathbb C:K]$ is countable and not finite. However, I just realized that it might in fact be $2$: shouldn't there be an automorphism $\sigma$ of $K(\sqrt{2})$ that fixes $K$ and maps $\sqrt{2}\mapsto -\sqrt{2}$, which extends to an automorphism of $\mathbb C$? I guess it's possible that this automorphism doesn't have order $2$ because of unforeseen relations between complex numbers. I assume I have to prove that the degree is not finite by assuming $\mathbb C$ is cyclic over $K$ and deriving a contradiction, but I'm not sure how to do that, nor how to show that the degree is not uncountable.
Analysis of a Plane Blown Off Course A plane with an airspeed of 300 km/h flies on a bearing of 30$^o$ to a town B. The wind is blowing at 20 km/h on a bearing of 300$^o$. If the pilot aims the plane directly at B, he will be blown off course by the wind. What will be the ground speed and the bearing actually flown by the plane? We can draw the vector diagram shown. If we construct a parallelogram we can use trigonometry. Using the Cosine Rule gives \[v=\sqrt{300^2+20^2-2 \times 300 \times 20 \times \cos 110^o}=307.4 \text{ km/h}\] To find the bearing, we find the angle \(x\) below using the Sine Rule: \[\frac{\sin x}{20}=\frac{\sin 110^o}{300} \rightarrow x=\sin^{-1}\left(20 \times \frac{\sin 110^o}{300}\right)=3.6^o\] to 1 decimal place. The plane will actually fly on a bearing of \(30^o-3.6^o=26.4^o\) to 1 decimal place.
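Both calculations can be checked with a few lines of code (the 110$^o$ parallelogram angle from the diagram is taken as given):

```python
import math

airspeed, windspeed = 300.0, 20.0        # km/h
angle = math.radians(110)                # angle in the parallelogram

# Cosine rule for the ground speed:
v = math.sqrt(airspeed**2 + windspeed**2
              - 2 * airspeed * windspeed * math.cos(angle))

# Sine rule for the deflection x, then the bearing actually flown:
x = math.degrees(math.asin(windspeed * math.sin(angle) / airspeed))
bearing = 30 - x
```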
The bias-variance decomposition can be expressed as: \begin{align} \newcommand{\Var}{{\rm Var}} E[(y_0 - \hat{f}(x_0)) ^ 2] &= (E[\hat{f}(x_0)] - f(x_0)) ^ 2 + E[(\hat{f}(x_0) - E[\hat{f}(x_0)]) ^ 2] + \sigma ^ 2\\ &= [{\rm Bias}(\hat{f}(x_0))] ^ 2 + \Var(\hat{f}(x_0)) + \Var(\varepsilon) \end{align} I tried to verify the expression using a simulated experiment, but the result seems to suggest that the left hand side of the expression is not equal to the right hand side. Here's the R code to reproduce the experiment:

library(tidyverse)
set.seed(1)
coefs <- rnorm(4) # generates four parameters of the underlying linear model (see below)
training_sets <- lapply(1:100, function(...) { # 100 training sets, each with 100 observations
  tibble(
    X1 = rnorm(100),
    X2 = rnorm(100),
    X3 = rnorm(100),
    fX = coefs[1] * X1 + coefs[2] * X2 + coefs[3] * X3 + coefs[4], # a simple linear regression model with three predictors and an intercept
    Y = fX + rnorm(100) # adds irreducible error
  )
})
fX_estimates <- training_sets %>% # train a linear model on each training set
  lapply(function(training_set_i) {
    lm(Y ~ X1 + X2 + X3 + 1, # the last term (i.e., '1') represents the intercept
       data = training_set_i)
  })
test_set <- tibble( # generate a test set with one observation, from the same population as the training sets
  X1 = rnorm(1),
  X2 = rnorm(1),
  X3 = rnorm(1),
  fX = coefs[1] * X1 + coefs[2] * X2 + coefs[3] * X3 + coefs[4],
  Y = fX + rnorm(1) # adds irreducible error
)
y0 <- test_set$Y
fx0 <- test_set$fX
fx0_estimates <- fX_estimates %>% # estimates of the outcome based on models trained with different training sets
  sapply(function(fX_estimate_i) {
    predict(fX_estimate_i, newdata = test_set)
  })
lhs <- mean((y0 - fx0_estimates) ^ 2) # value of the left hand side of the expression
rhs <- (mean(fx0_estimates) - fx0) ^ 2 + mean((fx0_estimates - mean(fx0_estimates)) ^ 2) + 1 ^ 2 # value of the right hand side of the expression

Here're some of my guesses that may explain the results: Some misunderstanding of the
bias-variance decomposition, and hence the wrong math expressions and the wrong experiment. Some kind of bias was introduced by the way I generated the data and the underlying f(X). The inequality was introduced by random noise. However, increasing the sample size is not helpful. Additionally, when I ran the code with different random seeds 100 times, $E[(y_0 - \hat{f}(x_0)) ^ 2]$ had a mean of 0.66 and a standard deviation of 0.78, while $(E[\hat{f}(x_0)] - f(x_0)) ^ 2 + E[(\hat{f}(x_0) - E[\hat{f}(x_0)]) ^ 2] + \sigma ^ 2$ had a mean of 1.04 and a standard deviation of 0.03. They are still not equal, and it seems that $E[(y_0 - \hat{f}(x_0)) ^ 2]$ is much more variable than $(E[\hat{f}(x_0)] - f(x_0)) ^ 2 + E[(\hat{f}(x_0) - E[\hat{f}(x_0)]) ^ 2] + \sigma ^ 2$. So, what seems to be the problem?
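One check I have tried: averaging the left-hand side over fresh draws of the test noise as well as over training sets, rather than a single fixed $y_0$. Here is a minimal sketch of that idea in Python (not my R setup; the predictions $\hat{f}(x_0)$ are simulated directly as normal draws with an assumed bias and spread), and under that averaging the two sides do agree:

```python
import random
import statistics

random.seed(0)

f_x0 = 1.5                 # true f(x0) (assumed)
sigma = 1.0                # sd of the irreducible error
bias, sd_fhat = 0.2, 0.3   # assumed bias and spread of the estimator

# Stand-ins for predictions from models fit on many training sets:
fhat = [random.gauss(f_x0 + bias, sd_fhat) for _ in range(200_000)]

# LHS: average squared error, with a FRESH noise draw for every prediction.
lhs = statistics.fmean((f_x0 + random.gauss(0, sigma) - fh) ** 2 for fh in fhat)

# RHS: bias^2 + variance + sigma^2.
m = statistics.fmean(fhat)
rhs = (m - f_x0) ** 2 + statistics.fmean((fh - m) ** 2 for fh in fhat) + sigma**2
```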
Alright, I have this group $\langle x_i, i\in\mathbb{Z}\mid x_i^2=x_{i-1}x_{i+1}\rangle$ and I'm trying to determine whether $x_ix_j=x_jx_i$ or not. I'm unsure there is enough information to decide this, to be honest. Nah, I have a pretty garbage question. Let me spell it out. I have a fiber bundle $p : E \to M$ where $\dim M = m$ and $\dim E = m+k$. Usually a normal person defines $J^r E$ as follows: for any point $x \in M$ look at local sections of $p$ over $x$. For two local sections $s_1, s_2$ defined on some nbhd of $x$ with $s_1(x) = s_2(x) = y$, say $J^r_p s_1 = J^r_p s_2$ if with respect to some choice of coordinates $(x_1, \cdots, x_m)$ near $x$ and $(x_1, \cdots, x_{m+k})$ near $y$ such that $p$ is projection to first $m$ variables in these coordinates, $D^I s_1(0) = D^I s_2(0)$ for all $|I| \leq r$. This is a coordinate-independent (chain rule) equivalence relation on local sections of $p$ defined near $x$. So let the set of equivalence classes be $J^r_x E$ which inherits a natural topology after identifying it with $J^r_0(\Bbb R^m, \Bbb R^k)$ which is space of $r$-order Taylor expansions at $0$ of functions $\Bbb R^m \to \Bbb R^k$ preserving origin. Then declare $J^r p : J^r E \to M$ is the bundle whose fiber over $x$ is $J^r_x E$, and you can set up the transition functions etc no problem so all topology is set. This becomes an affine bundle. 
Define the $r$-jet sheaf $\mathscr{J}^r_E$ to be the sheaf which assigns to every open set $U \subset M$ an $(r+1)$-tuple $(s = s_0, s_1, s_2, \cdots, s_r)$ where $s$ is a section of $p : E \to M$ over $U$, $s_1$ is a section of $dp : TE \to TU$ over $U$, $\cdots$, $s_r$ is a section of $d^r p : T^r E \to T^r U$ where $T^k X$ is the iterated $k$-fold tangent bundle of $X$, and the tuple satisfies the following commutation relation for all $0 \leq k < r$ $$\require{AMScd}\begin{CD} T^{k+1} E @>>> T^k E\\ @AAA @AAA \\ T^{k+1} U @>>> T^k U \end{CD}$$ @user193319 It converges uniformly on $[0,r]$ for any $r\in(0,1)$, but not on $[0,1)$, cause deleting a measure zero set won't prevent you from getting arbitrarily close to $1$ (for a non-degenerate interval has positive measure). The top and bottom maps are tangent bundle projections, and the left and right maps are $s_{k+1}$ and $s_k$. @RyanUnger Well I am going to dispense with the bundle altogether and work with the sheaf, is the idea. The presheaf is $U \mapsto \mathscr{J}^r_E(U)$ where $\mathscr{J}^r_E(U) \subset \prod_{k = 0}^r \Gamma_{T^k E}(T^k U)$ consists of all the $(r+1)$-tuples of the sort I described It's easy to check that this is a sheaf, because basically sections of a bundle form a sheaf, and when you glue two of those $(r+1)$-tuples of the sort I describe, you still get an $(r+1)$-tuple that preserves the commutation relation The stalk of $\mathscr{J}^r_E$ over a point $x \in M$ is clearly the same as $J^r_x E$, consisting of all possible $r$-order Taylor series expansions of sections of $E$ defined near $x$ possible. Let $M \subset \mathbb{R}^d$ be a compact smooth $k$-dimensional manifold embedded in $\mathbb{R}^d$. Let $\mathcal{N}(\varepsilon)$ denote the minimal cardinal of an $\varepsilon$-cover $P$ of $M$; that is for every point $x \in M$ there exists a $p \in P$ such that $\| x - p\|_{2}<\varepsilon$.... The same result should be true for abstract Riemannian manifolds. 
Do you know how to prove it in that case? I think there you really do need some kind of PDEs to construct good charts. I might be way overcomplicating this. If we define $\tilde{\mathcal H}^k_\delta$ to be the $\delta$-Hausdorff "measure" but instead of $diam(U_i)\le\delta$ we set $diam(U_i)=\delta$, does this converge to the usual Hausdorff measure as $\delta\searrow 0$? I think so by the squeeze theorem or something. this is a larger "measure" than $\mathcal H^k_\delta$ and that increases to $\mathcal H^k$ but then we can replace all of those $U_i$'s with balls, incurring some fixed error In fractal geometry, the Minkowski–Bouligand dimension, also known as Minkowski dimension or box-counting dimension, is a way of determining the fractal dimension of a set S in a Euclidean space $\mathbb{R}^n$, or more generally in a metric space (X, d). It is named after the German mathematician Hermann Minkowski and the French mathematician Georges Bouligand. To calculate this dimension for a fractal S, imagine this fractal lying on an evenly spaced grid, and count how many boxes are required to cover the set. The box-counting dimension is calculated by seeing how this number changes as we make the grid... @BalarkaSen what is this ok but this does confirm that what I'm trying to do is wrong haha In mathematics, Hausdorff dimension (a.k.a. fractal dimension) is a measure of roughness and/or chaos that was first introduced in 1918 by mathematician Felix Hausdorff. Applying the mathematical formula, the Hausdorff dimension of a single point is zero, of a line segment is 1, of a square is 2, and of a cube is 3. That is, for sets of points that define a smooth shape or a shape that has a small number of corners—the shapes of traditional geometry and science—the Hausdorff dimension is an integer agreeing with the usual sense of dimension, also known as the topological dimension. However, formulas... Let $a,b \in \Bbb{R}$ be fixed, and let $n \in \Bbb{Z}$.
If $[\cdot]$ denotes the greatest integer function, is it possible to bound $|[abn] - [a[bn]]|$ by a constant that is independent of $n$? Are there any nice inequalities involving the greatest integer function? I am trying to show that $n \mapsto [abn]$ and $n \mapsto [a[bn]]$ are equivalent quasi-isometries of $\Bbb{Z}$...that's the motivation.
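One can at least check a bound like $|a| + 1$ experimentally: since $0 \le bn - [bn] < 1$, we have $|abn - a[bn]| < |a|$, and floors of two reals differing by less than $|a|$ differ by at most $[|a|] + 1$ as integers. A quick numerical sketch (the choice $a = \sqrt 2$, $b = \pi$ is arbitrary, not from the question):

```python
import math

a, b = math.sqrt(2), math.pi

# |abn - a*floor(bn)| = a*(bn - floor(bn)) < a, and floors of reals that
# differ by less than a can differ by at most floor(a) + 1 as integers.
diffs = [
    abs(math.floor(a * b * n) - math.floor(a * math.floor(b * n)))
    for n in range(1, 10_000)
]
print(max(diffs))  # stays <= floor(a) + 1, i.e. <= 2 for these a, b
```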
First, the permanent of an $n\times n$ integer matrix with $O(n)$-bit coefficients is an integer with $O(n^2)$ bits, hence if we know it modulo an integer with $\Omega(n^2)$ bits (with the implied constant depending on the constant in the input bit size), we know it outright. Your problem with $p$ allowed to have $O(\log n)$ bits is PP-hard under polynomial-time Turing reductions: in order to compute permanent, just compute it modulo every prime below $cn^2$, and use the Chinese remainder theorem to find its value modulo the product of these primes, which is roughly $e^{cn^2}$. By the above, this provides the true value of the permanent for suitable $c$. There is a serious obstacle to using your problem for primes with $\omega(\log n)$ bits: any reduction to this problem must in particular produce a prime number, and we know of no provably correct deterministic way of computing large primes significantly faster than brute force. But if you allow randomized reductions, or say, reductions with free access to solutions of the search problem “given $x$, find a prime $p$ such that $x\le p\le2x$”, then you can reduce the number of oracle queries in the reduction of permanent I gave above by using a smaller number of larger primes: if you allow primes $p$ with $m(n)$ bits, you can do away with about $n^2/m(n)$ oracle calls. For $m(n)=n^\alpha$ with constant $\alpha$, you can use padding to reduce the number of queries to $1$, i.e., to get a randomized many-one reduction: given the input $n\times n$ matrix $M$, find a prime $p$ of at least $n^2$ bits, and let $M'$ be the matrix $M$ padded with diagonal $1$s to dimension $n'\times n'$, where $n'^\alpha\ge\log p$. Then ask for the permanent of $M'$ modulo $p$.
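To make the CRT step concrete, here is a small Python sketch: the modular-permanent oracle of the reduction is simulated by a naive permanent routine, and the residues are recombined with the Chinese remainder theorem (the toy matrix and all names are mine):

```python
from itertools import permutations
from math import prod

def permanent(M):
    """Naive permanent via the permutation expansion (fine for small n)."""
    n = len(M)
    return sum(prod(M[i][p[i]] for i in range(n)) for p in permutations(range(n)))

def crt(residues, moduli):
    """Chinese remainder theorem for pairwise coprime moduli."""
    x, m = 0, 1
    for r, mi in zip(residues, moduli):
        # Solve x + m*t ≡ r (mod mi) for t, then absorb it into x.
        t = ((r - x) * pow(m, -1, mi)) % mi
        x += m * t
        m *= mi
    return x % m

M = [[1, 2, 3], [4, 5, 6], [7, 8, 10]]
true_perm = permanent(M)

primes = [101, 103, 107]  # enough primes so their product exceeds per(M)
residues = [permanent(M) % p for p in primes]  # stand-in for the modular oracle
assert prod(primes) > true_perm
assert crt(residues, primes) == true_perm
print(true_perm)
```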
[Figure: The integral test applied to the harmonic series. Since the area under the curve $y = 1/x$ for $x \in [1, \infty)$ is infinite, the total area of the rectangles must be infinite as well.]

In mathematics, the integral test for convergence is a method used to test infinite series of non-negative terms for convergence. It was developed by Colin Maclaurin and Augustin-Louis Cauchy and is sometimes known as the Maclaurin–Cauchy test.

Statement of the test

Consider an integer $N$ and a non-negative function $f$ defined on the unbounded interval $[N, \infty)$, on which it is monotone decreasing. Then the infinite series
$$\sum_{n=N}^{\infty} f(n)$$
converges to a real number if and only if the improper integral
$$\int_N^{\infty} f(x)\,dx$$
is finite. In other words, if the integral diverges, then the series diverges as well.

If the improper integral is finite, then the proof also gives the lower and upper bounds
$$\int_N^{\infty} f(x)\,dx \;\le\; \sum_{n=N}^{\infty} f(n) \;\le\; f(N) + \int_N^{\infty} f(x)\,dx \qquad (1)$$
for the infinite series.

The proof basically uses the comparison test, comparing the term $f(n)$ with the integral of $f$ over the intervals $[n-1, n)$ and $[n, n+1)$, respectively. Since $f$ is a monotone decreasing function, we know that
$$f(x) \le f(n) \quad \text{for all } x \in [n, \infty)$$
and
$$f(n) \le f(x) \quad \text{for all } x \in [N, n].$$
Hence, for every integer $n \ge N$,
$$\int_n^{n+1} f(x)\,dx \;\le\; \int_n^{n+1} f(n)\,dx = f(n), \qquad (2)$$
and, for every integer $n \ge N+1$,
$$f(n) = \int_{n-1}^{n} f(n)\,dx \;\le\; \int_{n-1}^{n} f(x)\,dx. \qquad (3)$$
By summation over all $n$ from $N$ to some larger integer $M$, we get from (2)
$$\int_N^{M+1} f(x)\,dx = \sum_{n=N}^{M} \underbrace{\int_n^{n+1} f(x)\,dx}_{\le\, f(n)} \;\le\; \sum_{n=N}^{M} f(n)$$
and from (3)
$$\sum_{n=N}^{M} f(n) \;\le\; f(N) + \sum_{n=N+1}^{M} \underbrace{\int_{n-1}^{n} f(x)\,dx}_{\ge\, f(n)} = f(N) + \int_N^{M} f(x)\,dx.$$
Combining these two estimates yields
$$\int_N^{M+1} f(x)\,dx \;\le\; \sum_{n=N}^{M} f(n) \;\le\; f(N) + \int_N^{M} f(x)\,dx.$$
Letting $M$ tend to infinity, the bounds in (1) and the result follow.

Applications

The harmonic series
$$\sum_{n=1}^{\infty} \frac{1}{n}$$
diverges because, using the natural logarithm, its antiderivative, and the fundamental theorem of calculus, we get
$$\int_1^M \frac{1}{n}\,dn = \ln n \Bigr|_1^M = \ln M \to \infty \quad \text{for } M \to \infty.$$
In contrast, the series
$$\zeta(1+\varepsilon) = \sum_{n=1}^{\infty} \frac{1}{n^{1+\varepsilon}}$$
(cf. Riemann zeta function) converges for every $\varepsilon > 0$, because by the power rule
$$\int_1^M \frac{1}{x^{1+\varepsilon}}\,dx = -\frac{1}{\varepsilon x^{\varepsilon}} \biggr|_1^M = \frac{1}{\varepsilon}\Bigl(1 - \frac{1}{M^{\varepsilon}}\Bigr) \le \frac{1}{\varepsilon} < \infty \quad \text{for all } M \ge 1.$$
From (1) we get the upper estimate
$$\zeta(1+\varepsilon) = \sum_{n=1}^{\infty} \frac{1}{n^{1+\varepsilon}} \le \frac{1+\varepsilon}{\varepsilon},$$
which can be compared with some of the particular values of the Riemann zeta function.

Borderline between divergence and convergence

The above examples involving the harmonic series raise the question whether there are monotone sequences such that $f(n)$ decreases to 0 faster than $1/n$ but slower than $1/n^{1+\varepsilon}$ in the sense that
$$\lim_{n\to\infty} \frac{f(n)}{1/n} = 0 \quad\text{and}\quad \lim_{n\to\infty} \frac{f(n)}{1/n^{1+\varepsilon}} = \infty$$
for every $\varepsilon > 0$, and whether the corresponding series of the $f(n)$ still diverges. Once such a sequence is found, a similar question can be asked with $f(n)$ taking the role of $1/n$, and so on. In this way it is possible to investigate the borderline between divergence and convergence of infinite series.

Using the integral test for convergence, one can show (see below) that, for every natural number $k$, the series
$$\sum_{n=N_k}^{\infty} \frac{1}{n \ln(n) \ln_2(n) \cdots \ln_{k-1}(n) \ln_k(n)} \qquad (4)$$
still diverges (cf. the proof that the sum of the reciprocals of the primes diverges for $k = 1$), but
$$\sum_{n=N_k}^{\infty} \frac{1}{n \ln(n) \ln_2(n) \cdots \ln_{k-1}(n) (\ln_k(n))^{1+\varepsilon}} \qquad (5)$$
converges for every $\varepsilon > 0$. Here $\ln_k$ denotes the $k$-fold composition of the natural logarithm, defined recursively by
$$\ln_k(x) = \begin{cases} \ln(x) & \text{for } k = 1, \\ \ln(\ln_{k-1}(x)) & \text{for } k \ge 2. \end{cases}$$
Furthermore, $N_k$ denotes the smallest natural number such that the $k$-fold composition is well-defined and $\ln_k(N_k) \ge 1$, i.e.
$$N_k \ge \underbrace{e^{e^{\cdot^{\cdot^{e}}}}}_{k\ e\text{'s}} = e \uparrow\uparrow k$$
using tetration or Knuth's up-arrow notation.

To see the divergence of the series (4) using the integral test, note that by repeated application of the chain rule
$$\frac{d}{dx} \ln_{k+1}(x) = \frac{d}{dx} \ln(\ln_k(x)) = \frac{1}{\ln_k(x)} \frac{d}{dx} \ln_k(x) = \cdots = \frac{1}{x \ln(x) \cdots \ln_k(x)},$$
hence
$$\int_{N_k}^{\infty} \frac{dx}{x \ln(x) \cdots \ln_k(x)} = \ln_{k+1}(x) \bigr|_{N_k}^{\infty} = \infty.$$
To see the convergence of the series (5), note that by the power rule, the chain rule and the above result,
$$-\frac{d}{dx} \frac{1}{\varepsilon (\ln_k(x))^{\varepsilon}} = \frac{1}{(\ln_k(x))^{1+\varepsilon}} \frac{d}{dx} \ln_k(x) = \cdots = \frac{1}{x \ln(x) \cdots \ln_{k-1}(x) (\ln_k(x))^{1+\varepsilon}},$$
hence
$$\int_{N_k}^{\infty} \frac{dx}{x \ln(x) \cdots \ln_{k-1}(x) (\ln_k(x))^{1+\varepsilon}} = -\frac{1}{\varepsilon (\ln_k(x))^{\varepsilon}} \biggr|_{N_k}^{\infty} < \infty,$$
and (1) gives bounds for the infinite series in (5).
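The two-sided bound $\int_N^\infty f(x)\,dx \le \sum_{n=N}^\infty f(n) \le f(N) + \int_N^\infty f(x)\,dx$ is easy to check numerically. A quick sketch for $f(x) = 1/x^2$ with $N = 1$, where the integral is $1$ analytically and the series sums to $\zeta(2) = \pi^2/6 \approx 1.6449$:

```python
import math

def f(x):
    return 1.0 / x ** 2

N = 1
integral = 1.0 / N  # ∫_1^∞ x^(-2) dx = 1, evaluated analytically

# Truncated sum of the series; the full sum is ζ(2) = π²/6 ≈ 1.6449.
partial = sum(f(n) for n in range(N, 10 ** 6))

assert integral <= partial          # lower bound: ∫ ≤ Σ (holds already truncated)
assert partial <= f(N) + integral   # upper bound: Σ ≤ f(N) + ∫ = 2
print(partial, math.pi ** 2 / 6)
```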
Category:Derivatives

Let $I\subset\R$ be an open interval. Let $f : I \to \R$ be a real function. Let $f$ be differentiable on the interval $I$. Then:

$\displaystyle \forall x \in I: f' \left({x}\right) := \lim_{h \mathop \to 0} \frac {f \left({x + h}\right) - f \left({x}\right)} h$

Subcategories

This category has the following 7 subcategories, out of 7 total.

- Laplace Transforms of Derivatives (6 P)
- Power Rule for Derivatives (14 P)

Pages in category "Derivatives"

The following 18 pages are in this category, out of 18 total.

- Derivative of Cosine Integral Function
- Derivative of Error Function
- Derivative of Exponential Integral Function
- Derivative of Fresnel Cosine Integral Function
- Derivative of Fresnel Sine Integral Function
- Derivative of Gamma Function
- Derivative of Logarithm over Power
- Derivative of Nth Root
- Derivative of Sine Integral Function
- Derivative of Square Function
- Derivatives of Function of a x + b
- Derivatives of Hyperbolic Functions
- Derivatives of Inverse Hyperbolic Functions
- Derivatives of Inverse Trigonometric Functions
- Derivatives of Trigonometric Functions
Author Message TAGS: Manager Joined: 27 May 2010 Posts: 200 Re: For every positive integer x, f(x) represents the greatest prime fact [#permalink] Show Tags 18 Jul 2019, 22:56 For every positive integer x, f(x) represents the greatest prime factor of x!, and g(x) represents the smallest prime factor of 2^x+1. What is g(f(12))? x = 12. f(12) = greatest prime factor of 12! = 11. g(11) = smallest prime factor of 2^11 + 1 = 2049, which is 3. Option B _________________ Please give Kudos if you like the post Intern Joined: 14 Mar 2017 Posts: 40 Location: United States (VA) GPA: 2.9 WE: Science (Pharmaceuticals and Biotech) Re: For every positive integer x, f(x) represents the greatest prime fact [#permalink] Show Tags 18 Jul 2019, 23:10 Question: For every positive integer \(x\), \(f(x)\) represents the greatest prime factor of \(x!\), and \(g(x)\) represents the smallest prime factor of \(2^x+1\). What is \(g(f(12))\)? \(f(12) =\) the greatest prime factor of \(12!\). \(12! = 12 \times 11 \times 10 \times 9 \times 8 \times 7 \times 6 \times 5 \times 4 \times 3 \times 2 \times 1\), so the greatest prime factor is \(11\). \(g(x) =\) the smallest prime factor of \(2^x+1\), so we need \(g(11)\), the smallest prime factor of \(2^{11}+1\). Since \(2^{11}\) is a little much to calculate without a calculator, start small and look for a pattern.
\[ \begin{vmatrix*} 2^x & + & 1 & = & # & \to & \div 3?\\ 2^1 & + & 1 & = & 3 & \to & Yes \\ 2^2 & + & 1 & = & 5 & \to & No \\ 2^3 & + & 1 & = & 9 & \to & Yes \\ 2^4 & + & 1 & = & 17 & \to & No \\ 2^5 & + & 1 & = & 33 & \to & Yes \end{vmatrix*} \\ \text{Pattern #1: Every answer is odd and therefore not divisible by 2.}\\ \text{Pattern #2: Every odd exponent of 2 with the addition of 1, equals an answer divisible by 3.}\\ \] \(\text{Since }2^{11} \text{ has an odd exponent, the final answer will be divisible by 3.}\) \(\text{Therefore, 3 is the smallest prime factor of } (2^{11}+1)\).Correct Answer: B. 3 _________________ "What is my purpose?" "You pass butter." "Oh my god..." "Yeah, welcome to the club, pal." Manager Joined: 04 Dec 2017 Posts: 93 Location: India Concentration: Other, Entrepreneurship GMAT 1: 570 Q36 V33 GMAT 2: 620 Q44 V32 GMAT 3: 720 Q49 V39 GPA: 3 WE: Engineering (Other) Re: For every positive integer x, f(x) represents the greatest prime fact [#permalink] Show Tags 18 Jul 2019, 23:10 We need to find g(f(12)). Lets break it down. First lets find out f(12). f(12) = greatest prime factor of 12!. 12! = 1 * 2 * 3 * 4 * 5 * 6 * 7 * 8 * 9 * 10 * 11 * 12 Hence, f(12) = 11. Therefore, g(f(12)) becomes g(11). g(11) = smallest prime factor of ((2^11)+1) ie, smallest prime factor of 2049. Using the options we get 3 as the smallest prime factor of 2049. Answer: B SVP Joined: 26 Mar 2013 Posts: 2330 Re: For every positive integer x, f(x) represents the greatest prime fact [#permalink] Show Tags 18 Jul 2019, 23:11 f(12) = greatest prime factor of 12! 
= 11 g(11) = 2^11 + 1 It is known that 2^10 = 1024 Then 2^11= 2 * 1024 = 2048 g(11) = 2^11 + 1 = 2049 The sum of all digits is divisible by 3 Answer: B Manager Joined: 06 Jun 2019 Posts: 118 Re: For every positive integer x, f(x) represents the greatest prime fact [#permalink] Show Tags 18 Jul 2019, 23:27 From the stem we know that: \(f(x)\) - is the greatest prime factor of \(x!\) \(g(y)\) - is the smallest prime factor of \(2^y+1\) Let's figure out step by step what \(g(f(12))\) is. In order to calculate what \(g(y)\) is, we need to first calculate what \(y\) is. Since \(y=f(12)\), we will first calculate what \(f(12)\) is.1. \(f(12)\) is the greatest prime factor of \(12!\) So \(f(12)=11\) or \(y=11\)2. \(g(11)\) is the smallest prime factor of \(2^{11}+1\). What is the smallest prime factor of \(2049\) ? \(2049\) is not even, so it can't be \(2\). Is \(2049\) divisible by \(3\) ? \(2+0+4+9=15\) is divisible by \(3\). Thus \(g(11)=3\) Hence B _________________ Bruce Lee : “I fear not the man who has practiced 10,000 kicks once, but I fear the man who has practiced one kick 10,000 times.”GMAC : “I fear not the aspirant who has practiced 10,000 questions, but I fear the aspirant who has learnt the most out of every single question.” Intern Joined: 01 Jun 2019 Posts: 1 Re: For every positive integer x, f(x) represents the greatest prime fact [#permalink] Show Tags 19 Jul 2019, 00:03 f(12) = factorial of 12 whose greatest prime factor is 11 as 12! is 1*2*3*4*5*6*7*8*9*10*11*12. Hence, g(f(12)) = g(11) = 2^11 + 1 = 2049 = 3*683. Hence, the smallest prime factor of g(f(12)) is 3. Correct answer is B. Manager Joined: 12 Jan 2018 Posts: 113 Re: For every positive integer x, f(x) represents the greatest prime fact [#permalink] Show Tags 19 Jul 2019, 00:32 f(12)= greatest prime factor of 12!. That is 11. g(f(12))=g(11)= smallest prime factor of \(2^{11}\)+1=2048+1=2049. Smallest prime number which can divide 2049 is 3. 
Hence, ans: B _________________ "Remember that guy that gave up? Neither does anybody else" Senior Manager Joined: 18 Jan 2018 Posts: 308 Location: India Concentration: General Management, Healthcare GPA: 3.87 WE: Design (Manufacturing) Re: For every positive integer x, f(x) represents the greatest prime fact [#permalink] Show Tags 19 Jul 2019, 00:41 For every positive integer x, f(x) represents the greatest prime factor of x!, and g(x) represents the smallest prime factor of 2^x + 1. What is g(f(12))? Given that f(x) = greatest prime factor of x! and g(x) = smallest prime factor of 2^x + 1: when x = 12, f(12) = greatest prime factor of 12! = 12*11*10*9*8*7*6*5*4*3*2*1, and 11 is the greatest prime factor. g(11) = 2^11 + 1 = 2048 + 1 = 2049, which is divisible by 3 ==> 3 is the smallest prime factor. A small observation here: for every odd power of 2, 2^odd + 1 is a multiple of 3. Option B is the answer. Senior Manager Joined: 25 Sep 2018 Posts: 413 Location: United States (CA) Concentration: Finance, Strategy GPA: 3.97 WE: Investment Banking (Investment Banking) For every positive integer x, f(x) represents the greatest prime fact [#permalink] Show Tags 19 Jul 2019, 02:54 For every positive integer x, f(x) represents the greatest prime factor of x!, and g(x) represents the smallest prime factor of \(2^x\) + 1. What is g(f(12))? A. 2 B. 3 C. 5 D. 7 E. 11 Solution: The above function is a compound function; in a compound function, the output of one function becomes the input of the next function, and so on. While solving this, the first function is the innermost function and the last function is the outermost. We'll first solve the innermost function. f(12) = greatest prime factor of 12!, i.e. of 12 X 11 X 10 X 9 X 8 X 7 X 6 X 5 X 4 X 3 X 2 X 1. We observe that the greatest prime is 11. We take 11 as the input for the outer function: g(11) = smallest prime factor of \(2^{11}\) + 1 = 2048 + 1 = 2049. We can try out the options and divide 2049 by them; we see that the smallest prime factor is 3. Hence the answer is B.
_________________ Why do we fall?...So we can learn to pick ourselves up again Intern Joined: 24 Mar 2018 Posts: 48 Location: India Concentration: Operations, Strategy WE: Project Management (Energy and Utilities) Re: For every positive integer x, f(x) represents the greatest prime fact [#permalink] Show Tags 19 Jul 2019, 02:56 \(f(12)\) is the greatest prime factor of 12!. \(12!=12*11*10*...1\) The greatest prime factor is 11. So, \(f(12)=11\) \(g(f(12))=g(11)\) \(g(11)\) is the smallest prime factor of \(2^{11}+1\) (=2048+1=2049) The smallest prime number that can divide 2049 is 3. Hence, option (B). Intern Joined: 08 Jul 2019 Posts: 37 Re: For every positive integer x, f(x) represents the greatest prime fact [#permalink] Show Tags 19 Jul 2019, 03:47 Greatest factor of 12! is 11. 2^11+1 equals to 2049, which is divisible by 3 - smallest prime factor. Answer B Manager Joined: 01 Oct 2018 Posts: 112 Re: For every positive integer x, f(x) represents the greatest prime fact [#permalink] Show Tags 19 Jul 2019, 04:29 For every positive integer x, f(x) represents the greatest prime factor of x!, and g(x) represents the smallest prime factor of 2^x+1 What is (g(f(12))? f(12): 12! = 1*2*3...11*12 The greatest prime factor is 11. g(11): 2^11+1 = 2049 2049 / 2 = not int 2049 / 3 = int The smallest prime factor is 3 ANSWER B A. 2 B. 3 C. 5 D. 7 E. 11 Manager Joined: 03 Aug 2009 Posts: 58 Re: For every positive integer x, f(x) represents the greatest prime fact [#permalink] Show Tags 19 Jul 2019, 05:26 For every positive integer \(x, f(x)\) represents the greatest prime factor of \(x!\), and \(g(x)\) represents the smallest prime factor of \(2^{x}+1\). What is \((g(f(12))\)? f(12) = GPF of 12!: GPF of 1*2*3*4*5*6*7*8*9*10*11*12 = 11 \(g(11) = 2^{11}+1 = 2048+1=2049\) 2049 is divisible by 3 as \((2+0+4+9 = 15)\) is divisible by 3. Therefore smallest PF of 2049 is 3 A. 2 B. 3 C. 5 D. 7 E. 
11 Manager Joined: 07 Jul 2019 Posts: 54 Re: For every positive integer x, f(x) represents the greatest prime fact [#permalink] Show Tags 19 Jul 2019, 05:37 f(12) = highest prime factor of 12 factorial = 11. g(11) = smallest prime factor of 2^11 + 1 = 2*1024 + 1 = 2049. The smallest prime is 2, but 2049 cannot be divided by 2, so A will not work. B will work: 2049 can be evenly divided by 3. B is the answer Manager Joined: 25 Jul 2018 Posts: 211 Re: For every positive integer x, f(x) represents the greatest prime fact [#permalink] Show Tags 19 Jul 2019, 05:41 For every positive integer x, f(x) represents the greatest prime factor of x!, and g(x) represents the smallest prime factor of 2^x+1. What is g(f(12))? If f(x) represents the greatest prime factor of x!, the greatest prime factor of 12! will be 11. ---> g(f(12)) = g(11); 2^11+1 = 2048+1 = 2049. It says that g(x) represents the smallest prime factor of 2^x+1 (= 2049) ---> the smallest prime factor of 2049 is 3. The answer choice is B. Manager Joined: 24 Jun 2017 Posts: 71 For every positive integer x, f(x) represents the greatest prime fact [#permalink] Show Tags 19 Jul 2019, 05:57 What is g(f(12))? f(12) = largest prime factor of 12!, which is 11. g(11) = smallest PF of 2^11+1 = 2049, so B. 3 is the answer Posted from my mobile device Manager Joined: 24 Sep 2014 Posts: 50 Concentration: General Management, Technology Re: For every positive integer x, f(x) represents the greatest prime fact [#permalink] Show Tags 19 Jul 2019, 06:54 x=12, f(x) is greatest prime factor of x!, so x! = 12!
= 12x11x10.....x1, the greatest prime factor is 11 --> f(x) = 11 g(x) is smallest prime factor of 2^x+1, since x = f(x) = 11 --> g(x) = g(2^11+1) 2^11+1 is odd number (=2049), and the smallest odd prime number is 3, so g(2^11+1) = 3 Answer is (B) Intern Joined: 23 Jul 2017 Posts: 20 Location: India Concentration: Technology, Entrepreneurship GPA: 2.16 WE: Other (Other) Re: For every positive integer x, f(x) represents the greatest prime fact [#permalink] Show Tags 19 Jul 2019, 07:53 f(12) = Greatest Prime Factor of 12!, that is 11. g(11)= Smallest Prime Factor of 2^11 + 1 2^11+1=32x32x2+1=2049, which is divisible by 3, so the smallest prime factor is 3. There might be a better solution to avoid the calculation but this is the best I got. Intern Joined: 05 May 2013 Posts: 12 Re: For every positive integer x, f(x) represents the greatest prime fact [#permalink] Show Tags 18 Jul 2019, 09:04 (g(f(12)) : F(12) - represent the greatest prime of 12! = 11 g(11) = 2^11+1 The last digit of 2^11 = 4 (based on the pattern of 2,4,8,6) So the smallest prime or the prime that divides g(11) is 5 IMO : C Manager Joined: 12 Apr 2017 Posts: 134 Location: United States Concentration: Finance, Operations GPA: 3.1 Re: For every positive integer x, f(x) represents the greatest prime fact [#permalink] Show Tags 18 Jul 2019, 11:12 f(12) = 12!, 11 is the highest prime factor G(11) = 2^11 + 1, and 2 becomes the lowest prime factor. Re: For every positive integer x, f(x) represents the greatest prime fact [#permalink] 18 Jul 2019, 11:12 Go to page Previous 1 2 3 4 5 Next [ 84 posts ]
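For what it's worth, the arithmetic in the thread is easy to confirm mechanically; a short Python sketch (the helper names are mine):

```python
from math import factorial

def prime_factors(n):
    """Trial-division prime factorisation, in ascending order (ample here)."""
    factors, d = [], 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)
    return factors

f12 = max(prime_factors(factorial(12)))  # greatest prime factor of 12!
g = min(prime_factors(2 ** f12 + 1))     # smallest prime factor of 2^11 + 1
print(f12, g)  # 11 and 3, i.e. answer B

# The pattern several posters noted: 3 divides 2^k + 1 for every odd k,
# since 2 ≡ -1 (mod 3), so 2^odd ≡ -1 (mod 3).
assert all((2 ** k + 1) % 3 == 0 for k in range(1, 50, 2))
```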
I'm thinking to the famous problem of cancellation property in Grp, i.e: $$G_1 \times G_2 \cong G_1 \times G_3 \Rightarrow G_2 \cong G_3. $$ Clearly there are many counterexamples like $\prod_{i \in \omega}\mathbb{Z}_i$ or $ \oplus_{i \in \omega}\mathbb{Z}_i$ but these counterexamples can be bypassed by giving a definition. We say that a group G is $\Pi$-compact iff $$G \cong \prod_{i\in I}G_i, \ G_i \neq \{e\} \ \Rightarrow |I| < \infty.$$ We say that a group G is $\Sigma$-compact iff $$G \cong \oplus_{i\in I}G_i, \ G_i \neq \{e\} \ \Rightarrow |I| < \infty.$$ We say that a group is $\times$-compact iff it's $\Pi$ and $\Sigma$ compact. I've been working on many conjectures and with Seirios' help many of them have been solved. I'll tick ($\checkmark$) proved ones and refuse ($\neg$) false ones. $$\checkmark? \ \ \ \ \ \ \ \ G_1,G_2 \times\text{-compact} \Rightarrow G_1 \times G_2 \times\text{-compact} $$ $$\checkmark \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ G \text{ finitely generated} \Rightarrow G \times\text{-compact} $$ $$\neg \ \ \ \ \ \ \ \ \ \ \ \ \ \ H <G, G \times\text{-compact} \Rightarrow H \times\text{-compact} $$ $$\neg \ \ H \triangleleft G, \ \ \ \ H,G \times\text{-compact} \Rightarrow G/H \times\text{-compact} $$ $$\neg \ \ \ \ \ G \times\text{-compact} \Rightarrow \text{cancellation property holds}.$$ What's clearly true is that: finite groups are $\times$-compact (and I've seen on the web that cancellation property holds for them), simple groups are $\times$-compact, free groups are $\times$-compact. Applying $(3) \wedge (4)$ on free groups we may get (2) but $(3) \wedge (4)$ have to be false because any group is quotient of a free group.$$ \neg ( (3) \wedge (4)).$$As Seirios has observed for countable groups it holds that $\times$-compact $\Leftrightarrow$ $\Sigma$-compact.Again Seirios noted, here is proved that cancellation is not true for finitely presented groups so if (2) is true (5) is false. 
$$ (2) \Rightarrow \neg (5) $$ Seirios proved (2) here. A sketch of proof for (1): Let's assume that $G_1 \times G_2$ is not $\times$-compact. So $G_1 \times G_2 \cong \prod_{i\in \nu}P_i$. Let's call $\pi_1$ the projection on the first coordinate. Then $G_1\cong \pi_1 (G_1 \times G_2) \cong \pi_1(\prod P_i) \cong \prod (\pi_1 P_i)$, so only finitely many $P_i$ are not contained in $\{0\} \times G_2$. The same argument on the other side leads to a contradiction $\square.$ Is this correct? I have a counterexample for (4). Let's consider $G:=*_{i \in \omega}\mathbb{Z}_i$. Since it's free, it's $\times$-compact. Its commutator subgroup $[G,G]$ is $\times$-compact, but the quotient $$G/[G,G] \cong \oplus_{i \in \omega} \mathbb{Z} $$ is not $\times$-compact. $$ \neg (4) .$$ From this, $$(2) \Rightarrow \neg (3).$$ News: None.
Limit on num_vars? I have a strange issue: I have added four new parameters to cosmomc, putting them in parameter positions 14, 15, 16 and 17. This has moved \Omega_\Lambda, Age/Gyr, \Omega_m, \sigma_8, z_{re} and H_0 to parameter positions 18, 19, 20, 21, 22, and 24. The problem is that when I run getdist, H_0 is missing in the outputs; test.margstats only goes up to z_{re}. The value of num_vars is 11, meaning H_0 is not given in the output (num_vars should be 12). I have edited distparams.ini so that lab24 = H_0 and plotparams_num = 0. I've checked the chains, and they do contain H_0. Hence I was wondering if there is some limit on num_vars set elsewhere? What have I missed?
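Not an answer to the num_vars question, but a quick way to cross-check the chains themselves is to count columns directly; a sketch assuming the standard CosmoMC text-chain layout (weight and -log(like) in the first two columns, then the parameters):

```python
def count_chain_params(chain_lines):
    """Number of parameter columns in CosmoMC text-chain rows.

    Each row of a CosmoMC chain holds the multiplicity (weight) and
    -log(likelihood) in the first two columns; the rest are parameters.
    """
    first_row = next(iter(chain_lines)).split()
    return len(first_row) - 2

# Usage on a real run (the chain file name here is hypothetical):
# with open("chains/test_1.txt") as fh:
#     print(count_chain_params(fh))
```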
calculateEffectiveMassTensor

calculateEffectiveMassTensor(configuration, kpoint=None, spin=None, band_indices=None)

Calculates the effective mass tensor \(\frac{d^2 E}{dk_i dk_j}\) at the given kpoint, spin, and bands.

Parameters:

- configuration (BulkConfiguration) – The configuration for which to calculate the effective mass tensor.
- kpoint (tuple of floats) – The kpoint as three floats representing fractional reciprocal-space coordinates. Default: the Gamma point (0.0, 0.0, 0.0).
- spin (Spin.Up | Spin.Down | Spin.All) – The spin component for which to perform the calculation. Default: Spin.All.
- band_indices (list of non-negative int) – Indices of the bands for which to calculate the effective mass tensor. Default: None.

Returns: List of effective mass tensors, in units of the electron mass \(m_e\), for each requested band.

Return type: list of PhysicalQuantity

Usage Examples

Evaluate the effective mass tensor for a bulk configuration:

    effective_mass_tensor = calculateEffectiveMassTensor(
        configuration=bulk_configuration,
        kpoint=[0.5, 0.5, 0.0],
        spin=Spin.Up,
        band_indices=[14],
    )
    effective_mass_xy = effective_mass_tensor[0,1]

Notes

The effective mass tensor is calculated through the second-order derivative of the energy band \(\epsilon_{n\mathbf{k}}\) with respect to the \(\mathbf{k}\) vector, which is a 3x3 tensor \(\mathbf{M}\). The effective mass tensor element \(\alpha,\beta\) is then given by the inverse of \(\mathbf{M}\). The second-order derivative of the energy band is obtained through an expansion of the Kohn-Sham equation in terms of a momentum displacement \(\delta \mathbf{k}\). To second order of this expansion, one obtains a sum-over-states expression in which \(H_\mathbf{k}\) and \(S_\mathbf{k}\) are the Fourier components of the Hamiltonian and overlap matrices, respectively. Special care is taken to deal with degenerate bands. The degenerate states are first rotated such that the perturbation becomes diagonal in these states.
The sum in the expression above then extends only over those states \(|m\rangle\) that do not belong to the set of degenerate states.
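Since the returned tensor is symmetric, its eigenvalues give the principal effective masses and its eigenvectors the principal axes. A sketch with NumPy, using made-up numbers rather than output of an actual calculation:

```python
import numpy as np

# An illustrative symmetric 3x3 effective-mass tensor in units of m_e
# (the values are invented, not QuantumATK output).
m = np.array([[0.20, 0.02, 0.00],
              [0.02, 0.20, 0.00],
              [0.00, 0.00, 0.95]])

# eigh returns eigenvalues in ascending order for a symmetric matrix;
# these are the principal effective masses, the columns of `axes` the
# corresponding principal directions.
masses, axes = np.linalg.eigh(m)
print(masses)  # principal masses ≈ 0.18, 0.22, 0.95 (in units of m_e)
```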
I saw another question on here concerning light speeds, and specifically concerning different types of light waves etc. The typical answer dealt with the speed of light in a vacuum, but how do we know that space, and especially deep space, is truly a vacuum? I mean, Einstein implies space is able to be contorted or bent, which raises the question of whether it is actually made of something. Space is also full of matter, including dark matter, so how can we tell these things don't affect light, such as its speed? For all we know, light could be twice as fast in a true vacuum. Space is not a complete vacuum. On average there's about 1 atom per $cm^3$. The density varies with the location. Dark matter is additional to that (about 20 times more mass). We have checked the speed of light at densities ranging from normal densities around us (of the order of $10^{23}$ atoms per $cm^3$) down to the density of space (1 atom per $cm^3$). Under all those densities we have found that the speed of light varies with the refractive index of the medium. The speed is slightly lower inside matter. Vacuum has an RI = 1, and the speed of light in vacuum is $299792.458 km/s$. Water has RI = 1.33. Hence the speed of light in water is $3/4$ of the speed in vacuum, or $225,000 km/s$. Air has RI = 1.0002772 and the speed of light in air is $299,704 km/s$, very close to vacuum. In general, the lower the density of the material, the closer the RI tends towards 1. Because of the extremely low matter density of space, the true RI of space will be extremely close to 1, and the speed of light there is indistinguishable from that in a complete vacuum. As for dark matter, until we know what it is made of, we won't be able to tell how it affects light. So it is possible that dark matter could have an RI smaller than 1, and light would go faster. However, because of its extremely low density, any effect will be minimal. The value of the speed of light in vacuum doesn't come from measurements: it comes from the laws of electrodynamics.
It is another thing entirely to verify that this is indeed the speed of light in vacuum. In electromagnetic theory there are two quantities: one known as the permittivity of free space $\epsilon_0$ and another known as the permeability of free space $\mu_0$. These two quantities are what determine the value of the speed of light in vacuum. Given these two, you obtain the speed of light in vacuum through the equation $$ c = \frac{1}{\sqrt{\mu_0 \epsilon_0}} $$ where $c$ is then your speed of light. The values of $\mu$ and $\epsilon$ vary with material, and hence the speed of light will also vary with material. Therefore, debates about whether space is truly a vacuum or not are not related to what we think the true value of the speed of light in a vacuum is.
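Plugging the SI values of the two constants into $c = 1/\sqrt{\mu_0\epsilon_0}$ reproduces the familiar number (a quick check, not part of the original answer):

```python
import math

# c = 1 / sqrt(mu_0 * eps_0), using the SI values of the vacuum
# permeability and permittivity.
mu_0 = 4 * math.pi * 1e-7       # H/m, vacuum permeability
eps_0 = 8.8541878128e-12        # F/m, vacuum permittivity

c = 1 / math.sqrt(mu_0 * eps_0)
print(f"c = {c:.0f} m/s")       # very close to the defined 299792458 m/s
```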
I am trying to implement a project about the BGM model, suggested in the book "The Concepts and Practice of mathematical finance" by Mark Joshi. My question is related to the forward volatility structure, particularly about the covariance matrix. First of all, the book assumes that forward $f_j$ has volatility $$ K_j \left(\left(a + b(t_j - t)\right)e^{-c(t_j-t)} + d\right) $$ for $t < t_j$ and $0$ otherwise. Also, the instantaneous correlation between the forward rates $f_i$ and $f_j$ is defined as $e^{-\beta|t_i - t_j|}$. Now, I need to write a method that "computes the covariance matrix for the time-step". I get confused with the fact that there are "simultaneous" forward rates at each time step that have to be simulated, which brings me to the following two questions: Concerning the dimensions of the covariance matrix, I see that they depend on the number of time periods (and not on the time step size), but how long are the time periods? Is there a convention? How do the elements of the covariance matrix come into play? This might sound a bit stupid, but when pricing a swaption, we can have the following discretization of the logarithm of the forward rate: $$ \ln F_k^{\Delta t}(t + \Delta t) = \ln F_k^{\Delta t}(t) + \sigma_k(t)\sum_{j = \alpha + 1}^k \frac{\rho_{k,j}\,\tau_j\,\sigma_j(t)\,F_j^{\Delta t}(t)}{1 + \tau_j F_j^{\Delta t}(t)} \Delta t - \\ \frac{\sigma_k(t)^2}{2}\Delta t + \sigma_k(t)\left(Z_k(t + \Delta t) - Z_k(t)\right)$$ I do not see how the covariance matrix helps in a Monte Carlo simulation. I might be confusing concepts, so some help would be welcome.
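For concreteness, here is one way the per-step covariance matrix is often assembled (a sketch under the question's own parameterisation; the midpoint approximation of $\int_t^{t+\Delta t}\sigma_i(s)\sigma_j(s)\rho_{ij}\,ds$ and all function names are my assumptions, not Joshi's). The matrix has one row and column per forward rate still alive, so its dimension depends on the number of accrual periods rather than on the step size:

```python
import math

# Sketch: per-step covariance of the log-forwards under the abcd volatility
# K_j * ((a + b*(t_j - t)) * exp(-c*(t_j - t)) + d) and the correlation
# rho_ij = exp(-beta * |t_i - t_j|) from the question.  The integral of
# sigma_i * sigma_j * rho_ij over [t, t + dt] is approximated at the midpoint.

def inst_vol(t, t_expiry, a, b, c, d, K=1.0):
    """Instantaneous volatility of the forward rate expiring at t_expiry."""
    if t >= t_expiry:
        return 0.0  # the forward is dead after its expiry
    tau = t_expiry - t
    return K * ((a + b * tau) * math.exp(-c * tau) + d)

def step_covariance(t, dt, expiries, a, b, c, d, beta):
    """n x n covariance matrix of the log-forwards over the step [t, t+dt]."""
    tm = t + 0.5 * dt  # midpoint of the step
    n = len(expiries)
    cov = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            rho = math.exp(-beta * abs(expiries[i] - expiries[j]))
            si = inst_vol(tm, expiries[i], a, b, c, d)
            sj = inst_vol(tm, expiries[j], a, b, c, d)
            cov[i][j] = rho * si * sj * dt
    return cov
```

In the Monte Carlo step, this matrix is what correlates the Gaussian draws: one would typically Cholesky-factorise it and multiply the factor into a vector of independent normals to obtain the increments $Z_k(t+\Delta t) - Z_k(t)$.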
Both your teacher and your answer are wrong! There are two steps to this problem. First, we need to convert between mass fractions and mole fractions. Second, we need to convert from a per-amount basis to a per-volume basis. "Per-amount" is my made-up word for all units such as mass fraction, mole fraction, molality, etc., that are expressed per amount (whether moles or mass) of substance (whether total or solvent). "Per-volume" units include things like molarity, grams per liter, etc., where the basis is the volume of solution. Converting from an amount basis to a volume basis requires knowing the density of the solution!

Step by step:

Converting to mass fraction $$ 0.325 \frac{\mathrm{mol\;\ce{H2SO4}}}{\mathrm{mol\;total}} \Rightarrow \frac{0.325\;\mathrm{mol\;\ce{H2SO4}}}{(1-0.325)\;\mathrm{mol\;\ce{H2O}}}$$ $$\frac{0.325\;\mathrm{mol\;\ce{H2SO4}}}{(1-0.325)\;\mathrm{mol\;\ce{H2O}}} \times \frac{98.1\mathrm{\frac{g\;\ce{H2SO4}}{mol\;\ce{H2SO4}}}}{18\mathrm{\frac{g\;\ce{H2O}}{mol\;\ce{H2O}}}}=2.62\mathrm{\frac{g\;\ce{H2SO4}}{g\;\ce{H2O}}}\Rightarrow \frac{2.62}{2.62+1}\mathrm{\frac{g\;\ce{H2SO4}}{g\;total}}=0.724\mathrm{\frac{g\;\ce{H2SO4}}{g\;total}}\Rightarrow (1-0.724)\mathrm{\frac{g\;\ce{H2O}}{g\;total}}=0.276\mathrm{\frac{g\;\ce{H2O}}{g\;total}}$$ This step is pretty easy to do using the information in the problem. There are 72.4 grams of sulfuric acid present in 100 g of the solution. So far, so good.

Converting to a per-volume basis

But at this point the problem gets very tricky: ...what is the mass of water (in grams) in 100 mL of solution? It says 100 mL, not 100 g. This makes the problem much harder. This also makes your teacher's answer wrong: Teacher is not convinced with this answer and says it's wrong because the total must be 100 g The total doesn't need to be 100 g, because we are apparently dealing with 100 mL of solution. Depending on the density of the solution, the total mass will be more or less than 100 g.
According to Wikipedia, sulfuric acid at a mass fraction of 0.7 has a density of 1.60 kg/L and at a mass fraction of 0.78 has a density of 1.70 kg/L. Let's suppose a mass fraction of 0.724 has a density of ~1.65 kg/L. $$0.276\mathrm{\frac{g\;\ce{H2O}}{g\;total}}\times\frac{1650\mathrm{\;g\;total}}{\mathrm{L}}\approx 455\mathrm{\frac{g\;\ce{H2O}}{L}}$$ With the final number, it is easy to see that if there are about 455 grams of water per liter, then in 100 mL there are about 46 grams of water.
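The whole conversion chain can be reproduced in a few lines (a sketch; the 1.65 kg/L density is the interpolated assumption from the text, and the variable names are mine):

```python
# Mole fraction of H2SO4 -> grams of water in 100 mL of solution.
# The density (1.65 kg/L) is an interpolated assumption, as in the text.
M_H2SO4 = 98.1   # g/mol
M_H2O = 18.0     # g/mol

x_acid = 0.325                      # mole fraction of H2SO4
mass_acid = x_acid * M_H2SO4        # g of acid per mole of solution
mass_water = (1 - x_acid) * M_H2O   # g of water per mole of solution
w_water = mass_water / (mass_acid + mass_water)  # mass fraction of water

density = 1650.0                    # g of solution per litre (assumed)
grams_water_per_100mL = w_water * density * 0.1
print(round(grams_water_per_100mL, 1))  # -> 45.5
```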
Ground state solutions of fractional Schrödinger equations with potentials and weak monotonicity condition on the nonlinear term

1. Center for Applied Mathematics, Tianjin University, Tianjin 300072, China
2. Department of Mathematics, East China University of Science and Technology, Shanghai 200237, China

In this paper we are concerned with the fractional Schrödinger equation $ (-\Delta)^{\alpha} u+V(x)u = f(x, u) $, $ x\in {{\mathbb{R}}^{N}} $, where $ f $ has superlinear, subcritical growth and $ u\mapsto\frac{f(x, u)}{\vert u\vert} $ is nondecreasing. When $ V $ and $ f $ are periodic in $ x_{1},\ldots, x_{N} $, we show the existence of ground states and of infinitely many solutions when $ f $ is odd in $ u $. When $ V $ is coercive, or $ V $ has a bounded potential well and $ f(x, u) = f(u) $, ground states are obtained. When $ V $ and $ f $ are asymptotically periodic in $ x $, we also obtain ground state solutions. In previous research, $ u\mapsto\frac{f(x, u)}{\vert u\vert} $ was assumed to be strictly increasing; because of this small change, we are forced to go beyond the methods of smooth analysis.

Keywords: Fractional logarithmic Schrödinger equation, periodic potential, coercive potential, bounded potential, nonsmooth critical point theory.

Mathematics Subject Classification: Primary: 35J60, 35R11; Secondary: 47J30.

Citation: Chao Ji. Ground state solutions of fractional Schrödinger equations with potentials and weak monotonicity condition on the nonlinear term. Discrete & Continuous Dynamical Systems - B, 2019, 24 (11) : 6071-6089. doi: 10.3934/dcdsb.2019131
The mathese in your question makes it difficult to understand; it is best to be more concrete rather than abstract. The answer is yes, this is how higher-dimensional Dirac operators are standardly constructed. If you have the Dirac algebra (Clifford algebra) on a $2n$-dimensional space, $$ \{ \gamma_\mu, \gamma_\nu \} = 2 g_{\mu\nu},$$ say Euclidean, then you can split the space coordinates into even and odd pairs and define the raising and lowering operators: $$ \sqrt{2}\,\gamma^-_{i} = \gamma_{2i} + i \gamma_{2i+1}$$ $$ \sqrt{2}\,\gamma^+_{i} = \gamma_{2i} - i \gamma_{2i+1}$$ These anticommute, and obey the usual fermionic raising and lowering operator algebra, so you can define a 0-dimensional fermionic system whose state space consists of the spin states. The state space the gamma matrices act on can be labelled starting with the spin state $|0\rangle$, which is annihilated by all the lowering operators, and the other states are found by applying raising operators to $|0\rangle$. Then the Dirac Hamiltonian is automatically a Hamiltonian defined on a system consisting of a particle at position $x$ and a fermionic variable ranging over the finite-dimensional state space of spin states. This post imported from StackExchange Physics at 2015-03-30 13:54 (UTC), posted by SE-user Ron Maimon
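The construction can be made concrete with a small numpy sketch (my own tensor-product, Jordan-Wigner-style representation of the fermionic operators, not from the post): build lowering operators $c_i$ with $\{c_i, c_j^\dagger\} = \delta_{ij}$, set $\gamma_{2i} = c_i + c_i^\dagger$ and $\gamma_{2i+1} = i(c_i - c_i^\dagger)$, and verify the Euclidean Clifford algebra:

```python
import numpy as np

# Build 2n Euclidean gamma matrices from fermionic raising/lowering
# operators.  Jordan-Wigner: c_i = Z x ... x Z x s^- x I x ... x I,
# with a string of Z's enforcing anticommutation between sites.
I2 = np.eye(2)
Z = np.diag([1.0, -1.0])
sm = np.array([[0.0, 1.0], [0.0, 0.0]])  # s^-, annihilates |0> = (1, 0)

def lowering_ops(n):
    ops = []
    for i in range(n):
        factors = [Z] * i + [sm] + [I2] * (n - i - 1)
        op = factors[0]
        for f in factors[1:]:
            op = np.kron(op, f)
        ops.append(op)
    return ops

def gammas(n):
    gs = []
    for c in lowering_ops(n):
        gs.append(c + c.conj().T)         # gamma_{2i}
        gs.append(1j * (c - c.conj().T))  # gamma_{2i+1}
    return gs

# Check the Clifford algebra {gamma_mu, gamma_nu} = 2 delta_{mu nu}.
n = 2
gs = gammas(n)
dim = 2 ** n
for mu, g1 in enumerate(gs):
    for nu, g2 in enumerate(gs):
        anti = g1 @ g2 + g2 @ g1
        expected = 2 * np.eye(dim) if mu == nu else np.zeros((dim, dim))
        assert np.allclose(anti, expected)
print("Clifford algebra verified for", 2 * n, "gamma matrices")
```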
I have a data set where many of the actual values are zero, so I can't use MAPE. It's not a time series, so I can't use MASE à la our very own Rob Hyndman. Is there another alternative to MAPE that I could use?

MASE is suitable for non-time-series data also. See my textbook: https://www.otexts.org/fpp/2/5 In that case, you can scale the data using the mean as the base forecast. So if $e_j$ denotes a prediction error on the test data, then the scaled errors are $$ q_{j} = \frac{e_{j}}{\frac{1}{N}\sum_{i=1}^N |y_i-\bar{y}|} $$ where $y_1,\dots,y_N$ denotes the training data.
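The scaling formula above is a one-liner to implement; zeros in the actuals cause no trouble because the denominator is the training data's mean absolute deviation, not any individual $y_i$ (a sketch; the function name is mine):

```python
# Scaled errors with the in-sample mean as the base forecast
# (a MASE-style denominator for non-time-series data).
def scaled_errors(train, errors):
    """Scale test-set prediction errors by the mean absolute deviation
    of the training data around its mean."""
    mean = sum(train) / len(train)
    mad = sum(abs(y - mean) for y in train) / len(train)
    return [e / mad for e in errors]

train = [0, 0, 2, 4, 0, 6]       # zeros are fine: no division by any y_i
print(scaled_errors(train, [1.0, -2.0]))  # -> [0.5, -1.0]
```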
The Latin Modern Math and TeX Gyre Math support projects are complete. Together, these projects provide a total of 5 fonts for typesetting mathematics, all produced by the GUST e-foundry. Details can be found here. That page lists an additional six fonts, produced by other foundries, which support the mathematics opentype extension, including 3 available from CTAN: Asana Math; Stix Math (v. 2 expected 2016-12-31, according to barbara beeton); Xits Math (contains a bold version); and 3 proprietary: Cambria Math; Lucida Math (contains a demi bold version); Minion Math by Johannes Küster (contains a bold version). GUST also provide this comparison document which covers all of the above fonts except for Minion Math. That information is dated May 2014. What about new fonts? How can you evaluate their suitability? I wrote this answer a while back to explain why a particular font combination did not work with unicode-math. In that answer, I compared that font combination with Latin Modern Math, which provides proper support. You can use my answer to assess the suitability of other fonts for use with unicode-math, since I explain, basically, what a font needs in order to support typesetting maths using the interface this package provides. (The answer is: quite a lot.) EDIT Here's a mini-demo of the 5 maths fonts provided by GUST: \documentclass{article} \usepackage{amsmath} \usepackage{unicode-math} \NewDocumentCommand \testme { o }{% \IfValueT{#1}{% \setmainfont{TeX Gyre #1}% \setmathfont{TeX Gyre #1 Math}% TeX Gyre #1 \& TeX Gyre #1 Math: }% \begin{align*} p(D_k/T) & = \frac{p(T/D_k) p(D_k)}{p(T)} \\ & = \frac{p(T/D_k) p(D_k)}{ \sum_{i=1}^n p(T/D_i) p(D_i)} & \text{where } \sum_{i=1}^n p(D_i) = 1 \\ \end{align*}% } \begin{document} Latin Modern Roman \& Latin Modern Math: \testme \testme[Bonum] \testme[Pagella] \testme[Schola] \testme[Termes] \end{document}
I am looking at the Harvard notes for complex analysis, and I do not follow how they arrive at the circled step. EDIT: Can someone also show me how to get to the last line? I am a bit confused about how the $\text{sgn}(b)$ emerges there.

You have $$\begin{align} x^2+y^2 &= \sqrt{a^2+b^2} \tag{1}\\ x^2-y^2 &= a\tag{2} \end{align}$$ where (2) is one of the two original equations of the system, and (1) is the new one they arrived at. Summing (1) and (2) and dividing by two, you get $$ x^2 = \frac{1}{2}\left(a+\sqrt{a^2+b^2}\right). $$ Subtracting (2) from (1) and dividing by two, you get $$ y^2 = \frac{1}{2}\left(-a+\sqrt{a^2+b^2}\right). $$

Earlier it is written that $$x^2-y^2=a.$$ So since $$x^2+y^2=\sqrt{a^2+b^2},$$ it follows that $$2x^2=a+\sqrt{a^2+b^2}.$$ Hope that clears things up.

I didn't see where your question concerning the appearance of $\text{sgn}(b)$ was addressed. The signs of the two solutions $x = \pm\ldots$ and $y = \pm\ldots$ could in principle be chosen independently, BUT $2xy = b$, so when $b<0$ you must choose either $(x>0, y<0)$ or $(x<0, y>0)$. The factor of $\text{sgn}(b)$ takes care of that. With it, the two solutions are properly given by the overall $\pm$ factor in front of the expression for $x+iy$.

The square roots are positive; let me denote them as $|x|$ and $|y|$. $x$ and $y$ can be positive or negative independently, so there are 4 possible sign combinations, which we can group as $\pm(|x|+i|y|)$ when $(x>0, y>0)$ or $(x<0, y<0)$, and $\pm(|x|-i|y|)$ when $(x>0, y<0)$ or $(x<0, y>0)$. In your original problem statement you have that $2xy = b$. So when $x$ and $y$ have the same sign, $b>0$, and when they have opposite signs, $b<0$. So the solutions consistent with $2xy=b$ can be written more compactly as $\pm\left(|x| + i\,\text{sgn}(b)\,|y|\right)$.
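The closed form, including the $\text{sgn}(b)$ factor, can be checked against the library square root (a sketch; the function name is mine, and `copysign` supplies the sign of $b$):

```python
import cmath
import math

# Square root of a + bi via the closed form derived above:
#   x = sqrt((sqrt(a^2+b^2) + a) / 2),
#   y = sgn(b) * sqrt((sqrt(a^2+b^2) - a) / 2).
def complex_sqrt(a, b):
    r = math.hypot(a, b)                            # sqrt(a^2 + b^2)
    x = math.sqrt((r + a) / 2)
    y = math.copysign(math.sqrt((r - a) / 2), b)    # sgn(b) enters here
    return complex(x, y)

# Compare with the library's principal square root at a few points.
for z in [3 + 4j, 1 - 1j, -2 + 0.5j]:
    assert cmath.isclose(complex_sqrt(z.real, z.imag), cmath.sqrt(z))
```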
Here are a couple quick and dirty ways to count these operators: Compute the conformal block expansion of the four-point function $\langle \phi\phi\phi\phi\rangle$. This will only contain blocks with $\Delta-\ell=d-2$. This is done in http://arxiv.org/abs/1009.5985, equation 64. Compute the character of the conformal group acting on operators in the theory. By decomposing this character into characters of irreducible representations of the conformal group, you can read off the conformal primaries. See http://arxiv.org/abs/hep-th/0508031 for an introduction to conformal characters. However, the free theory is simple enough that we can just do the analysis from scratch. An operator in the free theory is built from a string of derivatives $\partial_\mu$ and $\phi$'s. It's easy to see that the only operators appearing in the $\phi\times\phi$ OPE have $\phi$ number 0 (the unit operator) or 2. The case of $\phi$ number 2 is the most interesting. These are operators of the form $\partial\dots\partial\phi\partial\dots\partial\phi$. Recall that any operator of the form $\partial_\mu \mathcal{O}$ (where $\mathcal{O}$ is any operator) is a descendant. Let us consider the space of all operators modulo descendant operators. Equivalence classes in this space will be in 1 to 1 correspondence with conformal primaries. In this quotient space, we have $\partial(A B) = \partial A B + A\partial B \sim 0$ (where $\sim$ means "is equivalent to"). Using this relation, we can move derivatives from one $\phi$ to the other, modulo descendants. Let us put all the derivatives on the right-hand $\phi$. We now have operators of the form $\phi \partial_{\mu_1}\cdots\partial_{\mu_\ell}\partial^{2n}\phi$ However, the equation of motion says that $\partial^2 \phi=0$, so we're left with $\phi\partial_{\mu_1}\cdots\partial_{\mu_\ell}\phi$ We're not quite done. The above operator could be equivalent to a primary modulo descendants, or it could be a descendant itself. 
It turns out that when $\ell$ is odd, it is a pure descendant (homework exercise!). When $\ell$ is even, there is a primary in the same equivalence class. To find it, we must solve the equation $K_\mu \mathcal{O}=0$, where $K_\mu$ is the special conformal generator. The solutions are $\phi \partial^\leftrightarrow_{\mu_1}\cdots\partial^\leftrightarrow_{\mu_\ell}\phi$ where $A\partial^\leftrightarrow_\mu B=\partial_\mu A B - A\partial_\mu B$. You can see that this is indeed equivalent to the above modulo descendants when $\ell$ is even. Finally, note that the above operator is traceless by the equations of motion, so it transforms in a spin-$\ell$ representation of the rotation group. It has $\Delta-\ell=2\Delta_\phi=d-2$ (where $d$ is the spacetime dimension), so it satisfies the unitarity bound for $\ell>0$. This post imported from StackExchange Physics at 2015-04-17 12:32 (UTC), posted by SE-user davidsd
$L$ isn't recognizable. We'll first establish a couple of preliminary results.

I. $\overline{L}$ is recognizable

The complement of $L$, $$\overline{L}=\{\langle M\rangle\mid M \text{ halts on at least one input}\},$$ is recognizable. Define a recognizer TM as follows:

R(<M>) =
  for n = 1, 2, 3, ...
    for each x in {x_1, x_2, ..., x_n}   // in some canonical order
      run M on x for n moves
      if M halts
        return accept

It should be clear that $R$ accepts all and only those $\langle M \rangle$ for which $\langle M \rangle\in \overline{L}$, and so $\overline{L}$ is recognizable. Now if $L$ were also recognizable, then we could use the two recognizers to make a decider for $L$, which brings us to our second result.

II. $L$ is undecidable

If $L$ were decidable, then $\overline{L}$ would also be, and conversely. If that were the case, we could define a reduction from the known undecidable language $$HALT = \{(\langle M\rangle, w) \mid M \text{ halts on input }w\}$$ to $\overline{L}$ by the mapping $$(\langle M\rangle, w)\rightarrow M_w$$ where, as babou has already noted,

M_w(y) =
  erase the input y
  write w on the input tape
  simulate M on w

Now observe that $M$ halts on $w$ $\Longleftrightarrow$ $M_w$ halts (on every input $y$, in fact) $\Longleftrightarrow$ $M_w\in \overline{L}$. In summary, if $L$ were decidable, then we could use $L$'s decider (reversing the roles of accept and reject) to decide the halting problem, a contradiction.

Now we can show that $L$ is not recognizable. If it were, then, using (I), we could conclude that $L$ was decidable, a contradiction to result (II).
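The dovetailing loop in $R$ can be sketched with Python generators standing in for single-stepped Turing machine runs (a toy model of my own, not real TM simulation; each generator persists between rounds, and each round gives every active input $n$ more steps):

```python
from itertools import count

# Dovetailing: interleave runs of M on inputs x_1, x_2, ... so that a
# machine halting on *some* input is eventually detected.  Generators
# stand in for single-stepped Turing machine runs (a toy model).
def dovetail(make_run):
    """make_run(x) returns a generator that yields once per machine step
    and stops (StopIteration) when the machine halts on input x.
    Returns the first input on which a halt is observed; loops forever
    if the machine halts on no input -- just like the recognizer R."""
    runs = {}
    for n in count(1):
        for x in range(1, n + 1):          # inputs x_1 .. x_n
            gen = runs.setdefault(x, make_run(x))
            for _ in range(n):             # run n more steps
                try:
                    next(gen)
                except StopIteration:
                    return x               # M halted on x: accept

# Toy "machine": loops forever on every input except x = 3,
# where it halts after 7 steps.
def toy_run(x):
    steps = 7 if x == 3 else None
    i = 0
    while steps is None or i < steps:
        yield
        i += 1

print(dovetail(toy_run))  # -> 3
```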
Your score is simply the sum of difficulties of your solved problems. Solving the same problem twice does not give any extra points. Note that Kattis' difficulty estimates vary over time, and that this can cause your score to go up or down without you doing anything. Scores are only updated every few minutes – your score and rank will not increase instantaneously after you have solved a problem, you have to wait a short while. If you have set your account to be anonymous, you will not be shown in ranklists, and your score will not contribute to the combined score of your country or university. Your user profile will show a tentative rank which is the rank you would get if you turned off anonymous mode (assuming no anonymous users with a higher score than you do the same). The combined score for a group of people (e.g., all users from a given country or university) is computed as a weighted average of the scores of the individual users, with geometrically decreasing weights (higher weights given to the larger scores). Suppose the group contains $n$ people, and that their scores, ordered in non-increasing order, are $s_0 \ge s_1 \ge \ldots \ge s_{n-1}$ Then the combined score for this group of people is calculated as \[ S = \frac{1}{f} \sum_{i=0}^{n-1} \left(1-\frac{1}{f}\right)^i \cdot s_i, \] where the parameter $f$ gives a trade-off between the contribution from having a few high scores and the contribution from having many users. In Kattis, the value of this parameter is chosen to be $f = 5$. For example, if the group consists of a single user, the score for the group is 20% of the score of that user. If the group consists of a very large number of users, about 90% of the score is contributed by the 10 highest scores. Adding a new user with a non-zero score to a group always increases the combined score of the group. Kattis has problems of varying difficulty. She estimates the difficulty for different problems by using a variant of the ELO rating system. 
Broadly speaking, problems which are solved by many people using few submissions get low difficulty scores, and problems which are often attempted but rarely solved get high difficulty scores. Problems with very few submissions tend to get medium difficulty scores, since Kattis does not have enough data about their difficulty. The difficulty estimation process also assigns an ELO-style rating to you as a user. This rating increases when you solve problems, like your regular score, but is also affected by your submission accuracy. We use your rating to choose which problems to suggest for you to solve. If your rating is higher, the problems we suggest to you in each category (trivial, easy, medium, hard) will have higher difficulty values.
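The combined-score formula with its geometrically decreasing weights is short enough to sketch directly (the function name is mine; $f = 5$ as stated above):

```python
# Kattis-style combined score: weighted average of the sorted scores
# with geometrically decreasing weights, parameter f = 5.
def combined_score(scores, f=5):
    s = sorted(scores, reverse=True)  # non-increasing order
    return sum((1 - 1 / f) ** i * si for i, si in enumerate(s)) / f

# A single user contributes 20% of their score to the group...
print(combined_score([100.0]))        # -> 20.0
# ...and adding a user with a non-zero score always increases the total.
print(combined_score([100.0, 50.0]))  # -> 28.0
```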
So I'm trying to take the derivative of $\frac {\sin(x^2)}{3x}$. Here are my steps: $$\frac {d}{dx}\left[\frac {\sin(x^2)}{3x}\right]$$ Use the Quotient Rule: $$\frac {3x\frac {d}{dx}[\sin(x^2)]-\sin(x^2)\frac {d}{dx}[3x]}{3x^2}$$ Simplify second part: $$\frac {3x\frac {d}{dx} [\sin(x^2)]-3\sin(x^2)}{3x^2}$$ Use Chain Rule on first part: $$\frac {d}{dx} [\sin(x^2)]= 2x(\cos(x^2))$$ Plug it into the numerator: $$\frac {3x(2x)\cos(x^2)-3\sin(x^2)}{3x^2}$$ Simplify a little / Result: $$\frac {6x^2\cos(x^2)-3\sin(x^2)}{3x^2}$$ Right answer: $$\frac {2x^2\cos(x^2)-\sin(x^2)}{3x^2}$$ So I messed something up, but I tried to redo the process multiple times and couldn't figure it out. Thanks in advance for any help!
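The slip is in the quotient-rule denominator: it should be $(3x)^2 = 9x^2$, not $3x^2$; cancelling a common factor of 3 then yields the quoted right answer. That answer can be confirmed with a quick finite-difference check (a sketch; the function names are mine):

```python
import math

# Numerical check of d/dx [sin(x^2) / (3x)] against the corrected answer.
# The slip in the post: the quotient-rule denominator is (3x)^2 = 9x^2,
# not 3x^2; cancelling a 3 gives the "right answer" below.
def f(x):
    return math.sin(x**2) / (3 * x)

def right_answer(x):
    return (2 * x**2 * math.cos(x**2) - math.sin(x**2)) / (3 * x**2)

h = 1e-6
for x in [0.7, 1.3, 2.1]:
    numeric = (f(x + h) - f(x - h)) / (2 * h)  # central difference
    assert abs(numeric - right_answer(x)) < 1e-6
```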
Abstracts We consider space-bounded computations on a random-access machine (RAM) where the input is given on a read-only random-access medium, the output is to be produced to a write-only sequential-access medium, and the available workspace allows random reads and writes but is of limited capacity. The length of the input is $N$ elements, the length of the output is limited by the computation, and the capacity of the workspace is $O(S)$ bits for some predetermined parameter $S$. We present a state-of-the-art priority queue---called an adjustable navigation pile---for this restricted RAM model. Under some reasonable assumptions, our priority queue supports $\mathit{minimum}$ and $\mathit{insert}$ in $O(1)$ worst-case time and $\mathit{extract}$ in $O(N/S + \lg S)$ worst-case time for any $S \geq \lg N$. (We use $\lg x$ as a shorthand for $\log_2(\max\{2, x\})$.) We show how to use this data structure to sort $N$ elements and to compute the convex hull of $N$ points in the two-dimensional Euclidean space in $O(N^2/S + N \lg S)$ worst-case time for any $S \geq \lg N$. Following a known lower bound for the space-time product of any branching program for finding unique elements, both our sorting and convex-hull algorithms are optimal. The adjustable navigation pile has turned out to be useful when designing other space-efficient algorithms, and we expect that it will find its way to yet other applications. A number of tasks in classification, information retrieval, recommendation systems, and record linkage reduce to the core problem of inner product similarity join (IPS join): identifying pairs of vectors in a collection that have a sufficiently large inner product. IPS join is well understood when vectors are normalized and some approximation of inner products is allowed. However, the general case where vectors may have any length appears much more challenging.
Recently, new upper bounds based on asymmetric locality-sensitive hashing (ALSH) and asymmetric embeddings have emerged, but little has been known on the lower bound side. In this paper we initiate a systematic study of inner product similarity join, showing new lower and upper bounds. Our main results are: * Approximation hardness of IPS join in subquadratic time, assuming the strong exponential time hypothesis. * New upper and lower bounds for (A)LSH-based algorithms. In particular, we show that asymmetry can be avoided by relaxing the LSH definition to only consider the collision probability of distinct elements. * A new indexing method for IPS based on linear sketches, implying that our hardness results are not far from being tight. Our technical contributions include new asymmetric embeddings that may be of independent interest. At the conceptual level we strive to provide greater clarity, for example by distinguishing among signed and unsigned variants of IPS join and shedding new light on the effect of asymmetry. Given a string $S$ of length $N$ on a fixed alphabet of $\sigma$ symbols, a grammar compressor produces a context-free grammar $G$ of size $n$ that generates $S$ and only $S$. In this paper we describe data structures to support the following operations on a grammar-compressed string: $\mathit{access}(S,i,j)$ (return substring $S[i,j]$), $\mathit{rank}_c(S,i)$ (return the number of occurrences of symbol $c$ before position $i$ in $S$), and $\mathit{select}_c(S,i)$ (return the position of the $i$th occurrence of $c$ in $S$). Our main result for $\mathit{access}$ is a method that requires $O(n\log N)$ bits of space and $O(\log N+m/\log_\sigma N)$ time to extract $m=j-i+1$ consecutive symbols from $S$. Alternatively, we can achieve $O(\log_\tau N+m/\log_\sigma N)$ query time using $O(n\tau\log_\tau (N/n)\log N)$ bits of space, matching a lower bound stated by Verbin and Yu for strings where $N$ is polynomially related to $n$ when $\tau=\log^\epsilon N$.
For $\mathit{rank}$ and $\mathit{select}$ we describe data structures of size $O(n\sigma\log N)$ bits that support the two operations in $O(\log N)$ time. We also extend our other structure to support both operations in $O(\log_\tau N)$ time using $O(n\tau\sigma\log_\tau (N/n)\log N)$ bits of space. When $\tau=\log^\epsilon N$ the query time is $O(\log N/\log\log N)$ and we provide a hardness result showing that significantly improving this would imply a major breakthrough on a hard graph-theoretical problem. Some of the most efficient heuristics for the Euclidean Steiner minimal tree problem in $d$-dimensional space, $d \geq 2$, use Delaunay tessellations and minimum spanning trees to determine subsets of up to $d + 1$ geometrically close terminals. Their Steiner minimal trees are then determined and concatenated in a greedy fashion. The weakness of this approach is that the produced solutions are closely related to minimum spanning trees. To avoid this and to obtain even better solutions, we utilize bottleneck distances to determine good subsets of terminals without being constrained by minimum spanning trees. Computational experiments show a significant improvement in quality when using bottleneck distances instead of minimum spanning trees. Problem instances with up to 90000 terminals in $\mathbb{R}^3$, 5300 terminals in $\mathbb{R}^4$, and 500 terminals in $\mathbb{R}^5$ can be solved within a minute. Joint work with Pawel Winter Graphics Processing Units (GPUs) have emerged as a powerful platform for general purpose computations due to their massive hardware parallelism. However, there is very little understanding from the theoretical perspective of what makes various parallel algorithms fast on GPUs. In this talk I will review recent advances in modeling GPUs from the algorithmic perspective and will present our recent algorithmic results, identifying some non-trivial and somewhat unexpected open problems. We show how to list the homotopy types of paths in a polygonal domain in the order of increasing length.
We also present the k-th shortest path map -- a data structure to answer efficiently queries for length of the k shortest homotopically different paths from a given starting point to the query point. Joint work with S. Eriksson-Bique, J. Hershberger, B. Speckmann, S. Suri, T. Talvitie, K. Verbeek, H. Yıldız, presented at SODA 2015 Christian Wulff-Nilsen, "Approximate Distance Oracles for Planar Graphs with Improved Query Time-Space Tradeoff" We consider approximate distance oracles for edge-weighted n-vertex undirected planar graphs. Given fixed epsilon > 0, we present a (1+epsilon)-approximate distance oracle with O(n(log log n)^2) space and O((log log n)^3) query time. This improves the previous best product of query time and space of the oracles of Thorup (FOCS 2001, J.ACM 2004) and Klein (SODA 2002) from O(n log n) to O(n(log log n)^5). Riko Jacob, "On the Complexity of List Ranking in the Parallel External Memory Model" - joint work with Tobias Lieber and Nodari Sitchinava This work has already been presented at MFCS 2014 and MASSIVE 2015. We study the problem of {\em list ranking} in the parallel external memory (PEM) model. We observe an interesting dual nature for the hardness of the problem due to limited information exchange among the processors about the structure of the list, on the one hand, and its close relationship to the problem of permuting data, which is known to be hard for the external memory models, on the other hand. By carefully defining the power of the computational model, we prove a permuting lower bound in the PEM model. Furthermore, we present a stronger $\Omega(\log^2 N)$ lower bound for a special variant of the problem and for a specific range of the model parameters, which takes us a step closer toward proving a non-trivial lower bound for the list ranking problem in the bulk-synchronous parallel (BSP) and MapReduce models.
Finally, we also present an algorithm that is tight for a larger range of parameters of the model than in prior work.
This is more of an extended comment, rather than an answer completely separate from what Urs and Moshe have already said. The axioms of AQFT are designed to capture a mathematical model of the physical observables of a theory, while OTOH gauge invariance is a feature of a formulation of a theory, though perhaps an especially convenient one. Yours and related questions are somewhat muddied by the fact that one physical theory may have several equivalent, but distinct formulations, which may also have different gauge symmetries. One example of this phenomenon is gravity, consider the metric and frame-field formulations, and another one according to Moshe is Seiberg duality. Another confounding factor is that some physical theories are only known in a formulation involving gauge symmetries (automatically rendering such formulations "especially convenient"), which naturally leads to your second question. However, one must remember that by design the gauge formulation should be visible in the AQFT framework only if it is detectable through physical observables. Now, to be honest, I really have no idea about what is the state of the art in AQFT of figuring out when a given net of local algebras of observables admits an "especially convenient" formulation involving gauge symmetry. But I believe answering this kind of question will remain difficult until the notion of "especially convenient" is made mathematically precise. I don't know how much progress has been made on that front either. But I think a prototype of this kind of question can be analyzed, though somewhat sketchily, in the simplified case of classical electrodynamics. Suppose we are given a local net of Poisson algebras of physical observables (the quantum counterpart would have *-algebras, but other than that, the geometry of the theory is very similar). 
The first step is to somehow recognize that this net of algebras is generated by polynomials in smeared fields, $\int f(F,x) g(x)$, where $g(x)$ is a test volume form, and $f(F,x)$ is some function of $F$ and its derivatives at $x$, with $F(x)$ a 2-form satisfying Maxwell's equations $dF=0$ and $d({*}F)=0$. Since we were handed the net of algebras with a given Poisson structure, as a second step we can compute the Poisson bracket $\{F(x),F(y)\}=(\cdots)$. The answer for electrodynamics would be the well known Pauli-Jordan / Lichnerowicz / causal propagator, which I will not reproduce here. Very roughly speaking, the components of $F(x)$ and the expression for the Pauli-Jordan propagator give a set of local "coordinates" on the phase space of the theory and an expression for the Poisson tensor on it. In the third step we can compute the inverse of the Poisson tensor, which if it exists would be a symplectic form. The answer for electrodynamics is well known and what's important is that the symplectic form is not given by some local expression like $\Omega(\delta F_1,\delta F_2) = \int \omega(\delta F_1, \delta F_2, x)$, where $\omega$ is a form depending only on the values of $\delta F_{1,2}(x)$ and their derivatives at $x$. Step four would consist of asking the question whether there is another choice of local "coordinates" on the phase space in which the symplectic form is local. The answer is again well known: extend the phase space by introducing the 1-form field $A(x)$ such that $F=dA$. The pullback of the symplectic form to the extended phase space now has a local expression $\Omega(\delta A_1,\delta A_2) = \int_\Sigma [{*}d(\delta A_1)(x)\wedge (\delta A_2)(x) - (1\leftrightarrow 2)]$, up to some constant factors, with $\Sigma$ some Cauchy surface. Note that $\Omega$ is no longer symplectic on the extended phase space, but only presymplectic, while its projection back to the physical phase space is. 
As a last step, one might try to solve the inverse problem of the calculus of variations and come up with a local action principle reproducing the equations of motion for $A$ and the presymplectic structure $\Omega$. Let me summarize. (1) Obtain fundamental local fields and their equations of motion. (2) Express the Poisson tensor and symplectic form in terms of local fields. (3) Introduce new fields to make the expression for the (pre)symplectic form local. (4) Obtain a local action principle in the new fields. Note that gauge symmetry and all the issues associated with it appear precisely in step (3). In my limited understanding of it, the literature on AQFT has spent a significant amount of time on step (1), but perhaps not enough time on steps (2) and (3) even to formulate these problems precisely. Finally, I should emphasize that the idea that redundant gauge degrees of freedom are introduced principally to give local structure to the (pre)symplectic structure on phase space is somewhat speculative. But it seems to fit the field theories I am familiar with, and I've not been able to identify a different yet equally competitive one.
[Questions about Machine Learning] Chapter I Mathematics Fundamentals

In this chapter, we will discuss some basic mathematics knowledge that you need to know for further study.

Q. What are the relations between scalar, vector, matrix, and tensor?

A. A vector is an ordered finite list of numbers. Vectors are usually written as vertical arrays, surrounded by square or curved brackets, as seen below. $$\begin{pmatrix}-1.1 \\ 0.0 \\ 3.6 \\ -7.2 \end{pmatrix} \quad\text{or}\quad \begin{bmatrix}-1.1 \\ 0.0 \\ 3.6 \\ -7.2 \end{bmatrix}$$ Sometimes they are written as numbers separated by commas and surrounded by parentheses, as seen below. $$(-1.1, 0.0, 3.6, -7.2)$$ A vector is often denoted by a lowercase symbol such as $a$. We can access an element (also known as an entry, coefficient, or component) of a vector by its index: the $i$th element of the vector $a$ is denoted $a_i$, where the subscript $i$ is an integer index into the vector ($1 \le i \le n$ for a vector of size $n$). If two vectors have the same size and, more importantly, each pair of corresponding entries is equal, then the two vectors are equal, which is denoted $a=b$. A scalar is a single number or value. In most applications, scalars are real numbers. We usually use an italic lowercase symbol to denote a scalar; for example, $\textit{a}$ is a scalar. A matrix is a rectangular array of numbers, i.e. a 2-dimensional data table. In data applications a matrix can be read as a collection of items that share the same features: each row represents an item and each column represents a feature. A matrix is usually denoted by a capital letter, $A$ for example. A tensor is an array with more than 2 dimensions. Generally, if the elements of an array are distributed in a regular grid with several dimensions, we call it a tensor. As with matrices, we use a capital letter to denote a tensor, $A$ for example. An element of a tensor is denoted $A_{i,j,k}$. Relations between them: a scalar is a 0-dimensional tensor.
A vector is a 1-dimensional tensor, and a matrix is a 2-dimensional tensor. For example, with a scalar we could record the length of a rod, but we cannot know the direction of the rod. With a vector we could record both the length and the direction of the rod. With a tensor we may know both the length and direction of the rod, and we could even know more about it (for example, its degree of deflection).

Q. What are the differences between tensor and matrix?

A. From the algebraic point of view, a matrix is a generalization of a vector: a matrix is a 2-dimensional table, and an $n$-dimensional tensor is, loosely speaking, an $n$-dimensional table. Note that this is not a strict definition of a tensor. From the geometric point of view, a tensor is a geometric quantity: it does not change under coordinate transformations of the frame of reference, and a vector has this feature too. A rank-2 tensor can be represented by a $3\times3$ (or, more generally, an $n\times n$) matrix. A scalar can be regarded as a $1\times1$ matrix, while a vector with $n$ entries can be regarded as a $1\times n$ matrix.

Q. What will happen if I multiply a matrix and a vector?

A. You can only multiply an $m\times n$ matrix by an $n$-element vector, and the result is an $m$-element vector. The key is to regard each row of the matrix as a vector and take its dot product with the given vector. For example: $$\begin{bmatrix}1 & 0.0 & 3.6 & -7.2 \\ 2 & 1 & 3 & 2 \end{bmatrix} \begin{bmatrix}-1.1 \\ 0.0 \\ 3.6 \\ -7.2 \end{bmatrix} = \begin{bmatrix}63.7 \\ -5.8 \end{bmatrix}$$

Q. What is a norm?

A. In mathematics, a norm is a function that assigns a strictly positive length or size to each nonzero vector in a vector space. There are many different types of norms for a vector or a matrix. For example, the 1-norm: $\|x\|_1 = \sum_{i=1}^N |x_i|$; the 2-norm (Euclidean norm): $\|x\|_2 = \sqrt{\sum_{i=1}^N x_i^2}$; and, in general, the $p$-norm: $\|x\|_p = \left(\sum_{i=1}^N |x_i|^p\right)^{1/p}$.

Q. What are the norms of a matrix and a vector?

A. We define a vector as $\vec{x}=(x_1,x_2,\dots,x_N)$. Its $p$-norm is then $\|x\|_p = \left(\sum_{i=1}^N |x_i|^p\right)^{1/p}$.

Q. What is a positive definite matrix? Q. How do you judge whether a matrix is positive definite? Q. What is a derivative?
Q. How do you calculate derivatives? Q. What are the differences between derivatives and partial derivatives? Q. What is an eigenvalue? What is an eigenvector? What is eigenvalue decomposition? Q. What is a singular value? What is singular value decomposition? Q. What are the differences between singular values and eigenvalues, and between their decompositions? Q. What is probability? Q. What are the differences between a variable and a random variable? Q. What are the common probability distributions? Q. What is conditional probability? Q. What is a joint distribution? What is a marginal distribution? How are they related? Q. What is the chain rule for conditional probability? Q. What are independence and conditional independence? Q. What is expectation? What is variance? What is covariance? What is the correlation coefficient?
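The linear-algebra pieces of this chapter (scalar/vector/matrix/tensor dimensions, the matrix-vector product, and vector norms) can be sketched in a few lines of NumPy. NumPy is my choice here, not something the chapter prescribes, and the specific arrays are illustrative:

```python
import numpy as np

# A scalar is a 0-d tensor, a vector a 1-d tensor, a matrix a 2-d tensor.
scalar = np.array(3.5)                        # ndim == 0
vector = np.array([-1.1, 0.0, 3.6, -7.2])     # ndim == 1
matrix = np.array([[1.0, 0.0, 3.6, -7.2],
                   [2.0, 1.0, 3.0,  2.0]])    # 2 x 4, ndim == 2
tensor = np.zeros((2, 3, 4))                  # ndim == 3

# An m x n matrix times an n-element vector gives an m-element vector:
# each row of the matrix is dotted with the vector.
product = matrix @ vector                     # shape (2,)

# Common vector norms.
norm1 = np.abs(vector).sum()                  # 1-norm: sum of absolute values
norm2 = np.sqrt((vector ** 2).sum())          # 2-norm (Euclidean)
```

The `@` operator raises an error if the shapes are incompatible, which is exactly the "m x n matrix needs an n-element vector" rule stated above.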
Suppose one has the limit of a function that is badly discontinuous for $x<0$, such as $$\lim _{x\to-\infty}\left|{x}^{\frac{1}{2x}}\right|$$ which is undefined at any negative fraction with an odd denominator (such as $-2$, $-2/3$, $-3/5$). If you treat this function as a sequence over the negative integers, every term is undefined. However, evaluating at fractions with an even denominator, such as $-1/2$ and then $-3/2$, gives real values: $$ \begin{array}{c|lcr} x & \text{y} \\ \hline -\frac{1}{2} & 2.. \\ -\frac{3}{2} & .87358.. \\ -\frac{5}{2} & .83255.. \\ -\frac{7}{2} & .83613..\\ -\frac{10001}{2} & .99915..\\ \end{array} $$ I did find the identity $\lim _{x\to-\infty}\left|{x}^{\frac{1}{2x}}\right|=\lim_{x\to-\infty}\left|{\left(-x\right)}^{\frac{1}{2x}}\right|$, so one can apply l'Hôpital's rule: $$\lim_{x\to-\infty}\left|\pm{\left(-x\right)}^{\frac{1}{2x}}\right|$$ $$\lim_{x\to-\infty}\left|{\pm}e^{\frac{\ln{\left(-x\right)}}{{{2x}}}}\right|$$ $$\lim_{x\to-\infty}\left|{\pm}e^{\frac{\frac{1}{x}}{{{2}}}}\right|$$ $$\lim_{x\to-\infty}\left|{\pm}e^{0}\right|$$ $$\lim_{x\to-\infty}\left|{\pm}e^{0}\right|=1$$ Has any analysis been done in this area that I could read up on? Is this how to solve these kinds of limits?
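The table and the l'Hôpital computation above can be cross-checked numerically. For $x<0$, the identity used above gives $|x^{1/(2x)}| = e^{\ln(-x)/(2x)}$; a quick sketch:

```python
import math

def f(x):
    """|x^(1/(2x))| for x < 0, via the identity |x^(1/(2x))| = exp(ln(-x) / (2x))."""
    assert x < 0
    return math.exp(math.log(-x) / (2 * x))

# The half-integer sample points from the table above.
values = {x: f(x) for x in (-0.5, -1.5, -2.5, -3.5, -10001/2)}
```

Pushing the argument far out (e.g. `f(-1e9)`) lands very close to 1, consistent with the limit computed by l'Hôpital's rule.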
Interested in the following function: $$ \Psi(s)=\sum_{n=2}^\infty \frac{1}{\pi(n)^s}, $$ where $\pi(n)$ is the prime counting function. When $s=2$ the sum becomes the following: $$ \Psi(2)=\sum_{n=2}^\infty \frac{1}{\pi(n)^2}=1+\frac{1}{2^2}+\frac{1}{2^2}+\frac{1}{3^2}+\frac{1}{3^2}+\frac{1... Consider a random binary string where each bit can be set to 1 with probability $p$. Let $Z[x,y]$ denote the number of arrangements of a binary string of length $x$ such that the $x$-th bit is set to 1, $y$ bits in total are set to 1 (including the $x$-th bit), and there are no runs of $k$ consecutive zer... The field $\overline F$ is called an algebraic closure of $F$ if $\overline F$ is algebraic over $F$ and if every polynomial $f(x)\in F[x]$ splits completely over $\overline F$. Why, in the definition of algebraic closure, do we need $\overline F$ to be algebraic over $F$? That is, if we remove the condition '$\overline F$ is algebraic over $F$' from the definition, do we get a different result? Consider an observer located at radius $r_o$ from a Schwarzschild black hole of radius $r_s$. The observer may be inside the event horizon ($r_o < r_s$). Suppose the observer receives a light ray from a direction which is at angle $\alpha$ with respect to the radial direction, which points outwa... @AlessandroCodenotti That is a poor example, as the algebraic closure of the latter is just $\mathbb{C}$ again (assuming choice). But starting with $\overline{\mathbb{Q}}$ instead and comparing to $\mathbb{C}$ works. Seems like everyone is posting character formulas for simple modules of algebraic groups in positive characteristic on arXiv these days. At least 3 papers with that theme the past 2 months. Also, I have a definition that says that a ring is a UFD if every element can be written as a product of irreducibles which is unique up to units and reordering. It doesn't say anything about this factorization being finite in length.
Is that often part of the definition, or obtained from the definition (I don't see how it could be the latter)? Well, that then becomes a chicken-and-egg question. Did we have the reals first and simplify from them to more abstract concepts, or did we have the abstract concepts first and build them up to the idea of the reals? I've been told that the rational numbers from zero to one form a countable infinity, while the irrational ones form an uncountable infinity, which is in some sense "larger". But how could that be? There is always a rational between two irrationals, and always an irrational between two rationals, ... I was watching this lecture, and in reference to above screenshot, the professor there says: $\frac1{1+x^2}$ has a singularity at $i$ and at $-i$, and power series expansions are limits of polynomials, and limits of polynomials can never give us a singularity and then keep going on the other side. On page 149 Hatcher introduces the Mayer-Vietoris sequence, along with two maps $\Phi : H_n(A \cap B) \to H_n(A) \oplus H_n(B)$ and $\Psi : H_n(A) \oplus H_n(B) \to H_n(X)$. I've searched through the book, but I couldn't find the definitions of these two maps. Does anyone know how to define them or where their definition appears in Hatcher's book? Suppose $\sum a_n z_0^n = L$, so $a_n z_0^n \to 0$, so $|a_n z_0^n| < \dfrac12$ for sufficiently large $n$, so $|a_n z^n| = |a_n z_0^n| \left(\left|\dfrac{z}{z_0}\right|\right)^n < \dfrac12 \left(\left|\dfrac{z}{z_0}\right|\right)^n$, so for $|z| < |z_0|$ the sequence $a_n z^n$ is absolutely summable, so $a_n z^n$ is summable. Let $g : [0,\frac{ 1} {2} ] → \mathbb R$ be a continuous function. Define $g_n : [0,\frac{ 1} {2} ] → \mathbb R$ by $g_1 = g$ and $g_{n+1}(t) = \int_0^t g_n(s) ds,$ for all $n ≥ 1.$ Show that $\lim_{n\to\infty} n!g_n(t) = 0,$ for all $t ∈ [0,\frac{1}{2}]$. Can you give a hint? My attempt: for $t\in [0,1/2]$, consider the sequence $a_n(t)=n!g_n(t)$. If $\lim_{n\to \infty} \frac{a_{n+1}}{a_n}<1$, then it converges to zero.
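For the iterated-integral question above ($g_{n+1}(t)=\int_0^t g_n(s)\,ds$), the ratio-test attempt can be replaced by a direct bound; a sketch, writing $M=\sup_{[0,1/2]}|g|$:

```latex
% Induction on n: |g_1(t)| \le M, and if |g_n(t)| \le M t^{n-1}/(n-1)!, then
% |g_{n+1}(t)| \le \int_0^t |g_n(s)|\,ds \le M t^{n}/n!.
% Hence for t \in [0, 1/2]:
\bigl|n!\, g_n(t)\bigr| \;\le\; n!\,\frac{M\, t^{\,n-1}}{(n-1)!}
\;=\; M\, n\, t^{\,n-1}
\;\le\; M\, n \left(\tfrac12\right)^{n-1} \;\longrightarrow\; 0
\quad\text{as } n\to\infty .
```

The interval endpoint $1/2$ matters: $n t^{n-1}$ only goes to $0$ when $t<1$, which is why the statement is posed on $[0,\frac12]$.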
I have a bilinear functional that is bounded from below. I try to approximate the minimum by an ansatz function that is a linear combination of linearly independent functions from the proper function space. I then obtain an expression that is bilinear in the coefficients. Using the stationarity condition (all derivatives of the functional with respect to the coefficients equal to zero) I get a set of $n$ linear homogeneous equations in the $n$ coefficients. Now, instead of directly attempting to solve the equations for the coefficients, I rather look at the secular determinant, which must be zero, since otherwise no non-trivial solution exists. This "characteristic polynomial" directly yields all permissible approximate values of the functional from my linear ansatz, avoiding the necessity of solving for the coefficients. I have trouble formulating the question precisely, but it strikes me that a direct solution of the equations can be circumvented, and that instead the values of the functional are obtained directly by using the condition that the determinant is zero. I wonder if there is something deeper in the background, or, so to say, a more general principle. If $x$ is a prime number and a number $y$ exists which is the digit reverse of $x$ and is also a prime number, then there must exist an integer $z$ midway between $x$ and $y$ which is a palindrome and satisfies digitsum(z) = digitsum(x). > Bekanntlich hat P. du Bois-Reymond zuerst die Existenz einer überall stetigen Funktion erwiesen, deren Fouriersche Reihe an einer Stelle divergiert. Herr H. A. Schwarz gab dann ein einfacheres Beispiel. (Translation: It is well known that Paul du Bois-Reymond was the first to demonstrate the existence of an everywhere continuous function whose Fourier series diverges at a point. Afterwards, Hermann Amandus Schwarz gave a simpler example.)
It's discussed very carefully (but no formula is explicitly given) in my favorite introductory book on Fourier analysis, Körner's Fourier Analysis; see pp. 67-73. Right after that is Kolmogoroff's result that you can have an $L^1$ function whose Fourier series diverges everywhere!
I've been given a proof of convergence for a monotone increasing sequence bounded from above, and am attempting to replicate it for a decreasing sequence bounded from below (we're working in the reals). I'm hoping this is a valid proof; any improvements/criticisms would be helpful: Let $\left\{x_n \mid n \in \mathbb{N}\right\}$ be a monotone decreasing sequence of positive real numbers bounded below. By completeness of $\mathbb{R}$ the sequence has a greatest lower bound (infimum), which we will denote $x^*$. Thus $x_n \geq x^* \enspace\forall\enspace n \in \mathbb{N}$. Let $\epsilon > 0$. Then $x^* + \epsilon$ is not a lower bound of $\left\{x_n\right\}$, so $\exists\, k \in \mathbb{N}$ such that $x^* + \epsilon > x_k$. Since $\left\{x_n\right\}$ is decreasing we can say: $x^* + \epsilon > x_n \enspace\forall\enspace n \geq k$. So we have: $x^* + \epsilon > x_k \geq x_n \geq x^* > x^*-\epsilon \enspace \forall \enspace n \geq k$ $\therefore \epsilon> x_n - x^* > -\epsilon \enspace \forall \enspace n \geq k$ Which gives: $\left|x_n - x^*\right| < \epsilon \enspace\forall\enspace n \geq k$ Showing that $\left\{x_n\right\} \to x^* \blacksquare$ Thanks in advance.
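The proof above can also be watched numerically on a concrete decreasing bounded sequence. Here $x_n = 1 + 1/n$ with infimum $x^* = 1$ is an illustrative choice, not part of the proof:

```python
# x_n = 1 + 1/n is monotone decreasing and bounded below, with infimum x* = 1.
x_star = 1.0
eps = 1e-3

# The k from the proof: the first index with x_k < x* + eps,
# which exists because x* + eps is not a lower bound.
k = next(n for n in range(1, 10**7) if 1 + 1/n < x_star + eps)

# Monotonicity then gives |x_n - x*| < eps for every n >= k.
tail_ok = all(abs((1 + 1/n) - x_star) < eps for n in range(k, k + 1000))
```

For this sequence $1/n < \epsilon$ first holds at $n = 1001$, so `k == 1001` and the whole tail stays within $\epsilon$ of the infimum, exactly as the proof asserts.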
The ISO/IEC 9899:1990 edition of the C standard contains:

EXAMPLE The following functions define a portable implementation of rand and srand.

    static unsigned long int next = 1;

    int rand(void) // RAND_MAX assumed to be 32767
    {
        next = next * 1103515245 + 12345;
        return (unsigned int)(next/65536) % 32768;
    }

    void srand(unsigned int ...

You could be thinking about the Merkle-Hellman knapsack cryptosystem. It was invented in 1978 and everything seemed well and good until it was completely broken six years later in 1984 by Shamir - it was a complete and total break, i.e. the cryptosystem became unusable overnight. That said, I don't know if the knapsack cryptosystem was ever "popular" in the ... Modern encryption can be broken in practice even when the algorithms are theoretically secure. There are a variety of ways this can happen: side-channel analysis could have played a part (the accepted answer to the linked question is excellent and provides some very good examples); exploitation of implementation bugs could have played a part; malware ... These aren't "attacks" in and of themselves; they are simply a way to classify attacks depending on how many assumptions they make. For instance, if an attack requires plaintext-ciphertext pairs to recover the key, but they don't have to be any particular pairs, that attack is categorized as a known-plaintext attack. However, if another attack required the ... For example: the 5-qubit quantum computer created at MIT by using the technique of ion traps succeeded in prime-factorizing 15. Does that mean that since it successfully managed that, it is an all-purpose quantum computer which could be used for cryptanalysis and/or cryptographic attacks? No, not even close. Attacking e.g. RSA requires a lot more than ... This is a shot in the dark, but could you be thinking of the Needham-Schroeder protocol? It was published in 1978 [1], and an attack was published as much as 18 years later, in 1996 [2].
It is not an encryption method, though, but a protocol. In fact, the original paper does not even specify an encryption method to be used, but uses encryption symbolically. ... DES has not been mentioned in the previous two answers. Although it was known to be quite weak from very early on, it was widely used for a couple of decades at least, until newer algorithms (3DES, AES, but also e.g. RC4) displaced it. Nowadays it can be broken in hours with dedicated hardware or with at most a few thousand dollars of cloud computing time. ... Right now, the best published attack against MD5's preimage resistance (first preimage, actually, but it applies to second preimage resistance as well) finds preimages at an average cost of $2^{123.4}$, which is slightly better than the generic attack (average cost of $2^{128}$), but still way beyond the technologically feasible. The attack rebuilds the ... Assume I have a list of plaintext text and its corresponding ciphertext which was created using a specific key with AES in ECB mode. Can I recover that key? No. This is what is referred to as a known plaintext attack, and secure block ciphers are designed to prevent exactly this kind of attack. This answer on the Mathematics Stack Exchange goes into ... The question asks how a collision in a hash such as SHA-1 could become a practical concern, with focus on the case of a public-key certificate à la X.509. I'll first give an example involving executable code signing. I'll assume an attacker in a position to write bootstrap code (like the supplier of a development toolchain, or someone who compromised that ... It is all pairings... this is a rather complex matter. I recommend reading Ben Lynn's PhD dissertation; it is about as nice an introductory text on pairings as you can get. The definition is rather mind-twisting: you first define divisors, which are rather formal objects. They form the free abelian group on the curve points: for each curve point $P$, you define a ...
Yes, but the answer is more or less embedded in the question here; you can only say that you encrypt too much data in case the secret key and/or plaintext becomes vulnerable. Most modes of operation define how much data can be encrypted. This could mean real limits to the amount of data (approx. $2^{36}$ bytes or 64 GiB for AES-GCM) or it may be ... A TRNG is never used instead of a CSPRNG. They serve different purposes. A TRNG is used to seed a CSPRNG. A CSPRNG alone isn't enough to generate random data since it's reproducible. A hardware entropy source alone isn't enough to generate random data because all entropy sources have biases. For any purpose that's related to security or cryptography, a ... "Constant-time" is about not leaking information through timing-based side-channels. If you assume that there is no side-channel, then, in particular, there is no side-channel attack. It is nevertheless a rather bold assumption. There is a large variety of possible side-channels, and lab demonstrations of attacks have been done exploiting, among others, ... This is not correct: the private key $d_A$ must always be an integer. Your mistake is that you are doing modular division, e.g. $\frac{a}{b} \text{ mod } n$, incorrectly. You cannot simply divide the integers and then reduce by the modulus. The correct way to do this is to compute the modular inverse of $b$, i.e. $b^{-1} \text{ mod } n$, and then compute $a*b^{-... In the ROCA paper the authors define an integer $M$ (which they call a primorial) as follows:$$M = \prod_{i=1}^{n} P_i = 2 * 3 * ... * P_n$$Said another way, $M$ is the product of the first $n$ primes. What the authors observed is that the factors of a vulnerable RSA modulus $N$ have the following form:$$p = k*M + (65537^a \text{ mod } M)$$The $65537 ... The simplest attack is the brute-force attack. However, it is infeasible to brute-force even AES-128; AES also supports 192- and 256-bit key sizes.
To break AES-128 with brute force, you need to execute $2^{128}$ AES operations; today's top computers can reach about $2^{63}$ in around one hour. However, reaching $2^{128}$ is beyond classical ... Blinding protects against some side-channel attacks in RSA: those that target variations in timing or other side-channel information as a known function of $C$ (or $C^d\bmod n$ should that end up being available). As noted in the question, blinding is pointless against the most basic (Simple) Power Analysis attack, which determines the bits of the private ... Orin Kerr & Bruce Schneier have a recent paper out titled Encryption Workarounds, where they group the techniques to indirectly attack theoretically-secure encryption. I think their break-down into abstract categories is helpful, as it's freer of the technical "how", so you don't miss the forest for the trees. Think of all the possible ways to get what ... There are $2^{256}$ different AES keys, so the chance that you hit the right one on the first try is thus $2^{-256}=\frac1{2^{256}}$. To put this into perspective, here's a list of events that is roughly a billion times more likely to all happen (copied from Thomas Pornin's answer on Information Security SE): The computer spontaneously catches fire ... TL;DR: Yes, on narrow or some ad-hoc deterministic RSA padding, which must not be used. The Desmedt and Odlyzko attack on RSA signatures [DO1985] assumes a deterministic RSA signature scheme with appendix that, for RSA key $(N,e,d)$, signs messages $M$ with signature $S=(H(M))^d\bmod N$ for some public function $H$ with $H(M)<2^k$ and $k\ll\log_2(N)$, ... I once played this online game; it was an old-school MUD. You log in, chat, kill some goblins. It had a casino. You go into the casino and you bet X gold, and there was a 40% chance you win double your bet. Obviously in the long run, the casino will always win, right? But here's the thing: I knew the game was written in C++, and I knew the rand() ...
It is vulnerable to a sort of meet-in-the-middle attack since you don't really have to brute force $k_1$. Given a plaintext-ciphertext pair, $P$ and $C$, you can calculate $C' = E(k_2', P \oplus k_3')$ for candidate values $k_2'$ and $k_3'$. Then, you can determine $k_1' = C' \oplus C$. So, you only need to brute force the two keys $k_2$ and $k_3$, making ... With respect to collisions, hashing twice cannot increase security, because if $x$ and $x'\ne x$ collide for $H$, that is $H(x)=H(x')$, then $H(H(x))=H(H(x'))$. Otherwise said, any collision for $H$ is a collision for the double hash $H\circ H$. It is therefore trivial to exhibit collisions for $\operatorname{MD5}\circ\operatorname{MD5}$. Hence the answer ... I'm happy to have a crack at this one, providing I've understood your question correctly. Firstly, I wouldn't say the cipher possibly exhibits low-level bias at any point. It experiences plenty of bias, and I'll attempt to explain how we can use it to launch practical attacks. As I'd imagine you know, the strongest bias is found right at the start of the KSA, ... A character is usually encoded as ASCII. This means that it uses up one byte: a number from $0$ to $255$. It can be represented as a hexadecimal value $\text{0x00}$ to $\text{0xFF}$. All your operations must be done character by character. From now on, by "message", "key" and "cipher" I mean a single $0-255$ number. $$message1 \oplus key = cipher1 \\message2 \... As far as I know your attack is the best attack known, unless something better has very recently been published. Please note that for DES as the basic cipher the chosen $A$ may not work, but you can choose another $A$ and try again. Also, for a generic cipher with a $k$-bit key, the complexity is $$2^{k+1}=2\times 2^k=O(2^k)$$ as $k$ increases. You should read David Wagner's original paper.
You can see all of his work here. He authored the 'Slide Attack', 'Advanced Slide Attacks' and a few more related to the attack. Wikipedia has a good introduction here. Feistel ciphers like Simon are very vulnerable to Slide Attacks and similar. Removing the round constants from the key schedule will likely ... As is, the attack seems rather pointless. But one malicious potential is that because $\bar y$ allows to successfully verify a genuine message $(m,s)$, the verifier might grow trust in it and use $\bar y$ instead of $y$ in order to verify other messages. If we assume this, practical issues could arise: that the attacker can somewhat find $(\bar m,\bar s)$ (... Let $n=pq$ be an RSA modulus. Let also $e$ be the public exponent and $d = e^{-1} \bmod (p-1)(q-1)$ the matching private exponent. From $ed \equiv 1 \pmod{(p-1)(q-1)}$, there exists some integer $k$ such that $$ed = 1 + k(p-1)(q-1) = 1 + k(n-p-q+1) \iff kn-ed=k(p+q-1)-1$$ Dividing through by $dn$ yields$$\frac{k}{d}-\frac{e}{n} = \frac{k}{d}\Bigl(\...
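The derivation above (a small $d$ making $k/d$ an unusually good rational approximation of $e/n$) is the setup of Wiener's continued-fraction attack: $k/d$ shows up among the convergents of $e/n$. A minimal sketch on a toy key; the values $p=1009$, $q=1013$, $d=5$ are illustrative, and the attack only succeeds when $d$ is small enough:

```python
import math

def convergents(num, den):
    """Yield the continued-fraction convergents (h, k) of num/den."""
    h0, h1, k0, k1 = 0, 1, 1, 0
    while den:
        a, r = divmod(num, den)
        h0, h1 = h1, a * h1 + h0
        k0, k1 = k1, a * k1 + k0
        num, den = den, r
        yield h1, k1

def wiener(e, n):
    """Try to recover a small private exponent d from the public key (e, n)."""
    for k, d in convergents(e, n):      # candidate k/d ~ e/n
        if k == 0 or (e * d - 1) % k:
            continue
        phi = (e * d - 1) // k          # candidate for (p-1)(q-1)
        s = n - phi + 1                 # then p + q = n - phi + 1
        disc = s * s - 4 * n            # p, q are roots of x^2 - s*x + n
        if disc >= 0:
            t = math.isqrt(disc)
            if t * t == disc and (s + t) % 2 == 0:
                return d                # discriminant is a perfect square: found it
    return None

# Toy key (illustrative): p = 1009, q = 1013, tiny private exponent d = 5.
n = 1009 * 1013
phi = 1008 * 1012
d = 5
e = pow(d, -1, phi)     # matching public exponent
recovered = wiener(e, n)
```

Once a convergent yields a $\varphi$ candidate, factoring $n$ reduces to solving the quadratic $x^2 - (p+q)x + n = 0$, which is the perfect-square check above.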
In your original question you didn't have the assumption of nilpotence. I want to answer the question both with and without this assumption, because it is easier without the nilpotency assumption and both proofs use the same idea. The answer is "no" in both cases. A group $G$ is residually finite if for every $g\in G$ there exists a homomorphism $\phi_g: G\rightarrow F_g$ where $F_g$ is a finite group and where $\phi_g(g)\neq1$. Equivalently (by looking at the kernels of the maps $\phi_g$), the intersection of all finite index subgroups is trivial. A quick google should convince you that: free groups are residually finite, and there are finitely generated, non-residually finite groups. So let $Q$ be finitely generated and non-residually finite. There then exists a free group $\mathcal{F}$ with normal subgroup $K$ such that $\mathcal{F}/K\cong Q$. As $\mathcal{F}$ is residually finite we can take the trivially-intersecting subgroups $H_n$ to be all the subgroups of finite index. Note that every finite-index subgroup of $\mathcal{F}/K$ is the image of an $H_n$, and hence has the form $H_nK$. As $\mathcal{F}/K$ is not residually finite, there exists a non-trivial element $gK\in \mathcal{F}/K$ such that $gK\in\cap H_nK$. Hence, $\cap H_nK\neq K$ as required. You can define yourself out of this problem: a group $G$ is called (locally) extended residually finite (LERF/ERF) if for any (finitely generated) subgroup $K<G$ and any non-trivial element $g \in G\setminus K$ there is a homomorphism $\phi_g: G\rightarrow F_g$ where $F_g$ is a finite group and where $\phi_g(g)\not\in \phi_g(K)$. What the above is saying is that free groups are not ERF. On the other hand, free groups are LERF (and LERF tends to be studied more). So, nilpotent groups. Finitely generated nilpotent groups are residually finite, and as quotients of nilpotent groups are also nilpotent we cannot use the same argument as above.
However, you can restrict to a smaller class of subgroups $H_n$: Fix a prime $p$ and let $\mathcal{P}$ denote the class of finite $p$-groups. Suppose that $G$ is a non-cyclic, free nilpotent group. Then $G$ is residually $\mathcal{P}$ (this is due to Gruenberg, K. W. (1957), Residual Properties of Infinite Soluble Groups. Proc. Lond. Math. Soc. doi*). Hence, the subgroups of $G$ of index some power of $p$ intersect trivially: $$\bigcap_{[G:H_n]=p^j,\; j\in\mathbb{N}}H_n=1.$$As $G$ is free nilpotent it contains a subgroup $K$ such that $G/K$ contains a non-trivial element $gK$ of order $q$ where $q$ is coprime to $p$. Hence, $$\begin{align*}gK\in&\bigcap_{[G:H_n]=p^j}H_nK\\\Longrightarrow K\neq&\bigcap_{[G:H_n]=p^j}H_nK.\end{align*}$$Hence, the answer is still "no" if we assume finitely generated nilpotent. *I spent a good hour trying first to find and then to access this reference. Despite the paper being >60 years old, and despite me being at my work computer, it is behind a paywall I cannot bypass; I had to use MathSciNet to work out the results. Both my current employer and my previous employer were respectable UK universities. They both have people on the LMS council. Neither university subscribes to any of the LMS journals.
For what follows, let us restrict ourselves to discussing discrete probability distributions. When learning about things like relative entropy (aka Kullback-Leibler divergence) $D(\cdot || \cdot)$, one typically learns that it does not technically define a metric, since $D(P||Q) \ne D(Q||P)$, for some distributions $P,Q$. In general, I don't think there is any well-defined notion of distance between two probability distributions. However, when learning about Markov chains, we learn about nice conditions under which some distribution $\pi_0$ converges to the stationary distribution of the Markov chain $\pi$ (assuming it exists). Since there is no well-defined metric on the set of probability distributions, what is going on here? How is this convergence defined? In precisely what sense is $\pi_0$ approaching $\pi$? Is there some (obvious) underlying topology that I am just not seeing? Can someone help reconcile what's going on here?
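To make the question concrete, here is a minimal sketch (all numbers are made up for illustration): it shows the KL asymmetry mentioned above, and then measures a Markov chain's approach to its stationary distribution in total variation distance, which is one standard metric used to state such convergence results.

```python
import math

def kl(p, q):
    # Relative entropy D(P||Q); asymmetric, hence not a metric.
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def tv(p, q):
    # Total variation distance; a genuine metric on distributions.
    return 0.5 * sum(abs(pi - qi) for pi, qi in zip(p, q))

# Made-up three-outcome distributions: KL is asymmetric.
P, Q = [0.5, 0.4, 0.1], [0.3, 0.3, 0.4]
kl_pq, kl_qp = kl(P, Q), kl(Q, P)   # these differ

# Made-up two-state Markov chain (each row of T sums to 1).
T = [[0.9, 0.1], [0.2, 0.8]]
pi_star = [2 / 3, 1 / 3]            # stationary: pi_star = pi_star * T
dist = [1.0, 0.0]                   # some initial distribution pi_0
gaps = []
for _ in range(50):
    dist = [sum(dist[i] * T[i][j] for i in range(2)) for j in range(2)]
    gaps.append(tv(dist, pi_star))
# gaps shrink geometrically: pi_0 -> pi in total variation distance
```

Total variation is only one possible choice, but for finite state spaces it metrizes the obvious (finite-dimensional) topology, which is typically the sense in which such convergence statements are made.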
In a typical step-growth polymerization, the Anderson–Schulz–Flory (Flory–Schulz) distribution is the probability mass function that describes the number fraction or weight fraction of polymers of length $x$ at a given extent of reaction $p$. The extent of reaction, $p$, is a value between 0 and 1, and $x$ is the degree of polymerization for any given oligomer or polymer present in the system. $$\begin{align} \text{number fraction} &= (1-p) \cdot p^{x-1} \\ \text{weight fraction} &= x \cdot p^{x-1} \cdot (1-p)^2 \end{align}$$ A nice derivation of these equations can be found in this video. Figure 1 illustrates a plot of the number fraction distribution as a function of the degree of polymerization for several extents of reaction. Figure 2 illustrates the plot of the weight fraction distribution as a function of the degree of polymerization for several extents of reaction. Note that high extents of reaction in step-growth polymerizations result in very broad distributions, and are also required for high molecular weight polymers. Carothers' equation describes how stoichiometric imbalances necessarily limit the possible degree of polymerization: $$x_\text{average} = \frac{1+r}{1 + r - 2rp}$$ where $x_\text{average}$ is the average degree of polymerization, $p$ is the extent of reaction, and $r$ is the stoichiometric ratio of the two types of reacting functional groups. $r$ is always a number between 0 and 1: the excess functional group is always taken to be in the denominator of the ratio. When $r$ is equal to 1, stoichiometric balance exists, and the equation reduces to: $$x_\text{average} = \frac{1}{1-p}$$ For example, if I react a diol and a diacid under step-growth conditions: My goal is to purposely limit the degree of polymerization of a chemical reaction by introducing a stoichiometric imbalance in order to form short chain oligomers. I'd like to predict/calculate the statistical distribution of molecular weights for a given extent of reaction, $p$, and stoichiometric imbalance, $r$.
I'm having trouble convincing myself that the Flory–Schulz distribution is appropriate for modeling step-growth systems that contain stoichiometric imbalances. Is there a straightforward way to incorporate stoichiometric imbalance into these distribution functions? Any help would be appreciated.
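As a numerical sketch of the quantities involved, assuming the standard textbook Flory–Schulz forms (number fraction $(1-p)\,p^{x-1}$, weight fraction $x\,(1-p)^2\,p^{x-1}$) and the Carothers equation with imbalance; the function names are ours:

```python
def number_fraction(x, p):
    # Flory-Schulz number fraction of x-mers at extent of reaction p.
    return (1 - p) * p ** (x - 1)

def weight_fraction(x, p):
    # Flory-Schulz weight fraction of x-mers.
    return x * (1 - p) ** 2 * p ** (x - 1)

def carothers_dp(p, r=1.0):
    # Average degree of polymerization with stoichiometric imbalance r.
    return (1 + r) / (1 + r - 2 * r * p)

p = 0.95
xs = range(1, 2001)                     # truncate the infinite sums
n_total = sum(number_fraction(x, p) for x in xs)
w_total = sum(weight_fraction(x, p) for x in xs)   # both sums approach 1

dp_balanced = carothers_dp(p)           # r = 1: reduces to 1/(1-p) = 20
dp_limited = carothers_dp(p, r=0.8)     # imbalance caps chain growth
```

The Carothers average already shows the effect you're after: at $p = 0.95$, moving from $r = 1$ to $r = 0.8$ cuts the average degree of polymerization from 20 to about 6.4. Whether the full distribution keeps the Flory–Schulz form under imbalance is exactly the question posed above; this sketch only evaluates the balanced-case formulas.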
Completeness is a property of proof systems. Roughly, we say that a proof system is complete iff truth implies derivability. Yet, the proofs for FOL we usually find in logic textbooks, i.e. "Henkin-proofs", do not seem to make any reference to any deductive system at all. Consider the following proof sketch: Lemma 1 (Model Existence). $\Gamma$ is consistent $\Rightarrow \Gamma$ is satisfiable Proof. Let $\mathcal{L}$ be our background language. A usual proof goes more or less along the following lines: Let $\Gamma$ be consistent. Extend $\Gamma$ to a maximal consistent set $\Delta$. Show that $\Delta$ preserves consistency and that $\Gamma \subseteq \Delta$. Define a valuation $v$ for $\Delta$ such that $v(\psi)=1$ iff $\psi \in \Delta$ for all atomic $\psi \in \mathcal{L}$. Define $v$'s unique extension $\bar v$ as usual. Then $\bar v \vDash \Delta$ and, since $\Gamma \subseteq \Delta$, $\bar v \vDash \Gamma$. $\qquad \qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \Box$ Theorem (Completeness). $\Gamma \vDash \varphi \Rightarrow \Gamma \vdash \varphi$ Proof. By contraposition. Let $\Gamma \nvdash \varphi$. Then $\Gamma \cup \{\neg \varphi\}$ is consistent and, by the Model Existence Lemma, it has a model. Hence, $\Gamma \nvDash \varphi$. $\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \Box$ Note that I did not make any explicit mention of any intended deductive system at all. In fact, it seems to me the only syntactic notion I need is: Definition (Consistency). A set of sentences $\Gamma$ is consistent if there is $\varphi$ such that $\Gamma \nvdash \varphi$. But yet, I can define this notion generally, for any deductive system. Now I seem to have proven completeness. But completeness of which deductive system? I don't know. For the above proof of completeness, I didn't make explicit any axiom or inference rule. Doesn't this seem strange?
If this is so, it seems to me I could apply the same argument mutatis mutandis to any other deductive system (even those that are not complete?). Question. What is the role of the proof system in a Henkin-style proof of completeness? I am probably missing something, so can someone explain this in detail?
The Genetic and Evolutionary Computation Conference (GECCO) is the leading conference in the area of evolutionary computation. In 2016 it will take place in Denver, Colorado, USA, from July 20-24. This year 381 papers were submitted to 15 different tracks, and 1,740 reviews were assigned. Approximately 36% of papers have been accepted as full papers, with a further 32% accepted for poster presentation. Four full papers and two poster papers of our group were accepted for presentation: Friedrich, Tobias; Neumann, Frank; Sutton, Andrew M. Genetic and Evolutionary Computation Conference, GECCO 2016, Denver, CO, USA, July 20-24, 2016, Companion Material Proceedings. 2016. ACM. Practical optimization problems frequently include uncertainty about the quality measure, for example due to noisy evaluations. Thus, they do not allow for a straightforward application of traditional optimization techniques. In these settings, randomized search heuristics such as evolutionary algorithms are a popular choice because they are often assumed to exhibit some kind of resistance to noise. Empirical evidence suggests that some algorithms, such as estimation of distribution algorithms (EDAs), are robust against a scaling of the noise intensity, even without resorting to explicit noise-handling techniques such as resampling. In this paper, we want to support such claims with mathematical rigor. We introduce the concept of graceful scaling in which the run time of an algorithm scales polynomially with noise intensity. We study a monotone fitness function over binary strings with additive noise taken from a Gaussian distribution. We show that myopic heuristics cannot efficiently optimize the function under arbitrarily intense noise without any explicit noise-handling. Furthermore, we prove that using a population does not help. Finally we show that a simple EDA called the Compact Genetic Algorithm can overcome the shortsightedness of mutation-only heuristics to scale gracefully with noise.
We conjecture that recombinative genetic algorithms also have this property. Friedrich, Tobias; Kötzing, Timo; Quinzan, Francesco; Sutton, Andrew M. Ant Colony Optimization Beats Resampling on Noisy Functions. Genetic and Evolutionary Computation Conference (GECCO) 2016: 3-4 Despite the pervasiveness of noise in real-world optimization, there is little understanding of the interplay between the operators of randomized search heuristics and explicit noise-handling techniques such as statistical resampling. Ant Colony Optimization (ACO) algorithms are claimed to be particularly well-suited to dynamic and noisy problems, even without explicit noise-handling techniques. In this work, we empirically investigate the trade-offs between resampling and the noise-handling abilities of ACO algorithms. Our main focus is to locate the point where resampling costs more than it is worth. Dang, Duc-Cuong; Friedrich, Tobias; Krejca, Martin S.; Kötzing, Timo; Lehre, Per Kristian; Oliveto, Pietro S.; Sudholt, Dirk; Sutton, Andrew Michael. Escaping Local Optima with Diversity Mechanisms and Crossover. Genetic and Evolutionary Computation Conference (GECCO) 2016: 645-652 Population diversity is essential for the effective use of any crossover operator. We compare seven commonly used diversity mechanisms and prove rigorous run time bounds for the \((\mu+1)\) GA using uniform crossover on the fitness function \(Jump_k\). All previous results in this context only hold for unrealistically low crossover probability \(p_c=O(k/n)\), while we give analyses for the setting of constant \(p_c < 1\) in all but one case. Our bounds show a dependence on the problem size \(n\), the jump length \(k\), the population size \(\mu\), and the crossover probability \(p_c\).
For the typical case of constant \(k > 2\) and constant \(p_c\), we can compare the resulting expected optimisation times for different diversity mechanisms assuming an optimal choice of \(\mu\): \(O(n^{k-1})\) for duplicate elimination/minimisation, \(O(n^2 \log n)\) for maximising the convex hull, \(O(n \log n)\) for det. crowding (assuming \(p_c = k/n\)), \(O(n \log n)\) for maximising the Hamming distance, \(O(n \log n)\) for fitness sharing, \(O(n \log n)\) for the single-receiver island model. This proves a sizeable advantage of all variants of the \((\mu+1)\) GA compared to the (1+1) EA, which requires \(\Theta(n^k)\). In a short empirical study we confirm that the asymptotic differences can also be observed experimentally. Friedrich, Tobias; Kötzing, Timo; Krejca, Martin S.; Nallaperuma, Samadhi; Neumann, Frank; Schirneck, Martin. Fast Building Block Assembly by Majority Vote Crossover. Genetic and Evolutionary Computation Conference (GECCO) 2016: 661-668 Different works have shown how crossover can help with building block assembly. Typically, crossover might get lucky to select good building blocks from each parent, but these lucky choices are usually rare. In this work we consider a crossover operator which works on three parent individuals. In each component, the offspring inherits the value present in the majority of the parents; thus, we call this crossover operator majority vote. We show that, if good components are sufficiently prevalent in the individuals, majority vote creates an optimal individual with high probability. Furthermore, we show that this process can be amplified: as long as components are good independently and with probability at least \(1/2+\delta\), we require only \(O(\log 1/\delta + \log \log n)\) successive stages of majority vote to create an optimal individual with high probability! We show how this applies in two scenarios. The first scenario is the Jump test function.
With sufficient diversity, we get an optimization time of \(O(n \log n)\) even for jump sizes as large as \(O(n^{1/2-\epsilon})\). Our second scenario is a family of vertex cover instances. Majority vote optimizes this family efficiently, while local searches fail and only highly specialized two-parent crossovers are successful. Friedrich, Tobias; Kötzing, Timo; Krejca, Martin S. EDAs cannot be Balanced and Stable. Genetic and Evolutionary Computation Conference (GECCO) 2016: 1139-1146 Estimation of Distribution Algorithms (EDAs) work by iteratively updating a distribution over the search space with the help of samples from each iteration. Up to now, theoretical analyses of EDAs are scarce and present run time results for specific EDAs. We propose a new framework for EDAs that captures the idea of several known optimizers, including PBIL, UMDA, \(\lambda\)-MMASIB, cGA, and \((1,\lambda)\)-EA. Our focus is on analyzing two core features of EDAs: a balanced EDA is sensitive to signals in the fitness; a stable EDA remains uncommitted under a biasless fitness function. We prove that no EDA can be both balanced and stable. The LeadingOnes function is a prime example where, at the beginning of the optimization, the fitness function shows no bias for many bits. Since many well-known EDAs are balanced and thus not stable, they are not well-suited to optimize LeadingOnes. We give a stable EDA which optimizes LeadingOnes within a time of \(O(n\,\log n)\). Doerr, Benjamin; Doerr, Carola; Kötzing, Timo. The Right Mutation Strength for Multi-Valued Decision Variables. Genetic and Evolutionary Computation Conference (GECCO) 2016: 1115-1122 The most common representation in evolutionary computation are bit strings. This is ideal to model binary decision variables, but less useful for variables taking more values.
With very little theoretical work existing on how to use evolutionary algorithms for such optimization problems, we study the run time of simple evolutionary algorithms on some OneMax-like functions defined over \(\Omega=\{0,1,\dots,r-1\}^n\). More precisely, we regard a variety of problem classes requesting the component-wise minimization of the distance to an unknown target vector \(z \in \Omega\). For such problems we see a crucial difference in how we extend the standard-bit mutation operator to these multi-valued domains. While it is natural to select each position of the solution vector to be changed independently with probability \(1/n\), there are various ways to then change such a position. If we change each selected position to a random value different from the original one, we obtain an expected run time of \(\Theta(nr\log n)\). If we change each selected position by either +1 or -1 (random choice), the optimization time reduces to \(\Theta(nr+n\log n)\). If we use a random mutation strength \(i \in \{1,\dots,r-1\}\) with probability inversely proportional to \(i\) and change the selected position by either +\(i\) or -\(i\) (random choice), then the optimization time becomes \(\Theta(n\log(r)(\log(n)+\log(r)))\), bringing down the dependence on \(r\) from linear to polylogarithmic. One of our results depends on a new variant of the lower bounding multiplicative drift theorem. Algorithm Engineering Our research focus is on theoretical computer science and algorithm engineering. We are equally interested in the mathematical foundations of algorithms and developing efficient algorithms in practice. A special focus is on random structures and methods.
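The majority-vote crossover described in the abstracts above admits a very short sketch (an illustration with made-up parameters, not the papers' experimental setup):

```python
import random

def majority_vote(p1, p2, p3):
    # Offspring inherits, in each component, the value held by >= 2 of 3 parents.
    return [1 if a + b + c >= 2 else 0 for a, b, c in zip(p1, p2, p3)]

random.seed(0)
n, delta = 1000, 0.2
# Each bit is "good" (= 1) independently with probability 1/2 + delta.
parents = [[1 if random.random() < 0.5 + delta else 0 for _ in range(n)]
           for _ in range(3)]
child = majority_vote(*parents)
# Per-bit success rate is amplified: 3q^2(1-q) + q^3 > q for q = 0.7.
```

Iterating this amplification is what yields the \(O(\log 1/\delta + \log\log n)\) stage bound quoted above: each stage pushes the per-component success probability closer to 1.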
The practical thing that you are missing here is that the air compressor is cooled constantly so that it doesn't melt during operation (can't be adiabatic). So the upper limit on the temperature is driven by the operating temperature of the compressor. Then the other posters' comments about heat transfer out of the tank apply. Start with the ideal gas law: $$PV=nRT$$Let's compare the volume of air in the tyre with a similar volume outside the tyre under normal atm conditions. We know that $V$, $R$ and $T$ are constant so:$$\frac{P_{atm}}{n_{atm}}=\frac{P_{tyre}}{n_{tyre}}$$We know that $$n \propto m \propto \rho$$So we can say: $$\frac{P_{atm}}{\rho_{atm}}=\frac{P_{tyre}}{... Foundations: Given a state property $P$, the definition of its partial molar value in a mixture at a given temperature $T$ and total pressure $p$ is below.$$ \check{P}_i \equiv \left( \frac{\partial P}{\partial n_i} \right)_{T, p, n_{j \neq i}} $$The definition measures the change in the total extensive property of the system when the number of moles of ... Would you feel better if they said that it applies to all extensive properties with units involving energy, or, more precisely, with its most fundamental units involving Newtons? So Joules is Newton-meters. But, in the case of volume, if you want to say that it is energy (Joules=N-m) divided by pressure (N/m^2), the N's cancel out. Proof regarding ... Review the equipartition theorem. It tells us that each degree of freedom quadratic in coordinate or momentum contributes $\frac{1}{2} kT$ per element. Ideal gas particles have no internal structure, so there is no contribution from internal mechanisms. The energy of the particle does not depend on location, so there is no dependence on coordinate, and ... Use an inverted measuring cylinder instead of a graduated beaker: it's more precise: Once the cylinder is near full, adjust its position so that the liquid meniscus inside and outside of the cylinder coincide exactly.
That means pressure inside the cylinder is equal to atmospheric pressure. The ratio of volume collected and time elapsed is the volumetric ... Interesting question!Like you say, temperature is related to molecular kinetic energy as$$e_t = \frac{3}{2} \frac{k T}{m} ,$$where $m$ is the molecular mass, $k$ is Boltzmann's constant, and $e_t$ the specific translational internal energy (in J/kg). The latter comes from the kinetic energy from random thermal translational motion of the gas molecules. ... Another way of phrasing your question might be how to determine how much air flows through the engine rather than around it.Super Sonic StartedUnder normal ramjet operation this is actually fairly easy to calculate. We can multiply the volumetric flow rate by the density:$$\dot m = \rho_1 V_1 A_1$$Where $\dot m$ is the mass flow rate, $\rho$ is ... The incoming air slows down and compresses due to its speed relative to the engine and the shape of the inlet. In order to slow it down, a force must be applied to the air. This force comes in part from the inlet hardware, and in part from the high pressure air in the combustion chamber. It is this incoming air that creates the pressure in the combustion ...
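The quoted mass-flow relation is a one-liner to evaluate (the numbers below are illustrative values of our own, not from any specific engine):

```python
def mass_flow_rate(rho, velocity, area):
    # m_dot = rho * V * A   (kg/m^3 * m/s * m^2 = kg/s)
    return rho * velocity * area

# e.g. sea-level air density ~1.225 kg/m^3 through a 0.5 m^2 inlet at 300 m/s
m_dot = mass_flow_rate(1.225, 300.0, 0.5)   # kg/s
```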
Learning Objectives By the end of this section, you will be able to: Explain the difference between the solar day and the sidereal day Explain mean solar time and the reason for time zones The measurement of time is based on the rotation of Earth. Throughout most of human history, time has been reckoned by positions of the Sun and stars in the sky. Only recently have mechanical and electronic clocks taken over this function in regulating our lives. The Length of the Day The most fundamental astronomical unit of time is the day, measured in terms of the rotation of Earth. There is, however, more than one way to define the day. Usually, we think of it as the rotation period of Earth with respect to the Sun, called the solar day. After all, for most people sunrise is more important than the rising time of Arcturus or some other star, so we set our clocks to some version of Sun-time. However, astronomers also use a sidereal day, which is defined in terms of the rotation period of Earth with respect to the stars. A solar day is slightly longer than a sidereal day because (as you can see from Figure 1) Earth not only turns but also moves along its path around the Sun in a day. Suppose we start when Earth’s orbital position is at day 1, with both the Sun and some distant star (located in the direction indicated by the long white arrow pointing left), directly in line with the zenith for the observer on Earth. When Earth has completed one rotation with respect to the distant star and is at day 2, the long arrow again points to the same distant star. However, notice that because of the movement of Earth along its orbit from day 1 to 2, the Sun has not yet reached a position above the observer. To complete a solar day, Earth must rotate an additional amount, equal to 1/365 of a full turn. The time required for this extra rotation is 1/365 of a day, or about 4 minutes. So the solar day is about 4 minutes longer than the sidereal day. 
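The arithmetic behind "about 4 minutes" is a quick check (using the orbital period of about 365.25 days rather than the text's rounded 1/365 makes no visible difference):

```python
solar_day_min = 24 * 60                   # 1440 minutes
extra_fraction = 1 / 365.25               # Earth's orbital motion per day
extra_minutes = solar_day_min * extra_fraction        # ~3.94 minutes
sidereal_day_min = solar_day_min - extra_minutes      # ~23 h 56 min
```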
Because our ordinary clocks are set to solar time, stars rise 4 minutes earlier each day. Astronomers prefer sidereal time for planning their observations because in that system, a star rises at the same time every day. Example 1: Sidereal Time and Solar Time The Sun makes a complete circle in the sky approximately every 24 hours, while the stars make a complete circle in the sky in 4 minutes less time, or 23 hours and 56 minutes. This causes the positions of the stars at a given time of day or night to change slightly each day. Since stars rise 4 minutes earlier each day, that works out to about 2 hours per month (4 minutes × 30 = 120 minutes or 2 hours). So, if a particular constellation rises at sunset during the winter, you can be sure that by the summer, it will rise about 12 hours earlier, with the sunrise, and it will not be so easily visible in the night sky. Let’s say that tonight the bright star Sirius rises at 7:00 p.m. from a given location so that by midnight, it is very high in the sky. At what time will Sirius rise in three months? [latex]90\text{ days}\times \frac{4\text{ minutes}}{\text{day}}=\text{360 minutes or 6 hours}[/latex] It will rise at about 1:00 p.m. and be high in the sky at around sunset instead of midnight. Sirius is the brightest star in the constellation of Canis Major (the big dog). So, some other constellation will be prominently visible high in the sky at this later date. Check Your Learning If a star rises at 8:30 p.m. tonight, approximately what time will it rise two months from now? In two months, the star will rise: [latex]\displaystyle60\text{ days}\times \frac{4\text{ minutes}}{\text{day}}=24\text{0 minutes or 4 hours earlier}[/latex] This means it will rise at 4:30 p.m. Apparent Solar Time We can define apparent solar time as time reckoned by the actual position of the Sun in the sky (or, during the night, its position below the horizon). 
This is the kind of time indicated by sundials, and it probably represents the earliest measure of time used by ancient civilizations. Today, we adopt the middle of the night as the starting point of the day and measure time in hours elapsed since midnight. During the first half of the day, the Sun has not yet reached the meridian (the great circle in the sky that passes through our zenith). We designate those hours as before midday ( ante meridiem, or a.m.), before the Sun reaches the local meridian. We customarily start numbering the hours after noon over again and designate them by p.m. ( post meridiem), after the Sun reaches the local meridian. Although apparent solar time seems simple, it is not really very convenient to use. The exact length of an apparent solar day varies slightly during the year. The eastward progress of the Sun in its annual journey around the sky is not uniform because the speed of Earth varies slightly in its elliptical orbit. Another complication is that Earth’s axis of rotation is not perpendicular to the plane of its revolution. Thus, apparent solar time does not advance at a uniform rate. After the invention of mechanical clocks that run at a uniform rate, it became necessary to abandon the apparent solar day as the fundamental unit of time. Mean Solar Time and Standard Time Instead, we can consider the mean solar time, which is based on the average value of the solar day over the course of the year. A mean solar day contains exactly 24 hours and is what we use in our everyday timekeeping. Although mean solar time has the advantage of progressing at a uniform rate, it is still inconvenient for practical use because it is determined by the position of the Sun. For example, noon occurs when the Sun is overhead. But because we live on a round Earth, the exact time of noon is different as you change your longitude by moving east or west. 
If mean solar time were strictly observed, people traveling east or west would have to reset their watches continually as the longitude changed, just to read the local mean time correctly. For instance, a commuter traveling from Oyster Bay on Long Island to New York City would have to adjust the time on the trip through the East River tunnel because Oyster Bay time is actually about 1.6 minutes more advanced than that of Manhattan. (Imagine an airplane trip in which an obnoxious flight attendant gets on the intercom every minute, saying, "Please reset your watch for local mean time.") Until near the end of the nineteenth century, every city and town in the United States kept its own local mean time. With the development of railroads and the telegraph, however, the need for some kind of standardization became evident. In 1883, the United States was divided into four standard time zones (now six, including Hawaii and Alaska), each with one system of time within that zone. By 1900, most of the world was using the system of 24 standardized global time zones. Within each zone, all places keep the same standard time, with the local mean solar time of a standard line of longitude running more or less through the middle of each zone. Now travelers reset their watches only when the time change has amounted to a full hour. Pacific standard time is 3 hours earlier than eastern standard time, a fact that becomes painfully obvious in California when someone on the East Coast forgets and calls you at 5:00 a.m. Globally, almost all countries have adopted one or more standard time zones, although one of the largest nations, India, has settled on a half-zone, being 5.5 hours from Greenwich standard. Also, several large countries (Russia, China) officially use only one time zone, so all the clocks in that country keep the same time. In Tibet, for example, the Sun rises while the clocks (which keep Beijing time) say it is midmorning already.
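Since Earth turns 360° in 24 hours, local mean time shifts by 4 minutes for every degree of longitude. With rough longitudes (our approximate figures, for illustration), this reproduces the roughly 1.6-minute Oyster Bay example:

```python
MINUTES_PER_DEGREE = 24 * 60 / 360   # 360 degrees per 24 h -> 4 min per degree

def mean_time_offset(lon_a, lon_b):
    # Positive result: location A's local mean time is ahead of B's.
    # Longitudes in degrees, east positive.
    return (lon_a - lon_b) * MINUTES_PER_DEGREE

# Approximate longitudes: Oyster Bay ~73.5 W, Manhattan ~74.0 W.
offset = mean_time_offset(-73.53, -73.97)   # Oyster Bay leads by ~2 minutes
```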
Daylight saving time is simply the local standard time of the place plus 1 hour. It has been adopted for spring and summer use in most states in the United States, as well as in many countries, to prolong the sunlight into evening hours, on the apparent theory that it is easier to change the time by government action than it would be for individuals or businesses to adjust their own schedules to produce the same effect. It does not, of course, “save” any daylight at all—because the amount of sunlight is not determined by what we do with our clocks—and its observance is a point of legislative debate in some states. The International Date Line The fact that time is always advancing as you move toward the east presents a problem. Suppose you travel eastward around the world. You pass into a new time zone, on the average, about every 15° of longitude you travel, and each time you dutifully set your watch ahead an hour. By the time you have completed your trip, you have set your watch ahead a full 24 hours and thus gained a day over those who stayed at home. The solution to this dilemma is the International Date Line, set by international agreement to run approximately along the 180° meridian of longitude. The date line runs down the middle of the Pacific Ocean, although it jogs a bit in a few places to avoid cutting through groups of islands and through Alaska (Figure 2). By convention, at the date line, the date of the calendar is changed by one day. Crossing the date line from west to east, thus advancing your time, you compensate by decreasing the date; crossing from east to west, you increase the date by one day. To maintain our planet on a rational system of timekeeping, we simply must accept that the date will differ in different cities at the same time. A good example is the date when the Imperial Japanese Navy bombed Pearl Harbor in Hawaii, known in the United States as Sunday, December 7, 1941, but taught to Japanese students as Monday, December 8. 
Key Concepts and Summary The basic unit of astronomical time is the day—either the solar day (reckoned by the Sun) or the sidereal day (reckoned by the stars). Apparent solar time is based on the position of the Sun in the sky, and mean solar time is based on the average value of a solar day during the year. By international agreement, we define 24 time zones around the world, each with its own standard time. The convention of the International Date Line is necessary to reconcile times on different parts of Earth. Glossary apparent solar time: time as measured by the position of the Sun in the sky (the time that would be indicated by a sundial) International Date Line: an arbitrary line on the surface of Earth near longitude 180° across which the date changes by one day mean solar time: time based on the rotation of Earth; mean solar time passes at a constant rate, unlike apparent solar time sidereal day: Earth’s rotation period as defined by the positions of the stars in the sky; the time between successive passages of the same star through the meridian solar day: Earth’s rotation period as defined by the position of the Sun in the sky; the time between successive passages of the Sun through the meridian
Are there any differences between the study of Calculus done by Newton and by Leibniz? If so, please mention them point by point. Newton's notation, Leibniz's notation and Lagrange's notation are all in use today to some extent; they are respectively: $$\dot{f} = \frac{df}{dt}=f'(t)$$ $$\ddot{f} = \frac{d^2f}{dt^2}=f''(t)$$ You can find more notation examples on Wikipedia. The standard integral notation ($\displaystyle\int_0^\infty f\, dt$) was developed by Leibniz as well. Newton did not have a standard notation for integration. I have read from "The Information" by James Gleick the following: According to Babbage, who eventually took the Lucasian Professorship at Cambridge which Newton held, Newton's notation crippled mathematical development. He worked as an undergraduate to institute Leibniz's notation as it is used today at Cambridge, despite the distaste the university still had because of the Newton/Leibniz conflict. This notation is a lot more useful than Newton's in most cases. It does, however, imply that it can be treated as a simple fraction, which is incorrect. You should definitely take a look at the second chapter of Arnold's Huygens & Barrow, Newton & Hooke. The late Prof. Arnold summarized therein the difference between Newton's approach to mathematical analysis and Leibniz's as follows: Newton's analysis was the application of power series to the study of motion... For Leibniz, ... analysis was a more formal algebraic study of differential rings. Arnold's overview of Leibniz's contributions to the theme is spiced up with a non-negligible number of thought-provoking remarks: In the work of other geometers--e.g., Huygens and Barrow--many objects connected with a given curve also appeared [for example: abscissa, ordinate, tangent, the slope of the tangent, the area of a curvilinear figure, the subtangent, the normal, the subnormal, and so on]...
Leibniz, with his individual tendency to universality [he considered necessary to discover the so-called characteristic, something universal, that unites everything in science and contains all answers to all questions], decided that all these quantities should be considered in the same way. For this he introduced a single term for any of the quantities connected with a given curve and fulfilling some function in relation to the given curve--the term function... Thus, according to Leibniz many functions were associated with a curve. Newton had another term--fluent--which denoted a flowing quantity, a variable quantity, and hence associated with motion. On the basis of Pascal's studies and his own arguments Leibniz quite rapidly developed formal analysis in the form in which we now know it. That is, in a form specially suitable to teach analysis by people who do not understand it to people who will never understand it... Leibniz quite rapidly established the formal rules for operating with infinitesimals, whose meaning is obscure. Leibniz's method was as follows. He assumed that the whole of mathematics, like the whole of science, is found inside us, and by means of philosophy alone we can hit upon everything if we attentively take heed of processes that occur inside our mind. By this method he discovered various laws and sometimes very successfully. For example, he discovered that $d(x+y) = dx+dy$, and this remarkable discovery immediately forced him to think about what the differential of a product is. In accordance with the universality of his thoughts he rapidly came to the conclusion that differentiation [had to be] a ring homomorphism, that is, that the formula $d(xy) = dx dy$ must hold. But after some time he verified that this leads to some unpleasant consequences, and found the correct formula $d(xy) = xdy + y dx$, which is now called Leibniz's rule. 
None of the inductively thinking mathematicians--neither Barrow nor Newton, who as a consequence was called an empirical ass in the Marxist literature--could [have ever gotten] Leibniz's original hypothesis into his head, since to such a person it was quite obvious what the differential of a product is, from a simple drawing... Beyond the issue of notation, Newton experimented with a number of foundational approaches. One of the earliest involved infinitesimals, whereas later he shied away from them because of philosophical resistance from his contemporaries, often stemming from sensitive religious considerations closely related to inter-denominational quarrels. Leibniz was also aware of the quarrels, but he used infinitesimals and differentials systematically in developing the calculus, and for this reason was more successful in attracting followers and stimulating research--or what he called the Ars Inveniendi. From a practical point of view, the notation was vastly different. A particular sore point for me is that the Leibniz notation lets you incorrectly work with derivatives as though they were a mathematical fraction. Unfortunately this 'works out' a lot of the time, so it's still used, even in college courses, today. I don't think there is anything wrong with shortcuts, up to the point where they interfere with understanding. In this case, I do believe it creates a misunderstanding of the subject matter. This alone, I think, puts Newton's notation above Leibniz's. From Loemker's translation: "Leibniz's reasoning, though it strives for a broader application of the law of inverse squares than to gravity alone, is less general than Newton's (Principia, Book I, Propositions 1, 2, 14), since it presupposes harmonic motion." Leibniz, Gottfried Wilhelm. Philosophical Papers and Letters: A Selection. Translated and edited, with an introduction by Leroy E. Loemker. 2nd ed. Dordrecht: D. Reidel, 1970. p. 362
I want to compute the following integral $$- \frac{1}{M(\lambda_1-\lambda_2)}\int\limits_{-\infty}^t(e^{\lambda_1(t-t')}-e^{\lambda_2(t-t')})(\beta\omega A\sin\omega t' +g)\;dt'$$ Here the integral that's not trivial is $$\int_{-\infty}^t (e^{\lambda_1(t-t')}-e^{\lambda_2(t-t')}) \sin\omega t'\; dt'.$$ If I use $\sin \omega t' = \frac{1}{2i}(e^{i\omega t'}-e^{-i\omega t'})$, then I get that the last integral is: $$\left[\frac{e^{\lambda_1t + (i\omega - \lambda_1)t'}}{i\omega-\lambda_1}+\frac{e^{\lambda_1t - (i\omega + \lambda_1)t'}}{i\omega+\lambda_1}+\frac{e^{\lambda_2t + (i\omega - \lambda_2)t'}}{-i\omega+\lambda_2}-\frac{e^{\lambda_2t - (i\omega + \lambda_2)t'}}{i\omega+\lambda_2}\right]^t_{-\infty}\tag{1}$$ which diverges. My question is: can I assume that the integral is the imaginary part of whatever comes out if I use $\sin\omega t' = \operatorname{Im}(e^{i\omega t'})$? I mean, I will mix this exponential with the others and I'm not sure that after I do some algebra and take the imaginary part I will get a true result. Also, why does the expression in (1) diverge (I don't see that I made any mistake)? I appreciate your help.
Find the values of $x$ such that $$2\tan^{-1}x+\sin^{-1}\left(\frac{2x}{1+x^2}\right)$$ is independent of $x$. Checking for $x\in [-1,1]$: in this domain $\sin^{-1}\left(\frac{2x}{1+x^2}\right)$ comes out to be $2\tan^{-1}x$, hence the given function equals $4\tan^{-1}x$, which is clearly dependent on $x$. Now checking for $x\in (1,\infty)$: in this domain $2\tan^{-1}x$ comes out to be $\pi-\sin^{-1}\left(\frac{2x}{1+x^2}\right)$, and hence the net sum is $\pi$, independent of $x$. Now checking for $x\in (-\infty,-1)$: in this domain $2\tan^{-1}x$ comes out to be $-\pi-\sin^{-1}\left(\frac{2x}{1+x^2}\right)$, and hence the net sum is $-\pi$, therefore again independent of $x$. But the answer has been mentioned as just $x\in [1,\infty)$. Can anybody tell me why the second set has not been included?
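The three cases above can be sanity-checked numerically (this is a check, not a proof; the test points are arbitrary choices):

```python
import math

# g(x) = 2*arctan(x) + arcsin(2x / (1 + x^2))
def g(x):
    return 2 * math.atan(x) + math.asin(2 * x / (1 + x * x))

# On [-1, 1] the sum equals 4*arctan(x), so it varies with x
print(abs(g(0.5) - 4 * math.atan(0.5)) < 1e-12)
# For x > 1 the sum is constantly pi; for x < -1 it is constantly -pi
print(abs(g(2) - math.pi) < 1e-12 and abs(g(7) - math.pi) < 1e-12)
print(abs(g(-3) + math.pi) < 1e-12)
```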
Definition: Mellin Transform. The Mellin transform of $f$, denoted $\mathcal M \left({f}\right)$ or $\phi$, is defined as: $\displaystyle \mathcal M \left\{{f \left({t}\right)} \right\} \left({s}\right) = \phi \left({s}\right) = \int_0^{\to +\infty} t^{s-1} f \left({t}\right) \, \mathrm d t$ wherever this improper integral exists. Source of Name: this entry was named for Robert Hjalmar Mellin. Also see: results about the Mellin transform can be found here.
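As a concrete illustration (not part of the definition page): the Mellin transform of $e^{-t}$ is the Gamma function, $\mathcal M\{e^{-t}\}(s) = \Gamma(s)$, which can be checked by direct quadrature. The truncation point and step count below are arbitrary choices:

```python
import math

# Midpoint-rule approximation of the Mellin integral
#   M{f}(s) = \int_0^\infty t^(s-1) f(t) dt, truncated at t = upper
def mellin(f, s, upper=40.0, n=200_000):
    h = upper / n
    return h * sum(((k + 0.5) * h) ** (s - 1) * f((k + 0.5) * h)
                   for k in range(n))

s = 3.5
approx = mellin(lambda t: math.exp(-t), s)
# For s > 1 the integrand vanishes at both endpoints, so the
# truncated midpoint rule is accurate here
print(abs(approx - math.gamma(s)) < 1e-4)
```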
In this note, we provide an explicit formula for computing the quasiconvex envelope of any real-valued function W: SL(2) → ℝ with W(RF) = W(FR) = W(F) for all F ∈ SL(2) and all R ∈ SO(2), where SL(2) and SO(2) denote the special linear group and the special orthogonal group, respectively. In order to obtain our result, we combine earlier work by Dacorogna and Koshigoe on the relaxation of certain conformal planar energy functions with a recent result on the equivalence between polyconvexity and rank-one convexity for objective and isotropic energies in planar incompressible nonlinear elasticity. For $\beta\in (1,2]$ the $\beta$-transformation $T_{\beta}:[0,1)\rightarrow [0,1)$ is defined by $T_{\beta}(x)=\beta x\hspace{0.6em}({\rm mod}\hspace{0.2em}1)$.
For $t\in [0,1)$ let $K_{\beta}(t)$ be the survivor set of $T_{\beta}$ with hole $(0,t)$ given by $$K_{\beta}(t):=\{x\in [0,1):T_{\beta}^{n}(x)\not \in (0,t)\text{ for all }n\geq 0\}.$$ In this paper we characterize the bifurcation set $E_{\beta}$ of all parameters $t\in [0,1)$ for which the set-valued function $t\mapsto K_{\beta}(t)$ is not locally constant. We show that $E_{\beta}$ is a Lebesgue null set of full Hausdorff dimension for all $\beta\in (1,2)$. We prove that for Lebesgue almost every $\beta\in (1,2)$ the bifurcation set $E_{\beta}$ contains infinitely many isolated points and infinitely many accumulation points arbitrarily close to zero. On the other hand, we show that the set of $\beta\in (1,2)$ for which $E_{\beta}$ contains no isolated points has zero Hausdorff dimension. These results contrast with the situation for $E_{2}$, the bifurcation set of the doubling map. Finally, we give for each $\beta\in (1,2)$ a lower and an upper bound for the value $\tau_{\beta}$ such that the Hausdorff dimension of $K_{\beta}(t)$ is positive if and only if $t<\tau_{\beta}$. We show that $\tau_{\beta}\leq 1-(1/\beta)$ for all $\beta\in (1,2)$. The classical notions of monotonicity and convexity can be characterized via the nonnegativity of the first and the second derivative, respectively. These notions can be extended applying Chebyshev systems. The aim of this note is to characterize generalized monotonicity in terms of differential inequalities, yielding analogous results to the classical derivative tests. Applications in the fields of convexity and differential inequalities are also discussed.
Let $S_n$, $n\geq 1$, be the successive sums of the payoffs in the classical St. Petersburg game. The celebrated Feller weak law states that $S_n/(n\log_2 n)\xrightarrow{\mathbb{P}}1$ as $n\to\infty$. In this paper we review some earlier results of ours and extend some of them as we consider an asymmetric St. Petersburg game, in which the distribution of the payoff $X$ is given by $\mathbb{P}(X=sr^{k-1})=pq^{k-1}$, $k=1,2,\ldots$, where $p+q=1$ and $s,r>0$. Two main results are extensions of the Feller weak law and the convergence in distribution theorem of Martin-Löf (1985). Moreover, it is well known that almost-sure convergence fails, though Csörgő and Simons (1996) showed that almost-sure convergence holds for trimmed sums and also for sums trimmed by an arbitrary fixed number of maxima. In view of the discreteness of the distribution we focus on `max-trimmed sums', that is, on the sums trimmed by the random number of observations that are equal to the largest one, and prove limit theorems for simply trimmed sums, for max-trimmed sums, as well as for the `total maximum'. Analogues with respect to the random number of summands equal to the minimum are also obtained and, finally, for joint trimming. When two trains travel along the same track in the same direction, it is a common safety requirement that the trains must be separated by at least two signals. This means that there will always be at least one clear section of track between the two trains. If the safe-separation condition is violated, then the driver of the following train must adopt a revised strategy that will enable the train to stop at the next signal if necessary. One simple way to ensure safe separation is to define a prescribed set of latest allowed section exit times for the leading train and a corresponding prescribed set of earliest allowed section entry times for the following train.
We will find strategies that minimize the total tractive energy required for both trains to complete their respective journeys within the overall allowed journey times and subject to the additional prescribed section clearance times. We assume that the drivers use a discrete control mechanism and show that the optimal driving strategy for each train is defined by a sequence of approximate speedholding phases at a uniquely defined optimal driving speed on each section and that the sequence of optimal driving speeds is a decreasing sequence for the leading train and an increasing sequence for the following train. We illustrate our results by finding optimal strategies and associated speed profiles for both trains in some elementary but realistic examples. We prove Hardy-type inequalities for a fractional Dunkl–Hermite operator, which incidentally gives Hardy inequalities for the fractional harmonic oscillator as well. The idea is to use h-harmonic expansions to reduce the problem in the Dunkl–Hermite context to the Laguerre setting. Then, we push forward a technique based on a non-local ground representation, initially developed by Frank et al. [‘Hardy–Lieb–Thirring inequalities for fractional Schrödinger operators’, J. Amer. Math. Soc. 21 (2008), 925–950] in the Euclidean setting, to obtain a Hardy inequality for the fractional-type Laguerre operator. The above-mentioned method is shown to be adaptable to an abstract setting, whenever there is a ‘good’ spectral theorem and an integral representation for the fractional operators involved. Let $s\in \mathbb{R}$ and $0<p\leqslant \infty$. The fractional Fock–Sobolev spaces $F_{\mathscr{R}}^{s,p}$ are introduced through the fractional radial derivatives $\mathscr{R}^{s/2}$. We describe explicitly the reproducing kernels for the fractional Fock–Sobolev spaces $F_{\mathscr{R}}^{s,2}$ and then get the pointwise size estimate of the reproducing kernels.
By using the estimate, we prove that the fractional Fock–Sobolev spaces $F_{\mathscr{R}}^{s,p}$ are identified with the weighted Fock spaces $F_{s}^{p}$ that do not involve derivatives. So, the study on the Fock–Sobolev spaces is reduced to that on the weighted Fock spaces. We examine dynamical systems which are ‘nonchaotic’ on a big (in the sense of Lebesgue measure) set in each neighbourhood of a fixed point $x_{0}$, that is, the entropy of this system is zero on a set for which $x_{0}$ is a density point. Considerations connected with this family of functions are linked with functions attracting positive entropy at $x_{0}$, that is, each mapping sufficiently close to the function has positive entropy on each neighbourhood of $x_{0}$. We consider a scale invariant Cassinian metric and a Gromov hyperbolic metric. We discuss a distortion property of the scale invariant Cassinian metric under Möbius maps of a punctured ball onto another punctured ball. We obtain a modulus of continuity of the identity map from a domain equipped with the scale invariant Cassinian metric (or the Gromov hyperbolic metric) onto the same domain equipped with the Euclidean metric. Finally, we establish the quasi-invariance properties of both metrics under quasiconformal maps. The fractional derivatives include nonlocal information and thus their calculation requires huge storage and computational cost for long time simulations. We present an efficient and high-order accurate numerical formula to speed up the evaluation of the Caputo fractional derivative based on the L2-1σ formula proposed in [A. Alikhanov, J. Comput. Phys., 280 (2015), pp. 424-438], and employing the sum-of-exponentials approximation to the kernel function appearing in the Caputo fractional derivative. Both theoretically and numerically, we prove that while applied to solving time fractional diffusion equations, our scheme not only has unconditional stability and high accuracy but also reduces the storage and computational cost.
We propose a hybrid spectral element method for fractional two-point boundary value problems (FBVPs) involving both Caputo and Riemann-Liouville (RL) fractional derivatives. We first formulate these FBVPs as second-kind Volterra integral equations (VIEs) with weakly singular kernel, following a similar procedure in [16]. We then design a hybrid spectral element method with generalized Jacobi functions and Legendre polynomials as basis functions. The use of generalized Jacobi functions allows us to deal with the usual singularity of solutions at t = 0. We establish the existence and uniqueness of the numerical solution, and derive hp-type error estimates under the L2(I)-norm for the transformed VIEs. Numerical results are provided to show the effectiveness of the proposed methods. We consider stable and almost stable points of autonomous and nonautonomous discrete dynamical systems defined on the closed unit interval. Our considerations are associated with chaos theory by adding an additional assumption that the entropy of a function at a given point is infinite. Let $E$ be a finite-dimensional normed space and $\Omega$ a non-empty convex open set in $E$. We show that the Lipschitz-free space of $\Omega$ is canonically isometric to the quotient of $L^{1}(\Omega,E)$ by the subspace consisting of vector fields with zero divergence in the sense of distributions on $E$. We consider the total curvature of graphs of curves in high-codimension Euclidean space. We introduce the corresponding relaxed energy functional and prove an explicit representation formula. In the case of continuous Cartesian curves, i.e. of graphs cu of continuous functions u on an interval, we show that the relaxed energy is finite if and only if the curve cu has bounded variation and finite total curvature. In this case, moreover, the total curvature does not depend on the Cantor part of the derivative of u.
We treat the wider class of graphs of one-dimensional functions of bounded variation, and we prove that the relaxed energy is given by the sum of the length and total curvature of the new curve obtained by closing the holes in cu generated by jumps of u with vertical segments. The computational work and storage of numerically solving the time fractional PDEs are generally huge for the traditional direct methods, since they require total memory and work, where $N_T$ and $N_S$ represent the total number of time steps and grid points in space, respectively. To overcome this difficulty, we present an efficient algorithm for the evaluation of the Caputo fractional derivative of order $\alpha\in(0,1)$. The algorithm is based on an efficient sum-of-exponentials (SOE) approximation for the kernel $t^{-1-\alpha}$ on the interval $[\Delta t, T]$ with a uniform absolute error $\varepsilon$. We give the theoretical analysis to show that the number of exponentials $N_{\exp}$ needed is of order for $T\gg 1$ or for $T\approx 1$ for fixed accuracy $\varepsilon$. The resulting algorithm requires only storage and work when numerically solving the time fractional PDEs. Furthermore, we also give the stability and error analysis of the new scheme, and present several numerical examples to demonstrate the performance of our scheme. In this paper, we first apply cosine radial basis function neural networks to solve fractional differential equations with initial value problems or boundary value problems. In the examples, we successfully obtained the numerical solutions for the fractional Riccati equations and fractional Langevin equations. The computer graphics and numerical solutions show that this method is very effective. The second order weighted and shifted Grünwald difference (WSGD) operators are developed in [Tian, Zhou and Deng, Math. Comput., 84 (2015), pp. 1703–1727] to solve space fractional partial differential equations.
Along this direction, we further design a new family of second order WSGD operators; by properly choosing the weighted parameters, they can be effectively used to discretize space (Riemann-Liouville) fractional derivatives. Based on the new second order WSGD operators, we derive a family of difference schemes for the space fractional advection diffusion equation. By von Neumann stability analysis, it is proved that the obtained schemes are unconditionally stable. Finally, extensive numerical experiments are performed to demonstrate the performance of the schemes and confirm the convergence orders. It is well known that the standard Lipschitz space in Euclidean space, with exponent α ∈ (0, 1), can be characterized by means of the inequality , where is the Poisson integral of the function f. There are two cases: one can either assume that the functions in the space are bounded, or one can not make such an assumption. In the setting of the Ornstein–Uhlenbeck semigroup in ℝn, Gatto and Urbina defined a Lipschitz space by means of a similar inequality for the Ornstein–Uhlenbeck Poisson integral, considering bounded functions. In a preceding paper, the authors characterized that space by means of a Lipschitz-type continuity condition. The present paper defines a Lipschitz space in the same setting in a similar way, but now without the boundedness condition. Our main result says that this space can also be described by a continuity condition. The functions in this space turn out to have at most logarithmic growth at infinity.
An elementary study of Quantum Mechanics, following $[1]$, leads to the realization that the basic algebraic structures are the complex vector spaces $\mathbb{C}^{n}$. A contravariant vector (using the terminology of Differential Geometry/General Relativity) is then simply defined as an element of $\mathbb{C}^{n}$: $$\mid x \rangle \in \mathbb{C}^{n} \tag{1}$$ In $(1)$, these vectors are called Kets. Furthermore, within the vector space $\mathcal{L}(V,U)$ there's a particular case, the dual vector space $\mathcal{L}(V,\mathbb{K}) \equiv V^{*}$, which is the vector space of covariant vectors (linear functionals). So, for quantum mechanics, you can define: $$\begin{array}{rl} \langle y \mid:\mathbb{C}^{n} &\to \mathbb{C} \\ \mid x \rangle &\mapsto \langle y \mid (\mid x \rangle) \tag{2} \end{array}$$ In $(2)$, these vectors are called Bras. And then, roughly speaking, by the Riesz Representation Theorem $[2]$, we can say that the inner product $\langle y \mid x \rangle$ is well defined as: $$\langle y \mid x \rangle =: \sum^{n}_{i=1} y^{*}_{i}x_{i} \tag{3}$$ Concerning $(3)$, can we generalize to something like $\langle y \mid x \rangle =: \sum^{n}_{i,j=1} g_{ij}y^{*i}x^{j}$, where $g_{ij}$ is the metric tensor? $$ * * * $$ $[1]$ NAKAHARA.M.; Quantum Computing: From Linear Algebra to Physical Realizations, CRC Press, 2008.
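A quick numerical illustration of $(3)$: in components, the bra acts by conjugating its own entries and summing against the ket's. The vectors below are arbitrary examples:

```python
import numpy as np

# <y|x> = sum_i conj(y_i) * x_i, as in (3); the vectors are arbitrary
x = np.array([1 + 2j, 0.5j, -1.0])
y = np.array([2.0, 1 - 1j, 3j])

braket = np.vdot(y, x)  # np.vdot conjugates its first argument
by_hand = sum(np.conj(yi) * xi for yi, xi in zip(y, x))
print(np.isclose(braket, by_hand))
```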
Let $a_n = n^{1/n}$ and $b_n = \dfrac{a_{n+1}}{a_n}$. I am trying to prove that $\{b_n\}_{n=5}^{\infty}$ is increasing. Here is what I know and what I've tried: The first few elements of $b_n$ aren't increasing (because $a_n$ is decreasing only for $n \geq 3$, for instance), so it wouldn't be correct to prove that $b_n$ is increasing for all $n$. Then letting $S(n)$ be the statement $$b_{n} = \frac{(n+1)^{1/(n+1)}}{n^{1/n}} \leq \frac{(n+2)^{1/(n+2)}}{(n+1)^{1/(n+1)}} = b_{n+1}$$ we can assume $S(n-1)$ and prove $S(n)$ (and $S(5)$). I tried to do this by breaking apart the exponents and manipulating the inequalities but I couldn't get anywhere. The other thing I tried was to prove that $$a_n^{2} \leq a_{n-1}a_{n+1}$$ using a convexity argument, particularly if $f(n) = n^{1/n}$ that $$a_n = f(n) = f\left(\frac{(n-1) + (n+1)}{2}\right) \leq \frac{f(n-1) + f(n+1)}{2} = \frac{a_{n-1} + a_{n+1}}{2}$$ I figured the AM-GM inequality might be useful because of the RHS, but it implies that $$a_{n-1}a_{n+1} \leq \left(\frac{a_{n-1} + a_{n+1}}{2}\right)^2$$ which doesn't seem to help bound $a_n^2$ above. This question is identical to mine but I'm looking for an approach that doesn't make use of logarithms or differentiation, since the book I'm working through hasn't covered those yet. If I could have a small hint to point me in the right direction I would appreciate it!
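Not a substitute for the induction, but the claim can at least be sanity-checked numerically over a range of $n$ (the cutoff 500 is an arbitrary choice):

```python
# Check numerically that b_n = a_{n+1}/a_n is increasing for n >= 5,
# where a_n = n^(1/n); a sanity check, not a proof
def a(n):
    return n ** (1.0 / n)

b = [a(n + 1) / a(n) for n in range(5, 500)]
print(all(b[i] < b[i + 1] for i in range(len(b) - 1)))
```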
Let $f\colon X \to Y$ and $g\colon Y \to X$ be functions. Assume $g \circ f$ is bijective. Prove $f$ is injective and $g$ is surjective. Approach: if $g \circ f$ is bijective then $g \circ f$ is one-to-one; if $g \circ f$ is bijective then $g \circ f$ is onto. So we know $$g \circ f (a_1)=g \circ f (a_2) \text{ implies } a_1=a_2$$ $$\forall b\in X, \exists a\in X \text{ such that } g \circ f(a)=b$$ so we have $$g(f(a_1))=g(f(a_2))$$ $$g(f(a))=b$$ From here, I am stuck. What's the next thing to consider?
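A tiny finite instance of the statement, which may help build intuition (the sets and maps below are arbitrary illustrations, with $g \circ f$ the identity on $X$):

```python
# X = {0, 1}, Y = {'a', 'b', 'c'}; f and g given as dicts
f = {0: 'a', 1: 'b'}
g = {'a': 0, 'b': 1, 'c': 0}

gof = {x: g[f[x]] for x in f}
print(gof == {0: 0, 1: 1})                 # g∘f is bijective on X

# f is injective: distinct inputs give distinct outputs
print(len(set(f.values())) == len(f))
# g is surjective onto X: every element of X is hit
print(set(g.values()) == set(f.keys()))
```

Note that in this example $f$ is not surjective (nothing maps to 'c') and $g$ is not injective, which matches the fact that only injectivity of $f$ and surjectivity of $g$ follow from the hypothesis.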
The number of variables is the number of elements you need to change to change the value of your function. In your case, knowing that $x,y,z$ are functions of $t$, you are implicitly saying that $\mathbf F$ can be viewed as a function of $t$ only too. In symbols: you have a function of four variables$$\mathbf{F}:\mathbf{R}^4\longrightarrow \mathbf{R}^4$$when the variables are $x,y,z,t$. But you also have the functions $x,y,z:\mathbf{R}\longrightarrow \mathbf{R}$ defined by $t\mapsto x(t),y(t),z(t)$ respectively, so you have the function$$\mathbf{R}\longrightarrow \mathbf{R}^4$$ defined by $t\mapsto (x(t),y(t),z(t),t)$. The composite $$\mathbf{R}\longrightarrow \mathbf{R}^4\overset{\mathbf{F}}{\longrightarrow} \mathbf {R}^4$$is defined by $$t\mapsto (x(t),y(t),z(t),t)\mapsto \mathbf{F}(x(t),y(t),z(t),t)$$ and realises $\mathbf{F}$ as a function of one single variable. Edit: about derivatives With the above notations and assuming that $\mathbf{F}$ (that I will now call $F$) has the minimum required regularity (for instance, it is differentiable partially with respect to all the variables) you can take the partial derivatives:$$\frac{\partial F}{\partial x}, \frac{\partial F}{\partial y}, \frac{ \partial F}{\partial z}, \frac{\partial F}{\partial t}$$Moreover, if the functions $t\mapsto x(t),y(t),z(t)$ are also differentiable with respect to $t$, you can define the derivatives $ \frac{dx}{d t}, \frac{d y}{d t}, \frac{dz}{dt}$; formally speaking, you are doing the same also for the last variable $t$, considering the identity function $t\mapsto t$ whose derivative is $1$.
When you want to consider the total derivative of $F$ with respect to $t$, you are just differentiating the composite function $t\mapsto (x(t),y(t),z(t),t)\mapsto F(x(t),y(t),z(t),t)$ and this, recalling the derivative rule for composite functions, amounts to:$$ \frac{d F}{d t} = \frac{\partial F}{\partial x} \frac{dx}{dt}+ \frac{\partial F}{\partial y} \frac{d y}{dt}+ \frac{\partial F}{\partial z} \frac{d z}{d t} + \frac{\partial F}{\partial t}\cdot 1$$ In particular, note that $\partial F/\partial t\neq dF/dt$.
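The chain rule above can be verified numerically; the concrete choices $x(t)=t^2$, $y(t)=\sin t$, $z(t)=e^t$ and $F(x,y,z,t)=xy+zt$ are arbitrary illustrations:

```python
import math

# F(x, y, z, t) = x*y + z*t, an arbitrary scalar field for illustration
def F(x, y, z, t):
    return x * y + z * t

def composite(t):
    # F evaluated along the path t -> (t^2, sin t, e^t, t)
    return F(t**2, math.sin(t), math.exp(t), t)

def total_derivative(t):
    x, y, z = t**2, math.sin(t), math.exp(t)
    dx, dy, dz = 2 * t, math.cos(t), math.exp(t)
    # Partials of F: Fx = y, Fy = x, Fz = t, Ft = z
    return y * dx + x * dy + t * dz + z

# Compare with a central finite difference of the composite
t0, h = 1.3, 1e-6
numeric = (composite(t0 + h) - composite(t0 - h)) / (2 * h)
print(abs(total_derivative(t0) - numeric) < 1e-5)
```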
Markov Chain Monte Carlo Background What if we know the relative likelihood, but want the probability distribution? But what if \( \int f(x)dx \) is hard, or you can’t sample from \( f \) directly? This is the problem we will be trying to solve. First approach If the space is bounded (the integral runs over \( (a,b) \)) we can use Monte Carlo to estimate \( \int\limits_a^b f(x)dx \): Pick \( \alpha \in (a,b) \) uniformly at random Compute \( f(\alpha)(b-a) \) Repeat as necessary Compute the expected value of the computed values What if it’s a big ol’ bag of nope? What if we can’t sample from \( f(x) \) but can only determine likelihood ratios? Enter Markov Chain Monte Carlo The obligatory basics A Markov Chain is a stochastic process (a collection of indexed random variables) such that \( \mathbb{P}(X_n=x|X_0,X_1,…,X_{n-1}) = \mathbb{P}(X_n=x|X_{n-1}) \). In other words, the conditional probabilities only depend on the last state, not on any deeper history. We call the set of all possible values of \( X_i \) the state space and denote it by \( \chi \). Transition Matrix Let \( p_{ij} = \mathbb{P}(X_1 = j | X_0 = i) \). We call the matrix \( (p_{ij}) \) the Transition Matrix of \( X \) and denote it \( P \). Let \( \mu_n = \left(\mathbb{P}(X_n=0), \mathbb{P}(X_n=1),…, \mathbb{P}(X_n=l)\right) \) be the row vector of the probabilities of being at each state at the \( n \)th point in time (iteration). Claim: \( \mu_{i+1} = \mu_0P^{i+1} \) Proof: if \( i = 0 \), conditioning on the initial state gives \( \mu_1 = \mu_0P \). If \( i > 0 \), then \( \mu_{i+1} = \mu_iP = (\mu_0P^i)P = \mu_0P^{i+1} \). Stationary Distributions We say that a distribution \( \pi \) is stationary if \( \pi P = \pi \). Main Theorem: An irreducible, ergodic Markov Chain \( {X_n} \) has a unique stationary distribution, \( \pi \). The limiting distribution exists and is equal to \( \pi \).
And furthermore, if \( g \) is any bounded function, then with probability 1 the averages of \( g \) along the chain converge to the expectation of \( g \) under \( \pi \). This really just means that if our Markov Chain has certain properties (basically just that any state can be gotten to from any other state at any time), then we can sample from this Markov Chain, and it’ll have the same distribution as a generic sample from our desired distribution. Random-Walk-Metropolis-Hastings: Let \( f(x) \) be the relative likelihood function of our desired distribution, and \( q(y|x_i) \) a known distribution that is easily sampled from (generally taken to be \( N(x_i,b^2) \)). 1) Given \( X_0,X_1,…,X_i \), pick \( Y \sim q(y|X_i) \) 2) Compute \( r(X_i,Y) = \min\left(\frac{f(Y)q(X_i|Y)}{f(X_i)q(Y|X_i)}, 1\right) \) 3) Pick \( a \sim U(0,1) \) 4) Set \( X_{i+1} = Y \) if \( a < r \), and \( X_{i+1} = X_i \) otherwise. The Confidence Builder: We would like to sample from and obtain a histogram of the Cauchy distribution. Note: Since \( q \) is symmetric, \( q(x|y) = q(y|x) \), so the ratio reduces to \( \min(f(Y)/f(X_i), 1) \). Pseudocode:

MC = []
X_i = initial_state
for i in range(int(num_iters)):
    Y = q(X_i)
    r = min(f(Y) / f(X_i), 1)
    a = np.random.uniform()
    if a < r:
        X_i = Y
    MC.append(X_i)

Estimation: You can clearly see that different values of \( b \) lead to wildly different distributions, and therefore wildly different approximations of the end distribution. For example, if the standard deviation on our sampling distribution is very small, our prospective transitions won’t venture out very far, and hence the tails of our distribution will be woefully underrepresented. If, on the other hand, the standard deviation is too large, we will often venture out much further than we should, and we’ll end up with something like a multi-modal distribution. Basically, the prospective transition states will often be too far out to accept, in which case the current state persists, or they will be just close enough to be accepted. What you end up with is far too many samples from the tails and not enough samples from the beefier parts of the distribution.
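The pseudocode above can be made runnable with the proposal scale \( b \) exposed as a parameter; the target here is the unnormalized Cauchy density, and the values of \( b \), the iteration count, and the seed are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(0)

def f(x):
    # Unnormalized standard Cauchy density: 1 / (1 + x^2)
    return 1.0 / (1.0 + x * x)

def metropolis(num_iters=50_000, b=2.5, x0=0.0):
    """Random-walk Metropolis with a N(x_i, b^2) proposal (symmetric q)."""
    chain = np.empty(num_iters)
    x = x0
    for i in range(num_iters):
        y = x + b * rng.standard_normal()   # Y ~ q(y | x)
        if rng.uniform() < min(f(y) / f(x), 1.0):
            x = y
        chain[i] = x
    return chain

samples = metropolis()
# The sample median of a Cauchy centered at 0 should be near 0
# (the mean is useless here, since the Cauchy mean does not exist)
print(abs(np.median(samples)) < 0.3)
```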
Choosing the “best” value of \( b \) is more of an art than a science, really. This issue will poke its ugly head a little later. But even with that issue, we see clearly that something amazing is happening here. How is this happening? A property called detailed balance, which in the discrete case means \( \pi_i p_{ij} = \pi_j p_{ji} \), or in the continuous case \( f(x)P_{xy} = P_{yx}f(y) \). The intuition Since what we’re really after is the distribution of the samples and not the order in which they were picked, it’s reasonable to require a notion of (what I’m going to call) equal hopportunity: the probability of “hopping” from \( x \) to \( y \) should be the same as the reverse, because at the end of the day, both of these two scenarios only mean that \( x \) and \( y \) are in our sample. If you work out the math (as we will do below), you find that the acceptance ratio guarantees equal hopportunity. Let’s prove it! To reiterate, the claim is that the detailed balance property ( \( f(x)P_{xy} = P_{yx}f(y) \) ) guarantees that our likelihood function \( f \) is the stable distribution for our Markov Chain. Let \( f \) be the desired distribution (in our example, it was the Cauchy distribution), and let \( q(y|x) \) be the distribution we draw from. First we’ll show that detailed balance implies that \( \pi \) (or \( f \) if the distribution is continuous) is the stable distribution! Assume \( \pi_ip_{ij} = \pi_jp_{ji} \) in the discrete case, or \( f(i)P_{ij}=P_{ji}f(j) \) in the continuous case. Now we’ll show that the Markov Chain defined by the Metropolis-Hastings algorithm has the detailed balance property (and hence \( f \) is the stable distribution). Without any loss of generality, we’ll assume \( f(x)q(y|x) > f(y)q(x|y) \) (if this is not the case, the proof follows with just symbolic changes). Note: Since \( f(x)q(y|x) > f(y)q(x|y) \), we know \( r(x,y) = \frac{f(y)q(x|y)}{f(x)q(y|x)} \) and \( r(y,x) = 1 \). Modeling Change Point Models in Astrostatistics.
This was taken from a Penn State statistics summer program website located at stat.psu.edu. A detailed walk-through of the process PSU took with this project can be found at that site. The data is a time series of light emission recorded and aggregated over periods of 10000 seconds (just under 3 hours). A plot of the time series can be seen below for your convenience. The idea (with this data) is that the emissions can be modeled by two Poisson distributions and a change point. The data is modeled by the first Poisson until the change point, and then it switches to the second Poisson. So we start out with a multi-parameter model, and through some Bayesian inference you arrive at a pretty nasty looking posterior distribution. You’d like to sample this distribution to obtain, say, the mean \( k \) (change point) value along with a 95% confidence interval. Since you need that confidence interval, standard optimization methods won’t suffice: we’d need the distribution of \( k \)s. For this, we can use MCMC! The posterior distribution we would like to sample from is pretty gnarly, and since I don’t really know how to sample from it (beyond a blind search), we would like to use MCMC. Metropolis-Hastings (at least how we’ve discussed it) clearly doesn’t work here because this is multidimensional. For that we introduce another MCMC algorithm: Gibbs Sampling. Gibbs Sampling The basic pseudocode is this:

X = initial_X
Y = initial_Y
...
Z = initial_Z
MC = [(X, Y, ..., Z)]
for _ in range(num_iters):
    // update each variable in turn, conditioning on the current values of the others
    X ~ q_X(x | X, Y, ..., Z)
    // We do this for X, Y, ..., Z in turn
    MC.append((X, Y, ..., Z))

We can combine Gibbs with Metropolis quite easily:

X = initial_X
Y = initial_Y
...
Z = initial_Z
MC = [(X, Y, ..., Z)]
for _ in range(num_iters):
    // for each variable we compute
    prospect ~ q_X(x | X, Y, ..., Z)
    r = min(f(prospect) / f(X), 1)
    a ~ uniform(0, 1)
    if a < r:
        X = prospect
    // We do this for X, Y, ..., Z in turn
    MC.append((X, Y, ..., Z))

So, basically, you update one variable at a time.
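As a self-contained stand-in for the change-point posterior (which isn't reproduced here), here is a Gibbs sampler for a bivariate normal with correlation ρ, where each full conditional is itself a normal; ρ = 0.8, the iteration count, and the seed are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(1)

# Gibbs sampling for a standard bivariate normal with correlation rho:
# each full conditional is X|Y=y ~ N(rho*y, 1 - rho^2), and symmetrically
rho = 0.8
num_iters = 20_000
x = y = 0.0
chain = np.empty((num_iters, 2))
for i in range(num_iters):
    x = rng.normal(rho * y, np.sqrt(1 - rho**2))
    y = rng.normal(rho * x, np.sqrt(1 - rho**2))
    chain[i] = (x, y)

# The empirical correlation of the chain should be close to rho
emp_rho = np.corrcoef(chain[:, 0], chain[:, 1])[0, 1]
print(abs(emp_rho - rho) < 0.1)
```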
Using this algorithm, we can analyze the data from the PSU website we mentioned above. In fact, we did that very thing. Below you’ll find some graphs depicting the distribution of \( k \) values.

Summary

Random-Walk Metropolis-Hastings
- Can be used to sample difficult univariate distributions relatively easily
- Have to tune the sampling parameter
- Curse of dimensionality in tuning parameters
- Requires \( f \) to be defined on all of \( \mathbb{R} \) (transform as needed)

Gibbs Sampling
- Turns high-dimensional sampling into iterative one-dimensional sampling

Gibbs with Metropolis-Hastings
- Lovely

Bibliography

Summer School in Astrostatistics
Let $X$, respectively $Y$, be a space endowed with the $\sigma$-additive complete measure $\mu_x$, respectively $\mu_y$. If $f: X\times Y\to\mathbb{R}$ is $\mu_x\otimes\mu_y$-measurable$^1$ and Lebesgue summable in one variable, is its integral measurable in the other variable? I mean: is, for example, $$x\mapsto\int_{Y}f(x,y)d\mu_y$$ always a measurable function? If it is, how is it proved? I heartily thank any answerer. If I correctly understand this argument, for which I thank PhoemueX, once the result is proved for $f$ a non-negative function, the more general case that my question covers would follow. $^1$ Measurable according to the Lebesgue extension $\mu_x\otimes\mu_y$ (a complete measure) of the measure $\mu_x\times\mu_y$ defined by $(\mu_x\times\mu_y)(A\times B)=\mu_x(A)\mu_y(B)$ for the measurable sets $A\subset X$ and $B\subset Y$.
Hello guys! I was wondering if you knew some books/articles that have a good introduction to convexity in the context of variational calculus (functional analysis). I was reading Young's "calculus of variations and optimal control theory" but I'm not that far into the book and I don't know if skipping chapters is a good idea. I don't know of a good reference, but I'm pretty sure that just means that second derivatives have consistent signs over the region of interest. (That is certainly a sufficient condition for Legendre transforms.) @dm__ yes have studied bells thm at length ~2 decades now. it might seem airtight and has stood the test of time over ½ century, but yet there is some fineprint/ loopholes that even phd physicists/ experts/ specialists are not all aware of. those who fervently believe like Bohm that no new physics will ever supercede QM are likely to be disappointed/ dashed, now or later... oops lol typo bohm bohr btw what is not widely appreciated either is that nonlocality can be an emergent property of a fairly simple classical system, it seems almost nobody has expanded this at length/ pushed it to its deepest extent. hint: harmonic oscillators + wave medium + coupling etc But I have seen that the convexity is associated to minimizers/maximizers of the functional, whereas the sign second variation is not a sufficient condition for that. That kind of makes me think that those concepts are not equivalent in the case of functionals... @dm__ generally think sampling "bias" is not completely ruled out by existing experiments. some of this goes back to CHSH 1969. there is unquestioned reliance on this papers formulation by most subsequent experiments. am not saying its wrong, think only that theres very subtle loophole(s) in it that havent yet been widely discovered. there are many other refs to look into for someone extremely motivated/ ambitious (such individuals are rare). 
en.wikipedia.org/wiki/CHSH_inequality @dm__ it stands as a math proof ("based on certain assumptions"), have no objections. but its a thm aimed at physical reality. the translation into experiment requires extraordinary finesse, and the complex analysis starts with CHSH 1969. etc While it's not something usual, I've noticed that sometimes people edit my question or answer with a more complex notation or incorrect information/formulas. While I don't think this is done with malicious intent, it has sometimes confused people when I'm either asking or explaining something, as... @vzn what do you make of the most recent (2015) experiments? "In 2015 the first three significant-loophole-free Bell-tests were published within three months by independent groups in Delft, Vienna and Boulder. All three tests simultaneously addressed the detection loophole, the locality loophole, and the memory loophole. This makes them “loophole-free” in the sense that all remaining conceivable loopholes like superdeterminism require truly exotic hypotheses that might never get closed experimentally." @dm__ yes blogged on those. they are more airtight than previous experiments. but still seem based on CHSH. urge you to think deeply about CHSH in a way that physicists are not paying attention. ah, voila even wikipedia spells it out! amazing > The CHSH paper lists many preconditions (or "reasonable and/or presumable assumptions") to derive the simplified theorem and formula. For example, for the method to be valid, it has to be assumed that the detected pairs are a fair sample of those emitted. In actual experiments, detectors are never 100% efficient, so that only a sample of the emitted pairs are detected. > A subtle, related requirement is that the hidden variables do not influence or determine detection probability in a way that would lead to different samples at each arm of the experiment. ↑ suspect entire general LHV theory of QM lurks in these loophole(s)! 
there has been very little attn focused in this area... :o how about this for a radical idea? the hidden variables determine the probability of detection...! :o o_O @vzn honest question, would there ever be an experiment that would fundamentally rule out nonlocality to you? and if so, what would that be? what would fundamentally show, in your opinion, that the universe is inherently local? @dm__ my feeling is that something more can be milked out of bell experiments that has not been revealed so far. suppose that one could experimentally control the degree of violation, wouldnt that be extraordinary? and theoretically problematic? my feeling/ suspicion is that must be the case. it seems to relate to detector efficiency maybe. but anyway, do believe that nonlocality can be found in classical systems as an emergent property as stated... if we go into detector efficiency, there is no end to that hole. and my beliefs have no weight. my suspicion is screaming absolutely not, as the classical is emergent from the quantum, not the other way around @vzn have remained civil, but you are being quite immature and condescending. I'd urge you to put aside the human perspective and not insist that physical reality align with what you expect it to be. all the best @dm__ ?!? no condescension intended...? am striving to be accurate with my words... you say your "beliefs have no weight," but your beliefs are essentially perfectly aligned with the establishment view... Last night dream, introduced a strange reference frame based disease called Forced motion blindness. It is a strange eye disease where the lens is such that to the patient, anything stationary wrt the floor is moving forward in a certain direction, causing them have to keep walking to catch up with them. At the same time, the normal person think they are stationary wrt to floor. The result of this discrepancy is the patient kept bumping to the normal person. 
In order to not bump, the person has to walk at the apparent velocity as seen by the patient. The only known way to cure it is to remo… And to make things even more confusing: Such disease is never possible in real life, for it involves two incompatible realities to coexist and coinfluence in a pluralistic fashion. In particular, as seen by those not having the disease, the patient kept ran into the back of the normal person, but to the patient, he never ran into him and is walking normally It seems my mind has gone f88888 up enough to envision two realities that with fundamentally incompatible observations, influencing each other in a consistent fashion It seems my mind is getting more and more comfortable with dialetheia now @vzn There's blatant nonlocality in Newtonian mechanics: gravity acts instantaneously. Eg, the force vector attracting the Earth to the Sun points to where the Sun is now, not where it was 500 seconds ago. @Blue ASCII is a 7 bit encoding, so it can encode a maximum of 128 characters, but 32 of those codes are control codes, like line feed, carriage return, tab, etc. OTOH, there are various 8 bit encodings known as "extended ASCII", that have more characters. There are quite a few 8 bit encodings that are supersets of ASCII, so I'm wary of any encoding touted to be "the" extended ASCII. If we have a system and we know all the degrees of freedom, we can find the Lagrangian of the dynamical system. What happens if we apply some non-conservative forces in the system? I mean how to deal with the Lagrangian, if we get any external non-conservative forces perturbs the system?Exampl... @Blue I think now I probably know what you mean. Encoding is the way to store information in digital form; I think I have heard the professor talking about that in my undergraduate computer course, but I thought that is not very important in actually using a computer, so I didn't study that much. 
What I meant by use above is what you need to know to be able to use a computer, like you need to know LaTeX commands to type them. @AvnishKabaj I have never had any of these symptoms after studying too much. When I have intensive studies, like preparing for an exam, after the exam, I feel a great wish to relax and don't want to study at all and just want to go somehwere to play crazily. @bolbteppa the (quanta) article summary is nearly popsci writing by a nonexpert. specialists will understand the link to LHV theory re quoted section. havent read the scientific articles yet but think its likely they have further ref. @PM2Ring yes so called "instantaneous action/ force at a distance" pondered as highly questionable bordering on suspicious by deep thinkers at the time. newtonian mechanics was/ is not entirely wrong. btw re gravity there are a lot of new ideas circulating wrt emergent theories that also seem to tie into GR + QM unification. @Slereah No idea. I've never done Lagrangian mechanics for a living. When I've seen it used to describe nonconservative dynamics I have indeed generally thought that it looked pretty silly, but I can see how it could be useful. I don't know enough about the possible alternatives to tell whether there are "good" ways to do it. And I'm not sure there's a reasonable definition of "non-stupid way" out there. ← lol went to metaphysical fair sat, spent $20 for palm reading, enthusiastic response on my leadership + teaching + public speaking abilities, brought small tear to my eye... or maybe was just fighting infection o_O :P How can I move a chat back to comments?In complying to the automated admonition to move comments to chat, I discovered that MathJax is was no longer rendered. This is unacceptable in this particular discussion. I therefore need to undo my action and move the chat back to comments. hmmm... 
actually the reduced mass comes out of using the transformation to the center of mass and relative coordinates, which have nothing to do with Lagrangian... but I'll try to find a Newtonian reference. One example is a spring of initial length $r_0$ with two masses $m_1$ and $m_2$ on the ends such that $r = r_2 - r_1$ is it's length at a given time $t$ - the force laws for the two ends are $m_1 \ddot{r}_1 = k (r - r_0)$ and $m_2 \ddot{r}_2 = - k (r - r_0)$ but since $r = r_2 - r_1$ it's more natural to subtract one from the other to get $\ddot{r} = - k (\frac{1}{m_1} + \frac{1}{m_2})(r - r_0)$ which makes it natural to define $\frac{1}{\mu} = \frac{1}{m_1} + \frac{1}{m_2}$ as a mass since $\mu$ has the dimensions of mass and since then $\mu \ddot{r} = - k (r - r_0)$ is just like $F = ma$ for a single variable $r$ i.e. an spring with just one mass @vzn It will be interesting if a de-scarring followed by a re scarring can be done in some way in a small region. Imagine being able to shift the wavefunction of a lab setup from one state to another thus undo the measurement, it could potentially give interesting results. Perhaps, more radically, the shifting between quantum universes may then become possible You can still use Fermi to compute transition probabilities for the perturbation (if you can actually solve for the eigenstates of the interacting system, which I don't know if you can), but there's no simple human-readable interpretation of these states anymore @Secret when you say that, it reminds me of the no cloning thm, which have always been somewhat dubious/ suspicious of. it seems like theyve already experimentally disproved the no cloning thm in some sense.
Please help with a question that I am working on just now...:) If $z=2e^{i\theta}$ where $0<\theta<\pi$, how can I find the real and imaginary parts of $w=(z-2)/(z+2)$? Hence, how can I calculate $w$ and deduce that this complex number always lies on the imaginary axis in the Argand plane? The following information can be used: $\cos(2\theta)=2\cos^2(\theta)-1$ and $\sin(2\theta)=2\cos(\theta)\sin(\theta)$ Thanks for all your help!!
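For what it's worth, one route (my working, not part of the original question) is to factor $e^{i\theta/2}$ out of the numerator and denominator:

```latex
w=\frac{2e^{i\theta}-2}{2e^{i\theta}+2}
 =\frac{e^{i\theta}-1}{e^{i\theta}+1}
 =\frac{e^{i\theta/2}-e^{-i\theta/2}}{e^{i\theta/2}+e^{-i\theta/2}}
 =\frac{2i\sin(\theta/2)}{2\cos(\theta/2)}
 =i\tan\frac{\theta}{2}
```

so $\operatorname{Re}(w)=0$ and $\operatorname{Im}(w)=\tan(\theta/2)$; the given double-angle identities give the same result when applied at half the angle.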
This question already has an answer here: The two circles are: 1) $$(x-2)^2 + (y+1)^2 = 25$$ 2) $$(y-2)^2 + (x+1)^2 = 25$$ $(1)$ and $(2)$ are symmetric about the line $y=x$. The two points you are looking for are the intersection between $(1)$ and $y=x$. Therefore, the intersections are $(\frac{1 \pm \sqrt{41}}{2},\frac{1 \pm \sqrt{41}}{2})$.
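As a quick numerical sanity check (mine, not part of the answer), the claimed points can be substituted back into both circle equations:

```python
import math

# Verify that the claimed intersection points lie on both circles.
def on_circle1(x, y):
    return math.isclose((x - 2) ** 2 + (y + 1) ** 2, 25)

def on_circle2(x, y):
    return math.isclose((y - 2) ** 2 + (x + 1) ** 2, 25)

roots = [(1 + math.sqrt(41)) / 2, (1 - math.sqrt(41)) / 2]
points = [(t, t) for t in roots]  # candidate points on the line y = x
ok = all(on_circle1(x, y) and on_circle2(x, y) for x, y in points)
```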
So far, I have encountered two main definitions for the Lebesgue integral of a non-negative measurable function $f$. 1) $\int_A f d \mu = \sup _{h \leq f, \space h \space simple} \{ \int_A h d \mu \}$ 2) $\int_A f d \mu = \sup \{ \inf _{x_i \in E_i} \sum_{i=1}^N f(x_i) \mu(E_i) \} = \sup \{ \sum_{i=1}^N (\inf _{x_i \in E_i} f(x_i)) \mu(E_i) \}$ where in 2), the $E_i$'s form a partition of A and the sup is taken over all such partitions. I want to prove that they are equivalent. Here is what I did. 1) $\leq$ 2) : take $h$ a simple function such that $h \leq f$. Write $h$ in its canonical form : $h = \sum_i c_i \mathcal{1}_{A_i}$. Since $h \leq f$, we have $c_i \leq f(x_i)$ for all $x_i \in A_i$. Using the definition of the Lebesgue integral for simple functions, we then have $\int_A h d \mu = \sum_i c_i \mu (A_i) \leq \sum_{i=1}^N (\inf _{x_i \in A_i} f(x_i)) \mu(A_i) \leq \sup \{ \sum_{i=1}^N (\inf _{x_i \in E_i} f(x_i)) \mu(E_i) \}$, from which we have $\sup _{h \leq f, \space h \space simple} \{ \int_A h d \mu \} \leq \sup \{ \sum_{i=1}^N (\inf _{x_i \in E_i} f(x_i)) \mu(E_i) \}$. 2) $\leq$ 1) : let $E_i$'s be a partition of A, and $\alpha = \sum_i \inf_{x_i \in E_i} \{f(x_i) \} \mu (E_i)$. Fix $\epsilon > 0$. We can choose $(x_i^*)_i$ such that $\inf f(x_i) \leq f(x_i^*) \leq \inf f(x_i) + \epsilon$. We have $\alpha \leq \sum _i f(x_i^*) \mu (E_i)$. We set $h^* = \sum _i (f(x_i^*) - \epsilon) \mathcal{1}_{E_i}$. Then $h^* = \sum _i (f(x_i^*) - \epsilon) \mathcal{1}_{E_i} \leq \sum _i (\inf_{x_i \in E_i} f(x_i)) \mathcal{1}_{E_i} \leq f$. We also have $\int_A h^* d \mu = \sum_i (f(x_i^*) - \epsilon) \mu (E_i) = \sum_i f(x_i^*) \mu (E_i) - \epsilon \mu (A)$. So $\alpha \leq \int_A h^* d \mu + \epsilon \mu(A) \leq \sup _{h \leq f, \space h \space simple} \{ \int_A h d \mu \} + \epsilon \mu (A)$.
Take $\epsilon \to 0$ so that $\alpha \leq \sup _{h \leq f, \space h \space simple} \{ \int_A h d \mu \}$ and finally $\sup \{ \sum_{i=1}^N (\inf _{x_i \in E_i} f(x_i)) \mu(E_i) \} \leq \sup _{h \leq f, \space h \space simple} \{ \int_A h d \mu \}$. Is it correct? Can we generalize the second part to the case $\mu(A) = \infty$? If this is correct, in which cases is definition 2) preferably used? Thanks.
Let $u_\epsilon \in W^{1,p}(\Omega)$ be a sequence with $\Omega \subset \mathbb{R}^n$ bounded. Let $u_\epsilon \rightharpoonup^* u$ weakly* in $L^\infty (\Omega)$ (for a subsequence) with $u \in W^{1,p}(\Omega)$ and assume $\sup_\epsilon \vert \nabla u_\epsilon\vert_{L^p (\Omega; \mathbb{R}^n)} < C$. Prove that $\nabla u_\epsilon \rightharpoonup \nabla u$ weakly in $L^p (\Omega; \mathbb{R}^n)$ (for a subsequence). My idea so far: $\nabla u_\epsilon$ is bounded, therefore there exists a weak limit (for a subsequence). $u_\epsilon \rightharpoonup^* u$ weakly* in $L^\infty (\Omega)$, therefore $u_\epsilon \rightharpoonup u$ weakly in $L^p (\Omega)$, which means that $~\int_\Omega \phi \partial_x u_\epsilon \to \int_\Omega \phi \partial_x u ~$ for all $~\phi \in W^{1,p^*}(\Omega)$. If I am correct, the weak convergence in $L^p (\Omega; \mathbb{R}^n)~$ is $~\int_\Omega \Phi \cdot \nabla u_\epsilon \to \int_\Omega \Phi \nabla u~$ for all $~\Phi \in L^{p^*} (\Omega; \mathbb{R}^n)$, but I can't seem to get that from the lines above. Any help is appreciated.
I have been stuck on a chemistry problem for a long time now and if anyone here can help me I would be eternally grateful. 40.Du tillsätter $\pu{100 ml}$ $\pu{0,01 M}$ $\ce{Na2SO4}$-lösning i $\pu{100 ml}$ $\pu{0,02 M}$ $\ce{CaCl2}$-lösning. Det bildas en $\ce{CaSO4}$-fällning. Hur mycket av $\pu{0,01 M}$ $\ce{Na2SO4}$-lösning måste ännu tillsättas för att den bildade $\ce{CaSO4}$-fällningen just och just ska upplösas helt? $K_\mathrm{s}(\ce{CaSO4}) = \pu{2,5e-5 M}$. $(t = \pu{25 °C})$ (2p) a. $\pu{0,43 l}$ b. $\pu{0,48 l}$ c. $\pu{0,53 l}$ d. $\pu{0,58 l}$ Translation: We add 100 ml of 0.01 M $\ce{Na2SO4}$-solution to 100 ml of 0.02 M $\ce{CaCl2}$-solution. A precipitate of $\ce{CaSO4}$ is formed. What is the (minimal) volume of 0.01 M $\ce{Na2SO4}$-solution that needs to be added to the mix for the $\ce{CaSO4}$ precipitate to be just dissolved completely? The problem then lists the solubility constant, $K_\mathrm{s}$, for $\ce{CaSO4}$ as $2.5\cdot 10^{-5}$. The only way I can think of solving this myself is by looking at the added $\ce{Na2SO4}$-solution as just water and using the $K_\mathrm{s}$ value for $\ce{CaSO4}$ to see what concentration of $\ce{CaSO4}$ is possible in just water. Then calculate the amount of moles of precipitate that was actually formed to see how much water I would need to add to the mixture: $$K_\mathrm{s}(\ce{CaSO4}) = \pu{2.5e-5 M^2} = [\ce{Ca^2+}][\ce{SO4^2-}]$$ $$\therefore [\ce{Ca^2+}] = [\ce{SO4^2-}] = \sqrt{\pu{2.5e-5 M^2}} = \pu{5e-3 M}$$ $$C = \frac{n}{V} \to n = CV$$ Amount of precipitate formed: $$n(\ce{CaSO4}) = \pu{0.02 M} \cdot \pu{0.1 dm^3} = \pu{0.002 mol}$$ Required total volume of water to dissolve $\pu{0.002 mol}$ of $\ce{CaSO4}$: $$V = \frac{n}{C} = \frac{\pu{0.002 mol}}{\pu{5e-3 M}} = \pu{0.4 dm^3}$$ This is wrong, but I can't figure out the right way to go about this. The answer surely has to do with how well $\ce{CaSO4}$ is dissolved in a $\ce{Na2SO4}$-solution as opposed to just water.
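One way to set this up (my sketch, not an official solution): at the point where the precipitate just dissolves, all of the calcium and all of the sulfate are back in solution, and the ion product equals $K_\mathrm{s}$ at the final total volume. Crucially, the added $\ce{Na2SO4}$ solution is not just water; it keeps supplying sulfate, which is why the pure-water estimate of 0.4 l undershoots. With $V$ the additional volume of 0.01 M $\ce{Na2SO4}$ in litres, the condition is a quadratic in $V$:

```python
import math

Ks = 2.5e-5           # solubility product of CaSO4, M^2
n_Ca = 0.02 * 0.100   # mol Ca2+ from 100 ml of 0.02 M CaCl2
n_SO4 = 0.01 * 0.100  # mol SO4^2- from the first 100 ml of 0.01 M Na2SO4
c_add = 0.01          # M, concentration of the Na2SO4 being added
V0 = 0.200            # L, volume already in the beaker

# Just-dissolved condition: [Ca][SO4] = Ks at total volume V0 + V:
#   (n_Ca / (V0 + V)) * ((n_SO4 + c_add * V) / (V0 + V)) = Ks
# Rearranged into a quadratic a*V^2 + b*V + c = 0:
a = Ks
b = 2 * Ks * V0 - n_Ca * c_add
c = Ks * V0 ** 2 - n_Ca * n_SO4
V = (-b + math.sqrt(b * b - 4 * a * c)) / (2 * a)  # positive root
```

Under these assumptions this lands on $V \approx \pu{0.48 l}$, i.e. option b.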
(11C) Tree Heights 11-02-2018, 01:24 PM (This post was last modified: 11-02-2018 01:26 PM by Gamo.) Post: #1 (11C) Tree Heights This program was adapted from the Hand-Held-Calculator Programs for the Field Forester. More detail information attached here. Procedure: 1. Enter slope distance to base of tree [A] -->Display known distance 2. Enter Slope percent to tip, [R/S] --> Display 0 (Slope to tip Stored) 3. Enter Slope percent to Base. I. If Positive [B] --> display 0 // slope base entered II. If Negative [C] --> display 0 // slope base entered 4. [D] ---> Tree Heights ----------------------------------------------- Example: FIX 1 Slope Percent to tip = 40 Negative slope percent to base = 20 Distance to tree = 56 What is the Tree Heights? 56 [A] display 56 40 [R/S] display 0 20 [C] display 0 [D] 32.9 Tree Heights is 32.9 ------------------------------------------------- Slope Percent to tip = 40 Positive slope percent to base = 20 Distance to tree = 56 What is the Tree Heights? 56 [A] display 56 40 [R/S] display 0 20 [B] display 0 [D] 10.9 Tree Heights is 10.9 Program: Code: Gamo 11-02-2018, 02:43 PM Post: #2 RE: (11C) Tree Heights An excellent read: Hand-Held-Calculator Programs for the Field Forester Wayne D. Shepperd, Associate Silviculturist General Technical Report RM-76 (July 1980) Rocky Mountain Forest and Range Experiment Station Forest Service U.S. Department of Agriculture Abstract A library of programs written for hand-held, programmable calculators is described which eliminates many of the computations previously done by hand in the field. Programs for scaling aerial photos, variable plot cruising, basal area factor gauge calibration, and volume calculations are included. Contents Introduction............................ 1 Slope to Horizontal Distance....... 2 Basal Area Computation............ 3 Tree Heights.......................... 4 Adequacy of Sample Test.......... 5 Multispecies Board Foot Volumes 7 BAF Gauge Calibration.............. 
9 Limiting Distance................... 10 Photo Work Program............... 12 Spruce Variable Plot Cruising.... 14 Literature Cited..................... 17 BEST! SlideRule 11-02-2018, 06:15 PM (This post was last modified: 11-02-2018 09:48 PM by Dieter.) Post: #3 RE: (11C) Tree Heights (11-02-2018 01:24 PM)Gamo Wrote: This program was adapted from the Hand-Held-Calculator Programs Thank you very much. The attached program description seems to refer to a TI program: enter three values with four (!) label keys, finally press another key for the result. But this is HP, the 11C uses RPN, here all this can be done much shorter and more straightforward, even without using a single data register. A direct translation, on the other hand, duplicates the clumsy original procedure: (11-02-2018 01:24 PM)Gamo Wrote: 1. Enter slope distance to base of tree [A] -->Display known distance We can do better. ;-) First of all, mathematically there is no need to distinguish positive or negative base angles and handle them separately. The same formula will work for both cases, as tan(–x) = –tan(x). Also there is no need to calculate sin(90°–B1) as this is equivalent to cos(B1). Converting the slope values to angles is done in a subroutine. But on the 11C this is merely four steps,*) so two calls require (2x GSB, LBL, 4 steps, RTN) eight lines altogether. This does not save any program steps, compared to having the same four steps twice in the program. So a subroutine has no advantage, and without it the program would even run slightly faster. I left it in there anyway so that the user may do the slope-to-angle conversion with f[E], independently from the rest of the program. Here is my attempt at realizing all this in a compact 10/11/15C program, but it should run just as well on many other HPs. If your calculator does not feature LBL A or LBL E simply replace them with numeric ones. Code: 01 LBL A Enter base distance [ENTER] tip slope percent [ENTER] base slope percent. 
Press f[A] to get the tree height. Additional feature: Enter slope percent, press f[E] and get the equivalent angle. Examples, using your above data: 56 [ENTER] 40 [ENTER] –20 f[A] => 32,95 56 [ENTER] 40 [ENTER] 20 f[A] => 10,98 What is the equivalent angle for a slope of 30% ? 30 f[E] => 16,70° Edit: here is a version for the HP25(C) which may also run on other calculators without labels and subroutines: Code: 01 ENTER Dieter __________ *) In your original program you could even do it with 3 steps: 1 % TANˉ¹ 11-02-2018, 07:03 PM Post: #4 RE: (11C) Tree Heights 11-02-2018, 07:41 PM Post: #5 RE: (11C) Tree Heights Ah, thank you very much. But I don't see much of a real program. It's more like a "program outline", as stated in the attachment, a kind of recipe for writing your own program. BTW the result for the second example, rounded to one decimal, should be 11,0 instead of 10,9. Dieter 11-03-2018, 01:35 AM (This post was last modified: 11-03-2018 01:38 AM by Gamo.) Post: #6 RE: (11C) Tree Heights Dieter thanks for the better program update. This book only show the program guide line to adapted to any programmable calculator as state at the beginning of the book. Personally I program this Tree Height as simple to operate as possible so I put all input operation separately on each labels like so [A] For Known Distance and Slope Tip [B] For known Positive Slope Base [C] For Known Negative Slope Base [D] Compute Tree Height ------------------------------------------------------ SlideRule Thanks for the program guide line page. Remark: At second page of this book there are marked for the typo error On Page 5 Example on the first line: Should be: Positive Slope Percent to Tip=40 ---------------------------------------------------- Gamo 11-03-2018, 12:47 PM (This post was last modified: 11-03-2018 02:13 PM by Dieter.) Post: #7 RE: (11C) Tree Heights See below. 
;-) (11-03-2018 01:35 AM)Gamo Wrote: Personally I program this Tree Height as simple to operate as possible Does it get simpler than entering the three values on the stack? (11-03-2018 01:35 AM)Gamo Wrote: so I put all input operation separately on each labels like so Again: there is no need for separate calculations for positive or negative slope values. Try it: simply enter –20 at [B]. You may also use two separate labels for the distance and the slope percent to the tip. Finally here is another version: In many cases it is a good idea not to follow a given path but to try a new approach instead. This is also the case here. The tree height can also be calculated this way: b = a·cos(B2) · tan(B1) – a·sin(B2) The point here is that the sine and cosine term can be simultaneously calculated by means of the P–>R command. And the tangent simply is the tip slope divided by 100. This leads to the following even shorter program: Code: 01 LBL A And here is a version that uses the label keys: Code: 01 LBL A f[USER] Enter base distance [A] Enter base slope percent [B] (may be positive or negative) Enter tip slope percent [C] Calculate tree height with [D] 56 [A] => 56,00 20 [B] => 20,00 40 [C] => 40,00 [D] => 10,98 -20 [B] => -20,00 [D] => 32,95 Addendum: I was playing around a bit with a TI59 emulator, so here also is a version for the TI58/59. Code: 000 76 LBL Usage is the same as above. The final steps round the result to two decimals. Dieter 11-04-2018, 03:32 PM Post: #8 RE: (11C) Tree Heights We don't really need trigonometric functions here. Good old Pythagoras is good enough: Code: 01 LBL A Examples: 56 ENTER 20 ENTER 40 A 10.9825 56 ENTER -20 ENTER 40 A 32.9475 Cheers Thomas 11-04-2018, 04:57 PM (This post was last modified: 11-04-2018 05:35 PM by Dieter.) Post: #9 RE: (11C) Tree Heights (11-04-2018 03:32 PM)Thomas Klemm Wrote: We don't really need trigonometric functions here. Great! 
This way it can also be done on the 12C and other calculators without trigs or polar/rectangular conversion: Code: 01 X<>Y Since no →P is required this may even run slightly faster than Thomas' original version. If available, replace "ENTER x" with x². (11-04-2018 03:32 PM)Thomas Klemm Wrote: Examples: Same for the above version. Press [R/S] instead of [A]. ;-) If you, like me, prefer to enter base distance [ENTER] tip slope [ENTER] base slope, simply remove the first line. Gamo, if you want to implement this for the 11C using the label keys A...D, here is an adapted version: Code: 01 LBL A This thread shows once again how a new approach and a bit of better mathematical insight can substantially improve a given solution. So don't adapt programs or algorithms, rethink the problem and realize your own solution. Or "dare to think for yourself", as others have put it. Dieter 11-05-2018, 12:52 AM Post: #10 RE: (11C) Tree Heights Thanks Thomas Klemm and Dieter Programs updates is more streamline now even work on HP-12C Excellent Idea !! Gamo 11-22-2018, 05:27 PM (This post was last modified: 11-22-2018 05:35 PM by ijabbott.) Post: #11 RE: (11C) Tree Heights (11-04-2018 04:57 PM)Dieter Wrote:(11-04-2018 03:32 PM)Thomas Klemm Wrote: We don't really need trigonometric functions here. That's a neat solution! It's also worth mentioning that if you know the tangent, sine or cosine of an angle between 0 and 90 degrees, you can derive the others with standard arithmetic and the square root function. \( \tan(x) = \frac{\sqrt{1-\cos^2(x)}}{\cos(x)} \), or: \( \tan(x) = \sqrt{\frac{1}{\cos^2(x)} - 1} \) Code: ENTER \( \cos(x) = \frac{1}{\sqrt{1 + \tan^2(x)}} \) Code: ENTER \( \tan(x) = \frac{\sin(x)}{\sqrt{1 - \sin^2(x)}} \) Code: ENTER \( \sin(x) = \frac{\tan(x)}{\sqrt{1 + \tan^2(x)}} \) Code: ENTER \( \sin(x) = \sqrt{1 - \cos^2(x)} \), and: \( \cos(x) = \sqrt{1 - \sin^2(x)} \) Code: ENTER Of course, "ENTER", "×" can be replaced by "x²" in all of the above, if available. 
— Ian Abbott
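The thread's trig-free formula amounts to the following (my transcription into Python, not one of the posted calculator programs; slope percents are converted to fractions, and the results match the thread's examples):

```python
import math

def tree_height(slope_dist, tip_pct, base_pct):
    """Tree height from the distance to the tree's base and the slope
    percents to tip and base (base_pct is negative when sighting downhill).

    Uses cos(atan(s)) = 1/sqrt(1 + s^2) and sin(atan(s)) = s/sqrt(1 + s^2),
    so Dieter's h = a*cos(B2)*tan(B1) - a*sin(B2) collapses to
        h = a * (t - s) / sqrt(1 + s^2),
    with t = tip_pct/100 and s = base_pct/100: no trig functions needed.
    """
    t = tip_pct / 100.0
    s = base_pct / 100.0
    return slope_dist * (t - s) / math.sqrt(1.0 + s * s)

h_pos = tree_height(56, 40, 20)    # positive slope to base
h_neg = tree_height(56, 40, -20)   # negative slope to base
```

With the thread's data this reproduces 10.98 and 32.95 (the forum's 10.9825 and 32.9475 before rounding).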
Learning Objectives Express momentum as a two-dimensional vector Write equations for momentum conservation in component form Calculate momentum in two dimensions, as a vector quantity It is far more common for collisions to occur in two dimensions; that is, the angle between the initial velocity vectors is neither zero nor 180°. Let’s see what complications arise from this. The first idea we need is that momentum is a vector; like all vectors, it can be expressed as a sum of perpendicular components (usually, though not always, an x-component and a y-component, and a z-component if necessary). Thus, when we write down the statement of conservation of momentum for a problem, our momentum vectors can be, and usually will be, expressed in component form. The second idea we need comes from the fact that momentum is related to force: $$\vec{F} = \frac{d \vec{p}}{dt} \ldotp$$ Expressing both the force and the momentum in component form, $$F_{x} = \frac{dp_{x}}{dt}, F_{y} = \frac{dp_{y}}{dt}, F_{z} = \frac{dp_{z}}{dt} \ldotp$$ Remember, these equations are simply Newton’s second law, in vector form and in component form. We know that Newton’s second law is true in each direction, independently of the others. It follows therefore (via Newton’s third law) that conservation of momentum is also true in each direction independently. These two ideas motivate the solution to two-dimensional problems: We write down the expression for conservation of momentum twice: once in the x-direction and once in the y-direction. $$p_{f,x} = p_{1,i,x} + p_{2,i,x} \label{9.18}$$ $$p_{f,y} = p_{1,i,y} + p_{2,i,y}$$ This procedure is shown graphically in Figure 9.22. Figure \(\PageIndex{1}\): (a) For two-dimensional momentum problems, break the initial momentum vectors into their x- and y-components. (b) Add the x- and y-components together separately. This gives you the x- and y-components of the final momentum, which are shown as red dashed vectors. 
(c) Adding these components together gives the final momentum. We solve each of these two component equations independently to obtain the x- and y-components of the desired velocity vector: $$v_{f,x} = \frac{m_{1} v_{1,i,x} + m_{2} v_{2,i,x}}{m}$$ $$v_{f,y} = \frac{m_{1} v_{1,i,y} + m_{2} v_{2,i,y}}{m}$$ (Here, m represents the total mass of the system.) Finally, combine these components using the Pythagorean theorem, $$v_{f} = |\vec{v}_{f}| = \sqrt{v_{f,x}^{2} + v_{f,y}^{2}} \ldotp$$ Problem-Solving Strategy: Conservation of Momentum in Two Dimensions The method for solving a two-dimensional (or even three-dimensional) conservation of momentum problem is generally the same as the method for solving a one-dimensional problem, except that you have to conserve momentum in both (or all three) dimensions simultaneously:

1. Identify a closed system.
2. Write down the equation that represents conservation of momentum in the x-direction, and solve it for the desired quantity. If you are calculating a vector quantity (velocity, usually), this will give you the x-component of the vector.
3. Write down the equation that represents conservation of momentum in the y-direction, and solve. This will give you the y-component of your vector quantity.
4. Assuming you are calculating a vector quantity, use the Pythagorean theorem to calculate its magnitude, using the results of steps 2 and 3.

Example 9.14 Traffic Collision A small car of mass 1200 kg traveling east at 60 km/hr collides at an intersection with a truck of mass 3000 kg that is traveling due north at 40 km/hr (Figure 9.23). The two vehicles are locked together. What is the velocity of the combined wreckage? Strategy First off, we need a closed system. The natural system to choose is the (car + truck), but this system is not closed; friction from the road acts on both vehicles. 
We avoid this problem by restricting the question to finding the velocity at the instant just after the collision, so that friction has not yet had any effect on the system. With that restriction, momentum is conserved for this system. Since there are two directions involved, we apply conservation of momentum twice: once in the x-direction and once in the y-direction.

Solution
Before the collision the total momentum is $$\vec{p} = m_{c} \vec{v}_{c} + m_{T} \vec{v}_{T} \ldotp$$ After the collision, the wreckage has momentum $$\vec{p} = (m_{c} + m_{T}) \vec{v}_{w} \ldotp$$ Since the system is closed, momentum must be conserved, so we have $$m_{c} \vec{v}_{c} + m_{T} \vec{v}_{T} = (m_{c} + m_{T}) \vec{v}_{w} \ldotp$$ We have to be careful; the two initial momenta are not parallel. We must add them vectorially (Figure 9.24). If we define the +x-direction to point east and the +y-direction to point north, as in the figure, then (conveniently), $$\vec{p}_{c} = p_{c}\; \hat{i} = m_{c} v_{c}\; \hat{i}$$ $$\vec{p}_{T} = p_{T}\; \hat{j} = m_{T} v_{T}\; \hat{j} \ldotp$$ Therefore, in the x-direction: $$m_{c} v_{c} = (m_{c} + m_{T}) v_{w,x}$$ $$v_{w,x} = \left(\dfrac{m_{c}}{m_{c} + m_{T}}\right) v_{c}$$ and in the y-direction: $$m_{T} v_{T} = (m_{c} + m_{T}) v_{w,y}$$ $$v_{w,y} = \left(\dfrac{m_{T}}{m_{c} + m_{T}}\right) v_{T} \ldotp$$ Applying the Pythagorean theorem gives $$\begin{split} |\vec{v}_{w}| & = \sqrt{\Big[ \left(\dfrac{m_{c}}{m_{c} + m_{T}}\right) v_{c} \Big]^{2} + \Big[ \left(\dfrac{m_{T}}{m_{c} + m_{T}} \right) v_{T} \Big]^{2}} \\ & = \sqrt{\Big[ \left(\dfrac{1200\; kg}{4200\; kg}\right) (16.67\; m/s) \Big]^{2} + \Big[ \left(\dfrac{3000\; kg}{4200\; kg}\right) (11.1\; m/s) \Big]^{2}} \\ & = \sqrt{(4.76\; m/s)^{2} + (7.93\; m/s)^{2}} \\ & = 9.25\; m/s \approx 33.3\; km/hr \ldotp \end{split}$$ As for its direction, using the angle shown in the figure, $$\theta = \tan^{-1} \left(\dfrac{v_{w,x}}{v_{w,y}}\right) = \tan^{-1} \left(\dfrac{4.76\; m/s}{7.93\; m/s}\right) = 31^{o} \ldotp$$ This angle is east of north, or 59° counterclockwise from the +x-direction.

Significance
As a practical matter, accident investigators usually work in the "opposite direction"; they measure the distance of skid marks on the road (which gives the stopping distance) and use the work-energy theorem along with conservation of momentum to determine the speeds and directions of the cars prior to the collision. We saw that analysis in an earlier section.

Exercise 9.9
Suppose the initial velocities were not at right angles to each other. How would this change both the physical result and the mathematical analysis of the collision?

Example 9.15: Exploding Scuba Tank
A common scuba tank is an aluminum cylinder that weighs 31.7 pounds empty (Figure 9.25). When full of compressed air, the internal pressure is between 2500 and 3000 psi (pounds per square inch). Suppose such a tank, which had been sitting motionless, suddenly explodes into three pieces. The first piece, weighing 10 pounds, shoots off horizontally at 235 miles per hour; the second piece (7 pounds) shoots off at 172 miles per hour, also in the horizontal plane, but at a 19° angle to the first piece. What is the mass and initial velocity of the third piece? (Do all work, and express your final answer, in SI units.)

Strategy
To use conservation of momentum, we need a closed system. If we define the system to be the scuba tank, this is not a closed system, since gravity is an external force. However, the problem asks for just the initial velocity of the third piece, so we can neglect the effect of gravity and consider the tank by itself as a closed system. Notice that, for this system, the initial momentum vector is zero. We choose a coordinate system where all the motion happens in the xy-plane.
We then write down the equations for conservation of momentum in each direction, thus obtaining the x- and y-components of the momentum of the third piece, from which we obtain its magnitude (via the Pythagorean theorem) and its direction. Finally, dividing this momentum by the mass of the third piece gives us the velocity.

Solution
First, let's get all the conversions to SI units out of the way: $$31.7\; lb \times \frac{1\; kg}{2.2\; lb} \rightarrow 14.4\; kg$$ $$10\; lb \rightarrow 4.5\; kg$$ $$235\; \frac{miles}{hour} \times \frac{1\; hour}{3600\; s} \times \frac{1609\; m}{mile} = 105\; m/s$$ $$7\; lb \rightarrow 3.2\; kg$$ $$172\; \frac{miles}{hour} = 77\; m/s$$ $$m_{3} = 14.4\; kg - (4.5\; kg + 3.2\; kg) = 6.7\; kg \ldotp$$ Now apply conservation of momentum in each direction. x-direction: $$\begin{split} p_{f,x} & = p_{0,x} \\ p_{1,x} + p_{2,x} + p_{3,x} & = 0 \\ m_{1} v_{1,x} + m_{2} v_{2,x} + p_{3,x} & = 0 \\ p_{3,x} & = -m_{1} v_{1,x} - m_{2} v_{2,x} \end{split}$$ y-direction: $$\begin{split} p_{f,y} & = p_{0,y} \\ p_{1,y} + p_{2,y} + p_{3,y} & = 0 \\ m_{1} v_{1,y} + m_{2} v_{2,y} + p_{3,y} & = 0 \\ p_{3,y} & = -m_{1} v_{1,y} - m_{2} v_{2,y} \end{split}$$ From our chosen coordinate system, we write the x-components as $$\begin{split} p_{3,x} & = - m_{1} v_{1} - m_{2} v_{2} \cos \theta \\ & = - (4.5\; kg)(105\; m/s) - (3.2\; kg)(77\; m/s) \cos (19^{o}) \\ & = -705\; kg\; \cdotp m/s \ldotp \end{split}$$ For the y-direction, we have $$\begin{split} p_{3,y} & = 0 - m_{2} v_{2} \sin \theta \\ & = - (3.2\; kg)(77\; m/s) \sin (19^{o}) \\ & = -80.2\; kg\; \cdotp m/s \ldotp \end{split}$$ This gives the magnitude of $p_{3}$: $$\begin{split} p_{3} & = \sqrt{p_{3,x}^{2} + p_{3,y}^{2}} \\ & = \sqrt{(-705\; kg\; \cdotp m/s)^{2} + (-80.2\; kg\; \cdotp m/s)^{2}} \\ & = 710\; kg\; \cdotp m/s \ldotp \end{split}$$ The velocity of the third piece is therefore $$v_{3} = \frac{p_{3}}{m_{3}} = \frac{710\; kg\; \cdotp m/s}{6.7\; kg} = 106\; m/s \ldotp$$ The direction of its velocity vector is the same as the direction of its momentum vector: $$\phi = \tan^{-1} \left(\dfrac{p_{3,y}}{p_{3,x}}\right) = \tan^{-1} \left(\dfrac{80.2\; kg\; \cdotp m/s}{705\; kg\; \cdotp m/s}\right) = 6.5^{o} \ldotp$$ Because \(\phi\) is below the −x-axis, the actual angle is 186.5° from the +x-direction.

Significance
The enormous velocities here are typical; an exploding tank of any compressed gas can easily punch through the wall of a house and cause significant injury, or death. Fortunately, such explosions are extremely rare, on a percentage basis.

Exercise 9.10
Notice that the mass of the air in the tank was neglected in the analysis and solution. How would the solution method have changed if the air were included? How large a difference do you think it would make in the final answer?

Contributors
Samuel J. Ling (Truman State University), Jeff Sanny (Loyola Marymount University), and Bill Moebs with many contributing authors. This work is licensed by OpenStax University Physics under a Creative Commons Attribution License (by 4.0).
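The arithmetic in Example 9.14 is easy to check with a short script (a sketch; the variable names are ours, and the masses and speeds are taken from the example):

```python
import math

# Example 9.14: car (east) and truck (north) lock together (perfectly inelastic).
m_car, m_truck = 1200.0, 3000.0      # kg
v_car = 60.0 / 3.6                   # 60 km/h east, in m/s
v_truck = 40.0 / 3.6                 # 40 km/h north, in m/s

# Conserve momentum one component at a time.
m_total = m_car + m_truck
v_wx = m_car * v_car / m_total       # eastward component of the wreckage velocity
v_wy = m_truck * v_truck / m_total   # northward component

speed = math.hypot(v_wx, v_wy)       # magnitude via the Pythagorean theorem
print(round(v_wx, 2), round(v_wy, 2))   # 4.76 7.94 (the text's 7.93 uses the pre-rounded 11.1 m/s)
print(round(speed * 3.6, 1))            # 33.3 (km/h)
```

Because the script keeps the exact unit conversions, the y-component comes out 7.94 m/s rather than the text's rounded 7.93 m/s; the final speed agrees to three figures.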
2019-09-04 12:06 Soft QCD and Central Exclusive Production at LHCb / Kucharczyk, Marcin (Polish Academy of Sciences (PL)) The LHCb detector, owing to its unique acceptance coverage $(2 < \eta < 5)$ and a precise track and vertex reconstruction, is a universal tool allowing the study of various aspects of electroweak and QCD processes, such as particle correlations or Central Exclusive Production. The recent results on the measurement of the inelastic cross section at $ \sqrt s = 13 \ \rm{TeV}$ as well as the Bose-Einstein correlations of same-sign pions and kinematic correlations for pairs of beauty hadrons performed using large samples of proton-proton collision data accumulated with the LHCb detector at $\sqrt s = 7\ \rm{and} \ 8 \ \rm{TeV}$, are summarized in the present proceedings, together with the studies of Central Exclusive Production at $ \sqrt s = 13 \ \rm{TeV}$ exploiting new forward shower counters installed upstream and downstream of the LHCb detector. [...] LHCb-PROC-2019-008; CERN-LHCb-PROC-2019-008.- Geneva : CERN, 2019 - 6. Fulltext: PDF; In : The XXVII International Workshop on Deep Inelastic Scattering and Related Subjects, Turin, Italy, 8 - 12 Apr 2019

2019-08-15 17:39 LHCb Upgrades / Steinkamp, Olaf (Universitaet Zuerich (CH)) During the LHC long shutdown 2, in 2019/2020, the LHCb collaboration is going to perform a major upgrade of the experiment. The upgraded detector is designed to operate at a five times higher instantaneous luminosity than in Run II and can be read out at the full bunch-crossing frequency of the LHC, abolishing the need for a hardware trigger [...] LHCb-PROC-2019-007; CERN-LHCb-PROC-2019-007.- Geneva : CERN, 2019 - mult.p.
In : Kruger2018, Hazyview, South Africa, 3 - 7 Dec 2018

2019-08-15 17:36 Tests of Lepton Flavour Universality at LHCb / Mueller, Katharina (Universitaet Zuerich (CH)) In the Standard Model of particle physics the three charged leptons are identical copies of each other, apart from mass differences, and the electroweak coupling of the gauge bosons to leptons is independent of the lepton flavour. This prediction is called lepton flavour universality (LFU) and is well tested. [...] LHCb-PROC-2019-006; CERN-LHCb-PROC-2019-006.- Geneva : CERN, 2019 - mult.p. In : Kruger2018, Hazyview, South Africa, 3 - 7 Dec 2018

2019-02-12 14:01 XYZ states at LHCb / Kucharczyk, Marcin (Polish Academy of Sciences (PL)) Recent years have seen a resurgence of interest in searches for exotic states, motivated by precision spectroscopy studies of beauty and charm hadrons providing the observation of several exotic states. The latest results on spectroscopy of exotic hadrons are reviewed, using the proton-proton collision data collected by the LHCb experiment. [...] LHCb-PROC-2019-004; CERN-LHCb-PROC-2019-004.- Geneva : CERN, 2019 - 6. Fulltext: PDF; In : 15th International Workshop on Meson Physics, Kraków, Poland, 7 - 12 Jun 2018

2019-01-21 09:59 Mixing and indirect $CP$ violation in two-body Charm decays at LHCb / Pajero, Tommaso (Universita & INFN Pisa (IT)) The copious number of $D^0$ decays collected by the LHCb experiment during 2011-2016 allows the test of the violation of the $CP$ symmetry in the decay of charm quarks with unprecedented precision, approaching for the first time the expectations of the Standard Model. We present the latest measurements of LHCb of mixing and indirect $CP$ violation in the decay of $D^0$ mesons into two charged hadrons [...]
LHCb-PROC-2019-003; CERN-LHCb-PROC-2019-003.- Geneva : CERN, 2019 - 10. Fulltext: PDF; In : 10th International Workshop on the CKM Unitarity Triangle, Heidelberg, Germany, 17 - 21 Sep 2018

2019-01-15 14:22 Experimental status of LNU in B decays in LHCb / Benson, Sean (Nikhef National institute for subatomic physics (NL)) In the Standard Model, the three charged leptons are identical copies of each other, apart from mass differences. Experimental tests of this feature in semileptonic decays of b-hadrons are highly sensitive to New Physics particles which preferentially couple to the 2nd and 3rd generations of leptons. [...] LHCb-PROC-2019-002; CERN-LHCb-PROC-2019-002.- Geneva : CERN, 2019 - 7. Fulltext: PDF; In : The 15th International Workshop on Tau Lepton Physics, Amsterdam, Netherlands, 24 - 28 Sep 2018

2018-12-20 16:31 Simultaneous usage of the LHCb HLT farm for Online and Offline processing workflows LHCb is one of the 4 LHC experiments and continues to revolutionise data acquisition and analysis techniques. Already two years ago the concepts of "online" and "offline" analysis were unified: the calibration and alignment processes take place automatically in real time and are used in the triggering process, such that Online data are immediately available offline for physics analysis (Turbo analysis). The computing capacity of the HLT farm has been used simultaneously for different workflows: synchronous first level trigger, asynchronous second level trigger, and Monte-Carlo simulation. [...] LHCb-PROC-2018-031; CERN-LHCb-PROC-2018-031.- Geneva : CERN, 2018 - 7.
Fulltext: PDF; In : 23rd International Conference on Computing in High Energy and Nuclear Physics, CHEP 2018, Sofia, Bulgaria, 9 - 13 Jul 2018

2018-12-14 16:02 The Timepix3 Telescope and Sensor R&D for the LHCb VELO Upgrade / Dall'Occo, Elena (Nikhef National institute for subatomic physics (NL)) The VErtex LOcator (VELO) of the LHCb detector is going to be replaced in the context of a major upgrade of the experiment planned for 2019-2020. The upgraded VELO is a silicon pixel detector, designed to withstand a radiation dose up to $8 \times 10^{15}\ 1\,\text{MeV}\ n_{eq}\,\text{cm}^{-2}$, with the additional challenge of a highly non-uniform radiation exposure. [...] LHCb-PROC-2018-030; CERN-LHCb-PROC-2018-030.- Geneva : CERN, 2018 - 8.
Consider a $C^k$, $k\ge 2$, Lorentzian manifold $(M,g)$ and let $\Box$ be the usual wave operator $\nabla^a\nabla_a$. Given $p\in M$, $s\in\Bbb R,$ and $v\in T_pM$, can we find a neighborhood $U$ of $p$ and $u\in C^k(U)$ such that $\Box u=0$, $u(p)=s$ and $\mathrm{grad}\, u(p)=v$?

The tog is a measure of thermal resistance of a unit area, also known as thermal insulance. It is commonly used in the textile industry and often seen quoted on, for example, duvets and carpet underlay. The Shirley Institute in Manchester, England developed the tog as an easy-to-follow alternative to the SI unit of m²K/W. The name comes from the informal word "togs" for clothing, which itself was probably derived from the word toga, a Roman garment. The basic unit of insulation coefficient is the RSI (1 m²K/W). 1 tog = 0.1 RSI. There is also a clo clothing unit equivalent to 0.155 RSI or 1.55 tog...

The stone or stone weight (abbreviation: st.) is an English and imperial unit of mass now equal to 14 pounds (6.35029318 kg). England and other Germanic-speaking countries of northern Europe formerly used various standardised "stones" for trade, with their values ranging from about 5 to 40 local pounds (roughly 3 to 15 kg) depending on the location and objects weighed. The United Kingdom's imperial system adopted the wool stone of 14 pounds in 1835. With the advent of metrication, Europe's various "stones" were superseded by or adapted to the kilogram from the mid-19th century on. The stone continues...

Can you tell me why this question deserves to be negative? I tried to find faults and I couldn't: I did some research, I did all the calculations I could, and I think it is clear enough. I had deleted it and was going to abandon the site but then I decided to learn what is wrong and see if I ca...

I am a bit confused about classical physics's angular momentum. For an orbital motion of a point mass: if we pick a new coordinate (that doesn't move w.r.t.
the old coordinate), angular momentum should be still conserved, right? I calculated a quite absurd result: it is no longer conserved (an additional term that varies with time) in the new coordinate: $\vec {L'}=\vec{r'} \times \vec{p'}$ $=(\vec{R}+\vec{r}) \times \vec{p}$ $=\vec{R} \times \vec{p} + \vec L$ where the 1st term varies with time (R is the shift of coordinate; R is constant, and p is sort of rotating). Would anyone be kind enough to shed some light on this for me?

From what we discussed, your literary taste seems to be classical/conventional in nature. That book is inherently unconventional in nature; it's not supposed to be read as a novel, it's supposed to be read as an encyclopedia @BalarkaSen Dare I say it, my literary taste continues to change as I have kept on reading :-) One book that I finished reading today, The Sense of an Ending (different from the movie with the same title), is far from anything I would've been able to read, even, two years ago, but I absolutely loved it. I've just started watching the Fall - it seems good so far (after 1 episode)... I'm with @JohnRennie on the Sherlock Holmes books and would add that the most recent TV episodes were appalling. I've been told to read Agatha Christie but haven't got round to it yet.

Is it possible to make a time machine ever? Please give an easy answer, a simple one. A simple answer, but a possibly wrong one, is to say that a time machine is not possible. Currently, we have neither the technology to build one, nor a definite, proven (or generally accepted) idea of how we could build one. — Countto1047 secs ago

@vzn if it's a romantic novel, which it looks like, it's probably not for me - I'm getting to be more and more fussy about books and have a ridiculously long list to read as it is.
I'm going to counter that one by suggesting Ann Leckie's Ancillary Justice series. Although if you like epic fantasy, Malazan Book of the Fallen is fantastic @Mithrandir24601 lol it has some love story but its written by a guy so cant be a romantic novel... besides what decent stories dont involve love interests anyway :P ... was just reading his blog, they are gonna do a movie of one of his books with Kate Winslet, cant beat that right? :P variety.com/2016/film/news/… @vzn "he falls in love with Daley Cross, an angelic voice in need of a song." I think that counts :P It's not that I don't like it, it's just that authors very rarely do anywhere near a decent job of it. If it's a major part of the plot, it's often either eyeroll worthy and cringy or boring and predictable with OK writing. A notable exception is Steven Erikson @vzn depends exactly what you mean by 'love story component', but often yeah... It's not always so bad in sci-fi and fantasy where it's not in the focus so much and just evolves in a reasonable, if predictable way with the storyline, although it depends on what you read (e.g. Brent Weeks, Brandon Sanderson). Of course Patrick Rothfuss completely inverts this trope :) and Lev Grossman is a study on how to do character development and totally destroys typical romance plots @Slereah The idea is to pick some spacelike hypersurface $\Sigma$ containing $p$. Now specifying $u(p)$ is trivial because the wave equation is invariant under constant perturbations. So that's whatever. But I can specify $\nabla u(p)|_\Sigma$ by specifying $u(\cdot, 0)$ and differentiating along the surface. For the Cauchy theorems I can also specify $u_t(\cdot,0)$.
Now take the neighborhood to be $\approx (-\epsilon,\epsilon)\times\Sigma$ and then split the metric like $-dt^2+h$. Do forwards and backwards Cauchy solutions, then check that the derivatives match on the interface $\{0\}\times\Sigma$.

Why is it that you can only cool down a substance so far before the energy goes into changing its state? I assume it has something to do with the distance between molecules, meaning that intermolecular interactions have less energy in them than making the distance between them even smaller, but why does it create these bonds instead of making the distance smaller / just reducing the temperature more? Thanks

@CooperCape but this leads me to another question I forgot ages ago: If you have an electron cloud, is the electric field from that electron just some sort of averaged field from some centre of amplitude, or is it a superposition of fields each coming from some point in the cloud?
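On that last question: for a spherically symmetric cloud, the superposition of the Coulomb fields of all the pieces looks, from outside, like the field of a single point charge at the centre (the shell theorem). A toy numerical sketch, with six symmetric sample charges standing in for the cloud (all values illustrative, units with $k=1$):

```python
import math

def field_z(charges, obs_z, k=1.0):
    """z-component of the superposed Coulomb fields at the point (0, 0, obs_z)."""
    ez = 0.0
    for q, (x, y, z) in charges:
        dx, dy, dz = 0.0 - x, 0.0 - y, obs_z - z
        r = math.sqrt(dx * dx + dy * dy + dz * dz)
        ez += k * q * dz / r ** 3
    return ez

# Unit total charge spread over the six vertices of an octahedron of radius 1.
cloud = [(1.0 / 6.0, p) for p in [(1, 0, 0), (-1, 0, 0), (0, 1, 0),
                                  (0, -1, 0), (0, 0, 1), (0, 0, -1)]]
d = 10.0                               # observation point well outside the "cloud"
superposed = field_z(cloud, d)
point = 1.0 / d ** 2                   # same total charge collapsed to the origin
print(abs(superposed / point - 1.0))   # a few parts in 10^4
```

The two fields agree to a few parts in ten thousand even this close to the charges; the discrepancy is the residual multipole structure of the discrete sample, which a genuinely spherical cloud would not have.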
Assume $f : E \to \mathbb{R}$ is a continuous function defined on some set $E \subset \mathbb{R}$. If $\{a_j\}_{j\in\mathbb{N}} \subset E$ is a Cauchy sequence, is it true that $\{f(a_j)\}_{j\in\mathbb{N}}$ is also a Cauchy sequence?

Not necessarily. A sufficient condition for $\{f(a_j)\}$ to be Cauchy again is that $f$ be uniformly continuous; in particular, it is enough that $f$ does not increase distances, as defined in this question: Function doesn't increase distance. Intuitively, a continuous $f$ that destroys the Cauchy property must stretch distances at least as rapidly as the distances between terms of the Cauchy sequence decrease. The standard example is the following: let $a_j = \frac{1}{j}$, and let $f(x) = \frac{1}{x}$, which is continuous on $(0,1]$. Then $\{f(a_j)\}$ is the sequence $1, 2, 3, \ldots$, which is clearly not Cauchy. The distances between consecutive terms of $\{f(a_j)\}$ do not decrease to zero; instead they remain constant at $1$. Thus $f$ in some sense "exactly cancels out" the approach of the inter-term distances to zero which we had for $\{a_j\}$. (To address how this relates to the comments on the question above: the limit of the original sequence, $0$, is not in the domain of the function $f(x)=\frac{1}{x}$, hence there is no contradiction.)
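The standard example is easy to probe numerically (a sketch; the `tail_spread` helper is an ad-hoc proxy for the Cauchy property, not a standard function):

```python
a = [1.0 / j for j in range(1, 2001)]   # a_j = 1/j: a Cauchy sequence
fa = [1.0 / x for x in a]               # f(a_j) = j, with f(x) = 1/x

def tail_spread(seq, start):
    """Diameter of the tail {seq[start], seq[start+1], ...}."""
    tail = seq[start:]
    return max(tail) - min(tail)

print(tail_spread(a, 1000))    # ~0.0005: the tail of a Cauchy sequence clusters
print(tail_spread(fa, 1000))   # ~999: the image sequence keeps spreading out
```

For a Cauchy sequence the tail diameter shrinks to zero as `start` grows; for the image sequence under $f(x)=1/x$ it diverges, which is exactly the failure described above.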
Let $(E_i,T_i)_{i\in I}$ be a family of topological spaces, and let $(E,T)$ be the product of these spaces. Let $f_i:E\to E_i\ (i\in I)$ be a family of mappings. Suppose that each $f_i$ is $(T,T_i)-$continuous. Denote the product of these mappings by $\displaystyle\prod_{i\in I}f_i,$ or shortly $f.$ Denote the image of $f$ by $D,$ and let $T_D$ be the subspace topology on $D$ inherited from $(E,T).$ Next we'll introduce a condition for the family $(f_i)_{i\in I}.$

Condition: For any $T-$closed set $F,$ and for any $x\in E\setminus F,$ there exists some $i\in I$ such that $f_i(x)\notin\mathrm{Cl}_{T_i}\big(f_i(F)\big).$ (Here $\mathrm{Cl}_{T_i}$ means taking the closure with respect to the topology $T_i$.)

Claim: If the family $(f_i)_{i\in I}$ satisfies the condition above, then the map $f$ is $(T,T_D)-$open, i.e. $X$ is a $T-$open set $\Longrightarrow$ $f(X)$ is a $T_D-$open set.

Question: Is the claim above true or false? Any help would be appreciated.

Background: I encountered this claim in a topology workbook, A General Topology Workbook by Iain T. Adamson, and I highly suspect its validity. The condition looks quite weird to me. My doubt arose when I found that the proof in that book is wrong. I tried to find a counterexample, but only got lost in the details of the construction of each $f_i.$

Update: I find that in that very book, using the claim above, the author proves an astonishing result: every topological space $(E,T)$ is homeomorphic to a subspace of $(\prod_{i\in I}X,\ \prod_{i\in I}T_0),$ here $X=\left\{a,b,c\right\},\ T_0=\left\{\emptyset,\left\{a\right\},X\right\},\ I=E\cup T.$
Garabedian: Typically, the "swap curve" refers to an x-y chart of par swap rates plotted against their time to maturity. This is typically called the "par swap curve." Your second question, "how it relates to the zero curve," is very complex in the post-crisis world. I think it's helpful to start the discussion with a government bond yield curve to ...

I would like to present a slightly different approach: Historically, only a single yield curve was derived from different instruments, such as OIS, deposit rates, or swap rates. However, market practice nowadays is to derive multiple swap curves, ideally one for each rate tenor. This idea goes against the idea of one fully-consistent zero coupon curve, ...

I guess it depends on what they're referring to... The traditional swap curve (LIBOR-based) is certainly not risk free, as evidenced by the experience of the financial crisis and the resulting migration to OIS discounting. The OIS curve (which is a kind of swap curve...) is now the standard risk-free curve. The Treasury yield curve is not favored, because ...

(In addition to the answers of Freddy and Phil H): With "modern" multi-curve setups, you have to distinguish between discount curves (which describe today's value of a future fixed payoff, e.g. a zero coupon bond) and forward curves, which describe the expectation (in a specific sense) of future interest rate fixings. Swaps pay LIBOR rates and are ...

The OIS is not the secured (collateralised) lending rate. It represents the cost of repeated overnight unsecured lending over periods of up to two weeks (sometimes more). Because it is based on overnight lending, it is assumed to have a lower credit risk than longer term interbank loans based on, say, 1M, 2M or 3M Libor, and this is what drives the OIS-Libor ...
You should take a look at the example from Hull's book. Assume that the 6-month, 12-month, and 18-month zero rates are 4%, 4.5%, and 4.8%, respectively. Suppose we know that the 2-year swap rate is 5%, which implies that a 2-year bond with a semiannual coupon of 5% per annum sells for par: $$2.5 e^{-0.04 \cdot 0.5} + 2.5 e^{-0.045 \cdot 1.0} + 2.5 e^{-0.048 \cdot 1.5} + 102.5 e^{-R \cdot 2.0} = 100,$$ where $R$ is the 2-year zero rate to be bootstrapped.

A Basis swap is a broad category of swaps where you exchange one floating rate against another floating rate. Without knowing the specific rates involved it is difficult to say more. An OIS Swap is an Overnight Index Swap, where you exchange a fixed rate against an average of the overnight rates for the tenor of the swap. For example, if the ON rate is ...

The reason why you can price a swap without a model is that you can replicate the payoff using only zero-coupon bonds. For the fixed leg this is trivial. For the floating leg: at $T_0$ invest $1$ at Libor; at $T_1$ you get $1/B(T_0,T_1) = 1 + \tau L(T_0,T_1)$; you pay the floating coupon $\tau L(T_0,T_1)$; reinvest $1$ at Libor; etc.; at $T_{n}$, ...

To elaborate on Freddy's answer: These days you need to maintain a separate funding (usually OIS) curve alongside your Libor-type curves. Once you have this discounting curve, you can calculate from Libor instrument market data what the market estimations of that Libor are: 3m instruments like Interest Rate Futures, IRS with a 3m float leg, 3m FRAs can be used to ...

No, I'm afraid you're comparing apples with oranges. Your calculation of the DV01 of the swap is correct (with a caveat, see below), but the figure returned from swap.fixedLegBPS is not comparable. The DV01 tells you what happens to the NPV if the interest-rate curve changes; in the case of the fixed leg, this affects the discount factors used to discount ...

Chapter 1: Goldilocks is ousted by the bears. Once upon a time, the banks used a fixing called LIBOR as a measure of the risk-free interest rate.
Then the big hairy crisis came along and ate all our assumptions, leaving just the bones of the fixing (upon which everything else still fixes), and the mantle of risk-free rate proxy was passed on to a family of ...

It is incorrect to use 1m Euribor or O/N Euribor in a 6m Euribor forward curve. You should only use instruments based on 6m Euribor, such as 1x7 FRA, 6x12 FRA or swaps v 6m Euribor, as you have done in your second example. The actual 6m Euribor fixing itself can be thought of as a 0x6 FRA out of spot. Before the financial crisis, basis between different ...

There is no contradiction. If the strike of the floor and cap are both equal to the swap rate, and all accrual/payment frequencies, etc., are the same, then put-call parity implies $$C_{t}-F_{t}=S_{t},$$ where $C_{t},F_{t},S_{t}$ are the values of the cap, floor and swap instruments at time $t$. Since the (theoretical Black-Scholes) volatility is ...

At most banks, swaption traders have models that allow non-ATM volatilities to be controlled by two parameters: specifically, a parameter to control the smile (richness of out-of-the-money options) and the skew (whether implied vol is upward or downward sloping as a function of strike). Look up papers on the SABR model. In practice, one would ...

The key inputs to this calculation are two yield curves obtained from market data: $\{v_i\}$ the discounting factors (value today of \$1 received at time $i$) and $\{r_i\}$ the forecasting curve (forward semiannual rates for period $i$ to $i+1$). The calculation itself proceeds as follows. There are two legs to a fixed/floating interest rate swap. The fixed leg, ...

Vol_smile: The sentence as you quote it doesn't make much sense, but my guess as to what they mean is this: OIS stands for Overnight Index Swap. In the US the overnight rate is called Fed Funds, as 'experequite' mentioned (in the Euro-zone it is Eonia). The bank is borrowing at 3m Libor, which in this example is currently 2.10%. If 3m Fed Funds OIS is at 2%, ...
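Hull's bootstrap example quoted earlier can be reproduced in a few lines: solve the par condition for the 2-year continuously compounded zero rate $R$ (a sketch; only the rates quoted in that answer are used):

```python
import math

# 6m, 12m, 18m zero rates (continuous compounding) and a 5% semiannual par bond.
zeros = {0.5: 0.040, 1.0: 0.045, 1.5: 0.048}
coupon, face = 2.5, 100.0

# Par condition: PV of the three coupons + PV of (face + final coupon) at 2y = 100.
pv_coupons = sum(coupon * math.exp(-r * t) for t, r in zeros.items())
R = -math.log((face - pv_coupons) / (face + coupon)) / 2.0
print(round(R, 5))    # 0.04953, i.e. a 2-year zero rate of about 4.953%
```

This is exactly the bootstrap step: each new swap maturity pins down one new discount factor, and hence one new zero rate, given the shorter rates already known.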
Are you using the same volatility, 20%, for both Black76 and Bachelier? Black76 is a lognormal model, where volatilities are quoted as relative price changes. The Bachelier/normal model quotes volatilities as absolute changes. That might be what you're missing? Kind regards

A forward rate agreement is an agreement to exchange a fixed for a floating rate over one period, with the payment being made at the start of the period. A zero coupon swap (with both legs paid at maturity) is an agreement to exchange a fixed for floating rate over one or more periods, with the payments being made at the end of the final period. So the two ...

Thanks for all the answers above. William's answer is the most direct. Actually, I was quite new to the calibration area one year ago, so my question is quite simple, but that simplicity might mislead others into a complex context. To comment on my own question, in case anyone new to it might drop by: Damiano Brigo's book Interest Rate Models - Theory and Practice (2006) ...

I recently worked on a similar problem and solved it with the help of the QuantLib library. Assuming you are working with EUR and USD:
- get cross currency (xccy) swap data EUR / USD. You want to know how the xccy is collateralized and if Mark-to-Market resets apply to the USD leg.
- get interest rate swaps fixed vs OIS / 3m / 6m in EUR and USD
- build USD/FedFunds ...

Consider a date sequence \begin{align*} 0 \leq t_0 \leq T_s < T_p < T_e, \end{align*} where $t_0$ is the valuation date, $T_s$ is the Libor start date, $T_p$ is the payment date, and $T_e$ is the Libor end date. Let $\Delta_s^e = T_e-T_s$. For $0\le t \le T_s$, define \begin{align*} L(t, T_s, T_e) = \frac{1}{\Delta_s^e}\bigg(\frac{P(t, T_s)}{P(t, T_e)}-1\bigg). \end{align*}

The OIS curves were (and still are) primarily built from adding together (a) interest rate swap rates and (b) Fed Funds/Libor basis swaps.
For example, if 10yr swaps are 2.0%, and the 10yr FF/Libor basis is -35bp, the 10yr OIS is 1.65%. The basis swaps have been liquid for decades, so this calculation has always been possible. However, participants didn't ...

It turns out that the two things are the same, appropriately scaled. Proof: we can construct a 5 year swap using 3 month Libor combined with a 3mo-4.75yr forward swap, weighted by the dv01s of each part. Thus, ignoring discounting, we have 5yr swap rate = (0.25*3mo Libor + 4.75*forward rate)/5. This can be rewritten as 0.25*(5yr swap rate - ...

I would not say that this is universally acknowledged, but here is my view: Instead of considering par rates, i.e. 10Y and 20Y, consider forward rates, such as 10y and 10y10y. The useful difference here is that forwards do not 'overlap' and therefore incorporate aspects of each other into the price. A 20Y is >50% directly dependent upon the 10Y price for ...

Suppose a 40yr bond and a 30yr bond have the same yield. It is a mathematical fact, as @attack68 has pointed out, that the convexity of the 40yr is greater than the convexity of the 30yr bond. So consider the following strategy: long the 40yr bond and short the 30yr bond with the same dv01. Then every time the market moves, you make money (get longer when ...

It's an interesting question. The fundamentally devout macro wannabe-strategist within cries out for a long-term growth/inflation expectation narrative. However, the cynical realist within reminds that although the market does make long-term predictions (because it has to create prices), there is no latent consensus that the world will really look so ...

(This is my opinion; someone is likely to disagree.) I like to think of the carry as the predictable part (e.g. the coupon that accrues daily) and the rolldown as the stochastic part (the curves moved: maybe the forwards were realized, maybe not). A good estimate of what it might turn out to be is to reprice for the next day assuming all forwards are realized. I ...
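Both numeric recipes in the answers above are one-liners; here is a sketch with the quoted 10y figures and made-up 5y inputs (`libor_3m` and `swap_5y` are illustrative, not market data):

```python
# OIS rate from the swap rate plus the Fed Funds/Libor basis (rates in percent):
swap_10y = 2.00
ff_libor_basis_10y = -0.35
ois_10y = swap_10y + ff_libor_basis_10y
print(ois_10y)                  # 1.65

# 5y par rate as a dv01-weighted blend of 3m Libor and the 3m-4.75y forward swap
# (discounting ignored, as in the answer). Solving for the forward instead:
libor_3m = 1.00                 # hypothetical 3m Libor, percent
swap_5y = 2.00                  # hypothetical 5y swap rate, percent
fwd_3m_4p75y = (5 * swap_5y - 0.25 * libor_3m) / 4.75
print(round(fwd_3m_4p75y, 4))   # 2.0526: the forward sits above the par rate
```

The second calculation makes the "appropriately scaled" point concrete: with Libor below the par rate, the forward swap rate must sit above it so the dv01-weighted average comes back to 2.00%.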
First of all, it seems that you are solely concerned about the Funding Valuation Adjustment (FVA) here, and not CVA; sovereigns have credit risk which should also be valued here, given they would not be posting any collateral as mitigant when the market moves in your favour. But let's focus on FVA: It is important to think about FVA (and all other VAs also) ... An interest rate swap (IRS) can have a vega component if it is not a standard IRS. If you are familiar with the convexity adjustment for FRAs (and single-period IRSs) compared with their respective short term interest rate (STIR) futures, you will be aware that it is the different gamma components of these products that result in profit-and-loss (PnL) over ... Instrument 2 looks to me like the standard definition of a 3x6 FRA. This is a relatively liquid instrument, so that forward rate r2 is just the price of the FRA and is available on Bloomberg, etc. If you have a yield curve model and associated suite of functions, there will certainly be a function to return that forward rate, because it's vanilla....
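The 3x6 forward rate can be read off any discount curve; a minimal sketch (the flat curve and the helper name are assumptions for illustration, not from the post):

```python
import math

def simple_forward_rate(df1, df2, tau):
    """Simply-compounded forward rate over (t1, t2) implied by the
    discount factors df1 = D(0, t1), df2 = D(0, t2) and accrual tau."""
    return (df1 / df2 - 1.0) / tau

# Hypothetical flat 2% continuously-compounded curve:
r = 0.02
df_3m, df_6m = math.exp(-r * 0.25), math.exp(-r * 0.50)
fwd_3x6 = simple_forward_rate(df_3m, df_6m, 0.25)   # just above 2%
```

Any production curve library exposes an equivalent function, since this is the vanilla forward referred to above.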
I came across John Duffield on Quantum Computing SE via this hot question. I was curious to see an account with 1 reputation and a question with hundreds of upvotes. It turned out that the reason why he has so little reputation despite a massively popular question is that he was suspended. May I ... @Nelimee Do we need to merge? Currently, there's just one question with "phase-estimation" and another question with "quantum-phase-estimation". Might we as well use just one tag? (say just "phase-estimation") @Blue 'merging', if I'm getting the terms right, is a specific single action that does exactly that and is generally preferable to editing tags on questions. Having said that, if it's just one question, it doesn't really matter, although performing a proper merge is still probably preferable. Merging is taking all the questions with a specific tag and replacing that tag with a different one, on all those questions, on a tag level, without permanently changing anything about the underlying tags. @Blue yeah, you could do that. It generally requires votes, so it's probably not worth bothering when only one question has that tag. @glS "Every Hermitian matrix satisfies this property: more specifically, all and only Hermitian matrices have this property" ha? I thought it was only a subset of the set of valid matrices ^^ Thanks for the precision :) @Nelimee if you think about it it's quite easy to see. Unitary matrices are the ones with phases as eigenvalues, while Hermitians have real eigenvalues. Therefore, if a matrix is not Hermitian (does not have real eigenvalues), then its exponential will not have eigenvalues of the form $e^{i\phi}$ with $\phi\in\mathbb R$.
Although I'm not sure whether there could be exceptions for non-diagonalizable matrices (if $A$ is not diagonalizable, then the above argument doesn't work). This is an elementary question, but a little subtle, so I hope it is suitable for MO. Let $T$ be an $n \times n$ square matrix over $\mathbb{C}$. The characteristic polynomial $\det(T - \lambda I)$ splits into linear factors like $T - \lambda_iI$, and we have the Jordan canonical form: $$ J = \begin... @Nelimee no! unitarily diagonalizable matrices are all and only the normal ones (satisfying $AA^\dagger =A^\dagger A$). For general diagonalizability, if I'm not mistaken, one characterization is that the sum of the dimensions of the eigenspaces has to match the total dimension. @Blue I actually agree with Nelimee here that it's not that easy. You get $UU^\dagger = e^{iA} e^{-iA^\dagger}$, but if $A$ and $A^\dagger$ do not commute it's not straightforward that this doesn't give you an identity. I'm getting confused. I remember there being some theorem about one-to-one mappings between unitaries and Hermitians provided by the exponential, but it was some time ago and I may be confusing things in my head. @Nelimee if there is a $0$ there then it becomes the normality condition. Otherwise it means that the matrix is not normal, therefore not unitarily diagonalizable, but still the product of exponentials is relatively easy to write. @Blue you are right indeed. If $U$ is unitary then for sure you can write it as the exponential of a Hermitian (times $i$). This is easily proven because $U$ is ensured to be unitarily diagonalizable, so you can simply compute its logarithm through the eigenvalues. However, logarithms are tricky and multivalued, and there may be logarithms which are not diagonalizable at all. I've actually recently asked some questions on math.SE on related topics. @Mithrandir24601 indeed, that was also what @Nelimee showed with an example above. I believe my argument holds for unitarily diagonalizable matrices.
If a matrix is only generally diagonalizable (so it's not normal) then it's not true. Also probably even more generally without $i$ factors. So, in conclusion, it does indeed seem that $e^{iA}$ unitary implies $A$ Hermitian. It therefore also seems that $e^{iA}$ unitary implies $A$ normal, so that also my argument passing through the spectra works (though one has to show that $A$ is ensured to be normal). Now what we need to look for is 1) the exact set of conditions for which the matrix exponential $e^A$ of a complex matrix $A$ is unitary, and 2) the exact set of conditions for which the matrix exponential $e^{iA}$ of a real matrix $A$ is unitary. @Blue fair enough - as with @Semiclassical I was thinking about it with the t parameter, as that's what we care about in physics :P I can possibly come up with a number of non-Hermitian matrices that give unitary evolution for a specific t. Or rather, the exponential of which is unitary for $t+n\tau$, although I'd need to check. If you're afraid of the density of diagonalizable matrices, simply triangularize $A$. You get $$A=P^{-1}UP,$$ with $U$ upper triangular and the eigenvalues $\{\lambda_j\}$ of $A$ on the diagonal. Then $$\mbox{det}\;e^A=\mbox{det}(P^{-1}e^UP)=\mbox{det}\;e^U.$$ Now observe that $e^U$ is upper ... There are 15 hours left on a bountied question, but the person who offered the bounty is suspended and his suspension doesn't expire until about 2 days, meaning he may not be able to award the bounty himself? That's not fair: it's a 300 point bounty, the largest bounty ever offered on QCSE. Let h...
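The discussion above can be checked numerically: for Hermitian $A$, $e^{iA}$ is unitary, while a nilpotent (hence non-normal) $A$ gives a counterexample. A small numpy sketch (the helper is mine, built from the spectral theorem, not from the chat):

```python
import numpy as np

def expm_herm(A):
    """e^{iA} for Hermitian A, via the spectral theorem: A = V diag(w) V†."""
    w, V = np.linalg.eigh(A)
    return (V * np.exp(1j * w)) @ V.conj().T

rng = np.random.default_rng(0)
B = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
A = (B + B.conj().T) / 2                  # Hermitian by construction
U = expm_herm(A)
is_unitary_herm = np.allclose(U @ U.conj().T, np.eye(3))       # True

# Non-normal counterexample: N is nilpotent (N @ N = 0), so e^{iN} = I + iN,
# which is visibly not unitary.
N = np.array([[0.0, 1.0], [0.0, 0.0]])
V2 = np.eye(2) + 1j * N
is_unitary_nonnormal = np.allclose(V2 @ V2.conj().T, np.eye(2))  # False
```

This matches @Nelimee's example: the exponential of a non-normal matrix need not be unitary.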
The bounty period lasts 7 days. Bounties must have a minimum duration of at least 1 day. After the bounty ends, there is a grace period of 24 hours to manually award the bounty. Simply click the bounty award icon next to each answer to permanently award your bounty to the answerer. (You cannot award a bounty to your own answer.) @Mathphile I found no prime of the form $$n^{n+1}+(n+1)^{n+2}$$ for $n>392$ yet, and neither a reason why the expression cannot be prime for odd $n$, although there are far more even cases without a known factor than odd cases. @TheSimpliFire That's what I'm thinking about; I had some "vague feeling" that there must be some elementary proof, so I decided to find it, and then I found it. It is really "too elementary", but I like surprises, if they're good. It is in fact difficult, I did not understand all the details either. But the ECM method is analogous to the p-1 method, which works well when there is a factor p such that p-1 is smooth (has only small prime factors). Brocard's problem is a problem in mathematics that asks to find integer values of n and m for which $$n!+1=m^2,$$ where $n!$ is the factorial. It was posed by Henri Brocard in a pair of articles in 1876 and 1885, and independently in 1913 by Srinivasa Ramanujan. Brown numbers: pairs of the numbers (n, m) that solve Brocard's problem are called Brown numbers. There are only three known pairs of Brown numbers: (4,5), (5,11... $\textbf{Corollary.}$ No solutions to Brocard's problem (with $n>10$) occur when $n$ satisfies either \begin{equation}n!=[2\cdot 5^{2^k}-1\pmod{10^k}]^2-1\end{equation} or \begin{equation}n!=[2\cdot 16^{5^k}-1\pmod{10^k}]^2-1\end{equation} for a positive integer $k$. These are the OEIS sequences A224473 and A224474. Proof: First, note that since $(10^k\pm1)^2-1\equiv((-1)^k\pm1)^2-1\equiv1\pm2(-1)^k\not\equiv0\pmod{11}$, $m\ne 10^k\pm1$ for $n>10$.
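The known Brown numbers can be recovered by brute force; a short sketch (the function name is mine):

```python
import math

def brown_numbers(n_max):
    """Brute-force search for pairs (n, m) with n! + 1 = m^2."""
    pairs = []
    for n in range(1, n_max + 1):
        t = math.factorial(n) + 1
        m = math.isqrt(t)          # integer square root, exact for big ints
        if m * m == t:
            pairs.append((n, m))
    return pairs

print(brown_numbers(30))  # [(4, 5), (5, 11), (7, 71)]
```

No further pairs are known; the search has been carried far beyond this range in the literature.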
If $k$ denotes the number of trailing zeros of $n!$, Legendre's formula implies that \begin{equation}k=\min\left\{\sum_{i=1}^\infty\left\lfloor\frac n{2^i}\right\rfloor,\sum_{i=1}^\infty\left\lfloor\frac n{5^i}\right\rfloor\right\}=\sum_{i=1}^\infty\left\lfloor\frac n{5^i}\right\rfloor\end{equation} where $\lfloor\cdot\rfloor$ denotes the floor function. The upper limit can be replaced by $\lfloor\log_5n\rfloor$ since for $i>\lfloor\log_5n\rfloor$, $\left\lfloor\frac n{5^i}\right\rfloor=0$. An upper bound can be found using geometric series and the fact that $\lfloor x\rfloor\le x$: \begin{equation}k=\sum_{i=1}^{\lfloor\log_5n\rfloor}\left\lfloor\frac n{5^i}\right\rfloor\le\sum_{i=1}^{\lfloor\log_5n\rfloor}\frac n{5^i}=\frac n4\left(1-\frac1{5^{\lfloor\log_5n\rfloor}}\right)<\frac n4.\end{equation} Thus $n!$ has $k$ zeroes for some $n\in(4k,\infty)$. Since each of $m=2\cdot5^{2^k}-1\pmod{10^k}$ and $2\cdot16^{5^k}-1\pmod{10^k}$ has at most $k$ digits, $m^2-1$ has at most $2k$ digits under the conditions in the Corollary. The Corollary therefore follows if $n!$ has more than $2k$ digits for $n>10$. From equation $(4)$, $n!$ has at least the same number of digits as $(4k)!$.
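Legendre's formula and the bound $k < n/4$ are easy to verify directly (the function name is mine):

```python
import math

def trailing_zeros(n):
    """Trailing zeros of n! via Legendre's formula (the base-5 part)."""
    k, p = 0, 5
    while p <= n:
        k += n // p
        p *= 5
    return k

for n in (7, 25, 100):
    s = str(math.factorial(n))
    direct = len(s) - len(s.rstrip("0"))   # count zeros of n! directly
    assert trailing_zeros(n) == direct
    assert trailing_zeros(n) < n / 4       # the bound used in the proof
```

For instance, $100!$ ends in $24$ zeros, and indeed $24 < 100/4$.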
Stirling's formula implies that \begin{equation}(4k)!>\frac{\sqrt{2\pi}\left(4k\right)^{4k+\frac{1}{2}}}{e^{4k}}\end{equation} Since the number of digits of an integer $t$ is $1+\lfloor\log t\rfloor$ where $\log$ denotes the logarithm in base $10$, the number of digits of $n!$ is at least \begin{equation}1+\left\lfloor\log\left(\frac{\sqrt{2\pi}\left(4k\right)^{4k+\frac{1}{2}}}{e^{4k}}\right)\right\rfloor\ge\log\left(\frac{\sqrt{2\pi}\left(4k\right)^{4k+\frac{1}{2}}}{e^{4k}}\right).\end{equation} Therefore it suffices to show that for $k\ge2$ (since $n>10$ and $k<n/4$), \begin{equation}\log\left(\frac{\sqrt{2\pi}\left(4k\right)^{4k+\frac{1}{2}}}{e^{4k}}\right)>2k\iff8\pi k\left(\frac{4k}e\right)^{8k}>10^{4k}\end{equation} which holds if and only if \begin{equation}\left(\frac{10}{\left(\frac{4k}e\right)^2}\right)^{4k}<8\pi k\iff k^2(8\pi k)^{\frac1{4k}}>\frac58e^2.\end{equation} Now consider the function $f(x)=x^2(8\pi x)^{\frac1{4x}}$ over the domain $\Bbb R^+$, which is clearly positive there. Then after considerable algebra it is found that \begin{align*}f'(x)&=2x(8\pi x)^{\frac1{4x}}+\frac14(8\pi x)^{\frac1{4x}}(1-\ln(8\pi x))\\\implies f'(x)&=\frac{2f(x)}{x^2}\left(x+\frac18-\frac18\ln(8\pi x)\right)>0\end{align*} for $x>0$ as $\min\{x+\frac18-\frac18\ln(8\pi x)\}>0$ in the domain. Thus $f$ is monotonically increasing in $(0,\infty)$, and since $2^2(8\pi\cdot2)^{\frac18}>\frac58e^2$, the inequality in equation $(8)$ holds. This means that the number of digits of $n!$ exceeds $2k$, proving the Corollary. $\square$ We get $n^n+3\equiv 0\pmod 4$ for odd $n$, so we can see from here that it is even (or, we could have used @TheSimpliFire's one-or-two-step method to derive this without any contradiction - which is better) @TheSimpliFire Hey! with $4\pmod {10}$ and $0\pmod 4$ then this is the same as $10m_1+4$ and $4m_2$. If we set them equal to each other, we have that $5m_1=2(m_2-1)$ which means $m_1$ is even.
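The monotonicity claim and the base case $k=2$ for the auxiliary function $f(x)=x^2(8\pi x)^{1/(4x)}$ can be checked numerically (a sketch, not part of the original proof):

```python
import math

def f(x):
    """Auxiliary function f(x) = x^2 * (8*pi*x)^(1/(4x)) from the proof."""
    return x * x * (8 * math.pi * x) ** (1 / (4 * x))

target = 5 / 8 * math.e ** 2               # approx 4.62
assert f(2) > target                       # base case k = 2 of the inequality
xs = [0.5 + 0.01 * i for i in range(1000)]
assert all(f(a) < f(b) for a, b in zip(xs, xs[1:]))  # numerically increasing
```

Numerically $f(2)\approx6.5$, comfortably above $\frac58e^2\approx4.62$.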
We get $4\pmod {20}$ now :P Yet again a conjecture! Motivated by Catalan's conjecture and a recent question of mine, I conjecture that for distinct, positive integers $a,b$, the only solution to this equation $$a^b-b^a=a+b\tag1$$ is $(a,b)=(2,5)$. It is anticipated that there will be far fewer solutions for incr...
Category:Definitions/Geometric Rotations A rotation is usually defined for either: $n = 2$, representing the plane, or: $n = 3$, representing ordinary space. In the plane, let $r_\alpha$ denote the rotation of the plane $\Gamma$ about a point $O$ through an angle $\alpha$. Then: $\map {r_\alpha} O = O$ That is, $O$ maps to itself. Let $P \in \Gamma$ such that $P \ne O$. Let $OP$ be joined by a straight line. Let a straight line $OP'$ be constructed such that: $(1): \quad OP' = OP$ $(2): \quad \angle POP' = \alpha$ such that $OP \to OP'$ is in the anticlockwise direction. Then: $\map {r_\alpha} P = P'$ In space, let $r_\theta$ denote the rotation about an axis $AB$ through an angle $\theta$. Then: $\forall P \in AB: \map {r_\theta} P = P$ Let $P$ be a point such that $P \notin AB$, let $\Gamma$ be the plane through $P$ perpendicular to $AB$, and let $O$ be the point at which $AB$ meets $\Gamma$. Construct $OP'$ such that: $(1): \quad OP' = OP$ $(2): \quad \angle POP' = \theta$ such that $OP \to OP'$ is in the anticlockwise direction. Then: $\map {r_\theta} P = P'$
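The planar construction can be sketched with complex numbers, where rotation about $O$ through $\alpha$ is multiplication by $e^{i\alpha}$ (the helper is mine, not from the category page):

```python
import cmath
import math

def rotate(P, O, alpha):
    """Rotation r_alpha about centre O: maps P anticlockwise through alpha,
    preserving the distance OP (points modelled as complex numbers)."""
    return O + (P - O) * cmath.exp(1j * alpha)

O, P = 0 + 0j, 1 + 0j
assert rotate(O, O, 1.0) == O                    # the centre maps to itself
P2 = rotate(P, O, math.pi / 2)
assert abs(abs(P2 - O) - abs(P - O)) < 1e-12     # condition (1): OP' = OP
```

Condition (2) holds by construction, since multiplying $P-O$ by $e^{i\alpha}$ adds exactly $\alpha$ to its argument.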
The marginal pdf will be calculated over the area defined by a triangle as mentioned in the comments. The reason for it lies in the boundary constraints $0 < x < y < 2$, where the bivariate joint pdf is defined. To see this, we can mentally slide along the $x$ axis from $0$ to $2$, and see how at any given point, the $y$ axis will be past the bisecting line ($y = x$) on the $xy$ plane by the inequality $y > x$, and with a maximum of $2.$ I tried illustrating this with the following plot, which looks at the $z= \frac{1}{2}\,x\,y$ surface slightly from the top and front. The area we are going to integrate over when obtaining the marginal pdf's will be the light blue (crayon) triangle, which will cut through the overhead surface "in front of" the yellow plane: And here is the view of the region where the pdf is defined (in blue) seen from above with the $z = 1/2\,x\,y$ surface in gray: Therefore the marginal pdf of $Y$ will be found by integrating the joint pdf, $f(x,y)$ over the support of $Y$, corresponding to the triangle in the plot: $f_Y(y) =\displaystyle\int_{x\,=\,0}^{x\,=\,y} f(x,y)\, dx = \displaystyle\int_0^y \frac{1}{2}\,x\,y\, dx = \frac{x^2}{4}\,y\,\Big|_{x\,=\,0}^{x\,=\,y}=\frac{y^3}{4}$, for $0<y<2.$ And the marginal of $X$ will be obtained by integrating away the $y$, keeping in mind that the lower boundary of $Y$ is $X$: $f_X(x) =\displaystyle\int_{y\,=\,x}^{y\,=\,2} f(x,y)\, dy = \displaystyle\int_x^2 \frac{1}{2}\,x\,y\, dy =x\, \frac{y^2}{4}\,\Big|_{y\,=\,x}^{y\,=\,2}=x-\frac{x^3}{4}$, for $0 < x < 2.$ The conditional pdf of $Y$ given $X=x$, $f_{Y|X}$, is given by $\large \frac{f(x,y)}{f_X(x)}=\frac{2\,y}{4-x^2}$, using the joint pdf and the marginal of $X$ found above. The conditional has support $x<y<2$ for each fixed $0<x<2.$
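The two marginals derived above can be sanity-checked numerically: each must integrate to 1, and $f_X$ must agree with integrating the joint pdf over $y\in(x,2)$. A small sketch using a midpoint Riemann sum (the helper names are mine):

```python
from math import isclose

# Joint pdf f(x, y) = x*y/2 on 0 < x < y < 2; the derived marginals are:
def f_Y(y): return y ** 3 / 4
def f_X(x): return x - x ** 3 / 4

def riemann(g, a, b, n=100_000):
    """Midpoint-rule approximation of the integral of g over [a, b]."""
    h = (b - a) / n
    return sum(g(a + (i + 0.5) * h) for i in range(n)) * h

assert isclose(riemann(f_Y, 0, 2), 1.0, abs_tol=1e-6)   # each marginal
assert isclose(riemann(f_X, 0, 2), 1.0, abs_tol=1e-6)   # integrates to 1

# Cross-check f_X at one point against integrating the joint pdf over y:
x0 = 0.7
assert isclose(riemann(lambda y: x0 * y / 2, x0, 2), f_X(x0), abs_tol=1e-6)
```

Both checks pass, confirming the triangle-shaped region of integration was handled correctly.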
First, you are confusing Lie groups with Lie algebras. Casimir elements are objects that can be attached to certain Lie algebras. Second, Casimir elements do not always exist. For any Lie algebra $\mathfrak{g}$, there is a canonical bilinear form, the Killing form $$B(x, y) = \text{tr}(\text{ad}_x \text{ad}_y)$$ where $\text{ad}_x(y) = [x, y]$ is the adjoint action of $\mathfrak{g}$ on itself. The Casimir element exists if and only if the Killing form is nondegenerate, which is equivalent to $\mathfrak{g}$ being semisimple (in particular, finite-dimensional). Concretely this means that $B(x, -)$ is a nonzero linear functional for any nonzero $x$. Abstractly this means that the map$$\mathfrak{g} \ni x \mapsto B(x, -) \in \mathfrak{g}^{\ast}$$ (where $\mathfrak{g}^{\ast}$ is the dual space of linear functionals $\mathfrak{g} \to k$ for our base field $k$ of characteristic zero) is an isomorphism. For an abelian Lie algebra, the Killing form is identically zero, and so the Casimir element does not exist in that case. In the semisimple case, the Killing form itself defines a linear functional $\mathfrak{g} \otimes \mathfrak{g} \to k$ (where $\otimes$ denotes the tensor product), or an element of $\mathfrak{g}^{\ast} \otimes \mathfrak{g}^{\ast}$, and because of the above isomorphism one can equivalently write the Killing form as an element of $\mathfrak{g} \otimes \mathfrak{g}$. This is the Casimir element. Concretely, we can compute the Casimir element as follows. Given a basis $e_1, ... e_n$ of $\mathfrak{g}$, compute the Killing form $B$ using the structure constants of $\mathfrak{g}$, then compute the dual basis $f_1, ... f_n$, which is the unique basis satisfying$$B(e_i, f_j) = \delta_{ij}.$$ Then the Casimir element is given by $\sum e_i \otimes f_i$. Since you are aware of this definition, perhaps what you're stuck on is either computing the Killing form or computing the dual basis. For fixed $\mathfrak{g}$ both of these are fairly straightforward linear algebra. 
Which step are you having trouble with? This post imported from StackExchange Mathematics at 2014-06-02 10:58 (UCT), posted by SE-user Qiaochu Yuan
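As an illustration of the recipe above, here is a small numpy sketch for $\mathfrak{sl}_2$ with the standard basis $(e, h, f)$ and brackets $[h,e]=2e$, $[h,f]=-2f$, $[e,f]=h$ (the concrete matrices are mine, worked out from those structure constants):

```python
import numpy as np

# ad_x written as a matrix acting on coordinate vectors in the basis (e, h, f):
ad_e = np.array([[0, -2, 0], [0, 0, 1], [0, 0, 0]], float)
ad_h = np.diag([2.0, 0.0, -2.0])
ad_f = np.array([[0, 0, 0], [-1, 0, 0], [0, 2, 0]], float)
ads = [ad_e, ad_h, ad_f]

# Killing form B(x, y) = tr(ad_x ad_y) as a 3x3 Gram matrix:
B = np.array([[np.trace(X @ Y) for Y in ads] for X in ads])
assert np.linalg.matrix_rank(B) == 3    # nondegenerate: sl(2) is semisimple

# Dual basis coefficients: column j of inv(B) gives f_j with B(e_i, f_j) = d_ij.
# The Casimir element is then sum_i e_i (x) f_i in g (x) g.
dual = np.linalg.inv(B)
assert np.allclose(B @ dual, np.eye(3))
```

The computed Gram matrix is $B(e,f)=4$, $B(h,h)=8$ and zero elsewhere, matching the well-known Killing form of $\mathfrak{sl}_2$.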
I also do not understand the textbook solution. In an equilibrium situation the ball is effectively rolling down a plane inclined at angle $\theta$ while the plane is itself accelerating upwards along the plane. In the ground frame of reference the centre of mass of the ball is stationary, so the resultant force on it must be zero. However, there is a non-zero torque which causes rotational acceleration down the plane. The torque acting on the ball is $mgr\sin\theta$ where $r$ is its radius. Its acceleration down the plane, relative to the point of contact, is $r\beta$ where $\beta$ is its angular acceleration. Since there is no slipping at the point of contact, $r\beta$ must also equal the acceleration of the point of contact up the plane, which is $R\alpha$ where $R>r$ is the radius of the cylinder. That is : $$r\beta=R\alpha$$ The equation of motion for the angular acceleration of the ball is therefore $$mgr\sin\theta=I\beta=\frac25 mr^2 \frac{R}{r}\alpha$$ $$\sin\theta=\frac{2R\alpha}{5g}$$ If there is a finite coefficient of friction $\mu$ then we must also have that $$\tan\theta \le \mu$$
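Plugging numbers into the result $\sin\theta = 2R\alpha/(5g)$ is straightforward; a sketch with hypothetical values (the radius and angular acceleration are illustrative, not from the problem):

```python
import math

def required_angle(R, alpha, g=9.81):
    """Equilibrium angle of the ball from sin(theta) = 2*R*alpha / (5*g)."""
    return math.asin(2 * R * alpha / (5 * g))

# Hypothetical numbers: R = 0.5 m cylinder, angular acceleration 3 rad/s^2.
theta = required_angle(0.5, 3.0)
mu_min = math.tan(theta)    # minimum friction coefficient for no slipping
```

For these values the angle is small (about 3.5 degrees), so only a modest friction coefficient is needed to satisfy $\tan\theta\le\mu$.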
Let's assume we restrict consideration to symmetric distributions where the mean and variance are finite (so the Cauchy, for example, is excluded from consideration). Further, I'm going to limit myself initially to continuous unimodal cases, and indeed mostly to 'nice' situations (though I might come back later and discuss some other cases). The relative variance depends on sample size. It's common to discuss the ratio of ($n$ times the) asymptotic variances, but we should keep in mind that at smaller sample sizes the situation will be somewhat different. (The median sometimes does noticeably better or worse than its asymptotic behaviour would suggest. For example, at the normal with $n=3$ it has an efficiency of about 74% rather than 63%. The asymptotic behavior is generally a good guide at quite moderate sample sizes, though.) The asymptotics are fairly easy to deal with: Mean: $n\times$ variance = $\sigma^2$. Median: $n\times$ variance = $\frac{1}{4f(m)^2}$ where $f(m)$ is the height of the density at the median. So if $f(m)>\frac{1}{2\sigma}$, the median will be asymptotically more efficient. [In the normal case, $f(m)=\frac{1}{\sqrt{2\pi}\sigma}$, so $\frac{1}{4f(m)^2}=\frac{\pi\sigma^2}{2}$, whence the asymptotic relative efficiency is $2/\pi$.] We can see that the variance of the median will depend on the behaviour of the density very near the center, while the variance of the mean depends on the variance of the original distribution (which in some sense is affected by the density everywhere, and in particular, more by the way it behaves further away from the center). Which is to say, while the median is less affected by outliers than the mean, and we often see that it has lower variance than the mean when the distribution is heavy tailed (which does produce more outliers), what really drives the performance of the median is inliers. It often happens that (for a fixed variance) there's a tendency for the two to go together.
That is, broadly speaking, as the tail gets heavier, there's a tendency for (at a fixed value of $\sigma^2$) the distribution to get "peakier" at the same time (more kurtotic, in a loose sense). This is not, however, a certain thing - it tends to be the case across a broad range of commonly considered densities, but it doesn't always hold. When it does hold, the variance of the median will reduce (because the distribution has more probability in the immediate neighborhood of the median), while the variance of the mean is held constant (because we fixed $\sigma^2$). So across a variety of common cases the median will often tend to do "better" than the mean when the tail is heavy, (but we must keep in mind that it's relatively easy to construct counterexamples). So we can consider a few cases, which can show us what we often see, but we shouldn't read too much into them, because heavier tail doesn't universally go with higher peak. We know the median is about 63.7% as efficient (for $n$ large) as the mean at the normal. What about, say a logistic distribution, which like the normal is approximately parabolic about the center, but has heavier tails (as $x$ becomes large, they become exponential). If we take the scale parameter to be 1, the logistic has variance $\pi^2/3$ and height at the median of 1/4, so $\frac{1}{4f(m)^2}=4$. The ratio of variances is then $\pi^2/12\approx 0.82$ so in large samples, the median is roughly 82% as efficient as the mean. Let's consider two other densities with exponential-like tails, but different peakedness. First, the hyperbolic secant ($\text{sech}$) distribution, for which the standard form has variance 1 and height at the center of $\frac{1}{2}$, so the ratio of asymptotic variances is 1 (the two are equally efficient in large samples). However, in small samples the mean is more efficient (its variance is about 95% of that for the median when $n=5$, for example). 
Here we can see how, as we progress through those three densities (holding variance constant), that the height at the median increases: Can we make it go still higher? Indeed we can. Consider, for example, the double exponential. The standard form has variance 2, and the height at the median is $\frac{1}{2}$ (so if we scale to unit variance as in the diagram, the peak is at $\frac{1}{\sqrt{2}}$, just above 0.7). The asymptotic variance of the median is half that of the mean. If we make the distribution peakier still for a given variance, (perhaps by making the tail heavier than exponential), the median can be far more efficient (relatively speaking) still. There's really no limit to how high that peak can go. If we had instead used examples from say the t-distributions, broadly similar effects would be seen, but the progression would be different; the crossover point is a little below $\nu=5$ df (actually around 4.68) -- for smaller df the median is more efficient, for large df the mean is. ... At finite sample sizes, it's sometimes possible to compute the variance of the distribution of the median explicitly. Where that's not feasible - or even just inconvenient - we can use simulation to compute the variance of the median (or the ratio of the variance*) across random samples drawn from the distribution (which is what I did to get the small sample figures above). * Even though we often don't actually need the variance of the mean, since we can compute it if we know the variance of the distribution, it may be more computationally efficient to do so, since it acts like a control variate (the mean and median are often quite correlated).
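The simulation approach described in the last paragraph can be sketched in a few lines; the figures quoted above (roughly 74% efficiency at $n=3$ for the normal, tending toward $2/\pi\approx0.64$) come out of exactly this kind of experiment. The helper name and sample sizes here are mine:

```python
import random
import statistics

def efficiency(n, reps, seed=1):
    """Monte-Carlo estimate of Var(sample mean) / Var(sample median)
    for standard normal samples of size n (relative efficiency of the median)."""
    rng = random.Random(seed)
    means, medians = [], []
    for _ in range(reps):
        x = [rng.gauss(0.0, 1.0) for _ in range(n)]
        means.append(sum(x) / n)
        medians.append(statistics.median(x))
    return statistics.pvariance(means) / statistics.pvariance(medians)

eff_3 = efficiency(3, reps=100_000)    # about 0.74, as quoted above
eff_51 = efficiency(51, reps=20_000)   # nearer the asymptotic 2/pi
```

Swapping `rng.gauss` for a heavier-tailed generator (e.g. a double exponential) pushes the ratio above 1, illustrating the crossover discussed above.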
Introduction Built at the Jet Propulsion Laboratory by an Investigation Definition Team (IDT) headed by John Trauger, WFPC2 was the replacement for the first Wide Field and Planetary Camera (WF/PC-1) and includes built-in corrections for the spherical aberration of the HST Optical Telescope Assembly (OTA). The WFPC2 was installed in HST during the First Servicing Mission in December 1993 and removed during Servicing Mission 4 in 2009. Early IDT report of the WFPC2 on-orbit performance: Trauger et al. (1994, ApJ, 435, L3). A more detailed assessment of its capabilities: Holtzman et al. (1995, PASP, 107, page 156 and page 1065). The WFPC2 was used to obtain high resolution images of astronomical objects over a relatively wide field of view and a broad range of wavelengths (1150 to 11,000 Å). WFPC2 data can be found on the MAST Archive. Selected ISRs: 2010-04: The Dependence of WFPC2 Charge Transfer Efficiency on Background Illumination; 2010-01: WFPC2 Standard Star CTE. Optical Configuration While it was in operation, the WFPC2 field of view was located at the center of the HST focal plane. The central portion of the f/24 beam coming from the OTA would be intercepted by a steerable pick-off mirror attached to the WFPC2 and diverted through an open port entry into the instrument. The beam would then pass through a shutter and interposable filters. An assembly of 12 filter wheels contained a total of 48 spectral elements and polarizers. The light would then fall onto a shallow-angle, four-faceted pyramid, located at the aberrated OTA focus. Each face of the pyramid was a concave spherical surface, dividing the OTA image of the sky into four parts. After leaving the pyramid, each quarter of the full field of view would then be relayed by an optically flat mirror to a Cassegrain relay that would form a second field image on a charge-coupled device (CCD) of 800 x 800 pixels.
Each of these four detectors was housed in a cell sealed by a MgF2 window, which is figured to serve as a field flattener. The aberrated HST wavefront was corrected by introducing an equal but opposite error in each of the four Cassegrain relays. An image of the HST primary mirror would then be formed on the secondary mirrors in the Cassegrain relays. The spherical aberration from the telescope's primary mirror would be corrected on these secondary mirrors, which were extremely aspheric; the resulting point spread function was quite close to that originally expected for WF/PC-1. Field of View The U2,U3 axes were defined by the "nominal" Optical Telescope Assembly (OTA) axis, which was near the center of the WFPC2 FOV. The readout direction was marked with an arrow near the start of the first row in each CCD; note that it rotated 90 degrees between successive chips. The x,y arrows mark the coordinate axes for any POS TARG commands that may have been specified in the proposal. POS TARG, an optional special requirement in HST observing proposals, places the target at an offset (in arcsec) from the specified aperture. Camera Configurations

Camera      Pixels      Field of View   Scale               f/ratio
PC (PC1)    800 x 800   36" x 36"       0.0455" per pixel   28.3
WF2, 3, 4   800 x 800   80" x 80"       0.0996" per pixel   12.9

A Note about HST File Formats Data from WFPC2 are made available to observers as files in Multi-Extension FITS (MEF) format, which is directly readable by most PyRAF/IRAF/STSDAS tasks. All WFPC2 data are now available in either waivered FITS or MEF formats. The user may specify either format when retrieving that data from the HDA. WFPC2 data, in either Generic Edited Information Set (GEIS) or MEF formats, can be fully processed with STSDAS tasks. The figure below provides a physical representation of the typical data format.
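The fields of view in the configuration table follow directly from pixel count times pixel scale, as a quick arithmetic check shows:

```python
# Field of view per side = number of pixels x pixel scale (arcsec/pixel):
pc_fov = 800 * 0.0455    # PC chip: about 36.4 arcsec per side
wf_fov = 800 * 0.0996    # each WF chip: about 79.7 arcsec per side
```

The quoted 36" and 80" values are these products, rounded.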
Resources Charge Traps There are about 30 pixels in WFPC2 that are "charge traps" which do not transfer charge efficiently during readout, producing artifacts that are often quite noticeable. Typically, charge is delayed into successive pixels, producing a streak above the defective pixel. In the worst cases, the entire column above the pixel can be rendered useless. On blank sky, these traps will tend to produce a dark streak. However, when a bright object or cosmic ray is read through them, a bright streak will be produced. Here, we show streaks (a) in the background sky, and (b) stellar images produced by charge traps in the WFPC2. Individual traps have been cataloged and their identifying numbers are shown. Warm Pixels and Annealing Decontaminations (anneals), during which the instrument is warmed up to about 22 °C for a period of six hours, were performed about once per month. These procedures were required in order to remove the UV-blocking contaminants which gradually build up on the CCD windows (thereby restoring the UV throughput) as well as to fix warm pixels. Examples of warm pixels are presented in the figure below. Calibration accuracies:

Bias subtraction: 0.1 DN rms (unless a bias jump is present)
Dark subtraction: 0.1 DN/hr rms (error larger for warm pixels; absolute error uncertain because of dark glow)
Flat fielding: <1% rms large scale, 0.3% rms small scale (visible, near UV); ~10% for F160BW, though significant noise reduction is achieved with use of correction flats

Relative photometry accuracies:

Residuals in CTE correction: <3% for the majority (~90%) of cases; up to 10% for extreme cases (e.g., very low backgrounds)
Long vs. short anomaly (uncorrected): <5% magnitude errors; <1% for well-exposed stars but may be larger for fainter stars. Some studies have failed to confirm the effect.
(see Chapter 5 of IHB for more details)

Aperture correction: 4% rms focus dependence (1-pixel aperture); <1% focus dependence (>5-pixel aperture); 1-2% field dependence (1-pixel aperture). Can (and should) be determined from the data.
Contamination correction: 3% rms max 28 days after decon (F160BW); 1% rms max 28 days after decon (filters bluer than F555W)
Background determination: 0.1 DN/pixel (background > 10 DN/pixel); may be difficult to exceed, regardless of image S/N
Pixel centering: <1%

Absolute photometry accuracies:

Sensitivity: <2% rms for standard photometric filters; 2% rms for broad and intermediate filters in the visible; <5% rms for narrow-band filters in the visible; 2-8% rms for UV filters

Astrometry accuracies:

Relative: 0.005" rms (after geometric and 34th-row corrections), same chip; 0.1" (estimated) across chips
Absolute: 1" rms (estimated)

Photometric Systems Used for WFPC2 Data The WFPC2 flight system is defined so that stars of color zero in the Johnson-Cousins UBVRI system have color zero between any pair of WFPC2 filters and have the same magnitude in V and F555W. This system was established by Holtzman et al. (1995b). The zeropoints in the WFPC2 synthetic system, as defined in Holtzman et al. (1995b), are determined so that the magnitude of Vega, when observed through the appropriate WFPC2 filter, would be identical to the magnitude Vega has in the closest equivalent filter in the Johnson-Cousins system. \(m_{AB} = -48.60-2.5\log f_\nu \) \(m_{ST} = -21.10-2.5\log f_\lambda\) Photometric Corrections A number of corrections must be made to WFPC2 data to obtain the best possible photometry. Some of these, such as the corrections for UV throughput variability, are time dependent, and others, such as the correction for the geometric distortion of WFPC2 optics, are position dependent.
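The two magnitude-system formulas above translate directly into code; a minimal sketch (the function names are mine):

```python
import math

def m_AB(f_nu):
    """AB magnitude from flux density f_nu in erg s^-1 cm^-2 Hz^-1."""
    return -48.60 - 2.5 * math.log10(f_nu)

def m_ST(f_lam):
    """ST magnitude from flux density f_lambda in erg s^-1 cm^-2 A^-1."""
    return -21.10 - 2.5 * math.log10(f_lam)

# The zero point of each scale is the flux density with magnitude 0:
assert abs(m_AB(10 ** (-48.60 / 2.5))) < 1e-9
assert abs(m_ST(10 ** (-21.10 / 2.5))) < 1e-9
```

Both systems are flux-based, so no color terms appear; converting between them requires the filter pivot wavelength.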
Finally, some general corrections, such as the aperture correction, are needed as part of the analysis process. Here we provide examples of factors affecting photometric corrections: Cool Down on April 23, 1994; PSF Variations; 34th Row Defect; Gain Variation; Pixel Centering; Possible Variation in Methane Quad Filter Transmission. Polarimetry WFPC2 has a polarizer filter which can be used for wide-field polarimetric imaging from about 200 through 700 nm. This filter is a quad, meaning that it consists of four panes, each with the polarization angle oriented in a different direction, in steps of 45°. The panes are aligned with the edges of the pyramid, thus each pane corresponds to a chip. However, because the filters are at some distance from the focal plane, there is significant vignetting and cross-talk at the edges of each chip. The area free from vignetting and cross-talk is about 60" square in each WF chip, and 15" square in the PC. It is also possible to use the polarizer in a partially rotated position. Accurate calibration of WFPC2 polarimetric data is rather complex, due to the design of both the polarizer filter and the instrument itself. WFPC2 has an aluminized pick-off mirror with a 47° angle of incidence, which rotates the polarization angle of the incoming light, as well as introducing a spurious polarization of up to 5%. Thus, both the HST roll angle and the polarization angle must be taken into account. In addition, the polarizer coating on the filter has significant transmission of the perpendicular component, with a strong wavelength dependence. Astrometry Astrometry with WFPC2 means primarily relative astrometry. The high angular resolution and sensitivity of WFPC2 makes it possible, in principle, to measure precise positions of faint features with respect to other reference points in the WFPC2 field of view.
On the other hand, the absolute astrometry that can be obtained from WFPC2 images is limited by the positions of the guide stars, usually known to about 0.5" rms in each coordinate, and by the transformation between the FGS and the WFPC2, which introduces errors on the order of 0.1". Because WFPC2 consists of four physically separate detectors, it is necessary to define a coordinate system that includes all four detectors. For convenience, sky coordinates (right ascension and declination) are often used; in this case, they must be computed and carried to a precision of a few mas, in order to maintain the precision with which the relative positions and scales of the WFPC2 detectors are known. It is important to remember that the coordinates are not known with this accuracy: the absolute accuracy of the positions obtained from WFPC2 images is typically 0.5" rms in each coordinate and is limited primarily by the accuracy of the guide star positions.
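For reference, the AB and ST magnitude definitions quoted in the Photometric Systems section above translate directly into code. The following is our own illustrative sketch (the function names are ours); the flux value used in the test is the standard AB zero-point flux density, about 3.63e-20 erg s^-1 cm^-2 Hz^-1.

```python
import math

def m_ab(f_nu):
    """AB magnitude: m_AB = -48.60 - 2.5 log10(f_nu),
    with f_nu in erg s^-1 cm^-2 Hz^-1."""
    return -48.60 - 2.5 * math.log10(f_nu)

def m_st(f_lambda):
    """ST magnitude: m_ST = -21.10 - 2.5 log10(f_lambda),
    with f_lambda in erg s^-1 cm^-2 A^-1."""
    return -21.10 - 2.5 * math.log10(f_lambda)
```

A source at the AB zero-point flux density has m_AB close to 0, and a factor-of-ten drop in flux raises the magnitude by exactly 2.5, as the logarithmic definitions require.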
This MSE question made me wonder where the Leibniz notation $\frac{d^2y}{dx^2}$ for the second derivative comes from. It does not arise immediately as the obvious generalization of $\frac{dy}{dx}$. Did Leibniz use it himself? Or was it introduced later? Leibniz did use this notation, for instance in his paper Supplementum geometriae practicae, Acta Eruditorum, April 1693, p. 179 (Google Books link): The differential symbol $dx$ is due to Leibniz. He introduced also "iterated" differentials; see H.J.M. Bos, Differentials and Higher-Order Differentials in the Leibnizian Calculus (1974), page 17: Moreover, to introduce higher-order differentials, first-order differentials have to be conceived as variables ranging over an ordered sequence; if only a single $dx$ is considered, $ddx$ does not make sense. The following quotation from Leibniz ["Monitum de characteribus algebraicis", 1710] illustrates this: Further, $ddx$ is the element of the element or the difference of the differences, for the quantity $dx$ itself is not always constant, but usually increases or decreases continually. $dx, ddx, dv, ddv, dy, ddy \ldots$ We have to note that Leibniz has $xx$ for $x^2$; see page 151: Then, since $y-xx : a \ldots$ The accepted answer leaves no doubt that Leibniz was the first to write $d^2y/(dx)^2$ for the second derivative. But since I've found so many misleading justifications for this notation online, I feel that something additional needs to be said about it. Most justifications in the links above are along the lines of: "by formal manipulation" or "too obviously" $$\frac{d}{dx} \frac{d}{dx} =\frac{d^2}{dx^2}.\tag1$$ But Leibniz, the Bernoullis and Euler would not have approved of this without reservation. Not even if the equation was written in the form $$\frac{d \left(\frac{dy}{dx} \right)}{dx} = \frac{d dy}{(dx)^2},\tag2$$ which is closer to the standard of the time.
To explain, let me make a simple analogy first. No one today would claim that the following is correct $$ \frac{\log \frac{\log y}{\log x}}{\log x} = \frac{\log \log y}{(\log x)^2}, \tag3 $$ and everyone can spot the error. Analogously, for Leibniz, $d$ was an operator (he might not have called it that, but he knew it acted on variables just like $\log$) and he knew the quotient rule for $d$. So he might have approved of the following general equation $$\frac{d \left(\frac{dy}{dx} \right)}{dx} = \frac{d^2y}{(dx)^2} - \frac{dy\cdot d^2x}{(dx)^3}.\tag4$$ The reason the second term on the right disappeared was that an additional assumption was often made: it was assumed that the differential of the differential of $x$ is zero (i.e. $d^2x=0$), or put differently: $dx$ was assumed constant. This can be seen in the 1693 article of Leibniz quoted by @ViktorBlasjo, a line above $ddx:\overline{dy}^2$, where he writes posita $dy$ constante ("assuming $dy$ constant"). It can also be found in Euler's Institutiones Calculi Differentialis (1755), § 131: Now we will proceed under the assumption that $x$ increases uniformly, so that the first differentials $dx, dx^I , dx^{II},\ldots$ are equal to each other, so that the second and higher differentials are equal to zero. We can state this condition by saying that the differential of $x$, that is $dx$, is assumed to be constant. Let $y$ be any function of $x$; ... And it can be found in Lacroix's Traité du calcul différentiel et du calcul intégral (1797), p. 96: To simplify it, we observe that, the increment $dx$ being regarded as invariable, $f'(x)dx$ changes into $f'(x+dx)dx$ ... Summarizing: for Leibniz, Euler and others the equation $$ \frac{d \left(\frac{dy}{dx} \right)}{dx} = \frac{d^2y}{dx^2} \tag5 $$ was only true under the additional assumption that $dx$ is constant.
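The role of the correction term in equation (4) can be checked numerically. Here is a small sketch (our own illustration, not from the thread) using the concrete parametrization $x=t^2$, $y=t^3$, so that $y=x^{3/2}$ and the true second derivative is $\frac{3}{4}x^{-1/2}=\frac{3}{4t}$; differentials are modeled as derivatives with respect to the parameter $t$:

```python
def second_derivative_check(t):
    """Compare d(dy/dx)/dx with the right side of equation (4),
    for x = t**2, y = t**3 (so y = x**1.5)."""
    xp, xpp = 2*t, 2.0       # dx/dt and d^2x/dt^2
    yp, ypp = 3*t*t, 6*t     # dy/dt and d^2y/dt^2
    # Left side: differentiate dy/dx = yp/xp with respect to t, divide by dx/dt
    lhs = (ypp*xp - yp*xpp) / xp**3
    # Right side of (4): d^2y/(dx)^2 minus the correction term dy*d^2x/(dx)^3
    rhs = ypp/xp**2 - yp*xpp/xp**3
    exact = 3/(4*t)          # true second derivative of y = x**1.5 at x = t**2
    naive = ypp/xp**2        # d^2y/(dx)^2 alone, without the correction term
    return lhs, rhs, exact, naive
```

The naive value differs from the exact one precisely because $d^2x \ne 0$ in this parametrization; dropping the correction term is only legitimate when $dx$ is constant, i.e. when $x$ itself is the uniformly increasing variable.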
This leaves a question for me, which hopefully someone else can answer: when and why did mathematicians forget this additional assumption and simply adopt the notation $\frac{d^2}{dx^2}$ for what should actually be written as $\left(\frac{d}{dx}\right)^2$? I think $\frac{d^2y}{dx^2}$ comes from multiplying $\frac{dy}{dx}$ by $\frac{d}{dx}$. In this notation (https://en.m.wikipedia.org/wiki/Abuse_of_notation#Derivitive) multiplication signifies iteration. (Disclaimer: this is a very rough response. There have not been any other answers yet; I will look for the notation in a textbook.) $\frac{d^2}{dx^2} y = \frac{d^2y}{dx^2}$ is too obviously built from $\frac{d}{dx}\frac{d}{dx} y = \left(\frac{d}{dx}\right)^2 y$ to deserve any further explanation.
Overview So far in the course we have discussed how a source could create a wave pulse, a repeating wave or a harmonic wave. By knowing the motion of the source, we have seen that the disturbance keeps its shape and propagates with a speed \(v_{wave}\). These discussions all assumed that the medium was unperturbed before the wave propagated – what if there was another wave already in the medium? What happens when the two waves collide? An example of how this could occur is if you and a friend both hold a rope. If you wiggle your end, the wave you make will propagate toward your friend. If your friend wiggles her end, her wave will propagate toward you. What happens when your waves collide? Another example is dropping two stones in a river. Eventually the ripples will overlap; how can we calculate the displacement from equilibrium? To solve this problem, consider each displacement from equilibrium separately. For each piece of rope add (as a vector) the displacement from each wave. This “combined displacement” will be the total displacement for that piece of rope. This procedure is valid provided the amplitude of the wave is small. For example, if your friend’s wave would have caused a particular piece of the rope to rise 2 cm, and your wave caused the same piece of rope to rise 1 cm, the actual amount that piece of rope will rise is 3 cm. The idea of adding the individual effects of waves to get the total effect is called superposition. Definition of Superposition Superposition is the concept of adding the effects of two (or more) waves together at the same location at the same time. This gives us the total effect from the two waves. For material waves, "effect" means "displacement," although the principle of superposition works for non-material waves (such as electromagnetic waves, the pressure interpretation of sound waves, and matter waves). For the time being, let us concentrate on material waves.
We express superposition mathematically as follows: \[\Delta y_{total}(x,t) = \Delta y_1 (x,t) + \Delta y_2 (x,t)\] where \(\Delta y_1\) and \(\Delta y_2\) are the displacement from equilibrium for wave 1 and wave 2 only. The actual displacement of the medium is described by \(\Delta y_{total}\), as in the picture below. Conventions on Space and Time Some of our conventions are useful, but a little confusing at first. We have emphasized already that superposition is combining two or more waves acting at the same location at the same time. But so far we have dealt only with single source systems, and we have always chosen the origin of our coordinates to be the location of the source. What do we do if we seek to combine the effects of the two sources at an arbitrary location? To keep as close as possible to the work we have already done on waves, we adopt the following conventions: We use a universal clock \(t\). As we are combining the effect of the two waves at the same time, we should use the same value \(t\) in \(\Delta y_1\) and \(\Delta y_2\). We use a different origin for each source. Even though we are combining the waves at the same location, we have two distances \(x_1\) and \(x_2\). Here \(x_1\) is the distance between source 1 and where we wish to combine the waves; an analogous definition holds for \(x_2\). We use \(x_1\) for calculating \(y_1\) and \(x_2\) for calculating \(y_2\) even though we are interested in the same point. When we are using a sinusoidal wave we also need a convention for \(\phi_1\) and \(\phi_2\), the phase constants. The convention we use here is that \(\phi_1\) determines \(y_1\) at time \(t=0\) and \(\phi_2\) determines \(y_2\). 
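These conventions can be sketched in a short Python example. All parameters here are invented for illustration (two identical outward-traveling waves with \(A=1\), \(T=2\,\mathrm{s}\), \(\lambda=4\,\mathrm{m}\), \(\phi=0\)); note that the common time \(t\) is shared while \(x_1\) and \(x_2\) are measured from each wave's own source:

```python
import math

def wave(A, T, lam, x, t, phi, outward=True):
    """Displacement of a sinusoidal wave a distance x from its source.
    The minus sign corresponds to outward (away-from-source) travel."""
    sign = -1.0 if outward else 1.0
    return A * math.sin(2*math.pi*t/T + sign*2*math.pi*x/lam + phi)

def superpose(x1, x2, t):
    """Total displacement: same t for both waves, separate distances."""
    y1 = wave(1.0, 2.0, 4.0, x1, t, 0.0)
    y2 = wave(1.0, 2.0, 4.0, x2, t, 0.0)
    return y1 + y2
```

At \(x_1=x_2=0\) and \(t=T/4\) the two waves add to twice the single amplitude; moving one observation distance by half a wavelength (\(x_2=2\,\mathrm{m}\)) makes them cancel completely.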
The reason that we use these particular conventions, rather than just picking one origin, is that it allows us to keep the formulas \[\Delta y_1 = A_1 \sin \left( \dfrac{2 \pi t}{T_1} \pm_1 \dfrac{2 \pi x_1}{\lambda_1} + \phi_1 \right) \textrm{ and } \Delta y_2 = A_2 \sin \left( \dfrac{2 \pi t}{T_2} \pm_2 \dfrac{2 \pi x_2}{\lambda_2} + \phi_2 \right)\] (Here \(±_1\) and \(±_2\) refer to the direction of propagation of wave \(1\) and \(2\) respectively, and are independent.) There is one more convention that is worth noting: we treat \(x_1\) and \(x_2\) as positive distances from the source. For a wave that travels outward (this is almost always the case) we would use the − sign. This is because the peak of a wave (for example) gets further away from the source as time increases. We would only use the + sign when waves were traveling inward. Constructive and Destructive Interference While the idea of superposition is fairly straightforward, there is a lot of associated vocabulary that comes with it. Intuitively we can see that if two waves displace in the same direction at the same time, then the resulting waveform will be larger than either of the two initial waves. This is called constructive interference. On the other hand, if one wave displaces up and the other displaces down, for example, the resulting wave will be smaller than the initial waves. This is known as destructive interference. If the waves have the same amplitude, then these waves will cancel each other out completely! Partial interference is any kind of interference that isn't completely constructive or completely destructive. The term "partial interference" is not very descriptive – we can have partial interference that is either almost constructive or almost destructive. Limits of Superposition Because simply adding the waves together is the most obvious thing to do, it is worth pausing and considering if it is the only way we could have combined the effects of two waves.
Actually, we could have combined the waves in much more complicated ways. For very large water waves or sound waves we cannot simply use the principle of superposition presented here. Shock waves, such as the ones produced by explosions or sonic booms, are examples of waves for which the principle of superposition simply does not work. It should be appreciated that the principle of superposition is an experimentally verified result; it is not one that can be derived from purely logical thought. We are lucky that for “small” waves the principle of superposition is adequate. Contributors Authors of Phys7C (UC Davis Physics Department)
Problems for the Secondary Group from the Divisional Mathematical Olympiad will be solved here. Forum rules: Please don't post problems (by starting a topic) in the "Secondary: Solved" forum. This forum is only for showcasing the problems for the convenience of the users. You can post the problems in the main Divisional Math Olympiad forum. Later we shall move that topic with proper formatting, and post it in the resource section. Find the range of the function \[f(x)=\frac{\lceil 2x\rceil-2 \lfloor x \rfloor}{\lfloor 2x \rfloor-2\lceil x\rceil} \] Here, $\lceil x\rceil$ represents the minimum integer greater than $x$ and $\lfloor x \rfloor$ represents the maximum integer less than $x$. Please write a full solution so that someone can tell whether you are right or wrong or where your fault is... *to hide something write: [ h i d e ] text [ / h i d e ] (without spaces) Every logical solution to a problem has its own beauty. (Important: Please make sure that you have read about the Rules, Posting Permissions and Forum Language) Posts: 125 Joined: Mon Dec 13, 2010 12:05 pm Location: চট্রগ্রাম, Chittagong Contact: for case (ii): $(-1)$; Tahmid Hasan wrote: soln for number 5 (partial) [hide] I came up with only one value in the range, $-1$. Let's express $x=p+k$ where $p$ is an integer and $k$ is a fraction. Now let's make some cases: case 1. $k=0$ (then $x=p$), but I find that there is no domain for $x$ as an integer. case 2. $k=\frac{1}{2}$ (I had to do this because of multiplying $x$ by 2). case 3. $0<k<\frac{1}{2}$. case 4. $\frac{1}{2}<k<1$. In these cases, I come up with only one value, $-1$, but I was informed that there are two more solutions. [/hide] case (iii): $\frac{-1}{2}$; case (iv): $(-2)$...
One thing that you must remember here is that the symbols that look like floors and ceilings are not the same as the usual floors and ceilings! Ha ha... this confused me at first too; that's why I posted the caution! They were actually meant to be floors and ceilings, but when it came to describing the unpopular symbols, the words were not exact. That's why the $-1$ came into the range. "Je le vois, mais je ne le crois pas!" ("I see it, but I don't believe it!") - Georg Ferdinand Ludwig Philipp Cantor BdMO wrote: Find the range of the function \[f(x)=\frac{\lceil 2x\rceil-2 \lfloor x \rfloor}{\lfloor 2x \rfloor-2\lceil x\rceil} \] Here, $\lceil x\rceil$ represents the minimum integer greater than $x$ and $\lfloor x \rfloor$ represents the maximum integer less than $x$. According to the definition, if $a$ is an integer then $\lceil a\rceil -\lfloor a\rfloor =2$, otherwise it is $1$. Let $k=\dfrac {\lceil 2x\rceil -2\lfloor x\rfloor} {\lfloor 2x\rfloor -2\lceil x\rceil }$. If $x$ is an integer, $k=1+\frac 6 {\lfloor 2x\rfloor-2\lfloor x\rfloor -4}=-1$. If $x$ is not an integer, let $x=\lfloor x\rfloor +j$ where $j$ denotes the fractional part of $x$; then, provided $2x$ is not an integer either, $k=1+\frac 3 {\lfloor 2x\rfloor -2\lfloor x\rfloor -2}$. Now if $j< .5$, $\lfloor 2x\rfloor =\lfloor 2\lfloor x\rfloor \rfloor =2\lfloor x\rfloor$, so $k=-\frac 1 2$. If $j>.5$, $\lfloor 2x\rfloor =2\lfloor x\rfloor +1$, so $k=-2$. (In the boundary case $j=.5$, $2x$ is an integer, so $\lceil 2x\rceil =\lfloor 2x\rfloor+2$, and a direct computation gives $k=-1$ again.) So $k\in \{-1,-2,-\frac 1 2 \}$. Only one thing is neutral in the universe, that is $0$.
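The claimed range is easy to verify by brute force, since with these modified definitions the minimum integer greater than $x$ is always $\lfloor x\rfloor+1$ and the maximum integer less than $x$ is always $\lceil x\rceil-1$ (here $\lfloor\cdot\rfloor$ and $\lceil\cdot\rceil$ denote the usual floor and ceiling). The following sketch is our own check, not part of the original thread:

```python
import math
from fractions import Fraction

def ceil_s(x):
    """Minimum integer strictly greater than x."""
    return math.floor(x) + 1

def floor_s(x):
    """Maximum integer strictly less than x."""
    return math.ceil(x) - 1

def f(x):
    num = ceil_s(2*x) - 2*floor_s(x)
    den = floor_s(2*x) - 2*ceil_s(x)
    return Fraction(num, den)

# Sample x over integers, halves, and fractional parts on both sides of 1/2
values = {f(Fraction(n, 12)) for n in range(-60, 61)}
```

The set of values obtained covers integers, half-integers, and fractional parts in both $(0,\frac12)$ and $(\frac12,1)$, confirming the three-element range.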
CentralityBin ()
CentralityBin (const char *name, Float_t low, Float_t high)
CentralityBin (const CentralityBin &other)
virtual ~CentralityBin ()
CentralityBin & operator= (const CentralityBin &other)
Bool_t IsAllBin () const
const char * GetListName () const
virtual void CreateOutputObjects (TList *dir, Int_t mask)
virtual Bool_t ProcessEvent (const AliAODForwardMult *forward, UInt_t triggerMask, Bool_t isZero, Double_t vzMin, Double_t vzMax, const TH2D *data, const TH2D *mc, UInt_t filter, Double_t weight)
virtual Double_t Normalization (const TH1I &t, UShort_t scheme, Double_t trgEff, Double_t &ntotal, TString *text) const
virtual void MakeResult (const TH2D *sum, const char *postfix, bool rootProj, bool corrEmpty, Double_t scaler, Int_t marker, Int_t color, TList *mclist, TList *truthlist)
virtual void End (TList *sums, TList *results, UShort_t scheme, Double_t trigEff, Double_t trigEff0, Bool_t rootProj, Bool_t corrEmpty, Int_t triggerMask, Int_t marker, Int_t color, TList *mclist, TList *truthlist)
Int_t GetColor (Int_t fallback=kRed+2) const
void SetColor (Color_t colour)
TList * GetResults () const
const char * GetResultName (const char *postfix="") const
TH1 * GetResult (const char *postfix="", Bool_t verbose=true) const
void SetDebugLevel (Int_t lvl)
void SetSatelliteVertices (Bool_t satVtx)
virtual void Print (Option_t *option="") const
const Sum * GetSum (Bool_t mc=false) const
Sum * GetSum (Bool_t mc=false)
const TH1I * GetTriggers () const
TH1I * GetTriggers ()
const TH1I * GetStatus () const
TH1I * GetStatus ()

Calculations done per centrality. These objects are only used internally and are never streamed. We do not make dictionaries for this class (and derived classes) as they are constructed on the fly. Definition at line 701 of file AliBasedNdetaTask.h. Calculate the Event-Level normalization.
The full event level normalization for trigger \(X\) is given by \begin{eqnarray*} N &=& \frac{1}{\epsilon_X} \left(N_A+\frac{N_A}{N_V}(N_{-V}-\beta)\right)\\ &=& \frac{1}{\epsilon_X}N_A \left(1+\frac{1}{N_V}(N_T-N_V-\beta)\right)\\ &=& \frac{1}{\epsilon_X}N_A \left(1+\frac{N_T}{N_V}-1-\frac{\beta}{N_V}\right)\\ &=& \frac{1}{\epsilon_X}N_A \left(\frac{1}{\epsilon_V}-\frac{\beta}{N_V}\right) \end{eqnarray*} where
\(\epsilon_X=\frac{N_{T,X}}{N_X}\) is the trigger efficiency evaluated in simulation,
\(\epsilon_V=\frac{N_V}{N_T}\) is the vertex efficiency evaluated from the data,
\(N_X\) is the Monte-Carlo truth number of events of type \(X\),
\(N_{T,X}\) is the Monte-Carlo truth number of events of type \(X\) which were also triggered as such,
\(N_T\) is the number of data events that were triggered as type \(X\) and had a collision trigger (CINT1B),
\(N_V\) is the number of data events that were triggered as type \(X\), had a collision trigger (CINT1B), and had a vertex,
\(N_{-V}\) is the number of data events that were triggered as type \(X\), had a collision trigger (CINT1B), but no vertex,
\(N_A\) is the number of data events that were triggered as type \(X\), had a collision trigger (CINT1B), and had a vertex in the selected range, and
\(\beta=N_a+N_c-N_e\) is the number of control triggers that were also triggered as type \(X\):
\(N_a\): number of beam-empty events also triggered as type \(X\) (CINT1-A or CINT1-AC),
\(N_c\): number of empty-beam events also triggered as type \(X\) (CINT1-C),
\(N_e\): number of empty-empty events also triggered as type \(X\) (CINT1-E).
Note that if \( \beta \ll N_A\) the last term can be ignored, and the expression simplifies to \[ N = \frac{1}{\epsilon_X}\frac{1}{\epsilon_V}N_A \] Parameters: t Histogram of triggers; scheme Normalisation scheme; trgEff Trigger efficiency; ntotal On return, the total number of events to normalise to.
text If non-null, fill with normalization calculation Returns \(N_A/N\) or negative number in case of errors. Definition at line 1776 of file AliBasedNdetaTask.cxx.
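As a rough illustration of the final normalization expression \(N=\frac{1}{\epsilon_X}N_A\left(\frac{1}{\epsilon_V}-\frac{\beta}{N_V}\right)\), here is a sketch in Python. This is not the actual AliROOT implementation, and the function and variable names are our own:

```python
def event_normalization(n_a, n_v, n_t, beta, eps_x):
    """Event-level normalization N for trigger class X.

    n_a   : events triggered as X, with collision trigger, vertex in range
    n_v   : events triggered as X, with collision trigger and a vertex
    n_t   : events triggered as X with a collision trigger
    beta  : background estimate from the control triggers (N_a + N_c - N_e)
    eps_x : trigger efficiency for X, evaluated in simulation
    """
    eps_v = n_v / n_t                    # vertex efficiency from data
    return (n_a / eps_x) * (1.0 / eps_v - beta / n_v)
```

When beta is negligible compared to n_a, this reduces to n_a / (eps_x * eps_v), matching the simplified expression in the documentation above.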
If you had taken the $\operatorname{lcm}$ operation more seriously, and used $\operatorname{lcm}(1..n)=\operatorname{lcm}(1,2,3,4, \dots, n)$ as the place values (for $\operatorname{lcm}(1..(n-1)) \neq \operatorname{lcm}(1..n)$, i.e. for prime powers $n=p^r$) in your mixed radix number representation, then you would have got the LCM numeral system. It is closer in its properties to the factorial number system than to the primorial number system. It feels like a slightly optimized version of the factorial number system (and fixes the drawbacks of the primorial number system compared to the factorial number system): This may be the "smallest" product-based numbering system that has a unique finite representation for every rational number. In this base 1/2 = .1 (1*1/2), 1/3 = .02 (0*1/2 + 2*1/6), 1/5 = .0102 (0*1/2 + 1*1/6 + 0*1/12 + 2*1/60). — Russell Easterly, Oct 03 2001 However, the factorial number system might still have slightly better properties. For example, if an arbitrarily huge number is divided by a fixed number $k$ (assumed to be small), then the factorial number system only needs to know the lowest $i+2k$ digits of the huge input number for being able to write out the lowest $i$ digits of the result. The LCM number system also only needs to look ahead a finite number of digits for writing out the lowest $i$ digits of the result. The exact look-ahead is harder to determine; it should be more or less the lowest $i+ik$ digits of the huge input number. However, the LCM number system also has advantages over the factorial number system. Just like the primorial number system, it allows a simple and relatively fast multiplication algorithm. The numbers can be quickly converted into an optimal Chinese remainder representation and back: $x_2 + 2 x_3 + 6 x_4 + 12 x_5 + 60 x_7 + \dots + \operatorname{lcm}(1..(p^r-1)) x_{p^r} \ = \ x \ $ with $\ 0\leq x_{p_i^{r_i}} < p_i$.
$x_2 = x \mod 2$ $x_2 + 2 x_3 = x \mod 3$ $x_2 + 2 x_3 + 2 x_4 = x \mod 4$ $x_2 + 2 x_3 + x_4 + 2 x_5 = x \mod 5$ $x_2 + 2 x_3 + 6 x_4 + 5 x_5 + 4 x_7 = x \mod 7$ $\dots$ $x_2 + 2 x_3 + 6 x_4 + \dots + \operatorname{lcm}(1..(p_i^{r_i}-1)) x_{p_i^{r_i}} = x \mod p_i^{r_i}$ So to multiply $x$ and $y$, one determines an upper bound for the number of places of $z = x\cdot y$, then computes the values of $x$ and $y$ modulo all those places, multiplies them separately for each place (modulo $p_i^{r_i}$), and then converts the result back to the LCM representation. The conversion back is easy, since one can first determine $z_2$, then $z_3$ by subtracting $z_2$ from the known value $z \mod 3$ before converting it to $z_3$, then $z_4$ by subtracting $z_2+2z_3$ from the known value $z \mod 4$ before converting it to $z_4$, and so on. This Chinese remainder representation is optimal in the sense that the individual moduli are as small as possible for being able to represent a number of a given magnitude. The LCM number system may be even more optimal than the primorial number system in this respect. (It should be possible to do the computations modulo $p_i^{r_i}$ only for the biggest $r_i$, due to the structure of the LCM number system.)
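To illustrate the system, here is a Python sketch (our own, with invented function names) of conversion into the LCM mixed-radix representation. It exploits the fact that for a prime power $n=p^r$ we have $\operatorname{lcm}(1..n)=p\cdot\operatorname{lcm}(1..(n-1))$, which guarantees that the greedy digit at each place is automatically less than $p$:

```python
from math import gcd

def smallest_prime_factor(n):
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d
        d += 1
    return n

def lcm_places(limit):
    """Place values lcm(1..(n-1)) and digit bases p, for prime powers n <= limit."""
    places = []                            # list of (place_value, base)
    l = 1
    for n in range(2, limit + 1):
        l_new = l * n // gcd(l, n)
        if l_new != l:                     # lcm grew, so n is a prime power p^r
            p = smallest_prime_factor(n)
            places.append((l, p))          # place lcm(1..(n-1)), digit in [0, p)
            l = l_new
    return places

def to_lcm_digits(x, places):
    """Greedy mixed-radix expansion, most significant digit first.
    Requires 0 <= x < lcm(1..limit)."""
    digits = []
    for place, p in reversed(places):
        d, x = divmod(x, place)
        assert d < p                       # guaranteed by lcm(1..p^r) = p * place
        digits.append(d)
    return digits                          # digits for ..., x_5, x_4, x_3, x_2
```

Round-tripping confirms the representation: the digits of 2018 with the places up to lcm(1..10) = 2520 are [2, 0, 5, 3, 0, 1, 0] (most significant first), i.e. 2*840 + 5*60 + 3*12 + 1*2 = 2018.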
I have this piecewise function: $$f(x)=\begin{cases} \dfrac{x^2-x-2}{|x-2|}, & x \neq 2 \\ 0, & x = 2\text{.} \end{cases}$$ I can't figure out how to graph it. I punched these numbers into my calculator, and it created a parabola, but I haven't been able to get there on my own without the calculator. So far I have plotted the point $(2,0)$ on my graph, and I factored the first function to $\dfrac{(x-2)(x+1)}{|x-2|}$ and now I am stuck. I tried multiplying both the top and bottom by $x+2$, but after simplifying I ended up with $x+1$, which I am pretty positive is not correct. Where did I go wrong?
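A quick numerical check (our own sketch) shows what the factored form implies: for $x>2$ the absolute value simply cancels the factor $(x-2)$, giving $f(x)=x+1$, while for $x<2$ it flips the sign, giving $f(x)=-(x+1)$. So the graph is two half-lines meeting near $(2,0)$, not a true parabola, even though a calculator plot can resemble one:

```python
def f(x):
    """The piecewise function from the question."""
    if x == 2:
        return 0.0
    return (x*x - x - 2) / abs(x - 2)

# For x > 2: |x-2| = x-2, so f(x) = (x-2)(x+1)/(x-2) = x+1
# For x < 2: |x-2| = -(x-2), so f(x) = -(x+1)
```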
Let $U \subset \mathbb{C}$ be open. I want to show that we can write $U$ as a countable union of open connected sets. If $U$ is empty there is nothing to prove. Else it contains some ball, and hence a point in $\mathbb{Q} + i\mathbb{Q}$. Denote $U_{\mathbb{Q}} = U \cap (\mathbb{Q} + i\mathbb{Q})$. For each $q \in U_{\mathbb{Q}}$ let $U_q = \{ z \in \mathbb{C} \mid \exists \gamma_z:[0,1] \to U$ a path connecting $z$ and $q \}$. We'll show that $U_{q}$ is open and connected: First, if $z_1, z_2 \in U_q$ then $\operatorname{Im}(\gamma_{z_1}), \operatorname{Im}(\gamma_{z_2}) \subset U_q$, simply by shortening the paths. It follows that joining these two paths results in a path in $U_q$, and so $U_q$ is path connected, and thus connected. Now, if $z \in U_q$ then by definition $z \in U$, being in the image of some path. Then, choosing a ball around $z$ contained in $U$, and knowing the ball is path connected, we can connect any point within it to $q$ (by going through $z$). Thus $U_q$ is open. Finally, $\cup_{i \in \mathbb{N}} U_{q_i} \subset U$, where $\cup_{i \in \mathbb{N}}\{q_i\} = U_{\mathbb{Q}}$. Letting $z \in U$, we choose a ball around it contained in $U$, and containing some 'rational' point $q$. Since the ball is path connected there is a path within the ball connecting $z$ and $q$, and so $z \in U_q$. Is this alright?
I've been given this as a homework assignment, and have no idea how to proceed. Can anyone help? The question is: (1) Let $\phi: M_{n}(\mathbb{C}) \rightarrow M_{n}(\mathbb{C})$ be an endomorphism such that $$M \in \operatorname{GL}_{n}(\mathbb{C}) \implies \phi (M) \in \operatorname{GL}_{n}(\mathbb{C}).$$ Show that, for any $M \in \operatorname{GL}_{n}(\mathbb{C})$, we have $$M \in \operatorname{GL}_{n}(\mathbb{C}) \iff \phi (M) \in \operatorname{GL}_{n}(\mathbb{C}).$$ For this problem, we received a hint from the professor: (2) If rank$(M) < n,$ then there exists $P \in \operatorname{GL}_{n}(\mathbb{C})$ such that, for any $\lambda \in \mathbb{C}$, $P - \lambda M$ is invertible. I don't know how to prove either (1) or (2). Although my main goal is to prove (1), any help with proving (2) would be appreciated.
There is a classic problem: Suppose that $X_1,\ldots,X_n$ form an i.i.d. sample from a uniform distribution on the interval $(0,\theta)$, where $\theta>0$ is unknown. I would like to find the MLE of $\theta$. The pdf of each observation will have the form: $$ f(x\mid\theta) = \begin{cases} 1/\theta\quad&\text{for }\, 0\leq x\leq \theta\\ 0 &\text{otherwise}. \end{cases} $$ The likelihood function therefore has the form: $$ L(\theta) = \begin{cases} 1/\theta^n \quad&\text{for }\; 0\leq x_i \leq \theta\;\; \text{for all }i,\\ 0 &\text{otherwise}. \end{cases} $$ The general solution is usually that the MLE of $\theta$ must be a value of $\theta$ for which $\theta \geq x_i$ for all $i$ and which maximizes $1/\theta^n$ among all such values. The reasoning is that since $1/\theta^n$ is a decreasing function of $\theta$, the estimate will be the smallest possible value of $\theta$ such that $\theta\geq x_i$ for all $i$. Therefore, the MLE of $\theta$, $\hat{\theta}$, is $\max(X_1,\ldots,X_n)$. Here, I do not understand why we cannot just differentiate the likelihood function with respect to $\theta$ and then set it equal to $0$? Thanks!
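A small simulation (our own sketch, with made-up data) illustrates why setting the derivative to zero fails here: the likelihood $1/\theta^n$ is strictly decreasing on $[\max(x_i),\infty)$ and jumps to zero below $\max(x_i)$, so the maximum sits at the boundary $\hat\theta=\max(x_i)$, where the derivative is negative rather than zero:

```python
import random

def likelihood(theta, xs):
    """Uniform(0, theta) likelihood: 1/theta^n if theta >= max(xs), else 0."""
    if theta < max(xs):
        return 0.0
    return theta ** (-len(xs))

random.seed(0)
xs = [random.uniform(0.0, 3.7) for _ in range(50)]

# Scan a grid of candidate thetas starting at max(xs): the likelihood
# only decreases, so the grid maximum is the first point, max(xs) itself.
grid = [max(xs) + 0.01 * k for k in range(200)]
theta_hat = max(grid, key=lambda th: likelihood(th, xs))
```

There is no interior critical point to find: the derivative of $1/\theta^n$ is $-n/\theta^{n+1} < 0$ everywhere the likelihood is positive, which is exactly why differentiating and setting to zero yields no solution.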
It can be shown starting from Maxwell's equations that the electromagnetic field satisfies the wave equation: $$\square^2 \mathbf{E} = \frac{1}{\epsilon_0}\nabla\rho+\mu_0\frac{\partial\mathbf{J}}{\partial t}$$ $$\square^2 \mathbf{B}=\mu_0\nabla\times\mathbf{J}$$ In particular, when the RHS of the above equations are zero we may write the fields as plane waves. As a specific example, consider Example 2 on page 402 in Griffiths: $ \tilde{\mathbf{E}}=\tilde{E}_0e^{i(kz-\omega t)}\hat{\mathbf{x}}$ , $\tilde{\mathbf{B}}=\frac{1}{c}\tilde{E}_0e^{i(kz-\omega t)}\hat{\mathbf{y}}$. Taking the divergence of the first and curl of the second then gives $$\nabla\cdot\tilde{\mathbf{E}}=0$$ $$\nabla\times\tilde{\mathbf{B}}=-i\frac{k}{c}\tilde{E}_0e^{i(kz-\omega t)}\hat{\mathbf{x}}$$ From which we may identify the RHS of the second to be $\mu_0\mathbf{J}$. This is clearly not a vacuum in the strict sense, which is one seeming contradiction but the chapter is not focused on the space in which waves live, so it is more or less negligible. Inspection of the wave equations show that homogeneity is still satisfied if $\rho =\epsilon_0\partial_t\lambda$ and $\mathbf{J}=\nabla\lambda$ for some $\lambda$. The second contradiction manifests in calculating $\lambda = -i\frac{k}{c}\tilde{E}_0e^{i(kz-\omega t)}x + f(t,y,z)$; there is no $\rho$, and $f$ cannot depend on $x$, so the first wave equation is not homogenous. Is the above analysis correct, and if so can these contradictions be reconciled? In other words, can some solution both agree with the vacuum Maxwell equations (or at least homogenous wave equations) and preserve the general analysis of electromagnetic waves?
(50g) Simpson's rule for f(x,y) - Printable Version +- HP Forums ( https://www.hpmuseum.org/forum) +-- Forum: HP Software Libraries ( /forum-10.html) +--- Forum: General Software Library ( /forum-13.html) +--- Thread: (50g) Simpson's rule for f(x,y) ( /thread-11818.html) (50g) Simpson's rule for f(x,y) - peacecalc - 11-17-2018 09:27 PM Hello friends, as Eddie Shore showed us HERE, there is an algorithm for integrating a function of two variables with Simpson's rule and a matrix. He implemented this for the HP 71B. I did the same thing for the HP 50g, but not so elegantly; it is brute force. I implemented this formula: \[ F = \int_a^b\int_c^d f(t,s)dtds \\ \sim \frac{ha}{3} \left( \frac{hi}{3}\left( f(a,c) + f(b,c) + \sum_{j=1}^{k-1}\left( 2f(t_j,c) + 4f(t_j,c)\right) + f(a,d) + f(b,d) + \sum_{j=1}^{k-1}\left( 2f(t_j,d) + 4f(t_j,d)\right) + \\ \sum_{i=1}^{m-1}\left(2\left(f(a,s_i)+f(b,s_i) + \sum_{j=1}^{k-1}\left( 2f(t_j,s_i) + 4f(t_j,s_i)\right)\right) + \\ 4\left(f(a,s_i)+f(b,s_i) + \sum_{j=1}^{k-1}\left( 2f(t_j,s_i) + 4f(t_j,s_i)\right)\right)\right)\right)\right) \] That looks horrible, but I used the stack to sum up all function values and multiplied them afterwards with 2 or 4. The indices in the formula above have to be distinguished between even and odd (only values with odd indices are multiplied by 4, and those with even indices by 2). In the FOR loops I used not integer values but the values of the variables themselves (the HP 50g is very happy to use a real variable in a FOR loop). For instance, I used my little program to estimate the coefficients of antiderivatives of a spherical harmonic function multiplied with a light function. One angle goes from 0 to pi, the other one from 0 to 2pi. With N = 15 the HP 50g has to calculate 30*60 = 1800 function values, and it takes about 2 minutes on average. That seems very long, but it is faster than the built-in function \[ \int \].
I have the impression that the built-in function then works (when you have more variables) with recursion. Code: RE: (50g) Simpson's rule for f(x,y) - Valentin Albillo - 11-17-2018 10:31 PM Hi, peacecalc: (11-17-2018 09:27 PM)peacecalc Wrote: like Eddie Shore showed us HERE an algorithm for integration a function with two variables with the simpson rule and a matrix. He implemented this for the HP 71B. I didn't see Eddie's post at the time but he's wrong on one count, namely (my bolding): Eddie W. Shore Wrote: On the HP 71B, matrices cannot be typed directly; elements have to be stored and recalled one element at a time. The program presented does not use modules. That's not correct. HP-71B's BASIC language allows for filling in all elements of an arbitrary size matrix at once by including the values in one or more DATA statements and then reading them all into the matrix using a single READ statement, no extra ROM modules needed. Thus, this lengthy initialization part in Eddie's code: 14 DIM I(5,5) 20 I(1,1) = 1 21 I(1,2) = 4 22 I(1,3) = 2 23 I(1,4) = 4 24 I(1,5) = 1 25 I(2,1) = 4 26 I(2,2) = 16 27 I(2,3) = 8 28 I(2,4) = 16 29 I(2,5) = 4 30 I(3,1) = 2 31 I(3,2) = 8 32 I(3,3) = 4 33 I(3,4) = 8 34 I(3,5) = 2 35 I(4,1) = 4 36 I(4,2) = 16 37 I(4,3) = 8 38 I(4,4) = 16 39 I(4,5) = 4 40 I(5,1) = 1 41 I(5,2) = 4 42 I(5,3) = 2 43 I(5,4) = 4 44 I(5,5) = 1 can be replaced by this much shorter, much faster version (OPTION BASE 1 assumed): 14 DATA 1,4,2,4,1,4,16,8,16,4,2,8,4,8,2,4,16,8,16,4,1,4,2,4,1 20 DIM I(5,5) @ READ I where the READ I fills in all data into the matrix with a single statement, no individual assignments or loops needed, thus it's much faster and uses less program memory.
Notice that this also works for arbitrary numerical expressions in the DATA, i.e. the following hypothetical code would work OK: 10 DATA 5,-3,2.28007e20,X,2*Z+SIN(Y),FNF(X+Y,X-Y),FNZ(2+FNF(C+D),3/FNF(6,8)),43 20 DIM M(2,4) @ READ M Anyway, Simpson's rule is suboptimal for integration purposes, either one-dimensional or multi-dimensional. There are much better methods providing either significantly increased accuracy for the same number of function evaluations or the same accuracy with fewer evaluations. V. RE: (50g) Simpson's rule for f(x,y) - peacecalc - 11-19-2018 08:33 PM Hello Valentin, I second your statement: Quote: Anyway, Simpson's rule is suboptimal for integration purposes, either one-dimensional or multi-dimensional. There are much better methods providing either significantly increased accuracy for the same number of function evaluations or the same accuracy with fewer evaluations. But as I wrote, with double integrals the built-in solution for the HP 50g is slower than the brute-force Simpson rule.
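For comparison, note that the 5x5 weight matrix in Eddie's HP-71B program is just the outer product of the 1-D Simpson weights 1, 4, 2, 4, 1 with themselves. The 2-D rule can be sketched in Python as follows; this is our own illustration, not a translation of the RPL or BASIC programs:

```python
def simpson2d(f, a, b, c, d, nx, ny):
    """Composite Simpson's rule for f(x, y) on [a,b] x [c,d].
    nx and ny (numbers of subintervals) must be even."""
    assert nx % 2 == 0 and ny % 2 == 0
    hx, hy = (b - a) / nx, (d - c) / ny

    def w(i, n):
        # 1-D Simpson weight pattern: 1, 4, 2, 4, ..., 2, 4, 1
        return 1 if i in (0, n) else (4 if i % 2 else 2)

    total = 0.0
    for i in range(nx + 1):
        for j in range(ny + 1):
            total += w(i, nx) * w(j, ny) * f(a + i * hx, c + j * hy)
    return total * hx * hy / 9.0
```

Since Simpson's rule is exact for cubics in each variable, simpson2d already integrates x*y over the unit square exactly (the answer is 1/4) with nx = ny = 2, the smallest admissible grid.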
A. Enayat, J. D. Hamkins, and B. Wcisło, “Topological models of arithmetic,” ArXiv e-prints, 2018. (under review)

@ARTICLE{EnayatHamkinsWcislo2018:Topological-models-of-arithmetic,
  author = {Ali Enayat and Joel David Hamkins and Bartosz Wcisło},
  title = {Topological models of arithmetic},
  journal = {ArXiv e-prints},
  year = {2018},
  note = {under review},
  keywords = {under-review},
  eprint = {1808.01270},
  archivePrefix = {arXiv},
  primaryClass = {math.LO},
  url = {http://wp.me/p5M0LV-1LS},
}

Abstract. Ali Enayat had asked whether there is a nonstandard model of Peano arithmetic (PA) that can be represented as $\newcommand\Q{\mathbb{Q}}\langle\Q,\oplus,\otimes\rangle$, where $\oplus$ and $\otimes$ are continuous functions on the rationals $\Q$. We prove, affirmatively, that indeed every countable model of PA has such a continuous presentation on the rationals. More generally, we investigate the topological spaces that arise as such topological models of arithmetic. The reals $\newcommand\R{\mathbb{R}}\R$, the reals in any finite dimension $\R^n$, the long line and the Cantor space do not, and neither does any Suslin line; many other spaces do; the status of the Baire space is open.

The first author had inquired whether a nonstandard model of arithmetic could be continuously presented on the rational numbers.

Main Question. (Enayat, 2009) Are there continuous functions $\oplus$ and $\otimes$ on the rational numbers $\Q$, such that $\langle\Q,\oplus,\otimes\rangle$ is a nonstandard model of arithmetic?

By a model of arithmetic, what we mean here is a model of the first-order Peano axioms PA, although we also consider various weakenings of this theory. The theory PA asserts of a structure $\langle M,+,\cdot\rangle$ that it is the non-negative part of a discretely ordered ring, plus the induction principle for assertions in the language of arithmetic.
The natural numbers $\newcommand\N{\mathbb{N}}\langle \N,+,\cdot\rangle$, for example, form what is known as the standard model of PA, but there are also many nonstandard models, including continuum many non-isomorphic countable models.

We answer the question affirmatively, and indeed, the main theorem shows that every countable model of PA is continuously presented on $\Q$. We define generally that a topological model of arithmetic is a topological space $X$ equipped with continuous functions $\oplus$ and $\otimes$, for which $\langle X,\oplus,\otimes\rangle$ satisfies the desired arithmetic theory. In such a case, we shall say that the underlying space $X$ continuously supports a model of arithmetic and that the model is continuously presented upon the space $X$.

Question. Which topological spaces support a topological model of arithmetic?

In the paper, we prove that the reals $\R$, the reals in any finite dimension $\R^n$, the long line and Cantor space do not support a topological model of arithmetic, and neither does any Suslin line. Meanwhile, there are many other spaces that do support topological models, including many uncountable subspaces of the plane $\R^2$. It remains an open question whether any uncountable Polish space, including the Baire space, can support a topological model of arithmetic.

Let me state the main theorem and briefly sketch the proof.

Main Theorem. Every countable model of PA has a continuous presentation on the rationals $\Q$.

Proof. We shall prove the theorem first for the standard model of arithmetic $\langle\N,+,\cdot\rangle$. Every school child knows that when computing integer sums and products by the usual algorithms, the final digits of the result $x+y$ or $x\cdot y$ are completely determined by the corresponding final digits of the inputs $x$ and $y$. Presented with only final segments of the input, the child can nevertheless proceed to compute the corresponding final segments of the output.
\begin{equation*}\small\begin{array}{rcr} \cdots1261\quad & \qquad & \cdots1261\quad\\ \underline{+\quad\cdots 153\quad}&\qquad & \underline{\times\quad\cdots 153\quad}\\ \cdots414\quad & \qquad & \cdots3783\quad\\ & & \cdots6305\phantom{3}\quad\\ & & \cdots1261\phantom{53}\quad\\ & & \underline{\quad\cdots\cdots\phantom{253}\quad}\\ & & \cdots933\quad\\ \end{array}\end{equation*}

This phenomenon amounts exactly to the continuity of addition and multiplication with respect to what we call the final-digits topology on $\N$, which is the topology having basic open sets $U_s$, the set of numbers whose binary representations end with the digits $s$, for any finite binary string $s$. (One can do a similar thing with any base.) In the $U_s$ notation, we include the number that would arise by deleting initial $0$s from $s$; for example, $6\in U_{00110}$. Addition and multiplication are continuous in this topology, because if $x+y$ or $x\cdot y$ has final digits $s$, then by the school-child’s observation, this is ensured by corresponding final digits in $x$ and $y$, and so $(x,y)$ has an open neighborhood in the final-digits product space, whose image under the sum or product, respectively, is contained in $U_s$.

Let us make several elementary observations about the topology. The sets $U_s$ do indeed form the basis of a topology, because $U_s\cap U_t$ is empty, if $s$ and $t$ disagree on some digit (comparing from the right), or else it is either $U_s$ or $U_t$, depending on which sequence is longer. The topology is Hausdorff, because different numbers are distinguished by sufficiently long segments of final digits. There are no isolated points, because every basic open set $U_s$ has infinitely many elements. Every basic open set $U_s$ is clopen, since the complement of $U_s$ is the union of the sets $U_t$, where $t$ conflicts on some digit with $s$.
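Concretely, the last $k$ binary digits of a number $n$ are $n \bmod 2^k$, so the school-child's observation is the statement that $x+y$ and $x\cdot y$ modulo $2^k$ depend only on $x$ and $y$ modulo $2^k$. A quick sanity check in Python (an illustration of mine, not from the paper):

```python
def same_final_digits(x, y, k):
    """The last k binary digits of x + y and x * y are determined by the
    last k binary digits of x and y, i.e. by x mod 2**k and y mod 2**k."""
    m = 2 ** k
    return ((x + y) % m == ((x % m) + (y % m)) % m and
            (x * y) % m == ((x % m) * (y % m)) % m)

# Exhaustive check over a small range of inputs and digit counts.
assert all(same_final_digits(x, y, k)
           for x in range(64) for y in range(64) for k in range(1, 7))
```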
The topology is actually the same as the metric topology generated by the $2$-adic valuation, which assigns the distance between two numbers as $2^{-k}$, when $k$ is largest such that $2^k$ divides their difference; the set $U_s$ is an open ball in this metric, centered at the number represented by $s$. (One can also see that the space is metrizable by the Urysohn metrization theorem, since it is a Hausdorff space with a countable clopen basis, and therefore regular.) By a theorem of Sierpiński, every countable metric space without isolated points is homeomorphic to the rational line $\Q$, and so we conclude that the final-digits topology on $\N$ is homeomorphic to $\Q$. We’ve therefore proved that the standard model of arithmetic $\N$ has a continuous presentation on $\Q$, as desired.

But let us belabor the argument somewhat, since we find it interesting to notice that the final-digits topology (or equivalently, the $2$-adic metric topology on $\N$) is precisely the order topology of a certain definable order on $\N$, what we call the final-digits order, an endless dense linear order, which is therefore order-isomorphic and thus also homeomorphic to the rational line $\Q$, as desired. Specifically, the final-digits order on the natural numbers, pictured in figure 1, is the order induced from the lexical order on the finite binary representations, but considering the digits from right-to-left, giving higher priority in the lexical comparison to the low-value final digits of the number. To be precise, the final-digits order $n\triangleleft m$ holds, if at the first point of disagreement (from the right) in their binary representation, $n$ has $0$ and $m$ has $1$; or if there is no disagreement, because one of them is longer, then the longer number is lower, if the next digit is $0$, and higher, if it is $1$ (this is not the same as treating missing initial digits as zero).
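The $2$-adic distance just described is simple to compute, and one can verify directly that a basic open set $U_s$ is an open ball around the number coded by $s$: for $s$ of length $\ell$, $U_s$ is the open ball of radius $2^{-(\ell-1)}$. A small sketch (the function name is mine):

```python
def dist_2adic(n, m):
    """2-adic distance: 2**(-k), where 2**k is the largest power of 2
    dividing n - m; distance 0 when n == m."""
    if n == m:
        return 0.0
    d = abs(n - m)
    k = (d & -d).bit_length() - 1   # 2-adic valuation of n - m
    return 2.0 ** (-k)

# U_s for s = '110' is the set of numbers ending in 110, i.e. n = 6 (mod 8);
# it coincides with the open ball of radius 2**-2 around 6.
ball = {n for n in range(256) if dist_2adic(n, 6) < 2 ** -2}
assert ball == {n for n in range(256) if n % 8 == 6}
```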
Thus, the even numbers appear as the left half of the order, since their final digit is $0$, and the odd numbers as the right half, since their final digit is $1$, and $0$ is directly in the middle; indeed, the highly even numbers, whose representations end with a lot of zeros, appear further and further to the left, while the highly odd numbers, which end with many ones, appear further and further to the right. If one does not allow initial $0$s in the binary representation of numbers, then note that zero is represented in binary by the empty sequence.

It is evident that the final-digits order is an endless dense linear order on $\N$, just as the corresponding lexical order on finite binary strings is an endless dense linear order. The basic open set $U_s$ of numbers having final digits $s$ is an open set in this order, since any number ending with $s$ is above a number with binary form $100\cdots0s$ and below a number with binary form $11\cdots 1s$ in the final-digits order; so $U_s$ is a union of intervals in the final-digits order. Conversely, every interval in the final-digits order is open in the final-digits topology, because if $n\triangleleft x\triangleleft m$, then this is determined by some final segment of the digits of $x$ (appending initial $0$s if necessary), and so there is some $U_s$ containing $x$ and contained in the interval between $n$ and $m$. Thus, the final-digits topology is precisely the same as the order topology of the final-digits order, which is a definable endless dense linear order on $\N$. Since this order is isomorphic and hence homeomorphic to the rational line $\Q$, we conclude again that $\langle \N,+,\cdot\rangle$ admits a continuous presentation on $\Q$.

We now complete the proof by considering an arbitrary countable model $M$ of PA. Let $\triangleleft^M$ be the final-digits order as defined inside $M$.
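The final-digits order is concrete enough to implement and experiment with; sorting $0,\dots,15$ by it exhibits exactly the picture described, evens on the left, odds on the right, with $0$ in the middle. (A Python sketch of mine, following the definition in the text.)

```python
from functools import cmp_to_key

def bits(n):
    """Binary digits of n, least-significant first, no leading zeros (0 -> [])."""
    out = []
    while n:
        out.append(n & 1)
        n >>= 1
    return out

def final_digits_less(n, m):
    """The final-digits order n <| m: compare binary digits from the right;
    at the first disagreement, 0 beats 1; if the digits of one number form a
    final segment of the other's, the longer number is lower when its next
    digit is 0 and higher when it is 1."""
    a, b = bits(n), bits(m)
    for x, y in zip(a, b):
        if x != y:
            return x == 0          # n has 0 where m has 1, so n <| m
    if len(a) == len(b):
        return False               # n == m
    if len(a) > len(b):
        return a[len(b)] == 0      # n is longer: lower iff its next digit is 0
    return b[len(a)] == 1          # m is longer: n <| m iff m's next digit is 1

cmp = lambda n, m: -1 if final_digits_less(n, m) else (1 if final_digits_less(m, n) else 0)
order = sorted(range(16), key=cmp_to_key(cmp))
# Evens occupy the left half, odds the right half, and 0 sits in the middle.
assert all(v % 2 == 0 for v in order[:8]) and all(v % 2 == 1 for v in order[8:])
assert order[7] == 0
```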
Since the reasoning of the above paragraphs can be undertaken in PA, it follows that $M$ can see that its addition and multiplication are continuous with respect to the order topology of its final-digits order. Since $M$ is countable, the final-digits order of $M$ makes it a countable endless dense linear order, which by Cantor’s theorem is therefore order-isomorphic and hence homeomorphic to $\Q$. Thus, $M$ has a continuous presentation on the rational line $\Q$, as desired. $\Box$

The executive summary of the proof is: the arithmetic of the standard model $\N$ is continuous with respect to the final-digits topology, which is the same as the $2$-adic metric topology on $\N$, and this is homeomorphic to the rational line, because it is the order topology of the final-digits order, a definable endless dense linear order; applied in a nonstandard model $M$, this observation means the arithmetic of $M$ is continuous with respect to its rational line $\Q^M$, which for countable models is isomorphic to the actual rational line $\Q$, and so such an $M$ is continuously presentable upon $\Q$.

Let me mention the following order, which it seems many people expect to use instead of the final-digits order as we defined it above. With this order, one in effect takes missing initial digits of a number as $0$, which is of course quite reasonable. The problem with this order, however, is that the order topology is not actually the final-digits topology. For example, the set of all numbers having final digits $110$ in this order has a least element, the number $6$, and so it is not open in the order topology. Worse, I claim that arithmetic is not continuous with respect to this order. For example, $1+1=2$, and $2$ has an open neighborhood consisting entirely of even numbers, but every open neighborhood of $1$ has both odd and even numbers, whose sums therefore will not all be in the selected neighborhood of $2$.
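The failure of this zero-padded order is easy to confirm by computation. For numbers below $2^w$, the padded order coincides with comparing the bit-reversed $w$-digit binary strings, so one can check that the set of numbers with final digits $110$ really does have least element $6$ (an illustrative sketch; the helper name is mine):

```python
def padded_key(n, width=10):
    """Sort key for the zero-padded final-digits order on numbers < 2**width:
    pad the binary representation to a fixed width, reverse it, and compare
    the results as ordinary binary numbers."""
    return int(format(n, f"0{width}b")[::-1], 2)

# Numbers with final binary digits 110 are those congruent to 6 mod 8.
# In the padded order this set has a least element, namely 6 itself,
# so it is not open in the order topology.
candidates = [n for n in range(1024) if n % 8 == 6]
assert min(candidates, key=padded_key) == 6
```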
Even the successor function $x\mapsto x+1$ is not continuous with respect to this order. Finally, let me mention that a version of the main theorem also applies to the integers $\newcommand\Z{\mathbb{Z}}\Z$, using the following order. Go to the article to read more.
Divided by Positive Element of Quotient Field

Theorem

Let $\left({D, +, \circ, \le}\right)$ be a totally ordered integral domain, and let $K$ be its field of quotients.

Then:

$\forall z \in K: \exists x, y \in D: z = \dfrac x y, y \in D_{>0}$

Proof

By definition of the field of quotients:

$\forall z \in K: \exists x', y' \in D: z = \dfrac {x'} {y'}, y' \in D_{\ne 0}$

Suppose $z = x' / y'$ with $y' \notin D_{>0}$. Since $y' \ne 0$ and $D$ is totally ordered, $y' < 0$. Then:

\(\displaystyle x' / y'\) \(=\) \(\displaystyle x' \circ \left({y'}\right)^{-1}\) Definition of Division
\(\displaystyle \) \(=\) \(\displaystyle \left({- x'}\right) \circ \left({- y'}\right)^{-1}\) Product of Ring Negatives
\(\displaystyle \) \(=\) \(\displaystyle \left({- x'}\right) / \left({- y'}\right)\) Definition of Division

Since $y' < 0$, it follows that $- y' > 0$ from Properties of Ordered Ring $(4)$.

So all we need to do is set $x = -x', y = -y'$ and the result follows.
$\blacksquare$
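This sign-normalization is precisely what standard rational-number libraries perform. For instance, Python's `fractions.Fraction` always stores its denominator as a positive integer, flipping the signs of numerator and denominator exactly as in the proof:

```python
from fractions import Fraction

# A quotient with negative denominator is rewritten with both parts negated,
# mirroring the step x'/y' = (-x')/(-y') in the proof above.
z = Fraction(3, -4)
assert z.numerator == -3 and z.denominator == 4

# Negating both numerator and denominator leaves the value unchanged.
assert Fraction(-3, 4) == Fraction(3, -4)
assert Fraction(-5, -10) == Fraction(1, 2)
```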
A. Enayat, J. D. Hamkins, and B. Wcisło, “Topological models of arithmetic,” ArXiv e-prints, 2018. (under review) @ARTICLE{EnayatHamkinsWcislo2018:Topological-models-of-arithmetic, author = {Ali Enayat and Joel David Hamkins and Bartosz Wcisło}, title = {Topological models of arithmetic}, journal = {ArXiv e-prints}, year = {2018}, volume = {}, number = {}, pages = {}, month = {}, note = {under review}, abstract = {}, keywords = {under-review}, source = {}, doi = {}, eprint = {1808.01270}, archivePrefix = {arXiv}, primaryClass = {math.LO}, url = {http://wp.me/p5M0LV-1LS}, } Abstract.Ali Enayat had asked whether there is a nonstandard model of Peano arithmetic (PA) that can be represented as $\newcommand\Q{\mathbb{Q}}\langle\Q,\oplus,\otimes\rangle$, where $\oplus$ and $\otimes$ are continuous functions on the rationals $\Q$. We prove, affirmatively, that indeed every countable model of PA has such a continuous presentation on the rationals. More generally, we investigate the topological spaces that arise as such topological models of arithmetic. The reals $\newcommand\R{\mathbb{R}}\R$, the reals in any finite dimension $\R^n$, the long line and the Cantor space do not, and neither does any Suslin line; many other spaces do; the status of the Baire space is open. The first author had inquired whether a nonstandard model of arithmetic could be continuously presented on the rational numbers. Main Question. (Enayat, 2009) Are there continuous functions $\oplus$ and $\otimes$ on the rational numbers $\Q$, such that $\langle\Q,\oplus,\otimes\rangle$ is a nonstandard model of arithmetic? By a model of arithmetic, what we mean here is a model of the first-order Peano axioms PA, although we also consider various weakenings of this theory. The theory PA asserts of a structure $\langle M,+,\cdot\rangle$ that it is the non-negative part of a discretely ordered ring, plus the induction principle for assertions in the language of arithmetic. 
The natural numbers $\newcommand\N{\mathbb{N}}\langle \N,+,\cdot\rangle$, for example, form what is known as the standard model of PA, but there are also many nonstandard models, including continuum many non-isomorphic countable models. We answer the question affirmatively, and indeed, the main theorem shows that every countable model of PA is continuously presented on $\Q$. We define generally that a topological model of arithmetic is a topological space $X$ equipped with continuous functions $\oplus$ and $\otimes$, for which $\langle X,\oplus,\otimes\rangle$ satisfies the desired arithmetic theory. In such a case, we shall say that the underlying space $X$ continuously supports a model of arithmetic and that the model is continuously presented upon the space $X$. Question. Which topological spaces support a topological model of arithmetic? In the paper, we prove that the reals $\R$, the reals in any finite dimension $\R^n$, the long line and Cantor space do not support a topological model of arithmetic, and neither does any Suslin line. Meanwhile, there are many other spaces that do support topological models, including many uncountable subspaces of the plane $\R^2$. It remains an open question whether any uncountable Polish space, including the Baire space, can support a topological model of arithmetic. Let me state the main theorem and briefly sketch the proof. Main Theorem. Every countable model of PA has a continuous presentation on the rationals $\Q$. Proof. We shall prove the theorem first for the standard model of arithmetic $\langle\N,+,\cdot\rangle$. Every school child knows that when computing integer sums and products by the usual algorithms, the final digits of the result $x+y$ or $x\cdot y$ are completely determined by the corresponding final digits of the inputs $x$ and $y$. Presented with only final segments of the input, the child can nevertheless proceed to compute the corresponding final segments of the output. 
\begin{equation*}\small\begin{array}{rcr} \cdots1261\quad & \qquad & \cdots1261\quad\\ \underline{+\quad\cdots 153\quad}&\qquad & \underline{\times\quad\cdots 153\quad}\\ \cdots414\quad & \qquad & \cdots3783\quad\\ & & \cdots6305\phantom{3}\quad\\ & & \cdots1261\phantom{53}\quad\\ & & \underline{\quad\cdots\cdots\phantom{253}\quad}\\ & & \cdots933\quad\\ \end{array}\end{equation*} This phenomenon amounts exactly to the continuity of addition and multiplication with respect to what we call the final-digits topology on $\N$, which is the topology having basic open sets $U_s$, the set of numbers whose binary representations ends with the digits $s$, for any finite binary string $s$. (One can do a similar thing with any base.) In the $U_s$ notation, we include the number that would arise by deleting initial $0$s from $s$; for example, $6\in U_{00110}$. Addition and multiplication are continuous in this topology, because if $x+y$ or $x\cdot y$ has final digits $s$, then by the school-child’s observation, this is ensured by corresponding final digits in $x$ and $y$, and so $(x,y)$ has an open neighborhood in the final-digits product space, whose image under the sum or product, respectively, is contained in $U_s$. Let us make several elementary observations about the topology. The sets $U_s$ do indeed form the basis of a topology, because $U_s\cap U_t$ is empty, if $s$ and $t$ disagree on some digit (comparing from the right), or else it is either $U_s$ or $U_t$, depending on which sequence is longer. The topology is Hausdorff, because different numbers are distinguished by sufficiently long segments of final digits. There are no isolated points, because every basic open set $U_s$ has infinitely many elements. Every basic open set $U_s$ is clopen, since the complement of $U_s$ is the union of $U_t$, where $t$ conflicts on some digit with $s$. 
The topology is actually the same as the metric topology generated by the $2$-adic valuation, which assigns the distance between two numbers as $2^{-k}$, when $k$ is largest such that $2^k$ divides their difference; the set $U_s$ is an open ball in this metric, centered at the number represented by $s$. (One can also see that it is metric by the Urysohn metrization theorem, since it is a Hausdorff space with a countable clopen basis, and therefore regular.) By a theorem of Sierpinski, every countable metric space without isolated points is homeomorphic to the rational line $\Q$, and so we conclude that the final-digits topology on $\N$ is homeomorphic to $\Q$. We’ve therefore proved that the standard model of arithmetic $\N$ has a continuous presentation on $\Q$, as desired. But let us belabor the argument somewhat, since we find it interesting to notice that the final-digits topology (or equivalently, the $2$-adic metric topology on $\N$) is precisely the order topology of a certain definable order on $\N$, what we call the final-digits order, an endless dense linear order, which is therefore order-isomorphic and thus also homeomorphic to the rational line $\Q$, as desired. Specifically, the final-digits order on the natural numbers, pictured in figure 1, is the order induced from the lexical order on the finite binary representations, but considering the digits from right-to-left, giving higher priority in the lexical comparison to the low-value final digits of the number. To be precise, the final-digits order $n\triangleleft m$ holds, if at the first point of disagreement (from the right) in their binary representation, $n$ has $0$ and $m$ has $1$; or if there is no disagreement, because one of them is longer, then the longer number is lower, if the next digit is $0$, and higher, if it is $1$ (this is not the same as treating missing initial digits as zero). 
Thus, the even numbers appear as the left half of the order, since their final digit is $0$, and the odd numbers as the right half, since their final digit is $1$, and $0$ is directly in the middle; indeed, the highly even numbers, whose representations end with a lot of zeros, appear further and further to the left, while the highly odd numbers, which end with many ones, appear further and further to the right. If one does not allow initial $0$s in the binary representation of numbers, then note that zero is represented in binary by the empty sequence. It is evident that the final-digits order is an endless dense linear order on $\N$, just as the corresponding lexical order on finite binary strings is an endless dense linear order. The basic open set $U_s$ of numbers having final digits $s$ is an open set in this order, since any number ending with $s$ is above a number with binary form $100\cdots0s$ and below a number with binary form $11\cdots 1s$ in the final-digits order; so $U_s$ is a union of intervals in the final-digits order. Conversely, every interval in the final-digits order is open in the final-digits topology, because if $n\triangleleft x\triangleleft m$, then this is determined by some final segment of the digits of $x$ (appending initial $0$s if necessary), and so there is some $U_s$ containing $x$ and contained in the interval between $n$ and $m$. Thus, the final-digits topology is precisely the same as the order topology of the final-digits order, which is a definable endless dense linear order on $\N$. Since this order is isomorphic and hence homeomorphic to the rational line $\Q$, we conclude again that $\langle \N,+,\cdot\rangle$ admits a continuous presentation on $\Q$. We now complete the proof by considering an arbitrary countable model $M$ of PA. Let $\triangleleft^M$ be the final-digits order as defined inside $M$.
Since the reasoning of the above paragraphs can be undertaken in PA, it follows that $M$ can see that its addition and multiplication are continuous with respect to the order topology of its final-digits order. Since $M$ is countable, the final-digits order of $M$ makes it a countable endless dense linear order, which by Cantor’s theorem is therefore order-isomorphic and hence homeomorphic to $\Q$. Thus, $M$ has a continuous presentation on the rational line $\Q$, as desired. $\Box$ The executive summary of the proof is: the arithmetic of the standard model $\N$ is continuous with respect to the final-digits topology, which is the same as the $2$-adic metric topology on $\N$, and this is homeomorphic to the rational line, because it is the order topology of the final-digits order, a definable endless dense linear order; applied in a nonstandard model $M$, this observation means the arithmetic of $M$ is continuous with respect to its rational line $\Q^M$, which for countable models is isomorphic to the actual rational line $\Q$, and so such an $M$ is continuously presentable upon $\Q$. Let me mention an alternative order, which it seems many people expect to use instead of the final-digits order as we defined it above. With this order, one in effect takes missing initial digits of a number as $0$, which is of course quite reasonable. The problem with this order, however, is that the order topology is not actually the final-digits topology. For example, the set of all numbers having final digits $110$ in this order has a least element, the number $6$, and so it is not open in the order topology. Worse, I claim that arithmetic is not continuous with respect to this order. For example, $1+1=2$, and $2$ has an open neighborhood consisting entirely of even numbers, but every open neighborhood of $1$ has both odd and even numbers, whose sums therefore will not all be in the selected neighborhood of $2$.
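These failures can be checked mechanically. Treating missing initial digits as $0$ amounts to comparing the reversed binary digit tuples lexicographically, which in Python is plain tuple comparison (an illustrative sketch of my own, not from the paper):

```python
# The "padded" order: compare binary digits from the right, treating
# missing initial digits as 0.  On reversed digit tuples this is exactly
# Python's built-in lexicographic tuple comparison.

def padded_key(n):
    bits = []
    while n:
        bits.append(n % 2)
        n //= 2
    return tuple(bits)

order = sorted(range(64), key=padded_key)

# The set of numbers with final digits 110 now has a least element, 6,
# so it is not open in the order topology:
ends_110 = [n for n in range(64) if n % 8 == 6]
assert min(ends_110, key=padded_key) == 6

# Every even number precedes every odd one, so the evens form an open
# interval (a neighborhood of 2 consisting entirely of evens) ...
assert all(n % 2 == 0 for n in order[:32])
# ... while 1 is the least odd number, so every interval around 1 must
# reach down into the evens, meeting both parities:
assert order[32] == 1
```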
Even the successor function $x\mapsto x+1$ is not continuous with respect to this order. Finally, let me mention that a version of the main theorem also applies to the integers $\newcommand\Z{\mathbb{Z}}\Z$, using an analogous order. Go to the article to read more. A. Enayat, J. D. Hamkins, and B. Wcisło, “Topological models of arithmetic,” ArXiv e-prints, 2018. (under review)

@ARTICLE{EnayatHamkinsWcislo2018:Topological-models-of-arithmetic,
  author = {Ali Enayat and Joel David Hamkins and Bartosz Wcisło},
  title = {Topological models of arithmetic},
  journal = {ArXiv e-prints},
  year = {2018},
  note = {under review},
  keywords = {under-review},
  eprint = {1808.01270},
  archivePrefix = {arXiv},
  primaryClass = {math.LO},
  url = {http://wp.me/p5M0LV-1LS},
}
Let $K$ be a compact metric space, and let $A \subset K$. Prove that $A$ is compact if and only if, for every continuous function $f:K \to \mathbb{R}$, there exists a $q \in A$ so that $$f(q) = \max_{a \in A} \{f(a)\},$$ i.e., the restriction of $f$ to $A$ attains its maximum. Here's an attempt: The forward direction is a standard theorem. Suppose conversely that every continuous $f: K \to \mathbb{R}$ attains its maximum on $A$. Let $\rho \in \overline{A}$. We shall show $\rho \in A$; therefore $A$ is closed and, being a closed subset of a compact space, compact. Suppose $\rho \notin A$. Let $\{a_n\}_{n \in \mathbb{N}} \subset A$ with $a_n \to \rho$. There exists a continuous function $f: K \to \mathbb{R}$ which attains its maximum at $\rho$ (for example, $f: x \mapsto -d(x,\rho)$, where $d$ is the metric on $K$). In particular, $f: \overline{A} \to \mathbb{R}$ is continuous and has a maximum at $\rho$. Therefore, given $\epsilon >0$, there exists $\delta >0$ so that $f(B_\delta(\rho)) \subset B_\epsilon (f(\rho)).$ Choose any $a_{n_1} \in B_\delta(\rho)$. Then, we may take $0 < \tilde{\epsilon} < \epsilon$ and $0< \tilde{\delta}< d(a_{n_1},\rho)$ so that, given $a_{n_2} \in B_{\tilde{\delta}}(\rho)$, $f(a_{n_2}) > f(a_{n_1})$. Repeating this process indefinitely, we see that $f$ cannot attain a maximum on $A$, a contradiction. We conclude $\rho \in A$, so that $A$ is compact. Any issues? Other methods of proof are welcome. In fact, it'd be great to see alternatives. I've never seen this result anywhere, so I thought it interesting.
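For intuition, here is a concrete instance of the contrapositive (my own example, not part of the original statement), showing how a non-closed $A$ defeats the criterion:

```latex
% K = [0,1] compact, A = (0,1) not closed, f(x) = x continuous on K.
% The supremum of f on A is 1, but no point of A attains it:
\[
  \sup_{a \in A} f(a) \;=\; 1, \qquad f(a) < 1 \ \text{ for every } a \in A,
\]
% so this f witnesses the failure of the maximum criterion for A.
```

This is exactly the situation the proof attempt above manufactures abstractly with $f(x) = -d(x,\rho)$ for a limit point $\rho \notin A$.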
Prove: Let $a_n, b_n$ be positive sequences so that $\limsup \frac {a_n}{b_n} \lt\infty$. If $\sum b_n$ converges then $\sum a_n$ converges. So far I was thinking that from $\limsup \frac {a_n}{b_n} \lt\infty$, we know that $a_n$ doesn't have a subsequential limit of positive infinity and $b_n$ doesn't have a subsequential limit of $0$, since those two cases would make $\limsup \frac {a_n}{b_n}$ be infinity... but I'm not sure if that's what $\limsup \frac {a_n}{b_n}$ means. If it means the supremum of the set of subsequential limits of the sequence of ratios $\frac{a_n}{b_n}$, that means that eventually $b_n$ will be larger than $a_n$, since otherwise the ratio would tend to infinity. Then we can use Theorem 3.25 from Rudin: if $|a_n| \le c_n$ for $n \ge N$, where $N$ is some fixed integer, and $\sum c_n$ converges, then $\sum a_n$ converges as well. Tell me what you guys think, many thanks.
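A small numeric illustration of the hypothesis (with sequences of my own choosing): a bounded limsup gives $a_n \le C\,b_n$ for large $n$ and some constant $C$, but it does not force $b_n$ to eventually exceed $a_n$.

```python
# a_n = 2/n^2, b_n = 1/n^2: the ratio a_n/b_n is constantly 2 (so the
# limsup is finite), yet a_n > b_n for every n.  Comparison with C*b_n,
# not with b_n itself, is what bounds the partial sums of a_n.

N = 10000
a = [2 / n**2 for n in range(1, N + 1)]
b = [1 / n**2 for n in range(1, N + 1)]

assert all(an > bn for an, bn in zip(a, b))        # b_n is never larger
assert max(an / bn for an, bn in zip(a, b)) == 2   # but a_n/b_n is bounded

C = 2
assert sum(a) <= C * sum(b)   # sum a_n <= C * sum b_n, which converges
```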
I am trying to show that $$\int_{S(t)} (\textbf{n} \cdot \sigma ) \cdot\textbf{u} \ dS = \int_{V(t)}\left[(\nabla\cdot\sigma) \cdot \textbf{u} + \sigma : \nabla \textbf{u}\right] dV$$ where $\textbf{u}, \textbf{n}$ are vectors and $\sigma$ is a symmetric second-order tensor. Using the divergence theorem, $$\int_{S(t)} (\textbf{n} \cdot \sigma ) \cdot\textbf{u} \ dS = \int_{V(t)} (\nabla \cdot \sigma ) \cdot\textbf{u} \ dV$$ Now I have this identity: $$\tau:\nabla \textbf{u} + \textbf{u}\cdot(\nabla \cdot \tau) = \nabla\cdot(\tau\cdot\textbf{u})$$ where $\tau$ is a symmetric second-order tensor. This leads me to believe that I have done something wrong. Because I want the two terms on the left inside the volume integral, have I misused the divergence theorem?
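The identity itself (and hence the extra $\sigma : \nabla\textbf{u}$ term) can be checked numerically; here is an illustrative sketch of my own with arbitrary sample fields and central finite differences, using the index conventions $(\nabla\cdot\sigma)_j = \partial_i \sigma_{ij}$ and $\sigma : \nabla\textbf{u} = \sigma_{ij}\,\partial_i u_j$ (for symmetric $\sigma$ the contraction order does not matter):

```python
import math

# Numerical check of  div(sigma . u) = (div sigma) . u + sigma : grad u
# at a sample point, using central finite differences.

h = 1e-5

def sigma(x, y, z):
    # a sample symmetric second-order tensor field (arbitrary choice)
    return [[x * y, x + z, y * z],
            [x + z, y * y, x],
            [y * z, x, z * x + y]]

def u(x, y, z):
    # a sample vector field (arbitrary choice)
    return [math.sin(x), y * z, x + math.cos(z)]

def partial(f, i, p):
    """Central-difference d f / d x_i at the point p."""
    q1, q2 = list(p), list(p)
    q1[i] += h
    q2[i] -= h
    return (f(*q1) - f(*q2)) / (2 * h)

p = (0.3, -0.7, 1.2)

# left side: d_i (sigma_ij u_j), the divergence of the vector field sigma.u
def sigma_u(i):
    return lambda *q: sum(sigma(*q)[i][j] * u(*q)[j] for j in range(3))

lhs = sum(partial(sigma_u(i), i, p) for i in range(3))

# right side: (d_i sigma_ij) u_j  +  sigma_ij d_i u_j
div_sigma = [sum(partial(lambda *q, i=i, j=j: sigma(*q)[i][j], i, p)
                 for i in range(3)) for j in range(3)]
term1 = sum(div_sigma[j] * u(*p)[j] for j in range(3))
term2 = sum(sigma(*p)[i][j] * partial(lambda *q, j=j: u(*q)[j], i, p)
            for i in range(3) for j in range(3))

assert abs(lhs - (term1 + term2)) < 1e-6
```

The point the check makes concrete: the divergence theorem applies to the full vector field $\sigma\cdot\textbf{u}$, giving $\nabla\cdot(\sigma\cdot\textbf{u})$ in the volume integral, not $(\nabla\cdot\sigma)\cdot\textbf{u}$ alone.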