Usually this part of thermodynamics is not presented in the most efficient way. To derive the expression for, e.g., $C_P - C_V$, one goes through a series of steps involving Maxwell's relations and the triple product rule, and figuring out what to do in each step is largely guesswork. However, all relations of this sort follow from a single, more general mathematical identity. Consider a twice-differentiable, two-variable function $f(x,y)$ and its Legendre transform with respect to $y$, defined by $g(x,f_y) := f - f_y y$.[1] Here, $f_y$ denotes the derivative of $f$ with respect to $y$ with the other independent variable held fixed. Then, the following identity is satisfied: \begin{equation} \boxed{g_{xx} - f_{xx} = - \frac{f_{xy}^2}{f_{yy}} = \frac{g_{x f_y}^2}{g_{f_y f_y}}}. \end{equation} Proof. We have \begin{equation} \bigg(\frac{\partial f}{\partial x} \bigg)_{f_y} = f_x + f_y \bigg(\frac{\partial y}{\partial x}\bigg)_{f_y} = f_x - f_y \frac{f_{xy}}{f_{yy}}. \end{equation} Here, the first equality holds because of the chain rule [for the change of variables $(x,y)$ $\to$ $(x, f_y)$], and the second because of the triple product rule. Then, \begin{equation} g_x = \bigg(\frac{\partial f}{\partial x} \bigg)_{f_y} - f_y\bigg(\frac{\partial y}{\partial x}\bigg)_{f_y} = f_x, \end{equation} and \begin{equation} g_{xx} = \bigg(\frac{\partial f_x}{\partial x} \bigg)_{f_y}= f_{xx} + f_{xy}\bigg(\frac{\partial y}{\partial x}\bigg)_{f_y} = f_{xx} - \frac{f_{xy}^2}{f_{yy}}. \end{equation} Also, using the chain rule for the change of variables $(x,y)$ $\to$ $(x, f_y)$, we get \begin{equation} g_{f_y} = \bigg(\frac{\partial f}{\partial f_y} \bigg)_{x} -\bigg(\frac{\partial y}{\partial f_y} \bigg)_{x}f_y - y = -y. \end{equation} Therefore, the Legendre transform of $g$ with respect to $f_y$ is simply $g - g_{f_y} f_y = f$. 
Then, performing a similar analysis as above for $g$ gives \begin{equation} f_{xx} = g_{xx} - \frac{g_{x f_y}^2}{g_{f_y f_y}}. \end{equation} Now, all that remains to be done is to choose the variables and functions $x$, $y$, $f$ and $g$. Below are some examples. (1) $x = S$, $y = V$, $f = U$, $g = H$, and $f_y = -P$:\begin{equation}H_{SS} - U_{SS} = -\frac{U_{SV}^2}{U_{VV}}.\end{equation}Note that\begin{equation}U_{SS} = \bigg(\frac{\partial^2 U}{\partial S^2}\bigg)_V = \bigg(\frac{\partial T}{\partial S}\bigg)_V = \frac{T}{T(\partial S/\partial T)_V} = \frac{T}{C_V},\end{equation}and that, similarly,\begin{equation}H_{SS} = \bigg(\frac{\partial^2 H}{\partial S^2}\bigg)_P = \bigg(\frac{\partial T}{\partial S}\bigg)_P = \frac{T}{T(\partial S/\partial T)_P} = \frac{T}{C_P}.\end{equation}Hence, we have\begin{equation}\boxed{\frac{1}{C_P} - \frac{1}{C_V} = -\frac{U_{SV}^2}{T U_{VV}} = -\frac{\Big(\frac{\partial^2 U}{\partial S \partial V}\Big)^2}{\Big(\frac{\partial U}{\partial S}\Big)_V \Big(\frac{\partial^2 U}{\partial V^2}\Big)_S}}\end{equation}which is equivalent to the relation OP wanted to derive. 
(2) $x = T$, $y = V$, $f = A$, $g = G$, $f_y = -P$, and noting that differentiating with respect to $-P$ only flips the sign:\begin{equation}G_{TT} - A_{TT} = \frac{G_{T,-P}^2}{G_{-P,-P}} = \frac{G_{TP}^2}{G_{PP}}.\end{equation}Here,\begin{equation}A_{TT} = \bigg(\frac{\partial^2 A}{\partial T^2}\bigg)_V = -\bigg(\frac{\partial S}{\partial T}\bigg)_V = - \frac{1}{T}C_V,\end{equation}\begin{equation}G_{TT} = \bigg(\frac{\partial^2 G}{\partial T^2}\bigg)_P = -\bigg(\frac{\partial S}{\partial T}\bigg)_P = - \frac{1}{T}C_P,\end{equation}\begin{equation}G_{TP} = \frac{\partial^2 G}{\partial T \partial P} = \bigg(\frac{\partial V}{\partial T}\bigg)_P = V \alpha,\end{equation}where $\alpha := \frac{1}{V}\Big(\frac{\partial V}{\partial T}\Big)_P$ is the volume expansion coefficient, and\begin{equation}G_{PP} = \bigg(\frac{\partial^2 G}{\partial P^2}\bigg)_T = \bigg(\frac{\partial V}{\partial P}\bigg)_T = - V\kappa_T,\end{equation}where $\kappa_T := -\frac{1}{V}\Big(\frac{\partial V}{\partial P}\Big)_T$ is the isothermal compressibility. Then, it follows that\begin{equation}\boxed{C_P - C_V = \frac{TV\alpha^2}{\kappa_T}}.\end{equation} (3) $x = P$, $y = S$, $f = H$, $g = G$, and $f_y = T$:\begin{equation}G_{PP} - H_{PP} = \frac{G_{TP}^2}{G_{TT}}.\end{equation}We have\begin{equation}H_{PP} = \bigg(\frac{\partial^2 H}{\partial P^2}\bigg)_S = \bigg(\frac{\partial V}{\partial P}\bigg)_S = - V\kappa_S,\end{equation}where $\kappa_S := -\frac{1}{V}\Big(\frac{\partial V}{\partial P}\Big)_S$ is the adiabatic compressibility. We have already shown that\begin{equation}G_{PP} = -V\kappa_T, \quad G_{TP} = V \alpha, \quad G_{TT} = - \frac{C_P}{T},\end{equation}whence we obtain\begin{equation}\boxed{\kappa_T - \kappa_S = \frac{TV\alpha^2}{C_P}}.\end{equation} [1] To be more precise, one should first write down $f(x,y) - y f_y$ and then express $y$ as a function of $x$ and $f_y$. Also, for the Legendre transform to exist, $f(x,y)$ should be a convex or concave function of $y$.
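The boxed identity can also be checked mechanically with a computer algebra system. Below is a minimal SymPy sketch; the test function $f$ is an arbitrary illustrative choice (strictly convex in $y$, since $f_{yy} = 2e^x > 0$), not anything from the text.

```python
import sympy as sp

x, y, p = sp.symbols('x y p')

# Hypothetical test function, strictly convex in y (f_yy = 2*exp(x) > 0):
f = sp.exp(x)*y**2 + sp.sin(x)*y + x**3

fy = sp.diff(f, y)
y_sol = sp.solve(sp.Eq(fy, p), y)[0]   # invert p = f_y(x, y) for y
g = (f - fy*y).subs(y, y_sol)          # Legendre transform g(x, p)

# g_xx, evaluated back at p = f_y(x, y), should equal f_xx - f_xy^2/f_yy:
lhs = sp.diff(g, x, 2).subs(p, fy)
rhs = sp.diff(f, x, 2) - sp.diff(f, x, y)**2 / sp.diff(f, y, 2)
assert sp.simplify(lhs - rhs) == 0
print("identity verified for this f")
```

The same script with $f$ replaced by an equation of state's thermodynamic potential reproduces the boxed relations above.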
Existence and uniqueness of positive solutions for a class of logistic type elliptic equations in $\mathbb{R}^N$ involving fractional Laplacian 1. Departamento de Matemática, Universidad Técnica Federico Santa María, Casilla: V-110, Avda. España 1680, Valparaíso, Chile 2. Department of Mathematics, Jiangxi Normal University, Nanchang, Jiangxi 330022, China The equation under study is $$(-\Delta)^\alpha u = \lambda a(x)u - b(x)u^p \quad \text{in } \mathbb{R}^N,$$ where $$a(x) \to a^\infty > 0 \quad \text{and} \quad b(x) \to b^\infty > 0 \quad \text{as } |x| \to \infty.$$ Mathematics Subject Classification: 35J60, 47G20. Citation: Alexander Quaas, Aliang Xia. Existence and uniqueness of positive solutions for a class of logistic type elliptic equations in $\mathbb{R}^N$ involving fractional Laplacian. Discrete & Continuous Dynamical Systems - A, 2017, 37 (5): 2653-2668. doi: 10.3934/dcds.2017113
What is the best way to approximate how many primes there are less than $2^{43112609}-1$? I know that one can use the prime number theorem. I also found on the Internet that $\pi (10^{24})=18435599767349200867866$, and one can use Loo's theorem that there is always a prime between $3n$ and $4n$, so this method gives an upper and a lower bound. Follow JavaMan's hint (logarithmic integral): $$\pi (n) \sim \int _{ 2 }^{ n }{ \frac{1}{\log x}}\,dx$$ It is approximately: $$1.0590175682245865561220555017659840985462778602424400915\times{10}^{12978181}$$ We can compute the logarithmic integral, as suggested by JavaMan, with $$ \begin{align} \operatorname{li}(x) &=\operatorname{PV}\int_0^x\frac{\mathrm{d}t}{\log(t)}\\ &=\gamma+\log|\log(x)|+\sum_{k=1}^\infty\frac{\log(x)^k}{k\;k!} \end{align} $$ which converges for all $x>0$, $x \neq 1$. For large values of $x$, there is an asymptotic expansion: $$ \operatorname{li}(x)=\frac{x}{\log(x)}\left(1+\frac{1}{\log(x)}+\frac{2}{\log(x)^2}+\dots+\frac{k!}{\log(x)^k}+O\left(\frac{1}{\log(x)^{k+1}}\right)\right) $$ This doesn't converge, as is the case with most asymptotic expansions. It turns out SAGE does not have a single function name for the logarithmic integral, but that is not necessary: it does have the exponential integral. In Abramowitz and Stegun this is written $\operatorname{Ei}(x)$; see formula 5.1.2 on page 228. Meanwhile, formula 5.1.3 on the same page gives what you want, $$ \operatorname{li}(x) = \operatorname{Ei}(\log x), $$ where logarithms are base $e = 2.718281828459\ldots$ So that is what you want. In SAGE, I found eint(), which returns the exponential integral of this number. 
EXAMPLES:

    sage: r = 1.0
    sage: r.eint()
    1.89511781635594
    sage: r = -1.0
    sage: r.eint()
    NaN

and log(base='e'). EXAMPLES:

    sage: R = RealField()
    sage: R(2).log()
    0.693147180559945
    sage: log(RR(2))
    0.693147180559945
    sage: log(RR(2),e)
    0.693147180559945
    sage: r = R(-1); r.log()
    3.14159265358979*I
    sage: log(RR(-1),e)
    3.14159265358979*I
    sage: r.log(2)
    4.53236014182719*I

So, however SAGE syntax works, you want eint(log x) for your number. For comparison, $$ \operatorname{li}(2) = 1.04516378\ldots, $$ $$ \operatorname{li}(e) = 1.895117816\ldots $$ and you can compare some other small values with robjohn's formula until you are sure you have it right. As Gerry points out, this is still unlikely to give a value, so the best you can do is the asymptotic series. I expect the best accuracy is taking $n$ terms when $n \approx \log x$, which is still huge but actually possible to calculate with a loop and patience. That is, take about $43{,}112{,}609 \log 2 \approx 29{,}883{,}383$ terms. If you run out of patience, just do 100 terms. Or ten. Maple says:

    N:= 2^(43112609)-1:
    evalf(Li(N));

$0.1059014049\times 10^{12978182}$ Assuming the Riemann hypothesis, Schoenfeld's estimate says the error is $|\pi(N) - \operatorname{li}(N)| \le\sqrt{N} \log(N)/(8 \pi) = 0.2115223997\times 10^{6489101}$ The simplest is to use the estimates $$\pi(x) \leq \frac{x}{\ln(x)}\left(1+\frac{1}{\ln(x)}+\frac{2}{\ln(x)^2}+\frac{7.59}{\ln(x)^3}\right)$$ $$\pi(x) > \frac{x}{\ln(x)}\left(1+\frac{1}{\ln(x)}+\frac{2}{\ln(x)^2}\right)$$ Other than that you can take (you would not need more than 50,000,000 terms) $$R(x) = 1 + \sum_{k=1}^\infty \frac{(\ln x)^k}{k!\, k\, \zeta(k+1)}$$ $$\pi(x) \approx R(x) - \frac{1}{\ln x} + \frac{1}{\pi} \arctan\left(\frac{\pi}{\ln x}\right)$$ This last is currently likely the best approximation you can have. The error term is about $\frac{\sqrt{x}}{\ln(x)}$.
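For what it's worth, the quoted leading digits and exponent can be reproduced with a short script that works entirely in logarithms, using the truncated asymptotic series for $\operatorname{li}$. A sketch with mpmath (the cutoff of ten terms is an arbitrary choice, and dropping the $-1$ in $2^{43112609}-1$ is negligible at this scale):

```python
from mpmath import mp, mpf, log, log10, floor, power

mp.dps = 40  # plenty of precision: log10(N) has only ~8 integer digits

p = 43112609
log10N = p * log10(2)       # log10 of N = 2^p (the "-1" is negligible)
lnN = log10N * log(10)

# truncated asymptotic series: li(N) ~ (N/ln N) * sum_{k>=0} k!/(ln N)^k
S = mpf(0)
term = mpf(1)
for k in range(1, 11):
    S += term
    term *= k / lnN

log10_li = log10N - log10(lnN) + log10(S)
E = int(floor(log10_li))
mantissa = power(10, log10_li - E)
print(f"li(2^{p}-1) ≈ {mantissa} e{E}")
```

This agrees with both the digits quoted above and the Maple output $0.1059014049\times 10^{12978182}$.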
2 Methods for Simulating Radiated Fields in COMSOL Multiphysics® In Part 2 of our blog series on multiscale modeling in high-frequency electromagnetics, we discuss a practical implementation of multiscale techniques in the COMSOL Multiphysics® software. We will simulate radiated fields using two different techniques and verify our results with theory. While these methods are generally applicable, the discussion will revolve around the practical issue of antenna-to-antenna communication. For a review of the theory and terms, you can refer to the first post in the series. Simulating a Radiating Antenna Let's begin by discussing a traditional antenna simulation using COMSOL Multiphysics and the RF Module. When we simulate a radiating antenna, we have a local source and are interested in the subsequent electromagnetic fields, both nearby and outgoing from the antenna. This is fundamentally what an antenna does: it converts local information (e.g., voltage or current) into propagating information (e.g., outgoing radiation). A receiving antenna inverts this operation and changes incident radiation into local information. Many devices, such as a cellphone, act as both receiving and emitting antennas, which is what enables you to make a phone call or browse the web. Antennas of the Atacama Large Millimeter Array (ALMA) in Chile. ALMA detects signals from space to help scientists study the formation of stars, planets, and galaxies. Needless to say, the distance these signals travel is much greater than the size of an antenna. Image licensed under CC BY 4.0, via ESO/C. Malin. In order to keep the required computational resources reasonable, we model only a small region of space around the antenna. We then truncate this small simulation domain with an absorbing boundary, such as a perfectly matched layer (PML), which absorbs the outgoing radiation. 
Since this will solve for the complex electric field everywhere in our simulation domain, we will refer to this as a Full-Wave simulation. We then extract information about the antenna's emission pattern using a Far-Field Domain node, which performs a near-to-far-field transformation. This approach gives us information about the electromagnetic field in two regions: the fields in the immediate vicinity of the antenna, which are computed directly, and the fields far away, which are calculated using the Far-Field Domain node. This is demonstrated in a number of RF models in the Application Gallery, such as the Dipole Antenna tutorial model, so we will not comment further on the practical implementation here. Using the Far-Field Domain Node One question that occasionally comes up in technical support is: "How do I use the Far-Field Domain node to calculate the radiated field at a specific location?" This is an excellent question. As stated in the RF Module User's Guide, the Far-Field Domain node calculates the scattering amplitude, and so determining the complex field at a specific location requires a modification for distance and phase. The expression for the x-component of the electric field in the far field is \begin{align} E_x = \texttt{emw.Efarx}\,\frac{e^{-jkr}}{r}, \end{align} and similar expressions apply to the y- and z-components, where $r$ is the radial distance in spherical coordinates, $k$ is the wave vector for the medium, and emw.Efarx is the scattering amplitude. It is worth pointing out that emw.Efarx is the scattering amplitude in a particular direction, and so it depends on angular position $(\theta, \phi)$, but not radial position. The decrease in field strength is solely governed by the $1/r$ term. There are also variables emw.Efarphi and emw.Efartheta, which give the scattering amplitude in spherical coordinates. To verify this result, we simulate a perfect electric dipole and compare the simulation results with the analytical solution, which we covered in the previous blog post. 
As we stated in that post, we split the full results into two terms, which we call the near- and far-field terms. We briefly restate those results here. \begin{align} \overrightarrow{E} & = \overrightarrow{E}_{FF} + \overrightarrow{E}_{NF}\\ \overrightarrow{E}_{FF} & = \frac{1}{4\pi\epsilon_0}k^2(\hat{r}\times\vec{p})\times\hat{r}\frac{e^{-jkr}}{r}\\ \overrightarrow{E}_{NF} & = \frac{1}{4\pi\epsilon_0}[3\hat{r}(\hat{r}\cdot\vec{p})-\vec{p}]\left(\frac{1}{r^3}+\frac{jk}{r^2}\right)e^{-jkr} \end{align} where $\vec{p}$ is the dipole moment of the radiation source and $\hat{r}$ is the unit vector in spherical coordinates. Below, we can see the electric fields vs. distance calculated using the Far-Field Domain node for a dipole at the origin with $\vec{p}=\left(0,0,1\right)\,\mathrm{A\cdot m}$. For comparison, we have included the Far-Field Domain node, the full theory, as well as the near- and far-field terms individually. The fields are evaluated along an arbitrary cut line. As you can see, there is overlap between the Far-Field Domain node and the far-field theory plots, and they agree with the full theory as the distance from the antenna increases. This is because the Far-Field Domain node only accounts for radiation that goes like $1/r$, and so the agreement improves with increasing distance as the contributions of the $1/r^2$ and $1/r^3$ terms go to zero. In other words, the Far-Field Domain node is correct in the far field, which you probably would have guessed from the name. A comparison of the Far-Field Domain node vs. theory for a point dipole source. Using the Electromagnetic Waves, Beam Envelopes Interface For most simulations, the near-field and far-field information is sufficient and no further work is necessary. In some cases, however, we also want to know the fields in the intermediate region, also known as the induction or transition zone. One option is to simply increase the simulation size until you explicitly calculate this information as part of the simulation. 
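The near- and far-field terms above are easy to evaluate outside COMSOL as a sanity check. The sketch below uses hypothetical parameters, and note that $\vec{p}$ enters here in $\mathrm{C\cdot m}$ per the formulas above, not the $\mathrm{A\cdot m}$ convention of the COMSOL point dipole. It confirms that the $1/r^2$ and $1/r^3$ contributions die off relative to the $1/r$ term:

```python
import numpy as np

eps0 = 8.8541878128e-12  # vacuum permittivity, F/m

def dipole_E(r_vec, p_vec, k):
    """Near- and far-field terms of a time-harmonic point electric dipole
    (phasor convention e^{+jwt}, so outgoing waves carry e^{-jkr})."""
    r = np.linalg.norm(r_vec)
    rhat = r_vec / r
    pref = 1.0 / (4 * np.pi * eps0)
    phase = np.exp(-1j * k * r)
    E_ff = pref * k**2 * np.cross(np.cross(rhat, p_vec), rhat) * phase / r
    E_nf = pref * (3 * rhat * np.dot(rhat, p_vec) - p_vec) \
           * (1 / r**3 + 1j * k / r**2) * phase
    return E_ff, E_nf

lam = 1.0                        # wavelength, m (arbitrary choice)
k = 2 * np.pi / lam
p = np.array([0.0, 0.0, 1e-12])  # dipole moment, C·m (arbitrary choice)

# Far from the source the 1/r term dominates; close in, the others do.
E_ff_far, E_nf_far = dipole_E(np.array([100 * lam, 0.0, 0.0]), p, k)
E_ff_near, E_nf_near = dipole_E(np.array([0.01 * lam, 0.0, 0.0]), p, k)
print(np.linalg.norm(E_nf_far) / np.linalg.norm(E_ff_far))    # small
print(np.linalg.norm(E_nf_near) / np.linalg.norm(E_ff_near))  # large
```

On axis perpendicular to $\vec{p}$, the ratio of the near-field to far-field term scales like $1/(kr)$, which is exactly the behavior seen in the plot.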
The drawback of this technique is that the increased simulation size requires more computational resources. We recommend a maximum mesh element size of $\lambda/5$ for 3D electromagnetic simulations. As the simulation size increases, the number of mesh elements increases, and so do the computational requirements. Another option is to use the Electromagnetic Waves, Beam Envelopes interface, which here we will simply refer to as Beam-Envelopes. As discussed in a previous blog post, Beam-Envelopes is an excellent choice when the simulation solution will have either one or two directions of propagation, and it allows us to use a much coarser mesh. Since the phase of the emission from an antenna will look like an outgoing spherical wave, this is a perfect solution for determining these fields. We perform a Full-Wave simulation of the fields near the source, as before, and then use Beam-Envelopes to simulate the fields out to an arbitrary distance, as required. The simulation domain assignments. If the outer region is assigned to PML, then a Full-Wave simulation is performed everywhere. It is also possible to solve the inner region using a Full-Wave simulation and the outer region using Beam-Envelopes, as we will discuss below. Note that this image is not to scale, and we have only modeled 1/8 of the spherical domain due to symmetry. How do we couple the Beam-Envelopes simulation to our Full-Wave simulation of the dipole? This can be done in two steps involving the boundary conditions at the interface between the Full-Wave and Beam-Envelopes domains. First, we set the exterior boundary of the Full-Wave simulation to PMC, which is the natural boundary condition for that simulation. The second step is to set that same boundary to an Electric Field boundary condition for Beam-Envelopes. We then specify the field values in the Beam-Envelopes Electric Field boundary condition according to the fields computed from the Full-Wave simulation, as shown here. 
The Electric Field boundary condition in Beam-Envelopes. Note that the image in the top right is not to scale. A Matched Boundary Condition is applied to the exterior boundary of the Beam-Envelopes domain to absorb the outgoing spherical wave. The remaining boundaries are set to PEC and PMC according to symmetry. We must also set the solver to Fully Coupled, which is described in more detail in two blog posts on solving multiphysics models and improving convergence from a previous blog series on solvers. If we again examine the comparison between simulation and theory, we see excellent agreement over the entire simulation range. This shows that the PMC and Electric Field boundary conditions have enforced continuity across the interface between the two formulations and fully reproduced the analytical solution. You can download the model file in the Application Gallery. A comparison of the electric field of the Full-Wave and Beam-Envelopes simulations vs. the full theory. Concluding Thoughts on Simulating a Radiating Source in COMSOL Multiphysics® In today's blog post, we examined two ways of computing the electric field at points far away from the source antenna and verified the results using the analytical solution for an electric point dipole. These two techniques are using the Far-Field Domain node from a Full-Wave simulation and linking a Full-Wave simulation to a Beam-Envelopes simulation. In both cases, the fields near the source and in the far field are correctly computed. The coupled approach using Beam-Envelopes has the additional advantage that it also computes the fields in the intermediate region. In the next post in the series, we will combine the calculated far-field radiation with a simulation of a receiving antenna and determine the received power. Stay tuned! 
Your 1st equation can be interpreted as saying that the flux through the section is equal to $\frac{1}{\epsilon_0}$ times the charge $Q$ contained within a cone whose base is the section and whose vertex is the centre of the sphere. This result can be obtained directly using Gauss' Law. The electric field inside a uniformly charged sphere is radial. So the flux across the slanting curved face of the cone is zero, because this face is radial. The only flux out of the cone is across its base. The electric field across this base varies in magnitude and direction. Nevertheless, the total flux across it equals the charge $Q$ enclosed by the cone divided by $\epsilon_0$. Your 2nd equation is derived using Gauss' Law and gives the total flux through the surface of the cylinder. The similarity between the formulas for $\phi_1$ and $\phi_2$ is entirely due to geometry. Gauss' Law says that the total flux through a surface of any shape containing uniform charge density $\rho$ is $$\phi=\frac{\rho V}{\epsilon_0}$$ The only difference for each shape is the volume $V$. The volumes of cone and cylinder depend in the same way on base area $\pi a^2=\pi (R^2-r_0^2)$ and height $r_0$. They are both of the form $k r_0 \pi a^2$. For a cone $k=\frac13$ while for a cylinder $k=1$. Your conjecture (that the flux through the flat ends of the cylinder is $\frac13$ of the total flux) is in fact correct, for a simple reason: inside the sphere $\vec{E} = \frac{\rho \vec{r}}{3\epsilon_0}$ is linear in position, so the axial component $E_z = \frac{\rho z}{3\epsilon_0}$ is constant over each flat end and depends only on its height. The net flux through the two ends is therefore $\frac{\rho \pi a^2 r_0}{3\epsilon_0}$, exactly $\frac13$ of the total $\frac{\rho \pi a^2 r_0}{\epsilon_0}$, for any such cylinder lying inside the sphere; the curved wall always carries the remaining $\frac23$.
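The flux bookkeeping for the cylinder can be cross-checked numerically. The sketch below uses the hypothetical choices $\rho/\epsilon_0 = 1$ and a cylinder of radius $a = 0.8$, height $h = 0.5$, standing on the centre plane of a unit sphere (so it lies entirely inside):

```python
import numpy as np

# Field inside a uniformly charged sphere with rho/eps0 = 1: E = r_vec / 3.
a, h = 0.8, 0.5   # cylinder radius and height; fits inside the unit sphere
n = 2000

# Top cap at z = h, outward normal +z: E_z = h/3, constant over the cap.
s = (np.arange(n) + 0.5) * (a / n)          # midpoint rule in radius
flux_top = np.sum((h / 3) * 2 * np.pi * s) * (a / n)

# Bottom cap at z = 0: E_z = 0, so it contributes nothing.
# Side wall at s = a, outward normal radial: E_s = a/3, constant.
flux_side = (a / 3) * 2 * np.pi * a * h

total = flux_top + flux_side
print(total, np.pi * a**2 * h)   # Gauss: total flux = rho*V/eps0 = V here
print(flux_top / total)          # fraction carried by the flat ends
```

The printed total matches $\rho V/\epsilon_0$, and the end-cap fraction comes out as $1/3$.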
Let's reformulate this problem in terms of commutative algebra (its second tag): for an arbitrary field $K$ the ring $K[z_{11},\dots,z_{mn}]/\ker\phi$ is isomorphic to $\text{Im}\ \phi$ which is obviously the subring of $K[x_1,\dots,x_m,y_1,\dots,y_n]$ generated by all the monomials $x_iy_j$. Now think in terms of affine semigroup rings: $$K[x_1,\dots,x_m,y_1,\dots,y_n]=K[\mathbb{N}^{m+n}]$$ and $$K[x_iy_j: 1\le i\le m, 1\le j\le n]=K[S],$$ where $S\subset\mathbb{N}^{m+n}$ is the subsemigroup generated by the elements $(e_i,f_j)$. (Here we consider $e_i=(0,\dots,1,\dots,0)\in\mathbb{N}^m$ with $1$ on the place $i$, and $f_j=(0,\dots,1,\dots,0)\in\mathbb{N}^n$ with $1$ on the place $j$.) At this moment I leave you the pleasure to prove that $$S=\{(a_1,\dots,a_m,b_1,\dots,b_n)\in\mathbb{N}^{m+n}:a_1+\cdots+a_m=b_1+\cdots+b_n\}.$$ Theorem 6.1.4 from Bruns and Herzog, Cohen-Macaulay Rings, provides a criterion for the normality of affine semigroup rings. It says that $K[S]$ is normal if and only if $S$ is a normal semigroup, that is, if $n\in\mathbb{N}$, $n>0$, and $x\in\mathbb{Z}S$ (the subgroup of $\mathbb{Z}^{m+n}$ generated by $S$), then $nx\in S$ implies $x\in S$. In our case it is pretty clear that $S$ is normal. Remark. Even simpler, we can think of $K[x_iy_j: 1\le i\le m, 1\le j\le n]$ as being the Segre product of two polynomial rings and use the fact that the Segre product of two normal rings is normal (why?).
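The claimed description of $S$ can be probed by brute force for small cases: a vector with $\sum a_i = \sum b_j = d$ decomposes into $d$ generators by filling an $m\times n$ nonnegative integer matrix with row sums $a$ and column sums $b$. A quick sketch (the bounds $m = n = 2$ and degree $\le 4$ are arbitrary choices):

```python
from itertools import product

m, n, D = 2, 2, 4   # small case; check total degrees up to D

# generators (e_i, f_j) of the subsemigroup S of N^{m+n}
gens = [tuple(1 if t == i else 0 for t in range(m)) +
        tuple(1 if t == j else 0 for t in range(n))
        for i in range(m) for j in range(n)]

# all sums of at most D generators
S = {tuple([0] * (m + n))}
for _ in range(D):
    S |= {tuple(x + g for x, g in zip(s, gen)) for s in S for gen in gens}

# candidate description: sum of the a-part equals sum of the b-part
cand = {v for v in product(range(D + 1), repeat=m + n)
        if sum(v[:m]) == sum(v[m:]) and sum(v[:m]) <= D}

assert {v for v in S if sum(v[:m]) <= D} == cand
print("description of S confirmed up to degree", D)
```

Of course this only verifies the claim in a truncated range; the general proof is the "pleasure" left to the reader above.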
This is a special case of the following theorem (Lemma 2.1 in this paper on disjunctive sequences), whose proof is along the same lines as that posted in the answer by @EricWofsey: If $a_1, a_2, a_3, \dots$ is a strictly increasing infinite sequence of positive integers such that $$\lim_{n\to \infty} \frac{a_{n+1}}{a_n} = 1$$ then for any positive integer $m$ and any integer base $b \ge 2$, there is an $a_n$ whose expression in base $b$ starts with the expression of $m$ in base $b$. Your result is then the very special case of taking $a_n = n^k$ and $k=m$. For this and some other special cases, see these examples of disjunctive sequences. (E.g., for any desired positive integer, there are infinitely many prime numbers whose representation begins with the digits of that number.) NB: For any positive integer exponent $k$ and any desired positive integer $m$, there are infinitely many positive integers $n$ such that the representation of $n^k$ starts with the representation of $m$; furthermore, this holds for digital representations in any integer base $b \ge 2$.
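For the special case $a_n = n^k$ in base 10, the theorem is easy to probe by brute force. A small sketch (the function name is mine):

```python
def least_n(m, k):
    """Least n >= 1 such that the decimal expansion of n**k starts
    with the decimal expansion of m (guaranteed to exist by the theorem)."""
    s = str(m)
    n = 1
    while not str(n**k).startswith(s):
        n += 1
    return n

print(least_n(12, 2))  # 11, since 11^2 = 121 starts with "12"
print(least_n(7, 2))   # 27, since 27^2 = 729 starts with "7"
```

The theorem guarantees the loop terminates for every $m$ and $k$, since $(n+1)^k/n^k \to 1$.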
I am working on an index and I am trying to price call options on it. I work with the 3-month LIBOR as cash. I use the following Black-Scholes formula $$C_{t} = S_{t}e^{-q_{t}(T-t)}\mbox{N}[d_{1}(t)] - K e^{-r_{t}(T-t)}\mbox{N}[d_{0}(t)] $$ with the usual notations. $r_{t}$ is the LIBOR rate and $q_{t}$ is the dividend yield of my index. I cannot use the classical formula $R(t,T) = \frac{1}{T-t} \int_{t}^{T} r_{s}\, ds$ since the rates are not deterministic. I implemented a classical delta-hedge. My delta-hedging works well, except for calls too deep in the money, where the P&L behaves exactly like the interest rate. This happens only for calls with maturities of several years and strikes far in the money. The conclusion of my manager is that I have to hedge against the stochasticity of the interest rate, using a zero-coupon bond. I looked at some documentation on zero-coupon bonds, and saw that we can have a closed formula under the Hull-White model. However, I am not so sure how to calibrate it, as my only inputs are the LIBOR rates. Besides, I do not know what amount I should invest in my zero-coupon bond: should I invest the $\rho$ ($\frac{\partial C_{t}}{\partial r_{t}}$)? I do not really know where to start, so any help would be appreciated :) Thank you!
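For reference, the pricing formula quoted above, with a flat rate $r$ and continuous dividend yield $q$ standing in for $r_t$ and $q_t$, can be sketched in plain Python (this only restates the deterministic-rate formula, not the stochastic-rate hedging being asked about):

```python
from math import log, sqrt, exp, erf

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_call(S, K, T, r, q, sigma):
    """Black-Scholes call on an index paying continuous dividend yield q,
    discounted at a flat rate r (a stand-in for the 3M LIBOR input)."""
    d1 = (log(S / K) + (r - q + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d0 = d1 - sigma * sqrt(T)
    return S * exp(-q * T) * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d0)

# example: at-the-money 1y call, r = 5%, q = 0, sigma = 20%
print(bs_call(100.0, 100.0, 1.0, 0.05, 0.0, 0.20))  # ≈ 10.45
```

Note that for a deep in-the-money call $N[d_0(t)] \approx 1$, so the price behaves like $S_t e^{-qT} - K e^{-rT}$ and the rate exposure $K T e^{-rT}$ is maximal, which is consistent with the P&L pattern described.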
I came across John Duffield on Quantum Computing SE via this hot question. I was curious to see an account with 1 reputation and a question with hundreds of upvotes. It turned out that the reason why he has so little reputation despite a massively popular question is that he was suspended. May I ... @Nelimee Do we need to merge? Currently, there's just one question with "phase-estimation" and another question with "quantum-phase-estimation". Might we as well use just one tag? (say just "phase-estimation") @Blue 'merging', if I'm getting the terms right, is a specific single action that does exactly that and is generally preferable to editing tags on questions. Having said that, if it's just one question, it doesn't really matter, although performing a proper merge is still probably preferable Merging is taking all the questions with a specific tag and replacing that tag with a different one, on all those questions, on a tag level, without permanently changing anything about the underlying tags @Blue yeah, you could do that. It generally requires votes, so it's probably not worth bothering when only one question has that tag @glS "Every hermitian matrix satisfy this property: more specifically, all and only Hermitian matrices have this property" ha? I thought it was only a subset of the set of valid matrices ^^ Thanks for the precision :) @Nelimee if you think about it, it's quite easy to see. Unitary matrices are the ones with phases as eigenvalues, while Hermitians have real eigenvalues. Therefore, if a matrix is not Hermitian (does not have real eigenvalues), then its exponential will not have eigenvalues of the form $e^{i\phi}$ with $\phi\in\mathbb R$. 
Although I'm not sure whether there could be exceptions for non-diagonalizable matrices (if $A$ is not diagonalizable, then the above argument doesn't work) This is an elementary question, but a little subtle so I hope it is suitable for MO. Let $T$ be an $n \times n$ square matrix over $\mathbb{C}$. The characteristic polynomial $T - \lambda I$ splits into linear factors like $T - \lambda_iI$, and we have the Jordan canonical form: $$ J = \begin... @Nelimee no! unitarily diagonalizable matrices are all and only the normal ones (satisfying $AA^\dagger =A^\dagger A$). For general diagonalizability, if I'm not mistaken, one characterization is that the sum of the dimensions of the eigenspaces has to match the total dimension @Blue I actually agree with Nelimee here that it's not that easy. You get $UU^\dagger = e^{iA} e^{-iA^\dagger}$, but if $A$ and $A^\dagger$ do not commute it's not straightforward that this doesn't give you an identity I'm getting confused. I remember there being some theorem about one-to-one mappings between unitaries and Hermitians provided by the exponential, but it was some time ago and I may be confusing things in my head @Nelimee if there is a $0$ there then it becomes the normality condition. Otherwise it means that the matrix is not normal, therefore not unitarily diagonalizable, but still the product of exponentials is relatively easy to write @Blue you are right indeed. If $U$ is unitary then for sure you can write it as the exponential of a Hermitian (times $i$). This is easily proven because $U$ is ensured to be unitarily diagonalizable, so you can simply compute its logarithm through the eigenvalues. However, logarithms are tricky and multivalued, and there may be logarithms which are not diagonalizable at all. I've actually recently asked some questions on math.SE on related topics @Mithrandir24601 indeed, that was also what @Nelimee showed with an example above. I believe my argument holds for unitarily diagonalizable matrices. 
If a matrix is only generally diagonalizable (so it's not normal) then it's not true also probably even more generally without $i$ factors. So, in conclusion, it does indeed seem that $e^{iA}$ unitary implies $A$ Hermitian. It therefore also seems that $e^{iA}$ unitary implies $A$ normal, so that also my argument passing through the spectra works (though one has to show that $A$ is ensured to be normal) Now what we need to look for is 1) The exact set of conditions for which the matrix exponential $e^A$ of a complex matrix $A$ is unitary 2) The exact set of conditions for which the matrix exponential $e^{iA}$ of a real matrix $A$ is unitary @Blue fair enough - as with @Semiclassical I was thinking about it with the t parameter, as that's what we care about in physics :P I can possibly come up with a number of non-Hermitian matrices that give unitary evolution for a specific t Or rather, the exponential of which is unitary for $t+n\tau$, although I'd need to check If you're afraid of the density of diagonalizable matrices, simply triangularize $A$. You get $$A=P^{-1}UP,$$ with $U$ upper triangular and the eigenvalues $\{\lambda_j\}$ of $A$ on the diagonal. Then $$\mbox{det}\;e^A=\mbox{det}(P^{-1}e^UP)=\mbox{det}\;e^U.$$ Now observe that $e^U$ is upper ...
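The uncontroversial direction of the discussion above ($A$ Hermitian $\Rightarrow e^{iA}$ unitary), together with a non-normal matrix whose exponential fails to be unitary, is easy to check numerically. A sketch using SciPy's `expm`; the two matrices are arbitrary illustrations:

```python
import numpy as np
from scipy.linalg import expm

def is_unitary(U, tol=1e-10):
    """Check U U^dagger = I to numerical tolerance."""
    return np.allclose(U @ U.conj().T, np.eye(U.shape[0]), atol=tol)

# Hermitian A  =>  e^{iA} is unitary.
A = np.array([[1.0, 2 - 1j], [2 + 1j, -0.5]])
assert np.allclose(A, A.conj().T)
assert is_unitary(expm(1j * A))

# A non-normal (hence non-Hermitian) example: here e^{iB} = I + iB,
# and (I + iB)(I + iB)^dagger != I, so the exponential is not unitary.
B = np.array([[0.0, 1.0], [0.0, 0.0]])   # nilpotent, B B^dagger != B^dagger B
assert not is_unitary(expm(1j * B))
print("checks passed")
```

This does not settle the converse question debated in the chat (whether $e^{iA}$ unitary forces $A$ Hermitian), only the forward direction and one non-example.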
The answer is D, the melting points increase. This is absolutely true (source for values):\begin{array}{lrr}\text{Halogen} & \text{Melting point}/^\circ\mathrm{C}& \text{Boiling point}/^\circ\mathrm{C}\\\hline\text{fluorine} & -220 & -188 \\\text{chlorine} & -101 & -35 \\\text{bromine} & -7.2 & 58.8 \\\text{iodine} & 114 & 184 \\\text{astatine} & 302 & 337 \\\hline\end{array} And the boiling point increases, too, so answer C is definitely wrong. Another obviously wrong answer is B, the atoms get smaller. With increasing principal quantum number the atoms obviously have to become bigger, since the main electron density moves further away from the nucleus. In numbers:\begin{array}{lrr}\text{Halogen} & \text{Covalent radius}/\mathrm{pm}& \text{Ionic radius }\ce{(X^{-})}/\mathrm{pm}\\\hline\text{fluorine} & 71 & 133 \\\text{chlorine} & 99 & 181 \\\text{bromine} & 114 & 196 \\\text{iodine} & 133 & 220 \\\text{astatine} & 150 & \\\hline\end{array} For answers A and E it cannot be unambiguously answered, because reactivity is not a well-defined concept; see "What is reactivity really, and can it be quantified?" for example. The IUPAC Gold Book states that reactivity is a kinetic property. It goes on to say that it can only be used absolutely in a given context, or in reference to another system. However, it is also often used in expressing general trends, which is the case when looking at this question. Given this context, you can find the following portion on UC Davis ChemWiki: Reactivity of Elements: decreases down the group The reactivities of the halogens decrease down the group ( At < I < Br < Cl < F). This is due to the fact that atomic radius increases in size with an increase of electronic energy levels. This lessens the attraction for valence electrons of other atoms, decreasing reactivity. This decrease also occurs because electronegativity decreases down a group; therefore, there is less electron "pulling." 
In addition, there is a decrease in oxidizing ability down the group. In this context, answer A is wrong. However, I would not consider E to be strictly true either. Have a look at the electron affinities, and you will find that fluorine behaves anomalously. This was also discussed in "Why does chlorine have a higher electron affinity than fluorine?" However, the general statement will usually be given as: "Electron affinity decreases down the group."\begin{array}{lc}\text{Halogen} & \text{Electron Affinity}/\mathrm{kJ\cdot mol^{-1}}\\\hline\text{fluorine} & -328.0 \\\text{chlorine} & -349.0 \\\text{bromine} & -324.6 \\\text{iodine} & -295.2 \\\text{astatine} & -270.1 \\\hline\end{array}The same trend can be observed with bond enthalpies of the $\ce{X2}$ series (source): This trend is broken for the hydrogen halides $\ce{HX(g)}$. Your assumption is therefore correct; maybe you have already done the same analysis. However, we can only definitely state and prove that answer D must be correct, while answer E might be correct depending on the reference system.
Let's assume an $xy$ plane and let there be a force field defined by the potential $$V=F_0|x|$$ Though the potential is not differentiable, it's still a perfectly realisable system. If we solve the force equation with the initial conditions $x = \delta$ and $\dot{x}=0$, we will have to solve it for $x\geq0$ and $x\leq 0$ separately, and whenever the particle crosses $x=0$ we will have to switch solutions. Rather than going through that pain, is there any approximation that can reduce it to simple harmonic motion? (since we already know a small perturbation would lead to oscillatory behaviour) The fact that we already know a small perturbation would lead to oscillatory behaviour is not enough. Harmonic oscillations aren't just oscillatory; they also have a period which is independent of the initial amplitude. Your system doesn't satisfy this, so it can't be understood as a harmonic oscillation. For the system you propose, if you release the particle at rest from a separation $x_0$, it will 'fall' parabolically to the origin in time $t=\sqrt{2x_0/a}$ (where $a=F_0/m$ for a potential of the form $V(x)=F_0|x|$), and the motion will just be time-translated and reflected copies of this parabolic motion, which will therefore have a period $$ T=4\sqrt{\frac{2x_0}{F_0/m}} $$ that depends critically on the oscillation amplitude $x_0$. This is inconsistent with harmonic motion. Now, is there some change to the motion that you could make so that it will approximate as harmonic? Sure, there are plenty of similar potentials, like, say, $$ V(x) = F_0d \sqrt{1 + (x/d)^2}, $$ which looks quite similar to your potential as long as $d\ll x_0$, but then the harmonic approximation is only valid in the limit $x_0 \ll d$. Or, to put it another way, anything that restores harmonic motion would destroy the key aspects of your potential's behaviour.
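The amplitude dependence is easy to confirm numerically. The following sketch (toy units $F_0 = m = 1$) integrates $m\ddot{x} = -F_0\,\mathrm{sign}(x)$ and compares the measured period with the constant-acceleration result $T = 4\sqrt{2mx_0/F_0}$ (the $\sqrt{2}$ comes from $x_0 = \frac{1}{2}at^2$ for the quarter-period fall).

```python
# Measure the oscillation period of V = F0*|x| by timing the quarter-period
# fall from rest at x0 to the origin; the full period is four such falls.
import math

def quarter_period(x0, F0=1.0, m=1.0, dt=1e-5):
    """Time to 'fall' from rest at x0 > 0 to x = 0 under constant force -F0."""
    x, v, t = x0, 0.0, 0.0
    a = -F0 / m                  # acceleration is constant while x > 0
    while x > 0:
        v += a * dt              # semi-implicit Euler step
        x += v * dt
        t += dt
    return t

for x0 in (0.5, 1.0, 2.0):
    T_num = 4 * quarter_period(x0)
    T_exact = 4 * math.sqrt(2 * x0)   # 4*sqrt(2 m x0 / F0) with m = F0 = 1
    print(x0, T_num, T_exact)
```

Quadrupling the amplitude doubles the period ($T \propto \sqrt{x_0}$), which is exactly the amplitude dependence a harmonic oscillator cannot have.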
Disclaimer: I come from an academic finance perspective and hence I will definitely have my inherent biases in this question. How does one think about "alpha" in portfolio management? In particular, in some practitioners' literature, there's this discussion of an "alpha factor" in the linear factor models. Taking an instance out of numerous examples out there, see http://www.iijournals.com/doi/abs/10.3905/jpm.2008.709976 https://www.msci.com/documents/10199/c6e5e3f7-cd44-4322-aeb5-331e20e2afb7 As I understand it, the practitioner has in mind a linear factor model of the form $$R_k = \alpha_k + \beta_{k1} R_{k1} + \ldots + \beta_{kN} R_{kN} + \epsilon_k,$$ where $R_{kn}$ are the $n = 1, \ldots, N$ factor excess returns, $R_k$ is the excess return of asset $k$, and $\epsilon_k$ is the usual idiosyncratic risk. But what is this discussion of $\alpha_k$ being "spanned" by risk factors (i.e., that it can be rewritten as a linear form of the $R_{kn}$), or of it representing "return forecasts"? To my (perhaps limited) understanding, no top academic finance journal would ever interpret $\alpha_k$ as an "alpha factor" that is spanned by risk factors (this would imply that such an "alpha" can be collapsed into the linear span of $\{R_{k1}, \ldots, R_{kN}\}$ and hence is risky), or that can be viewed as a forecast of future returns (the expectation would simply be $E R_k$, which, when the $R_{kn}$ have nonzero expectations, is not equal to $\alpha_k$). In our usual terminology, the presence of a nonzero $\alpha_k$ represents mispricings in the market, skills of a portfolio manager, or an error in the model that drives returns. Indeed, going way back to tests of flaws in the CAPM, this was exactly what was done. In all: is "alpha" as understood by academics and practitioners equivalent?
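To make the academic reading concrete, here is a minimal sketch (entirely synthetic data; the factor loadings and noise levels are made up) of how $\alpha_k$ is usually estimated: as the intercept of a time-series regression of asset excess returns on factor excess returns.

```python
# Estimate alpha as the intercept of an OLS regression of asset excess
# returns on factor excess returns, using synthetic data with known truth.
import numpy as np

rng = np.random.default_rng(42)
T, N = 1000, 3
F = rng.standard_normal((T, N)) * 0.02      # factor excess returns (toy scale)
beta_true = np.array([0.8, -0.3, 0.5])      # assumed loadings
alpha_true = 0.001                          # assumed mispricing/skill term
R = alpha_true + F @ beta_true + rng.standard_normal(T) * 0.01

X = np.column_stack([np.ones(T), F])        # prepend an intercept column
coef, *_ = np.linalg.lstsq(X, R, rcond=None)
alpha_hat, beta_hat = coef[0], coef[1:]
print(alpha_hat, beta_hat)
```

Note that the intercept is, by construction, orthogonal to the factor returns in sample, which is the precise sense in which academic alpha is *not* spanned by the factors.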
Dipak Ghosh Articles written in Pramana – Journal of Physics Volume 63 Issue 5 November 2004 pp 963-968 In this paper intermittent behaviour of the pions from 'cold' and 'hot' classes of events from 12C–AgBr interactions at 4.5 A GeV has been studied separately. The results reveal a strong intermittent pattern in the case of the 'cold' class of events. Volume 68 Issue 5 May 2007 pp 789-801 Research Articles A self-affine analysis of the charged-particle multiplicity distribution (protons + pions) in $\pi^{-}$–AgBr interactions at 350 GeV/c is performed according to the two-dimensional factorial moment methodology, using the concept of the Hurst exponent in $X_{\cos\theta}$–$X_{\phi}$ phase space. Compared with the results obtained from a self-similar analysis, the self-affine analysis shows a better power-law behaviour. Corresponding results are compared with the shower multiplicity distribution (pions). Multifractal behaviour is observed for both types of distributions. Volume 73 Issue 4 October 2009 pp 685-697 We compute the factorial correlators to study the dynamical fluctuations of pions and a combination of pions and protons (compound multiplicity) in 32S–AgBr interactions at 200 A GeV. The study reveals that for both pion and compound multiplicity the correlated moments increase with the decrease in bin–bin separation $D$, following a power law, which suggests the self-similarity of the multiplicity fluctuation in each case. The results of the analysis also show consistency with the prediction of the $\alpha$-model for the existence of intermittency in both cases. Volume 77 Issue 2 August 2011 pp 297-313 The multiplicity fluctuations of the produced pions were studied using the scaled variance method in 16O–AgBr interactions at 2.1 AGeV, 24Mg–AgBr interactions at 4.5 AGeV, 12C–AgBr interactions at 4.5 AGeV, 16O–AgBr interactions at 60 AGeV and 32S–AgBr interactions at 200 AGeV at two different binning conditions. 
In the first binning condition, the rapidity interval was varied in steps of one, centred about the central rapidity, until it reached 14. In the second case, the rapidity interval was increased in steps of 1.6 up to 14.4. Multiplicity distributions and their scaled variances were presented as a function of the rapidity width for both binning conditions. Multiplicity fluctuations were found to increase with the increase of the rapidity interval and later found to saturate at larger rapidity windows for all the interactions and in both binning conditions. Multiplicity fluctuations were found to increase with the energy of the projectile beam. The values of the scaled variances were found to be greater than one in all the cases in both binning conditions, indicating the presence of correlation during the multiparticle production process in high-energy nucleus–nucleus interactions. Experimental results were compared with the results obtained from Monte Carlo simulated events for all the interactions. The Monte Carlo simulated data showed very small values of scaled variance, suggesting very small fluctuations for the simulated events. Experimental results obtained from 16O–AgBr interactions at 60 AGeV and 32S–AgBr interactions at 200 AGeV were compared with the events generated by the Lund Monte Carlo code (FRITIOF model). The FRITIOF model failed to explain the multiplicity fluctuations of pions emitted from 16O–AgBr interactions at 60 AGeV for both binning conditions. However, the experimental data agreed well with the FRITIOF model for 32S–AgBr interactions at 200 AGeV. Volume 79 Issue 6 December 2012 pp 1395-1405 The event-to-event fluctuation pattern of pions produced by proton and pion beams is studied in terms of the newly defined erraticity measures $\chi (p, q)$, $\chi_{q}^{'}$ and $\mu_{q}^{'}$ proposed by Cao and Hwa. 
The analysis reveals the erratic behaviour of the produced pions, signifying chaotic multiparticle production in high-energy hadron–nucleus interactions ($\pi^{-}$–AgBr interactions at 350 GeV/c and $p$–AgBr interactions at 400 GeV/c). However, the chaoticity does not depend on whether the projectile is a proton or a pion. The results are compared with the results of the Volume 80 Issue 4 April 2013 pp 631-642 We have presented an investigation on the ring- and jet-like azimuthal angle substructures in the emission of secondary charged hadrons coming from 32S–Ag/Br interactions at 200 A GeV/c. The nuclear photographic emulsion technique has been employed to collect the experimental data. The presence of such substructures, their average behaviour, their size, and their position of occurrence have been examined. The experimental results have also been compared with results simulated by the Monte Carlo method. The analysis strongly indicates the presence of ring- and jet-like structures in the experimental distributions of particles beyond statistical noise. The experimental results are in good agreement with I M Dremin's idea that the phenomenon is similar to the emission of Cherenkov electromagnetic radiation. Volume 92 Issue 1 January 2019 Article ID 0004 Research Article A complex network and chaos-based method, based on the visibility graph algorithm, is applied to study particle fluctuations in $\pi^{-}$–AgBr interactions at 350 GeV with respect to the shower multiplicity dependence. The fractal structure of the fluctuations is studied by using the power of scale freeness of visibility graph (PSVG). The selection of the visibility graph as the type of complex network for our analysis is justified, as this algorithm gives the most precise result with a finite number of data points and this experiment has a finite number of events. The topological parameters along with PSVG values are extracted and analysed. 
The analysis shows that the fractal character is weaker for the lowest multiplicity bin and stronger for the highest multiplicity bin.
I'm reading about the mean-variance optimization of active portfolios. A bit of prior background from the book I'm reading: the author discusses the mean-variance optimal portfolios without cash, which amounts to solving the following optimization problem: Maximize: $w^Tf - \frac{1}{2}\lambda (w^T\Sigma w)$, subject to $w^Ti = 1$, where $w$ and $f$ are column vectors of the weights and returns (respectively) of all securities, $\Sigma$ is the covariance matrix, $\lambda$ the risk tolerance and $i = (1, ..., 1)^T$ is just a column vector of all $1$'s. The optimal weight vector turns out to be: $$w^* = \frac{\Sigma^{-1}i}{i^T \Sigma^{-1}i} + \frac{1}{\lambda}\frac{(i^T \Sigma^{-1}i)\Sigma^{-1}f\ -\ (i^T \Sigma^{-1}f)\Sigma^{-1}i}{i^T \Sigma^{-1}i}$$ Next, while considering an active portfolio, we can decompose the portfolio into benchmark and active weights - $w = b+a$. Since $w^T i = 1$ and $b^T i = 1$, $a^Ti = 0$. So the optimization problem in this case is: Maximize: $a^Tf - \frac{1}{2}\lambda (a^T\Sigma a)$, subject to $a^Ti = 0$. The optimal active weight vector is: $$a^* = \frac{1}{\lambda}\frac{(i^T \Sigma^{-1}i)\Sigma^{-1}f\ -\ (i^T \Sigma^{-1}f)\Sigma^{-1}i}{i^T \Sigma^{-1}i}$$. So far so good. Now the interpretation given in the book is as follows: ...it (the active weights) is independent of the benchmark. Consequently, the expected active return or alpha and the active risk are also independent of the benchmark. I can't understand how this claim follows from the equations above. Secondly, It is therefore theoretically feasible to utilize or port it on any benchmark. In other words, two active equity portfolios managed against two different equity benchmarks could have the same active weights. For instance, the active weights of an equity portfolio managed against S&P 500 index could be the same as the weights of a long-short market-neutral hedge fund. 
This is the idea behind the so-called portable alpha strategies, i.e., the alpha or excess return generated from a strategy can be ported onto another different benchmark. Could someone please explain what is meant by "porting" on to a benchmark? How can you port an $n$-dimensional active weight vector meant for an index with $n$ stocks on to another benchmark with $m \neq n$ stocks?
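The first claim can be checked directly from the formula. With toy numbers for $\Sigma$, $f$ and $\lambda$ (all illustrative), $a^*$ contains no reference to the benchmark $b$ at all and satisfies $a^{*T}i = 0$, which is why the same active overlay can in principle be ported onto any benchmark over the same universe of securities.

```python
# Evaluate the book's closed-form w* and a* on a toy 3-asset universe and
# verify that w* sums to 1 while a* sums to 0 (and never mentions b).
import numpy as np

Sigma = np.array([[0.04, 0.01, 0.00],
                  [0.01, 0.09, 0.02],
                  [0.00, 0.02, 0.16]])   # toy covariance matrix
f = np.array([0.05, 0.07, 0.10])         # toy expected excess returns
lam = 2.0                                # toy risk tolerance
i = np.ones(3)

Si = np.linalg.solve(Sigma, i)           # Sigma^{-1} i
Sf = np.linalg.solve(Sigma, f)           # Sigma^{-1} f
denom = i @ Si                           # i^T Sigma^{-1} i

a_star = (1 / lam) * ((i @ Si) * Sf - (i @ Sf) * Si) / denom
w_star = Si / denom + a_star             # minimum-variance part + active part

print(w_star, a_star, a_star.sum())      # a* sums to ~0
```

The identity $a^{*T}i = 0$ holds exactly: $i^T[(i^T\Sigma^{-1}i)\Sigma^{-1}f - (i^T\Sigma^{-1}f)\Sigma^{-1}i] = 0$ term by term, independent of $b$.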
Last edited by Andrew Munsey, updated on June 15, 2016 at 1:21 am. Mechanical work is a force applied through a distance, defined mathematically as the scalar product of the force and displacement vectors. Work is a scalar quantity which can be positive or negative. More simply, it is the energy related to the applied force over a distance. The force can do positive, negative, or zero work. For instance, a centripetal force in uniform circular motion does zero work (because the scalar product of force and displacement vector is zero, as they are orthogonal to each other). Another example is the Lorentz magnetic force on a moving electric charge, which always does zero work because it is always orthogonal to the direction of motion of the charge. Note: readers not familiar with vector calculus, please see "Simpler formulae" below. Definition 1: Work is defined as the following line integral: $$W = \int_{C} \vec F \cdot d\vec{s}$$ where $C$ is the path or curve traversed by the object, $\vec F$ is the force vector, and $\vec s$ is the displacement vector. This formula readily explains how a nonzero force can do zero work. The simplest case is where the force is always perpendicular to the direction of motion, making the integrand always zero (viz. circular motion). However, even if the integrand sometimes takes nonzero values, it can still integrate to zero if it is sometimes negative and sometimes positive. The possibility of a nonzero force doing zero work exemplifies the difference between work and a related quantity: impulse (the integral of force over time). 
Impulse measures change in a body's momentum, a vector quantity sensitive to direction, whereas work considers only the magnitude of the velocity. For instance, as an object in uniform circular motion traverses half of a revolution, its centripetal force does no work, but it transfers a nonzero impulse. In thermodynamics, thermodynamic work is the quantity of energy transferred from one system to another. It is a generalization of the concept of mechanical work in mechanics. Sadi Carnot's 1824 definition of work as "weight lifted through a height" is based on the fact that early steam engines were principally used to lift buckets of water, through a gravitational height, out of flooded ore mines. The dimensionally equivalent newton-metre (N·m) is sometimes used instead; however, it is also sometimes reserved for torque, to distinguish its units from those of work or energy. Non-SI units of work include the erg, the foot-pound, the litre-atmosphere, and the horsepower-hour. In the simplest case, that of a body moving in a steady direction and acted on by a constant force parallel to that direction, the work is given by the formula $$W = F s$$ where $F$ is the force and $s$ is the distance travelled by the object. The work is taken to be negative when the force opposes the motion. More generally, the force and distance are taken to be vector quantities, and combined using the scalar product: $$W = \vec F \cdot \vec{s} = |F| |s| \cos\phi$$ where $\phi$ is the angle between the force and the displacement vector. This formula holds true even when the object changes its direction of travel throughout the motion. 
To further generalize the formula to situations in which the force changes over time, it is necessary to use differential calculus to express the infinitesimal work done by the force over an infinitesimal displacement, thus: $$dW = \vec F \cdot d\vec{s}$$ The integral of both sides of this equation yields the most general formula, as given above. Forms of work that are not evidently mechanical in fact represent special cases of this principle. For instance, in the case of "electrical work", an electric field does work on charged particles as they move through a medium. One mechanism of heat conduction is collisions between fast-moving atoms in a warm body and slow-moving atoms in a cold body. Although colliding atoms do work on each other, it averages to nearly zero in bulk, so conduction is not considered to be mechanical work. Thermodynamics studies PV work, which occurs when the volume of a fluid changes. PV work is represented by the following differential equation: $$dW = -P\,dV$$ where $W$ is the work done on the system, $P$ the external pressure, and $V$ the volume. Therefore, we have: $$W=-\int_{V_i}^{V_f} P\,dV$$ Like all work functions, PV work is path-dependent. (The path in question is a curve in the state space specified by the fluid's pressure and volume, and infinitely many such curves are possible.) From a thermodynamic perspective, this fact implies that PV work is not a state function. This means that the differential $dW$ is an inexact differential; to be more rigorous, it should be written đW (with a line through the d). 
From a mathematical point of view, that is to say, $dW$ is not an exact one-form. The line through the d is merely a flag to warn us that there is actually no function (state function) $W$ which is the potential of $dW$. If there were indeed such a function $W$, we should be able to just use Stokes' theorem, and evaluate this putative function, the potential of $dW$, at the boundary of the path, that is, the initial and final points, and therefore the work would be a state function. This impossibility is consistent with the fact that it does not make sense to refer to the work on a point; work presupposes a path. PV work is often measured in the (non-SI) units of litre-atmospheres, where 1 L·atm = 101.3 J. In physics, mechanical energy describes the potential energy and kinetic energy present in the components of a mechanical system. The mechanical energy of a body is that part of its total energy which is subject to change by mechanical work. It includes kinetic energy and potential energy. Some notable forms of energy that it does not include are thermal energy (which can be increased by frictional work, but not easily decreased) and rest energy (which is constant so long as the rest mass remains the same). Mechanics is the study of the motion of bodies and the forces that act upon them. Most people are familiar with systems described by classical mechanics - objects that sit around, move, collide, and are influenced by gravity. 
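The path dependence of $W = -\int P\,dV$ can be seen with a minimal two-step example (illustrative numbers): connect the same pair of end states $(P_1,V_1) \to (P_2,V_2)$ once by "expand first, then drop the pressure" and once by "drop the pressure first, then expand".

```python
# Two paths between the same end states give different PV work, showing that
# W = -integral(P dV) is path-dependent. Values are purely illustrative.
P1, V1 = 2.0e5, 1.0e-3   # Pa, m^3
P2, V2 = 1.0e5, 2.0e-3

# Path a: isobaric expansion at P1, then isochoric pressure drop (dV = 0).
W_path_a = -P1 * (V2 - V1)
# Path b: isochoric pressure drop first, then isobaric expansion at P2.
W_path_b = -P2 * (V2 - V1)

print(W_path_a, W_path_b)   # -200 J vs -100 J: same endpoints, different work
```

Since the isochoric legs contribute nothing ($dV = 0$), the whole difference comes from which pressure the expansion happens at, which is exactly the "curve in state space" the text describes.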
Mechanical energy includes things like the kinetic energy of a moving ball, or the potential energy a roller coaster has at the top of its track. The physics of electromagnetism is usually treated separately, but in some situations the mechanics (i.e. the mathematics of motion) of bodies influenced by electromagnetic forces is the same as that of those influenced by gravity. For example, two particles of opposite electrical charge experience an attractive force which is (allowing for certain idealizations) mathematically identical to the gravitational force two passing planets experience. An electromechanical system might also involve the conversion of mechanical energy into electrical charges or magnetic fields, or vice versa. Everyday objects are composed of atoms and molecules, which to some degree are like billiard balls that are constantly bouncing off one another. "Mechanical energy" might include the kinetic energy of these particles, or potential energy stored in their physical arrangement. For example, a compressed solid exerts pressure because electromagnetic forces between particles tend to push them apart. Compressing a solid (moving the particles "uphill" against repulsive electromagnetic forces) stores potential energy in a similar way that pushing a boulder up a hill does (moving the object uphill against the attractive gravitational force of the Earth). On the other hand, a compressed gas exerts pressure because independently moving particles collide with the walls of the container and change direction. The particle is accelerated (its velocity vector changed), and the acceleration times the mass of the particle gives the force applied. Compressing a gas changes the average kinetic energy of the particles, which is reflected in the corresponding increase in the temperature of the gas. 
The pressure also increases, but this is because the same number of particles have been forced into a smaller volume, so they collide more often with the walls. The force of any given collision is the same, but the number of collisions has increased. Potential energy does play a role in the pressure of a gas. During an individual collision, a gas molecule comes closer to the molecules of the container wall. The electric fields exert a force on the molecule, slowing it down and reducing its kinetic energy. This energy is temporarily stored as potential energy. Soon, the particle is nearly stationary (if it happened to approach head on), or at least it is not approaching the wall any more. The electric fields continue to exert a force on the gas molecule. The force continues to change the velocity, and soon the molecule is moving away from the wall and gaining kinetic energy. Generally, the collision is elastic, and all of the kinetic energy is recovered: the particle continues moving with the same speed it had originally. Rigid-body mechanics studies how rigid bodies behave in response to external forces. Continuum mechanics studies the internal motion of liquids, gases, and other forms of matter. Mechanical energy can be expended in crushing a soda can, affecting the motion and positional arrangement of its component molecules. Mechanical energy can be transferred from the molecules of a solid to the molecules of a liquid when, for example, a glass of water is stirred. When a given quantity of mechanical energy is transferred (such as when throwing a ball, lifting a box, crushing a can, or stirring a beverage) it is said that this amount of mechanical work has been done. Both mechanical energy and mechanical work are measured in the same units as energy in general. It is usually said that a component of a system has a certain amount of "mechanical energy" (i.e. 
it is a state function), whereas "mechanical work" describes the amount of mechanical energy a component has gained or lost. The conservation of mechanical energy is a principle which states that, under certain conditions, the total mechanical energy of a system is constant. This rule does not hold when mechanical energy is converted to other forms, such as chemical, nuclear, or electromagnetic. However, the principle of general conservation of energy is so far an unbroken rule of physics - as far as we know, energy cannot be created or destroyed, only changed in form. Scientists often make simplifying assumptions to make calculations about how mechanical systems behave. For example, instead of calculating the mechanical energy separately for each of the billions of molecules in a soccer ball, it is easier to treat the entire ball as one object. This means that only two numbers (one for kinetic mechanical energy, and one for potential mechanical energy) are needed for each dimension (for example, up/down, north/south, east/west) under consideration. To calculate the energy of a system without any simplifying assumptions would require examining the state of all elementary particles and considering all four fundamental interactions. This is usually only done for very small systems, such as those studied in particle physics. The classification of energy into different "types" often follows the boundaries of the fields of study in the natural sciences. For example, chemical energy, the energy stored in interactions between the particles in a substance, is studied in chemistry. In certain cases, it can be unclear what counts as "mechanical" energy. 
For example, is the energy stored in the structure of a crystal "mechanical" or "chemical"? Scientists generally use these "types" as convenient labels which clearly distinguish between different phenomena. It is not scientifically important to decide what is "mechanical" energy and what is "chemical". In these cases, there is usually a more specific name for the phenomenon in question. For example, in considering two bonded atoms, there are energy components from vibrational motion, from angular motion, from the electrical charge on the nuclei, from secondary electromagnetic considerations, and quantum mechanical contributions concerning the energy state of the electron shells. If an external work $W$ acts upon a body, causing its kinetic energy to change from $E_{k1}$ to $E_{k2}$, then: $$W = \Delta E_k = E_{k2} - E_{k1}$$ The principle of conservation of mechanical energy states that, if a system is subject only to conservative forces (e.g. only to a gravitational force), its mechanical energy remains constant. For instance, if an object with constant mass is in free fall, the total energy of position 1 will equal that of position 2: $$(E_k + E_p)_1 = (E_k + E_p)_2$$ where $E_k$ is the kinetic energy, and $E_p$ is the potential energy.
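The free-fall statement can be checked in a few lines (toy values for $m$, $g$ and the drop height, not from the text):

```python
# Sketch: for an object in free fall (gravity only), E_k + E_p is the same
# at every height. Uses the kinematic relation v^2 = v0^2 + 2 g (h0 - h).
g, m = 9.81, 1.0             # gravitational acceleration, mass (toy values)
h0, v0 = 10.0, 0.0           # dropped from rest at 10 m

for h in (10.0, 7.5, 5.0, 0.0):
    v_sq = v0**2 + 2 * g * (h0 - h)          # speed squared at height h
    E_total = 0.5 * m * v_sq + m * g * h     # (E_k + E_p) at height h
    print(h, E_total)                        # always equals m*g*h0
```

Algebraically, $\frac{1}{2}m\,[2g(h_0-h)] + mgh = mgh_0$ for every $h$, which is the conservation statement in the text.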
Your calculation assumes that the charges on both capacitors are the same. There is a fixed amount of charge $Q$ which is shared between the two capacitors. When the charge on the spherical capacitor is $q$ that on the other capacitor is $Q-q$. Another thing which your diagram does not show is that the LH plate of the spherical capacitor is the Earth, so it is grounded. In the steady state, when current is no longer flowing, the voltage across both capacitors is the same, but reversed. You have the time constant almost correct : it should be $CR$ where $1/C=1/C_1+1/C_2$ and $C_1=4\pi \epsilon_0 r$ and $R=R_1+R_2$. Your mistake here was to write $i=\frac{dq}{dt}$ where $q$ is the charge on the spherical capacitor. A +ve current flowing away from the spherical capacitor $C_1$ means that $q$ is decreasing, so you should have $i=-\frac{dq}{dt}$. Conservation of Energy Method The initial voltage across the spherical capacitor is $\frac{Q}{C_1}$ where $C_1=4\pi\epsilon_0 r$. In the steady state the voltages across both capacitors will be the same (but reversed) because the outer plates are grounded. This means that the final charges $q_1, q_2$ will be in proportion to capacitance : $$\frac{q_1}{C_1}=\frac{q_2}{C_2}$$ $$Q=q_1+q_2$$ $$q_1=\frac{C_1}{C_1+C_2} Q, q_2=\frac{C_2}{C_1+C_2} Q$$ The initial energy stored in the spherical capacitor is $\frac{Q^2}{2C_1}$. The final energy stored is $$\frac{q_1^2}{2C_1}+\frac{q_2^2}{2C_2}=\frac{C_1Q^2}{2(C_1+C_2)^2}+\frac{C_2 Q^2}{2(C_1+C_2)^2}=\frac{Q^2}{2(C_1+C_2)}$$ The loss in energy (which is dissipated as heat in both resistors) is $$E=\frac{Q^2}{2C_1}-\frac{Q^2}{2(C_1+C_2)}=\frac{C_2}{2C_1(C_1+C_2)}Q^2$$ The same current flows through both resistors, so the total energy is dissipated in proportion to resistance : $$E_1=\frac{R_1}{R_1+R_2}E$$ $$E_2=\frac{R_2}{R_1+R_2}E$$ Integration Method The same current $i$ flows through both resistors [1]. 
The application of Kirchhoff's Voltage Law gives $$\frac{q_1}{C_1}-i(R_1+R_2)-\frac{Q-q_1}{C_2}=0$$ You can solve this for $q_1$, but it is more convenient to differentiate to get an equation for the current $i$, then solve that. Note that $i=-\frac{dq_1}{dt}$ because +ve current is a decrease in $q_1$. $$-\frac{i}{C_1}-(R_1+R_2)\frac{di}{dt}-\frac{i}{C_2}=0$$ $$(R_1+R_2)\frac{di}{dt}=-\left(\frac{1}{C_1}+\frac{1}{C_2}\right)i$$ $$\frac{di}{dt}=-\frac{1}{CR}i$$ where $R=R_1+R_2$ and $\frac{1}{C}=\frac{1}{C_1}+\frac{1}{C_2}$. When $t=0$ the initial current is $i_0=\frac{V}{R}=\frac{Q}{RC_1}$ [2]. The solution is $$i=i_0 e^{-t/RC}=\frac{Q}{RC_1} e^{-t/RC}$$ The total electrical energy dissipated as heat in the two resistors is $$E=\int_0^\infty i^2 R\, dt=\int_0^\infty i_0^2R e^{-2t/RC}dt=\Big[-i_0^2 \frac{R^2C}{2}e^{-2t/RC}\Big]_0^\infty=\frac{R^2C}{2}\frac{Q^2}{R^2 C_1^2}=\frac{CQ^2}{2C_1^2}=\frac{C_2}{2C_1(C_1+C_2)}Q^2$$ as found using the Conservation of Energy method. Note 1: The same current flows through both resistors because any +ve charge $+\delta q$ leaving $C_1$ flows to the +ve plate of $C_2$ and induces an equal -ve charge $-\delta q$ on the grounded plate of $C_2$. This -ve charge comes from Earth. A -ve charge flowing left is equivalent to a +ve charge flowing right. Likewise the decrease $\delta q$ of +ve charge on the RH plate of $C_1$ induces an equal decrease $\delta q$ in the -ve charge on the LH plate of $C_1$. The -ve charge flowing leftwards to Earth is equivalent to a +ve charge flowing right to $C_1$. In every section of the circuit the same conventional +ve current is flowing left to right. Note 2: Initially $C_2$ is uncharged. The potential difference across $C_2$ is zero. The RH plate of $C_2$ is grounded, so the LH plate of $C_2$ is also at zero potential. Meanwhile the RH plate of $C_1$ is at $V$. The voltage across $R_1$ is $V$. So why isn't the initial current $i_0=\frac{V}{R_1}$ instead of $\frac{V}{R}$?
It is again because (as in Note 1) at all times the same current flows in $R_1$ and $R_2$.
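The agreement between the two methods can be verified numerically. The sketch below assumes arbitrary illustrative component values ($Q$, $C_1$, $C_2$, $R_1$, $R_2$ are not given in the question) and integrates $i^2R$ with a midpoint rule:

```python
import math

# Illustrative component values (not from the problem statement)
Q, C1, C2, R1, R2 = 1e-6, 2e-9, 3e-9, 100.0, 150.0
R = R1 + R2
C = 1.0 / (1.0 / C1 + 1.0 / C2)        # series combination; time constant is RC

# Conservation-of-energy result
E_cons = C2 * Q**2 / (2 * C1 * (C1 + C2))

# Integration result: E = integral of i^2 R dt with i = (Q/(R*C1)) exp(-t/RC),
# midpoint rule out to 20 time constants
i0 = Q / (R * C1)
dt = R * C / 2000.0
E_int = sum((i0 * math.exp(-(k + 0.5) * dt / (R * C)))**2 * R * dt
            for k in range(40000))

print(abs(E_int - E_cons) / E_cons < 1e-4)  # True
```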
The $(M+1)$ peak is often considered in the high-resolution mass spectra of organic molecules, as it reveals the number of carbon atoms in the sample. In general, for the mass spectrum of an organic molecule containing $n$ carbon atoms and no heteroatoms, the ratio of the size of the $M$ to $(M+1)$ peaks is $98.9 : 1.1 \times n$, since the relative abundance in nature of $^{13}$C is $1.1$%. It is mentioned on this site that the $(M+2)$ peak is statistically insignificant. However, I believe that only applies to organic molecules with a relatively small number of carbon atoms, and this peak would become significant when considering the mass spectra of larger organics. Using simple mathematics, I derived that the ratio of the $M$ to $(M+2)$ peak is $98.9^{2} : 1.1^{2} \times {}_nC_2$. I would like to verify whether this is correct. If it is not, could someone then correct it by posting an answer? You are correct on all counts. To a very good approximation, molecules can be thought of as made of elements (with their respective isotope distributions) combining completely independently. You can think of it like rolling multiple dice at once. This means that a simple multinomial distribution will describe this problem mathematically. Let's start with something easy and consider the hypothetical molecule $\ce{C_5}$. Furthermore, let us consider that the only carbon isotopes with significant natural occurrence are $\ce{^{12}C}$ (98.9%) and $\ce{^{13}C}$ (1.1%). We can find all of the isotopic peaks and their relative abundances by then expanding the binomial $(0.989\times m[^{12}C] + 0.011\times m[^{13}C])^5$, where $m[^{12}C]$ and $m[^{13}C]$ denote the exact masses of the carbon-12 and carbon-13 isotopes, respectively.
Expanding the binomial yields: $\begin{equation} \begin{aligned} (0.989\times m[^{12}C] + 0.011\times m[^{13}C])^5 ={} & \ \ \ \ \ \binom {5} {0}(0.989\times m[^{12}C])^5 \\ & + \binom {5} {1}(0.989\times m[^{12}C])^4 \times (0.011\times m[^{13}C]) \\ & + \binom {5} {2}(0.989\times m[^{12}C])^3 \times (0.011\times m[^{13}C])^2 \\ & + \binom {5} {3}(0.989\times m[^{12}C])^2 \times (0.011\times m[^{13}C])^3 \\ & + \binom {5} {4}(0.989\times m[^{12}C]) \times (0.011\times m[^{13}C])^4 \\ & + \binom {5} {5} (0.011\times m[^{13}C])^5 \\ \end{aligned} \end{equation}$ Calculating the coefficients in each term: $\begin{equation} \begin{aligned} (0.989\times m[^{12}C] + 0.011\times m[^{13}C])^5 ={} & \ \ \ \ \ 0.946 \times (m[^{12}C])^5 \\ & + 0.0526\times (m[^{12}C])^4 \times m[^{13}C] \\ & + 0.00117 \times (m[^{12}C])^3 \times (m[^{13}C])^2 \\ & + 0.0000130 \times (m[^{12}C])^2 \times (m[^{13}C])^3 \\ & + 7.24\times 10^{-8}\times (m[^{12}C]) \times (m[^{13}C])^4 \\ & + 1.61\times 10^{-10} \times (m[^{13}C])^5 \\ \end{aligned} \end{equation}$ From the expansion, we see that 94.6% of all $\ce{C_5}$ molecules contain only carbon-12 (the lowest possible mass for the molecule), and almost all of the rest (5.3% out of the remaining 5.4%) is accounted for by molecules that contain a single carbon-13 atom. Only about 0.1% of $\ce{C_5}$ molecules contain two or more carbon-13 atoms. But what happens in very large molecules? Intuitively, if there are many atoms, you would expect a higher chance of there being at least one less common isotope in the mix.
Let's see the first few terms for the molecule $\ce{C_{100}}$: $\begin{equation} \begin{aligned} (0.989\times m[^{12}C] + 0.011\times m[^{13}C])^{100} ={} & \ \ \ \ \ \binom {100} {0}(0.989\times m[^{12}C])^{100} \\ & + \binom {100} {1}(0.989\times m[^{12}C])^{99} \times (0.011\times m[^{13}C]) \\ & + \binom {100} {2}(0.989\times m[^{12}C])^{98} \times (0.011\times m[^{13}C])^2 \\ & + \ ...\\ \end{aligned} \end{equation}$ Calculating the coefficients: $\begin{equation} \begin{aligned} (0.989\times m[^{12}C] + 0.011\times m[^{13}C])^{100} ={} & \ \ \ \ \ 0.331 \times (m[^{12}C])^{100} \\ & + 0.368 \times (m[^{12}C])^{99} \times (m[^{13}C]) \\ & + 0.203 \times (m[^{12}C])^{98} \times (m[^{13}C])^2 \\ & +\ ...\\ \end{aligned} \end{equation}$ Well that's interesting. Now only 33.1% of the molecules contain only carbon-12 atoms, and in fact more molecules contain exactly one carbon-13 atom, at 36.8% of the total. Even molecules with two carbon-13 atoms are quite abundant, at 20.3%. Indeed, peaks containing rarer isotopes eventually dominate. For the huge molecule $\ce{C_{10000}}$, the strongest mass spectrum signal would come from molecules containing 110 carbon-13 atoms, corresponding to 3.8% of the total, while a measly $9.2\times 10^{-47}\%$ of molecules contain only carbon-12. This happens because when $n$ is large, the term $\binom {n} {k}$ grows very quickly as $k$ rises from zero, overwhelming the increase in the exponent of the rarer isotope. You can see this behaviour quite nicely in this sequence of mass spectra of molecules with increasing size.
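The quoted percentages for $\ce{C_{100}}$ are easy to reproduce; here is a minimal sketch using nothing but the binomial formula and the abundances 0.989 and 0.011 from above:

```python
from math import comb

p12, p13 = 0.989, 0.011  # natural abundances of carbon-12 and carbon-13

def peak(n, k):
    """Relative height of the M+k peak for n carbons: binomial probability
    that exactly k of the n carbon atoms are carbon-13."""
    return comb(n, k) * p12**(n - k) * p13**k

print(round(peak(100, 0), 3))  # 0.331  -> M peak
print(round(peak(100, 1), 3))  # 0.368  -> M+1 peak
print(round(peak(100, 2), 3))  # 0.203  -> M+2 peak
```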
To calculate the specific $M/(M+2)$ ratio for a molecule containing only $n$ carbon atoms, all you need is the ratio of the first and third terms in the binomial: $\begin{equation} \begin{aligned} (0.989\times m[^{12}C] + 0.011\times m[^{13}C])^n ={} & \ \ \ \ \ \color{#0000ff}{ \binom {n} {0}(0.989\times m[^{12}C])^n} \\ & + \binom {n} {1}(0.989\times m[^{12}C])^{n-1} \times (0.011\times m[^{13}C]) \\ & + \color{#0000ff}{\binom {n} {2}(0.989\times m[^{12}C])^{n-2} \times (0.011\times m[^{13}C])^2} \\ & +\ ...\\ \end{aligned} \end{equation}$ The ratio is then: $$\frac{\binom {n} {0}0.989^n}{\binom {n} {2}0.989^{n-2} \times 0.011^2}=\frac{2\times 0.989^2}{n(n-1)\times 0.011^2}$$ Technically this only holds if there are no other elements which contain multiple isotopes, though it will hold approximately if the other elements only have very rare alternate isotopes, such as hydrogen (99.98% hydrogen-1, 0.02% hydrogen-2). As a last curiosity, all of the above extends to analysing more complicated molecules. For example, glucose ($\ce{C6H12O6}$) will have a mass spectrum exactly described by the expression: $$(0.989\times m[^{12}C] + 0.011\times m[^{13}C])^6 \times (0.9998\times m[^{1}H] + 0.0002\times m[^{2}H])^{12} \times (0.9976\times m[^{16}O] + 0.0004\times m[^{17}O] + 0.0020\times m[^{18}O])^6$$ Happy expanding! Let's assume your compound is $\ce{C_nH_xO_y}$. Thankfully, both hydrogen and oxygen are elements that only have one significant naturally occurring isotope. Therefore, we can treat the entire contribution of $\ce{H_xO_y}$ as a constant $c$. All we need to answer is how large the $M+1$ and $M+2$ peaks are given the number of carbons, $n$. (This treatment is not entirely correct as highlighted by orthocresol, but it is close enough for my purposes.) Assuming you do not have any isotope enrichments, each carbon will independently have a chance of either being $\ce{^12C}$ or $\ce{^13C}$ (again, we will ignore all other isotopes such as $\ce{^14C}$).
The independence is the big key word here. We can use general principles of stochastics to calculate the result. The probability that all carbon atoms of one molecule of your compound are $\ce{^12C}$ is: $$P(\ce{^12C_nH_xO_y}) = 0.989^n$$ The $M+1$ peak is represented by one single atom being $\ce{^13C}$. Again, the principles of stochastics apply: $$P(\ce{^12C_{n-1}^13C1H_xO_y}) = \binom{n}{1} \times 0.989^{n-1} \times 0.011$$ And finally for the $M+2$ peak: $$P(\ce{^12C_{n-2}^13C2H_xO_y}) = \binom{n}{2} \times 0.989^{n-2} \times 0.011^2$$ Using this, we can plot what the probability of each peak is – and use that as their relative heights. We see that from $n=90$ the $M+1$ peak is actually larger than the $M$ peak. From $n=128$, even the $M+2$ peak will be larger than the $M$ peak. And from $n=181$ the $M+2$ peak becomes the largest of these three. Of course, further peaks will also start appearing, meaning that the mass spectra of very large molecules will become difficult to analyse. Now the numbers from which $M$ is no longer the principal peak are quite large. Unless you are synthesising maitotoxin, you will probably not need to resort to any analysis beyond the $M+1$ peak.
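The crossover values quoted above ($n=90$, $128$, $181$) follow directly from these formulas; a short sketch recomputing them (abundances 0.989/0.011 as in the answer, helper names mine):

```python
from math import comb

p12, p13 = 0.989, 0.011

def peak(n, k):
    """Relative height of the M+k peak for n carbons (binomial probability)."""
    return comb(n, k) * p12**(n - k) * p13**k

def first_n(cond):
    """Smallest n >= 2 satisfying cond(n)."""
    n = 2
    while not cond(n):
        n += 1
    return n

print(first_n(lambda n: peak(n, 1) > peak(n, 0)))  # 90:  M+1 overtakes M
print(first_n(lambda n: peak(n, 2) > peak(n, 0)))  # 128: M+2 overtakes M
print(first_n(lambda n: peak(n, 2) > peak(n, 1)))  # 181: M+2 overtakes M+1
```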
NOTE: AFAICT, D.W. found a hole in this reduction and it is wrong (see comments). Keeping it here for historical reasons. Intro: first I will reduce Minimum True Monotone 3SAT to our problem. Though the Monotone 3SAT problem is trivially satisfiable, our problem can further solve the Minimum True Monotone 3SAT problem, which is NP-hard; thus this problem is NP-hard. Reduction from Monotone 3SAT to our problem. We have a monotone boolean formula expressed as a sequence of variables and a sequence of clauses. The CNF is of the form $\Phi = (\mathcal V,\mathcal C)$ such that: $$\forall_{\left(c_i \in \mathcal C\right)} ~ \left.c_i=\left(x_j \vee x_k \vee x_l\right) \vphantom{\LARGE | } \right|_{\left(x_j,x_k,x_l \in \mathcal V\right)}$$ and $$\left.{\Large{\bigwedge}}_{i=1}^{n}{c_i}\right|_{\genfrac{}{}{0}{}{c_i\in \mathcal C,}{n=\left|\mathcal C\right|}}.$$ Conversion. We construct a graph, $G'=(V',E')$. Each vertex in $G'$ has a label; vertices with the same label are eligible for contraction. First we construct the graph as follows: for each $x_i \in \mathcal V$, we make two nodes, each labeled $x_i$, and a directed edge from one to the other (click images for high resolution view). These nodes can of course be contracted, because they have the same label. We will consider variable nodes that are contracted to be valued as false, and those that are uncontracted to be valued as true. After this step, $V'$ should contain $2\cdot \left|\mathcal V\right|$ nodes. Next, we introduce the clause constraints. For each clause $c_i \in \mathcal C, ~ \left.c_i = (x_j \vee x_k \vee x_l) \right|_{x_j,x_k,x_l \in \mathcal V}$, we introduce a node $c_i$ and the edges shown in the figure. Note the duplication of $c_i$ is for viewing purposes only; there is only $1$ node labeled $c_i$. (click image for full view) After this step, we should have $2\cdot \left|\mathcal V\right| + |\mathcal C|$ nodes. Now, if $x_j$, $x_k$ and $x_l$ all get contracted, $c_i \rightarrow c_i$ will result in a cycle.
Here is another visualization, unrolling the clause constraint. Thus, each clause constraint requires that at least one of the variables it contains remain uncontracted; since the uncontracted nodes are valued as true, this requires that one of the variables be true; exactly what Monotone SAT requires for its clauses. Reduction from Minimum True Monotone 3SAT. Monotone 3SAT is trivially satisfiable; you can simply set all the variables to true. However, since our DAG minimization problem is to find the most contractions, this translates to finding the satisfying assignment that produces the most false variables in our CNF, which is the same as finding the minimum number of true variables. This problem is known under several names (as an optimization problem, or a decision problem), or as Minimum True Monotone 3SAT (as a weaker decision problem); both are NP-hard problems. Thus our problem is NP-hard.
In relativity, the symmetric energy-momentum tensor is given by $$ T^{ij}, $$ where $T^{00}$ is the energy density and $\frac{1}{c}T^{10}$ is the momentum density. Thus: $$ \left(\frac{1}{c}T^{00}dV, \frac{1}{c}T^{10}dV\right)^{T}$$ is the 4-momentum. Under a Lorentz transformation, this should transform like a 4-vector, so $$ \frac{1}{c}T^{00}dV= \left[\frac{1}{c}T'^{00}dV'+\frac{v}{c^2}T'^{10}dV'\right] \left( 1-\frac{v^2}{c^2}\right)^{-1/2},\qquad dV=dV'\sqrt{1-\frac{v^2}{c^2}}.$$ After simplifications, we have: $$ T^{00}= \left[T'^{00}+\frac{v}{c}T'^{10} \right] \left( 1-\frac{v^2}{c^2}\right)^{-1}$$ But if we apply the Lorentz transformation to the tensor directly we get $$ T^{00}= \left[T'^{00}+2\frac{v}{c}T'^{10}+\frac{v^2}{c^2}T'^{11} \right]\left( 1-\frac{v^2}{c^2}\right)^{-1}$$ What accounts for the difference? I think the first is wrong but have no idea why. One really doesn't talk about components of the stress-energy tensor being momentum densities. Instead, you think about projections on vectors. So, the energy density observed by a timelike observer with 4-velocity $u^{a}$ is given by $$j^{a}=T^{a}{}_{b}u^{b}$$ Now, when you boost, you have two choices. You can either boost to a new reference frame, and still look at the density observed by our original observer, which then means you have to boost $u^{a}$ as well: $$\begin{align} j^{\prime}{}^{a} &= \Lambda^{a}{}_{c}\Lambda_{b}{}^{d}T^{c}{}_{d}\left(\Lambda^{b}{}_{e}u^{e}\right)\\ &= \Lambda^{a}{}_{c}\Lambda_{e}{}^{d}T^{c}{}_{d}u^{e} \end{align}$$ OR, we could measure the stress-energy tensor as observed by someone who has a four-velocity $v^{a}$ which has the same components as $u^{a}$, but in the boosted frame (i.e., $v^{a} = \Lambda^{-1}{}^{a}{}_{c}u^{c}$).
Then, we have: $$\begin{align} j^{\prime}{}^{a} &= \Lambda^{a}{}_{c}\Lambda_{b}{}^{d}T^{c}{}_{d}\left(v^{b}\right)\\ &= \Lambda^{a}{}_{c}\Lambda_{b}{}^{d}T^{c}{}_{d}\left(\Lambda^{-1}{}^{b}{}_{e}u^{e}\right)\\ &=\Lambda^{a}{}_{c}\delta_{e}{}^{d}T^{c}{}_{d}u^{e}\\ &=\Lambda^{a}{}_{c}T^{c}{}_{d}u^{d}\\ \end{align}$$ As you can see, the two results differ by a factor of the Lorentz transformation, and most critically, you need the two factors of the Lorentz transformation to mix the $T^{11}$ component into the momentum density. Let $\textbf{u},\textbf{w}$ be the four velocities of observers at rest in the original and primed frames, respectively. Let $\Delta V$ be a small volume in $\textbf{u}$'s frame, and let $\Delta p^{\mu}$ be the total amount of four momentum contained in $\Delta V$. $\Delta p^{\mu}$ is also expressible as $T^{\mu0}\Delta V=T^{\mu\alpha}(u_{\alpha}\Delta V)$ - the flux of $T\cdot\textbf{d}x^{\mu}$ through an oriented (in the direction of $\textbf{u}$) hypersurface $\Delta V$. When you perform a Lorentz transformation on the volume $\Delta V$ to obtain $\Delta V'$ the components of the volume's orientation also transform. The total four momentum in $\Delta V'$, oriented with $\textbf{u}$, is: $$\Delta p'^{\mu}=T'^{\mu\alpha}(u'_{\alpha}\Delta V')=T^{\sigma \rho}\Lambda^{\mu'}_{\sigma}\Lambda^{\alpha'}_{\rho}\Lambda^{\beta}_{\alpha'}u_{\beta}\Delta V'=\left(\frac{\Delta p^{\sigma}}{\Delta V}\right)\Lambda^{\mu'}_{\sigma}\Delta V'$$ So the four momentum density flowing through the hypersurface $\Delta V$ oriented by $\textbf{u}$ is indeed a four vector. This is not the same object as $T'^{\mu0}\Delta V'=T'^{\mu \alpha}w'_{\alpha}\Delta V'$ - the total four momentum flowing through a hypersurface (of volume $\Delta V'$) oriented by $\textbf{w}$. $T^{00}$ is the $\textbf{u}=\textbf{e}_{0}$ component of the momentum flux through the $\textbf{u}$ hypersurface; this corresponds to your last equation.
The quantity on the right hand side of the preceding equation is the $\textbf{u}=\textbf{e}_{0}$ component of momentum flux through the $\textbf{w}$ hypersurface. For an observer with 4-velocity $u^\mu$, the density of 4-momentum is $u_\nu T^{\mu\nu}$. If $u^\nu = (1,0,0,0)$, then indeed the 4-momentum is $p^\mu = T^{\mu 0}$ and this is a 4-vector. Now let primed indices correspond to components in some other frame. Then $$ p^{0'} = L^{0'}_\mu p^\mu = L^{0'}_{\mu} T^{\mu 0}.$$ But it is not the case that $$p^{0'} \overset{!}{=} T^{0'0'}.$$ You can see this from the covariant expression $$p^{\mu'} = u_{\nu'}T^{\mu' \nu'}.$$ Since after a boost along the $1$-axis, $u_{\nu'} = (\gamma, -\gamma v, 0,0) \neq (1,0,0,0)$, what is true is that $$p^{0'} = \gamma T^{0'0'} - \gamma vT^{1'0'}.$$ This should cancel the additional term that confused you. I build on Brian Trundy's answer, giving a more explicit derivation. Indeed, when you do a Lorentz boost your orientation changes and you cannot think of $dV$ transforming as a simple length. You need to think of it as the component of a four-vector $dV^\mu=u^\mu dV$, where $u^\mu$ is the four-velocity.
In a static reference frame, the four-velocity is $u^\mu=(c,0)$ (let me assume one spatial dimension for simplicity), so the energy is just $$dE=\frac{1}{c}dp^0=\frac{1}{c}T^{0\nu}dV_\nu=\frac{1}{c}T^{00}dV$$ Now, indeed, if you transform this object as a vector you get $$\frac{1}{c}T^{00}dV=\frac{1}{c}T^{0\nu}dV_\nu=\left[\frac{1}{c}T'^{0\nu}dV'_\nu-\frac{v}{c^2}T'^{1\nu}dV'_\nu\right]\left(1-\frac{v^2}{c^2}\right)^{-1/2}$$ And the transformation of the volume element four-vector is $$dV'^\nu=\left(1-\frac{v^2}{c^2}\right)^{-1/2}(dV,-v\,dV)$$ So in the end we have $$\frac{1}{c}T^{00}dV=\frac{1}{c}\left[T'^{00}-2\frac{v}{c}T'^{01}+\frac{v^2}{c^2}T'^{11}\right]\left(1-\frac{v^2}{c^2}\right)^{-1}dV$$ Which coincides with the transformation of the stress-energy tensor as a $2$-tensor, the factor of $2$ appearing due to the symmetry $T^{\mu\nu}=T^{\nu\mu}$.
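The bookkeeping in these answers is easy to confirm numerically. Below is a minimal sketch (units $c=1$, boost convention $\Lambda^0{}_0=\Lambda^1{}_1=\gamma$, $\Lambda^0{}_1=\Lambda^1{}_0=\gamma v$; all component values are arbitrary illustrations) showing that transforming $T$ as a 2-tensor produces the $\gamma^2 v^2\, T'^{11}$ term, with two powers of the boost:

```python
import math

# Assumed convention: boost along x with velocity v, units c = 1
v = 0.6
g = 1.0 / math.sqrt(1.0 - v * v)
L = [[g, g * v], [g * v, g]]           # (t, x) block of the boost matrix

# Arbitrary symmetric stress-energy components in the primed frame
Tp = [[3.0, 1.2], [1.2, 0.7]]

# Transform as a 2-tensor: T^{mu nu} = Lambda^mu_a Lambda^nu_b T'^{ab}
T = [[sum(L[m][a] * L[n][b] * Tp[a][b] for a in range(2) for b in range(2))
      for n in range(2)] for m in range(2)]

# The 00 component picks up T'^{11} and a doubled cross term:
expected = g * g * (Tp[0][0] + 2 * v * Tp[0][1] + v * v * Tp[1][1])
print(abs(T[0][0] - expected) < 1e-12)  # True
```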
A. Enayat, J. D. Hamkins, and B. Wcisło, “Topological models of arithmetic,” ArXiv e-prints, 2018. (under review) @ARTICLE{EnayatHamkinsWcislo2018:Topological-models-of-arithmetic, author = {Ali Enayat and Joel David Hamkins and Bartosz Wcisło}, title = {Topological models of arithmetic}, journal = {ArXiv e-prints}, year = {2018}, volume = {}, number = {}, pages = {}, month = {}, note = {under review}, abstract = {}, keywords = {under-review}, source = {}, doi = {}, eprint = {1808.01270}, archivePrefix = {arXiv}, primaryClass = {math.LO}, url = {http://wp.me/p5M0LV-1LS}, } Abstract. Ali Enayat had asked whether there is a nonstandard model of Peano arithmetic (PA) that can be represented as $\newcommand\Q{\mathbb{Q}}\langle\Q,\oplus,\otimes\rangle$, where $\oplus$ and $\otimes$ are continuous functions on the rationals $\Q$. We prove, affirmatively, that indeed every countable model of PA has such a continuous presentation on the rationals. More generally, we investigate the topological spaces that arise as such topological models of arithmetic. The reals $\newcommand\R{\mathbb{R}}\R$, the reals in any finite dimension $\R^n$, the long line and the Cantor space do not, and neither does any Suslin line; many other spaces do; the status of the Baire space is open. The first author had inquired whether a nonstandard model of arithmetic could be continuously presented on the rational numbers. Main Question. (Enayat, 2009) Are there continuous functions $\oplus$ and $\otimes$ on the rational numbers $\Q$, such that $\langle\Q,\oplus,\otimes\rangle$ is a nonstandard model of arithmetic? By a model of arithmetic, what we mean here is a model of the first-order Peano axioms PA, although we also consider various weakenings of this theory. The theory PA asserts of a structure $\langle M,+,\cdot\rangle$ that it is the non-negative part of a discretely ordered ring, plus the induction principle for assertions in the language of arithmetic.
The natural numbers $\newcommand\N{\mathbb{N}}\langle \N,+,\cdot\rangle$, for example, form what is known as the standard model of PA, but there are also many nonstandard models, including continuum many non-isomorphic countable models. We answer the question affirmatively, and indeed, the main theorem shows that every countable model of PA is continuously presented on $\Q$. We define generally that a topological model of arithmetic is a topological space $X$ equipped with continuous functions $\oplus$ and $\otimes$, for which $\langle X,\oplus,\otimes\rangle$ satisfies the desired arithmetic theory. In such a case, we shall say that the underlying space $X$ continuously supports a model of arithmetic and that the model is continuously presented upon the space $X$. Question. Which topological spaces support a topological model of arithmetic? In the paper, we prove that the reals $\R$, the reals in any finite dimension $\R^n$, the long line and Cantor space do not support a topological model of arithmetic, and neither does any Suslin line. Meanwhile, there are many other spaces that do support topological models, including many uncountable subspaces of the plane $\R^2$. It remains an open question whether any uncountable Polish space, including the Baire space, can support a topological model of arithmetic. Let me state the main theorem and briefly sketch the proof. Main Theorem. Every countable model of PA has a continuous presentation on the rationals $\Q$. Proof. We shall prove the theorem first for the standard model of arithmetic $\langle\N,+,\cdot\rangle$. Every school child knows that when computing integer sums and products by the usual algorithms, the final digits of the result $x+y$ or $x\cdot y$ are completely determined by the corresponding final digits of the inputs $x$ and $y$. Presented with only final segments of the input, the child can nevertheless proceed to compute the corresponding final segments of the output. 
\begin{equation*}\small\begin{array}{rcr} \cdots1261\quad & \qquad & \cdots1261\quad\\ \underline{+\quad\cdots 153\quad}&\qquad & \underline{\times\quad\cdots 153\quad}\\ \cdots414\quad & \qquad & \cdots3783\quad\\ & & \cdots6305\phantom{3}\quad\\ & & \cdots1261\phantom{53}\quad\\ & & \underline{\quad\cdots\cdots\phantom{253}\quad}\\ & & \cdots933\quad\\ \end{array}\end{equation*} This phenomenon amounts exactly to the continuity of addition and multiplication with respect to what we call the final-digits topology on $\N$, which is the topology having basic open sets $U_s$, the set of numbers whose binary representations end with the digits $s$, for any finite binary string $s$. (One can do a similar thing with any base.) In the $U_s$ notation, we include the number that would arise by deleting initial $0$s from $s$; for example, $6\in U_{00110}$. Addition and multiplication are continuous in this topology, because if $x+y$ or $x\cdot y$ has final digits $s$, then by the school-child’s observation, this is ensured by corresponding final digits in $x$ and $y$, and so $(x,y)$ has an open neighborhood in the final-digits product space, whose image under the sum or product, respectively, is contained in $U_s$. Let us make several elementary observations about the topology. The sets $U_s$ do indeed form the basis of a topology, because $U_s\cap U_t$ is empty, if $s$ and $t$ disagree on some digit (comparing from the right), or else it is either $U_s$ or $U_t$, depending on which sequence is longer. The topology is Hausdorff, because different numbers are distinguished by sufficiently long segments of final digits. There are no isolated points, because every basic open set $U_s$ has infinitely many elements. Every basic open set $U_s$ is clopen, since the complement of $U_s$ is the union of $U_t$, where $t$ conflicts on some digit with $s$.
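The school-child observation is just the statement that addition and multiplication respect congruence modulo $2^k$, which one can spot-check directly (the particular numbers below are illustrative):

```python
k = 5
m = 2 ** k  # looking at the last k = 5 binary digits

# Pairs (x, y) together with offsets that change only higher-order digits
for x, y, dx, dy in [(1261, 153, 7, 3), (92, 41, 1, 5)]:
    X, Y = x + dx * m, y + dy * m       # X, Y share the last k digits of x, y
    assert (X + Y) % m == (x + y) % m   # the sum has the same final digits
    assert (X * Y) % m == (x * y) % m   # the product has the same final digits
print("final digits of sums and products agree")
```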
The topology is actually the same as the metric topology generated by the $2$-adic valuation, which assigns the distance between two numbers as $2^{-k}$, when $k$ is largest such that $2^k$ divides their difference; the set $U_s$ is an open ball in this metric, centered at the number represented by $s$. (One can also see that it is metric by the Urysohn metrization theorem, since it is a Hausdorff space with a countable clopen basis, and therefore regular.) By a theorem of Sierpinski, every countable metric space without isolated points is homeomorphic to the rational line $\Q$, and so we conclude that the final-digits topology on $\N$ is homeomorphic to $\Q$. We’ve therefore proved that the standard model of arithmetic $\N$ has a continuous presentation on $\Q$, as desired. But let us belabor the argument somewhat, since we find it interesting to notice that the final-digits topology (or equivalently, the $2$-adic metric topology on $\N$) is precisely the order topology of a certain definable order on $\N$, what we call the final-digits order, an endless dense linear order, which is therefore order-isomorphic and thus also homeomorphic to the rational line $\Q$, as desired. Specifically, the final-digits order on the natural numbers, pictured in figure 1, is the order induced from the lexical order on the finite binary representations, but considering the digits from right-to-left, giving higher priority in the lexical comparison to the low-value final digits of the number. To be precise, the final-digits order $n\triangleleft m$ holds, if at the first point of disagreement (from the right) in their binary representation, $n$ has $0$ and $m$ has $1$; or if there is no disagreement, because one of them is longer, then the longer number is lower, if the next digit is $0$, and higher, if it is $1$ (this is not the same as treating missing initial digits as zero). 
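The $2$-adic metric description can also be sketched directly (the function name is mine):

```python
def dist2adic(x, y):
    """2-adic distance: 2**-k, where 2**k is the largest power of 2
    dividing x - y; distance 0 when x == y."""
    if x == y:
        return 0.0
    d = abs(x - y)
    k = (d & -d).bit_length() - 1   # exponent of 2 in d (lowest set bit)
    return 2.0 ** -k

# Numbers sharing more final binary digits are closer:
assert dist2adic(6, 22) == 2.0 ** -4   # 6 = 00110, 22 = 10110: 4 shared final digits
assert dist2adic(6, 7) == 1.0          # differ already in the last digit
assert dist2adic(5, 13) < dist2adic(5, 6)
```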
Thus, the even numbers appear as the left half of the order, since their final digit is $0$, and the odd numbers as the right half, since their final digit is $1$, and $0$ is directly in the middle; indeed, the highly even numbers, whose representations end with a lot of zeros, appear further and further to the left, while the highly odd numbers, which end with many ones, appear further and further to the right. If one does not allow initial $0$s in the binary representation of numbers, then note that zero is represented in binary by the empty sequence. It is evident that the final-digits order is an endless dense linear order on $\N$, just as the corresponding lexical order on finite binary strings is an endless dense linear order. The basic open set $U_s$ of numbers having final digits $s$ is an open set in this order, since any number ending with $s$ is above a number with binary form $100\cdots0s$ and below a number with binary form $11\cdots 1s$ in the final-digits order; so $U_s$ is a union of intervals in the final-digits order. Conversely, every interval in the final-digits order is open in the final-digits topology, because if $n\triangleleft x\triangleleft m$, then this is determined by some final segment of the digits of $x$ (appending initial $0$s if necessary), and so there is some $U_s$ containing $x$ and contained in the interval between $n$ and $m$. Thus, the final-digits topology is the precisely same as the order topology of the final-digits order, which is a definable endless dense linear order on $\N$. Since this order is isomorphic and hence homeomorphic to the rational line $\Q$, we conclude again that $\langle \N,+,\cdot\rangle$ admits a continuous presentation on $\Q$. We now complete the proof by considering an arbitrary countable model $M$ of PA. Let $\triangleleft^M$ be the final-digits order as defined inside $M$. 
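Here is a small sketch of the final-digits order as just defined (the function name is mine), checking that the evens sit below $0$ and the odds above it:

```python
def final_digits_less(n, m):
    """n ◁ m in the final-digits order: compare binary digits right-to-left."""
    if n == m:
        return False
    a, b = n, m
    while a > 0 and b > 0:
        da, db = a & 1, b & 1
        if da != db:              # first disagreement from the right
            return da == 0        # the number with digit 0 is lower
        a, b = a >> 1, b >> 1
    # No disagreement; one representation is longer.  The longer number is
    # lower if its next digit is 0, and higher if it is 1.
    n_is_longer = a > 0
    next_digit = (a if n_is_longer else b) & 1
    return n_is_longer if next_digit == 0 else not n_is_longer

assert all(final_digits_less(e, 0) for e in (2, 4, 6, 8, 10, 12))  # evens ◁ 0
assert all(final_digits_less(0, o) for o in (1, 3, 5, 7, 9, 11))   # 0 ◁ odds
assert final_digits_less(4, 2) and final_digits_less(2, 1)  # more even, further left
```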
Since the reasoning of the above paragraphs can be undertaken in PA, it follows that $M$ can see that its addition and multiplication are continuous with respect to the order topology of its final-digits order. Since $M$ is countable, the final-digits order of $M$ makes it a countable endless dense linear order, which by Cantor’s theorem is therefore order-isomorphic and hence homeomorphic to $\Q$. Thus, $M$ has a continuous presentation on the rational line $\Q$, as desired. $\Box$ The executive summary of the proof is: the arithmetic of the standard model $\N$ is continuous with respect to the final-digits topology, which is the same as the $2$-adic metric topology on $\N$, and this is homeomorphic to the rational line, because it is the order topology of the final-digits order, a definable endless dense linear order; applied in a nonstandard model $M$, this observation means the arithmetic of $M$ is continuous with respect to its rational line $\Q^M$, which for countable models is isomorphic to the actual rational line $\Q$, and so such an $M$ is continuously presentable upon $\Q$. Let me mention the following order, which it seems many people expect to use instead of the final-digits order as we defined it above. With this order, one in effect takes missing initial digits of a number as $0$, which is of course quite reasonable. The problem with this order, however, is that the order topology is not actually the final-digits topology. For example, the set of all numbers having final digits $110$ in this order has a least element, the number $6$, and so it is not open in the order topology. Worse, I claim that arithmetic is not continuous with respect to this order. For example, $1+1=2$, and $2$ has an open neighborhood consisting entirely of even numbers, but every open neighborhood of $1$ has both odd and even numbers, whose sums therefore will not all be in the selected neighborhood of $2$. 
Even the successor function $x\mapsto x+1$ is not continuous with respect to this order. Finally, let me mention that a version of the main theorem also applies to the integers $\newcommand\Z{\mathbb{Z}}\Z$, using the following order. Go to the article to read more.
I have a question about deriving Eq. (6.2.13) in Polchinski's string theory book volume I. It is claimed that Now consider the path integral with a product of tachyon vertex operators, $$A_{S_{2}}^{n}(k,\sigma)=\left\langle [e^{ik_{1}\cdot X(\sigma_{1})}]_{r}[e^{ik_{2}\cdot X(\sigma_{2})}]_{r}\cdots[e^{ik_{n}\cdot X(\sigma_{n})}]_{r}\right\rangle _{S_{2}}\tag{6.2.11}$$ This corresponds to $$J(\sigma)=\sum_{i=1}^{n}k_{i}\delta^{2}(\sigma-\sigma_{i})\tag{6.2.12}$$ The amplitude (6.2.6) then becomes $$A_{S_{2}}^{n}(k,\sigma)= iC_{S_{2}}^{X}(2\pi)^{d}\delta^{d}(\sum_{i}k_{i})\times ... $$ $...\exp(-\sum_{i<j} k_{i}\cdot k_{j}G'(\sigma_{i},\sigma_{j})-\frac{1}{2}\sum_{i=1}^{n}k_{i}^{2}G_{r}'(\sigma_{i},\sigma_{i}))\tag{6.2.13}$ where $C_{S_{2}}^{X}=X_{0}^{-d}(\det'\frac{-\nabla^{2}}{4\pi^{2}\alpha'})_{S_{2}}^{-d/2}$ and $G_{r}'(\sigma,\sigma')=G'(\sigma,\sigma')+\frac{\alpha'}{2}\ln d^{2}(\sigma,\sigma')$ Eq. (6.2.6) is $$Z[J]=i(2\pi)^{d}\delta^{d}(J_{0})(\det'\frac{-\nabla^{2}}{4\pi^{2}\alpha'})^{-d/2}\times ...$$ $...\exp(-\frac{1}{2}\int d^{2}\sigma d^{2}\sigma'J(\sigma)\cdot J(\sigma')G'(\sigma,\sigma'))\tag{6.2.6}$ My question is: where do $X_0^{-d}$ and $G_r'$ come from in Eq. (6.2.13)? I could try to plug (6.2.12) into (6.2.6) to see all other term appears, but not $X_0^{-d}$ nor $G_r'$. This post imported from StackExchange Physics at 2014-07-06 20:38 (UCT), posted by SE-user user26143
One way to overcome the problem of excessive extrapolation by least squares involves directly executing on the unconfoundedness assumption and nonparametrically matching subjects with similar covariate values together. As we shall see, least squares still plays an important role under this approach, but its scope is restricted to being a local one. Recall that unconfoundedness says that conditional on \(X\), treatment assignment is as good as random. This means that conditional on \(X\), we should be able to estimate the conditional average treatment effect \(\mathrm{E}[Y(1)-Y(0)|X]\) by simply computing the difference between the average outcomes of the treated and control subjects that share similar covariate values. Once the conditional average treatment effects have been identified and estimated, we should then be able to recover the unconditional average treatment effect by aggregating them appropriately. This is the matching estimator of Abadie and Imbens (2006) in a nutshell. More specifically, match each unit \(i\) in the sample with a unit \(m(i)\) in the opposite group, where $$m(i) = \mathrm{argmin}_{j: D_j \neq D_i} \|X_j - X_i\|.$$ Here \(\|X_j - X_i\|\) denotes some measure of distance between the covariate vectors \(X_j\) and \(X_i\). More precisely, it is defined as $$\|X_j - X_i\| = (X_j-X_i)' W (X_j-X_i).$$ By varying the positive-definite weighting matrix \(W\) we can obtain different measures of distance. One reasonable candidate for \(W\) is the inverse variance matrix \(\mathrm{diag}\{\hat{\sigma}_1^{-2}, \ldots, \hat{\sigma}_K^{-2}\}\), where \(\hat{\sigma}_k\) denotes the sample standard deviation of the \(k\)th covariate. Using this weighting matrix ensures that each covariate is put on a comparable scale before being aggregated. Once the matching is complete, we can estimate the subject-level treatment effect by calculating the difference in observed outcomes between the subject and its matched twin. 
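The nearest-neighbor matching step described above can be sketched in a few lines of NumPy. This is a minimal illustration with made-up helper names, not the internals of the Causalinference package, which wraps all of this for you:

```python
import numpy as np

def match_indices(X, D, W):
    """For each unit i, find the nearest unit in the opposite treatment
    group under the metric (Xj - Xi)' W (Xj - Xi)."""
    n = len(D)
    matches = np.empty(n, dtype=int)
    for i in range(n):
        # candidates are all units in the opposite treatment group
        candidates = np.flatnonzero(D != D[i])
        diffs = X[candidates] - X[i]
        dists = np.einsum('nk,kl,nl->n', diffs, W, diffs)
        matches[i] = candidates[np.argmin(dists)]
    return matches

# toy data: one covariate, three treated and three control units
X = np.array([[0.1], [0.9], [2.0], [0.0], [1.0], [2.2]])
D = np.array([1, 1, 1, 0, 0, 0])
W = np.diag(1.0 / X.var(axis=0))   # inverse-variance weighting
m = match_indices(X, D, W)         # m[i] is the matched twin of unit i
```

Each treated unit is paired with the control whose covariate value is closest on the standardized scale, and vice versa.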
Averaging over these individual treatment effect estimates gives an estimate of the overall average treatment effect. In Causalinference, we can implement this matching estimator and display the results by >>> causal.est_via_matching() >>> print(causal.estimates) Treatment Effect Estimates: Matching Est. S.e. z P>|z| [95% Conf. int.] -------------------------------------------------------------------------------- ATE 14.245 1.038 13.728 0.000 12.211 16.278 ATC 10.288 1.815 5.669 0.000 6.731 13.845 ATT 16.796 0.940 17.866 0.000 14.953 18.638 While the basic matching estimator is theoretically sound, as we see above its actual performance seems to be lacking, as its ATE estimate of 14.245 still seems quite far from the true value of 10. One reason is that in practice, the matching of one subject to another is rarely perfect. To the extent that a matching discrepancy exists, i.e., that \(X_i\) and \(X_{m(i)}\) are not equal, the matching estimator of the subject-level treatment effect will generally be biased. It turns out it is possible to correct for this bias. In particular, one can show that the unit-level bias for a treated unit is equal to $$\mathrm{E}[Y(0)|X=X_i] - \mathrm{E}[Y(0)|X=X_{m(i)}].$$ A popular way of adjusting for this bias is to assume a linear specification for the conditional expectation function of \(Y(0)\) given \(X\), and approximate the above term by the inner product of the matching discrepancy and slope coefficient from an ancillary regression. The same principle of course applies for control units. Although it might seem like we are back to assuming a linear regression function as was the case with OLS, the role played by the linear approximation is quite different here. In the OLS case, we are using the linearity assumption to extrapolate globally across the covariate space. In the current scenario, however, the linear approximation is only applied locally, to matched units whose covariate values were already quite similar. 
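The bias-correction idea can be made concrete for a single treated unit. The sketch below (hypothetical function name, not the package's internals) fits the ancillary regression of outcomes on covariates among the controls and subtracts the slope times the matching discrepancy from the raw matched difference; the controls here are constructed with an exactly linear conditional expectation so the correction is exact:

```python
import numpy as np

def bias_corrected_effect(X_i, Y_i, X_m, Y_m, X_controls, Y_controls):
    """Correct the raw matched difference Y_i - Y_m for a treated unit i
    by beta' (X_i - X_m), where beta comes from an ancillary OLS fit of
    Y(0) on X among the control units."""
    Z = np.column_stack([np.ones(len(X_controls)), X_controls])
    coef, *_ = np.linalg.lstsq(Z, Y_controls, rcond=None)
    beta = coef[1:]                      # slope coefficients only
    return (Y_i - Y_m) - beta @ (X_i - X_m)

# controls with an exactly linear Y(0) = 2x, so the correction is exact
Xc = np.array([[0.0], [0.5], [0.8], [1.5]])
Yc = 2.0 * Xc[:, 0]
# treated unit at x = 1.0 with Y(1) = 12, matched to the control at x = 0.8
effect = bias_corrected_effect(np.array([1.0]), 12.0,
                               np.array([0.8]), 1.6, Xc, Yc)
```

The raw matched difference is 10.4; removing the estimated bias 2.0 × (1.0 − 0.8) = 0.4 recovers the true unit-level effect of 10.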
To invoke bias adjustment in Causalinference, we simply supply True to the optional argument bias_adj, as follows: >>> causal.est_via_matching(bias_adj=True) >>> print(causal.estimates) Treatment Effect Estimates: Matching Est. S.e. z P>|z| [95% Conf. int.] -------------------------------------------------------------------------------- ATE 9.624 0.245 39.354 0.000 9.145 10.103 ATC 9.642 0.270 35.776 0.000 9.114 10.170 ATT 9.606 0.318 30.159 0.000 8.981 10.230 As we can see above, the resulting ATE estimate is now much closer to the true ATE of 10. In addition to bias adjustments, est_via_matching accepts two other optional parameters worth mentioning. The first is weights, which allows users to supply their own positive-definite weighting matrix to use for calculating distances between covariate vectors. The second is matches, which allows users to implement multiple matching by supplying an integer that is greater than 1. Setting matches=3, for instance, will result in having the three closest units matched to a given subject. In general, increasing this number introduces biases (since less ideal matches are being included), but lowers variance (as the counterfactual estimates are less dependent on any single unit). Typically it is advised that the number of matches be kept under 4, though there are no hard-and-fast rules. References Abadie, A. & Imbens, G. (2006). Large sample properties of matching estimators for average treatment effects. Econometrica, 74, 235-267.
I need to find the following asymptotic expansion as $t\rightarrow \infty$ : $\int_{0}^{e^{-1}}e^{-t\sqrt{-y\ln y}}\,{\rm d}y. $ Introducing the new variable (related to the left branch of the Lambert function) $u=-y\ln y\Longleftrightarrow y=\exp\left(W_{-1}\left(-u\right)\right)$, with ${\rm d}y=-\frac{{\rm d}u}{1+W_{-1}\left(-u\right)}$, we have: $\int_{0}^{e^{-1}}e^{-t\sqrt{-y\ln y}}\,{\rm d}y=-\int_{0}^{e^{-1}}\frac{e^{-t\sqrt{u}}}{1+W_{-1}\left(-u\right)}\,{\rm d}u$. Unfortunately, from there I cannot say much more. Numerically, the integral seems to be quite close to $1 / (t^2\ln t)$ (cf. Mathematica).
Is it possible to find all polynomials of the form $ an^2 + bn +c $ where a,b, and c are integers and such that $$ a+b+c \equiv 31 \pmod{54} $$ $$ 4a+2b+c \equiv 3 \pmod{54} $$ $$ 9a+3b+c \equiv 11 \pmod{54} $$ That you're making them coefficients of a polynomial is irrelevant: the problem is "solve this linear system of three equations in three variables". That you're solving systems of equations modulo 54 rather than systems of equations of real numbers doesn't change things much: the only real difference is how to compute the inverse of a number, and that some non-zero numbers can fail to be invertible. Solving the first equation gives $$ c \equiv 31 - a - b $$ plugging into the other equations gives $$ 3a + b \equiv -28 \equiv 26 $$ $$ 8a + 2b \equiv -20 \equiv 34 $$ Solving the first of these gives $$ b \equiv 26 - 3a $$ plugging into the other equation gives $$ 2a \equiv -18 \equiv 36 $$ Solving this is a little trickier: the solution space to $$ gux \equiv gv \pmod{gw}$$ is the same as the solution space to $$ ux \equiv v \pmod w$$ (Note if the equation was $gux \equiv v \pmod{gw}$ where $v$ is not divisible by $g$, then there would be no solutions for $x$) So to solve $$ 2a \equiv 36 \pmod{54} $$ you factor out the 2 $$ a \equiv 18 \pmod{27} $$ $$ a \equiv 18, 45 \pmod{54} $$ Now backsolving gives $$ a \equiv 18 \pmod{54} \qquad b \equiv 26 \pmod{54} \qquad c \equiv 41 \pmod{54} $$ $$ a \equiv 45 \pmod{54} \qquad b \equiv 53 \pmod{54} \qquad c \equiv 41 \pmod{54} $$ Of course, I could have used Gaussian elimination and its variants instead. Or I could have solved for $a$ first, or any other variation. The big thing I didn't demonstrate is the Chinese remainder theorem approach.
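With a modulus this small, the whole system can also be brute-forced in a few lines, which is a quick sanity check on the two solutions found above (Python, not part of the original post):

```python
# enumerate all residue triples (a, b, c) mod 54 and keep those
# satisfying the three congruences simultaneously
solutions = [
    (a, b, c)
    for a in range(54) for b in range(54) for c in range(54)
    if (a + b + c) % 54 == 31
    and (4 * a + 2 * b + c) % 54 == 3
    and (9 * a + 3 * b + c) % 54 == 11
]
```

The search confirms there are exactly the two solutions (18, 26, 41) and (45, 53, 41) modulo 54.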
$54 = 2 \cdot 3^3$; Often, it's easier to solve a problem modulo $2$ then solve the problem modulo $3^3$, then use the Chinese remainder theorem to combine the solutions to modulo $54$ than it is to solve it directly. So you want to solve the system of linear equations $$\left(\begin{matrix}1 & 1 & 1\\ 4 & 2 & 1\\ 9 & 3 & 1\end{matrix}\right) \left(\begin{matrix} a\\ b\\ c \end{matrix}\right) = \left(\begin{matrix}31\\ 3\\ 11 \end{matrix}\right)$$ over the ring $\mathbb Z/54\mathbb Z$. By the chinese remainder theorem, it is enough to solve this over $\mathbb Z/ 2\mathbb Z$ and $\mathbb Z/27 \mathbb Z$ separately. Using the Gaussian algorithm over $\mathbb Z/2\mathbb Z$, we get $$\left(\begin{array}{ccc|c} 1 & 1 & 1 & 1\\ 0 & 0 & 1 & 1\\ 1 & 1 & 1 & 1\end{array}\right) \leadsto \left(\begin{array}{ccc|c} 1 & 1 & 0 & 0\\ 0 & 0 & 1 & 1 \end{array}\right),$$ whose solutions are $(0,0,1)$ and $(1,1,1)$. Over $\mathbb Z/27 \mathbb Z$ the matrix is invertible (its determinant is $-2$ which is a unit modulo $27$), so we get a unique solution, namely $(18,26,14)$ by the Gaussian algorithm or by computing the inverse of the matrix. Putting the solutions together with the Chinese Remainder Theorem gives $$(a,b,c) \equiv (18,26,41) \mod 54$$ (corresponding to $(0,0,1) \bmod 2$) and $$(a,b,c) \equiv (45,53,41) \mod 54$$ (corresponding to $(1,1,1) \bmod 2$). The fact we're searching for a polynomial allows us to use polynomial interpolation methods. We seek a polynomial such that $$ f(1) \equiv 31 \qquad f(2) \equiv 3 \qquad f(3) \equiv 11 $$ One trick is to split things apart: first, we find a polynomial $$g_1(x) = (x-2)(x-3) $$ which satisfies $$ g_1(1) = 2 \qquad g_1(2) = 0 \qquad g_1(3) = 0 $$ similarly, $g_2(x) = (x-1)(x-3)$ and $g_3(x) = (x-1)(x-2)$ satisfy $$ g_2(1) = 0 \qquad g_2(2) = -1 \qquad g_2(3) = 0 $$ $$ g_3(1) = 0 \qquad g_3(2) = 0 \qquad g_3(3) = 2 $$ Now, it's easy to see how to combine the values! 
$$ \begin{align} f(x) &= \frac{31}{2} g_1(x) - 3 g_2(x) + \frac{11}{2} g_3(x) \\&= 18 x^2 - 82x + 95 \end{align} $$ We lucked out: the coefficients actually turned out to be integers, so this polynomial definitely works. There is a "problem" here in that I'm dividing by $2$, but that's not really allowed modulo $54$. As you can see, we missed the other set of solutions where the leading coefficient is $45$ modulo $54$. I'm pretty sure there's a way to account for this problem, but I don't know how to do it systematically. It probably involves the Chinese remainder theorem, and treating the modulo $2$ case with a different method (which is easy because $2$ is small). I believe there are other ways this approach can go badly as well. We seek a quadratic $\rm\,f(x)\,$ with $\rm\,f(1) \equiv 31,\ f(2) \equiv 3,\ f(3)\equiv 11\:\ (mod\ 54).$ Interpolating gives one solution $\rm\: f_1(x) = 3 + (x\!-\!2)(8 + 18(x\!-\!3)) \equiv \color{#C00}{18 x^2 + 26 x - 13}.\: $ If $\rm\,f_2(x)\,$ is another solution then $\rm\,h = f_1-f_2\,$ has degree $\le 2$ but has three distinct roots $1,2,3$ mod $54$. These remain distinct roots mod $3,\,$ thus $\rm\,h \equiv 0\:\ (mod\ 3),\:$ being a polynomial over a field with more roots than its degree. So $\rm\,h = 3h_1.\,$ Similarly we deduce $\rm\, h_1\! = 3h_2,\,$ then $\rm\,h_2\! = 3g,\:$ so $\rm\:h = 27g.\,$ So $\rm\, mod\ 2\!:\,$ $\rm\,h,\,$ so $\rm\,g,\,$ has roots $\rm\,0,1,\,$ hence $\rm\ g\equiv x(x\!-\!1).\,$ Hence $\rm\ h = 27g = 27(x(x\!-\!1)+2g')\equiv 27(x^2\!-x)\:\ (mod\ 54).\,$ Therefore the only other solution is $\rm\: f_2 = f_1\!+h = f_1\! + 27x^2\!-27x \equiv \color{#C00}{-9x^2\!-x-13}\:\ (mod\ 54).$
Given the linear Diophantine equation $$ax+by=c $$ I have to show that it has a solution if and only if $\gcd(a,b)$ divides $c$. $$1)\Rightarrow $$ Let $m=\gcd(a,b)$; then $$a'x+b'y=c'$$ where $\gcd(a',b')=1$, but how can I continue from here? $$2) \Leftarrow $$ I don't even know how to start. I've never seriously studied number theory until now, and I'm self-studying, so this is difficult for me. Any help will be appreciated. Should I use the division theorem? I mean that for $a,b\neq0 \in \mathbb{Z}$ there exist numbers $q$ and $r$ with $0\leq r <|b|$ such that $a=bq+r$.
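For intuition, the $\Leftarrow$ direction is constructive: the extended Euclidean algorithm produces $u, v$ with $au + bv = \gcd(a,b)$, and scaling by $c/\gcd(a,b)$ gives a solution. A small sketch (hypothetical function names, just to illustrate the argument):

```python
def ext_gcd(a, b):
    """Return (g, u, v) with a*u + b*v = g = gcd(a, b)."""
    if b == 0:
        return (a, 1, 0)
    g, u, v = ext_gcd(b, a % b)
    return (g, v, u - (a // b) * v)

def solve_diophantine(a, b, c):
    """One integer solution (x, y) of a*x + b*y = c,
    or None when gcd(a, b) does not divide c."""
    g, u, v = ext_gcd(a, b)
    if c % g != 0:
        return None          # the => direction: no solution exists
    k = c // g
    return (u * k, v * k)    # the <= direction: an explicit solution

sol = solve_diophantine(6, 15, 9)       # gcd(6, 15) = 3 divides 9
none = solve_diophantine(4, 6, 7)       # gcd(4, 6) = 2 does not divide 7
```

The two calls exhibit both directions: a concrete solution when the gcd divides $c$, and no solution otherwise.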
In Quantum Field Theory and the Jones Polynomial, Witten showed how to get the Jones polynomial as a Wilson loop in Chern-Simons theory. The Chern-Simons Lagrangian is $$ \mathcal{L} = \frac{k}{4\pi} \int_M \mathrm{Tr}(A \wedge dA + \frac{2}{3} A \wedge A \wedge A )$$ Here you're integrating over a 3-manifold (e.g. $M = S^3$), but you're also integrating over the moduli space of connections $A$ on $M$, so $A$ takes values in some Lie algebra, e.g. $\mathfrak{g} = \mathfrak{su}(2)$. Based on this information one can calculate the partition function for $M = S^3$, $\mathfrak{g}=\mathfrak{su}(2)$ to be $$ Z(S^3) = \sqrt{\frac{2}{k+2}}\sin \frac{\pi}{k+2} $$ In this theory, one can also define "Wilson loops" over closed curves in your 3-manifold, i.e. knots: $$ W_R(C) = \mathrm{Tr}_R\left[ P \exp \int_C A \cdot dx \right]$$ Remember that if we exponentiate an element of the Lie algebra $A \in \mathfrak{g}$, then $e^A$ is going to be an element of the Lie group $G$. So $e^{\int_C A\, dx} \in G$. Proving that the Wilson loops give you Jones polynomials involves the Atiyah-Singer index theorem and some surgery theory of manifolds. Wilson loops can be used to derive Khovanov homology. Lately, in the physics literature, there is a tendency to derive things from 6-dimensional gauge theory and "dimensionally" reduce down to lower dimensions. Unfortunately I am in a hurry, and I refer you to Section 6, pp. 120-123, for the definition of "monodromy defect", which I can fill in later: In gauge theory with gauge group G on any manifold X, let U be a submanifold of codimension 2. Let C be a conjugacy class in G. Then one considers gauge theory on X\U with the condition that the gauge fields have a monodromy around U that is in the conjugacy class C. A surface operator supported on U is defined by asking in addition that the fields should have the mildest type of singularity consistent with this monodromy or (depending on the context) by imposing additional conditions on the singular behavior along U.
We will call codimension two operators of this sort monodromy defects. So in gauge theory, there are line operators and sometimes surface operators. Since Chern-Simons theory is 3-dimensional, codimension 2 is $3-2=1$-dimensional. Witten wants to re-derive some properties of knots using these operators instead. This post imported from StackExchange MathOverflow at 2015-04-04 12:37 (UTC), posted by SE-user john mangual
I want to calculate the limit below without using L'Hôpital's rule: $$\lim_{x\rightarrow0} \frac{e^x-1}{\sin(2x)}$$ Using the facts that $$\lim _{ x\rightarrow 0 }{ \frac { { e }^{ x }-1 }{ x } =1 } \\ \lim _{ x\rightarrow 0 }{ \frac { \sin { x } }{ x } =1 } $$ we can conclude that $$\lim _{ x\rightarrow 0 }{ \frac { { e }^{ x }-1 }{ \sin { 2x } } } =\frac { 1 }{ 2 } \lim _{ x\rightarrow 0 }{ \frac { { e }^{ x }-1 }{ x } \frac { 2x }{ \sin { 2x } } } =\frac { 1 }{ 2 } $$ As an alternative and admittedly more mechanical approach, you can series expand top and bottom: $$\lim _{ x\rightarrow 0 }{ \frac { { e }^{ x }-1 }{ \sin2x }} =\lim _{ x\rightarrow 0 }\frac{1+x+O(x^2)-1}{2x+O(x^3)}=\frac12$$ This will work for most fractions if the variable is tending to $0$. In fact, L'Hôpital's rule essentially comes from series expanding $\frac{f(x)}{g(x)}$, so in a sense we are doing the same thing. Equivalents: $\;\mathrm e^x-1\sim_0 x$, $\;\sin 2x\sim_0 2x$, so $\;\dfrac{\mathrm e^x-1}{\sin 2x}\sim_0\dfrac{x}{2x}=\dfrac12.$
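None of the arguments above needs it, but a quick numerical check (not a proof) also confirms the value $\tfrac12$:

```python
import math

def f(x):
    """The expression (e^x - 1) / sin(2x) whose limit at 0 is sought."""
    return (math.exp(x) - 1.0) / math.sin(2.0 * x)

# evaluate at x = 1e-3, 1e-4, 1e-5, 1e-6 and watch the values approach 1/2
values = [f(10.0 ** -k) for k in range(3, 7)]
```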
PartialStructureFactor¶ class PartialStructureFactor(md_trajectory, start_time=None, end_time=None, pair_selection=None, maximum_q_value=None, q_resolution=None, cutoff_radius=None, resolution=None, time_resolution=None, info_panel=None)¶ Constructor for the PartialStructureFactor object. Parameters: md_trajectory (MDTrajectory | AtomicConfiguration) – The MDTrajectory or configuration to calculate the partial structure factor for. start_time (PhysicalQuantity of type time) – The start time. Default: 0.0 * fs end_time (PhysicalQuantity of type time) – The end time. Default: The last frame time pair_selection (sequence) – Only consider interactions between these groups of atoms. A sequence containing two of the following types: Element, tag name, list of indices, or None. Default: all interactions maximum_q_value (PhysicalQuantity of type inverse length) – The maximum scattering vector length. Default: 15.0 / Angstrom q_resolution (PhysicalQuantity of type inverse length) – The binning of the scattering vectors. Default: 0.05 / Angstrom cutoff_radius (PhysicalQuantity of type length) – Upper limit on sampled distances. Default: Half the diagonal of the unit cell resolution (PhysicalQuantity of type length) – The binning of the radial distribution function. Default: 0.05 * Angstrom time_resolution (PhysicalQuantity of type time) – The time interval between snapshots in the MD trajectory that are included in the analysis. info_panel (InfoPanel (Plot2D)) – Info panel to show the calculation progress. Default: No info panel data()¶ Return the partial structure factor. qRange()¶ Return the list of scattering vector magnitudes.
Usage Examples¶

md_trajectory = nlread('alumina_trajectory.nc')[-1]
partial_sq = PartialStructureFactor(md_trajectory, pair_selection=[Aluminum, Aluminum])

# Get the q-values and the partial structure factor.
q_values = partial_sq.qRange().inUnitsOf(Angstrom**-1)
s_q = partial_sq.data()

# Plot the data using pylab.
import pylab
pylab.plot(q_values, s_q, label='Al-Al structure factor')
pylab.xlabel('q (1/Ang)')
pylab.ylabel('S(q)')
pylab.legend()
pylab.show()

Notes¶ This object calculates the isotropic (i.e. averaged over all scattering angles) PartialStructureFactor. It is therefore predominantly aimed at the characterization of amorphous materials. The partial structure factor is calculated from the partial radial distribution function \(g_{\alpha,\beta}(r)\), as described in Ref. [mGJ02]. If no element pair is selected, then all atoms are treated equally and the full radial distribution function is used to calculate \(S(q)\). If the pair_selection parameter contains partially overlapping lists, the delta-function in the equation for \(S(q)\) is evaluated as \(\sqrt{\frac{ N_a N_b }{N^2}}\). The cutoff_radius parameter determines the range of the radial distribution function. This parameter might have an influence on the calculated structure factor at small scattering vectors. [mGJ02] G. Gutiérrez and B. Johansson. Molecular dynamics study of structural properties of amorphous Al2O3. Phys. Rev. B, 65:104202, Feb 2002. doi:10.1103/PhysRevB.65.104202.
In an effort to find a proof that builds intuition for students in my proof-writing course, I devised the following. It seems too easy to me, so I am worried something is wrong. The class uses very naive set theory, so I realize some axioms are needed, but that is way too hard for my students. I would just like to have this result; I used to simply state it, but then I thought of this proof. We assume that $\mathbb{N}\times \mathbb{N}$ is countable. Suppose $A_1$, $A_2$, $A_3$,... is a sequence of countably infinite pairwise disjoint sets. Write $A_1=\{a_{11}, a_{12}, a_{13},...\}$, $A_2=\{a_{21}, a_{22}, a_{23},...\}$,...,$A_n=\{a_{n1}, a_{n2}, a_{n3},...\}$. Let $A=\bigcup_{i\in \mathbb{N}}A_i$. Define $f:A\rightarrow \mathbb{N}\times \mathbb{N}$ by the rule $f(a_{ij})=(i,j)$. Since the $A_i$'s are disjoint, $f$ is a function. That it is 1-1 and onto is easily proved. Thus, $A$ is countable. If we drop the pairwise disjoint condition, then for every $A_i$ and $A_j$ with $A_i\bigcap A_j \neq \emptyset$, replace $A_j$ with $A_j-(A_i\bigcap A_j)$. I guess this is where I may be a little too hand-wavy.
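The assumed fact that $\mathbb{N}\times\mathbb{N}$ is countable can itself be made concrete for students with the Cantor pairing function, an explicit bijection $\mathbb{N}\times\mathbb{N}\to\mathbb{N}$ that walks the diagonals $i+j = 0, 1, 2, \dots$. A small demonstration:

```python
def cantor_pair(i, j):
    """Bijection N x N -> N: enumerate the diagonals i + j = 0, 1, 2, ..."""
    return (i + j) * (i + j + 1) // 2 + j

# every pair in a 50 x 50 grid gets a distinct code (injectivity),
# and the codes cover an initial segment of N (surjectivity in the limit)
codes = {cantor_pair(i, j) for i in range(50) for j in range(50)}
```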
A very good way to generate normal variables is to start with pairs of uniform variables and apply a rejection method. If you're considering these normal variables as input to a rejection method, then why not start with uniform variables, so you only have to do one level of rejection? You could generate a random point uniformly distributed within the volume of the sphere of radius $R_1$ (using three uniform variables and a rejection method) and then move that point further from the origin by applying a transformation of the form $r' = f(r)$ ($r$ the original distance from the origin, $r'$ the distance after moving the point) so that the result is uniformly distributed in the spherical shell. The transformation function $f$ should satisfy $$\int_r^{R_1} t^2 dt = k \int_{f(r)}^{R_1} t^2 dt$$ where $k = \dfrac{R_1^3}{R_1^3-R_2^3}$ is the ratio of the volume of the entire sphere to the volume of the spherical shell. I get the relationship $$(f(r))^3 = \frac 1k r^3 + \frac{k-1}{k} R_1^3 = \frac{R_1^3-R_2^3}{R_1^3} r^3 + R_2^3 $$ by computing both integrals and rearranging terms. Note that $f(0) = R_2$ and $f(R_1) = R_1$, as desired. Another way of writing this is $$\frac{f(r)}{r} = \left(\frac{R_1^3-R_2^3}{R_1^3} + \frac{R_2^3}{r^3} \right)^{\frac13}.$$ So the procedure is to generate three uniform variables, $U_1$, $U_2$, and $U_3$ (between $-R_1$ and $R_1$), and compute $r = \sqrt{U_1^2 + U_2^2 + U_3^2}.$ If $r \leq R_1$, the desired point within the spherical shell is $$(x,y,z) = \left(\frac{R_1^3-R_2^3}{R_1^3} + \frac{R_2^3}{r^3} \right)^{\frac13} (U_1, U_2, U_3).$$ If $r > R_1$ you reject the three variables and try again. You will be able to use $\frac\pi6$ of the triples you generate, which is worse than the $\frac\pi4$ of pairs of uniform variables that you would be able to use when generating pairs of normal variables by the usual rejection method, but a lot better than rejecting everything inside the shell if the shell is thin.
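The procedure above transcribes directly into code (a sketch with hypothetical names; $R_2 = $ inner radius, $R_1 = $ outer radius):

```python
import math
import random

def sample_shell(r_inner, r_outer, rng=random):
    """Uniform point in the spherical shell r_inner <= |x| <= r_outer:
    rejection-sample a point in the full ball, then push it outward
    with the radial map f derived above."""
    while True:
        u = [rng.uniform(-r_outer, r_outer) for _ in range(3)]
        r = math.sqrt(sum(t * t for t in u))
        if 0.0 < r <= r_outer:          # accept points inside the sphere
            break
    # f(r)/r = ((R1^3 - R2^3)/R1^3 + R2^3/r^3)^(1/3)
    scale = ((r_outer**3 - r_inner**3) / r_outer**3
             + r_inner**3 / r**3) ** (1.0 / 3.0)
    return tuple(scale * t for t in u)

random.seed(0)
points = [sample_shell(1.0, 2.0) for _ in range(500)]
radii = [math.sqrt(x*x + y*y + z*z) for x, y, z in points]
```

Every sampled radius lands in $[R_2, R_1]$, since $f(0^+) = R_2$ and $f(R_1) = R_1$.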
Prove that the following inequality holds for $x\ge0$ : $$\sin(x) \cos(x) \geq x-x^3$$ This is an inequality I often met during my high school classes, and I also used it for this problem yesterday. I'm interested in a non-calculus proof, if one is possible. Proof involving calculus: let's consider $$f(x) = \sin(x) \cos(x)-x+x^3$$ then $$f'(x) = 3 x^2-2\sin^2(x)\tag1$$ $$x\ge \sin(x)\tag2$$ From $(1)$ and $(2)$ we immediately notice that $f'(x)\ge0$, and taking into account that $f(0)=0$ we may conclude that the inequality holds. Thanks.
The ODE I'm trying to solve is: $y''+2y'+2y = 3$. I've never tried to solve an ODE with complex roots until this problem, so it's a challenge for me. These are my steps for getting $r$: $$a_2r^2 +a_1r+a_0=0$$ $$r^2+2r+2=0$$ $$(r+1)^2 = -1$$ $$r = -1 \pm i$$ $Q_2(x) = \frac32$ and these are my last few steps plugging everything into the final equation: $$y = c_1e^{r_1x}+c_2e^{r_2x}+Q_k(x)$$ $$y = c_1e^{(i-1)x} + c_2e^{(-i-1)x} + \frac32$$ $$y = c_1 e^{-x} (\cos{x}+i\sin{x})+ c_2 e^{-x} (\cos{x}-i\sin{x})+\frac32$$ I think somehow the imaginary answers should cancel each other out, but they don't. Wolfram Alpha gives $$y = c_1 e^{-x}\sin{x} + c_2e^{-x}\cos{x}+\frac32$$ Where did I go wrong?
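As a numerical sanity check that Wolfram Alpha's real-valued family really does solve the ODE, one can plug $y = e^{-x}(c_1 \sin x + c_2 \cos x) + \tfrac32$ into $y'' + 2y' + 2y - 3$ using central differences with arbitrarily chosen constants (a sketch, not part of the original question):

```python
import math

def y(x, c1=2.0, c2=-3.0):
    """Wolfram Alpha's general solution with arbitrary constants."""
    return math.exp(-x) * (c1 * math.sin(x) + c2 * math.cos(x)) + 1.5

def residual(x, h=1e-4):
    """y'' + 2y' + 2y - 3, approximated with central differences."""
    yp = (y(x + h) - y(x - h)) / (2.0 * h)
    ypp = (y(x + h) - 2.0 * y(x) + y(x - h)) / (h * h)
    return ypp + 2.0 * yp + 2.0 * y(x) - 3.0

residuals = [abs(residual(x)) for x in (0.0, 0.7, 1.3, 2.5)]
```

The residual vanishes (up to discretization error) at every test point, confirming the solution.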
\begin{equation*} \int_{-a}^{a}e^{-\rho x}P_n \left( \frac{x}{a} \right) dx = (-1)^n\sqrt{\frac{2\pi a}{\rho}}J_{n+1/2}(a\rho) ~~~~~~\text{for $a>0$} \end{equation*} I am just wondering whether this formula is correct or not? Thanks in advance!
Modeling of Materials in Wave Electromagnetics Problems Whenever we are solving a wave electromagnetics problem in COMSOL Multiphysics, we build a model that is composed of domains and boundary conditions. Within the domains, we use various material models to represent a wide range of substances. However, from a mathematical point of view, all of these different materials end up being handled identically within the governing equation. Let’s take a look at these various material models and discuss when to use them. What Equations Are We Solving? Here, we will speak about the frequency-domain form of Maxwell’s equations in the Electromagnetic Waves, Frequency Domain interface available in the RF Module and the Wave Optics Module. The information presented here also applies to the Electromagnetic Waves, Beam Envelopes formulation in the Wave Optics Module. Under the assumption that material response is linear with field strength, we formulate Maxwell’s equations in the frequency domain, so the governing equation can be written as: \nabla \times \left( \mu_r^{-1} \nabla \times \mathbf{E} \right) - \frac{\omega^2}{c_0^2} \left( \epsilon_r - \frac{j \sigma}{\omega \epsilon_0} \right) \mathbf{E} = 0 This equation solves for the electric field, \mathbf{E}, at the operating (angular) frequency \omega = 2 \pi f (c_0 is the speed of light in vacuum). The other inputs are the material properties \mu_r, the relative permeability; \epsilon_r, the relative permittivity; and \sigma , the electrical conductivity. All of these material inputs can be positive or negative, real or complex-valued numbers, and they can be scalar or tensor quantities. These material properties can vary as a function of frequency as well, though it is not always necessary to consider this variation if we are only looking at a relatively narrow frequency range. Let us now explore each of these material properties in detail. Electrical Conductivity The electrical conductivity quantifies how well a material conducts current — it is the inverse of the electrical resistivity.
The material conductivity is measured under steady-state (DC) conditions, and we can see from the above equation that as the frequency increases, the effective resistivity of the material increases. We typically assume that the conductivity is constant with frequency, and later on we will examine different models for handling materials with frequency-dependent conductivity. Any material with non-zero conductivity will conduct current in an applied electric field and dissipate energy as a resistive loss, also called Joule heating. This will often lead to a measurable rise in temperature, which will alter the conductivity. You can enter any function or tabular data for variation of conductivity with temperature, and there is also a built-in model for linearized resistivity. Linearized Resistivity is a commonly used model for the variation of conductivity with temperature, given by: \sigma = \frac{1}{\rho_0 \left( 1 + \alpha \left( T - T_{ref} \right) \right)} where \rho_0 is the reference resistivity, T_{ref} is the reference temperature, and \alpha is the resistivity temperature coefficient. The spatially-varying temperature field, T, can either be specified or computed. Conductivity is entered as a real-valued number, but it can be anisotropic, meaning that the material’s conductivity varies in different coordinate directions. This is an appropriate approach if you have, for example, a laminated material in which you do not want to explicitly model the individual layers. You can enter a homogenized conductivity for the composite material, which would be either experimentally determined or computed from a separate analysis. Within the RF Module, there are two other options for computing a homogenized conductivity: Archie’s Law for computing effective conductivity of non-conductive porous media filled with conductive liquid and a Porous Media model for mixtures of materials. Archie’s Law is a model typically used for the modeling of soils saturated with seawater or crude oil, fluids with relatively higher conductivity compared to the soil.
Porous Media refers to a model that has three different options for computing an effective conductivity for a mixture of up to five materials. First, the Volume Average, Conductivity formulation is: \sigma = \sum_i \theta_i \sigma_i where \theta_i is the volume fraction of each material. This model is appropriate if the material conductivities are similar. If the conductivities are quite different, the Volume Average, Resistivity formulation is more appropriate: \frac{1}{\sigma} = \sum_i \frac{\theta_i}{ \sigma_i} Lastly, the Power Law formulation will give a conductivity lying between the other two formulations: \sigma = \prod_i \sigma_i^{\theta_i } These models are all only appropriate to use if the length scale over which the material properties change is much smaller than the wavelength. Relative Permittivity The relative permittivity quantifies how well a material is polarized in response to an applied electric field. It is typical to call any material with \epsilon_r>1 a dielectric material, though even vacuum (\epsilon_r=1) can be called a dielectric. It is also common to use the term dielectric constant to refer to a material’s relative permittivity. A material’s relative permittivity is often given as a complex-valued number, where the negative imaginary component represents the loss in the material as the electric field changes direction over time. Any material experiencing a time-varying electric field will dissipate some of the electrical energy as heat. Known as dielectric loss, this results from the change in shape of the electron clouds around the atoms as the electric fields change. Dielectric loss is conceptually distinct from the resistive loss discussed earlier; however, from a mathematical point of view, they are actually handled identically — as a complex-valued term in the governing equation.
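Returning to the three porous-media mixing rules above: for a two-material mixture they are just the weighted arithmetic, harmonic, and geometric means of the conductivities, which is easy to see numerically (a sketch; the σ and θ values are made up, not a real material):

```python
def volume_average(sigmas, thetas):
    """Volume Average, Conductivity: weighted arithmetic mean."""
    return sum(t * s for t, s in zip(thetas, sigmas))

def resistivity_average(sigmas, thetas):
    """Volume Average, Resistivity: weighted harmonic mean."""
    return 1.0 / sum(t / s for t, s in zip(thetas, sigmas))

def power_law(sigmas, thetas):
    """Power Law: weighted geometric mean."""
    out = 1.0
    for t, s in zip(thetas, sigmas):
        out *= s ** t
    return out

sigmas, thetas = [1.0, 100.0], [0.5, 0.5]
va = volume_average(sigmas, thetas)
ra = resistivity_average(sigmas, thetas)
pl = power_law(sigmas, thetas)
```

For very dissimilar conductivities the three rules differ by orders of magnitude, with the power-law value always lying between the other two.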
Keep in mind that COMSOL Multiphysics follows the convention that a negative imaginary component (a positive-valued electrical conductivity) will lead to loss, while a positive imaginary component (a negative-valued electrical conductivity) will lead to gain within the material. There are seven different material models for the relative permittivity. Let’s take a look at each of these models. Relative Permittivity is the default option for the RF Module. A real- or complex-valued scalar or tensor value can be entered. The same Porous Media models described above for the electrical conductivity can be used for the relative permittivity. Refractive Index is the default option for the Wave Optics Module. You separately enter the real and imaginary part of the refractive index, called n and k, and the relative permittivity is \epsilon_r=(n-jk)^2. This material model assumes zero conductivity and unit relative permeability. Loss Tangent involves entering a real-valued relative permittivity, \epsilon_r', and a scalar loss tangent, \delta. The relative permittivity is computed via \epsilon_r=\epsilon_r'(1-j \tan \delta), and the material conductivity is zero. Dielectric Loss is the option for entering the real and imaginary components of the relative permittivity \epsilon_r=\epsilon_r'-j \epsilon_r''. Be careful to note the sign: Entering a positive-valued real number for the imaginary component \epsilon_r'' when using this interface will lead to loss, since the multiplication by -j is done within the software. For an example of the appropriate usage of this material model, please see the Optical Scattering off of a Gold Nanosphere tutorial. The Drude-Lorentz Dispersion model is a material model that was developed based upon the Drude free electron model and the Lorentz oscillator model. The Drude model (when \omega_0=0) is used for metals and doped semiconductors, while the Lorentz model describes resonant phenomena such as phonon modes and interband transitions.
With the sum term, the combination of these two models can accurately describe a wide array of solid materials. It predicts the frequency-dependent variation of complex relative permittivity as: \epsilon_r = \epsilon_{\infty} + \sum_k \frac{f_k\omega_p^2}{\omega_{0k}^2-\omega^2+i\Gamma_k \omega} where \epsilon_{\infty} is the high-frequency contribution to the relative permittivity, \omega_p is the plasma frequency, f_k is the oscillator strength, \omega_{0k} is the resonance frequency, and \Gamma_k is the damping coefficient. Since this model computes a complex-valued permittivity, the conductivity inside of COMSOL Multiphysics is set to zero. This approach is one way of modeling frequency-dependent conductivity. The Debye Dispersion model is a material model that was developed by Peter Debye and is based on polarization relaxation times. The model is primarily used for polar liquids. It predicts the frequency-dependent variation of complex relative permittivity as: \epsilon_r = \epsilon_{\infty} + \sum_k \frac{\Delta \epsilon_k}{1+i\omega \tau_k} where \epsilon_{\infty} is the high-frequency contribution to the relative permittivity, \Delta \epsilon_k is the contribution to the relative permittivity, and \tau_k is the relaxation time. Since this model computes a complex-valued permittivity, the conductivity is assumed to be zero. This is an alternate way to model frequency-dependent conductivity. The Sellmeier Dispersion model is available in the Wave Optics Module and is typically used for optical materials. It assumes zero conductivity and unit relative permeability and defines the relative permittivity in terms of the operating wavelength, \lambda, rather than frequency: \epsilon_r = 1 + \sum_k \frac{B_k \lambda^2}{\lambda^2-C_k} where the coefficients B_k and C_k determine the relative permittivity. The choice between these seven models will be dictated by the way the material properties are available to you in the technical literature. Keep in mind that, mathematically speaking, they enter the governing equation identically.
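As a sketch, the Drude-Lorentz sum is straightforward to evaluate numerically. With a single lossless Drude term (\omega_{0k} = 0, \Gamma_k = 0) it reduces to the textbook \epsilon_r = \epsilon_\infty - \omega_p^2/\omega^2, which is negative below the plasma frequency (the parameter values here are illustrative, not a real material):

```python
def drude_lorentz(omega, eps_inf, omega_p, terms):
    """Complex relative permittivity from the Drude-Lorentz sum.
    terms: list of (f_k, omega_0k, gamma_k) oscillator parameters."""
    eps = complex(eps_inf)
    for f_k, w0k, g_k in terms:
        eps += f_k * omega_p**2 / (w0k**2 - omega**2 + 1j * g_k * omega)
    return eps

# single lossless Drude term: eps = 1 - (omega_p/omega)^2 = 1 - 4 = -3
eps = drude_lorentz(1.0, 1.0, 2.0, [(1.0, 0.0, 0.0)])
```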
Relative Permeability The relative permeability quantifies how a material responds to a magnetic field. Any material with \mu_r>1 is typically referred to as a magnetic material. The most common magnetic material on Earth is iron, but pure iron is rarely used for RF or optical applications. It is more typical to work with materials that are ferrimagnetic. Such materials exhibit strong magnetic properties with an anisotropy that can be controlled by an applied DC magnetic field. As opposed to iron, ferrimagnetic materials have a very low conductivity, so high-frequency electromagnetic fields are able to penetrate into and interact with the bulk material. This tutorial demonstrates how to model ferrimagnetic materials. There are two options available for specifying relative permeability: the Relative Permeability model, which is the default for the RF Module, and the Magnetic Losses model. The Relative Permeability model allows you to enter a real- or complex-valued scalar or tensor value. The same Porous Media models described above for the electrical conductivity can be used for the relative permeability. The Magnetic Losses model is analogous to the Dielectric Loss model described above in that you enter the real and imaginary components of the relative permeability as real-valued numbers. A nonzero imaginary component of the permeability will lead to magnetic loss in the material. Modeling and Meshing Notes In any electromagnetic modeling, one of the most important things to keep in mind is the concept of skin depth, the distance into a material over which the fields fall off to 1/e of their value at the surface. Skin depth is defined as: \delta = \left[\omega \left|\operatorname{Im}\left(\sqrt{\mu_0\mu_r\,\epsilon_0\epsilon_r}\right)\right|\right]^{-1} where, as we have seen, the relative permittivity and permeability can be complex-valued. You should always check the skin depth and compare it to the characteristic size of the domains in your model.
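For a quick sanity check of skin-depth magnitudes, here is a small Python sketch (my own illustration, not a COMSOL feature). It evaluates both the general definition above and the familiar good-conductor approximation \delta=\sqrt{2/(\omega\mu\sigma)}; the copper conductivity is a textbook value:

```python
import cmath
import math

MU0 = 4e-7 * math.pi            # vacuum permeability, H/m
EPS0 = 8.8541878128e-12         # vacuum permittivity, F/m

def skin_depth(freq, sigma, eps_r=1.0, mu_r=1.0):
    # General definition: delta = 1/|Im(k)| for a plane wave exp(-j*k*z)
    # with k = omega * sqrt(mu * eps_complex).
    omega = 2 * math.pi * freq
    eps_c = EPS0 * eps_r - 1j * sigma / omega   # conductivity folded into eps
    k = omega * cmath.sqrt(MU0 * mu_r * eps_c)
    return 1.0 / abs(k.imag)

def skin_depth_good_conductor(freq, sigma, mu_r=1.0):
    # Approximation valid when sigma >> omega*eps: sqrt(2/(omega*mu*sigma))
    omega = 2 * math.pi * freq
    return math.sqrt(2.0 / (omega * MU0 * mu_r * sigma))

# Copper (sigma ~ 5.8e7 S/m) at 1 GHz: roughly 2 microns either way.
print(skin_depth(1e9, 5.8e7))
print(skin_depth_good_conductor(1e9, 5.8e7))
```

At 1 GHz the two values agree to well under a percent, which is why the simpler formula is the one usually quoted for metals.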
If the skin depth is much smaller than the object, you should instead model the domain as a boundary condition, as described in “Modeling Metallic Objects in Wave Electromagnetics Problems”. If the skin depth is comparable to or larger than the object size, then the electromagnetic fields will penetrate into the object and interact significantly within the domain. A plane wave incident upon objects of different conductivities and hence different skin depths. When the skin depth is smaller than the wavelength, a boundary layer mesh is used (right). The electric field is plotted. If the skin depth is smaller than the object, it is advised to use boundary layer meshing to resolve the strong variations in the fields in the direction normal to the boundary, with a minimum of one element per skin depth and a minimum of three boundary layer elements. If the skin depth is larger than the effective wavelength in the medium, it is sufficient to resolve the wavelength in the medium itself with five elements per wavelength, as shown in the left figure above. Summary In this blog post, we have looked at the various options available for defining the material properties within your electromagnetic wave models in COMSOL Multiphysics. We have seen that the material models for defining the relative permittivity are appropriate even for metals over a certain frequency range. On the other hand, we can also define metal domains via boundary conditions, as previously highlighted on the blog. Along with earlier blog posts on modeling open boundary conditions and modeling ports, we have now covered almost all of the fundamentals of modeling electromagnetic waves. There are, however, a few more points that remain. Stay tuned!
Chemical Forums aims at helping chemistry students. That means many of the questions posted require the use of formulae - either chemical ones (like H₂PO₄⁻) or mathematical ones (like [itex]E = E_0 + \frac {RT}{nF}\ln Q[/itex]). You may even need to use a structural formula. To make your questions easier for others to understand, please do your best to properly format your posts; it increases your chances of getting a correct and fast answer. Entering structural formulae that are displayed as images is covered in the next post in this thread. In general, the best way to format chemical formulae is to use subscripts and superscripts. For example, to enter the formula of water you can type H2O, mark the 2 and click the sub button placed in the second row of icons above the edit field. It will add [sub][/sub] tags around the 2 so that it will be displayed as a subscript, and the water formula will be displayed as H₂O. Similarly, you can use the sup button to format charges with [sup][/sup] tags. To make things faster, some of the most often used numbers are also present in the Other symbols: line above the edit field. While technically it is possible to enter chemical formulae using LaTeX, it is not recommended. To enter some simple symbols like ° for temperature, a middle dot for water of crystallization, Greek letters in the text, and arrows, just click their symbols above the edit field. Mathematical formulae should be entered using LaTeX. To enter a formula that will stay in its own line, surround it with [tеx][/tеx] tags; to enter a formula that will stay inline, surround it with [itеx][/itеx] tags. (It is also possible to use the double $ and double # delimiters you may know from LaTeX, but once again, we don't recommend it.) There are many LaTeX tutorials on the web, and many quite comprehensive documents describing LaTeX use (like this one: http://en.wikibooks.org/wiki/LaTeX/Mathematics ), but the basics are very simple.
For example, if you enter ax^2 + bx +c = 0 (surrounded by the [tеx][/tеx] tags) it will be rendered as [tex]ax^2 + bx + c = 0[/tex] ^ is for superscript, _ is for subscript of the following character (or group of characters). In LaTeX formulae you can easily enter Greek letters (enter the name preceded by a backslash - \Delta for Δ and \delta for δ; the Greek letters listed above the edit field won't work here), group symbols using curly braces, and create fractions and root symbols with the LaTeX codes \frac (the two following symbols or symbol groups will land in the numerator and denominator) and \sqrt (again, the following symbol or symbol group will land under the root symbol) and so on: [tex]x_{1,2} = \frac {-b\pm\sqrt\Delta}{2a}[/tex] [tex]\frac 1 {2I} \left(\frac\hbar i\right)^2 \left[\frac 1 {\sin\theta} \frac \delta {\delta \theta} (\sin \theta \frac \delta {\delta\theta})+\frac 1 {\sin^2\theta} \frac {\delta^2}{\delta \phi ^2}\right]\Psi=E\Psi[/tex] [tex]C \iiint^{+\infty}_{-\infty}e^{-\frac \beta {2m} (p^2_x+p^2_y+p^2_z)}dp_x dp_y dp_z = 1[/tex] Please right-click the above equations and select Show Math As->TeX Commands to see how they were entered. \pm generates the ± symbol, \hbar generates ħ, and so on. If you want to test your LaTeX expressions before posting them at chemicalforums, you can do so at the forkosh sandbox - just remember to add tex tags before posting your LaTeX here. Please don't mix LaTeX and tags in a single formula; it is never necessary and never makes sense. You can mix them in a single post, but entering sulfuric acid as H[itex]_2[/itex]SO[itex]_4[/itex] (LaTeX subscript in normal text) instead of H₂SO₄ (tagged subscript in normal text) makes it look awkward and is not guaranteed to work correctly. Note: [B] is the opening tag used to mark bolded parts of the text. Unfortunately, exactly the same symbol is quite often used to denote the concentration of a compound 'B', especially when dealing with equilibrium and kinetic equations.
As a result, no single [B] symbol will appear in your post; instead, the whole post from the first occurrence of [B] to the end will be bolded. There are two possible solutions. The first, simple and fast one, is to enter an additional space after the opening square bracket - that is, to use [ B] instead of [B]. The second is to use [nоbbc][/nоbbc] tags - whatever is between them is not treated as UBBC code, so [nоbbc][B][/nоbbc] is rendered as [B]. If you plan on using LaTeX you may be forced to use the nobbc trick (outside of tex tags) as well. And don't cry "hеlp!" in your posts. We know you need help; that's why you came here and posted. Your "hеlp!" will be replaced by "delete me*". It is an old joke built into the forum many years ago.
Linear Regression -- The Basics The basics Yeah. It’s not a good sign if I’m starting out already repeating myself. But that’s how things seem to be with linear regression, so I guess it’s fitting. It seems like every day one of my professors will talk about linear regression, and it’s not due to laziness or lack of coordination. Indeed, it’s an intentional part of the curriculum here at New College of Florida because of how ubiquitous linear regression is. Not only is it an extremely simple yet expressive formulation, it’s also the theoretical basis of a whole slew of other tactics. Let’s just get right into it, shall we? Linear Regression: Let’s say you have some data from the real world (and hence riddled with real-world error). A basic example for us to start with is this one: There’s clearly a linear trend there, but how do we pick which linear trend would be the best? Well, one thing we could do is pick the line that has the least amount of error from the prediction to the actual data-point. To do that, we have to say what we mean by “least amount of error”. For this post, we’ll calculate that error by squaring the difference between the predicted value and the actual value for every point in our data set, then averaging those values. This standard is called the Mean-Squared-Error (MSE). We can write the MSE as \( \frac{1}{N}\sum\limits_{i=1}^N \left(\hat Y_i - Y_i\right)^2 \), where \( \hat Y_i\) is our predicted value of \(Y_i\) for a given \(X_i\). Being as how we want a linear model (for simplicity and extensibility), we can write the prediction as \( \hat Y_i = \alpha + \beta X_i \), for some \( \alpha, \beta \) that we don’t yet know. But since we want to minimize that error, we can take some derivatives and solve for \( \alpha, \beta \)! Let’s go ahead and do that! We want to minimize \( \sum\limits_{i=1}^N\left(\alpha + \beta X_i - Y_i\right)^2 \). We can start by finding the \( \hat\alpha \) such that \( \frac{d}{d\alpha}\sum\limits_{i=1}^N\left(\alpha + \beta X_i - Y_i\right)^2 = 0 \).
And as long as we don’t forget the chain rule, we’ll be alright: the condition above expands to \( \sum\limits_{i=1}^N\left(\alpha + \beta X_i - Y_i\right) = 0 \), which gives \( \hat\alpha = \bar Y - \beta \bar X \), where \( \bar X \) and \( \bar Y \) denote the sample means. And we’ll find the \( \beta \) such that \( \frac{d}{d\beta}\sum\limits_{i=1}^N\left(\alpha + \beta X_i - Y_i\right)^2 = 0 \). And following a similar pattern we find \( \sum\limits_{i=1}^N X_i\left(\alpha + \beta X_i - Y_i\right) = 0 \). But note: substituting \( \hat\alpha = \bar Y - \beta \bar X \) into this equation leaves \( \beta \) as the only unknown. So, \( \hat\beta = \frac{\sum_{i=1}^N X_i Y_i - N \bar X \bar Y}{\sum_{i=1}^N X_i^2 - N \bar X^2} \). And then we can find \( \alpha \) by substituting in our approximation of \( \beta \): \( \hat\alpha = \bar Y - \hat\beta \bar X \). Using those coefficients, we can plot the line below, and as you can see, it really is a good approximation. Now we have it Okay, so now we have our line of “best fit”, but what does it mean? Well, it means that this line predicts the data we gave it with the least error. That’s really all it means. And sometimes, as we’ll see later, reading too much into that can really get you into trouble. But using this model we can now predict other data outside the model. So, for instance, in the model pictured above, if we were to try and predict \( Y \) when \( X=2 \), we wouldn’t do so bad by picking something around 10 for \( Y \). An example, perhaps? So I feel at this point, it’s probably best to give an example. Let’s say we’re trying to predict a stock’s return given the total market return. Well, in practice this model is used to assess volatility, but that’s neither here nor there. Right now, we’re really only interested in the model itself. But without further ado, I present you with the CAPM (Capital Asset Pricing Model): \( R_{stock} = \alpha + \beta R_{market} + \epsilon \) (where \( \epsilon \) is the error in our predictions). And you can fit this using historical data or what-have-you. There are a bunch of downsides to fitting it with historical data though, like the fact that data from 3 days ago really doesn’t have much to say about the future anymore. There are plenty of cool things you can do therein, but sadly, those are out of the scope of this post. For now, we move on to Multiple Regression What is this anyway?
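To make the closed-form estimates concrete, here's a small Python sketch of the formulas we just derived (the function name is mine, not from the original post):

```python
def fit_line(xs, ys):
    # Closed-form least-squares estimates for Y = alpha + beta * X,
    # following the derivation above.
    n = len(xs)
    x_bar = sum(xs) / n
    y_bar = sum(ys) / n
    beta = (sum(x * y for x, y in zip(xs, ys)) - n * x_bar * y_bar) \
           / (sum(x * x for x in xs) - n * x_bar ** 2)
    alpha = y_bar - beta * x_bar
    return alpha, beta

# A noiseless sanity check: points on y = 3 + 2x recover alpha = 3, beta = 2.
print(fit_line([0, 1, 2, 3], [3, 5, 7, 9]))  # (3.0, 2.0)
```

On noisy data the recovered line won't pass through every point, of course; it's the one minimizing the MSE.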
Well, multiple regression is really just a new name for the same thing: how do we fit a linear model to our data given some set of predictors and a single response variable? The only difference is that this time our linear model doesn’t have to be one dimensional. Let’s get right into it, shall we? So let’s say you have \( k \) many predictors arranged in a vector (in other words, our predictor is a vector in \( \mathbb{R}^k \)). Well, I wonder if a similar formula would work… Let’s figure it out… Firstly, we need to know what a derivative is in \( \mathbb{R}^n \). Well, if \( f:\mathbb{R}^n\to\mathbb{R}^m \) is a differentiable function, then for any \( x \) in the domain, \( f’(x) \) is the linear map \( A:\mathbb{R}^n\to\mathbb{R}^m \) such that \( \text{lim}_{h\to 0}\frac{||f(x+h) - f(x) - Ah||}{||h||} = 0 \). Basically, \( f’(x) \) is the tangent plane. So, now that we got that out of the way, let’s use it! We want to find the linear function that minimizes the Euclidean norm of the error terms (just like before). But note: the error term is \( \epsilon = Y - \hat Y = Y - \alpha -\beta X \), for some vector \( \alpha \) and some matrix \( \beta \). Now, since it’s easier and it’ll give us the same answer, we’re going to minimize the squared error term instead of just the error term (like we did in the one dimensional version). We’re also going to make one more simplification: that \( \alpha=0 \). We can do this safely by simply appending (or prepending) a 1 to the rows of our data (thereby creating a constant term). So for the following, assume we’ve done that. So, let’s find the \( \beta \) that minimizes \( ||\epsilon||^2 = (Y - X\beta)^T(Y - X\beta) \). So, now we see that the derivative is \( -2Y^TX + 2\beta^TX^TX \), and we want to find where our error is minimized, so we want to set that derivative to zero: \( -2Y^TX + 2\beta^TX^TX = 0 \iff X^TX\beta = X^TY \iff \hat\beta = (X^TX)^{-1}X^TY \) (assuming \( X^TX \) is invertible). And there we have it. That’s called the normal equation for linear regression.
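Here's a quick numerical check of the normal equation using NumPy (my own sketch, with synthetic data; assuming NumPy is available):

```python
import numpy as np

rng = np.random.default_rng(0)

# Design matrix with a prepended column of ones -- the alpha = 0 trick above.
X = np.column_stack([np.ones(50), rng.normal(size=(50, 2))])
true_beta = np.array([1.0, 2.0, -3.0])
Y = X @ true_beta + 0.01 * rng.normal(size=50)

# Normal equation: solve (X^T X) beta = X^T Y rather than forming the inverse,
# which is both faster and numerically safer.
beta_hat = np.linalg.solve(X.T @ X, X.T @ Y)
print(beta_hat)  # close to [1, 2, -3]
```

Solving the linear system instead of explicitly inverting \( X^TX \) is the standard numerical practice, though the algebra is the same.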
Maybe next time I’ll post about how we can find these coefficients given some data using gradient descent, or some modification thereof. Till next time, I hope you enjoyed this post. Please, let me know if something could be clearer or if you have any requests.
If $A+B+C=\pi$ : $$ \sin A + \sin B + \sin C \le \frac{3\sqrt{3}}{2} \\ \cos A + \cos B + \cos C \le \frac{3}{2} \\ \tan A + \tan B + \tan C \ge 3\sqrt{3} \ \text{(for an acute triangle)} $$ with the equalities holding in the case of an equilateral triangle ($A=B=C=\frac{\pi}{3}$). I've also found out that of all the triangles inscribed in a circle, an equilateral triangle has the largest area. Why are the extrema of the quantities I've described attained in the case of an equilateral triangle? Is it just so, or is there a reason for this fact? Whenever I encounter a question which asks me to maximize something in the case of a triangle, I've taken to simply taking it as an equilateral triangle. Is this safe? And what are the other situations in which the maximum of something is obtained in the case of an equilateral triangle?
Let $f$ be Riemann-Stieltjes integrable with respect to $G$ (an increasing function). My definition of Riemann-Stieltjes integrability is: for every $\epsilon$ there is a partition $\mathcal{P}_\epsilon$ such that whenever $\mathcal{P}_\epsilon \subset \mathcal{P}$, we have $\left|S(\mathcal{P},f,G, \{t_i\}) - \int_a^bf dG \right| < \epsilon$ for any set of tags $\{t_i\}$. For the Riemann integral, it is true that the integral is the limit of Riemann sums: $\lim_{\|\mathcal{P}\| \to 0}S(\mathcal{P},f)= \int_a^bf(x)dx$. Is this true for the Riemann-Stieltjes integral? I know the integral may not exist when $f$ and $G$ are discontinuous at the same point, and I think I would need to add a condition that prevents this, in addition to assuming that $f$ is integrable and $G$ is increasing.
Let $\tau$ be a topology on $\mathbb{R}$ under which every absolutely convergent series converges. Lemma. If $(x_n) \to L$ under the standard topology, then $(x_n) \to L$ under $\tau$. Proof. Let $(y_n)$ be any subsequence of $(x_n)$. Then there exist $(n_k)$ such that $|y_{n_k} - L| < 2^{-k}$. Now consider the sequence $(a_k)_{k=0}^{\infty}$ defined by $$ a_k = \begin{cases}L, & k = 0 \\y_{n_j} - L, & k = 2j-1 \text{ for some } j \geq 1 \\L - y_{n_j}, & k = 2j \text{ for some } j \geq 1\end{cases} $$ Then $\sum_{k=0}^{\infty} |a_k| < \infty$, and so its partial sum $s_k = \sum_{j=0}^{k} a_j$ converges under $\tau$. Since $s_{2k} = L$, it follows that $s_k \to L$ under $\tau$. Then, since $s_{2k-1} = y_{n_k}$, we have $y_{n_k} \to L$ under $\tau$. So far, we have proved that every subsequence of $(x_n)$ has a further subsequence that converges to $L$ under $\tau$. This suffices to guarantee that $(x_n)$ converges to $L$ under $\tau$. //// By this lemma, any series which converges conditionally under the standard topology also converges under $\tau$.
Let me try to give you the answer in just the right amount of generality. A quantum code is just a short way to say a quantum error-correcting code. It is a special embedding of one vector space into another larger one that satisfies some additional properties. If we start with a Hilbert space $H$, then a code is a decomposition $H = (A \otimes B) \oplus C$. The quantum information is encoded into system $A$. In the event that $B$ is trivial, then indeed this is just a subspace of $H$. When $B$ is nontrivial, we call it a subsystem code. Let's specialize to the case of $n$ qubits, so that $H = (\mathbb{C}^2)^{\otimes n}$, and it is easiest to imagine that the dimensions of $A$, $B$, and $C$ are all powers of 2, though of course this discussion could be generalized. Let $P$ be the orthogonal projector onto $A\otimes B$, and let $\mathcal{E}$ be an arbitrary quantum channel, i.e. a completely positive trace-preserving linear map. We say that $\mathcal{E}$ is recoverable if there exists another quantum channel $\mathcal{R}$ such that for all states $\rho_A \otimes \rho_B$, we have $$\mathcal{R}\circ\mathcal{E}(\rho_A \otimes \rho_B) = \rho_A \otimes \rho'_B,$$where $\rho'_B$ is arbitrary. This says that for any state which is supported on $A\otimes B$ and is initially separable, we can reverse the effects of $\mathcal{E}$ up to a change on system $B$. Fortunately, there are simpler equivalent conditions that one can check instead. For example, an equivalent condition can be stated in terms of the Kraus operators $E_j$ for the channel $\mathcal{E}$. The subsystem $A$ is correctable for $\mathcal{E}(\rho) = \sum_j E_j \rho E_j^\dagger$ if and only if for all $i,j$ there exists a $g^{ij}_B$ on subsystem $B$ such that$$ P E_i^\dagger E_j P = 1\hspace{-3pt}\mathrm{l}_A \otimes g^{ij}_B.$$You can interpret this condition as saying that no error process associated to the channel $\mathcal{E}$ can gain any information about subsystem $A$.
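To see the correctability condition in action, here is a small NumPy sketch (my own construction, not from the answer above) that verifies $P E_i^\dagger E_j P \propto P$ for the three-qubit bit-flip repetition code, where $B$ is trivial, against the error set consisting of the identity and single bit flips:

```python
import numpy as np

I2 = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])

def kron_all(*ops):
    out = np.array([[1.0]])
    for op in ops:
        out = np.kron(out, op)
    return out

# Three-qubit repetition code: the code space is spanned by |000> and |111>.
ket000 = np.zeros(8); ket000[0] = 1.0
ket111 = np.zeros(8); ket111[7] = 1.0
P = np.outer(ket000, ket000) + np.outer(ket111, ket111)

# Error set: identity and a bit flip on each single qubit.
errors = [kron_all(I2, I2, I2), kron_all(X, I2, I2),
          kron_all(I2, X, I2), kron_all(I2, I2, X)]

# With B trivial, the condition P E_i^dag E_j P = 1_A (x) g_ij reduces to
# "proportional to P"; all operators here are real, so dagger is transpose.
for Ei in errors:
    for Ej in errors:
        M = P @ Ei.T @ Ej @ P
        g = np.trace(M) / 2.0   # the proportionality constant
        assert np.allclose(M, g * P)
print("correctability condition holds for single bit-flip errors")
```

For $i\neq j$ the product flips one or two qubits, mapping the codewords out of the code space, so the condition holds with $g^{ij}=0$; for $i=j$ it holds trivially with $g^{ii}=1$.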
Consider error channels which consist of Kraus operators that, when expanded in the Pauli basis, only have support on at most $d$ of the $n$ qubits in our Hilbert space. If every such channel is correctable for subsystem $A$, then we say our code has distance $d$. The largest such $d$ is called the distance of the code. For the toric code, this is the linear size of the lattice. This post imported from StackExchange Physics at 2014-04-05 17:30 (UCT), posted by SE-user Steve Flammia
To perform the gluing, we need: two metric spaces $X$ and $Y$ a set $A\subset X$, this is the part of $X$ covered in glue an isometric embedding $f:A\to Y$, which is a way to put the glue-covered part of $X$ over $Y$. After we firmly press the spaces together and let them sit for a while, a point $x\in A$ becomes identical with the point $f(x)\in Y$. The resulting space can be described as the quotient $(X\sqcup Y)/(x\sim f(x))$. It is usually denoted $X\cup_A Y$. Its metric is the standard quotient metric; however, in this case the formula for the quotient metric can be simplified to $$\tilde d(p,q) = \begin{cases}d_X(p, q) & \mbox{if } p, q \in X \\d_Y(p, q) & \mbox{if } p, q \in Y \\\inf_{a\in A} [d_X(p, a)+d_Y(q, f(a))] & \mbox{if } p \in X \mbox{ and } q \in Y \\\inf_{a\in A} [d_Y(p, f(a)) + d_X(q, a)] & \mbox{if } p \in Y \mbox{ and } q \in X\end{cases}$$ A simple example to start with: $X=(-\infty,0]$, $A=\{0\}$, $Y=[0,\infty)$, $f(0)=0$. This means we glue two half-lines together at point $0$. The result is $\mathbb R$ with the standard metric. Another example: glue two closed disks of the same radius along their boundaries. This means $A$ is the boundary of one disk, and $f(A)$ is the boundary of the other. The resulting space is homeomorphic to a sphere, though the metric on each disk is still flat. It's like a sphere that someone sat on.
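When $A$ is finite, the infimum in the quotient-metric formula becomes a minimum, so we can implement it directly. Here is a small Python sketch (my own names, not from the original) that reproduces the half-line example:

```python
def glued_distance(p, q, d_X, d_Y, A, f, in_X):
    # Quotient metric on X glued to Y along a finite glue set A (subset of X)
    # via the isometric embedding f : A -> Y.  in_X(t) says which space
    # the point t belongs to.
    if in_X(p) and in_X(q):
        return d_X(p, q)
    if not in_X(p) and not in_X(q):
        return d_Y(p, q)
    if in_X(p):   # p in X, q in Y: the path must pass through a glue point
        return min(d_X(p, a) + d_Y(f(a), q) for a in A)
    return min(d_Y(p, f(a)) + d_X(a, q) for a in A)

# The first example from the text: glue X = (-inf, 0] and Y = [0, inf) at 0.
d = lambda s, t: abs(s - t)
dist = glued_distance(-2.0, 3.0, d, d, A=[0.0], f=lambda a: a,
                      in_X=lambda t: t <= 0)
print(dist)  # 5.0, the standard distance |(-2) - 3| on the real line
```

The cross-space branch makes the geometry visible: any path from $X$ to $Y$ has to cross the glue, so the distance is the cheapest detour through a glue point.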
Let $\mathbf{R}$ be the relation on $\mathbb Z \times (\mathbb Z \setminus \{0\})$ given by $m \mathrel{\mathbf{R}} n$ iff $m - n =2k$ for some $k \in\mathbb Z$. I have proven that this is indeed an equivalence relation by verifying the three required properties, yet I am having trouble understanding the meaning of, and determining, the equivalence classes of $\mathbf{R}$. Any pointers would be a great help. I have a doubt about the symmetry property of $\mathbf R$, because $0$ is forbidden as a second argument. So $0$ is in relation on the left with some numbers, but there is no description of what happens on the right. I would have defined it on $\mathbb Z^2$ without restriction. Anyway, two numbers $m$ and $n$ are in the same equivalence class if they differ by an even number. So $\mathbf R$ splits $\mathbb Z$ into odd and even numbers, and there are two classes: $\tilde 1$ (odd numbers) and $\tilde 2$ (even numbers). [And I still don't know what to think about $\tilde 0$, which without the restriction should be equal to $\tilde 2$.] I don't think your relation is symmetric, since $0\mathrel{\mathbf R}0$ is not true. Remark that $\mathbf R$ is defined over $\mathbb{Z}\times (\mathbb{Z}\setminus\{0\})$.
Suppose that the matrix $A=\begin{pmatrix}1+ci & w_1 \\ 2+i & z_2\end{pmatrix}$ with $c\in \mathbb{R}$ is hermitian and of order $1$, and that the vector $(k_1, k_2)\in \mathbb{C}^2$, with $k_2$ a positive real number, belongs to the orthogonal complement of the row space of $A$ and has norm $\sqrt{3}$. How can we determine the numbers $k_1$ and $k_2$? $$$$ We know that $A$ is hermitian, so it is equal to its own conjugate transpose. So we have $A=\overline{A^T}$, i.e. $$\begin{pmatrix}1+ci & w_1 \\ 2+i & z_2\end{pmatrix}=\begin{pmatrix}1-ci & 2-i \\ \overline{w_1} & \overline{z_2}\end{pmatrix}$$ From that we get that $c=0$, $w_1=2-i$ and $z_2=\overline{z_2}$, and so $z_2$ is real. Is the information correct so far? Then we have that $A$ has order $1$; does that mean that $A^1=I_2$, or not? Now we have to calculate the orthogonal complement of the row space of $A$, or not? How can we do that? The last piece of information is that the vector $(k_1, k_2)$ has norm $\sqrt{3}$, and so we get $\sqrt{k_1^2+k_2^2}=\sqrt{3}\Rightarrow k_1^2+k_2^2=3$, right?
ISSN: 1930-5311 eISSN: 1930-532X Journal of Modern Dynamics July 2010, Volume 4, Issue 3 Abstract: In this paper, we study Hölder-continuous linear cocycles over transitive Anosov diffeomorphisms. Under various conditions of relative pinching we establish properties including existence and continuity of measurable invariant subbundles and conformal structures. We use these results to obtain criteria for cocycles to be isometric or conformal in terms of their periodic data. We show that if the return maps at the periodic points are, in a sense, conformal or isometric, then so is the cocycle itself with respect to a Hölder-continuous Riemannian metric. Abstract: We describe how a finite-state automorphism of a regular rooted tree changes the Bernoulli measure on the boundary of the tree. It turns out that a finite-state automorphism of polynomial growth, as defined by S. Sidki, preserves the measure class of a Bernoulli measure, and we write down the explicit formula for its Radon-Nikodym derivative. On the other hand, the image of the Bernoulli measure under the action of a strongly connected finite-state automorphism is singular to the measure itself. Abstract: We compute explicitly the action of the group of affine diffeomorphisms on the relative homology of two remarkable origamis discovered respectively by Forni (in genus $3$) and Forni and Matheus (in genus $4$). We show that, in both cases, the action on the nontrivial part of the homology is through finite groups. In particular, the action on some $4$-dimensional invariant subspace of the homology leaves invariant a root system of $D_4$ type. This provides as a by-product a new proof of (slightly stronger versions of) the results of Forni and Matheus: the nontrivial Lyapunov exponents of the Kontsevich-Zorich cocycle for the Teichmüller disks of these two origamis are equal to zero.
Abstract: We prove absolute continuity of "high-entropy" hyperbolic invariant measures for smooth actions of higher-rank abelian groups, assuming that there are no proportional Lyapunov exponents. For actions on tori and infranilmanifolds, the existence of an absolutely continuous invariant measure of this kind is obtained for actions whose elements are homotopic to those of an action by hyperbolic automorphisms with no multiple or proportional Lyapunov exponents. In the latter case a form of rigidity is proved for certain natural classes of cocycles over the action. Abstract: We prove a result motivated by Williams's classification of expanding attractors and the Franks--Newhouse Theorem on codimension-$1$ Anosov diffeomorphisms: If $\Lambda$ is a topologically mixing hyperbolic attractor such that $\dim E^u|_\Lambda = 1$, then either $\Lambda$ is expanding or is homeomorphic to a compact abelian group (a toral solenoid). In the latter case, $f|_\Lambda$ is conjugate to a group automorphism. As a corollary, we obtain a classification of all $2$-dimensional basic sets in $3$-manifolds. Furthermore, we classify all topologically mixing hyperbolic attractors in $3$-manifolds in terms of the classically studied examples, answering a question of Bonatti in [1]. Abstract: For a compact Riemannian manifold $M$, $k\ge2$ and a uniformly quasiconformal transversely symplectic $C^k$ Anosov flow $\varphi:\mathbb R\times M\to M$, we define the longitudinal KAM-cocycle and use it to prove a rigidity result: $E^u\oplus E^s$ is Zygmund-regular, and higher regularity implies vanishing of the longitudinal KAM-cocycle, which in turn implies that $E^u\oplus E^s$ is Lipschitz-continuous. Results proved elsewhere then imply that the flow is smoothly conjugate to an algebraic one. Abstract: We prove results for algebraic Anosov systems that imply smoothness and a special structure for any Lipschitz continuous invariant $1$-form.
This has corollaries for rigidity of time-changes, and we give a particular application to geometric rigidity of quasiconformal Anosov flows. Several features of the reasoning are interesting; namely, the use of exterior calculus for Lipschitz continuous forms, the arguments for geodesic flows and infranilmanifold automorphisms are quite different, and the need for mixing as opposed to ergodicity in the latter case.
Journal of Modern Dynamics October 2010, Volume 4, Issue 4 Special issue dedicated to Jan Boman on the occasion of his 75th birthday Abstract: We prove the local differentiable rigidity of generic partially hyperbolic abelian algebraic high-rank actions on compact homogeneous spaces obtained from split symplectic Lie groups. We also give examples of rigidity for nongeneric actions on compact homogeneous spaces obtained from SL$(2n,\mathbb{R})$ or SL$(2n,\mathbb{C})$. The conclusions are based on the geometric approach by Katok--Damjanovic and progress towards computations of the generating relations in these groups. Abstract: We consider special flows over two-dimensional rotations by $(\alpha,\beta)$ on $\mathbb{T}^2$ and under piecewise $C^2$ roof functions $f$ satisfying von Neumann's condition $\int_{\mathbb{T}^2}f_x(x,y)dxdy\ne 0$ or $\int_{\mathbb{T}^2}f_y(x,y)dxdy\ne 0$. Such flows are shown to be always weakly mixing and never partially rigid. It is proved that, when restricting to a subclass of roof functions and to ergodic rotations for which $\alpha$ and $\beta$ have bounded partial quotients, the corresponding special flows enjoy the so-called weak Ratner property. As a consequence, such flows turn out to be mildly mixing. Abstract: We study a two-parameter family of one-dimensional maps and related $(a,b)$-continued fractions suggested for consideration by Don Zagier. We prove that the associated natural extension maps have attractors with finite rectangular structure for the entire parameter set except for a Cantor-like set of one-dimensional Lebesgue measure zero that we completely describe. We show that the structure of these attractors can be "computed" from the data $(a,b)$, and that for a dense open set of parameters the Reduction theory conjecture holds, i.e., every point is mapped to the attractor after finitely many iterations.
We also show how this theory can be applied to the study of invariant measures and ergodic properties of the associated Gauss-like maps. Abstract: In this article, following [29], we study critical subsolutions in discrete weak KAM theory. In particular, we establish that if the cost function $c: M \times M\to \mathbb{R}$ defined on a smooth connected manifold is locally semiconcave and satisfies twist conditions, then there exists a $C^{1,1}$ critical subsolution that is strict on a maximal set (namely, outside of the Aubry set). We also explain how this applies to costs coming from Tonelli Lagrangians. Finally, following ideas introduced in [18] and [26], we study invariant cost functions and apply this study to certain covering spaces, introducing a discrete analog of Mather's $\alpha$ function on the cohomology. Abstract: We study infinite translation surfaces which are $\mathbb{Z}$-covers of finite square-tiled surfaces obtained by a certain two-slit cut-and-paste construction. We show that if the finite translation surface has a one-cylinder decomposition in some direction, then the Veech group of the infinite translation surface is either a lattice or an infinitely generated group of the first kind. The square-tiled surfaces of genus two with one zero provide examples of finite translation surfaces that fulfill the prerequisites of the theorem. Abstract: In this paper, we show that if Rabinowitz Floer homology has infinite dimension, then there exist infinitely many critical points of the Rabinowitz action functional even though it could be non-Morse. This result is proved by examining filtered Rabinowitz Floer homology.
How many real roots does $x^3+9x^2-49x+49=0$ have in the open interval $1\lt x\lt 2$? I've applied the intermediate value theorem and found that it has at least 1 root. But what about other roots? How can you find the exact number of real roots in a given interval? I would really appreciate it if I could get some examples of higher degree as well. Hint: If $\;f(x)=x^3+9x^2-49x+49\;$ , then $$f(1)>0\;,\;\;f(2)<0\,,\,f(3)>0\;,\;\;\lim_{x\to-\infty}f(x)=-\infty$$ You may use Sturm's theorem (I leave it to wiki to explain what the method does). In the present context, the calculation goes as follows: $$ p_0(x)= x^3 + 9x^2-49 x+49, \ \ \ \ p_1(x) = p_0'(x) = 3x^2+18x-49$$ and then $$ p_2=-p_0 \bmod p_1 = \tfrac{152}{3}\,x - 98, \ \ p_3=-p_1 \bmod p_2 = \tfrac{17101}{5776}\approx 2.96$$ Then construct the vector $\sigma(x)={\rm sign } \; (p_0(x),p_1(x),p_2(x),p_3(x))$. You calculate the number of sign changes in $\sigma(1) = (+,-,-,+)$ (2 sign changes) and $\sigma(2)=(-,-,+,+)$ (1 sign change). Then $2-1=1$ is the exact number of real roots in the half-open interval $(1,2]$ (and $2$ is not a root). The extrema appear at the roots of $$3x^2+18x-49=0.$$ As none of them lies in the range $(1,2)$, the cubic has no other root there. The Sturm chain for $x^3+9x^2-49x+49$ is $$ \begin{array}{c} p(x)&p(-14)&p(-13)&p(1)&p(2)&p(3)\\ x^3+9x^2-49x+49&-245&10&10&-5&10\\ 3x^2+18x-49&287&224&-28&-1&32\\ \frac{152}3x-98&-\frac{2422}3&-\frac{2270}3&-\frac{142}3&\frac{10}3&54\\ \frac{17101}{5776}&\frac{17101}{5776}&\frac{17101}{5776}&\frac{17101}{5776}&\frac{17101}{5776}&\frac{17101}{5776}\\ \text{sign changes}&3&2&\color{#C00}{2}&\color{#C00}{1}&0 \end{array} $$ The number of roots in $(1,2)$ is $\color{#C00}{2}-\color{#C00}{1}=1$. The other two roots are in $(-14,-13)$ and $(2,3)$. We can use Sturm's theorem for finding the real roots in $(a,b)$ of a polynomial $p\in\mathbb{R}[x]$.
Besides, if $f(z)=a_nz^n+\ldots +a_1z+a_0\in\mathbb{C}[z]$ and $c$ is a root of $f(z)$, then $|c|\le M$ with $$M=\max\left \{\left(n\left| \frac{a_{n-i}}{a_n}\right|\right)^{1/i}:i=1,\ldots,n\right\}.$$ This bound allows us to discard roots on $(-\infty,-M)\cup (M,+\infty).$ Call the LHS $f(x)$. Then $f'$ has exactly two zeroes $a$ and $b$ (compute them!), one negative and the other positive (say $a<b$). So $f$ increases to the left of $a$, decreases between $a$ and $b$, and increases to the right of $b$. $f$ has a local maximum at $a$ and a local minimum at $b$. Now since $f(a)>0$ and $f(b)<0$, $f$ must have three zeroes: one to the left of $a$, one between $a$ and $b$, and one to the right of $b$. In fact, since $1<2<b\approx 2.033$, and $f(1)=10$ and $f(2)=-5$, the middle zero of $f$ lies between $1$ and $2$. As mentioned above, the other zeroes lie to the left of $a\approx -8.033$ and to the right of $b\approx 2.033$.
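To make the Sturm-chain procedure concrete, here is a short exact-arithmetic sketch in Python (the helper names and the coefficient encoding, lowest degree first, are my own illustration, not from any of the answers above):

```python
from fractions import Fraction

def poly_eval(p, x):
    # Horner evaluation; p lists coefficients from lowest degree to highest
    acc = Fraction(0)
    for c in reversed(p):
        acc = acc * x + c
    return acc

def poly_deriv(p):
    return [i * c for i, c in enumerate(p)][1:]

def poly_mod(a, b):
    # remainder of a divided by b, in exact rational arithmetic
    a = a[:]
    while len(a) >= len(b):
        q = a[-1] / b[-1]
        shift = len(a) - len(b)
        for i, c in enumerate(b):
            a[shift + i] -= q * c
        a.pop()
        while a and a[-1] == 0:
            a.pop()
    return a

def sturm_chain(coeffs):
    p = [Fraction(c) for c in coeffs]
    chain = [p, poly_deriv(p)]
    while len(chain[-1]) > 1:          # stop once we reach a constant
        r = poly_mod(chain[-2], chain[-1])
        if not r:
            break
        chain.append([-c for c in r])  # p_{k+1} = -(p_{k-1} mod p_k)
    return chain

def sign_changes(chain, x):
    vals = [v for v in (poly_eval(q, x) for q in chain) if v != 0]
    return sum((u > 0) != (w > 0) for u, w in zip(vals, vals[1:]))

chain = sturm_chain([49, -49, 9, 1])   # x^3 + 9x^2 - 49x + 49
print(sign_changes(chain, 1) - sign_changes(chain, 2))  # 1 root in (1, 2]
```

The chain reproduces the values in the table above ($p_2=\frac{152}{3}x-98$, $p_3=\frac{17101}{5776}$), and the same difference evaluated at $-14$ and $3$ counts all three real roots.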
Often statistical models are used in order to determine which of the predictor variables have a significant relationship with the response variable. LMM has a number of methods to aid with this kind of statistical inference. Below we will fit a linear mixed model using the Ruby gem mixed_models, and demonstrate various types of hypotheses tests and confidence intervals available for objects of class LMM.

- Data and linear mixed model
- Likelihood ratio test
- Fixed effects hypotheses tests
- Fixed effects confidence intervals
- Random effects hypotheses tests

We use the same data and model formulation as in a previous example, where we have looked at various parameter estimates, and have shown that the model fit is good. The data set, which is simulated, contains two numeric variables Age and Aggression, and two categorical variables Location and Species. These data are available for 100 (human and alien) individuals. We model the Aggression level of an individual as a linear function of Age (Aggression decreases with Age), with a different constant added for each Species (i.e. each species has a different base level of aggression). Moreover, we assume that there is a random fluctuation in Aggression due to the Location that an individual is at. Additionally, there is a random fluctuation in how Age affects Aggression at each different Location. Thus, the Aggression level of an individual of Species $spcs$ who is at the Location $lctn$ can be expressed as:$$Aggression = \beta_{0} + \gamma_{spcs} + Age \cdot \beta_{1} + b_{lctn,0} + Age \cdot b_{lctn,1} + \epsilon,$$where $\epsilon$ is a random residual, and the random vector $(b_{lctn,0}, b_{lctn,1})^T$ follows a multivariate normal distribution (the same distribution but different realizations of the random vector for each Location). That is, we have a linear mixed model with fixed effects $\beta_{0}, \beta_{1}, \gamma_{Dalek}, \gamma_{Ood}, \dots$, and random effects $b_{Asylum,0}, b_{Asylum,1}, b_{Earth,0},\dots$.
We fit the model with the convenient method LMM#from_formula, which mimics the behaviour of the function lmer from the R package lme4.

    require 'mixed_models'
    alien_species = Daru::DataFrame.from_csv '../examples/data/alien_species.csv'
    # mixed_models expects that all variable names in the data frame are ruby Symbols:
    alien_species.vectors = Daru::Index.new(alien_species.vectors.map { |v| v.to_sym })
    model_fit = LMM.from_formula(formula: "Aggression ~ Age + Species + (Age | Location)",
                                 data: alien_species)
    puts "Fixed effects terms estimates and some diagnostics:"
    puts model_fit.fix_ef_summary.inspect(20)
    puts "Random effects correlation structure:"
    puts model_fit.ran_ef_summary.inspect(12)

    Fixed effects terms estimates and some diagnostics:
    #<Daru::DataFrame:47402186756480 @name = 1973d309-2726-4c76-9a4a-571bb5b14396 @size = 5>
                                          coef                   sd              z_score        WaldZ_p_value
     intercept              1016.2867207023459    60.19727495769054   16.882603430415077                  0.0
     Age                  -0.06531615342788907   0.0898848636725299  -0.7266646547504374  0.46743141066211646
     Species_lvl_Human     -499.69369529020855   0.2682523406941929   -1862.774781375937                  0.0
     Species_lvl_Ood        -899.5693213535765  0.28144708140043684  -3196.2289922406003                  0.0
     Species_lvl_WeepingA   -199.58895804200702  0.27578357795259995   -723.7158917283725                  0.0

    Random effects correlation structure:
    #<Daru::DataFrame:47402186580300 @name = e1572836-6bdf-435f-8e5c-4230b839e0e2 @size = 2>
                       Location  Location_Age
     Location      104.26376362  -0.059863903
     Location_Age  -0.059863903  0.1556707755

Given two nested models, LMM.likelihood_ratio_test tests whether the restricted, simpler model is adequate. In this context 'nested' means that all predictors used in the restricted model must also be predictors in the full model (i.e. one model is a reduced version of the other, more complex model). This method works only if both models were fit using the deviance (as opposed to the REML criterion) as the objective function for the minimization (i.e. fit with reml: false).
LMM.likelihood_ratio_test returns the p-value of the test. Two methods are available to compute the p-value: approximation by a Chi squared distribution, as delineated in Section 2.4.1 of Pinheiro & Bates, "Mixed Effects Models in S and S-PLUS" (2000), and a method based on bootstrapping, as delineated in Section 4.2.3 of Davison & Hinkley, "Bootstrap Methods and their Application" (1997). For example, we can test the model formulation as above against a simpler model, which assumes that Age is neither a fixed effect nor a random effect. We can compute a likelihood ratio using the method LMM.likelihood_ratio.

    complex_model = LMM.from_formula(formula: "Aggression ~ Age + Species + (Age | Location)",
                                     data: alien_species, reml: false)
    simple_model = LMM.from_formula(formula: "Aggression ~ Species + (1 | Location)",
                                    data: alien_species, reml: false)
    LMM.likelihood_ratio(simple_model, complex_model)

    454.3606613877624

We perform the likelihood ratio test using the method LMM.likelihood_ratio_test with method: :chi2 to use the Chi squared approximation for the p-values.

    chi2_p_value = LMM.likelihood_ratio_test(simple_model, complex_model, method: :chi2)

    3.693825760412622e-98

The p-value is tiny, which implies that Age is a significant predictor of Aggression, and that the more complex model should be preferred. However, often one may not be sure whether the assumptions required for the validity of the Chi squared test are satisfied. In that case, one can compute a p-value with the bootstrap method by specifying method: :bootstrap, which by default uses all available CPUs in parallel.

    bootstrap_p_value = LMM.likelihood_ratio_test(simple_model, complex_model,
                                                  method: :bootstrap, nsim: 1000)

    0.000999000999000999

Even though the p-value is not as extreme as with the Chi squared test, it still shows significance of the variable Age. Significance tests for the fixed effects can be performed with LMM#fix_ef_p (or its alias LMM#fix_ef_test).
For a given fixed effects coefficient estimate, the tested null hypothesis is that the true value of the coefficient is zero (i.e. no linear relationship to the response). That is, for the above model formulation we carry out a hypothesis test for each fixed effects term $\beta_{i}$ or $\gamma_{species}$, testing the null $H_{0} : \beta_{i} = 0$ against the alternative $H_{a} : \beta_{i} \neq 0$, or respectively the null $H_{0} : \gamma_{species} = 0$ against the alternative $H_{a} : \gamma_{species} \neq 0$. LMM currently offers three methods of hypothesis testing for fixed effects: Wald Z test, likelihood ratio test, and a bootstrap test. For a good discussion of the validity of the different methods see this entry from the wiki of the r-sig-mixed-models mailing list. Moreover, due to the equivalence of hypothesis tests and confidence intervals, an additional hypothesis-testing tool is provided by the bootstrap confidence intervals described in a different section below. The likelihood ratio test for fixed effects is actually merely a convenience method: a convenient interface to LMM.likelihood_ratio_test with method: :chi2 described above. For example, we can test whether the fixed effects term Age is a significant predictor of Aggression in complex_model as follows.

    lrt_p_value = complex_model.fix_ef_p(variable: :Age, method: :lrt)

    0.40176699624310697

We see that Age does not seem to be significant as a fixed effects term, even though it is in general a significant predictor as we have seen before. Like the likelihood ratio test, the bootstrap test is merely a convenient shortcut to LMM.likelihood_ratio_test with method: :bootstrap. We can test the significance of the predictor variable Age in complex_model with:

    bootstrap_p_value = complex_model.fix_ef_p(variable: :Age, method: :bootstrap, nsim: 1000)

    0.5314685314685315

This result confirms the conclusion based on method: :lrt.
The covariance matrix of the fixed effects estimates is returned by LMM#fix_ef_cov_mat, and the standard deviations of the fixed effects coefficients are returned by LMM#fix_ef_sd.

    model_fit.fix_ef_sd

    {:intercept=>60.19727495769054, :Age=>0.0898848636725299, :Species_lvl_Human=>0.2682523406941929, :Species_lvl_Ood=>0.28144708140043684, :Species_lvl_WeepingAngel=>0.27578357795259995}

    model_fit.fix_ef_z

    {:intercept=>16.882603430415077, :Age=>-0.7266646547504374, :Species_lvl_Human=>-1862.774781375937, :Species_lvl_Ood=>-3196.2289922406003, :Species_lvl_WeepingAngel=>-723.7158917283725}

Based on the above Wald Z test statistics, hypotheses tests for each fixed effects term can be carried out. The corresponding (approximate) p-values are obtained with LMM#fix_ef_p as follows.

    model_fit.fix_ef_p(method: :wald)

    {:intercept=>0.0, :Age=>0.46743141066211646, :Species_lvl_Human=>0.0, :Species_lvl_Ood=>0.0, :Species_lvl_WeepingAngel=>0.0}

We see that Aggression of each Species is significantly different from the base level (which is the species Dalek in this model), while statistically we don't have enough evidence to conclude that Age of an individual is a good predictor of his/her/its aggression level (which agrees with the conclusion obtained above with method: :lrt and method: :bootstrap). Confidence intervals for the fixed effects terms can be computed with the method LMM#fix_ef_conf_int.
The following types of confidence intervals are available:

    bootstrap_t_intervals = model_fit.fix_ef_conf_int(level: 0.98, method: :bootstrap,
                                                      boottype: :studentized, nsim: 1000)

    {:intercept=>[874.4165005038727, 1139.1584607682103], :Age=>[-0.2706718136184743, 0.1493216868271593], :Species_lvl_Human=>[-500.28872315642866, -499.09663075641373], :Species_lvl_Ood=>[-900.2572251446181, -898.9341708622952], :Species_lvl_WeepingAngel=>[-200.24313298457852, -198.95081041642916]}

We see that Age is the only fixed effects predictor whose confidence interval contains zero, which implies that it probably has little linear relationship as a fixed effect to the response variable Aggression. We can use the Wald Z statistic to compute confidence intervals as well. For example, 90% confidence intervals for each fixed effects coefficient estimate can be computed as follows.

    conf_int = model_fit.fix_ef_conf_int(level: 0.9, method: :wald)

    {:intercept=>[917.2710134723027, 1115.302427932389], :Age=>[-0.21316359921454495, 0.0825312923587668], :Species_lvl_Human=>[-500.1349311310106, -499.2524594494065], :Species_lvl_Ood=>[-900.0322606117453, -899.1063820954076], :Species_lvl_WeepingAngel=>[-200.04258166587707, -199.13533441813698]}

For greater visual clarity we can put the coefficient estimates and the confidence intervals into a Daru::DataFrame:

    df = Daru::DataFrame.rows(conf_int.values, order: [:lower90, :upper90],
                              index: model_fit.fix_ef_names)
    df[:coef] = model_fit.fix_ef.values
    df

    Daru::DataFrame:47402176301980 rows: 5 cols: 3
                                           lower90               upper90                  coef
     intercept                   917.2710134723027     1115.302427932389    1016.2867207023459
     Age                      -0.21316359921454495    0.0825312923587668  -0.06531615342788907
     Species_lvl_Human          -500.1349311310106    -499.2524594494065   -499.69369529020855
     Species_lvl_Ood            -900.0322606117453    -899.1063820954076    -899.5693213535765
     Species_lvl_WeepingAngel  -200.04258166587707   -199.13533441813698   -199.58895804200702

With method: :all, LMM#fix_ef_conf_int returns a Daru::DataFrame containing the
confidence intervals obtained by each of the available methods. The data frame can be printed in form of a nice looking table for inspection. For example, for the alien species data we obtain all types of 95% confidence intervals with:

    ci = model_fit.fix_ef_conf_int(method: :all, nsim: 1000)

    Daru::DataFrame:47402185048160 rows: 5 cols: 5
    columns: intercept, Age, Species_lvl_Human, Species_lvl_Ood, Species_lvl_WeepingAngel
    wald_z      [898.3022305977157, 1134.2712108069761]  [-0.2414872478168185, 0.11085494096104034]   [-500.2194602132623, -499.1679303671548]   [-900.1209474930289, -899.017695214124]   [-200.12948391874875, -199.0484321652653]
    boot_basic  [901.8226468130745, 1142.3725792755527]  [-0.23613380807522952, 0.12082106157921091]  [-500.23480713878746, -499.1808531803127]  [-900.1538925933955, -899.0066844647126]  [-200.18820176543932, -199.02291203098875]
    boot_norm   [900.8638839567735, 1134.5288460466913]  [-0.2473842439597385, 0.11510977545973858]   [-500.2315699405288, -499.1525612096204]   [-900.1465247092458, -898.9995195976951]  [-200.1708564794685, -199.01484568123584]
    boot_t      [901.8226468130744, 1142.3725792755527]  [-0.23613380807522952, 0.12082106157921088]  [-500.2348071387875, -499.1808531803127]   [-900.1538925933955, -899.0066844647126]  [-200.18820176543932, -199.02291203098875]
    boot_perc   [890.2008621291391, 1130.7507945916173]  [-0.25145336843498906, 0.10550150121945137]  [-500.2065374001044, -499.15258344162964]  [-900.1319582424403, -898.9847501137574]  [-200.1550040530253, -198.98971431857473]

Since here we are dealing with data that was simulated according to the assumptions of the linear mixed model, all parameters end up approximately meeting the normality assumptions, and therefore all confidence interval methods turn out to be pretty much equivalent. Often when analyzing less ideal data, this will not be the case. Then it might be necessary to compare different types of confidence intervals in order to draw the right conclusions.
We can test individual random effects for significance with the method LMM#ran_ef_p (or its alias LMM#ran_ef_test). It offers two methods, :lrt and :bootstrap. Both are in fact merely convenient interfaces to LMM.likelihood_ratio_test described above. The likelihood ratio test for random effects can only be performed if the model was fit with the option reml: false. For example, we can test the random intercept term (due to Location) of complex_model as follows.

    complex_model.ran_ef_p(variable: :intercept, grouping: :Location, method: :lrt)

    1.3846568414031767e-151

This suggests that the variability in Aggression is significantly influenced by the effect of Location. The bootstrap test can likewise be performed only if the model was fit with reml: false. We can test whether there is a significant random variation of the effect of Age due to Location using LMM#ran_ef_p with method: :bootstrap as follows.

    complex_model.ran_ef_p(variable: :Age, grouping: :Location, method: :bootstrap, nsim: 1000)

    0.000999000999000999

We see that in fact the variation of the effect of Age due to Location is a significant random effect.
The equation of motion is $$m\frac{d\mathbf{v}}{dt}=q(\mathbf{E}+\mathbf{v} \times \mathbf{B})$$ in which the bold letters represent vectors: $$\mathbf{E}=\mathbf{j}E, \quad \mathbf{B}=\mathbf{j}B, \quad \mathbf{v}=\mathbf{i}\dot x+\mathbf{j}\dot y+\mathbf{k}\dot z$$ $$\mathbf{v}\times \mathbf{B}=\mathbf{k}\dot x B-\mathbf{i}\dot zB$$ Therefore $$\ddot x=-\frac{qB}{m}\dot z, \quad \ddot y=\frac{qE}{m}, \quad \ddot z=\frac{qB}{m}\dot x$$ Taking the derivative again and substituting: $$\dddot x=-\frac{qB}{m}\ddot z= -\omega^2\dot x$$ $$\dddot z=\frac{qB}{m}\ddot x=-\omega^2\dot z$$ in which $\omega=\frac{qB}{m}$. The general solution is $$\dot x=C\cos\omega t+D\sin\omega t$$ $$y=\frac12 a t^2+G t+H$$ where $a=\frac{qE}{m}$. Initial conditions at $t=0$ are $x=y=z=0, \dot x=u, \dot y=-v, \dot z=0$. Therefore $$\dot x=u\cos\omega t, \quad x=\frac{u}{\omega}\sin\omega t$$ $$\ddot z=\omega\dot x=\omega u\cos\omega t, \quad \dot z=u\sin\omega t, \quad z=\frac{u}{\omega}(1-\cos\omega t)$$ $$y=\frac12 a t^2-vt=\left(\frac12 at-v\right)t$$ from which we can see that $$x^2+\left(z-\frac{u}{\omega}\right)^2=\left(\frac{u}{\omega}\right)^2$$ which is the equation of a circle. So the particle circles around the axis $(x,z)=(0,\frac{u}{\omega})$ while accelerating in the $+y$ direction. Returning to the question, we can see that the particle will return to the origin when $y=0$, which occurs when $$v=\frac12 at=\frac{qE}{2m}t$$ We must also have $x=z=0$, which requires that $\omega t=\pi n$ and $\omega t=2\pi n$ respectively, where $n$ is an integer. These two conditions are both satisfied when $\omega t=2\pi n$. Therefore $$v=\frac{qE}{2m}\frac{2\pi n}{\omega}=\frac{qE}{m}\pi n\frac{m}{qB}=\pi n \frac{E}{B}$$ The particle returns to the origin if and only if $$n=\frac{vB}{\pi E}$$ is an integer, which is option (C). Simpler Solution The only force along the $y$ axis is $qE$ in the $+y$ direction. The acceleration in the $+y$ direction is $a=\frac{qE}{m}$. The particle is launched like a projectile with velocity $-v$ along the $y$ axis.
The time it takes to return to the origin (with velocity $+v$) is $$T=\frac{v-(-v)}{a}=\frac{2mv}{qE}$$ In the $xz$ plane the only force is the magnetic force $quB$ which is always directed perpendicular to the velocity in this plane. The result is circular motion with cyclotron frequency $\omega=\frac{qB}{m}$. The time taken for the particle to return to the point in the $xz$ plane from which it was launched is $$t=\frac{2\pi}{\omega}=\frac{2\pi m }{qB}$$ The particle will return to the origin if it makes a whole number of orbits of the circle in the $xz$ plane in the same time that the projectile motion along the $y$ axis takes to return to the origin. The condition is that : $$T=nt$$ $$\frac{2mv}{qE}=n\frac{2\pi m}{qB}$$ $$n=\frac{vB}{\pi E}$$ which must be an integer.
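As a numerical sanity check on both derivations, one can integrate the equations of motion directly. The sketch below is my own illustration, in units with $q=m=E=B=1$ (so $\omega=a=1$) and an arbitrary choice $n=2$; it confirms that with $v=\pi n E/B$ the particle is back at the origin at $T=2mv/(qE)$:

```python
import math

def deriv(state, omega, acc):
    # state = (x, y, z, vx, vy, vz); E and B both point along +y, so
    # x'' = -omega*z',  y'' = acc,  z'' = omega*x'
    x, y, z, vx, vy, vz = state
    return (vx, vy, vz, -omega * vz, acc, omega * vx)

def rk4_step(state, dt, omega, acc):
    def shift(s, k, h):
        return tuple(si + h * ki for si, ki in zip(s, k))
    k1 = deriv(state, omega, acc)
    k2 = deriv(shift(state, k1, dt / 2), omega, acc)
    k3 = deriv(shift(state, k2, dt / 2), omega, acc)
    k4 = deriv(shift(state, k3, dt), omega, acc)
    return tuple(s + dt / 6 * (c1 + 2 * c2 + 2 * c3 + c4)
                 for s, c1, c2, c3, c4 in zip(state, k1, k2, k3, k4))

omega, acc, u, n = 1.0, 1.0, 1.0, 2      # q = m = E = B = 1, two full turns
v = math.pi * n * acc / omega            # v = pi n E / B, the return condition
T = 2 * v / acc                          # time to come back along y
state = (0.0, 0.0, 0.0, u, -v, 0.0)      # launched from the origin
steps = 4000
for _ in range(steps):
    state = rk4_step(state, T / steps, omega, acc)
print(max(abs(c) for c in state[:3]))    # all three coordinates back near 0
```

At $t=T$ the $y$-velocity has also flipped from $-v$ to $+v$, exactly as the projectile picture predicts.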
Does anybody have the Bachelier model call option pricing formula for $r > 0$? All the references I've read assume $r = 0$. I don't speak French, so I can't read Bachelier's original paper. We assume that, under the risk-neutral measure, the stock process $\{S_t, t \ge 0\}$ satisfies an SDE of the form \begin{align*} dS_t = r S_t dt + \sigma dW_t, \end{align*} where $r$ is the constant interest rate, $\sigma$ is the constant volatility, and $\{W_t, t \ge 0\}$ is standard Brownian motion. For $0 \le t \le T$, \begin{align*} S_T = S_t e^{r(T-t)} + \sigma\int_t^T e^{r(T-s)}dW_s. \end{align*} That is, \begin{align*} S_T \mid S_t &\sim N\left(S_t e^{r(T-t)},\, \frac{\sigma^2}{2r}\left(e^{2r(T-t)}-1 \right) \right)\\ &\sim S_t e^{r(T-t)} + \sqrt{\frac{\sigma^2}{2r}\left(e^{2r(T-t)}-1 \right)}\,\xi, \end{align*} where $\xi$ is standard normal random variable. Then \begin{align*} C_t &= e^{-r(T-t)}E\left(\left(S_T-K\right)^+ \mid \mathcal{F}_t \right)\\ &=e^{-r(T-t)}E\left(\left(S_t e^{r(T-t)} + \sqrt{\frac{\sigma^2}{2r}\left(e^{2r(T-t)}-1 \right)}\,\xi-K\right)^+ \mid \mathcal{F}_t \right)\\ &=e^{-r(T-t)}\sqrt{\frac{\sigma^2}{2r}\left(e^{2r(T-t)}-1 \right)}E\left(\left(\xi -\frac{K-S_t e^{r(T-t)}}{\sqrt{\frac{\sigma^2}{2r}\left(e^{2r(T-t)}-1 \right)}}\right)^+ \mid \mathcal{F}_t \right)\\ &=e^{-r(T-t)}\left(S_t e^{r(T-t)}-K\right)\Phi\left(\frac{S_t e^{r(T-t)}-K}{\sqrt{\frac{\sigma^2}{2r}\left(e^{2r(T-t)}-1 \right)}}\right) \\ &\qquad + e^{-r(T-t)}\sqrt{\frac{\sigma^2}{2r}\left(e^{2r(T-t)}-1 \right)}\,\phi\left(\frac{S_t e^{r(T-t)}-K}{\sqrt{\frac{\sigma^2}{2r}\left(e^{2r(T-t)}-1 \right)}}\right), \end{align*} where $\Phi$ is the cumulative distribution function of a standard normal random variable, and $\phi$ is the corresponding density function.
Comments Let $K^*=e^{-r(T-t)}K,$ and $$v^2(t, T) = \frac{\sigma^2}{2r}\left(1-e^{-2r(T-t)}\right).$$ Then, we can re-express the price as \begin{align*} C_t &= \left(S_t-K^*\right)\Phi\left(\frac{S_t-K^*}{v(t, T)}\right) +v(t, T)\,\phi\left(\frac{S_t-K^*}{v(t, T)}\right). \end{align*} See also Section 3.3 of the book Martingale Methods in Financial Modelling; however, note that there are a few typos in this book. One other possibility is to assume that \begin{align*} S_t = e^{rt}(S_0 + \sigma W_t). \end{align*} Then the corresponding option price can be similarly obtained. See also the book mentioned above. It's pretty simple to derive with basic knowledge of stochastic calculus. But since you are looking for the easy answer, here it is: $$C_t=e^{-r(T-t)}\sigma\sqrt{T-t} (D \Phi(D)+\phi(D))$$ where $D=\frac{F_{t,T}-K}{\sigma \sqrt{T-t}}$ and $\Phi(\cdot)$ and $\phi(\cdot)$ are respectively the normal cdf and pdf. $F_{t,T}=S_te^{r(T-t)}$ is the forward price. You might want to differentiate between the growth rate $\mu$ and the discount rate $r$. Gordon's solution is the most logical thing to do. NSZ's solution amounts to assuming a process $$dF = \sigma dW$$ with a discount rate $r$ and $F(t,T) = S(t) e^{r(T-t)}$. We apply Ito's Lemma to $f(t,F) = F e^{r(t-T)}$ to obtain in terms of $S$: $$dS = r S dt + \sigma e^{r(t-T)} dW\,.$$
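For experimentation, here is a small sketch of Gordon's closed form in the $K^*$, $v(t,T)$ notation (stdlib Python; the function names are mine, and the formula assumes $r>0$). In the limit $r\to 0$ it collapses to the familiar zero-rate Bachelier formula with $v=\sigma\sqrt{T-t}$:

```python
import math

def norm_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def norm_pdf(x):
    return math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)

def bachelier_call(S, K, r, sigma, tau):
    # Call price under dS = r S dt + sigma dW, with tau = T - t and r > 0
    K_star = math.exp(-r * tau) * K
    v = math.sqrt(sigma**2 / (2.0 * r) * (1.0 - math.exp(-2.0 * r * tau)))
    d = (S - K_star) / v
    return (S - K_star) * norm_cdf(d) + v * norm_pdf(d)

# r -> 0 check against the classic zero-rate formula
S, K, sigma, tau = 105.0, 100.0, 10.0, 1.0
d0 = (S - K) / (sigma * math.sqrt(tau))
classic = (S - K) * norm_cdf(d0) + sigma * math.sqrt(tau) * norm_pdf(d0)
print(abs(bachelier_call(S, K, 1e-6, sigma, tau) - classic))  # ~0
```

For very small $r$ the expression $1-e^{-2r\tau}$ loses a few digits to cancellation; in production one would use `math.expm1` there.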
Mayghani, M., Alimohammadi, D. (2017). The structure of ideals, point derivations, amenability and weak amenability of extended Lipschitz algebras. International Journal of Nonlinear Analysis and Applications, 8(1), 389-404. doi: 10.22075/ijnaa.2016.493 The structure of ideals, point derivations, amenability and weak amenability of extended Lipschitz algebras 1Department of Mathematics, Payame Noor University, Tehran, 19359-3697, Iran 2Department of Mathematics, Faculty of Science, Arak University, Arak, Iran Abstract Let $(X,d)$ be a compact metric space and let $K$ be a nonempty compact subset of $X$. Let $\alpha \in (0, 1]$ and let ${\rm Lip}(X,K,d^ \alpha)$ denote the Banach algebra of all continuous complex-valued functions $f$ on $X$ for which $$p_{(K,d^\alpha)}(f)=\sup\{\frac{|f(x)-f(y)|}{d^\alpha(x,y)} : x,y\in K , x\neq y\}<\infty$$ when it is equipped with the algebra norm $||f||_{{\rm Lip}(X, K, d^ {\alpha})}= ||f||_X+ p_{(K,d^{\alpha})}(f)$, where $||f||_X=\sup\{|f(x)|:~x\in X \}$. In this paper we first study the structure of certain ideals of ${\rm Lip}(X,K,d^\alpha)$.
Next we show that if $K$ is infinite and ${\rm int}(K)$ contains a limit point of $K$, then ${\rm Lip}(X,K,d^\alpha)$ has at least one nonzero continuous point derivation, and applying this fact we prove that ${\rm Lip}(X,K,d^\alpha)$ is neither weakly amenable nor amenable.
This is a somewhat stupid question.. In my notes, the Arzela-Ascoli theorem is stated like this: Let $X$ be a compact space and let $M\subset C(X,\mathbb R)$. Then $M$ is relatively compact in $C(X,\mathbb R)$ if and only if it is uniformly bounded and equicontinuous. But since every function in $M$ is continuous (belongs to $C(X,\mathbb R)$) and bounded (since $X$ is compact), aren't the conditions in the theorem just an interpretation of the fact that $M\subset C(X,\mathbb R)$? I come up with this question because I was trying to solve questions like: given a subset $M\subset C[0,1]$ with some other conditions, prove that $M$ is relatively compact in $C[0,1]$. I tried to use the theorem, but then I ended with: $M\subset C[0,1] \Rightarrow M$ is uniformly bounded. $[0,1]$ is compact $\Rightarrow$ $\forall f\in M$, $f$ is uniformly continuous. $\Rightarrow \forall \epsilon >0, \exists \delta>0$ such that $|x-y|<\delta \Rightarrow |f(x)-f(y)|< \epsilon$ Then doesn't this imply uniformly bounded and equicontinuous directly without using the other conditions on $M$? Any help is appreciated! The conditions of Arzela-Ascoli are "uniformly bounded" and "equicontinuous", not "each function is bounded" and "each function is uniformly continuous". The prefixes "uniformly" and "equi" suggest that those are conditions that are relevant for a set $M$ of functions and cannot be checked for each function $f \in M$ separately. Let us review the relevant definitions: A set $M \subseteq C(X,\mathbb{R})$ is uniformly bounded if there exists $L > 0$ such that for all $f \in M$ we have $\sup_{x \in X} |f(x)| \leq L$. It is true that each function in $M$ is bounded because it is a continuous function on a compact space, but it doesn't mean that all the functions in $M$ can be bounded by the same constant.
A set $M \subseteq C(X,\mathbb{R})$ is equicontinuous if for every $\varepsilon > 0$ there exists $\delta > 0$ such that for all $f \in M$ we have $|x - y| < \delta \implies |f(x) - f(y)| < \varepsilon$. Again, it is true that each function in $M$ is uniformly continuous, but it doesn't mean that we can automatically find a $\delta$ that works simultaneously for all the functions in $M$, only for each function separately.
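A standard example makes the distinction concrete: the family $f_n(x)=x^n$ on $[0,1]$ is uniformly bounded (by $1$) and each $f_n$ is uniformly continuous, yet the family is not equicontinuous near $x=1$. For $\varepsilon=\tfrac12$ the largest usable $\delta$ for $f_n$ is $1-2^{-1/n}$, which shrinks to $0$ as $n$ grows; a quick check (my own illustration):

```python
# largest delta with |f_n(1) - f_n(1 - delta)| <= 1/2 for f_n(x) = x**n:
# 1 - (1 - delta)**n <= 1/2  <=>  delta <= 1 - 2**(-1/n)
deltas = [1 - 2 ** (-1.0 / n) for n in (1, 10, 100, 1000)]
print(deltas)  # strictly decreasing toward 0: no single delta serves the family
```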
Given a curve $C$ with parameter $t$ and origin $0$ and an arc length $s$, I am trying to find the point $P$ so that the length from $C(0)$ to $P$ along $C$ is equal to $s$. In this case, $C$ is a jointed curve of $n+1$ cubic Bézier curves in $\mathbb{R}^2$; with $(a_i), (b_i), (c_i), (d_i)$ denoting vectors of $\mathbb{R}^2$: $\forall t\in [0,n+1], C(t) = \sum_{i=0}^n(a_i(t - i)^3 + b_i(t - i)^2 + c_i(t - i) + d_i)\chi_{[i,i+1[}(t)$ Given that I can make sure that the curve is not self-intersecting, the parametrization is a bijection from $[0,n+1]$ onto $C$; thus I have thought of inverting the arc-length formula, but that seems a bit extreme to me: $P = C(u)$, where $u$ satisfies $\int_0^u\|C'(t)\|\,dt = s$. This is for application in a computer program, so ideally this would be applicable in real time. In the end, I want to sample the curve with a constant step in length. Does anything come to mind? Thanks in advance.
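Since the arc-length function $u\mapsto\int_0^u\lVert C'(t)\rVert\,dt$ is strictly increasing, it can be inverted numerically: accumulate the integral by quadrature and solve for $u$ by bisection (Newton's method, using the known derivative $\lVert C'(u)\rVert$, converges faster once bracketed). Here is a sketch for a single cubic segment, with illustrative control points of my own choosing:

```python
import math

# one cubic Bezier segment with sample control points P0..P3
P = [(0.0, 0.0), (1.0, 2.0), (3.0, 2.0), (4.0, 0.0)]

def bezier(t):
    s = 1.0 - t
    w = (s**3, 3*s*s*t, 3*s*t*t, t**3)      # Bernstein weights
    return (sum(w[i] * P[i][0] for i in range(4)),
            sum(w[i] * P[i][1] for i in range(4)))

def speed(t):
    # ||C'(t)|| from the derivative control points 3*(P[i+1] - P[i])
    s = 1.0 - t
    d = [(3 * (P[i+1][0] - P[i][0]), 3 * (P[i+1][1] - P[i][1])) for i in range(3)]
    w = (s * s, 2 * s * t, t * t)
    dx = sum(w[i] * d[i][0] for i in range(3))
    dy = sum(w[i] * d[i][1] for i in range(3))
    return math.hypot(dx, dy)

def arc_length(u, n=200):
    # composite Simpson rule on [0, u]; n must be even
    h = u / n
    total = speed(0.0) + speed(u)
    for k in range(1, n):
        total += (4 if k % 2 else 2) * speed(k * h)
    return total * h / 3.0

def param_at_length(s, tol=1e-9):
    # bisection; arc_length is increasing in u
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if arc_length(mid) < s:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

L = arc_length(1.0)
# five points equally spaced in arc length along the curve:
pts = [bezier(param_at_length(k * L / 4)) for k in range(5)]
```

For the piecewise curve, precompute the cumulative length of each segment once, locate the segment containing $s$ by binary search, then invert within that segment; caching the cumulative table is what makes constant-step sampling cheap enough for real time.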
Entire case From the equation we have that $f^2-g^6=(f+g^3)(f-g^3)=-1$. Therefore, $f+g^3$ and $f-g^3$ don't vanish. This means that there is an entire $h$ such that $$\begin{align}f+g^3&=-e^{ih}\\f-g^3&=e^{-ih}\end{align}$$ It follows that $$\begin{align}f&=\frac{-e^{ih}+e^{-ih}}{2}\\g^3&=\frac{-e^{ih}-e^{-ih}}{2}\end{align}$$ But then $-e^{2ih}=2e^{ih}g^3+1=(2^{1/3}e^{ih/3}g)^3+1$. Since $e^{2ih}$ doesn't take the value $0$, $2^{1/3}e^{ih/3}g$ cannot take any of the three cube roots of $-1$. Since $2^{1/3}e^{ih/3}g$ is entire, by Picard's theorem it follows that $2^{1/3}e^{ih/3}g$ is constant. Therefore, $h$ is constant and so must be $f$ and $g$. Meromorphic case Let's start with a meromorphic solution of $A^3+B^3=1$. It is known that this equation has such solutions. Even more, all meromorphic solutions are of the form $$\begin{align}A&=\frac{1+3^{-1/2}\mathcal{P}'(a(z))}{2\mathcal{P}(a(z))}\\B&=\omega\frac{1-3^{-1/2}\mathcal{P}'(a(z))}{2\mathcal{P}(a(z))}\end{align}$$ for $a$ entire and $\omega$ a cube root of unity. See I.N. Baker, On a class of meromorphic functions, Proc. Amer. Math. Soc. 17 (1966), 819–822. Observe that $A,B$ don't have common zeros, and by multiplying the equation by the cube of an entire function we can assume that they don't have poles, although now the equation looks like $$A^3+B^3=C^3$$ We can multiply the whole equation by the cube of an entire function such that, after multiplication, the zeros of $A$ and $B$ have even multiplicity. Therefore, $A,B$ are the squares of two entire functions $A_0,B_0$, respectively. The equation now looks like $A_0^6+B_0^6=C^3$, for a new $C$. Dividing by $A_0^6$, the equation becomes $1+[(B_0/A_0)^3]^2=(C/A_0^2)^3$. Therefore, $g=C/A_0^2$, $f=(B_0/A_0)^3$ is a meromorphic solution to the original equation.
This question is similar to Is every forest with more than one node a bipartite graph?, but requires a proof by induction. This was a past exam question. - Let P(G) be the predicate that graph G=(V,E) is bipartite, i.e. that V can be partitioned into disjoint sets $V_1, V_2$ with $V_1 \cup V_2 = V$ such that $\forall e \in E$, e joins a node from $V_1$ and one from $V_2$. (Note the definition we have learnt does not consider a bipartite graph as one with no odd cycles, so unless I prove that first it wouldn't be acceptable. I chose to try to prove it using the definition given) Basis step: A forest is any acyclic graph. Clearly, a forest with any number of nodes and no edges is bipartite, since there is no restriction on how we define $V_1, V_2$ in this case. So consider the forest with 2 nodes and 1 edge, for example A $\rightharpoonup$ B. This can clearly be partitioned into {A} and {B}, and its only edge will join nodes from these two sets. - Inductive step: Since bipartiteness only places a restriction on classification based on edges, nodes with degree 0 (no adjacent nodes/no edges) do not affect whether a graph is bipartite. Assume that some forest is bipartite. Consider the forest formed by removing all such nodes of degree 0 from that forest. Now, we need to prove that adding an arbitrary edge to that forest, such that the resulting graph is still a forest (the new edge doesn't form a cycle), results in a bipartite graph. Then this edge requires that either 2, 1 or 0 nodes be added to the graph. These nodes may already have been nodes of degree 0 in the known bipartite forest, so specifically, this edge requires that either 2, 1 or 0 new nodes have their classification restricted by the definition of bipartite. Since the effect on the graph is the same, we can call this operation adding a node. If 2 new nodes are added to accommodate the new edge, then we can classify them such that they are in different partitions. The graph is still bipartite.
If 1 new node was added, then we have a new edge that joins a classified node and an unclassified node. The unclassified node is only joined to one edge, so it can be classified such that it is in the other partition to the classified node on the same edge. The graph remains bipartite. -- This is where I got stuck. I thought that restricting the forest to ignore nodes with degree 0 would ensure that any new edge required at least one new node. That is not true, however, since if we had two connected components: A $\rightharpoonup$ B, C $\rightharpoonup$ D, partitioned such that $V_1$ = {A,C}, $V_2$ = {B,D}, then this is bipartite. It would not create a cycle to add an edge A $\rightharpoonup$ C, however, since B and D are not connected. So the result would still be a forest. Clearly, this would be an edge between two members of $V_1$, so a re-partitioning would be required. I'm not sure how to prove that that re-partitioning is always possible.
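For what it's worth, the re-partitioning is always possible, and the reason can be phrased constructively: each tree in a forest can be 2-coloured from scratch by a BFS from any root (equivalently, before joining two components one may flip all colours inside one of them). A small sketch, with my own helper names, run on exactly the troublesome A–B, C–D, A–C example:

```python
from collections import defaultdict, deque

def bipartition(edges, nodes):
    # BFS 2-colouring of a forest; each tree is coloured independently,
    # which is precisely the re-partitioning freedom discussed above
    adj = defaultdict(list)
    for a, b in edges:
        adj[a].append(b)
        adj[b].append(a)
    colour = {}
    for start in nodes:
        if start in colour:
            continue
        colour[start] = 0
        queue = deque([start])
        while queue:
            node = queue.popleft()
            for nb in adj[node]:
                if nb not in colour:
                    colour[nb] = 1 - colour[node]
                    queue.append(nb)
    return colour

def is_valid(edges, colour):
    return all(colour[a] != colour[b] for a, b in edges)

edges = [("A", "B"), ("C", "D"), ("A", "C")]      # the problematic case
colour = bipartition(edges, ["A", "B", "C", "D"])
print(is_valid(edges, colour))  # True
```

Here the BFS from A assigns C the colour opposite to A, and D the colour opposite to C, so the merged tree is properly 2-coloured even though the fixed partition {A,C}, {B,D} was not.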
I was reading this article on inequalities (which some of you may find useful) here. On page 7, I came across this question by Titu Andreescu, which I shall reproduce here: Question: Let $f$ be a convex function on $\mathbb{R}$. If $x_1$, $x_2$ and $x_3$ lie in its domain, prove that:$$f(x_1)+f(x_2)+f(x_3)+f\left(\frac{x_1+x_2+x_3}{3}\right) \geq \frac{4}{3}\left[f\left(\frac{x_1+x_2}{2}\right)+f\left(\frac{x_2+x_3}{2}\right)+f\left(\frac{x_3+x_1}{2}\right)\right]$$ My Attempt: I rewrote $f(x_1)+f(x_2)+f(x_3)$ as $\frac{f(x_1) + f(x_2)}{2}+\frac{f(x_2) + f(x_3)}{2}+\frac{f(x_3) + f(x_1)}{2}$. By Jensen's inequality, $$f(x_1)+f(x_2)+f(x_3) \geq f\left(\frac{x_1+x_2}{2}\right)+f\left(\frac{x_2+x_3}{2}\right)+f\left(\frac{x_3+x_1}{2}\right)$$ However since $$f\left(\frac{x_1+x_2+x_3}{3}\right)= f\left(\frac{1}{3}\left[\frac{x_1+x_2}{2}+\frac{x_2+x_3}{2}+\frac{x_3+x_1}{2}\right]\right) \leq \frac{1}{3}\left[f\left(\frac{x_1+x_2}{2}\right)+f\left(\frac{x_2+x_3}{2}\right)+f\left(\frac{x_3+x_1}{2}\right)\right]$$ I am stuck, since the flipped inequality prevents this approach. I feel that there may be another manipulation that would work, but I cannot see it as of now. I appreciate any help, and hope that those who like symmetric inequalities find the embedded article useful.
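As a plausibility check (not a proof), the inequality survives a numerical stress test; the sketch below is my own. Incidentally, the statement follows from Popoviciu's inequality, $f(x)+f(y)+f(z)+3f\!\left(\frac{x+y+z}{3}\right)\ge 2\left[f\!\left(\frac{x+y}{2}\right)+f\!\left(\frac{y+z}{2}\right)+f\!\left(\frac{z+x}{2}\right)\right]$, combined with the Jensen estimate for the midpoints, which may be the missing manipulation.

```python
import math
import random

def gap(f, x1, x2, x3):
    # LHS minus RHS of the claimed inequality; should be >= 0 for convex f
    m = (x1 + x2 + x3) / 3.0
    lhs = f(x1) + f(x2) + f(x3) + f(m)
    rhs = (4.0 / 3.0) * (f((x1 + x2) / 2) + f((x2 + x3) / 2) + f((x3 + x1) / 2))
    return lhs - rhs

random.seed(0)
for f in (math.exp, lambda t: t * t, abs):
    for _ in range(1000):
        xs = [random.uniform(-5.0, 5.0) for _ in range(3)]
        assert gap(f, *xs) >= -1e-9       # small slack for rounding
print("no counterexample found")
```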
Let $X_n$ be binomially distributed with success probability $\theta$. Empirically, after $n$ throws, my estimate is $\hat{\theta}_n=\frac{S_n}{S_n+F_n}$, where $S/F$ count the successes and failures $(F_n:=n-S_n)$. From simulations on my dataset, I've found that $-\log(\theta)$ is better approximated by a beta distribution, as opposed to the usual conjugate approach of giving $\theta$ a beta prior with posterior $\mbox{Beta}(a+S_n,b+n-S_n)$. In other words, let $\phi_n:=\frac{-\log(\theta_n)-c}{d}$ ($c,d$ are shift/scaling constants), so that $\phi_n$ is assumed to have empirical distribution $\mbox{Beta}(a,b)$. Here's a qq-plot comparison, with $\theta$ assumed to be beta in the first image, and $-\log(\theta)$ assumed to be beta in the second: Question 1: Are there any good examples of real-life situations where the success probability is log-beta distributed? The literature search on this is extremely annoying because people seem to call it either log-beta or exp-beta (I'm following the log-normal convention). Question 2: Is there any way of easily calculating the posterior update for the log-beta distribution, based solely on $S_n$ and $F_n$? $\log(\hat\theta_n)=\log(S_n)-\log(S_n+F_n)$, but this no longer has the interpretation of success/failure. The integral looks to be a sum of confluent hypergeometric functions.
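To make the change of variables concrete (every number below is a hypothetical stand-in, since the post gives no values for $a, b, c, d$), one can sample $\phi \sim \mathrm{Beta}(a,b)$ and map it back via $\theta = e^{-(c + d\phi)}$; with $c, d > 0$ this always lands in $(0,1)$, so it is a legitimate model for a success probability:

```python
import math
import random

# Hypothetical parameters; the post does not state concrete values.
a, b = 2.0, 5.0   # assumed Beta shape parameters for phi
c, d = 0.1, 1.0   # assumed shift/scale constants (both positive)

random.seed(0)
phis = [random.betavariate(a, b) for _ in range(10_000)]
thetas = [math.exp(-(c + d * phi)) for phi in phis]

# Since phi is in (0, 1) and c, d > 0, we have -log(theta) = c + d*phi > 0,
# hence every theta lies strictly in (0, 1).
ok = all(0.0 < th < 1.0 for th in thetas)
```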
Let's talk Homotopy and Algebra · So this post is going to be a bit more terse than most. In fact, the objective of this post will be to develop the theory of homotopy very briefly, with the goal of proving the Fundamental Theorem of Algebra. Homotopy Given two continuous maps \(f,g : \mathbb{R}^n\to\mathbb{R}^m\), a homotopy between \(f\) and \(g\) is a continuous map $h: \mathbb{I}\times\mathbb{R}^n\to\mathbb{R}^m$ such that $h(0,x)=f(x)$ and $h(1,x) = g(x)$. Basically it’s a continuous deformation of one map into the other. It’s a stronger notion than mere homeomorphism of topological spaces. You can see this because two interlocked rings are topologically the same as two non-interlocked rings, but there is no way to continuously deform one configuration into the other without splitting one of the rings. Enough of all that! Let’s get to the proof already! The Fundamental Theorem of Algebra The statement If $p\in\mathbb{C}[x]$ is a non-constant polynomial, then it has at least one root in $\mathbb{C}$. A consequence of this theorem is that any non-constant polynomial over $\mathbb{C}$ has all of its roots in $\mathbb{C}$. Or equivalently, any polynomial of degree $n$ over $\mathbb{C}$ has $n$ roots in $\mathbb{C}$, counted with multiplicity. The proof Let $p(x) = \sum\limits_{i=0}^n a_i x^i \in\mathbb{C}[x]$ be a polynomial without any roots in $\mathbb{C}$. For simplicity, we assume that the leading coefficient $a_n$ is 1. There is no loss of generality in doing so. We then define, for each $r \geq 0$, $$f_r(s) = \frac{p(rs)/p(r)}{|p(rs)/p(r)|},$$ which is well defined precisely because $p$ has no roots. Then for each $r$, $f_r:S^1\to S^1$ is a map from the circle to the circle. Furthermore, given $r_0,r_1\geq 0$, we can define a homotopy from $f_{r_0}$ to $f_{r_1}$ as $g(t,s) = f_{r_0+t(r_1-r_0)}(s)$. So for all $r$, $f_r$ is homotopic to $f_0$. Since $f_0(s) = 1$ for all $s\in S^1$, the maps $f_r$ are all in the same homotopy class as the constant function on $S^1$. We now pick $z,r$ such that $|z| = r \gt \max(1,\sum |a_i| )$.
We then note that the triangle inequality gives us the following: $$\Big|\sum_{i=0}^{n-1} a_i z^i\Big| \le \sum_{i=0}^{n-1} |a_i|\,|z|^i \le |z|^{n-1}\sum_{i=0}^{n-1} |a_i| < |z|^{n-1}\cdot |z| = |z^n|,$$ which means $z^n + t\sum_{i=0}^{n-1} a_i z^i \neq 0$ for every $t\in[0,1]$. So, we can define yet another function $q(t,z) = z^n +t\big(\sum\limits_{i=0}^{n-1}a_iz^i\big)$ on the circle $|z| = r$. Then $q$ is a homotopy between $z^n$ (at $t=0$) and $p(z)$ (at $t=1$). We now note that if we make a similar construction to $f_r$ but with this parameterized polynomial instead of $p$, we get a homotopy between $f_r$ (the $t=1$ end) and the map that wraps the circle around itself $n$ times (the $t=0$ end, coming from $z^n$). But since $f_r$ is homotopic to a constant map on $S^1$, and going around the circle any non-zero number of times is not homotopic to a constant, we know that $n=0$. So the only polynomials without roots in $\mathbb{C}$ are constant. Kinda cool, huh? This proof was courtesy of Algebraic Topology by Allen Hatcher.
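A quick numeric check of the key estimate (the polynomial below is my own example, not from the post): whenever $|z| = r > \max(1, \sum|a_i|)$, the leading term dominates all the lower-order terms combined, which is exactly what keeps $q(t,z)$ away from zero.

```python
import cmath

# Example polynomial p(z) = z^3 + 2z^2 - z + 5 (leading coefficient 1).
coeffs = [5, -1, 2]          # a_0, a_1, a_2
n = 3
r = max(1, sum(abs(a) for a in coeffs)) + 0.5   # radius just beyond the bound

checks = []
for k in range(12):                              # sample points on |z| = r
    z = r * cmath.exp(2j * cmath.pi * k / 12)
    lower = sum(a * z**i for i, a in enumerate(coeffs))
    checks.append(abs(z**n) > abs(lower))        # |z^n| beats the rest
```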
There are a lot of neat little tricks in Machine Learning to make things work better. They can do many different things: make a network train faster, improve performance, etc. Today I’ll discuss LogSumExp, which is a pattern that comes up quite a bit in Machine Learning. First let’s define the expression: $$LogSumExp(x_1…x_n) = \log\big( \sum_{i=1}^{n} e^{x_i} \big)$$ When would we see such a thing? Well, one common place is calculating the cross entropy loss of the softmax function. If that sounded like a bunch of gobbledygook: 1. get used to it, there’s a bunch of crazily named stuff in ML and 2. just realize it’s not that complicated. Follow that link to the excellent Stanford cs231n class notes for a good explanation, or just realize for the purposes of this post that the softmax function looks like this: $$\frac{e^{x_j}}{\sum_{i=1}^{n} e^{x_i}}$$ where the $x_j$ in the numerator is one of the values (one of the $x_i$s) in the denominator. So what this is doing is essentially exponentiating a few values and then normalizing so the sum over all possible $x_j$ values is 1, as is required to produce a valid probability distribution. So you can think of the softmax function as just a non-linear way to take any set of numbers and transform them into a probability distribution. And for the cross entropy bit, just accept that it involves taking the log of this function. This ends up producing the LogSumExp pattern since: $$\begin{align}\log\left(\frac{e^{x_j}}{\sum_{i=1}^{n} e^{x_i}}\right) &= \log(e^{x_j}) \:-\: \log\left(\sum_{i=1}^{n} e^{x_i}\right) \\ &= x_j \:-\: \log\left(\sum_{i=1}^{n} e^{x_i}\right) & (1)\end{align}$$ It may seem a bit mysterious as to why this is a good way to produce a probability distribution, but just take it as an article of faith for now. Numerical Stability Now for why LogSumExp is a thing. First, in pure mathematics, it’s not a thing. You don’t have to treat LogSumExp expressions specially at all.
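For small, well-behaved inputs, the identity in equation (1) is easy to verify directly (a quick check of my own, in the same Python style as the rest of the post):

```python
import math

def log_softmax_direct(xs, j):
    """Naive log(softmax): exponentiate, normalize, then take the log."""
    return math.log(math.exp(xs[j]) / sum(math.exp(x) for x in xs))

def log_softmax_identity(xs, j):
    """Equation (1): x_j - LogSumExp(x_1...x_n)."""
    return xs[j] - math.log(sum(math.exp(x) for x in xs))

xs = [1.0, 2.0, 3.0]
diffs = [abs(log_softmax_direct(xs, j) - log_softmax_identity(xs, j)) for j in range(3)]
```

Both routes agree to floating-point precision here; the rest of the post is about what happens when the inputs are *not* well behaved.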
But when we cross over into running math on computers, it does become a thing. The reason is based in how computers represent numbers. Computers use a fixed number of bits to represent numbers. This works fine almost all of the time, but sometimes it leads to errors since it’s impossible to accurately represent an infinite set of numbers with a fixed number of bits. To illustrate the problem, let’s take 2 examples for our $x_i$ sequence of numbers: {1000, 1000, 1000} and {-1000, -1000, -1000}. Due to my amazing mathematical ability, I know that feeding either of these sequences into the softmax function will yield a probability distribution of {1/3, 1/3, 1/3}, and the log of 1/3 is a reasonable negative number. Now let’s try to calculate one of the terms of the summation in python: >>> import math >>> math.e**1000 Traceback (most recent call last): File "<stdin>", line 1, in <module> OverflowError: (34, 'Result too large') Whoops. Maybe we’ll have better luck with -1000: >>> math.e**-1000 0.0 That doesn’t look right either. So we’ve run into some numerical stability problems even with seemingly reasonable input values. The Workaround Luckily people have found a nice way to minimize these effects by relying on the fact that the product of exponentiations is equivalent to the exponentiation of the sum: $$e^a \cdot e^b = e^{a+b}$$ and the logarithm of a product is equivalent to the sum of the logarithms: $$\log(a \cdot b) = \log(a) + \log(b)$$ Let’s use these rules to start manipulating the LogSumExp expression. $$\begin{align} LogSumExp(x_1…x_n) &= \log\big( \sum_{i=1}^{n} e^{x_i} \big) \\ &= \log\big( \sum_{i=1}^{n} e^{x_i - c}e^{c} \big) \\ &= \log\big( e^{c} \sum_{i=1}^{n} e^{x_i - c} \big) \\ &= \log\big( \sum_{i=1}^{n} e^{x_i - c} \big) + \log(e^{c}) \\ &= \log\big( \sum_{i=1}^{n} e^{x_i - c} \big) + c & (2)\\ \end{align}$$ Ok! So first we introduced a constant $c$ into the expression (line 2) and used the exponentiation rule.
Since $c$ is a constant, we can factor it out of the sum (line 3) and then use the log rule (line 4). Finally, log and exp are inverse functions, so those 2 operations just cancel out to produce $c$. Critically, we’ve been able to create a term that doesn’t involve a log or exp function. Now all that’s left is to pick a good value for $c$ that works in all cases. It turns out $max(x_1…x_n)$ works really well. To convince ourselves of this, let’s construct a new expression for log softmax by plugging equation 2 into equation 1: $$\begin{align} \log(Softmax(x_j, x_1…x_n)) &= x_j \:-\: LogSumExp(x_1…x_n) \\ &= x_j \:-\: \log\big( \sum_{i=1}^{n} e^{x_i - c} \big) \:-\: c \end{align}$$ and use this to calculate values for the 2 examples above. For {1000, 1000, 1000}, $c$ will be 1000 and $e^{x_i - c}$ will always be 1, as $x_i - c$ is always zero. So we’ll get: $$\begin{align} \log(Softmax(1000, \left[1000,1000,1000\right])) &= 1000 \:-\: \log(3) \:-\: 1000 \\ &= \:- \log(3)\end{align}$$ $\log(3)$ is a very reasonable number that computers have no problem calculating. So that example worked great. Hopefully it’s clear that {-1000,-1000,-1000} will also work fine. The Takeaway By thinking through a few examples, we can reason about what will happen in general: If none of the $x_i$ values would cause any stability issues, the “naive” version of LogSumExp would work fine. But the “improved” version also works. If at least one of the $x_i$ values is huge, the naive version bombs out. The improved version does not. For the other $x_i$ values that are similarly huge, we get a good calculation. For other $x_i$s that are not huge, we will essentially approximate them as zero. For large negative numbers, the signs get flipped and things work the same way. So while things aren’t perfect, we get some pretty reasonable behavior most of the time and nothing ever blows up. I’ve created a simple python example where you can play around with this to convince yourself that things actually work fine.
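Putting the pieces together, the whole trick fits in a few lines (my condensed version of what the post describes, with $c = \max$):

```python
import math

def logsumexp(xs):
    """Numerically stable LogSumExp: shift by the max before exponentiating."""
    c = max(xs)
    # Every x - c is <= 0, so exp never overflows; at least one term is exactly 1.
    return c + math.log(sum(math.exp(x - c) for x in xs))

def log_softmax(xs, j):
    """Equation (1) with the stable LogSumExp substituted in."""
    return xs[j] - logsumexp(xs)

# The two troublesome examples from above now both give -log(3):
big = log_softmax([1000.0, 1000.0, 1000.0], 0)
small = log_softmax([-1000.0, -1000.0, -1000.0], 0)
```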
So that’s a wrap on LogSumExp! It’s a neat little trick that’s actually pretty easy to understand once the mechanics are deconstructed. Once you know about it and the general numerical stability problem, it should demystify some of the documentation in libraries and source code. To cement this in your mind (and get some math practice), I would wait awhile and then try to work out the math yourself. Also think through various examples in your head and reason about what should happen. Then run my code (or rewrite the code yourself) and confirm your intuition. If you enjoyed reading this, subscribe in Feedly to never miss another post.
In the appendix on page 364 of 'String Theory', Polchinski defines the conformal group (Conf) in two dimensions to be the set of all holomorphic maps. On page 85 he explains how Conf is a subgroup of the direct product of the diffeomorphism (diff) and Weyl groups, denoted as (diff $\times$ Weyl) (here, diffeomorphisms refer to general coordinate transformations). He shows this by first showing that Conf is a subgroup of diff, by choosing the transformation function $f$ to be holomorphic ($f(z)$). This is followed by showing that a specific Weyl transformation, with Weyl function\begin{equation}\omega=\ln|\partial_zf|\end{equation} can undo the conformal transformation. This seems to imply that Conf is a subgroup of Weyl. In other words, Conf is a subgroup of diff, and Conf is a subgroup of Weyl. This then implies that Conf is a subgroup of (diff $\times$ Weyl). However, in this post, Lubos Motl mentions that the conformal group is NOT a subgroup of the Weyl group. Why is there this inconsistency? This post imported from StackExchange Physics at 2016-07-29 21:43 (UTC), posted by SE-user Mtheorist
I very much dislike the "Big Oh" notation. It just doesn't stick in my mind. Suppose $f$ is a continuous function and $f \in \text{O}( 1/|x|^{1+\epsilon})$ when $|x| \rightarrow \infty$ and for $0< \epsilon < 1$. Does this mean that $$ \int_{-\infty}^\infty |f(x)|\cdot |x|^\epsilon \; dx < \infty ?$$ No. The Big-O assumption only yields the divergent majorant $$\int_1^{\infty} |f(x)| |x|^{\epsilon} \, dx \le \int_1^{\infty} \frac{C}{|x|} \, dx = \infty$$ for some constant $C$, so no conclusion can be drawn from the bound alone; and indeed there are functions satisfying the hypothesis for which the integral diverges.
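For concreteness, here is an explicit counterexample written out (my own addition; the original answer stops at the divergent bound):

```latex
% f is continuous on all of R, f = O(1/|x|^{1+\epsilon}) as |x| -> infinity,
% yet the weighted integral diverges.
\[
  f(x) = \frac{1}{1 + |x|^{1+\epsilon}}
  \quad\Longrightarrow\quad
  \int_{-\infty}^{\infty} |f(x)|\,|x|^{\epsilon}\,dx
  \;\ge\; \int_{1}^{\infty} \frac{x^{\epsilon}}{2\,x^{1+\epsilon}}\,dx
  \;=\; \frac{1}{2}\int_{1}^{\infty} \frac{dx}{x} \;=\; \infty,
\]
```

using $1 + x^{1+\epsilon} \le 2x^{1+\epsilon}$ for $x \ge 1$.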
Solving $x' = 0 = y'$, we get\begin{aligned}x^2 &= \epsilon \\y &= 0 \, .\end{aligned}Therefore, no critical point (or equilibrium point) exists when $\epsilon < 0$. The origin is the only critical point when $\epsilon = 0$. One eigenvalue of the flow's Jacobian matrix at $(0,0)$ is $-1$, the other is zero. Therefore, this critical point is non-hyperbolic. Below is the phase plane, which illustrates the degenerate behavior at the origin: When $\epsilon > 0$, two critical points $(\pm\sqrt{\epsilon},0)$ arise. Let us compute the eigenvalues of the flow's Jacobian matrix at the critical points. Depending on the sign of the characteristic polynomial's discriminant $\Delta^\pm = 1\pm 8\sqrt{\epsilon}$, the eigenvalues of the flow's Jacobian matrix at the critical points may be complex conjugate or real. If $0 < \epsilon < 1/64$, the discriminant $\Delta^\pm$ at both critical points is positive. Therefore, all four eigenvalues $-\frac{1}{2} \pm \sqrt{\frac{1}{4}\pm 2\sqrt{\epsilon}}$ are real. At the critical point $(-\sqrt{\epsilon},0)$, both eigenvalues are negative, whereas at the critical point $(\sqrt{\epsilon},0)$, the eigenvalues have opposite signs. If $\epsilon = 1/64$, the discriminant $\Delta^-$ is zero at the critical point $(-\sqrt{\epsilon},0)$. The double eigenvalue is negative. At the critical point $(\sqrt{\epsilon},0)$, the discriminant $\Delta^+$ is positive, and the eigenvalues still have opposite signs. If $\epsilon > 1/64$, the discriminant $\Delta^-$ is negative at the critical point $(-\sqrt{\epsilon},0)$. The eigenvalues are complex conjugates with negative real part. At the critical point $(\sqrt{\epsilon},0)$, the discriminant $\Delta^+$ is positive, and the eigenvalues still have opposite signs. In all these cases, all eigenvalues have nonzero real part, so that according to the Hartman-Grobman theorem, the system behaves like its linearization in the vicinity of each critical point.
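Since the ODE itself is not reproduced in this excerpt, the sketch below simply evaluates the quoted eigenvalue formula $-\tfrac12 \pm \sqrt{\tfrac14 \pm 2\sqrt\epsilon}$ and checks the sign claims above:

```python
import math

def eigenvalues(eps, sign):
    """Eigenvalues -1/2 ± sqrt(1/4 ± 2*sqrt(eps)) at the critical points (±sqrt(eps), 0).

    `sign` is +1 for (+sqrt(eps), 0) and -1 for (-sqrt(eps), 0); the pair is
    complex when the discriminant 1 ± 8*sqrt(eps) is negative.
    """
    disc = 0.25 + sign * 2 * math.sqrt(eps)
    if disc >= 0:
        root = math.sqrt(disc)
        return (-0.5 + root, -0.5 - root)
    root = math.sqrt(-disc)
    return (complex(-0.5, root), complex(-0.5, -root))

# 0 < eps < 1/64: all four eigenvalues real.
lp = eigenvalues(0.01, +1)   # at (+sqrt(eps), 0): opposite signs (saddle)
lm = eigenvalues(0.01, -1)   # at (-sqrt(eps), 0): both negative

# eps > 1/64: complex pair with real part -1/2 at (-sqrt(eps), 0).
cc = eigenvalues(0.25, -1)
```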
Below is the phase plane in the case $\epsilon = 1$:
Learning Objectives

- Describe the differences between rotational and translational kinetic energy
- Define the physical concept of moment of inertia in terms of the mass distribution from the rotational axis
- Explain how the moment of inertia of rigid bodies affects their rotational kinetic energy
- Use conservation of mechanical energy to analyze systems undergoing both rotation and translation
- Calculate the angular velocity of a rotating system when there are energy losses due to nonconservative forces

So far in this section, we have been working with rotational kinematics: the description of motion for a rotating rigid body with a fixed axis of rotation. In this subsection, we define two new quantities that are helpful for analyzing properties of rotating objects: moment of inertia and rotational kinetic energy. With these properties defined, we will have two important tools we need for analyzing rotational dynamics. Rotational Kinetic Energy Any moving object has kinetic energy. We know how to calculate this for a body undergoing translational motion, but how about for a rigid body undergoing rotation? This might seem complicated because each point on the rigid body has a different velocity. However, we can make use of angular velocity—which is the same for the entire rigid body—to express the kinetic energy for a rotating object. Figure 10.17 shows an example of a very energetic rotating body: an electric grindstone propelled by a motor. Sparks are flying, and noise and vibration are generated as the grindstone does its work. This system has considerable energy, some of it in the form of heat, light, sound, and vibration. However, most of this energy is in the form of rotational kinetic energy. Figure \(\PageIndex{1}\): The rotational kinetic energy of the grindstone is converted to heat, light, sound, and vibration.
(credit: Zachary David Bell, US Navy) Energy in rotational motion is not a new form of energy; rather, it is the energy associated with rotational motion, the same as kinetic energy in translational motion. However, because kinetic energy is given by \(K = \frac{1}{2} mv^{2}\), and velocity is a quantity that is different for every point on a body rotating about an axis, it makes sense to find a way to write kinetic energy in terms of the variable \(\omega\), which is the same for all points on a rigid rotating body. For a single particle rotating around a fixed axis, this is straightforward to calculate. We can relate the angular velocity to the magnitude of the translational velocity using the relation \(v_{t} = \omega r\), where \(r\) is the distance of the particle from the axis of rotation and \(v_{t}\) is its tangential speed. Substituting into the equation for kinetic energy, we find $$K = \frac{1}{2} mv_{t}^{2} = \frac{1}{2} m(\omega r)^{2} = \frac{1}{2} (mr^{2}) \omega^{2} \ldotp$$ In the case of a rigid rotating body, we can divide any body into a large number of smaller masses, each with mass \(m_{j}\) and distance \(r_{j}\) to the axis of rotation, such that the total mass of the body is equal to the sum of the individual masses: \(M = \sum_{j} m_{j}\). Each smaller mass has tangential speed \(v_{j}\), where we have dropped the subscript \(t\) for the moment. The total kinetic energy of the rigid rotating body is $$K = \sum_{j} \frac{1}{2} m_{j} v_{j}^{2} = \sum_{j} \frac{1}{2} m_{j} (r_{j} \omega_{j})^{2}$$ and since \(\omega_{j} = \omega\) for all masses, $$K = \frac{1}{2} \left(\sum_{j} m_{j} r_{j}^{2}\right) \omega^{2} \ldotp \tag{10.16}$$ The units of Equation 10.16 are joules (J). The equation in this form is complete, but awkward; we need to find a way to generalize it.
Moment of Inertia If we compare Equation 10.16 to the way we wrote kinetic energy in Work and Kinetic Energy, \(\left(\dfrac{1}{2} mv^{2}\right)\), this suggests we have a new rotational variable to add to our list of our relations between rotational and translational variables. The quantity \(\sum_{j} m_{j} r_{j}^{2}\) is the counterpart for mass in the equation for rotational kinetic energy. This is an important new term for rotational motion. This quantity is called the moment of inertia \(I\), with units of \(kg \cdotp m^{2}\): $$I = \sum_{j} m_{j} r_{j}^{2} \ldotp \tag{10.17}$$ For now, we leave the expression in summation form, representing the moment of inertia of a system of point particles rotating about a fixed axis. We note that the moment of inertia of a single point particle about a fixed axis is simply \(mr^{2}\), with \(r\) being the distance from the point particle to the axis of rotation. In the next section, we explore the integral form of this equation, which can be used to calculate the moment of inertia of some regular-shaped rigid bodies. The moment of inertia is the quantitative measure of rotational inertia, just as, in translational motion, mass is the quantitative measure of linear inertia—that is, the more massive an object is, the more inertia it has, and the greater is its resistance to change in linear velocity. Similarly, the greater the moment of inertia of a rigid body or system of particles, the greater is its resistance to change in angular velocity about a fixed axis of rotation. It is interesting to see how the moment of inertia varies with \(r\), the distance to the axis of rotation of the mass particles in Equation 10.17. Rigid bodies and systems of particles with more mass concentrated at a greater distance from the axis of rotation have greater moments of inertia than bodies and systems of the same mass, but concentrated near the axis of rotation.
In this way, we can see that a hollow cylinder has more rotational inertia than a solid cylinder of the same mass when rotating about an axis through the center. Substituting Equation 10.17 into Equation 10.16, the expression for the kinetic energy of a rotating rigid body becomes $$K = \frac{1}{2} I \omega^{2} \ldotp \tag{10.18}$$ We see from this equation that the kinetic energy of a rotating rigid body is directly proportional to the moment of inertia and the square of the angular velocity. This is exploited in flywheel energy-storage devices, which are designed to store large amounts of rotational kinetic energy. Many carmakers are now testing flywheel energy storage devices in their automobiles, such as the flywheel, or kinetic energy recovery system, shown in Figure 10.18. Figure \(\PageIndex{2}\): A KERS (kinetic energy recovery system) flywheel used in cars. (credit: “cmonville”/Flickr) The rotational and translational quantities for kinetic energy and inertia are summarized in Table 10.4. The relationship column is not included because a constant doesn’t exist by which we could multiply the rotational quantity to get the translational quantity, as can be done for the variables in Table 10.3.

Table 10.4 - Rotational and Translational Kinetic Energies and Inertia

Inertia: rotational \(I = \sum_{j} m_{j} r_{j}^{2}\), translational \(m\).
Kinetic energy: rotational \(K = \frac{1}{2} I \omega^{2}\), translational \(K = \frac{1}{2} mv^{2}\).

Example 10.8 Moment of Inertia of a System of Particles Six small washers are spaced 10 cm apart on a rod of negligible mass and 0.5 m in length. The mass of each washer is 20 g. The rod rotates about an axis located at 25 cm, as shown in Figure 10.19. (a) What is the moment of inertia of the system? (b) If the two washers closest to the axis are removed, what is the moment of inertia of the remaining four washers? (c) If the system with six washers rotates at 5 rev/s, what is its rotational kinetic energy?
Figure \(\PageIndex{3}\): Six washers are spaced 10 cm apart on a rod of negligible mass and rotating about a vertical axis. Strategy (a) We use the definition for moment of inertia for a system of particles and perform the summation to evaluate this quantity. The masses are all the same, so we can pull that quantity in front of the summation symbol. (b) We do a similar calculation, with the two innermost washers removed. (c) We insert the result from (a) into the expression for rotational kinetic energy. Solution a. $$I = \sum_{j} m_{j} r_{j}^{2} = (0.02\; kg)(2 \times (0.25\;m)^{2} + 2 \times (0.15\; m)^{2} + 2 \times (0.05\; m)^{2}) = 0.0035\; kg\; \cdotp m^{2} \ldotp$$ b. $$I = \sum_{j} m_{j} r_{j}^{2} = (0.02\; kg)(2 \times (0.25\;m)^{2} + 2 \times (0.15\; m)^{2}) = 0.0034\; kg\; \cdotp m^{2} \ldotp$$ c. $$K = \frac{1}{2} I \omega^{2} = \frac{1}{2} (0.0035\; kg\; \cdotp m^{2})(5.0 \times 2 \pi\; rad/s)^{2} = 1.73\; J \ldotp$$ Significance We can see the individual contributions to the moment of inertia. The masses close to the axis of rotation have a very small contribution. When we removed them, it had a very small effect on the moment of inertia. In the next subsection, we generalize the summation equation for point particles and develop a method to calculate moments of inertia for rigid bodies. For now, though, Figure 10.20 gives values of rotational inertia for common object shapes around specified axes. Figure \(\PageIndex{4}\): Values of rotational inertia for common shapes of objects. Applying Rotational Kinetic Energy Now let’s apply the ideas of rotational kinetic energy and the moment of inertia table to get a feeling for the energy associated with a few rotating objects. The following examples will also help get you comfortable using these equations. First, let’s look at a general problem-solving strategy for rotational energy. Problem-Solving Strategy: Rotational Energy Determine that energy or work is involved in the rotation. Determine the system of interest. A sketch usually helps.
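The arithmetic of Example 10.8 can be re-derived in a few lines (numbers taken from the text):

```python
import math

m = 0.020                                          # each washer, kg
radii_all = [0.25, 0.15, 0.05, 0.05, 0.15, 0.25]   # distances from the axis, m
I_all = sum(m * r**2 for r in radii_all)           # part (a): 0.0035 kg·m^2

radii_outer = [0.25, 0.15, 0.15, 0.25]             # two closest washers removed
I_outer = sum(m * r**2 for r in radii_outer)       # part (b): 0.0034 kg·m^2

omega = 5.0 * 2 * math.pi                          # 5 rev/s in rad/s
K = 0.5 * I_all * omega**2                         # part (c): about 1.73 J
```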
Analyze the situation to determine the types of work and energy involved. If there are no losses of energy due to friction and other nonconservative forces, mechanical energy is conserved, that is, \(K_i + U_i = K_f + U_f\). If nonconservative forces are present, mechanical energy is not conserved, and other forms of energy, such as heat and light, may enter or leave the system. Determine what they are and calculate them as necessary. Eliminate terms wherever possible to simplify the algebra. Evaluate the numerical solution to see if it makes sense in the physical situation presented in the wording of the problem. Example 10.9 Calculating Helicopter Energies A typical small rescue helicopter has four blades: each is 4.00 m long and has a mass of 50.0 kg (Figure 10.21). The blades can be approximated as thin rods that rotate about one end, around an axis perpendicular to their length. The helicopter has a total loaded mass of 1000 kg. (a) Calculate the rotational kinetic energy in the blades when they rotate at 300 rpm. (b) Calculate the translational kinetic energy of the helicopter when it flies at 20.0 m/s, and compare it with the rotational energy in the blades. Figure \(\PageIndex{5}\): (a) Sketch of a four-blade helicopter. (b) A water rescue operation featuring a helicopter from the Auckland Westpac Rescue Helicopter Service. (credit b: “111 Emergency”/Flickr) Strategy Rotational and translational kinetic energies can be calculated from their definitions. The wording of the problem gives all the necessary constants to evaluate the expressions for the rotational and translational kinetic energies. Solution The rotational kinetic energy is $$K = \frac{1}{2} I \omega^{2} \ldotp$$We must convert the angular velocity to radians per second and calculate the moment of inertia before we can find K.
The angular velocity \(\omega\) is $$\omega = \left(\dfrac{300\; rev}{1.00\; min}\right) \left(\dfrac{2 \pi\; rad}{1\; rev}\right) \left(\dfrac{1.00\; min}{60.0\; s}\right) = 31.4\; rad/s \ldotp$$The moment of inertia of one blade is that of a thin rod rotated about its end, listed in Figure 10.20. The total I is four times this moment of inertia because there are four blades. Thus, with \(L\) the length of a blade, $$I = \frac{4ML^{2}}{3} = \frac{4 (50.0\; kg)(4.00\; m)^{2}}{3} = 1067\; kg\; \cdotp m^{2} \ldotp$$Entering \(\omega\) and I into the expression for rotational kinetic energy gives $$K = 0.5 (1067\; kg\; \cdotp m^{2})(31.4\; rad/s)^{2} = 5.26 \times 10^{5}\; J \ldotp$$ Entering the given values into the equation for translational kinetic energy, we obtain $$K = \frac{1}{2} mv^{2} = (0.5)(1000.0\; kg)(20.0\; m/s)^{2} = 2.00 \times 10^{5}\; J \ldotp$$To compare kinetic energies, we take the ratio of translational kinetic energy to rotational kinetic energy. This ratio is $$\frac{2.00 \times 10^{5}\; J}{5.26 \times 10^{5}\; J} = 0.380 \ldotp$$ Significance The ratio of translational energy to rotational kinetic energy is only 0.380. This ratio tells us that most of the kinetic energy of the helicopter is in its spinning blades. Example 10.10 Energy in a Boomerang A person hurls a boomerang into the air with a velocity of 30.0 m/s at an angle of 40.0° with respect to the horizontal (Figure 10.22). It has a mass of 1.0 kg and is rotating at 10.0 rev/s. The moment of inertia of the boomerang is given as \(I = \frac{1}{12} mL^{2}\) where \(L = 0.7\; m\). (a) What is the total energy of the boomerang when it leaves the hand? (b) How high does the boomerang go from the elevation of the hand, neglecting air resistance? Figure \(\PageIndex{6}\): A boomerang is hurled into the air at an initial angle of 40°. Strategy We use the definitions of rotational and linear kinetic energy to find the total energy of the system. The problem states to neglect air resistance, so we don’t have to worry about energy loss.
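The helicopter numbers in Example 10.9 check out directly (values from the text):

```python
import math

omega = 300 / 60 * 2 * math.pi          # 300 rpm -> rad/s, about 31.4
M, L = 50.0, 4.00                        # one blade: mass (kg), length (m)
I = 4 * M * L**2 / 3                     # four thin rods rotated about one end
K_rot = 0.5 * I * omega**2               # about 5.26e5 J

m, v = 1000.0, 20.0                      # loaded mass (kg), flight speed (m/s)
K_trans = 0.5 * m * v**2                 # 2.00e5 J
ratio = K_trans / K_rot                  # about 0.380
```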
In part (b), we use conservation of mechanical energy to find the maximum height of the boomerang. Solution Moment of inertia: $$I = \frac{1}{12} mL^{2} = \frac{1}{12}(1.0\; kg)(0.7\; m)^{2} = 0.041\; kg\; \cdotp m^{2} \ldotp$$Angular velocity: $$\omega = (10.0\; rev/s)(2 \pi) = 62.83\; rad/s \ldotp$$The rotational kinetic energy is therefore $$K_{R} = \frac{1}{2} (0.041\; kg\; \cdotp m^{2})(62.83\; rad/s)^{2} = 80.93\; J \ldotp$$The translational kinetic energy is $$K_{T} = \frac{1}{2} mv^{2} = \frac{1}{2} (1.0\; kg)(30.0\; m/s)^{2} = 450.0\; J \ldotp$$Thus, the total energy in the boomerang is $$K_{Total} = K_{R} + K_{T} = 80.93\; J + 450.0\; J = 530.93\; J \ldotp$$ We use conservation of mechanical energy. Since the boomerang is launched at an angle, we need to write the total energies of the system in terms of its linear kinetic energies using the velocity in the x- and y-directions. The total energy when the boomerang leaves the hand is $$E_{Before} = \frac{1}{2} mv_{x}^{2} + \frac{1}{2} mv_{y}^{2} + \frac{1}{2} I \omega^{2} \ldotp$$The total energy at maximum height is $$E_{Final} = \frac{1}{2} mv_{x}^{2} + \frac{1}{2} I \omega^{2} + mgh \ldotp$$By conservation of mechanical energy, \(E_{Before} = E_{Final}\), so we have, after canceling like terms, $$\frac{1}{2} mv_{y}^{2} = mgh \ldotp$$Since \(v_{y} = (30.0\; m/s)(\sin 40^{\circ}) = 19.28\; m/s\), we find $$h = \frac{(19.28\; m/s)^{2}}{2 (9.8\; m/s^{2})} = 18.97\; m \ldotp$$ Significance In part (b), the solution demonstrates how energy conservation is an alternative method to solve a problem that normally would be solved using kinematics. In the absence of air resistance, the rotational kinetic energy was not a factor in the solution for the maximum height. Exercise 10.4 A nuclear submarine propeller has a moment of inertia of 800.0 \(kg \cdotp m^{2}\).
If the submerged propeller has a rotation rate of 4.0 rev/s when the engine is cut, what is the rotation rate of the propeller after 5.0 s when water resistance has taken 50,000 J out of the system? Contributors Samuel J. Ling (Truman State University), Jeff Sanny (Loyola Marymount University), and Bill Moebs with many contributing authors. This work is licensed by OpenStax University Physics under a Creative Commons Attribution License (by 4.0).
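A numeric sketch of this check-your-understanding exercise (my own working; the text does not print the answer here):

```python
import math

I = 800.0                        # moment of inertia, kg·m^2
omega_i = 4.0 * 2 * math.pi      # 4.0 rev/s in rad/s
K_i = 0.5 * I * omega_i**2       # initial rotational kinetic energy, about 2.5e5 J
K_f = K_i - 50_000.0             # water resistance removes 50,000 J
omega_f = math.sqrt(2 * K_f / I) # invert K = (1/2) I omega^2
rev_s = omega_f / (2 * math.pi)  # final rotation rate in rev/s, about 3.6
```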
Multi-index notation Latest revision as of 11:12, 12 December 2013 $\def\a{\alpha}$ $\def\b{\beta}$ An abbreviated form of notation in analysis, imitating the vector notation by single letters rather than by listing all vector components. Rules A point with coordinates $(x_1,\dots,x_n)$ in the $n$-dimensional space (real, complex or over any other field $\Bbbk$) is denoted by $x$. For a multi-index $\a=(\a_1,\dots,\a_n)\in\Z_+^n$ the expression $x^\a$ denotes the product $x^\a=x_1^{\a_1}\cdots x_n^{\a_n}$.
Other expressions related to multi-indices are expanded as follows:$$\begin{aligned}|\a|&=\a_1+\cdots+\a_n\in\Z_+,\\\a!&=\a_1!\cdots\a_n!\qquad\text{(as usual, }0!=1!=1),\\x^\a&=x_1^{\a_1}\cdots x_n^{\a_n}\in \Bbbk[x]=\Bbbk[x_1,\dots,x_n],\\\a\pm\b&=(\a_1\pm\b_1,\dots,\a_n\pm\b_n)\in\Z^n,\end{aligned}$$The convention extends to the binomial coefficients ($\a\geqslant\b$ means, quite naturally, that $\a_1\geqslant\b_1,\dots,\a_n\geqslant\b_n$):$$\binom{\a}{\b}=\binom{\a_1}{\b_1}\cdots\binom{\a_n}{\b_n}=\frac{\a!}{\b!(\a-\b)!},\qquad \text{if}\quad \a\geqslant\b.$$The partial derivative operators are also abbreviated:$$\partial_x=\biggl(\frac{\partial}{\partial x_1},\dots,\frac{\partial}{\partial x_n}\biggr)=\partial\quad\text{if the choice of $x$ is clear from context.}$$The notation for partial derivatives is also quite natural: for a differentiable function $f(x_1,\dots,x_n)$ of $n$ variables, $$\partial^\a f=\frac{\partial^{|\a|} f}{\partial x^\a}=\frac{\partial^{\a_1}}{\partial x_1^{\a_1}}\cdots\frac{\partial^{\a_n}}{\partial x_n^{\a_n}}f=\frac{\partial^{|\a|}f}{\partial x_1^{\a_1}\cdots\partial x_n^{\a_n}}.$$If $f$ is itself a vector-valued function of dimension $m$, the above partial derivatives are $m$-vectors. The notation $$\partial f=\bigg(\frac{\partial f}{\partial x}\bigg)$$ is used to denote the Jacobian matrix of a function $f$ (in general, only rectangular). Caveat The notation $\a>0$ is ambiguous, especially in mathematical economics, as it may either mean that $\a_1>0,\dots,\a_n>0$, or $0\ne\a\geqslant0$. Examples Binomial formula $$ (x+y)^\a=\sum_{0\leqslant\b\leqslant\a}\binom\a\b x^{\a-\b} y^\b. $$ Leibniz formula for higher derivatives of multivariate functions $$ \partial^\a(fg)=\sum_{0\leqslant\b\leqslant\a}\binom\a\b \partial^{\a-\b}f\cdot \partial^\b g. $$ In particular, $$ \partial^\a x^\b=\begin{cases} \frac{\b!}{(\b-\a)!}x^{\b-\a},\qquad&\text{if }\a\leqslant\b, \\ \quad 0,&\text{otherwise}.
\end{cases} $$ Taylor series of a smooth function If $f$ is infinitely smooth near the origin $x=0$, then its Taylor series (at the origin) has the form $$ \sum_{\a\in\Z_+^n}\frac1{\a!}\partial^\a f(0)\cdot x^\a. $$ Symbol of a differential operator If $$D=\sum_{|\a|\le d}a_\a(x)\partial^\a$$ is a linear partial differential operator with variable coefficients $a_\a(x)$, then its principal symbol is the function of $2n$ variables $S(x,p)=\sum_{|\a|=d}a_\a(x)p^\a$. How to Cite This Entry: Multi-index notation. Encyclopedia of Mathematics. URL: http://www.encyclopediaofmath.org/index.php?title=Multi-index_notation&oldid=25752
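The conventions above translate directly into code. A minimal Python sketch (the helper names are mine) implementing $|\a|$, $\a!$, $x^\a$ and $\binom\a\b$, with a numerical check of the binomial formula:

```python
from itertools import product
from math import comb, factorial

def mi_abs(a):
    """|a| = a_1 + ... + a_n"""
    return sum(a)

def mi_fact(a):
    """a! = a_1! * ... * a_n!"""
    out = 1
    for ai in a:
        out *= factorial(ai)
    return out

def mi_pow(x, a):
    """x^a = x_1^{a_1} * ... * x_n^{a_n}"""
    out = 1.0
    for xi, ai in zip(x, a):
        out *= xi ** ai
    return out

def mi_binom(a, b):
    """C(a,b) = C(a_1,b_1) * ... * C(a_n,b_n), for b <= a componentwise"""
    out = 1
    for ai, bi in zip(a, b):
        out *= comb(ai, bi)
    return out

# Check the binomial formula (x+y)^a = sum_{0<=b<=a} C(a,b) x^(a-b) y^b
x, y, a = (1.5, -2.0), (0.5, 3.0), (2, 3)
lhs = mi_pow(tuple(xi + yi for xi, yi in zip(x, y)), a)
rhs = sum(mi_binom(a, b)
          * mi_pow(x, tuple(ai - bi for ai, bi in zip(a, b)))
          * mi_pow(y, b)
          for b in product(*(range(ai + 1) for ai in a)))
```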
An Internet cafe is visited by three kinds of clients: type I, type II, type III, arriving according to independent Poisson processes with rates $\lambda_1,~\lambda_2,~\lambda_3$, respectively. Evaluate the probability that $15$ clients of type I reach the cafe before $6$ of the other categories reach the same cafe, on the interval $[0,t].$ Attempt. Let $\{N_1(t)\},~\{N_2(t)\},~\{N_3(t)\}$ denote the Poisson processes that express the arrivals of clients of types $I,~II,~III$, respectively. Then the desired probability is: $$P(N_1(t)=15,~N_2(t)+N_3(t)<6)=e^{-\lambda_1t}\frac{(\lambda_1t)^{15}}{15!}\cdot \sum_{k=0}^{5} e^{-(\lambda_2+\lambda_3)t}\frac{((\lambda_2+\lambda_3)t)^{k}}{k!}.$$ Am I on the right path? Thank you!
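The attempted formula can at least be sanity-checked numerically (this checks the arithmetic of $P(N_1(t)=15,\,N_2(t)+N_3(t)<6)$, not whether that event captures the intended "before"). A sketch with made-up rate values, using the fact that $N_2+N_3$ is Poisson with rate $\lambda_2+\lambda_3$ and independent of $N_1$:

```python
import random
from math import exp, factorial

def poisson_sample(lam, rng):
    # Knuth's method for a Poisson variate with mean lam
    L, k, p = exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

lam1t, lam23t = 15.0, 3.0   # made-up values of lambda_1*t and (lambda_2+lambda_3)*t

# the closed-form expression from the attempt
exact = (exp(-lam1t) * lam1t**15 / factorial(15)
         * sum(exp(-lam23t) * lam23t**k / factorial(k) for k in range(6)))

# Monte Carlo estimate of the same event
rng = random.Random(0)
trials = 50_000
hits = sum(1 for _ in range(trials)
           if poisson_sample(lam1t, rng) == 15 and poisson_sample(lam23t, rng) <= 5)
mc = hits / trials
```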
For two scalar random variables $X$ and $Y$ with expected values $\mu=\operatorname{E}(X)$ and $\nu=\operatorname{E}(Y)$, the covariance is defined as $ \operatorname{cov}(X, Y) = \operatorname{E}((X - \mu) (Y - \nu)), \, $ where E is the expected value. Intuitively, covariance is the measure of how much two variables vary together. That is to say, the covariance becomes more positive for each pair of values which differ from their mean in the same direction, and becomes more negative with each pair of values which differ from their mean in opposite directions. In this way, the more often they differ in the same direction, the more positive the covariance, and the more often they differ in opposite directions, the more negative the covariance. The definition above is equivalent to the following formula, which is commonly used in calculations: $ \operatorname{cov}(X, Y) = \operatorname{E}(X Y) - \mu \nu. \, $ If X and Y are independent, then their covariance is zero. This follows because under independence, $ E(X \cdot Y)=E(X) \cdot E(Y)=\mu\nu. $ The converse, however, is not true: it is possible that X and Y are not independent, yet their covariance is zero. Random variables whose covariance is zero are called uncorrelated. If X and Y are real-valued random variables and c is a constant ("constant", in this context, means non-random), then the following facts are a consequence of the definition of covariance: $ \operatorname{cov}(X, X) = \operatorname{var}(X)\, $ $ \operatorname{cov}(X, Y) = \operatorname{cov}(Y, X)\, $ $ \operatorname{cov}(cX, Y) = c\, \operatorname{cov}(X, Y)\, $ $ \operatorname{cov}\left(\sum_i{X_i}, \sum_j{Y_j}\right) = \sum_i{\sum_j{\operatorname{cov}\left(X_i, Y_j\right)}}.\, $ For column-vector-valued random variables $X$ and $Y$ with expected values $\mu$ and $\nu$, the definition generalizes to the covariance matrix $ \operatorname{cov}(X, Y) = \operatorname{E}((X-\mu)(Y-\nu)^\top).\, $ For vector-valued random variables, cov( X, Y) and cov( Y, X) are each other's transposes.
The covariance is sometimes called a measure of "linear dependence" between the two random variables. That phrase does not mean the same thing that it means in a more formal linear algebraic setting (see linear dependence), although that meaning is not unrelated. The correlation is a closely related concept used to measure the degree of linear dependence between two variables. This page uses Creative Commons Licensed content from Wikipedia (view authors).
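The "converse is not true" claim is easy to verify with a concrete distribution (my choice of example): take $X$ uniform on $\{-1,0,1\}$ and $Y=X^2$. Then $\operatorname{cov}(X,Y)=0$, although $Y$ is a deterministic function of $X$:

```python
# X uniform on {-1, 0, 1} and Y = X^2: uncorrelated but not independent.
xs = [-1, 0, 1]
p = 1 / 3  # probability of each outcome

EX  = sum(p * x        for x in xs)   # E(X)   = 0
EY  = sum(p * x**2     for x in xs)   # E(Y)   = 2/3
EXY = sum(p * x * x**2 for x in xs)   # E(XY)  = E(X^3) = 0

cov = EXY - EX * EY                   # cov(X, Y) = 0, so X, Y are uncorrelated

# ...yet X and Y are dependent: P(X=1, Y=1) != P(X=1) * P(Y=1)
P_joint = 1 / 3
P_product = (1 / 3) * (2 / 3)
```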
Given a deterministic infinite word transducer (also called a deterministic generalized sequential machine) with the following restrictions: There is exactly one initial state All states are final The transducer is real-time, i.e. reads exactly one input character on each transition The transducer produces zero or more output characters on each transition The deterministic input automaton (obtained by eliminating the output on each transition) recognizes $\Sigma^\omega$, i.e. each state has $|\Sigma|$ transitions, one for each input character There are no output $\epsilon$-loops, i.e. any infinite path yields an infinite word output Are there special algorithms to decide equivalence for this type of automaton? I am particularly interested in those algorithms that admit a symbolic solution, e.g. using an SMT solver. Note that one way to decide equivalence of two such transducers is to decide whether the union of both automata is single-valued. However, I am not aware of a symbolic algorithm to decide single-valuedness if both states and transitions are given symbolically (e.g. by bit-vector constraints).
$\newcommand{\ket}[1]{|#1\rangle}$$\newcommand{\bra}[1]{\langle#1|}$In Principles of Quantum Mechanics (2nd edition) by Shankar, Exercise 5.1.3 asks to find the wave function of the free particle by means of applying the propagator to a wave function in $x$-space. The propagator $U(t)$, which satisfies $\ket{\psi(t)} = U(t)\ket{\psi(0)}$ by definition, can be shown to have the form $U(t) = \exp(-iHt/\hbar)$ from Schrödinger's equation. Now, for the free particle, the Hamiltonian is given by $H = P^2 / 2m$. However, Shankar then says the propagator for this problem is given by $$U(t) = \exp\left[\displaystyle-\frac{it}{\hbar}\left(-\frac{\hbar^2}{2m}\frac{\partial ^2}{\partial x^2}\right)\right].$$ But $H \ne -\frac{\hbar^2}{2m}\frac{\partial^2}{\partial x^2}$ !! Sure, this is its action on $\ket{\psi}$ in $x$-space, in the sense that $$\bra x H\ket{\psi(t)} = -\frac{\hbar^2}{2m}\frac{\partial ^2}{\partial x^2}\psi(x,t)$$ but to be a pedant about the mathematics, what Shankar used for $H$ is surely not the Hamiltonian operator acting on Hilbert space. It turns out that Shankar's "propagator" does in fact propagate the wave function in $x$-space in the sense that $\psi(x,t) = U(t)\psi(x,0)$. So it still smells like a propagator. Now for my actual question: (that was just context) What kind of mathematical object is Shankar's propagator (if it's not an operator on Hilbert space)? Is it an operator on a new vector space ($x$-space, perhaps)? Also, how does it relate to the actual propagator (the one that is an operator on Hilbert space)?
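Whatever one calls the object, its action is concrete: in momentum space $H=P^2/2m$ is diagonal, so applying $U(t)$ amounts to multiplying $\tilde\psi(p)$ by $e^{-ip^2t/2m\hbar}$. A NumPy sketch (units $\hbar=m=1$; the grid and the Gaussian packet are my choices) that propagates $\psi(x,0)$ this way:

```python
import numpy as np

N, L = 1024, 40.0
dx = L / N
x = np.linspace(-L/2, L/2, N, endpoint=False)
p = 2 * np.pi * np.fft.fftfreq(N, d=dx)        # momentum grid for the FFT

psi0 = np.exp(-x**2) * np.exp(2j * x)          # Gaussian packet, mean momentum 2
psi0 /= np.sqrt(np.sum(np.abs(psi0)**2) * dx)  # normalize

t = 1.5
# U(t) acts as multiplication by exp(-i p^2 t / 2) in momentum space
psi_t = np.fft.ifft(np.exp(-0.5j * p**2 * t) * np.fft.fft(psi0))

norm   = np.sum(np.abs(psi_t)**2) * dx         # should stay 1 (unitarity)
mean_x = np.sum(x * np.abs(psi_t)**2) * dx     # Ehrenfest: <x>(t) = <p> t = 3
```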
OpenCV 3.3.0 Open Source Computer Vision void cv::ximgproc::anisotropicDiffusion (InputArray src, OutputArray dst, float alpha, float K, int niters) Performs anisotropic diffusion on an image. More... void cv::ximgproc::niBlackThreshold (InputArray _src, OutputArray _dst, double maxValue, int type, int blockSize, double k, int binarizationMethod=BINARIZATION_NIBLACK) Performs thresholding on input images using Niblack's technique or some of the popular variations it inspired. More... void cv::ximgproc::thinning (InputArray src, OutputArray dst, int thinningType=THINNING_ZHANGSUEN) Applies a binary blob thinning operation, to achieve a skeletonization of the input image. More... void cv::ximgproc::anisotropicDiffusion ( InputArray src, OutputArray dst, float alpha, float K, int niters ) Performs anisotropic diffusion on an image. The function applies Perona-Malik anisotropic diffusion to an image. This is the solution to the partial differential equation: \[{\frac {\partial I}{\partial t}}={\mathrm {div}}\left(c(x,y,t)\nabla I\right)=\nabla c\cdot \nabla I+c(x,y,t)\Delta I\] Suggested functions for c(x,y,t) are: \[c\left(\|\nabla I\|\right)=e^{{-\left(\|\nabla I\|/K\right)^{2}}}\] or \[ c\left(\|\nabla I\|\right)={\frac {1}{1+\left({\frac {\|\nabla I\|}{K}}\right)^{2}}} \] src Grayscale source image. dst Destination image of the same size and the same number of channels as src. alpha The amount of time to step forward by on each iteration (normally, it's between 0 and 1). K Sensitivity to the edges. niters The number of iterations. void cv::ximgproc::niBlackThreshold ( InputArray _src, OutputArray _dst, double maxValue, int type, int blockSize, double k, int binarizationMethod = BINARIZATION_NIBLACK ) Performs thresholding on input images using Niblack's technique or some of the popular variations it inspired.
The function transforms a grayscale image to a binary image according to the formulae: for THRESH_BINARY, \[dst(x,y) = \begin{cases}\texttt{maxValue} & \text{if } src(x,y) > T(x,y)\\ 0 & \text{otherwise}\end{cases}\] and for THRESH_BINARY_INV, \[dst(x,y) = \begin{cases}0 & \text{if } src(x,y) > T(x,y)\\ \texttt{maxValue} & \text{otherwise}\end{cases}\] where \(T(x,y)\) is a threshold calculated individually for each pixel. The threshold value \(T(x, y)\) is determined based on the binarization method chosen. For classic Niblack, it is the mean minus \( k \) times the standard deviation of the \(\texttt{blockSize} \times\texttt{blockSize}\) neighborhood of \((x, y)\). The function can't process the image in-place. _src Source 8-bit single-channel image. _dst Destination image of the same size and the same type as src. maxValue Non-zero value assigned to the pixels for which the condition is satisfied, used with the THRESH_BINARY and THRESH_BINARY_INV thresholding types. type Thresholding type, see cv::ThresholdTypes. blockSize Size of a pixel neighborhood that is used to calculate a threshold value for the pixel: 3, 5, 7, and so on. k The user-adjustable parameter used by Niblack and inspired techniques. For Niblack, this is normally a value between 0 and 1 that is multiplied with the standard deviation and subtracted from the mean. binarizationMethod Binarization method to use. By default, Niblack's technique is used. Other techniques can be specified, see cv::ximgproc::LocalBinarizationMethods. void cv::ximgproc::thinning ( InputArray src, OutputArray dst, int thinningType = THINNING_ZHANGSUEN ) Applies a binary blob thinning operation, to achieve a skeletonization of the input image. The function transforms a binary blob image into a skeletonized form using the technique of Zhang-Suen. src Source 8-bit single-channel image, containing binary blobs, with blobs having 255 pixel values. dst Destination image of the same size and the same type as src. The function can work in-place. thinningType Value that defines which thinning algorithm should be used. See cv::ximgproc::ThinningTypes
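The Perona-Malik PDE that anisotropicDiffusion solves can be sketched in a few lines of NumPy. This is an illustrative explicit-update scheme with the exponential conductance function and periodic boundaries, not OpenCV's actual implementation:

```python
import numpy as np

def perona_malik(img, alpha, K, niters):
    """Explicit scheme for dI/dt = div(c(|grad I|) grad I),
    with c(s) = exp(-(s/K)^2) and periodic boundaries (via np.roll)."""
    I = img.astype(float).copy()
    for _ in range(niters):
        # differences to the four neighbours
        dN = np.roll(I, -1, axis=0) - I
        dS = np.roll(I,  1, axis=0) - I
        dE = np.roll(I, -1, axis=1) - I
        dW = np.roll(I,  1, axis=1) - I
        # edge-stopping conductance per direction
        cN = np.exp(-(dN / K) ** 2)
        cS = np.exp(-(dS / K) ** 2)
        cE = np.exp(-(dE / K) ** 2)
        cW = np.exp(-(dW / K) ** 2)
        # divergence of the flux, stepped forward by alpha
        I += alpha * (cN * dN + cS * dS + cE * dE + cW * dW)
    return I
```

Because the conductance between a pixel pair is symmetric, the update conserves the image mean exactly, and for small alpha it monotonically smooths the image.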
Introduction This is part 4 of a five part tutorial on our plasma simulation code Starfish. In this part we continue the surface interaction topic introduced in step 3. More specifically, we will learn how to export surface properties, such as surface flux and deposition rate. We will also set up averaging to obtain averaged field properties. Let's assume that we want to determine the rate at which the ions are arriving at the object, and also how much stuff is sticking to it. These are just two examples of surface (boundary) properties that can be exported from the simulation. Steady State But before we start discussing surface flux, we need to introduce the concept of steady state. Many computer simulation methods, especially ones based on kinetic approaches, such as PIC and DSMC, work by integrating simulation particles forward in time from some known initial state. The simulation will initially pass through a transient state in which the results are constantly changing and are not indicative of the final steady solution. As such, we need to wait until steady state to start collecting cumulative data. Starfish automatically waits until steady state before starting to collect properties such as surface flux. This is an important point to note if you want to export cumulative data. By default, the steady state is determined automatically. But you can also override it. As an example, here is a typical time command: <time> <num_it>500</num_it> <dt>5e-7</dt> </time> Here is one that tells the code to pretend the steady state is reached at iteration 100: <time> <num_it>500</num_it> <dt>5e-7</dt> <steady_state>100</steady_state> </time> This second approach forces Starfish to turn on "steady-state" data collection at time step 100, regardless of whether the simulation has reached an actual steady state. This second approach is needed whenever time-variant sources are used or whenever the physical solution does not reach a real steady state.
Such is the case in some types of plasma thrusters which, despite constant inlet propellant flow, reach only an oscillatory steady state due to complex interactions between different species. How is steady state determined? So how does Starfish characterize steady state? Steady state simply means that the properties of interest are no longer changing. Let's consider the typical gas flow governing equations. First we have mass conservation: $$\dfrac{\partial \rho}{\partial t} + \nabla\cdot(\rho\vec{u}) = 0$$ Here \(\rho\) is the mass density. For steady state, we need the time dependent term on the left to vanish. Clearly, one possible check is to iterate over the computational mesh and check the density at each node against the previous value. A disadvantage of this approach is the added memory requirement. A better approach is to consider the total mass, \(M\equiv \int_V \rho \,\text{dV}\). With the mesh-based approach, \(M=\sum \rho_i V_i\), where we sum the density and the node volume over all nodes. A popular approach used in a number of PIC codes is to use the particles instead of the mesh and compare the number of simulation particles between two successive time steps. If the difference is smaller than some tolerance, the steady state is achieved: if (abs((part_count-part_count_old)/(part_count))<tol) steady_state = true; else part_count_old = part_count; This approach is equivalent to the mass check if all particles have the same specific weight. If not, then the variable weight needs to be taken into account. This approach works quite well in practice, but it checks only mass conservation. However, as we know, conservation of mass is just one of several governing equations that need to be satisfied by the flow.
For instance, we also have the momentum equation, $$\rho\left(\frac{\partial\vec{v}}{\partial t} + \vec{v}\cdot \nabla \vec{v} \right) = \vec{F}$$ So to be more accurate, we should also check the system momentum to make sure that $$\dfrac{\partial L}{\partial t} \equiv \int_V \rho\frac{\partial|\vec{v}|}{\partial t} \,\text{dV} = 0.$$ Of course, we can get even more involved: for instance, the three components of linear momentum could be considered separately. We could also consider the total energy. However, from experience, these two checks are generally sufficient. Starfish uses these two checks to implement the following algorithm: ratio_mass = (tot_mass - tot_mass_old)/tot_mass; ratio_momentum = (tot_momentum - tot_momentum_old)/tot_momentum; if (abs(ratio_mass)<1e-3 && abs(ratio_momentum)<1e-3) countdown--; else countdown = 5; if (countdown<=0) steady_state = true; tot_mass_old = tot_mass; tot_momentum_old = tot_momentum; In other words, the code waits until both mass and momentum change by less than 0.1% between time steps for 5 consecutive time steps. Once this happens, the steady state is reached. This algorithm works well for most situations, but as noted above, there may be some special cases when you will need to override it and set the steady state manually. Surface Flux One thing that occurs once steady state is reached is that the code will start collecting information about particles hitting surfaces. This includes properties such as the flux of individual materials, as well as the mass deposition rate, corresponding to the particles that stick to (are absorbed by) the surface. We can output these properties by adding a list of variables to the output statement, <output type="boundaries" file_name="boundaries.dat" format="tecplot"> <variables>flux.o+, flux-normal.o+, deprate, depflux</variables> </output> Previously, the boundaries.dat file contained just the geometry of the cylinder.
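The countdown logic described above is easy to encapsulate. A minimal Python sketch that mirrors the Starfish algorithm (the class and parameter names are mine):

```python
class SteadyStateDetector:
    """Declare steady state once both total mass and total momentum change
    by less than rtol between successive time steps for `hold` consecutive
    steps, mirroring the Starfish countdown algorithm."""

    def __init__(self, rtol=1e-3, hold=5):
        self.rtol = rtol
        self.hold = hold
        self.countdown = hold
        self.prev = None
        self.steady = False

    def update(self, tot_mass, tot_momentum):
        if self.prev is not None and not self.steady:
            m0, p0 = self.prev
            small = (abs((tot_mass - m0) / tot_mass) < self.rtol
                     and abs((tot_momentum - p0) / tot_momentum) < self.rtol)
            # reset the countdown whenever either quantity is still changing
            self.countdown = self.countdown - 1 if small else self.hold
            if self.countdown <= 0:
                self.steady = True
        self.prev = (tot_mass, tot_momentum)
        return self.steady
```

Feeding it a converging sequence (e.g. mass approaching 1, momentum approaching 2) shows it stays off during the transient and trips only after the changes fall below tolerance for five consecutive steps.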
After this addition, the file will also contain four additional values corresponding to the number flux of oxygen ions and atoms (in #/m^2/s), total mass flux summed over all materials (in kg/m^2/s), and the mass deposition rate (in kg/s). These results are plotted below in Figure 1. Data Averaging Since results from kinetic codes are quite noisy, it is a good practice to average results over several time steps to get smoother plots, and to eliminate outlier data arising from statistical noise. This is done in Starfish with the averaging command. The syntax is <!-- setup averaging --> <averaging frequency="2"> <variables>phi,nd.o+,nd.o</variables> </averaging> The averaging starts automatically at steady state, and new data will be added every 2 time steps. The variables line lists the variables to be averaged. Since averaging data adds a computational overhead, the code averages just the variables that are specified here. These averaged values are then exported using the standard output command, with the caveat that the averaged versions will have the base name ending in "-ave". For instance, <!-- save results --> <output type="2D" file_name="results/field.vts" format="vtk"> <scalars>phi, phi-ave, rho, nd.o+, nd.o, nd-ave.o+, nd-ave.o, t.o+, t.o</scalars> <vectors>[efi, efj], [u.o+,v.o+], [u.o,v.o]</vectors> </output> This command will output the following variables: instantaneous potential, averaged potential, charge density, instantaneous ion and neutral density, averaged ion and neutral density, and time averaged temperature. Because of the way temperature is calculated (by collecting velocity samples), this data is automatically time averaged. Figure 2 below shows the differences. Continue onto Part 5.
I am stuck on the following problem: Let $f(x,y) = \max \{ x^2 + y^2 , 1 \}$ and define a Borel measure $\mu$ on $\mathbb{R}$ by $\mu(E) : = (m \times m)(f^{-1}(E))$, where $m$ is Lebesgue measure. Find the Radon-Nikodym derivative $d \rho/d m$, where $\rho$ is the absolutely continuous part of $\mu$. It is easy to see that $\mu$ is a positive measure (but not finite), that $\mu((-\infty, 1)) = 0$, and that $\mu$ is not absolutely continuous with respect to $m$ (for instance $m(\{ 1\}) = 0$ while $\mu(\{1\}) = \pi$). I feel like I have a solid understanding of the measure; specifically, it is not too difficult to compute the measure of intervals, but I am unsure how to go about the actual problem. In Folland's analysis text, his proof of the Lebesgue-Radon-Nikodym theorem is not constructive and did not help me out too much. It is clear that I am looking at this problem in the wrong way, so I was hoping for a helpful hint in the right direction.
I haven't got much more to explain other than the title of the question itself. This question was written down in my notes that I took while studying Principles of Flight for my EASA commercial license. Ideally it would not. With increasing Mach number the lift curve slope of the vertical tail goes up in proportion to the Prandtl-Glauert factor $\frac{1}{\sqrt{1-Ma^2}}$, while the destabilising fuselage, as a slender body, is mainly unaffected by Mach effects. But that is only theory, which assumes a perfectly rigid body. In reality, elasticity decreases the effectiveness of the vertical tail. Higher loads lead to higher deformations, which in turn decrease the loads and with them the stabilising moments. What happens in detail depends on the structure: a light aircraft will experience decreasing directional stability as dynamic pressure increases. At supersonic speed, fin effectiveness is reduced further because the lift curve slope decreases again as Mach increases. Now even the rigid aircraft experiences decreasing directional stability, and elasticity will reduce stability further. Directional stability over Mach, from Ray Whitford's Fundamentals of Fighter Design It depends: there are two things in play here, and the net effect depends on their interaction. The most important contributor to lateral stability is the dihedral effect, $C_{l_{\beta}}$, which is not usually affected by high Mach numbers. However, in the case of a highly effective vertical tail (which improves lateral stability by introducing a rolling moment), the increase in the lift curve slope of the vertical tail increases its contribution, which improves lateral stability. However, lateral stability degrades if the lift distribution over the wing is affected by shock wave formation. The net effect depends on the magnitude of these effects.
Change in lateral stability derivatives with Mach number, from the USAF Test Pilot School chapter on lateral-directional stability. It can be seen that $C_{l_{\beta}}$ decreases in the transonic regime, which improves lateral stability. However, at these speeds, aeroelastic considerations become important, leading to effects like aileron reversal and reduction in lateral control effectiveness due to flow separation as a result of shock formation.
"Abstract: We prove, once and for all, that people who don't use superspace are really out of it. This includes QCDers, who always either wave their hands or gamble with lettuce (Monte Zuma calculations). Besides, all nonsupersymmetric theories have divergences which lead to problems with things like renormalons, instantons, anomalons, and other phenomenons. Also, they can't hide from gravity forever." Can a gravitational field possess momentum? A gravitational wave can certainly possess momentum just like a light wave has momentum, but we generally think of a gravitational field as a static object, like an electrostatic field. "You have to understand that compared to other professions such as programming or engineering, ethical standards in academia are in the gutter. I have worked with many different kinds of people in my life, in the U.S. and in Japan. I have only encountered one group more corrupt than academic scientists: the mafia members who ran Las Vegas hotels where I used to install computer equipment. " so Ive got a small bottle that I filled up with salt. I put it on the scale and it's mass is 83g. I've also got a jup of water that has 500g of water. I put the bottle in the jug and it sank to the bottom. I have to figure out how much salt to take out of the bottle such that the weight force of the bottle equals the buoyancy force. For the buoyancy do I: density of water * volume of water displaced * gravity acceleration? so: mass of bottle * gravity = volume of water displaced * density of water * gravity? @EmilioPisanty The measurement operators than I suggested in the comments of the post are fine but I additionally would like to control the width of the Poisson Distribution (much like we can do for the normal distribution using variance). Do you know that this can be achieved while still maintaining the completeness condition $$\int A^{\dagger}_{C}A_CdC = 1$$? 
As a workaround while this request is pending, there exist several client-side workarounds that can be used to enable LaTeX rendering in chat, including: ChatJax, a set of bookmarklets by robjohn to enable dynamic MathJax support in chat. Commonly used in the Mathematics chat room. An altern... You're always welcome to ask. One of the reasons I hang around in the chat room is because I'm happy to answer this sort of question. Obviously I'm sometimes busy doing other stuff, but if I have the spare time I'm always happy to answer. Though as it happens I have to go now - lunch time! :-) @JohnRennie It's possible to do it using the energy method. We just need to carefully write down the potential function, which is $U(r)=\frac{1}{2}\frac{mg}{R}r^2$ with zero point at the center of the earth. Anonymous Also I don't particularly like this SHM problem because it causes a lot of misconceptions. The motion is SHM only under particular conditions :P I see with concern that the close queue has not shrunk considerably in the last week and is still at 73 items. This may be an effect of increased traffic but not increased reviewing, or something else, I'm not sure Not sure about that, but the converse is certainly false :P Derrida has received a lot of criticism from the experts on the fields he tried to comment on I personally do not know much about postmodernist philosophy, so I shall not comment on it myself I do have strong affirmative opinions on textual interpretation, made disjoint from authorial intent, however, which is a central part of Deconstruction theory. But I think that dates back to Heidegger. I can see why a man of that generation would be leaned towards that idea. I do too.
Just for fun, I have been attempting to find polynomials with integer coefficients from a given root. I have been able to do this with square roots: for example, if say I wanted to find a polynomial with a root $\sqrt 2$, we would set up an equation like this: $$x-\sqrt 2=0$$ Then just move the square root to the other side and square, ending with: $$x=\sqrt 2$$ $$x^2=2$$ $$x^2-2=0$$ This strategy can also be done if given a single nth root, or multiple square roots, or a single nth root along with any number of square roots. However, I'm not quite sure how to tackle multiple nth roots like say $3^\frac{1}{3} + 5^\frac{1}{3}$ or $3^\frac{1}{3} + 5^\frac{1}{5}$. Just taking the first example, if we start with the equation $$x-3^\frac{1}{3}-5^\frac{1}{3}=0$$ and move the cube root of 5 to the other side and cube, we get $$x^3-3\cdot 3^\frac{1}{3}x^2+3\cdot 3^\frac{2}{3}x-3=5$$ After getting here I'm unsure of where to go next, as I don't know how to handle both the $3^\frac{1}{3}$ and $3^\frac{2}{3}$ at the same time. I would really appreciate it if someone could help with both this problem and a more general problem-solving technique for ones like the other example I showed, or something more complicated like say $\frac{3^\frac{2}{3}+5^\frac{1}{5}}{7^\frac{1}{7}+\sqrt 2}$ Let $\,x=\sqrt[3]{3}+\sqrt[3]{5}\,$, then using the identity $\,(a+b)^3=a^3+b^3+3ab(a+b)\,$: $$ x^3 = \left(\sqrt[3]{3}\right)^3+\left(\sqrt[3]{5}\right)^3 + 3 \cdot \sqrt[3]{3}\sqrt[3]{5}\cdot(\sqrt[3]{3}+\sqrt[3]{5}) = 3+5+3 \cdot \sqrt[3]{15} \cdot x = 8 + 3 \sqrt[3]{15} \, x $$ It follows that $\,x^3-8=3 \sqrt[3]{15} \, x\,$, and therefore: $$(x^3-8)^3=3^3 \cdot 15 \cdot x^3 \quad\iff\quad x^9 - 24 x^6 - 213 x^3 - 512 = 0$$ [ EDIT] Radicals are algebraic numbers, and the algebraic numbers form a field (i.e. closed under addition/multiplication and their inverses), so any rational expression in algebraic numbers is itself algebraic.
A polynomial with integer coefficients having such an expression as a root (though not necessarily the minimal one) can be determined algorithmically using polynomial resultants. For example, one would write $\,x = \displaystyle\frac{\sqrt[3]{9}+\sqrt[5]{5}}{\sqrt[7]{7}+\sqrt 2} = \frac{a+b}{c+d}\,$ as the polynomial system: $$ \begin{cases} \begin{align} cx+dx-a-b=0 \\ a^3-9=0 \\ b^5-5=0 \\ c^7-7=0 \\ d^2 - 2 = 0 \end{align} \end{cases} $$ Then, using resultants repeatedly, the variables $\,a,b,c,d\,$ can be successively eliminated from the system, leaving in the end an equation in $\,x\,$ with integer coefficients. (However, the calculations are not pretty, and would normally be done using some CAS rather than by hand.) What follows is a "school algebra" way to rationalize $\;r_1\sqrt[3]{s_1} + r_2\sqrt[3]{s_2} + r_3\sqrt[3]{s_3},\;$ where $r_1,$ $r_2,$ $r_3,$ $s_1,$ $s_2,$ and $s_3$ are rational numbers. Incidentally, I am not assuming that any of $\sqrt[3]{s_1},$ $\sqrt[3]{s_2},$ or $\sqrt[3]{s_3}$ is irrational. Note that it suffices to rationalize $\;\sqrt[3]{k} + \sqrt[3]{m} + \sqrt[3]{n},\;$ where $k,$ $m,$ and $n$ are rational numbers (e.g. write $\;r_1\sqrt[3]{s_1} = \sqrt[3]{(r_1)^3s_1},\;$ etc.). In fact, without loss of generality, we can assume $k,$ $m,$ and $n$ are integers, although doing so does not make any difference in what follows. Actually, the reduction to $\sqrt[3]{k} + \sqrt[3]{m} + \sqrt[3]{n}$ is not needed. I only did it to reduce the symbol clutter.
Let $\;a = \sqrt[3]{k}\;$ and $\;b = \sqrt[3]{m}\;$ and $\;c = \sqrt[3]{n}.\;$ Then $$ (a+b+c)(a^2 + b^2 + c^2 - ab - bc - ac) \;\; = \;\; a^3 + b^3 + c^3 - 3abc$$ $$ = \;\; (k + m + n) - 3\sqrt[3]{kmn} \;\; = \;\; (k+m+n) -\sqrt[3]{27kmn} $$ Let $\;x = (k+m+n)\;$ and $\;y = \sqrt[3]{27kmn}.\;$ Then $$ (x-y)(x^2 + xy + y^2) \;\; = \;\; x^3 - y^3 \;\; = \;\; (k + m + n)^3 - 27kmn $$ Therefore, $$ \frac{1}{\sqrt[3]{k} + \sqrt[3]{m} + \sqrt[3]{n}}$$ can be rationalized by multiplying both the numerator and the denominator by $$ (a^2 + b^2 + c^2 - ab - bc - ac)(x^2 + xy + y^2)$$ where $a,$ $b,$ $c,$ $x,$ and $y$ are expressions involving $k,$ $m,$ and $n,$ as indicated above. The result will be $$ \frac{(a^2 + b^2 + c^2 - ab - bc - ac)(x^2 + xy + y^2)}{(k + m + n)^3 - 27kmn} $$
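Both results above can be sanity-checked in floating point (a numerical check only, not a proof). The first assertion checks that $x=\sqrt[3]3+\sqrt[3]5$ is a root of the nonic from the first answer; the second checks the rationalization identity with the sample values $k,m,n = 2,3,5$ (my choice):

```python
# Root check for x^9 - 24 x^6 - 213 x^3 - 512 at x = 3^(1/3) + 5^(1/3)
x = 3 ** (1 / 3) + 5 ** (1 / 3)
residual = x**9 - 24 * x**6 - 213 * x**3 - 512

# Rationalization check: 1/(k^(1/3) + m^(1/3) + n^(1/3)) equals
# (a^2+b^2+c^2-ab-bc-ac)(X^2+XY+Y^2) / ((k+m+n)^3 - 27kmn)
k, m, n = 2, 3, 5
a, b, c = k ** (1 / 3), m ** (1 / 3), n ** (1 / 3)
X, Y = k + m + n, (27 * k * m * n) ** (1 / 3)
rationalized = ((a * a + b * b + c * c - a * b - b * c - a * c)
                * (X * X + X * Y + Y * Y)
                / ((k + m + n) ** 3 - 27 * k * m * n))
```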
Suppose that $M$ is a matrix of order $n$ such that the entries of $M$ come from the finite field $GF(2^n)$. The matrix $M$ is called an MDS (Maximum Distance Separable) matrix if and only if every square sub-matrix of $M$ is non-singular over $GF(2^n)$. For a matrix of order $n$, we should compute $\sum_{i=1}^n \, {n \choose i }^2$ determinants over $GF(2^n)$ to find out whether the matrix is MDS or not. So, when the order of the matrix is less than $10$, we can use the mentioned definition to check whether it is MDS. But for matrices of large order, this check is not practical. My question: Is there a probabilistic method to find out whether a matrix is MDS or not? I would appreciate any suggestions. Edit 1: By searching, I found one class of MDS matrices over the field of real numbers. Consider the companion matrix in the following form $$ C_n=\left( \begin{array}{cccccc} 0 &1 &0 &\cdots &\cdots &0 \\ 0 &0 &1 &\ddots &\ddots &\vdots \\ \vdots &\ddots &\ddots &\ddots &\ddots &\vdots \\ \vdots &\ddots &\ddots &\ddots &\ddots &0 \\ 0 &\cdots &\cdots &0&0 &1 \\ 1 &1 &1 &\cdots &1 &1 \end{array} \right)_{n\times n} $$ The $n(n-1)$-th power of the matrix $C_n$ is an MDS matrix, and the matrices $C_n^i$, $1\leq i \leq n^2-n-1$, are not MDS matrices. For example, consider the matrix $C_3$ as shown $$ C_3= \left( \begin {array}{ccc} 0&1&0\\ 0&0&1\\ 1&1&1 \end {array} \right) $$ The matrices $C_3^i$, $1\leq i \leq 5$, are not MDS matrices, but the matrix $C_3^6$ as follows $$ C_3^6= \left( \begin {array}{ccc} 4&6&7\\ 7&11&13\\ 13&20&24 \end {array} \right) $$ is an MDS matrix. That is, the entries of $C_3^6$ are non-zero and the determinant of $C_3^6$ is $1$. In addition, all square sub-matrices of order $2$ of the matrix $C_3^6$ have non-zero determinant. Edit 2: In answer to @Daniel McLaury's comment, you can check that the matrices $C_3^j$, $15 \leq j \leq 18$, are not MDS.
For example, the matrix $C_3^{17}$ is as follows $$ C_3^{17}= \left[ \begin {array}{ccc} 3136&4841&5768\\ 5768& 8904&10609\\ 10609&16377&19513 \end {array} \right] $$ and the determinants of the following two sub-matrices of $C_3^{17}$ are zero $$ \begin{array}{ccc} \left[ \begin {array}{cc} 5768&8904\\ 10609&16377 \end {array} \right] &, & \left[ \begin {array}{cc} 3136&5768\\ 5768&10609 \end {array} \right] \end{array} $$ In fact, MDS matrices are designed over finite fields; I just made this example over the field of real numbers to clarify my question. You can see the original form of my question, with the example, on MathOverflow.
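For small orders, the definition can be checked directly by brute force. A Python sketch over the integers (over $GF(2^n)$ one would replace the determinant with field arithmetic), applied to the two example matrices $C_3^6$ and $C_3^{17}$ from the question:

```python
from itertools import combinations

def det(A):
    # cofactor expansion; fine for the small sub-matrices tested here
    if len(A) == 1:
        return A[0][0]
    return sum((-1) ** j * A[0][j] * det([row[:j] + row[j + 1:] for row in A[1:]])
               for j in range(len(A)))

def is_mds(M):
    # MDS <=> every square sub-matrix is non-singular
    n = len(M)
    return all(det([[M[r][c] for c in cols] for r in rows]) != 0
               for k in range(1, n + 1)
               for rows in combinations(range(n), k)
               for cols in combinations(range(n), k))

C3_6  = [[4, 6, 7], [7, 11, 13], [13, 20, 24]]
C3_17 = [[3136, 4841, 5768], [5768, 8904, 10609], [10609, 16377, 19513]]
```

On these inputs the checker confirms the claims in the question: $C_3^6$ passes, while $C_3^{17}$ fails because of the two singular $2\times2$ sub-matrices shown above.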
I'm trying to find conditions on the gluing map between two manifolds so that the quotient space will be a smooth manifold, and the inclusion map will be a diffeomorphism. Specifically, Suppose $U_j$ is an open subset of a smooth $m$-manifold $M_j$, for $j \in \{1,2\}$, and $h: U_1 \to U_2$ is a diffeomorphism. Let $\sim$ be the smallest equivalence relation on the disjoint union $M_1 \sqcup M_2$ such that $u \sim h(u)$ for all $u\in U_1$. Let $\bar{M} = (M_1 \sqcup M_2) / \sim$, define $\pi:M_1\sqcup M_2 \to \bar{M}$ to be the quotient map, and equip $\bar{M}$ with the quotient topology. Find conditions on $h$ such that $\bar{M}$ admits the structure of a smooth $m$-manifold such that $$\pi|_{M_j}: (M_1 \sqcup M_2) \supset M_j \to \pi(M_j) \subset \bar{M}$$ is a diffeomorphism onto an open set of $\bar{M}$ for $j\in \{1,2\}$. My attempt so far: If $\{A_i, \phi_i \}$ is an atlas for $M_1$ and $\{B_j, \psi_j\}$ is an atlas for $M_2$, I'm looking for a natural way to define an atlas $C_k, \zeta_k$ on $\bar{M}$. If I can find that, then I need to show that $$\zeta_k \circ \pi_{M_1} \circ \phi_i^{-1}$$ is a diffeomorphism for all $A_i$. $h$, I'm thinking, should somehow be compatible with the charts $\phi_i, \psi_j$ on $A_i \cap U_1$ and $B_j \cap U_2$. But here I'm kind of stuck. Any ideas?
[Questions about Machine Learning] Chapter I Mathematics Fundamentals In this chapter, we will discuss some basic mathematics that you need to know for further study. Q. What are the relations between scalar, vector, matrix, and tensor? A. A vector is an ordered finite list of numbers. Vectors are usually written as vertical arrays, surrounded by square or curved brackets, as seen below. $$\begin{pmatrix}-1.1 \\ 0.0 \\ 3.6 \\ -7.2 \end{pmatrix} \quad \text{or} \quad \begin{bmatrix}-1.1 \\ 0.0 \\ 3.6 \\ -7.2 \end{bmatrix}$$ Sometimes, they are written as numbers separated by commas and surrounded by parentheses, as seen below. $$(-1.1, 0.0, 3.6, -7.2)$$ A vector is often denoted by a lowercase symbol such as $a$. We can get an element (also known as an entry, coefficient, or component) of a vector by its index, and the $i$th element of the vector $a$ is therefore denoted as $a_i$, where the subscript $i$ is an integer index into the vector (for a vector with $n$ elements, $1 \le i \le n$). If two vectors have the same size, and more importantly, each of the corresponding entries is the same, then the two vectors are equal, which is denoted as $a=b$. A scalar is a single number or value. In most applications, scalars are real numbers. We usually use an italic lowercase symbol to denote a scalar. For example, $\textit{a}$ is a scalar. A matrix is a rectangular array of numbers, i.e., a 2-dimensional data table. In a data table, a row typically represents an item and a column represents a feature shared by all the items. A matrix is usually denoted by a capital letter, $A$ for example. A tensor is an array with more than 2 dimensions. Generally, if the elements of an array are distributed in a regular grid with several dimensions, we call it a tensor. We use a capital letter to denote a tensor, the same as with a matrix, $A$ for example. An element in a tensor is denoted as $A_{i,j,k}$. Relations between them Scalar is a 0-dimensional tensor.
Vector is a 1-dimensional tensor. For example, with a scalar, we could get the length of a rod, but we cannot know the direction of this rod. With a vector, we could know both the length and direction of a rod. With a tensor, we may be able to know both the length and direction of a rod, and we could even know more about the rod (for example, the degree of deflection). Q. What are the differences between tensor and matrix? A. From the perspective of algebra, a matrix is a generalization of a vector: a matrix is a 2-dimensional table, and an $n$-dimensional tensor is, loosely speaking, an $n$-dimensional table. Note that this is not a strict definition of a tensor. From the perspective of geometry, a tensor is a geometric quantity: it does not change with a coordinate transformation of the frame of reference; a vector has this feature too. A tensor (of order 2) can be represented by a $3\times 3$ matrix or, more generally, an $n\times n$ matrix. A scalar can be regarded as a $1\times 1$ matrix, while a vector with $n$ items can be regarded as a $1\times n$ matrix. Q. What will happen if I multiply a matrix and a vector? A. You can only multiply an $m\times n$ matrix and an $n$-item vector, and the result is an $m$-item vector. The key is to regard each row of the matrix as a vector and take its dot product with the given vector. For example, you can multiply the $4\times 2$ matrix $$\begin{bmatrix}1 & 2 \\ 0.0 & 1 \\ 3.6 & 3 \\ -7.2 & 2 \end{bmatrix}$$ and the 2-item vector $$\begin{bmatrix}-1.1 \\ 0.0 \end{bmatrix}$$ to get a 4-item vector. Q. What is a norm? In mathematics, a norm is a function that assigns a strictly positive length or size to each nonzero vector in a vector space. There are many different types of norms for a vector or a matrix. For example, 1-norm: $ \|x\|_1 = \sum_{i=1}^N |x_i| $; 2-norm (Euclidean norm): $ \|x\|_2 = \sqrt{\sum_{i=1}^N x_i^2} $; 3-norm: $ \|x\|_3 = \left(\sum_{i=1}^N |x_i|^3\right)^{1/3} $. Q. What are the norms of a matrix and a vector? We define a vector as $\vec{x}=(x_1,x_2,\ldots,x_N)$. Its norms are given by the formulas above. Q. What is a positive definite matrix? Q. How to judge if a matrix is positive definite? Q. What is a derivative?
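The points above can be illustrated with NumPy (my own sketch, reusing the numbers from the examples above):

```python
import numpy as np

s = np.float64(-1.1)                      # scalar: 0-dimensional
v = np.array([-1.1, 0.0, 3.6, -7.2])      # vector: 1-dimensional
A = np.array([[1.0, 2.0],
              [0.0, 1.0],
              [3.6, 3.0],
              [-7.2, 2.0]])               # 4x2 matrix: 2-dimensional
T = np.arange(24).reshape(2, 3, 4)        # 3-dimensional tensor
print(np.ndim(s), v.ndim, A.ndim, T.ndim)  # 0 1 2 3

x = np.array([-1.1, 0.0])                 # a 2-item vector
y = A @ x   # (4x2) times (2,) gives a 4-item vector; each entry is row . x
print(y)    # approximately [-1.1, 0, -3.96, 7.92]

print(np.linalg.norm(v, 1))   # 1-norm: |-1.1| + |0| + |3.6| + |-7.2| = 11.9
print(np.linalg.norm(v, 2))   # 2-norm: square root of the sum of squares
```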
Q. How to calculate derivatives? Q. What are the differences between derivatives and partial derivatives? Q. What is an eigenvalue? What is an eigenvector? What is eigenvalue decomposition? Q. What is a singular value? What is singular value decomposition? Q. What are the differences between singular values and eigenvalues, and between their decompositions? Q. What is probability? Q. What are the differences between a variable and a random variable? Q. What are the common probability distributions? Q. What is conditional probability? Q. What is a joint distribution? What is a marginal distribution? What are their relations? Q. What is the chain rule for conditional probability? Q. What is independence and conditional independence? Q. What is expectation? What is variance? What is covariance? What is the correlation coefficient?
Is it enough to show that MSE = 0 as $n\rightarrow\infty$? I also read in my notes something about plim. How do I find plim and use it to show that the estimator is consistent? EDIT: Fixed minor mistakes. Here's one way to do it: An estimator of $\theta$ (let's call it $T_n$) is consistent if it converges in probability to $\theta$. Using your notation $\mathrm{plim}_{n\rightarrow\infty}T_n = \theta $. Convergence in probability, mathematically, means $\lim\limits_{n\rightarrow\infty} P(|T_n - \theta|\geq \epsilon)= 0$ for all $\epsilon>0$. The easiest way to show convergence in probability/consistency is to invoke Chebyshev's Inequality, which states: $P((T_n - \theta)^2\geq \epsilon^2)\leq \frac{E(T_n - \theta)^2}{\epsilon^2}$. Thus, $P(|T_n - \theta|\geq \epsilon)=P((T_n - \theta)^2\geq \epsilon^2)\leq \frac{E(T_n - \theta)^2}{\epsilon^2}$. And so you need to show that $E(T_n - \theta)^2$ goes to 0 as $n\rightarrow\infty$. EDIT 2: The above requires that the estimator is at least asymptotically unbiased. As G. Jay Kerns points out, consider the estimator $T_n = \bar{X}_n+3$ (for estimating the mean $\mu$). $T_n$ is biased both for finite $n$ and asymptotically, and $\mathrm{Var}(T_n)=\mathrm{Var}(\bar{X}_n)\rightarrow 0$ as $n\rightarrow \infty$. However, $T_n$ is not a consistent estimator of $\mu$. EDIT 3: See cardinal's points in the comments below.
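A quick simulation (illustrative, with made-up numbers: true mean $\mu = 2$) shows the contrast between the consistent estimator $\bar{X}_n$ and the biased estimator $T_n = \bar{X}_n + 3$ from EDIT 2:

```python
import numpy as np

rng = np.random.default_rng(0)
mu = 2.0  # true mean we are estimating

errors = {}
for n in (100, 10_000, 1_000_000):
    x = rng.normal(loc=mu, scale=1.0, size=n)
    t_good = x.mean()      # consistent: MSE -> 0
    t_bad = x.mean() + 3   # Var -> 0 but asymptotically biased: not consistent
    errors[n] = (abs(t_good - mu), abs(t_bad - mu))
    print(n, errors[n])
```

As $n$ grows, the error of $\bar{X}_n$ shrinks toward 0 while the error of $\bar{X}_n + 3$ stays near 3, matching the point that a vanishing variance alone is not enough without asymptotic unbiasedness.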
I have been reading chapter 13.4 ("Power Spectrum Estimation Using the FFT") of the Numerical Recipes book. Some things related to the expectation value of the "periodogram estimate of the power spectrum" are not clear to me though. Background Suppose we have an equally-spaced $N$-point sample $ c_0 \ldots c_{N-1} $ of the discrete signal $ c(t) $ and its Fourier transform $$ C_k = \sum_{j=0}^{N-1} c_j \exp(2\pi ijk/N), \qquad k = 0, \ldots, N-1. $$ Then we can define the "periodogram estimate of the power spectrum" $P(f_k)$ as $$ P(f_k) = \frac{1}{N^2} \left[|C_k|^2 + |C_{N-k}|^2\right] $$ with $k = 1, 2, \ldots, \frac{N}{2}-1$ and $f_k = \frac{k}{N\Delta}$ for $k = 0, 1, \ldots, \frac{N}{2}$ ($\Delta$ is the sampling interval). So far everything is clear. Now the book says "... the variance of the periodogram estimate at a frequency $f_k$ is always equal to the square of its expectation value at that frequency. In other words, the standard deviation is always 100 percent of the value, independent of N!" My questions What is the expectation value of the periodogram estimate at a frequency $f_k$? Why (can you show me how to derive it)? The variance of the estimate doesn't decrease with $N$, so changing the length of the input doesn't improve the power spectrum estimation. But if I have a non-integral number of periods in my sample, more samples (a longer sampling time) means less weight on the end of the signal (where there is a discontinuity), so less smearing of the measured frequency components due to the mismatch between the beginning and the end of the signal. So in this case a higher $N$ means a better estimation? Where am I wrong? If the SD is always 100 percent of the expectation value, doesn't that make this simple periodogram estimation (using just a single DFT of $N$ samples) pretty much useless? Should one always opt for Welch's or Bartlett's method, e.g.? Thank you very much in advance!
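As a numerical illustration of the book's claim (my own sketch, not from Numerical Recipes): for Gaussian white noise, the periodogram value at a fixed interior frequency bin has a standard deviation roughly equal to its mean, across independent realizations, regardless of $N$:

```python
import numpy as np

rng = np.random.default_rng(1)
N, trials, k = 64, 20_000, 5    # k: an interior frequency bin

P = np.empty(trials)
for t in range(trials):
    c = rng.normal(size=N)                      # white-noise signal
    C = np.fft.fft(c)
    P[t] = (abs(C[k])**2 + abs(C[N - k])**2) / N**2

print(P.std() / P.mean())   # close to 1: the SD is ~100% of the mean
```

For a real signal $|C_{N-k}| = |C_k|$, and $|C_k|^2$ for white noise is exponentially distributed, whose standard deviation equals its mean; this is the "100 percent" the book refers to.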
Thanks to Sudix I found this answer that helped me find the solution, which I will repeat here for the sake of completeness (I corrected one mistake in the expression for $a_{m,k}$). So, first let us re-label the summation indices in the conjecture to coincide with the notation of the cited answer that we will use ($k\rightarrow m$, $a\rightarrow k$, $b\rightarrow r$; note, as Sudix also pointed out, that the two sums are summing the same thing, therefore it is squared): $$\sum_{k=0}^{n} k {n\choose{m}}{n\choose{k}}^{-1} \left(\sum_{r=\max[0,k+m-n]}^{\min[m,k]} (-1)^{r} {m\choose{r}}{n-m\choose{k-r}}\right)^2=2^{n-1}n$$ The credit for this proof goes to Dan Carmon. Let $V$ be the space of homogeneous polynomials of degree $n$ in two variables $x$ and $y$, and let $T$ be the linear map defined by $T(p)(x,y)=p(x+y,x-y)$. Now note that: $T^{2}(p)(x,y)=T(p)(x+y,x-y)=p\big((x+y)+(x-y),(x+y)-(x-y)\big)=p(2x,2y)=2^{n}p(x,y)$ Let $A$ be the matrix representation of the linear transformation $T$ with the basis $\left \{ x^{n},x^{n-1}y,...,xy^{n-1},y^{n} \right.\left.
\right \}$ $\sum_{k=0}^{n}a_{m,k}x^{n-k}y^{k}=T(x^{n-m}y^{m})=(x-y)^{m}(x+y)^{n-m}\\=\sum_{k=0}^{n}\sum_{r=\max(0,k+m-n)}^{\min(m,k)}(-1)^{r}\binom{m}{r}\binom{n-m}{k-r}x^{n-k}y^{k}$ We are getting closer to the end because now we know that $a_{m,k}=\sum_{r=\max(0,k+m-n)}^{\min(m,k)}(-1)^{r}\binom{m}{r}\binom{n-m}{k-r}$ and with some manipulation $\binom{m}{r}\binom{n-m}{k-r}\binom{n}{m}=\frac{m!(n-m)!n!}{r!(m-r)!(k-r)!(n-m-k+r)!(n-m)!m!}\\=\frac{n!}{r!(m-r)!(k-r)!(n-m-k+r)!}=\frac{n!(n-k)!k!}{r!(m-r)!(k-r)!(n-m-k+r)!k!(n-k)!}=\binom{k}{r}\binom{n-k}{m-r}\binom{n}{k}$ So $\binom{n}{m}a_{m,k}=\binom{n}{k}a_{k,m}$. The identity we seek to prove is equivalent to $\sum_{k=0}^{n}\frac{\binom{n}{m}a_{m,k}^{2}}{\binom{n}{k}}=2^{n}$, and the last identity yields: $\sum_{k=0}^{n}\frac{\binom{n}{m}a_{m,k}^{2}}{\binom{n}{k}}=\sum_{k=0}^{n}a_{m,k}a_{k,m}=\left [ A^{2} \right ]_{m,m}=\left [ 2^{n} I\right ] _{m,m}=2^{n} $ $\blacksquare$ Note that with the expression for $a_{m,k}$ we can rewrite what we have to prove: $$\sum_{k=0}^{n} k {n\choose{m}}{n\choose{k}}^{-1} a_{m,k}^2=2^{n-1}n\tag{1}\label{1}$$ It is important to note that everything that has been said in the proof before stays valid. Therefore, using that $\binom{n}{m}a_{m,k}=\binom{n}{k}a_{k,m}$ we can write $$\sum_{k=0}^{n} k {n\choose{m}}{n\choose{k}}^{-1} a_{m,k}^2=\sum_{k=0}^{n}k \,a_{m,k}a_{k,m}\tag{2}\label{2}$$ Let us note that $$a_{m,k}a_{k,m}=a_{m,n-k}a_{n-k,m}\tag{3}\label{3}$$ (see proof below). Consider the following identity (we are summing the very same thing, going through the same indices just from the other direction): $$\sum_{k=0}^{n}k \,a_{m,k}a_{k,m}=\sum_{k=0}^{n}(n-k) \,a_{m,n-k}a_{n-k,m}\tag{4}\label{4}$$ Using \eqref{3} we have that $$\sum_{k=0}^{n}k \,a_{m,k}a_{k,m}=\sum_{k=0}^{n}(n-k) \,a_{m,k}a_{k,m}\tag{5}\label{5}$$$$2\sum_{k=0}^{n}k \,a_{m,k}a_{k,m}=n\sum_{k=0}^{n} \,a_{m,k}a_{k,m}\tag{6}\label{6}$$We know from the proof that $\sum_{k=0}^{n} \,a_{m,k}a_{k,m}=2^n$.
Therefore, we have that $$\sum_{k=0}^{n}k \,a_{m,k}a_{k,m}=n2^{n-1}$$ which is exactly what we wanted (see \eqref{2}). Now let us prove \eqref{3}. $$a_{m,k}=\sum_{r=\max(0,k+m-n)}^{\min(m,k)}(-1)^{r}\binom{m}{r}\binom{n-m}{k-r}\tag{7}\label{7}$$$$a_{m,n-k}=\sum_{r=\max(0,m-k)}^{\min(m,n-k)}(-1)^{r}\binom{m}{r}\binom{n-m}{n-k-r}=\sum_{r=\max(0,m-k)}^{\min(m,n-k)}(-1)^{r}\binom{m}{m-r}\binom{n-m}{k+r-m}$$ Now if we introduce $s=m-r$ and take care of the limits, we get \eqref{7} with an extra $(-1)^m$ factor, that is, $a_{m,k}=(-1)^m a_{m,n-k}$. It is easy to check that also $a_{k,m}=(-1)^m a_{n-k,m}$. Therefore, $$a_{m,k}a_{k,m}=(-1)^{2m} a_{m,n-k}a_{n-k,m}=a_{m,n-k}a_{n-k,m}.$$ Thank you for all your help! A nice combinatorial solution, as InterstellarProbe suggested, would still be desirable though.
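The identity (and the intermediate coefficients $a_{m,k}$) can also be checked numerically; here is a small Python verification using exact arithmetic (my own sketch, not part of the original answer):

```python
from fractions import Fraction
from math import comb

def a(m, k, n):
    """Coefficient a_{m,k}: coefficient of x^{n-k} y^k in (x-y)^m (x+y)^{n-m}."""
    return sum((-1)**r * comb(m, r) * comb(n - m, k - r)
               for r in range(max(0, k + m - n), min(m, k) + 1))

def lhs(n, m):
    """Left-hand side of identity (1)."""
    return sum(Fraction(k * comb(n, m) * a(m, k, n)**2, comb(n, k))
               for k in range(n + 1))

for n in range(1, 9):
    for m in range(n + 1):
        assert lhs(n, m) == 2**(n - 1) * n
print("verified")
```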
I was just doing some practice questions for a test, but have been stumped by the following for the past couple of hours. I'm given a system such that: $$\frac{du}{dt} = v ~ ~ \& ~ ~ \frac{dv}{dt} = -f(u)$$ with Hamiltonian $$H = \frac{1}{2} \left(\frac{du}{dt}\right)^2 + \int f \, du.$$ I have to show that using the forward Euler method leads to a global error for $H$ that grows like $nh^2$ for step size $h$ and number of steps $n$. I know that the global error can be calculated via $ \epsilon = |U^n - U(T)|$, but I'm not sure how to apply it in this case. Thanks for any help! EDIT: So if I understand correctly, I have to calculate $|H_{n+1}-H_n|$.
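As an illustrative special case (my choice, not given in the question), take $f(u) = u$, so $H = \frac{1}{2}(v^2 + u^2)$. One forward Euler step maps $(u,v) \mapsto (u + hv,\, v - hu)$, and a direct computation gives $H_{n+1} = (1+h^2)H_n$, so the energy error after $n$ steps is $H_0\big((1+h^2)^n - 1\big) \approx H_0\, n h^2$ while $nh^2$ is small, which is exactly the advertised growth:

```python
# Forward Euler for u' = v, v' = -u (i.e. f(u) = u), with H = (v^2 + u^2)/2.
# One step maps (u, v) -> (u + h v, v - h u), and since
# (u + h v)^2 + (v - h u)^2 = (1 + h^2)(u^2 + v^2),
# the energy satisfies H_{n+1} = (1 + h^2) H_n exactly.

h, nsteps = 0.01, 1000
u, v = 1.0, 0.0
H0 = 0.5 * (v * v + u * u)
for _ in range(nsteps):
    u, v = u + h * v, v - h * u
H = 0.5 * (v * v + u * u)
print(H - H0, H0 * ((1 + h * h)**nsteps - 1))  # both roughly H0 * n * h^2
```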
The Fourier Transform is one of the most frequently used computational tools in earthquake seismology. Using an FFT requires some understanding of the way the information is encoded (frequency ordering, complex values, real values, etc.), and these details are generally well documented in the various software packages used in the field. I’ll assume that you have a good handle on these issues and that you know the basic idea about using forward and inverse transforms. I want to focus on an aspect of FFT use that often confuses students: the scaling provided by the transforms. Scaling can be confusing because, to make the functions more efficient and flexible, the scaling is often omitted, or it is implicitly included by assuming that you plan to forward and inverse transform the same signal. In seismology we usually start with a signal that has a specified sample rate, \(\delta t\), and a length, \(N\). I’ll assume that we are dealing with a signal for which \(N\) is a power of two, which used to be the case whenever we used FFTs but is less of a requirement now that computers are much faster. FFT routines don’t use a specific value of the sample rate, \(\delta t\), at all, but they don’t always completely ignore it – they sometimes include \(\delta t\) implicitly in their scaling. For most cases, when you compute a forward transform, you have to scale the result by the sample rate to get the correct values (this \(\delta t\) arises from the \(\delta t\) in the Fourier Transform integral that the DFT is approximating). Consider the case in Matlab. We’ll construct a unit area “spike” in Matlab by creating a signal with a single non-zero value. So we can keep track of the physical units, which are also ignored in the computation but are essential to keep track of when doing science, I’ll assume that our signal is a displacement seismogram with units of meters and that the units of \(\delta t\) are seconds.
To create a signal with unit area, we must set the amplitude equal to \(1 / \delta t ~\mbox{meters}\). That is, the spike amplitude has the same numerical value as \(1/\delta t\), but units of length. For a spike with an amplitude \(1/\delta t\), the area under the curve in the time domain is (using the formula for the area of a triangle with width \(2\delta t\)): \[Area = \frac{1}{2} \cdot 2\,\delta t~\mbox{[s]} \cdot \frac{1}{\delta t}~\mbox{[m]}= 1 \mbox{ [m-s] .} \] The Fourier Transform of a unit area spike should have a value of unity. Let’s see what we get in Matlab. >> dt = 0.1; >> s = [1/dt,0,0,0,0,0,0,0]; >> shat = fft(s) % amplitudes are wrong without the correct scaling shat = 10 10 10 10 10 10 10 10 The spectrum is a factor of \(1/\delta t\) too large. We get the correct result by multiplying by \(\delta t\). >> shat = fft(s) * dt % you must scale by dt to get the correct values shat = 1 1 1 1 1 1 1 1 When we inverse transform, we have to account for the \(\delta f\) in the Fourier Transform integral. That means we multiply by \(\delta f\). For a signal of length \(N\) and sample rate \(\delta t\), the frequency sampling rate, \(\delta f\), is \[\delta f = \frac{1}{N \cdot \delta t}~. \] Here’s the tricky part: Matlab multiplies the inverse transform by \(1/N = \delta t \cdot \delta f\). So we have to correct for that by either dividing by \(\delta t\), or simply removing the \(1/N\) factor and applying the \(\delta f\), which makes things easier to understand. Here’s a summary: I explicitly multiply the results of the inverse transform ( ifft) by \(N\) to remove Matlab’s scaling, then I scale by \(\delta f\).
dt = 0.1; N = 8; df = 1/(N*dt); % define a unit area signal s = [1/dt,0,0,0,0,0,0,0] % forward transform - scale by dt shat = fft(s) * dt % inverse transform - remove 1/N factor, then scale by df ifs = (N * ifft(shat)) * df Summary The first thing that I do when I start using a new FFT, is to test the amplitudes using a unit-area delta-function approximation (which is really a triangle). The unit area time domain signal must have a value of unity at the zero frequency, and if it’s a spike, the spectrum should be flat. If you think this is not that critical, then you need to think about deconvolutions, earthquake-source spectra calculations, etc. To get the correct amplitudes, you have to apply your scaling (unless you do a forward and then inverse transform without applying any operations using physical quantities to the original signal). There’s nothing special about Matlab here – you can use the same tests to understand the scaling in Mathematica, Python, Fortran, C, JavaScript, …
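Here is the same unit-area spike test in Python/NumPy (my own transcription of the Matlab session above; NumPy's ifft, like Matlab's, includes the \(1/N\) factor):

```python
import numpy as np

dt = 0.1
N = 8
df = 1.0 / (N * dt)

s = np.zeros(N)
s[0] = 1.0 / dt                  # unit-area spike (amplitude 1/dt)

shat = np.fft.fft(s) * dt        # forward transform: scale by dt
print(shat.real)                 # flat spectrum of ones

# numpy's ifft divides by N; remove that factor, then scale by df
ifs = np.fft.ifft(shat).real * N * df
print(ifs)                       # recovers the original spike
```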
Unfortunately, as with many `real world' examples, before we can start to actually do any probabilistic calculations we have to determine what model we would like to use. In this instance, before we pick a distribution for the problem, we would want to know more about how the typos occur. Consider the two following descriptions: The author of the book has told you that they deliberately put exactly 500 typos in, but not where they are. The author accidentally makes typos as they are writing the book, and at the end sees that the spell checker says that there are a total of 500. I claim that you would want very different probability distributions to handle these two interpretations, even though they both represent the simpler question in the original post. One argument for this is: consider the scenario where every typo occurs on the first page. If we were in scenario 1 (the author has deliberately planted typos) then we might say that this should have equal probability to any other distribution of the typos in the book. In the second scenario, we would say that this is a highly unlikely outcome, as the typos are occurring by accident, and if there were so many typos on the first page, you'd expect a large number throughout (violating the maximum of 500). Below I will discuss three methods, and their limitations. Firstly I address the original poster's approach, then a second similar method, and finally a very different method which addresses the second problem description above. The model proposed in the original post I generalize the setup and suppose that there are $n$ typos ($n=500$ in the original post), over $m$ pages ($m=500$ in the original). In the original post, the model suggested is to consider each possible assignment $(x_1,\ldots,x_m)$ such that $\sum_{k=1}^m x_k =n$ as equally likely; therefore this is in line with the first proposed scenario.
In the form given in the post, it also assumes that the typos are indistinguishable from each other, which gives rise to the enumeration of exactly $\binom{n+m-1}{m-1}$ such vectors $\underline x$, see `stars and bars'. From this, the assumed model is, for all valid $\underline x$, \begin{align*}P(\underline x) &= \frac{1}{\# \left\{ (x_1,\ldots, x_m) \, \colon \, \sum_{k=1}^m x_k =n , \, x_k \in \mathbb{N}\right\}} \\& = \frac1{\binom{n+m-1}{m-1}}\end{align*}The original poster then correctly identifies that the probability that a given page (we'll consider page $1$ without loss of generality) contains exactly $j$ typos is: \begin{align*}P(x_1 = j) & = P\left( \textstyle \sum_{k=2}^m x_k = n-j \right) \\& = \frac{\# \left\{ (x_1,\ldots, x_{m-1}) \, \colon \, \sum_{k=1}^{m-1} x_k =n-j , \, x_k \in \mathbb{N}\right\}}{\# \left\{ (x_1,\ldots, x_m) \, \colon \, \sum_{k=1}^m x_k =n , \, x_k \in \mathbb{N}\right\}} \\& = \frac{\binom{n-j + m-2}{m-2}}{\binom{n+m-1}{m-1}}\end{align*} Unfortunately this binomial expression does not have a simple closed form, and does not describe a commonly used `stock' probability distribution. Therefore the chances of getting a nice closed form expression for $P(x_1 \leq t)$ for some $t \geq 0$ ($t = 3$ in the original post) are slim. A second model for the same scenario One question that arises from this approach is: why treat the typos as indistinguishable? By this, I am referring to the fact that in enumerating the number of vectors $\underline x$ above, using stars and bars, we were answering the combinatorial question: how many ways are there to put $n$ indistinguishable balls in $m$ boxes? One could instead consider each typo to be distinguishable: i.e. in the scenario where the author had told us the typos are "nathematics", "qrobability", "slackexchange", etc. This is another case of choosing how to interpret the question, and has to be decided by the user ahead of doing any calculations.
Since the typos are now distinguishable, the number of ways to distribute $n$ typos across $m$ pages is given by $m^n$. Moreover the equivalent probability distribution for $\underline x$ (with $x_k$ still the number of typos on page $k$) is now given by a special case of the multinomial distribution: $$P( x_1,\ldots, x_m) = m^{-n} \binom{n}{x_1,\ldots, x_m}. $$This is a special case, as generally the distribution has additional parameters $p_1, \ldots, p_m$ which allow you to say that some pages are more likely to have typos than others. Under this model, to find the probability that the first page has exactly $j$ errors, we first note that there are $\binom{n}{j}$ ways to pick which errors land on the first page (since they are distinguishable), and that the remaining $(n-j)$ are then distributed across the remaining pages according to a multinomial distribution with $m-1$ pages. So we have $$P(x_1 = j) = \binom{n}{j} \frac{(m-1)^{n-j}}{m^n}$$ Note that we can do a quick check to see that: \begin{align*}\sum_{j=0}^n P(x_1 = j) & = \sum_{j=0}^n \binom{n}{j} \frac{(m-1)^{n-j}}{m^n} \\& = m^{-n} \sum_{j=0}^n\binom{n}{j} (m-1)^{n-j}1^j \\& = m^{-n} \big( (m-1) + 1 \big)^n \\& = 1,\end{align*}as expected. Again, a closed form expression for $P(x_1 \leq t)$ seems out of reach because (at least to my immediate knowledge) there is no way to take the partial sum of the binomial coefficients above. A model for the second scenario Both models above have considered the case of the first scenario I described, where we assume that any distribution of errors is equally as likely. A more realistic model might assume that each page should have roughly the same number of errors. This is in line with the comment by @stubbornAtom, who mentions a Poisson distribution.
In this case we would say that each page has an independent number of errors, which we'll assume is Poisson distributed with some rate $\lambda$, so independently: $$P(x_i = j) = e^{-\lambda} \frac{\lambda^j}{j!}$$ Now the information we have about the total number of errors should be used to form a conditional distribution. That is, we are interested in the probability $$P(x_1 = j \, | \, \textstyle \sum_{k=1}^m x_k = n)$$ To compute this we need two things: Bayes' rule, and the fact that sums of independent Poisson variables are Poisson; in particular if $x_1,\ldots, x_m \sim \text{Poi}(\lambda)$ then $\sum_{k=1}^m x_k \sim \text{Poi}(m \lambda)$. So now: \begin{align*}P(x_1 = j \, | \, \textstyle \sum_{k=1}^m x_k = n) & = \frac{P( \sum_{k=1}^m x_k = n\, | \, x_1 = j ) P(x_1 = j) }{ P(\sum_{k=1}^m x_k = n)} \\& =\frac{P( \sum_{k=2}^m x_k = n-j ) P(x_1 = j)}{ P(\sum_{k=1}^m x_k = n)} \\& = \frac{P_{(m-1)\lambda}(n-j) P_{\lambda}(j)}{P_{m\lambda}(n)}\end{align*}where in the last line I introduced the notation $P_\mu(k)$ to denote the probability that a Poisson $\mu$ variable is equal to $k$. Writing this in terms of the Poisson distribution function we have: \begin{align*}P(x_1 = j \, | \, \textstyle \sum_{k=1}^m x_k = n) & =\frac{ e^{-(m-1)\lambda} \frac{((m-1)\lambda)^{n-j}}{(n-j)!} \times e^{-\lambda} \frac{\lambda^j}{j!}}{e^{-m\lambda} \frac{(m\lambda)^n}{n!}} \\& = \binom{n}{j} \frac{(m-1)^{(n-j)}}{m^n},\end{align*}which follows after some cancellations and rearranging. Notably, in the end we observe two things: the solution is independent of the Poisson parameter $\lambda$, and secondly the solution returns the same value as the second method! This in fact reflects a more general property: the individual summands of a sum of Poisson variables conditioned to equal $n$ are given by a multinomial distribution, see for instance here. As such, in this model we still cannot expect to derive a closed form expression for $P(x_1 \leq t)$.
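As a numerical aside (my own sketch), even without a closed form, the expression from the second model can be summed directly for the original numbers $n = m = 500$, $t = 3$, and it lands very close to the Poisson(1) heuristic:

```python
from fractions import Fraction
from math import comb, exp

n = m = 500   # 500 typos over 500 pages
t = 3

# P(x_1 <= t) under the distinguishable-typo (multinomial) model,
# summed exactly with rational arithmetic
p = sum(Fraction(comb(n, j) * (m - 1)**(n - j), m**n) for j in range(t + 1))
print(float(p))                          # ~0.981

# heuristic check: Binomial(500, 1/500) is close to Poisson(1)
poisson = exp(-1) * sum(1 / f for f in (1, 1, 2, 6))  # P(Poi(1) <= 3)
print(poisson)                           # ~0.981
```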
For a Poisson distribution with mean $\mu$ the variance is also $\mu$. Within the framework of generalized linear models this implies that the variance function is $$V(\mu) = \mu$$ for the Poisson model. This model assumption can be wrong for many different reasons. Overdispersed count data with a variance larger than what the Poisson distribution dictates is, for instance, often encountered. Deviations from the variance assumption can in a regression context take several forms. The simplest one is that the variance function equals$$V(\mu) = \psi \mu$$with $\psi > 0$ a dispersion parameter. This is the quasi-Poisson model. It will give the same fitted regression model, but the statistical inference ($p$-values and confidence intervals) is adjusted for over- or underdispersion using an estimated dispersion parameter. The functional form of the variance function can also be wrong. It could be a second degree polynomial $$V(\mu) = a\mu^2 + b \mu + c,$$say. Examples include the binomial, the negative binomial and the gamma model. Choosing any of these models as an alternative to the Poisson model will affect the fitted regression model as well as the subsequent statistical inference. For the negative binomial distribution with shape parameter $\lambda > 0$ the variance function is $$V(\mu) = \mu\left( 1 + \frac{\mu}{\lambda}\right).$$We can see from this that if $\lambda \to \infty$ we get the variance function for the Poisson distribution. To determine if the variance function for the Poisson model is appropriate for the data, we can estimate the dispersion parameter as the OP suggests and check if it is approximately 1 (perhaps using a formal test). Such a test does not suggest a specific alternative, but it is most clearly understood within the quasi-Poisson model. 
To test if the functional form of the variance function is appropriate, we could construct a likelihood ratio test of the Poisson model ($\lambda = \infty$) against the negative binomial model ($\lambda < \infty$). Note that the test statistic has a nonstandard distribution under the null hypothesis, since $\lambda = \infty$ lies on the boundary of the parameter space. Or we could use AIC-based methods in general for comparing non-nested models. The paper "Regression-based tests for overdispersion in the Poisson model" explores a class of tests for general variance functions. However, I would recommend first of all studying residual plots, e.g. a plot of the Pearson or deviance residuals (or their squared values) against the fitted values. If the functional form of the variance is wrong, you will see this as a funnel shape (or a trend for the squared residuals) in the residual plot. If the functional form is correct, that is, no funnel or trend, there could still be over- or underdispersion, but this can be accounted for by estimating the dispersion parameter. The benefit of the residual plot is that it suggests more clearly than a test what, if anything, is wrong with the variance function. In the OP's concrete case it is not possible to say if 0.8 indicates underdispersion from the given information. Instead of focusing on the 5 and 0.8 estimates, I suggest first of all investigating the fit of the variance functions of the Poisson model and the negative binomial model. Once the most appropriate functional form of the variance function is determined, a dispersion parameter can be included, if needed, in either model to adjust the statistical inference for any additional over- or underdispersion. How to do that easily in SAS, say, is unfortunately not something I can help with.
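A small simulation (illustrative, with made-up parameters) shows how the Pearson dispersion estimate behaves for Poisson data versus negative binomial data with variance $\mu(1 + \mu/\lambda)$:

```python
import numpy as np

rng = np.random.default_rng(42)
n, mu, lam = 5000, 5.0, 2.0   # lam: negative binomial shape parameter

def pearson_dispersion(y):
    """Pearson X^2 / df for an intercept-only Poisson fit (mu-hat = mean)."""
    mu_hat = y.mean()
    return np.sum((y - mu_hat)**2 / mu_hat) / (len(y) - 1)

y_pois = rng.poisson(mu, size=n)
# NB with variance mu(1 + mu/lam): numpy's (n, p) with p = lam/(lam + mu)
y_nb = rng.negative_binomial(lam, lam / (lam + mu), size=n)

print(pearson_dispersion(y_pois))  # near 1: Poisson variance function fits
print(pearson_dispersion(y_nb))    # near 3.5: clear overdispersion
```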
Is the x_e equation in CAMB correct or not? I am looking at Antony Lewis' paper https://arxiv.org/pdf/0804.3865.pdf, Eq. (B3) on page 11, which gives the number of free electrons per hydrogen atom. For this equation, if [tex]z[/tex] goes to large values, then [tex]y=(1+z)^{3/2}[/tex] also becomes large, while [tex]y(z_{\rm re})[/tex] and [tex]\Delta_y[/tex] are fixed. Since [tex]\tanh(x \rightarrow \infty) \rightarrow 1[/tex], for large redshift [tex]x_{\rm e} \rightarrow 1[/tex]. If [tex]z \rightarrow 0[/tex], then the [tex]\tanh[/tex] function becomes a negative value greater than -1, so [tex]x_{\rm e} \rightarrow 0[/tex]. This is completely opposite to the trend of Fig. 6 on the same page. Can anyone explain what is going on here? Perhaps I made some stupid mistake; please point it out. Thank you. Oh, then this typo affects the Planck reionization paper: the published version of Planck intermediate results XLVII, "Planck constraints on reionization history", page 5, equation 2 also carries this typo.
It is well-known that the function $$f(x) = \begin{cases} e^{-1/x^2}, \mbox{if } x \ne 0 \\ 0, \mbox{if } x = 0\end{cases}$$ is smooth everywhere, yet not analytic at $x = 0$. In particular, its Taylor series exists there, but it equals $0 + 0x + 0x^2 + 0x^3 + ... = 0$, so while it has radius of convergence $\infty$, it is not equal to $f$ even in a tiny neighborhood of $0$. There is also a function $$f(x) = \sum_{n=0}^{\infty} e^{-\sqrt{2^n}} \cos(2^n x)$$ which is smooth everywhere (that is, $C^{\infty}$) yet analytic nowhere. In particular, the Taylor series at every point has radius of convergence $0$. In fact, "most" smooth functions are not analytic. But this gets me wondering. Could there exist some function which is smooth everywhere, analytic nowhere, yet whose Taylor series at any point has nonzero radius of convergence, and so converges to something, but that something is not the function, not even in a tiny neighborhood about the point of expansion? If yes, what is an example of such a function? If no, what is the proof that such a thing is impossible? And also, if no, what sort of restrictions exist on the convergence of the Taylor series? At how many/what distribution of points can it converge to something which is not the function? I note that if we multiply together the two functions just given above, we have another smooth-everywhere, analytic-nowhere function, but this time at $0$ we have a convergent Taylor series (the same zero series as before -- just use the generalized Leibniz rule) which doesn't converge to the function in even a tiny neighborhood of $0$. EDIT (Dec 31, 2013): With some Googling I came across a post on MathOverflow: The Taylor series of the Fabius function at any dyadic rational actually has infinite radius of convergence (only finitely many terms are nonzero) but does not represent the function on any interval.
So it seems it is possible to have a function whose Taylor series converges to "the wrong thing" at a dense set of expansion points. But it still doesn't answer the question of whether that is possible for all expansion points on the entire real line.
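As an aside, the flatness of the first example at the origin rests on the estimate that $e^{-1/x^2}$ vanishes faster than any power of $x$ (each derivative is a polynomial in $1/x$ times $e^{-1/x^2}$). This is a numerical illustration of that estimate, not a proof:

```python
import math

# The flat function e^{-1/x^2} goes to zero faster than any power of x.
# This is the key estimate behind all of its derivatives vanishing at 0,
# since each derivative is a polynomial in 1/x times e^{-1/x^2}.
def f(x):
    return math.exp(-1.0 / x**2) if x != 0 else 0.0

for n in (1, 5, 10, 20):
    # f(x)/x^n at a small x: still astronomically small even for n = 20
    print(n, f(0.1) / 0.1 ** n)
```

At $x = 0.1$ the function is about $e^{-100} \approx 3.7 \times 10^{-44}$, so even after dividing by $x^{20} = 10^{-20}$ the quotient is negligible, consistent with every difference quotient at $0$ tending to $0$.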
Now showing items 11-20 of 53

Multiplicity dependence of jet-like two-particle correlations in pp collisions at $\sqrt s$ = 7 and 13 TeV with ALICE (Elsevier, 2017-11) Two-particle correlations in relative azimuthal angle (Δϕ) and pseudorapidity (Δη) have been used to study heavy-ion collision dynamics, including medium-induced jet modification. Further investigations also showed the ...

Electroweak boson production in p–Pb and Pb–Pb collisions at $\sqrt{s_\mathrm{NN}}=5.02$ TeV with ALICE (Elsevier, 2017-11) W and Z bosons are massive weakly-interacting particles, insensitive to the strong interaction. They therefore provide a medium-blind probe of the initial state of the heavy-ion collisions. The final results for the W and ...

Investigating the Role of Coherence Effects on Jet Quenching in Pb-Pb Collisions at $\sqrt{s_{NN}} =2.76$ TeV using Jet Substructure (Elsevier, 2017-11) We report measurements of two jet shapes, the ratio of 2-Subjettiness to 1-Subjettiness ($\it{\tau_{2}}/\it{\tau_{1}}$) and the opening angle between the two axes of the 2-Subjettiness jet shape, which is obtained by ...

Anisotropic flow of inclusive and identified particles in Pb–Pb collisions at $\sqrt{{s}_{NN}}=$ 5.02 TeV with ALICE (Elsevier, 2017-11) Anisotropic flow measurements constrain the shear $(\eta/s)$ and bulk ($\zeta/s$) viscosity of the quark-gluon plasma created in heavy-ion collisions, as well as give insight into the initial state of such collisions and ...

Multiplicity dependence of identified particle production in proton-proton collisions with ALICE (Elsevier, 2017-11) The study of identified particle production as a function of transverse momentum ($p_{\text{T}}$) and event multiplicity in proton-proton (pp) collisions at different center-of-mass energies ($\sqrt{s}$) is a key tool for ...

Probing non-linearity of higher order anisotropic flow in Pb-Pb collisions (Elsevier, 2017-11) The second and the third order anisotropic flow, $V_{2}$ and $V_3$, are determined by the corresponding initial spatial anisotropy coefficients, $\varepsilon_{2}$ and $\varepsilon_{3}$, in the initial density distribution. ...

The new Inner Tracking System of the ALICE experiment (Elsevier, 2017-11) The ALICE experiment will undergo a major upgrade during the next LHC Long Shutdown scheduled in 2019–20 that will enable a detailed study of the properties of the QGP, exploiting the increased Pb-Pb luminosity ...

Neutral meson production and correlation with charged hadrons in pp and Pb-Pb collisions with the ALICE experiment at the LHC (Elsevier, 2017-11) Among the probes used to investigate the properties of the Quark-Gluon Plasma, the measurement of the energy loss of high-energy partons can be used to put constraints on energy-loss models and to ultimately access medium ...

Direct photon measurements in pp and Pb-Pb collisions with the ALICE experiment (Elsevier, 2017-11) Direct photon production in heavy-ion collisions provides a valuable set of observables to study the hot QCD medium. The direct photons are produced at different stages of the collision and escape the medium unaffected. ...

Azimuthally differential pion femtoscopy relative to the second and third harmonic in Pb-Pb 2.76 TeV collisions from ALICE (Elsevier, 2017-11) Azimuthally differential femtoscopic measurements, being sensitive to spatio-temporal characteristics of the source as well as to the collective velocity fields at freeze-out, provide very important information on the ...
Let a (free) particle move in $[0,a]$ with cyclic boundary condition $\psi(0)=\psi(a)$. The solution of the Schrödinger equation can be put in the form of a plane wave. In this state the standard deviation of momentum is $0$, but $\sigma_x$ must be finite. So we find that $\sigma_x\sigma_p=0$. Is something wrong with the uncertainty principle? This is what happens if one does not care for the subtlety that quantum mechanical operators are typically only defined on subspaces of the full Hilbert space. Let's set $a=1$ for convenience. The operator $p =-\mathrm{i}\hbar\partial_x$ acting on wavefunctions with periodic boundary conditions defined on $D(p) = \{\psi\in L^2([0,1])\mid \psi(0)=\psi(1)\land \psi'\in L^2([0,1])\}$ is self-adjoint; that is, on the domain of definition of $p$, we have $p=p^\dagger$, and $p^\dagger$ admits the same domain of definition. The self-adjointness of $p$ follows from the periodic boundary conditions killing the surface terms that appear in the $L^2$ inner product $$\langle \phi,p\psi\rangle - \langle p^\dagger \phi,\psi\rangle = \int\overline{\phi(x)}\mathrm{i}\hbar\partial_x\psi(x) - \overline{\mathrm{i}\hbar\partial_x\phi(x)}\psi(x) = 0$$ for every $\psi\in D(p)$ and every $\phi\in D(p^\dagger) = D(p)$, but not for $\phi$ with $\phi(0)\neq\phi(1)$. Now, for the question of the commutator: the multiplication operator $x$ is defined on the entire Hilbert space, since for $\psi\in L^2([0,1])$, $x\psi$ is also square-integrable. For the product of two operators $A,B$, we have the rule$$ D(AB) = \{\psi\in D(B)\mid B\psi\in D(A)\}$$and $$ D(A+B) = D(A)\cap D(B)$$so we obtain\begin{align}D(px) & = \{\psi\in L^2([0,1])\mid x\psi\in D(p)\} \\D(xp) & = D(p)\end{align}and $x\psi\in D(p)$ means $0\cdot \psi(0) = 1\cdot\psi(1)$, that is, $\psi(1) = 0$.
Hence we have$$ D(px) = \{\psi\in L^2([0,1])\mid \psi'\in L^2([0,1]) \land \psi(1) = 0\}$$and finally$$ D([x,p]) = D(xp)\cap D(px) = \{\psi\in L^2([0,1])\mid \psi'\in L^2([0,1])\land \psi(0)=\psi(1) = 0\}$$meaning the plane waves $\psi_{p_0}$ do not belong to the domain of definition of the commutator $[x,p]$ and you cannot apply the naive uncertainty principle to them. However, for self-adjoint operators $A,B$, you may rewrite the uncertainty principle as$$ \sigma_\psi(A)\sigma_\psi(B)\geq \frac{1}{2} \lvert \langle \psi,\mathrm{i}[A,B]\psi\rangle\rvert = \frac{1}{2}\lvert\mathrm{i}\left(\langle A\psi,B\psi\rangle - \langle B\psi,A\psi\rangle\right)\rvert$$where the r.h.s. and l.h.s. are now both defined on $D(A)\cap D(B)$. Applying this version to the plane waves yields no contradiction. Notice that $\psi(x)$ is defined on a circle of circumference $a$. Multiplying by $x$ on this circle is really multiplying by a periodic extension of $x$, i.e., the sawtooth function $x - a\lfloor x/a\rfloor$, where $\lfloor y\rfloor$ means the largest integer not greater than $y$. So, the commutator of the position and momentum operators involves the derivative of not only $x$ but also the discontinuous part $-a\lfloor x/a\rfloor$. Therefore, \begin{equation} \sigma_{x} \sigma_p \geq \frac{1}{2}\Big|\langle \psi|\,[\hat{x},\hat{p}]\,|\psi\rangle\Big| = \frac{\hbar}{2}\Bigg|\Big\langle\psi\,\Big|\frac{d}{dx}\big(x - a\lfloor x/a\rfloor\big)\Big|\,\psi\Big\rangle\Bigg| = \frac{\hbar}{2}\Big|1-a|\psi(0)|^{2}\Big|. \end{equation} For a plane wave $\psi(x) = e^{ikx}/\sqrt{a}$, the above reduces to $\sigma_{x} \sigma_p\ge0$, as desired. There are two ways to interpret the boundary conditions you are imposing. The first case is that of a system which is infinite in extent, but has a periodic regularity. This is like an electron in an idealised 1D crystal, where the periodic boundary condition is imposed by the presence of regularly spaced nuclei.
In this case, the plane wave solution has $\sigma_p = 0$ but $\sigma_x$ is infinite. The second case is that of a particle in a ring. In this case, you can imagine the particle as being constrained within the ring by an infinitely deep potential well. The system is not actually 1D; it is 2D. Now you have to consider both $\sigma_x \sigma_{p_x}$ and $\sigma_y \sigma_{p_y}$, and even though $\sigma_x = \sigma_y \sim a$, the uncertainty in momentum will be imposed by the thickness of the ring. The plane wave solutions will in fact represent angular momentum eigenstates.
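A quick numerical sanity check of the corrected bound (a sketch, with $a = \hbar = 1$, an illustrative wavenumber, and grid sums standing in for the integrals):

```python
import numpy as np

# For a plane wave on a ring of circumference a = 1 (hbar = 1):
# sigma_p = 0 exactly (momentum eigenstate), sigma_x is finite, and the
# corrected bound (hbar/2)|1 - a|psi(0)|^2| is also zero -- no contradiction.
a, hbar, n = 1.0, 1.0, 3
k = 2 * np.pi * n / a                 # periodicity requires k = 2*pi*n/a
x = np.linspace(0.0, a, 20001)
dx = x[1] - x[0]
psi = np.exp(1j * k * x) / np.sqrt(a)

prob = np.abs(psi) ** 2               # uniform density: |psi|^2 = 1/a
mean_x = np.sum(x * prob) * dx
sigma_x = np.sqrt(np.sum((x - mean_x) ** 2 * prob) * dx)

sigma_p = 0.0                         # p psi = hbar*k psi exactly
rhs = (hbar / 2) * abs(1 - a * np.abs(psi[0]) ** 2)

print(sigma_x)          # ~ a/sqrt(12) ~ 0.2887 for a uniform density
print(sigma_p, rhs)     # both zero: the bound is saturated, not violated
```

The position spread comes out to $a/\sqrt{12}$ (the standard deviation of a uniform distribution on $[0,a]$), while both sides of the corrected inequality vanish.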
This article introduces simple regression analysis, illustrated with a simple example, and shows how to calculate it using Python.

1. Simple regression analysis

1.1 Introduction

In layman's terms, given a bunch of data points \({(x_i, y_i), i = 1, …, n}\) in two dimensions, simple regression analysis is the task of finding the straight line that fits them best: $$ y=\alpha +\beta \cdot x $$ where \(\alpha\) and \(\beta\) are the so-called intercept and slope respectively. Such an equation can be used to predict unknown values. Before diving into details, let's take an example. There are 15 data points, the height and mass (weight), from a sample of American women of age 30–39.

# Height (m)
x = [1.47, 1.50, 1.52, 1.55, 1.57, 1.60, 1.63, 1.65, 1.68, 1.70, 1.73, 1.75, 1.78, 1.80, 1.83]
# Mass (kg)
y = [52.21, 53.12, 54.48, 55.84, 57.20, 58.57, 59.93, 61.29, 63.11, 64.47, 66.28, 68.10, 69.92, 72.19, 74.46]

We make a scatter plot of the above data points (Fig. 1: An example illustrating simple regression analysis). Clearly, no matter how we draw the straight line, some points have a distance to that line. This is known as the error term, denoted by \(\varepsilon\). Thus, each data point \((x_i, y_i)\) can be precisely described by $$ y_i = \alpha + \beta \cdot x_i + \varepsilon_i $$ So very naturally the question arises: how do we find the 'best' line? One possible answer is to find the line that minimizes the sum of squared errors. This approach is called ordinary least squares (OLS). Formally, find \(\alpha\) and \(\beta\) such that $$ \min_{\alpha ,\,\beta}Q(\alpha,\beta) =\sum_{i=1}^{n}\varepsilon_{i}^{\,2}=\sum_{i=1}^{n}(y_{i}-\alpha -\beta x_{i})^{2} $$ We can derive \(\alpha\) and \(\beta\) using either calculus, the geometry of inner product spaces, or simply by expanding to a quadratic expression.

1.2 Calculate using Python

scipy.stats.linregress is used to calculate a linear least-squares regression for two sets of measurements.
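As a cross-check on the scipy result, \(\alpha\) and \(\beta\) can also be computed directly from the closed-form solution of the minimization above, \(\beta = \sum_i (x_i-\bar x)(y_i-\bar y) / \sum_i (x_i-\bar x)^2\) and \(\alpha = \bar y - \beta \bar x\):

```python
# Closed-form OLS estimates for the height/mass data above:
# beta = cov(x, y) / var(x),  alpha = mean(y) - beta * mean(x).
x = [1.47, 1.50, 1.52, 1.55, 1.57, 1.60, 1.63, 1.65,
     1.68, 1.70, 1.73, 1.75, 1.78, 1.80, 1.83]
y = [52.21, 53.12, 54.48, 55.84, 57.20, 58.57, 59.93, 61.29,
     63.11, 64.47, 66.28, 68.10, 69.92, 72.19, 74.46]

n = len(x)
mx = sum(x) / n
my = sum(y) / n
beta = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
       sum((xi - mx) ** 2 for xi in x)
alpha = my - beta * mx
print(alpha, beta)   # ~ -39.06 and ~ 61.27, matching linregress below
```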
import scipy.stats

# Calculate a linear least-squares regression for two sets of measurements.
slope, intercept, r_value, p_value, std_err = scipy.stats.linregress(x, y)

# Returns:
# slope       float, slope of the regression line
# intercept   float, intercept of the regression line
# r_value     float, correlation coefficient
# p_value     float, two-sided p-value for a hypothesis test whose null hypothesis is that the slope is zero
# std_err     float, standard error of the estimate

Applying scipy.stats.linregress to the above example, we have:

slope           61.2721865421
intercept       -39.0619559188
R-squared       0.989196922446
p-value         3.60351533955e-14
standard error  1.77592275222

As described earlier, the slope is denoted by \(\beta\) and the intercept by \(\alpha\), so we obtain the fitted equation $$ y = -39.06 + 61.27 \cdot x $$ which can be used to predict the mass for a given height.

R value: the Pearson product-moment correlation coefficient, also known as r, R, or Pearson's r, is a measure of the strength and direction of the linear relationship between two variables, defined as the (sample) covariance of the variables divided by the product of their (sample) standard deviations.

R squared: in statistics, the coefficient of determination, denoted \(R^2\) or \(r^2\) and pronounced "R squared", is a number that indicates the proportion of the variance in the dependent variable that is predictable from the independent variable.

P value: in frequentist statistics, the p-value is a function of the observed sample results (a test statistic) relative to a statistical model, which measures how extreme the observation is.

Standard error: in regression analysis, the term "standard error" is also used in the phrase standard error of the regression to mean the ordinary least squares estimate of the standard deviation of the underlying errors.

2. The source code

The source code of the above example is hosted on my GitHub, here.
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
import scipy.stats
import matplotlib.pyplot as plt
import numpy as np


def main():
    '''
    This example concerns the data set from the ordinary least squares
    article. This data set gives average masses for women as a function
    of their height in a sample of American women of age 30-39.
    '''
    x = [1.47, 1.50, 1.52, 1.55, 1.57, 1.60, 1.63, 1.65,
         1.68, 1.70, 1.73, 1.75, 1.78, 1.80, 1.83]
    y = [52.21, 53.12, 54.48, 55.84, 57.20, 58.57, 59.93, 61.29,
         63.11, 64.47, 66.28, 68.10, 69.92, 72.19, 74.46]

    # Step 1: regression analysis
    slope, intercept, r_value, p_value, std_err = scipy.stats.linregress(x, y)
    print("slope\t", slope)
    print("intercept\t", intercept)
    print("R-squared\t", r_value ** 2)
    print("p-value\t", p_value)
    print("standard error\t", std_err)

    # Step 2: plot the graph
    fig, ax = plt.subplots()
    # scatter plot of the data
    ax.plot(x, y, 'bx', markersize=10)
    # plot the fitted straight line
    new_x = np.linspace(min(x), max(x), 1000)
    new_y = slope * new_x + intercept
    ax.plot(new_x, new_y, 'r', linewidth=2)
    ax.set_xlabel('Height (m)')
    ax.set_ylabel('Mass (kg)')
    plt.grid()
    out_file = 'simple_regression_analysis.png'
    plt.savefig(out_file, bbox_inches='tight')
    plt.show()


if __name__ == '__main__':
    main()
Integer programming A branch of mathematical programming in which one investigates problems of optimization (maximization or minimization) of functions of several variables that are related by a number of equations and (or) inequalities and that satisfy the condition of being integer valued. (Other terms are discrete programming, discrete optimization.) Sources of integer programming problems are technology, the economy and defense. The condition of the variables being integer valued formally reflects: a) the physical indivisibility of the objects (for example, in the distribution of enterprises or in the choice of combat actions); b) the finiteness of the set of feasible variants over which the optimization proceeds (for example, the set of permutations in problems of ordering); and c) the presence of logical conditions which, by holding or not holding, exert a change in the form of the objective function and the constraints of the problem. The most widely studied and most used integer programming problem is the so-called integer linear programming problem: Maximize $$\sum_{j=1}^nc_jx_j$$ subject to $$\sum_{j=1}^na_{ij}x_j=b_i,\quad i=1,\dots,m,$$ where $x_j\geq0$, $j=1,\dots,n$, the $x_j$ are integers for $j=1,\dots,p$, $p\leq n$, the $a_{ij}$, $b_i$ and $c_j$ are given integers, and the $x_j$ are variables. The solution methods for integer programming problems (relaxation (cf. Relaxation method), cutting planes, dynamic programming, "branch-and-bound", and others) are based on reducing the number of feasible solutions that must be examined. The "naive" approach to the solution of integer programming problems, which consists of a complete enumeration of all feasible solutions (if there are finitely many), requires an amount of computational work that grows exponentially with the number of variables and turns out to be impractical.
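To make the cost of complete enumeration concrete, here is a toy sketch (the coefficients are made up for illustration) that solves a tiny 0-1 integer program by brute force over all 2^n candidate vectors — exactly the approach that becomes impractical as n grows:

```python
from itertools import product

# Toy 0-1 integer linear program solved by complete enumeration:
#   maximize  sum(c_j * x_j)  subject to  sum(a_j * x_j) <= b,  x_j in {0, 1}.
# The 2**n candidate vectors are why cutting planes and branch-and-bound exist.
c = [10, 13, 7, 8]   # objective coefficients (illustrative)
a = [3, 4, 2, 3]     # constraint coefficients (illustrative)
b = 7                # right-hand side

best_val, best_x = None, None
for xs in product((0, 1), repeat=len(c)):
    if sum(ai * xi for ai, xi in zip(a, xs)) <= b:       # feasibility check
        val = sum(ci * xi for ci, xi in zip(c, xs))
        if best_val is None or val > best_val:
            best_val, best_x = val, xs

print(best_val, best_x)   # optimum 23 at x = (1, 1, 0, 0)
```

Doubling the number of variables squares the number of candidates, which is the exponential growth described above.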
The complexity of theoretical and numerical problems that arise in the solution of integer programming problems can be illustrated by the fact that Fermat's so-called last theorem can be stated in the following equivalent form: Minimize $$(x_1^t+x_2^t-x_3^t)^2$$ subject to $$x_1\geq1,\quad x_2\geq1,\quad x_3\geq1,\quad t\geq3,$$ where $t$, $x_1$, $x_2$, and $x_3$ are integers. If by some method of integer programming the answer obtained is a positive value for the minimum of the objective function, then this would be a constructive proof of Fermat's theorem, and if the answer is 0, then it would be a refutation. The central theoretical problem in integer programming is: can one avoid complete enumeration in solving integer programming problems? One of the mathematical formulations of this problem is: do the classes $\mathcal P$ and $\mathcal{NP}$ coincide? The class $\mathcal P$ (respectively, $\mathcal{NP}$) consists of all decision problems that can be solved on a deterministic (respectively, non-deterministic) Turing machine in polynomial time, that is, in a number of computational operations that depends polynomially on the so-called "input size" of the problem. The class $\mathcal{NP}$ includes all decision versions of integer programming problems that have an exponential number of feasible solutions (relative to the "input size" of the problem). The problem "P=NP?" remains open at present (1988). References [1] E.G. Gol'shtein, D.B. Yudin, "New directions in linear programming", Moscow (1966) (In Russian) [2] A.A. Korbut, Yu.Yu. Finkel'shtein, "Discrete programming", Moscow (1969) (In Russian) [3] M.R. Garey, D.S. Johnson, "Computers and intractability: a guide to the theory of NP-completeness", Freeman (1979) [4] G.L. Nemhauser, L.A. Wolsey, "Integer and combinatorial optimization", Wiley (1988) Comments For more information on $\mathcal P$ and $\mathcal{NP}$ see Complexity theory.
Unless $\mathcal P=\mathcal{NP}$, the general integer programming problem is not solvable in polynomial time, whereas the general linear programming problem is solvable in polynomial time. References [a1] A. Schrijver, "Theory of linear and integer programming" , Wiley (1986) How to Cite This Entry: Integer programming. Encyclopedia of Mathematics.URL: http://www.encyclopediaofmath.org/index.php?title=Integer_programming&oldid=34302
Functions. An online exercise on function notation, inverse functions and composite functions. This is level 3: solve the equations given in function notation. You can earn a trophy if you get at least 9 correct and you do this activity online. Instructions: Try your best to answer the questions above. Type your answers into the boxes provided, leaving no spaces. As you work through the exercise, regularly click the "check" button. If you have any wrong answers, do your best to make corrections, but if there is anything you don't understand, please ask your teacher for help. When you have got all of the questions correct you may want to print out this page and paste it into your exercise book. If you keep your work in an ePortfolio you could take a screen shot of your answers and paste that into your Maths file.
Answers to this exercise are available to teachers, tutors and parents who have logged in to their Transum subscription on this computer.
© Transum Mathematics :: This activity can be found online at: www.transum.org/Maths/Exercise/Functions.asp?Level=3

Level 1 - Describe function machines using function notation.
Level 2 - Evaluate the given functions.
Level 3 - Solve the equations given in function notation.
Level 4 - Find the inverse of the given functions.
Level 5 - Simplify the composite functions.
Level 6 - Mixed questions.

Exam Style questions are in the style of GCSE or IB/A-level exam paper questions, and worked solutions are available for Transum subscribers. The following notes are intended as a reminder or revision of the concepts; they are not a substitute for a teacher or a good textbook. Function notation is quite different to the algebraic notation you have learnt involving brackets. \(f(x)\) does not mean the value of f multiplied by the value of x. In this case f is the name of the function, and you would read \(f(x) = x^2\) as "f of x equals x squared". In terms of function machines, if the input is \(x\) then the output is \(f(x)\).

Example: \(x \to \)\( + 3 \)\( \to \)\( \times 4 \)\( \to f(x)\). In this case 3 is added to \(x\) and then the result is multiplied by 4 to give \(f(x)\): \( (x+3) \times 4 = f(x) \), so \( f(x) = 4(x+3) \).

Example: if \(f(x)=x^2 + 3\), calculate the value of \(f(6)\). This means replace the \(x\) with a 6 in the given function to obtain the result: \(f(6) = 6^2+3 = 39\).

Example: given \(f(x)=3(x+7) \), find \(x\) if \(f(x) = 30\). Then \(3(x+7)=30\), so \(x+7 = 10\) and \(x = 3\).

The inverse of a function, written as \(f^{-1}(x) \), can be thought of as a way to 'undo' the function. If the function is written as a function machine, the inverse can be thought of as working backwards, with the output becoming the input and the input becoming the output.
Example: \( f(x) = 4(x+3) \). The function machine is \(x \to \)\( + 3 \)\( \to \)\( \times 4 \)\( \to f(x)\); working backwards gives \( f^{-1}(x) \leftarrow \)\( - 3 \)\( \leftarrow \)\( \div 4 \)\( \leftarrow x \), so \( f^{-1}(x) = \frac{x}{4} - 3 \). A quicker way of finding the inverse of \(f(x)\) is to replace the \(f(x)\) with \(x\) on the left side of the equals sign and replace the \(x\) with \( f^{-1}(x) \) on the right side of the equals sign, then rearrange the equation to make \( f^{-1}(x) \) the subject.

A composite function combines two functions into a single function: one function is applied to the result of the other. You should evaluate the function closest to \(x\) first.

Example: if \(f(x)=2x+7\) and \(g(x)=5x^2\), find \(fg(3)\). First \(g(3) = 5 \times 3^2 = 5 \times 9 = 45\), then \(f(45) = 2 \times 45 + 7 = 97\), so \( fg(3) = 97\).

Example: if \(f(x)=x+2\) and \(g(x)=3x^2\), find \(gf(x)\). \( gf(x) = 3(x+2)^2 = 3(x^2+4x+4) = 3x^2+12x+12 \).

Example: find \(f(x-2)\) if \(f(x)=5x^2+3\). \(f(x-2) =5(x-2)^2+3 =5(x^2-4x+4)+3 =5x^2-20x+20+3 =5x^2-20x+23\).
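The worked examples above can be checked with a few lines of code (the names f, g, h and h_inv are just illustrative labels for the functions in the notes):

```python
# The composite-function example: f(x) = 2x + 7, g(x) = 5x^2.
f = lambda x: 2 * x + 7
g = lambda x: 5 * x ** 2
fg = lambda x: f(g(x))        # "fg(x)" means f applied to g(x)
print(fg(3))                  # g(3) = 45, then f(45) = 97

# The inverse example: h(x) = 4(x + 3); work the machine backwards,
# dividing by 4 and then subtracting 3.
h = lambda x: 4 * (x + 3)
h_inv = lambda x: x / 4 - 3
print(h_inv(h(2)))            # recovers the original input, 2
```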
Newspace parameters Level: \( N \) = \( 2016 = 2^{5} \cdot 3^{2} \cdot 7 \) Weight: \( k \) = \( 1 \) Character orbit: \([\chi]\) = 2016.l (of order \(2\) and degree \(1\)) Newform invariants Self dual: No Analytic conductor: \(1.00611506547\) Analytic rank: \(0\) Dimension: \(2\) Coefficient field: \(\Q(i)\) Coefficient ring: \(\Z[a_1, \ldots, a_{11}]\) Coefficient ring index: \( 2 \) Projective image \(D_{2}\) Projective field Galois closure of \(\Q(\sqrt{-6}, \sqrt{-7})\) Artin image size \(16\) Artin image $D_4:C_2$ Artin field Galois closure of 8.0.1792336896.7 The \(q\)-expansion and trace form are shown below. Character Values We give the values of \(\chi\) on generators for \(\left(\mathbb{Z}/2016\mathbb{Z}\right)^\times\). \(n\) \(127\) \(577\) \(1765\) \(1793\) \(\chi(n)\) \(1\) \(-1\) \(-1\) \(1\) For each embedding \(\iota_m\) of the coefficient field, the values \(\iota_m(a_n)\) are shown below. For more information on an embedded modular form you can click on its label. Label \(\iota_m(\nu)\) \( a_{2} \) \( a_{3} \) \( a_{4} \) \( a_{5} \) \( a_{6} \) \( a_{7} \) \( a_{8} \) \( a_{9} \) \( a_{10} \) 433.1 0 0 0 0 0 −1.00000 0 0 0 433.2 0 0 0 0 0 −1.00000 0 0 0 Char. orbit Parity Mult. Self Twist Proved 1.a Even 1 trivial yes 7.b Odd 1 CM by \(\Q(\sqrt{-7}) \) yes 24.h Odd 1 CM by \(\Q(\sqrt{-6}) \) yes 168.i Even 1 RM by \(\Q(\sqrt{42}) \) yes 3.b Odd 1 yes 8.b Even 1 yes 21.c Even 1 yes 56.h Odd 1 yes This newform can be constructed as the kernel of the linear operator \(T_{11}^{2} \) \(\mathstrut +\mathstrut 4 \) acting on \(S_{1}^{\mathrm{new}}(2016, [\chi])\).
In mathematics, an arithmetic progression (AP) or arithmetic sequence is a sequence of numbers such that the difference between consecutive terms is constant. For instance, the sequence 5, 7, 9, 11, 13, 15, … is an arithmetic progression with common difference 2. If the initial term of an arithmetic progression is a_1 and the common difference of successive members is d, then the nth term of the sequence (a_n) is given by: \ a_n = a_1 + (n - 1)d, and in general \ a_n = a_m + (n - m)d. A finite portion of an arithmetic progression is called a finite arithmetic progression and sometimes just an arithmetic progression. The sum of a finite arithmetic progression is called an arithmetic series. The behavior of the arithmetic progression depends on the common difference d. If the common difference is: Positive, the members (terms) will grow towards positive infinity. Negative, the members (terms) will grow towards negative infinity.

Contents: Sum · Product · Standard deviation · See also · References · External links

Sum

2 + 5 + 8 + 11 + 14 = 40
14 + 11 + 8 + 5 + 2 = 40
16 + 16 + 16 + 16 + 16 = 80

Computation of the sum 2 + 5 + 8 + 11 + 14: when the sequence is reversed and added to itself term by term, the resulting sequence has a single repeated value in it, equal to the sum of the first and last numbers (2 + 14 = 16). Thus 16 × 5 = 80 is twice the sum. The sum of the members of a finite arithmetic progression is called an arithmetic series. For example, consider the sum: 2 + 5 + 8 + 11 + 14. This sum can be found quickly by taking the number n of terms being added (here 5), multiplying by the sum of the first and last number in the progression (here 2 + 14 = 16), and dividing by 2: \frac{n(a_1 + a_n)}{2} In the case above, this gives the equation: 2 + 5 + 8 + 11 + 14 = \frac{5(2 + 14)}{2} = \frac{5 \times 16}{2} = 40. This formula works for any real numbers a_1 and a_n.
For example: \left(-\frac{3}{2}\right) + \left(-\frac{1}{2}\right) + \frac{1}{2} = \frac{3\left(-\frac{3}{2} + \frac{1}{2}\right)}{2} = -\frac{3}{2}. Derivation To derive the above formula, begin by expressing the arithmetic series in two different ways: S_n=a_1+(a_1+d)+(a_1+2d)+\cdots+(a_1+(n-2)d)+(a_1+(n-1)d) S_n=(a_n-(n-1)d)+(a_n-(n-2)d)+\cdots+(a_n-2d)+(a_n-d)+a_n. Adding both sides of the two equations, all terms involving d cancel: \ 2S_n=n(a_1 + a_n). Dividing both sides by 2 produces a common form of the equation: S_n=\frac{n}{2}( a_1 + a_n). An alternate form results from re-inserting the substitution a_n = a_1 + (n-1)d: S_n=\frac{n}{2}[ 2a_1 + (n-1)d]. Furthermore, the mean value of the series can be calculated via S_n / n: \overline{a} =\frac{a_1 + a_n}{2}. In 499 AD Aryabhata, a prominent mathematician-astronomer from the classical age of Indian mathematics and Indian astronomy, gave this method in the Aryabhatiya (section 2.18). Product The product of the members of a finite arithmetic progression with an initial element a_1, common difference d, and n elements in total is determined in a closed expression: a_1a_2\cdots a_n = d \frac{a_1}{d} d (\frac{a_1}{d}+1)d (\frac{a_1}{d}+2)\cdots d (\frac{a_1}{d}+n-1)=d^n {\left(\frac{a_1}{d}\right)}^{\overline{n}} = d^n \frac{\Gamma \left(a_1/d + n\right) }{\Gamma \left( a_1 / d \right) }, where x^{\overline{n}} denotes the rising factorial and \Gamma denotes the Gamma function. (Note however that the formula is not valid when a_1/d is a negative integer or zero.) This is a generalization of the fact that the product of the progression 1 \times 2 \times \cdots \times n is given by the factorial n! and that the product m \times (m+1) \times (m+2) \times \cdots \times (n-2) \times (n-1) \times n \,\! for positive integers m and n is given by \frac{n!}{(m-1)!}.
Taking the progression given by a_n = 3 + (n-1)(5), the product of its terms up to the 50th term is P_{50} = 5^{50} \cdot \frac{\Gamma \left(3/5 + 50\right) }{\Gamma \left( 3 / 5 \right) } \approx 3.78438 \times 10^{98}. Standard deviation The standard deviation of any arithmetic progression can be calculated via: \sigma = |d|\sqrt{\frac{(n-1)(n+1)}{12}} where n is the number of terms in the progression and d is the common difference between terms. References Sigler, Laurence E. (trans.) (2002). Fibonacci's Liber Abaci. Springer-Verlag. pp. 259–260.
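The sum, product and standard-deviation formulas can all be spot-checked numerically for the progression with a_1 = 3, d = 5 and n = 50 terms:

```python
import math

# Spot-check of the AP formulas for a_n = 3 + 5(n - 1), n = 50 terms.
a1, d, n = 3, 5, 50
terms = [a1 + d * i for i in range(n)]

# Sum: n(a_1 + a_n)/2
assert sum(terms) == n * (terms[0] + terms[-1]) // 2

# Product: d^n * Gamma(a_1/d + n) / Gamma(a_1/d).
# Compare in log space (math.lgamma) to avoid overflowing a float.
log_prod = n * math.log(d) + math.lgamma(a1 / d + n) - math.lgamma(a1 / d)
assert abs(log_prod - sum(math.log(t) for t in terms)) < 1e-9

# Standard deviation: |d| * sqrt((n-1)(n+1)/12), i.e. the population
# standard deviation of the n terms.
mean = sum(terms) / n
var = sum((t - mean) ** 2 for t in terms) / n
assert abs(math.sqrt(var) - abs(d) * math.sqrt((n - 1) * (n + 1) / 12)) < 1e-9

print("sum, product and standard-deviation formulas all agree")
```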
While the idea of superposition is relatively straightforward, actually adding the displacements of the waves at every point for all time is a lot of tedious work. We are now going to specialize superposition to the interference of two infinite harmonic waves with the same frequency. Instead of keeping track of both the wave functions \(\Delta y_1\) and \(\Delta y_2\), this means that we only have to look at the difference in total phase \(\Delta \Phi\). For example, if we know that at a particular location the peaks of both waves arrive simultaneously, and the troughs of both waves occur simultaneously, then we would say the waves are in phase. Our obvious guess would be that \(\Delta \Phi \equiv \Phi_2 - \Phi_1 = 0\) because the peaks and the troughs are arriving together. However, we know that if the total phase changes by \(2 \pi, 4 \pi, 6 \pi, . . .\) then the wave will look exactly the same (this is because the sine function repeats over intervals of \(2 \pi\)). If we have constructive interference, all we know is that \(\Delta \Phi\) could be \(2 \pi \) or \( −2 \pi\) or \(4 \pi\)... To see what \(\Delta \Phi\) tells us about the type of interference that occurs, it helps to recall two important characteristics of the sine function: As stated above, the sine function is periodic, so \(\sin(\Phi) = \sin(\Phi + 2 \pi n)\) where \(n\) is any integer. \(\sin(\Phi + \pi) = − \sin(\Phi)\). In other words, changing \(\Phi\) by an amount \(\pi\) has the same effect as multiplying the equation by -1. The same result holds if we replace \(\pi\) by \(3 \pi, 5 \pi, - \pi, . . .\) Recalling that the total displacement is given by \[\Delta y_{total}(x,t) = A_1 \sin(\Phi_1) + A_2 \sin(\Phi_2)\] we see that when the sines are the same (when \(\Delta \Phi = \Phi_2 - \Phi_1 = 0, 2 \pi, 4 \pi, ...\)) we have constructive interference. When the sines have opposite signs (when \(\Delta \Phi = \pi, 3 \pi, 5 \pi, ...\)) we have destructive interference.
Anything else is partial interference. To summarize:

| Interference Type | \(\Delta \Phi\) |
| --- | --- |
| Constructive | even multiples of \(\pi\): \(0, 2 \pi, 4 \pi, -2 \pi, \ldots\) |
| Destructive | odd multiples of \(\pi\): \(\pi, 3 \pi, 5 \pi, - \pi, \ldots\) |
| Partial | anything else |

Recall that the total phase \(\Phi (x, t)\) for each wave depends on both \(x\) and \(t\), so \(\Delta \Phi\) can also depend on both \(x\) and \(t\). Strictly speaking, we should not talk about whether two waves have constructive, destructive or partial interference, but rather whether two waves at a specific location, at a specific time, have constructive, destructive or partial interference. To keep track of all the terms that can contribute to the change in phase, we introduce the phase chart. The phase chart does not contain more information than the three equations \[\Phi_1 = 2 \pi \dfrac{t}{T_1} \pm 2 \pi \dfrac{x_1}{\lambda_1} + \phi_1\] \[\Phi_2 = 2 \pi \dfrac{t}{T_2} \pm 2 \pi \dfrac{x_2}{\lambda_2} + \phi_2\] \[\Delta \Phi = \Phi_2 - \Phi_1\] but it is meant to remind you to think about each term. The phase chart is shown below:

| | \(2 \pi \frac{t}{T}\) | \(\pm 2 \pi \frac{x}{\lambda}\) | \(\phi\) | \(\Phi\) |
| --- | --- | --- | --- | --- |
| Wave 1 | | | | |
| Wave 2 | | | | |
| Change | | | | \(\Delta \Phi\) |

It is the lower-right corner of this chart, marked \(\Delta \Phi\), that determines whether the interference is constructive, destructive or partial. To build our intuition we are going to look at simplified examples and study each effect individually. In the next three subsections we will study: the effect of changing path length (by keeping the sources in phase and at the same frequency, we can study the effect of moving one source around); the effect of changing phase or synchronization (by keeping the same frequency and amplitude, but allowing the sources to create waves that are not in phase, we can study how out-of-sync sources affect interference); and beats (the interference of two waves with different frequencies).
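The classification rule in the table above is mechanical enough to put into code. Below is a minimal sketch (the function name and tolerance are my own choices, not from the text): it reduces \(\Delta \Phi\) to a multiple of \(\pi\) and checks whether that multiple is even, odd, or not an integer at all.

```python
import math

def classify_interference(delta_phi, tol=1e-9):
    """Classify interference from a total phase difference (radians).

    Even multiples of pi -> constructive, odd multiples -> destructive,
    anything else -> partial.
    """
    ratio = delta_phi / math.pi      # how many multiples of pi
    nearest = round(ratio)
    if abs(ratio - nearest) > tol:   # not a multiple of pi at all
        return "partial"
    return "constructive" if nearest % 2 == 0 else "destructive"

print(classify_interference(-2 * math.pi))  # constructive
print(classify_interference(3 * math.pi))   # destructive
print(classify_interference(math.pi / 3))   # partial
```

Note that negative multiples work automatically, matching the table: \(-2\pi\) is still an even multiple of \(\pi\).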
There are a couple of other general comments to make. The first is that because we are combining waves in the same place, the waves must be in the same medium. Therefore the two waves have the same wave speed \(v_{wave}\). Because they have the same wave speed and the same frequency, they must also have the same period \(T = 1/f\) and the same wavelength \(\lambda = v_{wave}/f\). The quantities which may differ are the distance from the source to the detector \(x\), and the phase constant \(\phi\). By changing either of these quantities we can accomplish either constructive or destructive interference.

Path Length Difference

For this part of the notes we will assume that the two waves have the same frequency and the same amplitude. We are also going to assume that the two sources are in phase with one another. The most important assumption is that the frequencies are the same, and we should discuss the consequences of this assumption before doing anything else. By having the same frequency, the waves both have the same period (\(T = 1/f\)). Thus, if the waves started oscillating "in phase" or "out of phase," they will stay in phase or out of phase for all \(t\). In other words, if both harmonic waves have the same frequency, then the type of interference depends on where you are but, unlike the completely general case, not on when you ask about the type of interference. We can obtain this result from the phase chart by inserting the information we just discussed into the time column for each wave. The time components of the two waves are the same, so they contribute nothing to the change in total phase:

| | \(2 \pi \frac{t}{T}\) | \(\pm 2 \pi \frac{x}{\lambda}\) | \(\phi\) | \(\Phi\) |
| --- | --- | --- | --- | --- |
| Wave 1 | \(2 \pi \frac{t}{T}\) | | | |
| Wave 2 | \(2 \pi \frac{t}{T}\) | | | |
| Change | 0 | | | \(\Delta \Phi\) |

As both our waves are traveling in the same medium we know that \(v_{wave} = f \lambda\) is the same.
Because the frequency is the same and the speed \(v_{wave}\) is the same, both sources must then have the same wavelength \(\lambda\). As we can see, looking at two waves with the same frequency leads to large simplifications. Let us start with two sources that are creating waves in phase with one another, and located the same distance from the detector. A picture of the situation may look like the one below: By adding these waves together we find that, at the detector, the total wave has twice the amplitude of either wave alone – this is constructive interference. The waves at the detector look identical if we shift one of the sources one wavelength closer to the detector; this is because after moving a distance of one wavelength, the wave looks exactly the same. Shifting the source by any integer multiple of wavelengths still leads to constructive interference, as the waves still look identical after shifting: How does shifting the source by one wavelength affect the change in phase? Assuming waves 1 and 2 are propagating outward (so we may use the − sign) we have \[\Delta x = x_1 - x_2 = n \lambda\]where \(n\) is an integer; the waves are shifted by \(n\) wavelengths. We can then write \[\Delta \Phi = -2 \pi \dfrac{\Delta x}{\lambda} + \Delta \phi\] \[\Delta \Phi = - 2 \pi \dfrac{n \lambda}{\lambda} + 0\] \[\Delta \Phi = -(2 \pi) n\] In the second line we used the fact that the sources were in phase (meaning that they were creating peaks together, and creating troughs together), so \(\Delta \phi = 0\). The quantity \(\Delta x \equiv x_1 - x_2 \) tells us how much further wave 1 had to travel to reach the detector than wave 2, and is referred to as the path length difference. By shifting one of the sources half a wavelength closer to the detector, we ensure that every peak in wave 1 coincides with a trough in wave 2.
This leads to destructive interference as in the picture below: Changing the separation by a wavelength (so the total separation is one and a half wavelengths) does not change what the waves look like at the detector, so the waves still interfere destructively. In fact, it is not difficult to see that having a path length difference of \(\lambda (n + \frac{1}{2})\), where \(n\) is an integer, will lead to destructive interference. To see that this is consistent with our understanding of phase difference we calculate \(\Delta \Phi\): \[\Delta \Phi = -2 \pi \dfrac{\Delta x}{\lambda} + \Delta \phi\] \[\Delta \Phi = -2 \pi \dfrac{(n+ \frac{1}{2}) \lambda}{\lambda} + 0\] \[\Delta \Phi = -2 \pi \left( n + \dfrac{1}{2} \right) = -2 \pi n - \pi = \textrm{(odd)}\; \pi\] Note that \(\Delta \phi = 0\) still; the waves are still creating peaks (or troughs) at the same time as one another. By changing the separation distance, sources that create waves in phase can still produce destructive interference where the two waves come together. It is important to distinguish the separation of the sources from the path length difference. In all of the above examples these are the same. However, consider two sources separated by half a wavelength, but place the detector equal distances from both sources: Now even though the sources are separated by \(\lambda /2\), the wave from each source must travel exactly the same distance to get to the detector. Therefore the path length difference \(\Delta x\) is zero – peaks created at the same time will arrive at the same time, and we still get constructive interference. Even though we can think of \(x\) as a vector quantity, as we did in Physics 7B, we don't need to; the only thing of interest is how far the waves travel from their source.

Constant Phase Difference

Another way of changing the total phase is to ensure that the two sources are not creating peaks together. This is done by manipulating \(\phi_1\) and \(\phi_2\), the phase constants.
To keep things simple we are again going to assume that the frequencies (and the wavelengths) of the waves produced by the two sources are the same. If we start with two sources in the same location, but make one source create a trough while the other source creates a peak, we have destructive interference at the location of the detector: These two sources are now out of phase. Here the constant phase difference is \(\Delta \phi = \pi\). We could also manipulate the phase constants to achieve constructive interference by ensuring the waves are in phase, where \(\Delta \phi = 0\) or \(2 \pi\).

Using Phase Charts

Let us see how we can reproduce some of the results we had earlier. Let us look at the case where the two sources had the same phase constant (\(\phi_1 = \phi_2 \equiv \phi\)), but the sources were separated by one wavelength: For the two waves we have \[\Phi_1 = 2 \pi \dfrac{t}{T} - 2 \pi \dfrac{x_1}{\lambda} + \phi\] \[\Phi_2 = 2 \pi \dfrac{t}{T} - 2 \pi \dfrac{x_2}{\lambda} + \phi\] These waves have the same period, so they must have the same frequency (\(T= 1/ f\)). They travel in the same medium, so they must have the same speed \(v_{wave}\) and wavelength \(\lambda = v_{wave}/f\). The total phase difference is then \[\Delta \Phi = \Phi_1-\Phi_2 = 2 \pi t \left( \dfrac{1}{T} - \dfrac{1}{T} \right) - \dfrac{2 \pi}{\lambda} (x_1-x_2) + (\phi - \phi)\] \[= -\dfrac{2 \pi}{\lambda} \Delta x\] By using \(\Delta x = \lambda\), from our picture, we get \(\Delta \Phi = -2 \pi\), which means the interference is constructive. (If you're given a picture, the easier way of doing this is to look at the waves at the detector. They're clearly in phase, so the interference must be constructive.) As a second example, let us consider the case where the two waves were out of phase and separated by half a wavelength as shown: There are now two contributions to our change in total phase. We have \(\phi_1-\phi_2 = \pi\) and \(x_1-x_2= \lambda /2\).
The change in phase is now \[\Delta \Phi = 2 \pi t \left( \dfrac{1}{T} - \dfrac{1}{T} \right) - \dfrac{2 \pi}{\lambda} (x_1-x_2) + (\phi_1 - \phi_2)\] \[= 2 \pi t (0) - \dfrac{2 \pi}{\lambda} \dfrac{\lambda}{2} + \pi\] \[ = 0 - \pi + \pi = 0\] This gives us constructive interference. A phase chart can be useful in making sure we include every term in our phase difference calculation.

Limits of Phase Charts

By using phase charts, we assume that in between the source and the detector the waves look exactly the same. But there are many real-world examples where that is not the case. Consider these waves, their sources separated by 2.5 wavelengths. We see that we get destructive interference just as we expect. Keeping the same separation, but inserting another medium (shown in blue), can lead to constructive interference at the detector. This is because the wave travels at a different speed in the new medium, so its wavelength changes value. When it leaves the medium, the waves are back in step. Notice that at the detector the wavelengths are identical: \(\lambda_1 = \lambda_2\). In using phase charts, we assume that the two waves have the same wavelength along the whole path; our picture above is a counterexample to this. We can get rid of this assumption if we carefully keep track of the total phase of each wave from one medium to another, or if we use the wave equations directly and dispose of the phase chart. Is our counterexample relevant to any real-world examples? Yes! The subject of thin film interference is based on light interfering, where one ray goes through two media and the other ray only goes through one. Thin film interference is responsible for the pretty colors we see on soap bubbles and in puddles on the street where small amounts of oil sit on the surface. It is also responsible for the different colors seen reflecting in the surface of pearls (the layers are calcium carbonate and water).
Thin film interference finds important applications in photography as well. Contributors: Authors of Phys7C (UC Davis Physics Department)
Questions about the reasons aircraft fly are frequent, even among scientists. Since the time I was in high school, even though I now work on the other side of the fluid world (the low-$Re$ regime), I've kept asking my professors, advisors, and colleagues what their own explanation of flight was. I know about the controversy over the push downward, considered a common fallacy by the NASA website, and about Anderson's argument, denying the erroneous principle of equal times and the overestimated role of Bernoulli's theorem. My best overall and simplest explanation is taken from Anderson, and consists in the following: somehow the air reaching the leading edge of the wing, after interacting with it, goes downward. This must be the result of some kind of force, and therefore, by Newton's third law, there must be a force of equal strength in the opposite direction, which pushes the aircraft up. First: why does the air go down? Answer: the angle of attack and the shape of the airfoil, together with simple pressure and stagnation arguments. Second: what is the role of Bernoulli's theorem here? If the air is pushed down by means of the "geometry", we don't need the difference of velocity between the upper and the lower part of the wing; rather, we get that difference as a consequence of the change of pressure (due to the shape). Is that right? My second question, actually, is about the most common and sophisticated explanation: the starting-vortex balance. The main argument is: due to the Kutta condition (a body with a sharp trailing edge which is moving through a fluid will create about itself a circulation of sufficient strength to hold the rear stagnation point at the trailing edge), the vorticity "injected" into the surrounding flow by viscous diffusion from the boundary layer generated near the airfoil turns into a continuum of mini starting vortices.
This vorticity leaves the airfoil and remains (nearly) stationary in the flow, rapidly decaying through the action of viscosity. By the Kelvin theorem, which in the 2D case is nothing but the statement that vorticity is constant along every particle's path, this vorticity must be balanced by the formation of an equal but opposite "bound vortex" around the airfoil. This vortex causes a higher velocity on top of the wing and a lower velocity under it, producing lift by means of the Bernoulli effect. Now, I wonder: 1) We are assuming that the vorticity leaving the wall (the no-slip condition makes the airfoil wall a sheet of infinite vorticity) by diffusion leaves the boundary layer and enters the region where the Reynolds number is high enough to allow us to apply the Euler equations, and then the Kelvin theorem (valid only for inviscid fluids). I usually explain this using the vorticity equation, which is a local version (in 2D) of the Kelvin theorem: $$\partial_t\omega+\boldsymbol u \cdot \nabla \omega=\nu \nabla^2 \omega$$ In the boundary layer the viscous term dominates, whereas in the outer layer it can be neglected. When the vorticity arrives in the outer layer, it is conserved, and we can say that the structures arriving from the boundary layer must be balanced (in terms of vorticity) by the fluid in this region. And we can do this only because the circulation, which is the actual conserved quantity, is a line integral, and if we don't cross the airfoil/boundary-layer region we don't have problems of any sort. Is this correct? 2) In this explanation Bernoulli's theorem seems to be a cause, generating the lift through the difference of velocity. Is this right? Thanks in advance. I always feel like a kid asking about this issue, and at the same time like an ignorant but curious scientist.
I'm new to discrete mathematics, in particular to generating functions. This is probably easy to determine; however, I'm having trouble. For $a_n=\frac{n+1}{(-2)^n}$ and $b_n=\frac{n+1}{3^n}$, let $$A(x)=\sum_{n=0}^{\infty} a_nx^n, \qquad B(x)=\sum_{n=0}^{\infty} b_nx^n$$ be the generating functions of $a_n$ and $b_n$. Determine $A(x)$ and $B(x)$. The solution of the exercise is $A(x)=\frac{4}{(x+2)^2}$ and $B(x)=\frac{9}{(3-x)^2}$.
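For what it's worth, both closed forms follow from the identity $\sum_{n\ge 0}(n+1)t^n = \frac{1}{(1-t)^2}$ with $t = -x/2$ and $t = x/3$ respectively, and they are easy to sanity-check numerically by comparing a long partial sum against the claimed expression at a point inside the radius of convergence (the helper below is just my own illustration):

```python
# Numerical sanity check of the claimed closed forms (not a proof):
#   A(x) = sum_{n>=0} (n+1) (-1/2)^n x^n  =? 4/(x+2)^2   for |x| < 2
#   B(x) = sum_{n>=0} (n+1) (1/3)^n  x^n  =? 9/(3-x)^2   for |x| < 3
def partial_sum(ratio, x, terms=200):
    """Partial sum of sum_{n} (n+1) * (ratio * x)^n."""
    return sum((n + 1) * (ratio * x) ** n for n in range(terms))

x = 0.5
A = partial_sum(-1 / 2, x)
B = partial_sum(1 / 3, x)
print(A, 4 / (x + 2) ** 2)   # both ~0.64
print(B, 9 / (3 - x) ** 2)   # both ~1.44
```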
Find $A,B\in\Bbb K^{2\times 2}$ such that $AB\neq BA$ but $e^{A+B}=e^Ae^B$. Hint: $e^{2k\pi i}=1$ for all $k\in\Bbb Z$. This is exercise 10 on page 146 of Analysis II of Amann and Escher. I don't know exactly what to do here other than trying things blindly (trying to guess what the hint is about). I know that $$A:=\begin{bmatrix}a&b\\0&c\end{bmatrix}\implies e^A=\begin{cases}\begin{bmatrix}e^{a}&\frac{b}{c-a}(e^{c}-e^{a})\\0&e^{c}\end{bmatrix},& c\neq a\\\begin{bmatrix}e^{a}&be^{a}\\0&e^{a}\end{bmatrix},& c= a\end{cases}\tag1$$ $$A:=\begin{bmatrix}0&-\omega\\\omega&0\end{bmatrix}\implies e^{A}=\begin{bmatrix}\cos \omega&-\sin\omega \\\sin\omega &\cos\omega \end{bmatrix}\tag2$$ What I thought about the hint is to set up some matrices as in $(1)$ such that $i(A+B)=i2\pi I$, for example $$A:=\begin{bmatrix}a&b\\0&c\end{bmatrix},\quad B:=\begin{bmatrix}2\pi-a&-b\\0&2\pi-c\end{bmatrix}\implies e^{i(A+B)}=I$$ However we then have $AB=BA$, so I must find another way. Some help will be appreciated, thank you.
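One candidate pair suggested by the hint (my own guess, offered tentatively, not necessarily the intended solution): take $A=\begin{bmatrix}2\pi i&0\\0&0\end{bmatrix}$ and $B=\begin{bmatrix}2\pi i&1\\0&0\end{bmatrix}$. By formula $(1)$, $e^A=e^B=e^{A+B}=I$, so $e^{A+B}=e^Ae^B$, while $AB\neq BA$. A quick numerical check:

```python
import numpy as np
from scipy.linalg import expm  # matrix exponential

# Candidate pair (a guess based on the hint e^{2 k pi i} = 1, not
# necessarily the book's intended answer): both matrices, and their sum,
# have exponential equal to the identity, yet they do not commute.
A = np.array([[2j * np.pi, 0.0], [0.0, 0.0]])
B = np.array([[2j * np.pi, 1.0], [0.0, 0.0]])

I = np.eye(2)
print(np.allclose(expm(A), I))                       # True
print(np.allclose(expm(B), I))                       # True
print(np.allclose(expm(A + B), expm(A) @ expm(B)))   # True
print(np.allclose(A @ B, B @ A))                     # False
```

The key point is that $B$ has the distinct eigenvalues $2\pi i$ and $0$, both of which exponentiate to $1$, so the off-diagonal term $\frac{b}{c-a}(e^c-e^a)$ in $(1)$ vanishes.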
So far we have restricted our discussion of waves to waves that travel. In all our examples until this part, one could follow the location of a maximum of the wave, for example, and observe it moving with the rest of the disturbance. Another important class of waves exists, called standing waves. For a standing wave, the positions of the maxima and minima do not travel, but remain in place. There are many real-world standing waves; you may have noticed standing waves when you wiggled one end of a string, slinky, rope, etc., while the other end was held fixed. We will begin our discussion of standing waves by noting what occurs experimentally when standing waves form. Initially, it may be unclear why standing waves fit into our wave unit. After all, we emphasized that waves require propagation of a disturbance. Once we establish the idea of standing waves, we will use our model of interference to make sense of them.

What is a Standing Wave?

In the most general sense, we have already defined a standing wave as a wave that does not travel. How do these waves come to exist? Imagine you have a string attached at both ends that is under tension, like a guitar string. If you try to vibrate it at a particular rate, you may or may not be successful depending on the frequency you choose. At most frequencies, the wave you start will travel to the one end intact, but upon reaching it the shape of the wave distorts and overall the string no longer appears to carry a wave. Nowhere will the string displace very far from equilibrium. Only at certain frequencies will you see a sizeable displacement. If you begin vibrating at an extremely low frequency and gradually increase the frequency, the first wave-like response of the string will look like the image below. Each of the seven lines in the image is like a photograph of the string at a particular instant. These serve as displacement versus position graphs of the string, at seven different times.
Notice that the displacement at both ends is zero. This makes a great deal of sense, because both ends are attached and thus cannot move. Any part of a standing wave that experiences no displacement over time is called a node; the above picture features two nodes. Also notice there is one spot in the middle that experiences the maximum amount of displacement at each time. Any spot that exhibits this behavior is called an antinode. If we increase the frequency of our vibrations, we lose the wave shape for a while, but it returns at higher frequency values. The next three frequencies we find to exhibit wave-like behavior will give us the waves shown below. In each image, the arrows highlight a distance of a half wavelength. If we use \(L\) to denote the length of the string, then for the first frequency \(\lambda = 2L\), because only half of a wavelength fits on the string. For the second-lowest frequency \(\lambda = L\), since an entire wavelength fits on the string. Similar relationships can be established for \(n = 3\) and \(n = 4\). In general, for waves on a string that is attached at both ends, \[\lambda = \dfrac{2L}{n}\] Here, the various \(n\) values specify which harmonic we are discussing. The lowest harmonic, with \(n = 1\), is called the fundamental harmonic. We have now developed a relationship between harmonic \(n\) and wavelength \(\lambda\). If we knew the wave speed on the string, we could determine the frequency we have produced using \(v_{wave} = \lambda f = f \frac{2L}{n}\). Because frequency does not change between media, whatever frequency is produced on the string is reproduced in the air and eventually makes it into our ear. It is the frequency that we hear as a particular note. To make sure the instrument plays the correct note, a musician must first tune the instrument so it produces the correct frequency.
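To make the tuning relations concrete, here is a small numerical sketch (the tension, linear density, and length are made-up illustrative values, not data from the text). The wave speed on the string is \(v_{wave} = \sqrt{\tau/\mu}\), where \(\tau\) is the tension and \(\mu\) the mass per unit length, and the \(n\)-th harmonic then has frequency \(f_n = \frac{n\, v_{wave}}{2L}\):

```python
import math

# Illustrative (made-up) string parameters:
tau = 72.0     # tension in newtons
mu = 0.4e-3    # linear mass density in kg/m
L = 0.65       # string length in metres

v = math.sqrt(tau / mu)   # wave speed on the string

def harmonic(n):
    """Frequency of the n-th harmonic of a string fixed at both ends."""
    lam = 2 * L / n           # wavelength: lambda = 2L/n
    return v / lam            # f = v / lambda = n * v / (2L)

for n in range(1, 5):
    print(n, round(harmonic(n), 1), "Hz")
```

Note that halving the effective length doubles every \(f_n\), which is why fretting a string at its midpoint raises the pitch by an octave.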
To do this the musician can change the tension that the string is under (which adjusts the wave speed, since \(v_{wave} = \sqrt{\tau / \mu}\), where \(\tau\) is the tension and \(\mu\) the mass per unit length). While playing, a guitar player chooses different notes by putting fingers on the fretboard, which changes the effective length of the string (and therefore the frequency, since \(f = \frac{n v_{wave}}{2L}\); a shorter string gives a higher frequency). When we name the note the instrument plays as a single frequency, such as 440 Hz, we're referring to the fundamental harmonic of that note. In general, musical instruments produce many harmonics when playing any particular note. The different combinations of harmonics give instruments different sounds, allowing people to differentiate between pianos, guitars, and violins even though all three instruments use vibrating strings. We have now explored standing waves with both ends attached in some detail. A similar analysis could be done for standing waves with only one end attached and the other end free. In this case, the attached end will still behave like a node, but now the free end will behave like an antinode. For instance, only one quarter (\(\lambda /4\)) of a wave will fit along the length of the string for the fundamental frequency.

Exercise

Draw the first four harmonics for a wave with one end attached and one end free. Can you determine a general relationship between the string length and the wavelength?

Applying the Interference Model

Now that we have some sense of what standing waves are, it is time to make sense of them. There are two independent ways of making sense of this phenomenon in terms of the interference that we've seen in Superposition Basics and Superposition of Harmonic Waves. We will explore both briefly. Both methods involve waves traveling down the medium in opposite directions and interfering along the way.

Reflection at the Boundary

First, we send a continuous periodic wave down our medium; it hits the boundary at the end and reflects.
We now have two waves in the medium: the wave we are creating and the reflected wave. The two waves have the same frequency, determined by us, the source of the waves. Since both waves travel in the same medium, they have the same speed and thus the same wavelength as well. Because the waves travel in opposite directions, the position components of their equations have opposite signs. Mathematically, we have \[y_1 (x,t) = A \sin \left( \dfrac{2 \pi t}{T} - \dfrac{2 \pi x}{\lambda} + \phi_1 \right)\] \[y_2 (x,t) = A \sin \left( \dfrac{2 \pi t}{T} + \dfrac{2 \pi x}{\lambda} + \phi_2 \right)\] Notice that we left subscripts on the phase constant term, \(\phi\). When the wave hits the end of the medium, two different things can happen to the phase constant. In one case, called a soft reflection, the phase constant remains unchanged and \(\phi_2 = \phi_1\). In the other case, called a hard reflection, the reflected wave is completely out of phase with the incoming wave, so \(\phi_2 = \phi_1 + \pi\). The gif below helps to visualize what we just described. The blue wave is the wave we've created and the green wave is the wave reflecting back towards us. The red wave is the actual displacement of the rope; it's obtained by superposing the green and blue waves. The free end of the rope, the end we were initially moving to create the waves, is a location of constructive interference. This location has constructive interference for all times. The rope itself alternates between a maximum, flat, and a minimum, but the interference is always constructive. This might seem counterintuitive at first, but thinking back to our introduction to wave interference, there can be several moments where the displacement of both waves is zero (because the total phase \(\Phi = 0\) or \(2 \pi\), etc.), so the total displacement of the sum is also zero. Nonetheless, this is a spot of constructive interference.
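The superposition just described can also be checked directly: with \(\phi_1 = \phi_2 = 0\), the sum of the two counter-propagating waves is \(A\sin(\omega t - kx) + A\sin(\omega t + kx) = 2A\sin(\omega t)\cos(kx)\), so any position with \(\cos(kx) = 0\) never moves (a node), while \(\cos(kx) = \pm 1\) gives an antinode. A short numerical sketch (the parameter values are arbitrary):

```python
import numpy as np

# Two equal-amplitude harmonic waves traveling in opposite directions.
A, lam, T = 1.0, 2.0, 1.0
k, w = 2 * np.pi / lam, 2 * np.pi / T

t = np.linspace(0.0, 3 * T, 301)   # three periods of sample times

def total(x):
    """Superposed displacement at position x for all sampled times."""
    return A * np.sin(w * t - k * x) + A * np.sin(w * t + k * x)

x_node = lam / 4   # cos(k * lam/4) = cos(pi/2) = 0  -> node
x_anti = 0.0       # cos(0) = 1                      -> antinode

print(np.max(np.abs(total(x_node))))   # ~0: this point never moves
print(np.max(np.abs(total(x_anti))))   # ~2A: maximal oscillation
```

The node at \(x = \lambda/4\) stays at zero displacement for every sampled time, which is exactly the defining property of a standing wave.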
Infinite Interference Wave

There is a second way to apply the interference model to understand standing waves. In this case, we do not imagine any reflections, so we do not need to worry about whether a boundary change is 'hard' or 'soft.' Unfortunately, it is a bit more abstract. Imagine there are infinite waves traveling in opposite directions. Although we will be thinking about a small section of medium, like a length of string, we imagine that the waves extend beyond the medium in question. Throughout all of space, these waves are interfering. Their interference is like a giant standing wave in all space that has nodes and antinodes in every direction, with no ends. To use this idea, we think about the particular type of interference we have (like both ends fixed, or node-node interference). We then consider only the portion of the total interference pattern that meets these restrictions, apply this to our specific phenomenon, and ignore the rest.

Example 1

Determine whether reflection at the free end of a rope is a hard reflection or a soft reflection.

Solution

To visualize the phenomenon better, let's first sketch the situation. We could choose any harmonic, and any behavior for the other end we want. The sketch chosen above shows the fundamental with one end attached and one end free, at five different times. At the free end of a rope, there is an antinode. At some specific times, the free end has a maximal displacement. In general, at any given instant in time, the free end has more displacement than any other part of the rope. We will keep this in mind. As with any interference problem, there are three terms to consider that might cause interference. The time term: in this case, the waves in question have the same period. There will be no total phase difference contribution from the time term. The spatial term: we are examining the wave just as it turns around. Neither wave has traveled further than the other wave.
There will be no total phase difference contribution from the spatial term. The phase constant term: we seek to determine what the phase constant difference might be. We know the choices for \(\Delta \phi\) are 0 (soft reflection) or \(\pi\) (hard reflection). At this point, we know that the interference at the free end is entirely from the difference in phase constant, so \(\Delta \Phi = \Delta \phi\). We either have \(\Delta \Phi= 0\) or \(\Delta \Phi = \pi\). In the first case we would have constructive interference, and in the second case destructive interference. Clearly we are not seeing destructive interference, but we are seeing constructive interference. Thus, the reflection at a free end must be a soft reflection with \(\Delta \phi = 0\). Contributors: Authors of Phys7C (UC Davis Physics Department)
So as part of my new resolution to start reading the books on my shelves, I recently read through Probability with Martingales. I'd be lying if I said I fully understood all the material: it's quite dense, and my ability to read mathematics has atrophied a lot (I'm now doing a reread of Rudin to refresh my memory). But there's one very basic point that stuck out as genuinely interesting to me. When introducing measure theory, it's common to treat sigma-algebras as an annoying detail you have to suffer through in order to get to the good stuff. They're that family of sets which, annoyingly, isn't the whole power set. And we would have gotten away with it, if it weren't for that pesky axiom of choice. In Probability with Martingales this is not the treatment they are given. The sigma-algebras are a first-class part of the theory: you're not just interested in the largest sigma-algebra you can get, you care quite a lot about the structure of different families of sigma-algebras. In particular you are very interested in sub-sigma-algebras. Why? Well. If I may briefly read too much into the fact that elements of a sigma-algebra are called measurable sets… what are we measuring them with? It turns out that there's a pretty natural interpretation of sub-sigma-algebras in terms of measurable functions: if you have a sigma-algebra \(\mathcal{G}\) on \(X\) and a family of measurable functions \(\{f_\alpha : X \to Y_\alpha : \alpha \in A \}\), then you can look at the smallest sigma-algebra \(\sigma(f_\alpha) \subseteq \mathcal{G}\) for which all these functions are still measurable. This is essentially the collection of measurable sets which we can observe by only asking questions about these functions.
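On a finite set this construction can be carried out explicitly: \(\sigma(f)\) is just the collection of all unions of level sets of \(f\). A toy sketch (my own illustration, not from the book):

```python
from itertools import combinations

def sigma_generated_by(X, f):
    """Sigma-algebra on a finite set X generated by a single function f.

    It consists of all preimages f^{-1}(B), i.e. all unions of the
    level sets {x : f(x) = v}.
    """
    values = {f(x) for x in X}
    blocks = [frozenset(x for x in X if f(x) == v) for v in values]
    sigma = set()
    for r in range(len(blocks) + 1):
        for combo in combinations(blocks, r):
            sigma.add(frozenset().union(*combo))
    return sigma

# A die roll where we can only "ask" about parity: the observable
# events are just {}, the odds, the evens, and everything.
X = {1, 2, 3, 4, 5, 6}
sigma = sigma_generated_by(X, lambda x: x % 2)
print(len(sigma))  # 4
```

With a finer function (say \(f(x)=x\)) the blocks are singletons and we recover the full power set, matching the intuition that \(\sigma(f)\) measures exactly what \(f\) can distinguish.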
It turns out that every sub-sigma-algebra can be realised this way, but the proof is disappointing: given \(\mathcal{F} \subseteq \mathcal{G}\) you just consider the identity function \(\iota: (X, \mathcal{G}) \to (X, \mathcal{F})\), and \(\mathcal{F}\) is the sigma-algebra generated by this function. One interesting special case of this is sequential random processes. Suppose we have a set of random variables \(X_1, \ldots, X_n, \ldots\) (not necessarily independent, identically distributed, or even taking values in the same set). Our underlying space then captures an entire infinite chain of random variables stretching into the future. But we are finite beings and can only actually look at what has happened so far. This then gives us a nested sequence of sigma-algebras \(\mathcal{F_1} \subseteq \ldots \subseteq \mathcal{F_n} \subseteq \ldots \) where \(\mathcal{F_n} = \sigma(X_1, \ldots, X_n)\) is the collection of things we can measure at time \(n\). One of the reasons this is interesting is that a lot of things we would naturally pose in terms of random variables can instead be posed in terms of sigma-algebras. This tends to very naturally erase any difference between single random variables and families of random variables. e.g. you can talk about independence of sigma-algebras (\(\mathcal{G}\) and \(\mathcal{H}\) are independent iff \(\mu(G \cap H) = \mu(G) \mu(H)\) for all \(G \in \mathcal{G}, H \in \mathcal{H}\)), and two families of random variables are independent if and only if the generated sigma-algebras are independent. A more abstract reason it's interesting is that it's quite nice to see the sigma-algebras play a front-and-center role as opposed to being an annoyance we want to forget about. I think it makes the theory richer and more coherent to do it this way.
Divisor (algebraic geometry) For other meanings of the term 'Divisor' see the page Divisor (disambiguation) In algebraic geometry, the term divisor is used as a generalization of the concept of a divisor of an element of a commutative ring. It was first introduced by E.E. Kummer [Ku] under the name of "ideal divisor" in his studies of cyclotomic fields. The theory of divisors for an integral commutative ring $A$ with a unit element consists in constructing a homomorphism $\def\phi{\varphi}\phi$ from the multiplicative semi-group $A^*$ of non-zero elements of $A$ into some semi-group $D_0$ with unique factorization, the elements of which are known as (integral) divisors of the ring $A$. The theory of divisors makes it possible to reduce a series of problems connected with prime factorization in $A$, where this factorization may be not unique, to the problem of prime factorization in $D_0$. The image $\phi(a)\in D_0$ of an element $a\in A^*$ is denoted by $(a)$ and is called the principal divisor of the element $a$. One says that $a\in A^*$ is divisible by the divisor $\def\fa{\mathfrak{a}} \fa\in D_0$ if $\fa$ divides $(a)$ in $D_0$. More exactly, let $D_0$ be a free Abelian semi-group with a unit element, the free generators of which are known as prime divisors, and let a homomorphism $\phi : A^* \to D_0$ be given. The homomorphism $\phi$ defines a theory of divisors of the ring $A$ if it satisfies the following conditions. 1) For $a,b\in A^*$ the element $a$ divides $b$ in $A$ if and only if $(a)$ divides $(b)$ in $D_0$. 2) For any $\fa\in D_0$, $$\{ a\in A \;|\; \fa \textrm{ divides } (a)\} \cup \{ 0 \}$$ is an ideal of $A$. 3) If $\fa,\fa'\in D_0$ and if, for any $a\in A^*$, $(a)$ is divisible by $\fa$ if and only if $(a)$ is divisible by $\fa'$, then $\fa=\fa'$. If a homomorphism $\phi$ exists, it is uniquely determined, up to an isomorphism, by the conditions just listed. The kernel $\ker \phi$ coincides with the group of unit elements of $A$.
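As a concrete anchor (the standard first example, stated here for illustration rather than taken from the article itself): for $A=\mathbb{Z}$ the theory of divisors is ordinary unique factorization.

```latex
% The motivating example A = \mathbb{Z}: take D_0 to be the free abelian
% semi-group on prime divisors \mathfrak{p}_2, \mathfrak{p}_3, \mathfrak{p}_5, \dots
% and let \varphi send a non-zero integer to the factorization of its
% absolute value, e.g.
(-12) = \mathfrak{p}_2^{2}\,\mathfrak{p}_3,
\qquad
\ker\varphi = \{\pm 1\} = U(\mathbb{Z}).
% Conditions 1)-3) then amount to unique factorization in \mathbb{Z}.
```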
The elements of $D_0$ are called positive divisors of $A$. Let $K$ be the field of quotients of $A$, and let $D\supset D_0$ be the free Abelian group generated by the set of prime divisors. Then for any $c \in K^*$, where $K^* = K\setminus \{0\}$, it is possible to define a principal divisor $\def\f#1{\mathfrak {#1}}(c) \in D$: if $c = a/b$ with $a,b\in A^*$, then $(c) = (a)/(b)$. The elements of the group $D$ are known as fractional divisors (or, simply, divisors) of $A$ (or of $K$). Any divisor $\f a\in D$ may be written in the form $$\f a = \f p_1^{n_1}\cdots\f p_r^{n_r},$$ where the $\f p_i$ are prime divisors. In additive notation: $\f a = n_1\f p_1+\cdots+n_r\f p_r$. If $a\in K^*$ and $(a) = \sum n_i\f p_i$, then for each fixed prime divisor $\f p_i$ the mapping $a\mapsto n_i$ is a discrete valuation on $K$, known as an essential valuation of $K$. The homomorphism $\phi$ is extended to a homomorphism $\psi : K^* \to D$, where $\psi(c) = (c)$, contained in the exact sequence $$1\to U(A) \to K^* \xrightarrow{\psi} D \to C(A) \to 1.$$ Here $U(A)$ is the group of invertible elements of $A$, while the group $C(A)$ is called the divisor class group of $A$ (or of $K$). Two divisors which belong to the same coset modulo the subgroup of principal divisors are called equivalent (in algebraic geometry, where a large number of other divisor equivalences are considered, this equivalence is known as linear). The theory of divisors is valid for any Dedekind ring, in particular for rings of integral elements in algebraic number fields, and the elements of $D_0$ are in one-to-one correspondence with the non-zero ideals of the ring $A$ (to the divisor $\f a$ corresponds the ideal of all elements of $A$ that are divisible by $\f a$). This is why, in a Dedekind ring, the group of divisors is also called the group of ideals, while the divisor class group is called the ideal class group.
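The essential valuations can also be made concrete for $A = \mathbf{Z}$, $K = \mathbf{Q}$: the valuation attached to a prime $p$ reads off the exponent of $p$ in the principal divisor $(c) = (a)/(b)$. A small Python sketch (my own helper, not a library function):

```python
from fractions import Fraction

def v_p(c, p):
    """p-adic valuation of a nonzero rational: the exponent of the prime
    divisor p in the principal divisor (c)."""
    c = Fraction(c)
    n, num, den = 0, c.numerator, c.denominator
    while num % p == 0:
        num //= p
        n += 1
    while den % p == 0:
        den //= p
        n -= 1
    return n

# (12/5) = 2^2 * 3 * 5^(-1), so v_2 = 2, v_3 = 1, v_5 = -1.
c = Fraction(12, 5)
assert (v_p(c, 2), v_p(c, 3), v_p(c, 5)) == (2, 1, -1)

# Discrete-valuation axioms, spot-checked:
x, y = Fraction(18), Fraction(12)
assert v_p(x * y, 3) == v_p(x, 3) + v_p(y, 3)
assert v_p(x + y, 3) >= min(v_p(x, 3), v_p(y, 3))
```

The two assertions at the end check multiplicativity and the ultrametric inequality on one example; negative values of $v_p$ are exactly what distinguishes fractional divisors from positive ones.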
The divisor class group of an algebraic number field is finite, and many problems in algebraic number theory involve the computation of its order (the number of classes) and structure [BoSh]. More generally, the theory of divisors is valid for Krull rings (cf. Krull ring, [Bo]). In such a case the role of $D_0$ is played by the semi-group of divisorial ideals (cf. Divisorial ideal) of the ring, while the part of $D$ is played by the group of fractional divisorial ideals. The concept of a Weil divisor is a generalization of the concept of a fractional divisorial ideal of a commutative ring to algebraic varieties or analytic spaces $X$. The name Weil divisor is given to integral formal finite linear combinations $\sum n_WW$ of irreducible closed subspaces $W$ in $X$ of codimension 1. A Weil divisor is called positive, or effective, if all $n_W \ge 0$. All Weil divisors form a group $Z^1(X)$ (the group of Weil divisors). If $X$ is a smooth algebraic variety, the concept of a Weil divisor coincides with that of an algebraic cycle of codimension 1. If $A$ is a Noetherian Krull ring, each prime divisorial ideal $\f p$ in $A$ defines a subspace $V(\f p)$ of codimension 1 in the scheme $X=\def\Spec{\textrm{Spec}}\Spec(A)$, and each divisor $\f a = \f p_1^{n_1}\cdots\f p_k^{n_k}$ may thus be identified with the Weil divisor $\sum n_iV(\f p_i)$. Let $X$ be a normal scheme and let $f$ be a rational (meromorphic in the analytic case) function on $X$. A principal Weil divisor is defined canonically: $$(f) = \sum n_W W.$$ Here $n_W$ is the value of the discrete valuation of the ring $\def\c#1{\mathcal{#1}}\c O_{X,W}$ of the subvariety $W$ on the representative of $f$ in $\c O_{X,W}$. If $$(f) = \sum n_W^+W + \sum n_W^- W,$$ where $n_W^+ > 0$ and $n_W^- < 0$, the Weil divisor $(f)_0 = \sum n_W^+ W$ is known as the divisor of the zeros, while $(f)_\infty = -\sum n_W^- W$ is known as the divisor of the poles of the function $f$.
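As a toy illustration of a principal Weil divisor (my own sketch, not part of the article): on the projective line $\mathbf{P}^1$ over $\mathbf{Q}$, the codimension-1 subvarieties are just the points of the line together with the point at infinity, and the divisor of a rational function can be computed directly.

```python
# Weil divisor of f = (x^2 - 1)/x^3 on P^1, computed from orders of vanishing.
# Polynomials are coefficient lists, highest degree first.

def order_at(coeffs, c):
    """Order of vanishing at x = c, via repeated synthetic division by (x - c)."""
    order = 0
    while True:
        b, acc = [], 0
        for a in coeffs:
            acc = a + c * acc
            b.append(acc)
        if b[-1] != 0:        # nonzero remainder: (x - c) no longer divides
            return order
        coeffs = b[:-1]
        order += 1

num = [1, 0, -1]       # x^2 - 1: zeros at +1 and -1
den = [1, 0, 0, 0]     # x^3: a pole of order 3 at 0

div = {p: order_at(num, p) - order_at(den, p) for p in (0, 1, -1)}
div['oo'] = (len(den) - 1) - (len(num) - 1)   # order at infinity: deg den - deg num

# (f) = 1*(1) + 1*(-1) - 3*(0) + 1*(oo): zeros minus poles.
assert div == {0: -3, 1: 1, -1: 1, 'oo': 1}
assert sum(div.values()) == 0   # a principal divisor on P^1 has degree 0
```

Here $(f)_0 = (1) + (-1) + (\infty)$ and $(f)_\infty = 3\cdot(0)$, and the two have equal degree, as they must for a principal divisor on a complete curve.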
The set of principal Weil divisors is a subgroup $Z_p^1(X)$ of the group $Z^1(X)$. The quotient group $Z^1(X)/Z_p^1(X)$ is denoted by $C(X)$ and is known as the divisor class group of the scheme $X$. If $X=\Spec\; A$, where $A$ is a Noetherian Krull ring, $C(X)$ coincides with the divisor class group of the ring $A$. Let $K$ be an algebraic function field. A divisor of $K$ is sometimes defined as a formal integral combination of discrete valuations of rank 1 of $K$. If $K$ is a field of algebraic functions in one variable, each such divisor may be identified with a Weil divisor on its complete non-singular model. Let $X$ be a regular scheme or a complex variety and let $D=\sum n_W W$ be a Weil divisor. For any point $x\in X$ there exists an open neighbourhood $U$ such that the restriction of $D$ to $U$, $$D|_U = \sum n_W(W\cap U),$$ is the principal divisor $(f_U)$ of a certain meromorphic function $f_U$ on $U$. The function $f_U$ is uniquely defined, up to an invertible function on $U$, and is known as the local equation of the divisor $D$ in the neighbourhood $U$, while the correspondence $U\mapsto f_U$ defines a section of the sheaf $M_X^*/\c O_X^*$. In general, a Cartier divisor on a ringed space $(X,\c O_X)$ is defined as a global section of the sheaf $M_X^*/\c O_X^*$ of germs of divisors. Here $M_X$ denotes the sheaf of germs of meromorphic (or rational) functions on $X$, i.e. the sheaf which associates with each open $U\subset X$ the total quotient ring of the ring $\Gamma(U,\c O_X)$, while $M_X^*$ and $\c O_X^*$ are the sheaves of invertible elements in $M_X$ and $\c O_X$, respectively. A Cartier divisor may be defined by a selection of local equations $$f_i \in \Gamma(U_i,M_X^*),$$ where $\{U_i\}$ is an open covering of $X$ and the functions $f_i/f_j$ are sections of the sheaf $\c O_X^*$ over $U_i\cap U_j$.
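A minimal worked example of such local equations (my own, not part of the article): the divisor consisting of a single point of $\mathbf{P}^1$ with multiplicity one.

```latex
% Cover X = P^1 by the standard affine charts
%   U_0 = {[x:1]},  U_1 = {[1:y]},  with y = 1/x on the overlap.
% The divisor D = 1*(0) (the origin, multiplicity one) is given by
\[
  f_0 = x \in \Gamma(U_0, M_X^*), \qquad f_1 = 1 \in \Gamma(U_1, M_X^*),
\]
% and on U_0 \cap U_1 (where x is neither 0 nor infinity)
\[
  f_0 / f_1 = x \in \Gamma(U_0 \cap U_1, \mathcal{O}_X^*),
\]
% so the pair (f_0, f_1) defines a global section of M_X^*/O_X^*,
% i.e. a Cartier divisor.
```

The compatibility condition is exactly that $f_0/f_1$ is invertible on the overlap, which holds here because $x$ vanishes nowhere on $U_0 \cap U_1$.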
In particular, a meromorphic function $f$ defines a divisor $\def\div{\textrm{div}}\div(f)$ known as a principal divisor. The set of $x\in X$ such that $(f_i)_x\notin \c O_{X,x}^*$ is called the support of the divisor. The Cartier divisors form an Abelian group $\def\Div{\textrm{Div}}\Div(X)$, while the principal divisors form a subgroup of it, $\Div_l(X)$. Each divisor $D\in \Div(X)$ defines an invertible sheaf $\c O_X(D)$ contained in $M_X$: if $D$ is represented by the local equations $f_i$ on the covering $\{U_i\}$, then $$\c O_X(D)|_{U_i} = f_i^{-1}\c O_X|_{U_i} \subset M_X|_{U_i}.$$ The correspondence $D\mapsto \c O_X(D)$ is a homomorphism of the group $\Div(X)$ into the Picard group $\def\Pic{\textrm{Pic}}\Pic(X) = H^1(X,\c O_X^*)$. This homomorphism is included in the exact sequence $$\Gamma(X,M_X^*)\to \Div(X)\xrightarrow{\delta} \Pic(X) \to H^1(X,M_X^*),$$ which is obtained from the exact sequence of sheaves $$0\to \c O_X^*\to M_X^* \to M_X^*/\c O_X^* \to 0.$$ Thus, $\ker\delta = \Div_l(X)$. If $D-D_1$ is a principal divisor, $D$ and $D_1$ are said to be linearly equivalent. If $X$ is a quasi-projective algebraic variety or a complex Stein space, the homomorphism $\delta : \Div(X) \to \Pic(X)$ is surjective and induces an isomorphism of the group of classes of linearly equivalent divisors $\Div(X)/\Div_l(X)$ onto the Picard group $\Pic(X)$. If $X$ is a complex space, the problem arises as to when a given divisor is a principal divisor; this is the so-called second Cousin problem (cf. Cousin problems). For example, the divisor class group of a complex Stein space $(X,\c O)$ is trivial if and only if $H^2(X,\Z)=0$. A divisor $D$ is said to be effective (or positive) if $\c O_X\subset \c O_X(D)$. In such a case $\c O_X(-D)$ is a sheaf of ideals in $\c O_X$; the support of $D$, endowed with the structure sheaf $\c O_X/\c O_X(-D)$, forms a subspace of $X$, which is also denoted by $D$.
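As a hedged worked example of the sheaf $\c O_X(D)$ (mine, not from the article): on $X = \mathbf{P}^1$ with the standard affine charts, a multiple of a point recovers the familiar twisting sheaves.

```latex
% X = P^1 with charts U_0 = {[x:1]}, U_1 = {[1:y]}, y = 1/x.
% The divisor D = n*(0) has local equations f_0 = x^n on U_0, f_1 = 1 on U_1, so
\[
  \mathcal{O}_X(D)\big|_{U_0} = x^{-n}\,\mathcal{O}_X\big|_{U_0}
    \subset M_X\big|_{U_0},
  \qquad
  \mathcal{O}_X(D)\big|_{U_1} = \mathcal{O}_X\big|_{U_1},
\]
% i.e. O_X(D) is the twisting sheaf O(n), and delta(D) is its class in
\[
  \operatorname{Pic}(\mathbf{P}^1)
    \cong H^1(\mathbf{P}^1, \mathcal{O}_X^*) \cong \mathbb{Z}.
\]
```

On this $X$ the map $\delta$ identifies linear equivalence classes of divisors with their degree, matching the isomorphism $\Div(X)/\Div_l(X) \cong \Pic(X)$ stated above.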
For a normal Noetherian scheme or a normal analytic space $X$ there is a natural homomorphism $$\def\cyc{\textrm{cyc}} \cyc : \Div (X) \to Z^1(X),$$ mapping $D\in\Div(X)$ to $\sum n_W W$, where $n_W = \nu_W(f)$, $f$ is a local equation of $D$ in a neighbourhood of $W$, and $\nu_W$ is the discrete valuation corresponding to $W$ [We]. The homomorphism cyc is injective and maps effective divisors to effective cycles; cyc is bijective if and only if $X$ is locally factorial (e.g. when $X$ is a non-singular scheme or an analytic manifold). If cyc is bijective, Weil and Cartier divisors coincide. Let $f:X'\to X$ be a morphism of schemes which is flat in codimension 1. Then, for any Cartier or Weil divisor $D$ on $X$ the inverse image $f^*(D)$ is defined; moreover, $\cyc(f^*(D)) = f^*(\cyc(D))$. The mapping $D\mapsto f^*(D)$ is a homomorphism of groups which maps principal divisors to principal ones, and thus defines a homomorphism of groups $$f^* : \Pic(X)\to\Pic(X')$$ (respectively, $$f^* : C(X)\to C(X')).$$ If $X'$ is an open set in $X$ the complement of which has codimension at least 2 and if $f$ is the imbedding of $X'$ into $X$, then $f^* : C(X)\to C(X')$ is an isomorphism, while $f^* : \Pic(X)\to\Pic(X')$ is an isomorphism if the scheme $X$ is locally factorial. Let $X$ be a smooth projective variety over $\C$. Any divisor $D$ on $X$ defines a homology class $$[D] \in H_{2\dim X -2}(X,\Z).$$ The cohomology class which is Poincaré dual to $[D]$ is identical with the Chern class $c_1(\c O_X(D))\in H^2(X,\Z)$ of the invertible sheaf $\c O_X(D)$. Thus there appears a homological equivalence on $\Div(X)$. There exists a theory of intersections of divisors [Sh], leading to the concept of algebraic equivalence of divisors (cf. Algebraic cycle).
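The inverse image of a divisor can be illustrated on the simplest non-trivial map, the squaring map on the affine line (a toy sketch over $\mathbf{Q}$, restricted to integer preimages; the helper name is mine):

```python
# Inverse image of a point divisor under the flat, finite map
# f: A^1 -> A^1, z -> z^2. The divisor 1*(c) pulls back to the divisor of
# zeros of z^2 - c, so the branch point c = 0 picks up multiplicity.

def pullback_point(c):
    """f^*(1*(c)) as {point: multiplicity}, for c zero or a perfect square."""
    if c == 0:
        return {0: 2}                  # z^2 has a double zero at the origin
    r = round(c ** 0.5)
    assert r * r == c, "illustration restricted to perfect-square c"
    return {r: 1, -r: 1}

assert pullback_point(4) == {2: 1, -2: 1}   # two simple preimages
assert pullback_point(0) == {0: 2}          # one preimage, multiplicity 2

# The total multiplicity always equals deg f = 2.
assert all(sum(pullback_point(c).values()) == 2 for c in (0, 1, 4, 9))
```

The constant total multiplicity is the cycle-level shadow of flatness, and the equality of the two ways of computing it is the identity $\operatorname{cyc}(f^*(D)) = f^*(\operatorname{cyc}(D))$ in this tiny case.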
The group $$\def\a{\alpha}\Pic^0(X) = \Div_\a(X)/\Div_l(X),$$ where $\Div_\a(X)$ denotes the group of divisors which are algebraically equivalent to zero, is naturally provided with the structure of an Abelian variety (the Picard variety; if $X$ is a curve, it is also called the Jacobi variety of $X$). The group $\Div(X)/\Div_\a(X)$, known as the Néron–Severi group, has a finite number of generators. The last two facts also apply to algebraic varieties over an arbitrary field. If $X$ is a one-dimensional complex manifold (a Riemann surface), a divisor on $X$ is a finite linear combination $$D=\sum_i k_ix_i,$$ where $k_i\in\Z$, $x_i\in X$. The number $\sum k_i$ is called the degree of the divisor $D$. For a compact Riemann surface $X$ of genus $g$ the group of divisor classes of degree zero is a $g$-dimensional Abelian variety and is identical with the Picard variety (or with the Jacobi variety). If $f$ is a meromorphic function on $X$, its principal divisor is $$\div(f) = \sum_i m_ix_i - \sum_j n_jy_j,$$ where the $x_i$ are the zeros and the $y_j$ are the poles of $f$, and $m_i$, $n_j$ are their multiplicities. Then $\sum_i m_i = \sum_j n_j$, i.e. a principal divisor has degree 0. A divisor $D$ of degree 0 on $X$ is principal if and only if there exists a singular one-dimensional chain $C$ such that $$\partial C = D \textrm{ and } \int_C \omega = 0$$ for all holomorphic forms $\omega$ of degree 1 on $X$ (Abel's theorem). See also Abelian differential.

References
[Bo] N. Bourbaki, "Elements of mathematics. Commutative algebra", Addison-Wesley (1972) (Translated from French) MR0360549 Zbl 0279.13001
[BoSh] Z.I. Borevich, I.R. Shafarevich, "Number theory", Acad. Press (1966) (Translated from Russian) (German translation: Birkhäuser, 1966) MR0195803 Zbl 0145.04902
[Ca] P. Cartier, "Questions de rationalité des diviseurs en géométrie algébrique" Bull. Soc. Math. France, 86 (1958) pp. 177–251 MR0106223 Zbl 0091.33501
[Ch] C. Chevalley, "Introduction to the theory of algebraic functions of one variable", Amer. Math. Soc. (1951) MR0042164 Zbl 0045.32301
[Ch2] S.S. Chern, "Complex manifolds without potential theory", Springer (1979) MR0533884 Zbl 0444.32004
[Gr] A. Grothendieck, "Eléments de géométrie algébrique IV. Etude locale des schémas et des morphismes des schémas" Publ. Math. IHES, 32 (1967) MR0238860 Zbl 0144.19904 Zbl 0135.39701 Zbl 0136.15901
[GrHa] P.A. Griffiths, J.E. Harris, "Principles of algebraic geometry", Wiley (Interscience) (1978) pp. 178; 674; 179; 349; 525; 532; 535; 632; 743 MR0507725 Zbl 0408.14001
[GuRo] R.C. Gunning, H. Rossi, "Analytic functions of several complex variables", Prentice-Hall (1965) MR0180696 Zbl 0141.08601
[Ha] R. Hartshorne, "Algebraic geometry", Springer (1977) MR0463157 Zbl 0367.14001
[Ku] E.E. Kummer, "Ueber die Zerlegung der aus Wurzeln der Einheit gebildeten complexen Zahlen in ihre Primfaktoren" J. Reine Angew. Math., 35 (1847) pp. 327–367
[Mu] D. Mumford, "Lectures on curves on an algebraic surface", Princeton Univ. Press (1966) MR0209285 Zbl 0187.42701
[Sh] I.R. Shafarevich, "Basic algebraic geometry", Springer (1977) (Translated from Russian) MR0447223 Zbl 0362.14001
[Sp] G. Springer, "Introduction to Riemann surfaces", Addison-Wesley (1957) MR0092855 Zbl 0078.06602
[We] A. Weil, "Introduction à l'étude des variétés kählériennes", Hermann (1958)
[We2] R.O. Wells jr., "Differential analysis on complex manifolds", Springer (1980) MR0608414 Zbl 0435.32004

How to Cite This Entry: Divisor (2). Encyclopedia of Mathematics. URL: http://www.encyclopediaofmath.org/index.php?title=Divisor(2)&oldid=28592
Talk:Kelvin-Stokes Theorem Filling in the details I appreciate what is being done here, but I wonder whether it would be better to extract the complexity out into another page (where we may be able to invoke some already-documented vector-calculus identities) and hence structure it in a more easily-digested form (e.g. extracting the sub-calculations as building blocks). At the moment there is just too much going on in any given line, and as a result, not only does it not fit on a page, but it's also difficult to comprehend what is actually being done. --prime mover (talk) 10:51, 4 July 2019 (EDT) I am of the same mind generally, though I am having difficulty thinking about/finding relevant identities themselves, other than Derivative of Dot Product of Vector-Valued Functions, and naming them if they do not yet have pages. In the meantime, I could try re-adding some comments I removed (but as new lines within the final equation) and then remove the work-in-progress tag so that someone with more experience breaking articles into new pages could work on this. The issue I encountered is that by keeping it structured as one equation, all the comments I had added were pushed to the far right and were basically unreadable. Formatting the final equation without using the equation template should alleviate some of this. Mizar (talk) 11:18, 4 July 2019 (EDT) You could try splitting the lines up using the "ro" parameter to put the operator in.
Then you have something like this:
\(\displaystyle = \iint_R \Biggl( \paren { \dfrac {\partial f_3} {\partial y} \dfrac{\partial y}{\partial s} \dfrac{\partial z}{\partial t} - \dfrac {\partial f_3} {\partial y} \dfrac{\partial z}{\partial s} \dfrac{\partial y}{\partial t} - \dfrac {\partial f_2} {\partial z} \dfrac{\partial y}{\partial s} \dfrac{\partial z}{\partial t} + \dfrac {\partial f_2} {\partial z} \dfrac{\partial z}{\partial s} \dfrac{\partial y}{\partial t} }\)
\(\displaystyle \phantom{= \iint_R} \, + \, \paren { \dfrac {\partial f_1} {\partial z} \dfrac{\partial z}{\partial s} \dfrac{\partial x}{\partial t} - \dfrac {\partial f_1} {\partial z} \dfrac{\partial x}{\partial s} \dfrac{\partial z}{\partial t} - \dfrac {\partial f_3} {\partial x} \dfrac{\partial z}{\partial s} \dfrac{\partial x}{\partial t} + \dfrac {\partial f_3} {\partial x} \dfrac{\partial x}{\partial s} \dfrac{\partial z}{\partial t} }\)
\(\displaystyle \phantom{= \iint_R} \, + \, \paren { \dfrac {\partial f_2} {\partial x} \dfrac{\partial x}{\partial s} \dfrac{\partial y}{\partial t} - \dfrac {\partial f_2} {\partial x} \dfrac{\partial y}{\partial s} \dfrac{\partial x}{\partial t} - \dfrac {\partial f_1} {\partial y} \dfrac{\partial x}{\partial s} \dfrac{\partial y}{\partial t} + \dfrac {\partial f_1} {\partial y} \dfrac{\partial y}{\partial s} \dfrac{\partial x}{\partial t} } \Biggr) \rd s \rd t\)