It's my understanding that General Relativity abstracts away the concept of gravity as a force, and instead describes it as a feature of spacetime by which massive objects cause curvature. Then it follows that what we experience as a force is simply the difference between a geodesic on this curved surface and our perceived Euclidean space. What I am unsure of, exactly, is the implication of this.
$$S[q] \equiv \int L\big(q(t), \dot{q}(t), t\big)\,dt$$ and Hamilton's Principle states that $$\frac{\delta S}{\delta q(t)} = 0.$$
If $$F(q(t)) = -\nabla U(q(t)) = \nabla \big(T(\dot{q}(t)) - U(q(t))\big) = \nabla L,$$ which, as I understand it, is true for a conservative field like gravitation, then are these two statements not equivalent?
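(As a quick sanity check of the claimed equivalence: applying Hamilton's principle to $L = T - U$ yields the Euler-Lagrange equation, which is exactly Newton's second law with $F = -\nabla U$. A minimal one-dimensional sketch with sympy, using a hypothetical linear potential $U = kq$, can verify this symbolically:)

```python
import sympy as sp

t = sp.symbols('t')
m, k = sp.symbols('m k', positive=True)  # k: hypothetical potential strength
q = sp.Function('q')

U = k * q(t)                                     # assumed potential U(q) = k*q
T = sp.Rational(1, 2) * m * sp.diff(q(t), t)**2  # kinetic energy T(q')
L = T - U

# Euler-Lagrange equation from Hamilton's principle: dL/dq - d/dt(dL/dq') = 0
EL = sp.diff(L, q(t)) - sp.diff(sp.diff(L, sp.diff(q(t), t)), t)
# EL simplifies to -k - m*q''(t), i.e. m*q'' = -dU/dq = F
```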
What additional insight do we gain by using principles of differential geometry versus classical potential theory?
|
I have packet streams $1,\dots,k$, where stream $i$ has error probability $p_i$. The $p_i$ are constants $>0$. I'd like to maximize the probability that all streams make it through simultaneously while allowing at most $N$ packets in total, so every stream should get some portion of $N$, i.e.
max $f=(1-p_1^{n_1})(1-p_2^{n_2})\cdots(1-p_k^{n_k})$, or max $\prod_{i=1}^k (1-p_i^{n_i})$
s.t. $\sum_{i=1}^k n_i=N$
The problem is to find the optimal set of portions, i.e. the $n_i$.
Now, to simplify, I drop the requirement that the $n_i$ be integers, so now $n_i \in \mathbb{R}_{+}$
...
I have tried to solve $\max \, \log f$, which reduces the problem to maximizing $\sum_{i=1}^k \ln(1-p_i^{n_i})$
Now, I've set up the Lagrangian, and solving it for the case of only $n_1, n_2$ I get
$n_1=\frac{1}{\ln p_1}\ln\bigg(\frac{\lambda}{\lambda+\ln p_1}\bigg)$ and a similar form for $n_2$. My problem is that later, for $\lambda$, I get an implicit equation like
$\ln p_2\ln\bigg(\frac{\lambda}{\lambda+\ln p_1}\bigg)+\ln p_1\ln\bigg(\frac{\lambda}{\lambda+\ln p_2}\bigg) = N\ln p_1\ln p_2$
My solution was to convert this to
$\ln p_2\ln\bigg(1+\frac{\ln p_1}{\lambda}\bigg)+\ln p_1\ln\bigg(1+\frac{\ln p_2}{\lambda}\bigg) = -N\ln p_1\ln p_2$
and use the Taylor series $\ln(1+x)=x-x^2/2+\cdots$. I am not getting sensible results when I take the first two terms, i.e. $\ln(1+x)\approx x-x^2/2$, and taking a better approximation leads to an intimidating polynomial :)...
Maybe I've made a mistake somewhere earlier?
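One alternative to the series expansion: the stationarity condition can be solved numerically. With the sign convention $\mu = -\lambda > 0$, the expression above becomes $n_i(\mu) = \ln\!\big(\mu/(\mu-\ln p_i)\big)/\ln p_i$, and $\sum_i n_i(\mu)$ is strictly decreasing in $\mu$, so bisection on $\mu$ recovers the optimum. A sketch (assuming $0 < p_i < 1$; `optimal_split` is a name chosen here for illustration):

```python
import math

def optimal_split(p, N, tol=1e-12):
    """Maximize prod(1 - p_i**n_i) s.t. sum(n_i) = N, n_i real and positive.

    Stationarity of the Lagrangian gives, for mu > 0,
        n_i(mu) = ln(mu / (mu - ln p_i)) / ln p_i,
    and sum_i n_i(mu) is strictly decreasing in mu, so we bisect on mu
    until the budget constraint sum(n_i) = N is met.
    """
    lp = [math.log(pi) for pi in p]      # all negative since 0 < p_i < 1

    def n(mu):
        return [math.log(mu / (mu - l)) / l for l in lp]

    lo, hi = 1e-12, 1.0
    while sum(n(hi)) > N:                # grow hi until the sum drops below N
        hi *= 2.0
    while hi - lo > tol * hi:
        mid = 0.5 * (lo + hi)
        if sum(n(mid)) > N:
            lo = mid
        else:
            hi = mid
    return n(0.5 * (lo + hi))
```

By construction every stream ends up with the same marginal gain $-p_i^{n_i}\ln p_i/(1-p_i^{n_i}) = \mu$, which is exactly the first-order condition from the Lagrangian.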
|
Kumari, M and Nath, G (2004)
Transient MHD rotating flow over a rotating sphere in the vicinity of the equator. In: International Journal of Engineering Science, 42 (17-18). pp. 1817-1829.
Abstract
Transient rotating flow of a laminar incompressible viscous electrically conducting fluid over a rotating sphere in the vicinity of the equator has been investigated. We have considered the situation where prior to the time t = 0 both the fluid and the sphere are at rest; at time t = 0 they are impulsively rotated with different angular velocities, either in the same direction or in opposite directions, and subsequently maintained at those same angular velocities. The effects of surface suction and the magnetic field are considered in the analysis. The nonlinear coupled parabolic partial differential equations governing the boundary layer flow have been solved by an implicit finite-difference method. The computation has been carried out from time t = 0 until the steady state is reached (t → ∞). For large suction and magnetic field, analytical solutions have been obtained for the steady-state case. The asymptotic behaviour of the steady-state equations for large independent variable η (η → ∞) has also been examined. The early flow development is governed by Rayleigh-type equations and the steady state by Bodewadt-type equations, with a smooth transition from the early flow development to the steady-state flow. The surface shear stresses in the meridional and rotational directions decrease with increasing time until the steady state is reached. The surface shear stress in the rotational direction is found to increase with magnetic field and suction, but the surface shear stress in the meridional direction decreases.
Item Type: Journal Article. Additional Information: Copyright of this article belongs to Elsevier Ltd. Department/Centre: Division of Physical & Mathematical Sciences > Mathematics. Depositing User: Mr. Ramesh Chander. Date Deposited: 22 Feb 2008. Last Modified: 19 Sep 2010 04:42. URI: http://eprints.iisc.ac.in/id/eprint/12912
|
In the following section, we presuppose Set Theory and Nonstandard Analysis. The exponential simplex method and the polynomial intex (inter-/extrapolation) method solve linear programmes (LPs).
Diameter theorem for polytopes: The diameter of an \(n\)-dimensional polytope defined by \(m\) constraints with \(m, n \in {}^{\omega}\mathbb{N}_{\ge2}\) is at most \(2(m + n - 3)\).
Proof: We can assemble at most \(\acute{m}\) hyperplanes into an incomplete cycle (of dimension 2) and have to consider \(n - 2\) alternatives sideways (in the remaining dimensions). Since we can pass each section with a maximum of two edges, the factor is 2. This theorem can be extended analogously to polyhedra by dropping the requirement of finiteness.\(\square\)
Definition: Let \(\omega\) be the limit to which all variables in Landau notation tend, and \(\vartheta := \ell \omega\). A method is polynomial (exponential) if its computation time in seconds and (or) its memory consumption in bits is \(\mathcal{O}({\omega}^{\mathcal{O}(1)})\) \((\mathcal{O}({e}^{|\mathcal{O}(\omega)|}))\).
Theorem: The simplex method is exponential.
Proof and algorithm: Let \(P := \{x \in {}^{\omega}\mathbb{R}^{n} : Ax \le b, b \in {}^{\omega}\mathbb{R}^{m}, A \in {}^{\omega}\mathbb{R}^{m \times n}, m, n \in {}^{\omega}\mathbb{N}^{*}\}\) be the feasible domain of the LP max \(\{{d}^{T}x : d \in {}^{\omega}\mathbb{R}^{n}, x \in P\}\). By taking the dual or setting \(x := {x}^{+} - {x}^{-}\) with \({x}^{+}, {x}^{-} \ge 0\), we obtain \(x \ge 0\). We first solve max \(\{-z : Ax - b \le {(z, ..., z)}^{T} \in {}^{\omega}\mathbb{R}^{m}, z \ge 0\}\) to obtain a feasible \(x\) when \(b \ge 0\) does not hold. Initial and target value are \(z := |\text{min } \{{b}_{1}, ..., {b}_{m}\}|\) and \(z = 0\). We begin with \(x := 0\) as in the first case. Pivoting if necessary, we may assume that \(b \ge 0\).
Let \(i, j, k \in {}^{\omega}\mathbb{N}^{*}\) and let \({a}_{i}^{T}\) be the \(i\)-th row vector of \(A\). If \({d}_{j} \le 0\) for all \(j\), the LP is solved. If for some \({d}_{j} > 0 \; ({d}_{j} = 0)\) we have \({a}_{ij} \le 0\) for all \(i\), the LP is positively unbounded (for now, we may drop \({d}_{j}\) and \({A}_{.j}\) as well as \({b}_{i}\) and \({a}_{i}\), but only when \({a}_{ij} < 0\) holds). The inequality \({a}_{ij}{x}_{j} \ge 0 > {b}_{i}\) for all \(j\) likewise has no solution. If necessary, divide all \({a}_{i}^{T}x \le {b}_{i}\) by \(||{a}_{i}||\), and all \({d}_{j}\) and \({a}_{ij}\) by the minimum of \(|{a}_{ij}|\) such that \({a}_{ij} \ne 0\) for each \(j\). This will be reversed later. If necessary, renormalise by \(||{a}_{i}||\).
We may always remove redundant constraints (with \({a}_{i} \le 0\)). Select for each \({d}_{j} > 0\) and non-base variable \({x}_{j}\) the minimum ratio \({b}_{k}/{a}_{kj}\) for \({a}_{ij} > 0\). The variables with \({}^{*}\) are considered in the next step. The next potential vertex is given by \({x}_{j}^{*} = {x}_{j} + {b}_{k}/{a}_{kj}\) for feasible \({x}^{*}\). To select the steepest edge, select the pivot \({a}_{kj}\) corresponding to \({x}_{j}\) that maximises \({d}^{T}\Delta x/||\Delta x||\) or \({d}_{j}^{2}/(1 + ||{A}_{.j}{||}^{2})\) for \(\Delta x := {x}^{*} - x\) in the \(k\)-th constraint.
If there are multiple maxima, select max\({}_{k,j} {d}_{j}{b}_{k}/{a}_{kj}\) according to the rule of best pivot value, or alternatively (perhaps less effectively) the smallest angle min \({(1, ..., 1)}^{T}d^{*}/||(\sqrt{n}) d^{*}||\). If we cannot directly maximise the objective function, we perturb, i.e. relax, the constraints with \({b}_{i} = 0\) by the same, minimal modulus. These do not need to be written into the tableau: we simply set \({b}_{i} = ||{a}_{i}||\).
If another multiple vertex is encountered, despite this being unlikely, simply increase the earlier \({b}_{i}\) by \(||{a}_{i}||\). The cost of eliminating a multiple vertex, after which we revert the relaxation, corresponds to an LP with \(d > 0\) and \(b = 0\). Along the chosen path, the objective function increases otherwise strictly monotonically. We can then simply calculate \({d}_{j}^{*}, {a}_{ij}^{*}\) and \({b}_{i}^{*}\) using the rectangle rule (cf. [775], p. 63).
In the worst-case scenario, the simplex method is not polynomial despite the diameter theorem for polytopes under any given set of pivoting rules, since an exponential "drift" can be constructed with Klee-Minty or Jeroslow polytopes, or others, creating a large deviation from the shortest path by forcing the selection of the least favourable edge. This is consistent with existing proofs. The result follows.\(\square\)
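For contrast with the description above, a minimal tableau implementation of the standard simplex method can be sketched as follows. This is a generic textbook variant (Dantzig's most-negative-reduced-cost pivot rule, assuming \(b \ge 0\) so no phase-one step is needed, and no anti-cycling safeguard), not the author's exact pivoting scheme:

```python
def simplex(c, A, b):
    """Maximise c^T x subject to Ax <= b, x >= 0, assuming b >= 0.

    Returns (x, optimal value). Minimal sketch: Dantzig pivot rule,
    no anti-cycling safeguard, dense tableau.
    """
    m, n = len(A), len(c)
    # Tableau rows [A | I | b] plus the objective row [-c | 0 | 0].
    T = [list(map(float, A[i])) + [0.0] * m + [float(b[i])] for i in range(m)]
    for i in range(m):
        T[i][n + i] = 1.0
    T.append([-float(cj) for cj in c] + [0.0] * (m + 1))
    basis = list(range(n, n + m))            # slack variables start in the basis
    while True:
        j = min(range(n + m), key=lambda col: T[-1][col])
        if T[-1][j] >= -1e-9:                # no negative reduced cost: optimal
            break
        ratios = [(T[i][-1] / T[i][j], i) for i in range(m) if T[i][j] > 1e-9]
        if not ratios:
            raise ValueError("LP is unbounded")
        _, r = min(ratios)                   # ratio test picks the leaving row
        piv = T[r][j]
        T[r] = [v / piv for v in T[r]]
        for i in range(m + 1):
            if i != r and T[i][j] != 0.0:
                f = T[i][j]
                T[i] = [a - f * b_ for a, b_ in zip(T[i], T[r])]
        basis[r] = j
    x = [0.0] * n
    for i, bj in enumerate(basis):
        if bj < n:
            x[bj] = T[i][-1]
    return x, T[-1][-1]
```

For example, maximising \(x + y\) subject to \(x \le 2\), \(y \le 3\), \(x + y \le 4\) yields the optimum 4.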
Theorem: The intex method solves every solvable LP in \(\mathcal{O}({\vartheta}^{3})\).
Proof and algorithm: First, we normalise and scale \({b}^{T}y - {d}^{T}x \le 0, Ax \le b\) and \({A}^{T}y \ge d\). Let the
height \(h\) and \(v := ({{x}^{T}, {y}^{T})}^{T} \in {}^{\omega}\mathbb{R}^{m+n}\) have the initial values \({h}_{0} := |\text{min } \{{b}_{1}, ..., {b}_{m}, {-d}_{1}, ..., {-d}_{n}\}| + s\) for sufficient clearance \(s \in {}^{\omega}\mathbb{R}_{>0}\) and 0. We compute the LP min \(\{h \in [0, {h}_{0}] : 0 \le x \in {}^{\omega}\mathbb{R}^{n}, 0 \le y \in {}^{\omega}\mathbb{R}^{m}, {b}^{T}y - {d}^{T}x \le h, Ax - b \le (h, ..., h)^{T} \in {}^{\omega}\mathbb{R}^{m}, d - {A}^{T}y \le (h, ..., h)^{T} \in {}^{\omega}\mathbb{R}^{n}\}\) via the (dual) programme min \(\{{b}^{T}y : 0 \le y \in {}^{\omega}\mathbb{R}^{m}, {A}^{T}y \ge d\}\) for the (primal) programme max \(\{{d}^{T}x : d \in {}^{\omega}\mathbb{R}^{n}, x \in {P}_{\ge 0}\}\).
We successively interpolate all \({v}_{k}^{*} := (\text{max } {v}_{k} + \text{min } {v}_{k})/2\) until all \(\Delta{v}_{k}\) are sufficiently small in the point \(({v}^{T}, h)^{T}\), and repeat this in the height \({h}^{*} := (h_0 + \text{min } h)/2\) for the point \(({v}^{*T}, {h}^{*})^{T}\). Then, we extrapolate \(({v}^{T}, h)^{T}\) via \(({v}^{*T}, {h}^{(*)})^{T}\) stopping just before the boundary of the polytope. There, we start over until \(x\) and \(y\) are optimal or \(h\) cannot be minimised anymore. Since \(h\) at least roughly halves itself for each iteration step in \(\mathcal{O}({\omega\vartheta}^{2})\), the claim follows by the strong duality theorem ([775], p. 60 - 65).\(\square\)
Corollary: If neither a primally feasible \(x\) nor a dual solution \(y\) needs to be computed, the runtime of the LP can roughly be halved by setting \(h := {d}^{T}x.\square\)
Remarks: Simplex method and face algorithm ([916], p. 580 f.) may solve the LP faster for small \(m\) and \(n\). We can easily change the current stock of constraints or variables, because the intex method is a non-transforming method and faster than all known (worst-case) LP-solving algorithms in \(\mathcal{O}({\ell\omega}^{4.5})\). We simply have to adjust \(h\) if necessary, because we can initialise additional variables with 0. Increasing the precision can make sense.
Corollary: Every solvable linear system (LS) \(Ax = b\) for \(x \in {}^{\omega}\mathbb{R}^{n}\) can be solved as LP min \(\{h \in [0, \text{max } \{|{b}_{1}|, ..., |{b}_{m}|\}] : \pm(Ax - b) \le (h, ..., h)^{T} \in {}^{\omega}\mathbb{R}^{m}\}\) in \(\mathcal{O}({\vartheta}^{3}).\square\)
Corollary: Every solvable LP min \(\{h \in [0, 1] : \pm(Ax - \lambda x) \le (h, ..., h)^{T} \in {}^{\omega}\mathbb{R}^{n}\}\) can determine an eigenvector \(x \in {}^{\omega}\mathbb{R}^{n} \setminus \{0\}\) of the matrix \(A \in {}^{\omega}\mathbb{R}^{n \times n}\) for the eigenvalue \(\lambda \in {}^{\omega}\mathbb{R}\) in \(\mathcal{O}({\vartheta}^{3}).\square\)
Corollary: Let \({\alpha }_{j}\) be the \(j\)-th column vector of the matrix \({A}^{-1} \in {}^{\omega}\mathbb{R}^{n \times n}\) and let \({\delta}_{ij}\) be the Kronecker delta. Every LS \({A \alpha }_{j} = {({\delta}_{1j}, ..., {\delta}_{nj})}^{T}\) determining the inverse \({A}^{-1}\) of the regular matrix \(A\) can be solved for \(j = 1, ..., n\) in \(\mathcal{O}({\vartheta}^{3})\). Whether \(A\) is regular can also be determined in \(\mathcal{O}({\vartheta}^{3}).\square\)
Corollary: If only finite floating-point numbers are used and if \(q \in [0, 1]\) is the density of \(A, \mathcal{O}({\vartheta}^{3})\) can above be everywhere replaced by max \(\{\mathcal{O}(qmn), \mathcal{O}(m + n)\}\) or max \(\{\mathcal{O}(q{n}^{2}), \mathcal{O}(n)\}.\square\)
Remarks: The four corollaries can easily be transferred to the complex case. The intex method is numerically very stable, since we can keep rounding errors small by resorting to the initial data and to a Kahan-Babuška-Neumaier summation modified according to Klein, especially near the end. The LP can be solved in \({}^{c}\mathbb{R}^{c}\) in \(\mathcal{O}(1)\) if the method is modified by distributed computing. It is also well-suited for (mixed) integer problems and (non-)convex (Pareto) optimisation.
Corollary: Every solvable convex programme min \(\{{f}_{1}(x) : x \in {}^{\omega}\mathbb{R}^{n}, {({f}_{2}(x), ..., {f}_{m}(x))}^{T} \le 0\}\) where the \({f}_{i} \in {}^{\omega}\mathbb{R}\) are convex functions for \(i = 1, ..., m\) may be solved by the intex method and two-dimensional bisection or Newton's methods in polynomial runtime, if the number of operands \({x}_{j}\) of the \({f}_{i}\) is \(\le {\omega}^{c-3}\) and if an \(x\) exists so that \({f}_{i}(x) < 0\) for all \(i > 1\) (see [939], p. 589 ff.).\(\square\)
code of simplex method
© 11.02.2019 by Boris Haase
|
I know that all compact Riemann surfaces with the same genus are topologically equivalent. Moreover they are diffeomorphic. But are they biholomorphic, too? In other words, is the complex structure conserved?
Some magic words for this question are "moduli space" or "moduli stack". In the early days, one was interested in a variety or variety-like object which would classify projective complex curves (compact Riemann surfaces) of given genus $g$, i.e., whose points correspond to isomorphism classes of curves (or biholomorphism classes of compact Riemann surfaces). This is nowadays called a "coarse moduli space". As GH and François commented, there is a whole continuum of points in the coarse moduli space of genus 1; the same is true for any genus $g > 1$.
Over time, it became apparent that the coarse moduli space is not the most fundamental object of study. Some information that is desirable to have, and that the coarse moduli space misses, is: what are the possible automorphisms on a fixed compact Riemann surface? For example, in the case of an elliptic curve (genus 1), the automorphism group is infinite and acts transitively on the curve. (Edit: this remark may be slightly misleading, because it is more usual to consider elliptic curves with a chosen origin, and this cuts way down on the automorphism group. Thanks to Donu Arapura for pointing this out in comments.) Not so in higher genus; curves of higher genus are much more rigid, and in fact have only finite automorphism groups.
(To me this was a bigger shock than finding out about the plenitude of complex manifold structures on a given curve. In ordinary smooth manifold theory, all the points are pretty much alike, in that one can construct a diffeomorphism that takes one point to another. But in complex curve theory, points can have different "personalities"; for example, cf. Weierstrass points.)
Anyway, the better object of study in these questions, which parametrizes not only isomorphism classes of curves but also isomorphisms between them, is called a moduli stack. You can begin reading about them here.
The answer is no. For example, if $\Lambda_1$ and $\Lambda_2$ are two lattices in $\mathbb{C}$, then the surfaces $\mathbb{C}/\Lambda_1$ and $\mathbb{C}/\Lambda_2$ are conformally equivalent if and only if $\Lambda_1$ and $\Lambda_2$ are similar. This follows from the theory of elliptic functions (or elliptic curves).
Identify the opposite sides of the unit square to get a torus $A$. Identify the opposite sides of a rectangle of side lengths $\pi$ and $\frac{1}{\pi}$ to get a torus $B$.
The extremal length of every closed curve in $A$ is an algebraic integer, which is not true of $B$. Since the set of extremal lengths of curves is a conformal invariant, $A$ and $B$ are not biholomorphic.
The answers here are great, but I think anyone drawn to this question should find the keyword "Teichmüller space" somewhere and maybe some references. Here is the wikipedia page as a start.
Note: with the top of page 10 (ibid) in mind, in general complex smooth varieties that are biholomorphic need not be biregular (see here).
|
Previous | Next
In the following section, Set Theory is presupposed.
Definition: A family of sets \(\mathbb{Y} \subseteq \mathcal{P}(X)\) is called a topology on \(X \subseteq R\) if, apart from \(\emptyset\) and \(X\), every intersection and union of sets of \(\mathbb{Y}\) belongs to \(\mathbb{Y}\). The pair \((X, \mathbb{Y})\) is called a topological space. If \(\mathbb{Y} = \mathcal{P}(X)\), the topology is called discrete. A set \(B \subseteq \mathbb{Y}\) is called a base of \(\mathbb{Y}\) if every set of \(\mathbb{Y}\) can be written as a union of any number of sets of \(B\). Every irreflexive relation \(N \subseteq {A}^{2}\) defines a neighbourhood relation in \(A \subseteq X\) for the underlying set \(X\). If \((a, b) \in N\), \(a\) is called a neighbour of, or neighbouring to, \(b\).
Examples: The base for \(\mathbb{N}, \mathbb{Z}, \mathbb{Q}, \mathbb{A}_\mathbb{R}, \mathbb{A}_\mathbb{C}, \mathbb{R}\) and \(\mathbb{C}\) is precisely the corresponding discrete topology in each case.
Definition: In particular, an element \(x \in A \subseteq X\) with \(x \ne y\) is called a neighbour of an element \(y \in A\) if, for all \(z \in X\) and a mapping \(d: {X}^{2} \rightarrow \mathbb{R}_{\ge 0}\), we have: (1) \(d(x, y) \le \text{max }\{\text{min }\{d(x, z), d(z, x)\}, \text{min }\{d(y, z), d(z, y)\}\}\) and (2) \(d(z, z) = 0\). Here \(d\) is called a neighbourhood metric. Let \(P = R \cup V\) be the set of all points, partitioned into actual points \(R\) and virtual points \(V\) with \(R, V \ne \emptyset = R \cap V\).
Definition: The set \(A' := R \setminus A\), where \(A \subseteq R\), is called the complement of \(A\) in \(R\). When \(R\) is clear from context, it can be omitted and \(A'\) can be called the exterior of \(A\). The set \(\partial V \; (\partial A)\) consists of all points of \(V \; (A)\) that have a neighbour in \(R \; (A' \cup V)\), and is called the (inner) boundary of \(V \; (A)\). Here \('\) takes precedence over \(\partial\). When we apply \(\partial\) successively beyond that, we assume the argument to be without complement. The set \(A ° := A \setminus \partial A\) is called the interior of \(A\).
Definition: A set \(S \subseteq R \; (V)\) is said to be connected if, for every partition of \(S\) into \(Y \cup Z\) with \(Y, Z \ne \emptyset = Y \cap Z\), we have: \(\partial Y' \cap \partial Z \ne \emptyset \ne \partial Z' \cap \partial Y\). \(S \subseteq R\) is moreover said to be simply connected if both \(\partial Y' \cap \partial Z \cup \partial Z' \cap \partial Y\), for every partition into connected \(Y\) and \(Z\), and \(S' \cup (\partial)V\), for \(S'\) the complement of \(S\) in \(R\), are connected for a connected \((\partial)V\). Let \(P\) and \(R\) be simply connected.
Definition: An \(h\)-homogeneous subset of \(R := \mathbb{R}^{m}\) for \(m \in \mathbb{N}^{*}\) is \(n\)-dimensional, where \(m \ge n \in \mathbb{N}^{*}\), if and only if it contains at least one \(n\)-cube with edge length \(h \in \mathbb{R}_{>0}\) and maximal \(n\). The definition for \(R := \mathbb{C}^{m}\) is analogous. Let dim \({}^{(\omega)}\mathbb{C} = 2\). The set \({\mathbb{B}}_{r}(a) := \{z \in K := {}^{(\omega)}\mathbb{K}^{n} : ||z - a|| \le r\}\) for \(\mathbb{K} = \mathbb{R} \; (\mathbb{C})\) is called a real (complex) (2)n-ball, or briefly ball, with radius \(r \in {}^{(\omega)}\mathbb{R}_{>0}\) around its centre \(a \in K\), and its boundary is called a real (complex) (2)n-sphere \({\mathbb{S}}_{r}(a)\), or briefly sphere.
Examples: Every ball is simply connected; for \(r > d0\), every real \(n\)-sphere with \(n \ge 2\) is only connected, and every real 1-sphere is not connected.
Definition: When \(a = 0\) and \(r = 1\), we obtain the unit ball, with the special case of the unit disc \(\mathbb{D}\) for \(\mathbb{K} = \mathbb{C}\) and \(n = 1\). Every \(U \subseteq R\) is called a neighbourhood of \(x \in R\) if \(x \in U°\). A function between two topological spaces is said to be continuous if, for every point that can be mapped and for every neighbourhood of the image of this point, there is a neighbourhood of the point whose image lies completely in that neighbourhood of the image.
Remark: The neighbouring boundary points of the conventional closed interval [0, 1] and the conventional open interval ]0, 1[ in particular do not have the Hausdorff property. Thus not every metric space can be a Hausdorff space, and normal and (pre-)regular spaces are limited. The spaces \(\mathbb{C}^{n}\) and \(\mathbb{R}^{n}\) with \(n \in {}^{\omega }\mathbb{N}^{*}\) therefore have only the Fréchet topology. The situation is, however, different in partially imprecise conventional mathematics.
© 05.04.2019 by Boris Haase
|
Does fixing the reparameterization invariance of the string action, for example by choosing the light-cone gauge
$$ X^{+} = \beta\alpha' p^{+}\tau $$
$$ p^{+} = \frac{2\pi}{\beta} P^{\tau +} $$
correspond to some kind of orbifolding?
This answer explains that gauge systems are orbifolds after removing the gauge redundancy. So, as the reparameterization invariance of the string action is nothing else but the worldsheet diffeomorphism invariance which is a gauge symmetry, does fixing it by the light-cone gauge correspond to some kind of orbifolding too?
And if so, what are the characteristics of this orbifold, what singularities does it have, and is there some kind of double strike which projects certain states out of the theory and at the same time leads to the emergence of new ones?
If this way of thinking is wrong, I would highly appreciate any clarifications of what I am confusing.
|
The generating function approach:
$$P(x)=(1+x+x^2+x^3+x^4+x^5)^6=\sum a_i x^i$$
Then $a_i$ counts the number of ways of getting a total of $i+6$ from $6$ dice.
Now, to find the even terms, you can compute $$\frac{P(1)+P(-1)}{2}=\sum_i a_{2i}.$$
But $P(1)=6^6$ and $P(-1)=0$. So $$\frac{P(1)+P(-1)}{2}=\frac{6^6}{2},$$ or exactly half, as you conjectured.
For another example, let $N_{i}$ be the number of ways to roll $6$ dice and get a total $\equiv i\pmod{5}$. Then it turns out that if $z$ is a primitive $5$th root of unity, the value can be counted by defining:
$$Q_i(x)=x^{6-i}(1+x+x^2+x^3+x^4+x^5)^6$$then computing $$N_i=\frac{Q_i(1)+Q_i(z)+Q_i(z^2)+Q_i(z^3)+Q_i(z^4)}{5}$$
This gives the result:
$$N_i =\begin{cases}\frac{6^6+4}{5}&i\equiv 1\pmod 5\\\frac{6^6-1}{5}&\text{otherwise}\end{cases}$$
More generally, if $N_{n,i}$ is the number of ways to get $\equiv i\pmod 5$ when $n$ dice are rolled, you get:
$$N_{n,i} =\begin{cases}\frac{6^n+4}{5}&i\equiv n\pmod 5\\\frac{6^n-1}{5}&\text{otherwise}\end{cases}$$
It's this simple because of the fact that $6=5+1$.
If each die has $d$ sides, and you ask how many ways there are to get a total $\equiv i\pmod {d-1}$, then you get:
$$N_{d,n,i} =\begin{cases}\frac{d^n+{d-2}}{d-1}=\frac{d^n-1}{d-1}+1&i\equiv n\pmod {d-1}\\\frac{d^n-1}{d-1}&\text{otherwise}\end{cases}$$
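These counts are cheap to verify by expanding the generating function directly as a convolution of coefficient lists; a small sketch (`dice_counts` is a name chosen here for illustration):

```python
def dice_counts(n, d=6):
    """Coefficients of (x + x^2 + ... + x^d)^n: counts[t] = ways to roll total t."""
    counts = [1]                         # the polynomial "1" (zero dice rolled)
    for _ in range(n):
        new = [0] * (len(counts) + d)
        for t, c in enumerate(counts):   # convolve with one more die
            for face in range(1, d + 1):
                new[t + face] += c
        counts = new
    return counts

counts = dice_counts(6)
even = sum(c for t, c in enumerate(counts) if t % 2 == 0)
residues = [sum(c for t, c in enumerate(counts) if t % 5 == i) for i in range(5)]
```

Here `even` comes out to exactly $6^6/2$, and the residue counts mod 5 match the case formula above: the class $i \equiv 6 \equiv 1 \pmod 5$ gets $(6^6+4)/5$ and the other four get $(6^6-1)/5$.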
|
This answer is the proof given by Ashutosh, but formulated in terms of the splitting number.
Proposition. If the splitting number $s$ is $\aleph_{1}$, then every nonseparable metric space contains a sequence of subsets with no convergent subsequence.
Proof: Following Sierpinski, since the metric space $M$ is non-separable, there exist $d > 0$ and a sequence $\{p_\xi\}_{\xi<\omega_{1}}$ of points in $M$ such that $\varrho(p_\xi,p_\eta)\ge d$ for $\xi<\eta<\omega_{1}$, where $\varrho(x,y)$ is the metric on $M$.
Let $S$ be a splitting family (for $[\omega]^{\omega}$) of size $\aleph_{1}$, $S = \lbrace s^\xi : \xi < \omega_{1} \rbrace$, where $s^\xi = \langle n_1^\xi,n_2^\xi,n_3^\xi,\ldots\rangle$; for a given $k \in \mathbb{N}$, let $E_k$ be the set of all $p_\xi$ such that $k\in \{n_1^\xi,n_2^\xi,\ldots\}$.
The sequence $E_1,E_2,E_3,\ldots$ does not contain any convergent subsequence. For, if $E_{k_1}, E_{k_2},\ldots$ where $k_1<k_2<\cdots$ is an arbitrary subsequence of $E_1,E_2,\ldots$, then there exists $\alpha<\omega_{1}$ such that $s^\alpha$ splits $K = \lbrace k_{n} : n < \omega \rbrace$ into infinite sets $a = K \cap s^{\alpha}$ and $b = K \setminus s^{\alpha}$, say. Now $p_\alpha\in E_{k_{i}}$ for $k_{i} \in a$ and $p_\alpha \notin E_{k_{j}}$ for $k_{j} \in b.$ The open ball with centre $p_\alpha$ and radius $d$ (which meets the set $\{p_\xi\}_{\xi<\omega_{1}}$ only in $p_\alpha$) intersects each $E_{k_{i}}$ for $k_{i} \in a$ non-trivially but is disjoint from $E_{k_{j}}$ for $k_{j} \in b$. Consequently, $E_{k_1},E_{k_2},E_{k_3},\ldots$ is not convergent. So the sequence $E_1,E_2,\ldots$ contains no convergent subsequence, q.e.d.
Corollary (Sierpinski). CH implies the assertion (*): every nonseparable metric space contains a sequence of subsets with no convergent subsequence.
Proof. CH implies $s = \aleph_{1}$.
Corollary (Ashutosh). The assertion (*) does not imply CH.
Proof. It is relatively consistent that $s = \aleph_{1} < 2^{\aleph_{0}}$. q.e.d.
|
DG - MP - PDE Seminar: Nassif Ghoussoub (UBC) Date: 09/13/2011 Time: 15:30
University of British Columbia
A self-dual polar factorization for vector fields
Abstract
We show that any non-degenerate vector field $u$ in $L^{\infty}(\Omega, \mathbb{R}^N)$, where $\Omega$ is a bounded domain in $\mathbb{R}^N$, can be written as $u(x) = \nabla_1 H(S(x), x)$ for a.e. $x \in \Omega$, where $S$ is a measure-preserving point transformation on $\Omega$ such that $S^2 = I$ a.e. (an involution), and $H: \mathbb{R}^N \times \mathbb{R}^N \to \mathbb{R}$ is a globally Lipschitz anti-symmetric convex-concave Hamiltonian. Moreover, $u$ is a monotone map if and only if $S$ can be taken to be the identity, which suggests that our result is a self-dual version of Brenier's polar decomposition for the vector field $u$ as $u(x) = \nabla \phi(S(x))$, where $\phi$ is convex and $S$ is a measure-preserving transformation. We also describe how our polar decomposition can be reformulated as a self-dual mass transport problem.
For further information, please see the event page at: http://www.math.ubc.ca/Dept/Events/index.shtml?period=future&series=all.
|
While much more can be said about sequences, we now turn to our principal interest, series. Recall that a series, roughly speaking, is the sum of a sequence: if $\ds\{a_n\}_{n=0}^\infty$ is a sequence then the associated series is $$\sum_{n=0}^\infty a_n=a_0+a_1+a_2+\cdots$$ Associated with a series is a second sequence, called the sequence of partial sums $\ds\{s_n\}_{n=0}^\infty$: $$s_n=\sum_{i=0}^n a_i.$$ So $$s_0=a_0,\quad s_1=a_0+a_1,\quad s_2=a_0+a_1+a_2,\quad \ldots$$ A series converges if the sequence of partial sums converges, and otherwise the series diverges.
Example 13.2.1 If $\ds a_n=kx^n$, $\ds\sum_{n=0}^\infty a_n$ is called a geometric series. A typical partial sum is $$s_n=k+kx+kx^2+kx^3+\cdots+kx^n=k(1+x+x^2+x^3+\cdots+x^n).$$ We note that $$\eqalign{ s_n(1-x)&=k(1+x+x^2+x^3+\cdots+x^n)(1-x)\cr &=k(1+x+x^2+x^3+\cdots+x^n)\cdot 1-k(1+x+x^2+x^3+\cdots+x^{n-1}+x^n)\cdot x\cr &=k(1+x+x^2+x^3+\cdots+x^n-x-x^2-x^3-\cdots-x^n-x^{n+1})\cr &=k(1-x^{n+1}),\cr}$$ so $$\eqalign{ s_n(1-x)&=k(1-x^{n+1})\cr s_n&=k{1-x^{n+1}\over 1-x}.\cr}$$ If $|x|< 1$, $\ds\lim_{n\to\infty}x^n=0$, so $$ \lim_{n\to\infty}s_n=\lim_{n\to\infty}k{1-x^{n+1}\over 1-x}= k{1\over 1-x}.$$ Thus, when $|x|< 1$ the geometric series converges to $k/(1-x)$. When, for example, $k=1$ and $x=1/2$: $$ s_n={1-(1/2)^{n+1}\over 1-1/2}={2^{n+1}-1\over 2^n}=2-{1\over 2^n} \quad\hbox{and}\quad \sum_{n=0}^\infty {1\over 2^n} = {1\over 1-1/2} = 2.$$ We began the chapter with the series $$\sum_{n=1}^\infty {1\over 2^n},$$ namely, the geometric series without the first term $1$. Each partial sum of this series is 1 less than the corresponding partial sum for the geometric series, so of course the limit is also one less than the value of the geometric series, that is, $$\sum_{n=1}^\infty {1\over 2^n}=1.$$
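The closed form for $s_n$ is easy to check numerically against the direct sum; a minimal sketch (the function name is ours, for illustration):

```python
def geometric_partial_sum(k, x, n):
    """Closed form s_n = k * (1 - x**(n + 1)) / (1 - x), valid for x != 1."""
    return k * (1 - x ** (n + 1)) / (1 - x)

# Compare with the direct sum k + k*x + ... + k*x**n for k = 1, x = 1/2
direct = sum(0.5 ** i for i in range(21))
closed = geometric_partial_sum(1, 0.5, 20)
```

For $|x| < 1$ the closed form makes the limit $k/(1-x)$ visible at a glance: the $x^{n+1}$ term dies off as $n$ grows.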
It is not hard to see that the following theorem follows from theorem 13.1.2.
Theorem: Suppose that $\sum a_n$ and $\sum b_n$ are convergent series and $c$ is a constant. Then
1. $\ds\sum ca_n$ is convergent and $\ds\sum ca_n=c\sum a_n$;
2. $\ds\sum (a_n+b_n)$ is convergent and $\ds\sum (a_n+b_n)=\sum a_n+\sum b_n$.
The two parts of this theorem are subtly different. Suppose that $\sum a_n$ diverges; does $\sum ca_n$ also diverge if $c$ is non-zero? Yes: suppose instead that $\sum ca_n$ converges; then by the theorem, $\sum (1/c)ca_n$ converges, but this is the same as $\sum a_n$, which by assumption diverges. Hence $\sum ca_n$ also diverges. Note that we are applying the theorem with $a_n$ replaced by $ca_n$ and $c$ replaced by $(1/c)$.
Now suppose that $\sum a_n$ and $\sum b_n$ diverge; does $\sum (a_n+b_n)$ also diverge? Now the answer is no: let $a_n=1$ and $b_n=-1$, so certainly $\sum a_n$ and $\sum b_n$ diverge. But $\sum (a_n+b_n)=\sum(1+(-1))=\sum 0 = 0$. Of course, sometimes $\sum (a_n+b_n)$ will also diverge; for example, if $a_n=b_n=1$, then $\sum (a_n+b_n)=\sum(1+1)=\sum 2$ diverges.
In general, the sequence of partial sums $\ds s_n$ is harder to understand and analyze than the sequence of terms $\ds a_n$, and it is difficult to determine whether series converge and if so to what. Sometimes things are relatively simple, starting with the following.
Theorem: If $\sum a_n$ converges then $\ds\lim_{n\to\infty}a_n=0$.
Proof. Since $\sum a_n$ converges, $\ds\lim_{n\to\infty}s_n=L$ and $\ds\lim_{n\to\infty}s_{n-1}=L$, because this really says the same thing but "renumbers" the terms. By theorem 13.1.2, $$ \lim_{n\to\infty} (s_{n}-s_{n-1})= \lim_{n\to\infty} s_{n}-\lim_{n\to\infty}s_{n-1}=L-L=0. $$ But $$ s_{n}-s_{n-1}=(a_0+a_1+a_2+\cdots+a_n)-(a_0+a_1+a_2+\cdots+a_{n-1}) =a_n, $$ so as desired $\ds\lim_{n\to\infty}a_n=0$.
This theorem presents an easy divergence test: if given a series $\sum a_n$ the limit $\ds\lim_{n\to\infty}a_n$ does not exist or has a value other than zero, the series diverges. Note well that the converse is not true: if $\ds\lim_{n\to\infty}a_n=0$ then the series does not necessarily converge.
Example 13.2.4 Show that $\ds\sum_{n=1}^\infty {n\over n+1}$ diverges.
We compute the limit: $$\lim _{n\to\infty}{n\over n+1}=1\not=0.$$ Looking at the first few terms perhaps makes it clear that the series has no chance of converging: $${1\over2}+{2\over3}+{3\over4}+{4\over5}+\cdots$$ will just get larger and larger; indeed, after a bit longer the series starts to look very much like $\cdots+1+1+1+1+\cdots$, and of course if we add up enough 1's we can make the sum as large as we desire.
Example 13.2.5 Show that $\ds\sum_{n=1}^\infty {1\over n}$ diverges.
Here the theorem does not apply: $\ds\lim _{n\to\infty} 1/n=0$, so it looks like perhaps the series converges. Indeed, if you have the fortitude (or the software) to add up the first 1000 terms you will find that $$\sum_{n=1}^{1000} {1\over n}\approx 7.49,$$ so it might be reasonable to speculate that the series converges to something in the neighborhood of 10. But in fact the partial sums do go to infinity; they just get big very, very slowly. Consider the following:
$\ds 1+{1\over 2}+{1\over 3}+{1\over 4} > 1+{1\over 2}+{1\over 4}+{1\over 4} = 1+{1\over 2}+{1\over 2}$
$\ds 1+{1\over 2}+{1\over 3}+{1\over 4}+ {1\over 5}+{1\over 6}+{1\over 7}+{1\over 8} > 1+{1\over 2}+{1\over 4}+{1\over 4}+{1\over 8}+{1\over 8}+{1\over 8}+{1\over 8} = 1+{1\over 2}+{1\over 2}+{1\over 2}$
$\ds 1+{1\over 2}+{1\over 3}+\cdots+{1\over16}> 1+{1\over 2}+{1\over 4}+{1\over 4}+{1\over 8}+\cdots+{1\over 8}+{1\over16}+\cdots +{1\over16} =1+{1\over 2}+{1\over 2}+{1\over 2}+{1\over 2}$
and so on. By swallowing up more and more terms we can always manage to add at least another $1/2$ to the sum, and by adding enough of these we can make the partial sums as big as we like. In fact, it's not hard to see from this pattern that $$1+{1\over 2}+{1\over 3}+\cdots+{1\over 2^n} > 1+{n\over 2},$$ so to make sure the sum is over 100, for example, we'd add up terms until we get to around $\ds 1/2^{198}$, that is, about $\ds 4\cdot 10^{59}$ terms. This series, $\sum (1/n)$, is called the
harmonic series.
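Both numerical claims above — that the first 1000 terms sum to about 7.49, and the doubling bound $1+\frac12+\cdots+\frac1{2^n} > 1+\frac n2$ (strict for $n\ge 2$) — are easy to check directly:

```python
def harmonic(N):
    # partial sum H_N = 1 + 1/2 + ... + 1/N
    return sum(1.0 / n for n in range(1, N + 1))

H1000 = harmonic(1000)  # about 7.4855, matching the text's "approximately 7.49"

for n in range(2, 15):
    # the doubling bound from the grouping argument above
    assert harmonic(2**n) > 1 + n / 2
```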
Exercises 13.2
Ex 13.2.1 Explain why $\ds\sum_{n=1}^\infty {n^2\over 2n^2+1}$ diverges. (answer)
Ex 13.2.2 Explain why $\ds\sum_{n=1}^\infty {5\over 2^{1/n}+14}$ diverges. (answer)
Ex 13.2.3 Explain why $\ds\sum_{n=1}^\infty {3\over n}$ diverges. (answer)
Ex 13.2.4 Compute $\ds\sum_{n=0}^\infty {4\over (-3)^n}- {3\over 3^n}$. (answer)
Ex 13.2.5 Compute $\ds\sum_{n=0}^\infty {3\over 2^n}+ {4\over 5^n}$. (answer)
Ex 13.2.6 Compute $\ds\sum_{n=0}^\infty {4^{n+1}\over 5^n}$. (answer)
Ex 13.2.7 Compute $\ds\sum_{n=0}^\infty {3^{n+1}\over 7^{n+1}}$. (answer)
Ex 13.2.8 Compute $\ds\sum_{n=1}^\infty \left({3\over 5}\right)^n$. (answer)
Ex 13.2.9 Compute $\ds\sum_{n=1}^\infty {3^n\over 5^{n+1}}$. (answer)
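The computational exercises above all reduce to the geometric-series formula $\sum_{n=0}^\infty ar^n = a/(1-r)$ for $|r|<1$. A quick numerical sanity check of that formula, illustrated on the series in Ex 13.2.5 (where it gives $3/(1-\frac12)+4/(1-\frac15)=6+5=11$):

```python
def geom_partial(a, r, N):
    # partial sum of sum_{n=0}^{N-1} a * r^n
    return sum(a * r**n for n in range(N))

# Ex 13.2.5: 3/(1 - 1/2) + 4/(1 - 1/5) = 6 + 5 = 11
s = geom_partial(3, 0.5, 60) + geom_partial(4, 0.2, 60)
assert abs(s - 11.0) < 1e-9
```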
|
J. D. Hamkins and J. Reitz, “The set-theoretic universe $V$ is not necessarily a class-forcing extension of HOD,” ArXiv e-prints, 2017. (manuscript under review)
@ARTICLE{HamkinsReitz:The-set-theoretic-universe-is-not-necessarily-a-forcing-extension-of-HOD, author = {Joel David Hamkins and Jonas Reitz}, title = {The set-theoretic universe {$V$} is not necessarily a class-forcing extension of {HOD}}, journal = {ArXiv e-prints}, year = {2017}, volume = {}, number = {}, pages = {}, month = {September}, note = {manuscript under review}, abstract = {}, keywords = {under-review}, source = {}, doi = {}, eprint = {1709.06062}, archivePrefix = {arXiv}, primaryClass = {math.LO}, url = {http://jdh.hamkins.org/the-universe-need-not-be-a-class-forcing-extension-of-hod}, }
Abstract. In light of the celebrated theorem of Vopěnka, proving in ZFC that every set is generic over $\newcommand\HOD{\text{HOD}}\HOD$, it is natural to inquire whether the set-theoretic universe $V$ must be a class-forcing extension of $\HOD$ by some possibly proper-class forcing notion in $\HOD$. We show, negatively, that if ZFC is consistent, then there is a model of ZFC that is not a class-forcing extension of its $\HOD$ for any class forcing notion definable in $\HOD$ and with definable forcing relations there (allowing parameters). Meanwhile, S. Friedman (2012) showed, positively, that if one augments $\HOD$ with a certain ZFC-amenable class $A$, definable in $V$, then the set-theoretic universe $V$ is a class-forcing extension of the expanded structure $\langle\HOD,\in,A\rangle$. Our result shows that this augmentation process can be necessary. The same example shows that $V$ is not necessarily a class-forcing extension of the mantle, and the method provides a counterexample to the intermediate model property, namely, a class-forcing extension $V\subseteq V[G]$ by a certain definable tame forcing and a transitive intermediate inner model $V\subseteq W\subseteq V[G]$ with $W\models\text{ZFC}$, such that $W$ is not a class-forcing extension of $V$ by any class forcing notion with definable forcing relations in $V$. This improves upon a previous example of Friedman (1999) by omitting the need for $0^\sharp$.
In 1972, Vopěnka proved the following celebrated result.
Theorem. (Vopěnka) If $V=L[A]$ where $A$ is a set of ordinals, then $V$ is a forcing extension of the inner model $\HOD$.
The result is now standard, appearing in Jech (Set Theory 2003, p. 249) and elsewhere, and the usual proof establishes a stronger result, stated in ZFC simply as the assertion: every set is generic over $\HOD$. In other words, for every set $a$ there is a forcing notion $\mathbb{B}\in\HOD$ and a $\HOD$-generic filter $G\subseteq\mathbb{B}$ for which $a\in\HOD[G]\subseteq V$. The full set-theoretic universe $V$ is therefore the union of all these various set-forcing generic extensions $\HOD[G]$.
It is natural to wonder whether these various forcing extensions $\HOD[G]$ can be unified or amalgamated to realize $V$ as a single class-forcing extension of $\HOD$ by a possibly proper class forcing notion in $\HOD$. We expect that a very high proportion of set theorists and set-theory graduate students, upon first learning of Vopěnka’s theorem, immediately ask this question.
Main Question. Must the set-theoretic universe $V$ be a class-forcing extension of $\HOD$?
We intend the question to be asking more specifically whether the universe $V$ arises as a bona-fide class-forcing extension of $\HOD$, in the sense that there is a class forcing notion $\mathbb{P}$, possibly a proper class, which is definable in $\HOD$ and which has definable forcing relation $p\Vdash\varphi(\tau)$ there for any desired first-order formula $\varphi$, such that $V$ arises as a forcing extension $V=\HOD[G]$ for some $\HOD$-generic filter $G\subseteq\mathbb{P}$, not necessarily definable.
In this article, we shall answer the question negatively, by providing a model of ZFC that cannot be realized as such a class-forcing extension of its $\HOD$.
Main Theorem. If ZFC is consistent, then there is a model of ZFC which is not a forcing extension of its $\HOD$ by any class forcing notion definable in that $\HOD$ and having a definable forcing relation there.
Throughout this article, when we say that a class is definable, we mean that it is definable in the first-order language of set theory allowing set parameters.
The main theorem should be placed in contrast to the following result of Sy Friedman.
Theorem. (Friedman 2012) There is a definable class $A$, which is strongly amenable to $\HOD$, such that the set-theoretic universe $V$ is a generic extension of $\langle \HOD,\in,A\rangle$.
This is a positive answer to the main question, if one is willing to augment $\HOD$ with a class $A$ that may not be definable in $\HOD$. Our main theorem shows that in general, this kind of augmentation process is necessary.
It is natural to ask a variant of the main question in the context of set-theoretic geology.
Question. Must the set-theoretic universe $V$ be a class-forcing extension of its mantle?
The mantle is the intersection of all set-forcing grounds, and so the universe is in a sense close to the mantle; one might therefore hope that it is close enough to be realized as a class-forcing extension of it. Nevertheless, the answer is negative.
Theorem. If ZFC is consistent, then there is a model of ZFC that does not arise as a class-forcing extension of its mantle $M$ by any class forcing notion with definable forcing relations in $M$.
We also use our results to provide some counterexamples to the intermediate-model property for forcing. In the case of set forcing, it is well known that every transitive model $W$ of ZFC set theory that is intermediate, $V\subseteq W\subseteq V[G]$, between a ground model $V$ and a forcing extension $V[G]$, arises itself as a forcing extension $W=V[G_0]$.
In the case of class forcing, however, this can fail.
Theorem. If ZFC is consistent, then there are models of ZFC set theory $V\subseteq W\subseteq V[G]$, where $V[G]$ is a class-forcing extension of $V$ and $W$ is a transitive inner model of $V[G]$, but $W$ is not a forcing extension of $V$ by any class forcing notion with definable forcing relations in $V$.

Theorem. If ZFC + Ord is Mahlo is consistent, then one can form such a counterexample to the class-forcing intermediate model property $V\subseteq W\subseteq V[G]$, where $G\subset\mathbb{B}$ is $V$-generic for an Ord-c.c. tame definable complete class Boolean algebra $\mathbb{B}$, but nevertheless $W$ does not arise by class forcing over $V$ by any definable forcing notion with a definable forcing relation.
For more complete details, please go to the paper (click through to the arXiv for a pdf).
|
What is the simplest way to describe the difference between these two concepts, which often go by the same name?
The Wilsonian effective action is an action with a given scale, where all short-wavelength fluctuations (up to the scale) are integrated out. Thus the theory describes the effective dynamics of the long-wavelength physics, but it is still a quantum theory and you still have a path integral to perform. So separating the fields into long and short wavelength parts $\phi = \phi_L + \phi_S$, the partition function will take the form (N.B. I'm using the euclidean path integral)
$$ Z = \int\mathcal D\phi\, e^{-S[\phi]} =\int\mathcal D\phi_{L}\left(\int \mathcal D\phi_{S}\,e^{-S[\phi_L+\phi_S]}\right)=\int\mathcal D\phi_{L}\,e^{-S_{eff}[\phi_L]}$$ where $S_{eff}[\phi_L]$ is the Wilsonian effective action.
The 1PI effective action doesn't have a length scale cut-off, and is effectively looking like a classical action (but all quantum contribution are taken into account). Putting in a current term $J\cdot \phi$ we can define $Z[J] = e^{-W[J]}$ where $W[J]$ is the generating functional for connected correlation functions (analogous to the free energy in statistical physics). Define the "classical" field as $$\Phi[J] = \langle 0|\hat{\phi}|0\rangle_J/\langle 0| 0 \rangle_J = \frac 1{Z[J]}\frac{\delta}{\delta J}Z[J] = \frac{\delta}{\delta J}\left(-W[J]\right).$$
The 1PI effective action is given by a Legendre transformation $\Gamma[\Phi] = W[J] + J\cdot\Phi$ and thus the partition function takes the form
$$Z = \int\mathcal D\phi\, e^{-S[\phi] + J\cdot \phi} = e^{-\Gamma[\Phi] + J\cdot \Phi}.$$ As you can see, there is no path integral left to do.
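As an illustration (not part of the original answer), the whole construction can be mimicked in a zero-dimensional toy model, where the "path integral" collapses to an ordinary integral over a single variable; the quartic action and the parameter values below are assumptions made for the sketch:

```python
import numpy as np

# Toy 0-dimensional analogue: Z[J] = integral dphi exp(-S(phi) + J*phi),
# with a hypothetical quartic action S = m2/2 phi^2 + lam/24 phi^4.
m2, lam = 1.0, 0.5
phi = np.linspace(-8, 8, 40001)
dphi = phi[1] - phi[0]
S = 0.5 * m2 * phi**2 + (lam / 24) * phi**4

def W(J):
    # W[J] = -log Z[J], the generating functional of connected functions
    return -np.log(np.sum(np.exp(-S + J * phi)) * dphi)

J = 0.3
eps = 1e-4
Phi = -(W(J + eps) - W(J - eps)) / (2 * eps)  # classical field Phi = -dW/dJ
Gamma = W(J) + J * Phi                        # 1PI action via Legendre transform
```

Note that computing `Gamma` requires no further integration, in line with the statement above: all quantum contributions are already contained in $W[J]$.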
|
Production of light nuclei and anti-nuclei in $pp$ and Pb-Pb collisions at energies available at the CERN Large Hadron Collider
(American Physical Society, 2016-02)
The production of (anti-)deuteron and (anti-)$^{3}$He nuclei in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV has been studied using the ALICE detector at the LHC. The spectra exhibit a significant hardening with ...
Forward-central two-particle correlations in p-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 TeV
(Elsevier, 2016-02)
Two-particle angular correlations between trigger particles in the forward pseudorapidity range ($2.5 < |\eta| < 4.0$) and associated particles in the central range ($|\eta| < 1.0$) are measured with the ALICE detector in ...
Measurement of D-meson production versus multiplicity in p-Pb collisions at $\sqrt{s_{\rm NN}}=5.02$ TeV
(Springer, 2016-08)
The measurement of prompt D-meson production as a function of multiplicity in p–Pb collisions at $\sqrt{s_{\rm NN}}=5.02$ TeV with the ALICE detector at the LHC is reported. D$^0$, D$^+$ and D$^{*+}$ mesons are reconstructed ...
Measurement of electrons from heavy-flavour hadron decays in p–Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV
(Elsevier, 2016-03)
The production of electrons from heavy-flavour hadron decays was measured as a function of transverse momentum ($p_{\rm T}$) in minimum-bias p–Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV with ALICE at the LHC for $0.5 ...
Direct photon production in Pb-Pb collisions at $\sqrt{s_{NN}}$=2.76 TeV
(Elsevier, 2016-03)
Direct photon production at mid-rapidity in Pb-Pb collisions at $\sqrt{s_{\rm NN}} = 2.76$ TeV was studied in the transverse momentum range $0.9 < p_{\rm T} < 14$ GeV/$c$. Photons were detected via conversions in the ALICE ...
Multi-strange baryon production in p-Pb collisions at $\sqrt{s_\mathbf{NN}}=5.02$ TeV
(Elsevier, 2016-07)
The multi-strange baryon yields in Pb--Pb collisions have been shown to exhibit an enhancement relative to pp reactions. In this work, $\Xi$ and $\Omega$ production rates have been measured with the ALICE experiment as a ...
$^{3}_{\Lambda}\mathrm H$ and $^{3}_{\bar{\Lambda}} \overline{\mathrm H}$ production in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV
(Elsevier, 2016-03)
The production of the hypertriton nuclei $^{3}_{\Lambda}\mathrm H$ and $^{3}_{\bar{\Lambda}} \overline{\mathrm H}$ has been measured for the first time in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV with the ALICE ...
Multiplicity dependence of charged pion, kaon, and (anti)proton production at large transverse momentum in p-Pb collisions at $\sqrt{s_{\rm NN}}$= 5.02 TeV
(Elsevier, 2016-09)
The production of charged pions, kaons and (anti)protons has been measured at mid-rapidity ($-0.5 < y < 0$) in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV using the ALICE detector at the LHC. Exploiting particle ...
Jet-like correlations with neutral pion triggers in pp and central Pb–Pb collisions at 2.76 TeV
(Elsevier, 2016-12)
We present measurements of two-particle correlations with neutral pion trigger particles of transverse momenta $8 < p_{\mathrm{T}}^{\rm trig} < 16 \mathrm{GeV}/c$ and associated charged particles of $0.5 < p_{\mathrm{T}}^{\rm ...
Centrality dependence of charged jet production in p-Pb collisions at $\sqrt{s_\mathrm{NN}}$ = 5.02 TeV
(Springer, 2016-05)
Measurements of charged jet production as a function of centrality are presented for p-Pb collisions recorded at $\sqrt{s_{\rm NN}} = 5.02$ TeV with the ALICE detector. Centrality classes are determined via the energy ...
|
How can I prove that the Cartesian product of two countable sets is also countable?
In your answer you use Cantor's pairing function. It is an important function indeed. However using Cantor-Bernstein's theorem we only need to find an injection from $\mathbb N\times\mathbb N$ into $\mathbb N$.
A useful example is: $$f(m,n) = 2^m\cdot 3^n$$
If $f(m,n)=f(l,k)$ then by the fundamental theorem of arithmetic we have that $m=l$ and $n=k$.
We now can find an injection $g\colon\mathbb N\to\mathbb N\times\mathbb N$, for example $g(n)=(0,n)$.
Now Cantor-Bernstein's theorem tells us that if $f\colon A\to B$ and $g\colon B\to A$ are two injective functions, then there is a bijection from $A$ into $B$.
From this to $\mathbb N^k$ being countable, you can either go with induction, as you suggested, or map $(m_1,\ldots,m_k)$ to $p_1^{m_1}\cdot\ldots p_k^{m_k}$, where $p_i$ is the $i$-th prime number.
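As a quick empirical companion to the prime-power injections above, one can verify on a small grid that $f(m,n)=2^m\cdot 3^n$ never collides (this checks only finitely many cases, of course; the real argument is unique factorization):

```python
def f(m, n):
    # candidate injection N x N -> N from the answer above
    return 2**m * 3**n

seen = {}
for m in range(60):
    for n in range(60):
        v = f(m, n)
        assert v not in seen  # uniqueness follows from unique factorization
        seen[v] = (m, n)
```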
So I know $\mathbb{N} \times \mathbb{N} \to \mathbb{N}$ via (provided from class proof): $$f(x,y) = \frac{(x + y - 2)(x + y - 1)}{2} + x$$ Then it would mean that two countable sets, $A$ and $B$, can be set up as $f:\mathbb{N} \to A $ and $g: \mathbb{N} \to B$. This points to: $$f \times g : \mathbb{N} \times \mathbb{N} \to A \times B$$ There is now a surjection from $\mathbb{N} \times \mathbb{N}$ onto $A \times B$ $\implies$ $A \times B$ is also countable. Then induction can be used on the number of sets in the collection.
We have a general result.
We use the notation $[n] = \{0,1,2,3,\cdots ,n-1 \}$.
Proposition 1: Let $A$ be a set and let ${(G_k)}_{k \in \mathbb N}$ be a countable family of sets satisfying:
$\tag 1 \text{Each } G_k \text{ is a nonempty finite subset of } A \text{ with cardinality } \alpha_k$
$\tag 2 \text{The family } G_k \text{ of sets is a partition of } A$
For each $k \ge 0$ let there be given a bijective mapping
$\tag 3 \tau_k: [\alpha_k] \to G_k$
Then there exists a bijection $f: \mathbb N \to A$.
Proof For each $m \in \mathbb N$ let $K_m = \sum_{i=0}^{m-1} \alpha_i$; note that $K_0 = 0$. For each $n \ge 0$, define $\lambda(n) = \text{max(} \{ m \, | \, K_m \le n \} \text{)}$. Since $0 \le n - K_{\lambda(n)} \le \alpha_{\lambda(n)} - 1$, we can define a function
$\quad f(n) = \tau_{\lambda(n)}(n - K_{\lambda(n)})$
It can be shown that this function $f$ is a bijective correspondence between $\mathbb N$ and $A$. $\quad \blacksquare$
Note that we defined an explicit function based on a sequence of given functions, so we are not using any form of the Axiom of Choice. This is a subtle point: we know the functions exist, but an axiom is necessary if we want to pull them together out of 'thin air'.
Exercise: Define a bijective correspondence between $\mathbb N$ and $\mathbb N \times \mathbb N$ using the Cantor pairing function, as described in Wikipedia as
Although using the summation notation might be using some implied induction/recursion, Proposition 1 describes a function directly. So the function doesn't have to carry an 'internal state baggage' as it executes. In practice, the mechanism will be much easier to program/calculate than it might appear.
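For the exercise above, the standard (0-based) Cantor pairing function $\pi(x,y)=\frac{(x+y)(x+y+1)}{2}+y$ and its inverse can be sketched directly; the convention follows the Wikipedia description the exercise refers to:

```python
import math

def pair(x, y):
    # Cantor pairing: a bijection N x N -> N
    return (x + y) * (x + y + 1) // 2 + y

def unpair(z):
    w = (math.isqrt(8 * z + 1) - 1) // 2  # index of the diagonal containing z
    t = w * (w + 1) // 2                  # first value on that diagonal
    y = z - t
    return w - y, y

# round-trip check on a small grid
assert all(unpair(pair(x, y)) == (x, y) for x in range(50) for y in range(50))
```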
Following is a Python program that implements the bijection.
L = 1
while True:
    for y in range(0, L):
        x = L - 1 - y
        print((x, y), '', end='')
        if x == 0:
            break
    if L == 10:
        print('...', end='')
        print("\nProgram stopping after printing", L, 'levels.')
        break
    else:
        L = L + 1
OUTPUT:
(0, 0) (1, 0) (0, 1) (2, 0) (1, 1) (0, 2) (3, 0) (2, 1) (1, 2) (0, 3) (4, 0) (3, 1) (2, 2) (1, 3) (0, 4) (5, 0) (4, 1) (3, 2) (2, 3) (1, 4) (0, 5) (6, 0) (5, 1) (4, 2) (3, 3) (2, 4) (1, 5) (0, 6) (7, 0) (6, 1) (5, 2) (4, 3) (3, 4) (2, 5) (1, 6) (0, 7) (8, 0) (7, 1) (6, 2) (5, 3) (4, 4) (3, 5) (2, 6) (1, 7) (0, 8) (9, 0) (8, 1) (7, 2) (6, 3) (5, 4) (4, 5) (3, 6) (2, 7) (1, 8) (0, 9) ...
Program stopping after printing 10 levels.
|
Set Difference with Empty Set is Self
Theorem

$S \setminus \O = S$

Proof
From Set Difference is Subset:
$S \setminus \O \subseteq S$
From the definition of the empty set:
$\forall x \in S: x \notin \O$

Let $x \in S$.
Thus:
$x \in S$
$\leadsto x \in S \land x \notin \O$ (Rule of Conjunction)
$\leadsto x \in S \setminus \O$ (Definition of Set Difference)
$\leadsto S \subseteq S \setminus \O$ (Definition of Subset)

Thus we have:

$S \setminus \O \subseteq S$
and:
$S \subseteq S \setminus \O$
So by definition of set equality:
$S \setminus \O = S$
$\blacksquare$
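As a trivial empirical companion, Python's set algebra (where set difference is the `-` operator and the empty set is `set()`) agrees with the theorem:

```python
# Empirical check of S \ O = S in Python set algebra
S = {1, 2, 3, 'a'}
assert S - set() == S                          # the theorem itself
assert (S - set()) <= S and S <= (S - set())   # both inclusions from the proof
```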
Also see

Sources

1964: W.E. Deskins: Abstract Algebra ... (previous) ... (next): Exercise $1.1: \ 8 \ \text{(b)}$
1971: Allan Clark: Elements of Abstract Algebra ... (previous) ... (next): Chapter $1$: The Notation and Terminology of Set Theory: $\S 8 \ \text{(d)}$
1993: Keith Devlin: The Joy of Sets: Fundamentals of Contemporary Set Theory (2nd ed.) ... (previous) ... (next): $\S 1$: Naive Set Theory: $\S 1.2$: Operations on Sets: Exercise $1.2.5 \ \text{(i)}$
2012: M. Ben-Ari: Mathematical Logic for Computer Science (3rd ed.) ... (previous) ... (next): Appendix $\text{A}.2$: Theorem $\text{A}.11$
|
If a random variable is discrete, and we are interested in its quantile value, how to define a proper back testing procedure?
For example, the
underlying variable with a discrete value is
$$ d(\mbox{account}) = \mbox{PaymentDate} - \mbox{BillingDate} $$
the
observing variable:
$$ y = \mbox{percentile}(d, 95\%, \mbox{month}) $$
or $y$ is the 95th percentile value of $d$, for a particular month. e.g. 95% of credit cards are paid within 20 days from the billing, in 2013 Jan.
How could I define a back-testing approach?
Background
Defining an estimation-backtesting method for a continuous random variable is easier. In my group we currently use the following non-parametric approach:
underlying variable:
$$r(\mbox{month}) = \mbox{monthly credit-card account default rate}$$
For example, 2013 Feb default rate is 1.1%, 2013 Jan is 1.2%...
observing variable:
$$ x = \mbox{percentile}(r, 95\%) $$
$x$ is the 95% percentile value of $r$. Here $x$ definition is similar to VaR.
point forecast:
$$ \hat x(\mbox{month}) = \mbox{percentile}(r(\mbox{month}), N, 95\%) $$
$\hat x$ is the 95% percentile value of $r$, based on $N$ historic observations.
For example, take $N=36$, retrieve back 36 months, the 95% percentile value of default rate $r$ is 2.3%. then $\hat x = 2.3\%$.
point forecast Exception:
$$ \mbox{PFException}(t) = \begin{cases} 0 & r(\mbox{month}) \leq \hat x(\mbox{month}) \\ 1 & \text{otherwise} \end{cases} $$
If the forecast is correct, 95% of the time there should be no exception, while 5% of the time an exception happens.
backtesting:
There are POF test, checking the rate of the exception; and independent test, checking the correlation of exceptions.
For example, the POF test proposed by Kupiec (1995) checks the exceptions observed over the previous 36 months' point forecasts: 0–4 exceptions are green light, 5–7 exceptions are yellow light, and 8 or more exceptions are red light.
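A minimal sketch of this POF-style check, assuming the standard likelihood-ratio form of Kupiec's statistic (under the null of a correct 95% forecast it is approximately $\chi^2_1$, which is roughly where traffic-light thresholds come from); the zone boundaries used in the assertions are illustrative:

```python
import math

# Kupiec POF likelihood ratio for x exceptions in T forecasts at
# nominal exception rate p: LR = -2 ln[ L(p) / L(x/T) ].
def kupiec_lr(x, T, p=0.05):
    if x == 0:
        return -2 * T * math.log(1 - p)  # limit of the formula at x = 0
    return -2 * ((T - x) * math.log(1 - p) + x * math.log(p)
                 - (T - x) * math.log(1 - x / T) - x * math.log(x / T))

# 2 exceptions in 36 forecasts is consistent with the 5% level; 10 is not
assert kupiec_lr(2, 36) < 3.84   # 95% critical value of chi-square(1)
assert kupiec_lr(10, 36) > 3.84
```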
Christoffersen (1998) proposed an independent test.
Kupiec, P. (1995). Techniques for verifying the accuracy of risk management models. Journal of Derivatives 3, 73–84.
Christoffersen, P. (1998). Evaluating interval forecasts. International Economic Review 39, 841–62.
|
There are already some good answers, but I still feel like adding yet another explanation, because I consider this topic extremely important for the understanding of many aspects of digital signal processing.
First of all it is important to understand that the DFT does not 'assume' periodicity of the signal to be transformed. The DFT is simply applied to a finite signal of length $N$ and the corresponding DFT coefficients are defined by
$$X[k]=\sum_{n=0}^{N-1}x[n]e^{-j2\pi nk/N},\quad k=0,1,\ldots,N-1\tag{1}$$
From (1) it is obvious that only samples of $x[n]$ in the interval $[0,N-1]$ are considered, so no periodicity is assumed. On the other hand, the coefficients $X[k]$ can be interpreted as Fourier coefficients of the periodic continuation of the signal $x[n]$. This can be seen from the inverse transform
$$x[n]=\sum_{k=0}^{N-1}X[k]e^{j2\pi nk/N}\tag{2}$$
which computes $x[n]$ correctly in the interval $[0,N-1]$, but it also computes its periodic continuation outside this interval because the right-hand side of (2) is periodic with period $N$. This property is inherent in the definition of the DFT, but it need not bother us because normally we're only interested in the interval $[0,N-1]$.
Considering the DTFT of $x[n]$
$$X(\omega)=\sum_{n=-\infty}^{\infty}x[n]e^{-jn\omega}\tag{3}$$
we can see, by comparing (3) with (1), that if $x[n]$ is a finite sequence in the interval $[0,N-1]$, the DFT coefficients $X[k]$ are samples of the DTFT $X(\omega)$:
$$X[k]=X(2\pi k/N)\tag{4}$$
So one use of the DFT (but certainly not the only one) is to compute samples of the DTFT. But this only works if the signal to be analyzed is of finite length. Usually this finite-length signal is constructed by windowing a longer signal, and it is this windowing which causes spectral leakage.
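Relation (4) is easy to verify numerically: the DFT of a finite sequence coincides with its DTFT sampled at $\omega = 2\pi k/N$. A sketch in NumPy (the random test signal is just an assumption for the check):

```python
import numpy as np

# Check of (4): DFT coefficients are samples of the DTFT at 2*pi*k/N.
rng = np.random.default_rng(0)
N = 8
x = rng.standard_normal(N)

X_dft = np.fft.fft(x)  # DFT as defined in (1)

def dtft(omega):
    # DTFT (3) of the finite sequence x[n], supported on [0, N-1]
    n = np.arange(N)
    return np.sum(x * np.exp(-1j * omega * n))

X_samp = np.array([dtft(2 * np.pi * k / N) for k in range(N)])
assert np.allclose(X_dft, X_samp)
```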
As a last remark, note that the DTFT of the periodic continuation $\tilde{x}[n]$ of the finite sequence $x[n]$ can be expressed in terms of the DFT coefficients of $x[n]$:
$$\tilde{x}[n]=\sum_{k=-\infty}^{\infty}x[n-kN]\tag{5}$$$$\tilde{X}(\omega)=\frac{2\pi}{N}\sum_{k=-\infty}^{\infty}X[k]\delta(\omega-2\pi k/N)\tag{6}$$
EDIT: The fact that $\tilde{x}[n]$ and $\tilde{X}(\omega)$ given above are a DTFT transform pair can be shown as follows. First note that the DTFT of a discrete time impulse comb is a Dirac comb:
$$\sum_{k=-\infty}^{\infty}\delta[n-kN]\Longleftrightarrow\frac{2\pi}{N}\sum_{k=-\infty}^{\infty}\delta(\omega-2\pi k/N)\tag{7}$$
The sequence $\tilde{x}[n]$ can be written as the convolution of $x[n]$ with an impulse comb:
$$\tilde{x}[n]=x[n]\star \sum_{k=-\infty}^{\infty}\delta[n-kN]\tag{8}$$
Since convolution corresponds to multiplication in the DTFT domain, the DTFT $\tilde{X}(\omega)$ of $\tilde{x}[n]$ is given by the multiplication of $X(\omega)$ with a Dirac comb:
$$\begin{align}\tilde{X}(\omega)&=X(\omega)\cdot\frac{2\pi}{N}\sum_{k=-\infty}^{\infty}\delta(\omega-2\pi k/N)\\&=\frac{2\pi}{N}\sum_{k=-\infty}^{\infty}X(2\pi k/N)\delta(\omega-2\pi k/N)\end{align}\tag{9}$$
Combining $(9)$ with $(4)$ establishes the result $(6)$.
|
I am currently working with the Pesaran & Timmermann test, 2009 version. I could not find any R package that contains a function to calculate it (rugarch has the 1992 version in DACTest). I recently posted a thread on Cross Validated with a similar question, where a kind user, mlofton, advised me to look into this list. Since cross-posting is not acceptable practice, after over a week of hard research I managed to develop R code for newPT_test. It seems to be correct, but since I had lots of doubts, maybe someone can clarify my concerns.
Here is the formula I used (Pönkä 2017):
$$PT=(T-1)(S^{-1}_{yy,w}S_{y\hat{y},w}S^{-1}_{\hat{y}\hat{y},w}S_{\hat{y}y,w}) \sim {\chi}^2_1, $$
$$S_{yy,w}=(T-1)^{-1}Y'M_wY, $$ $$S_{\hat{y}\hat{y},w}=(T-1)^{-1}\hat{Y}'M_w\hat{Y}, $$ $$S_{\hat{y}y,w}=(T-1)^{-1}\hat{Y}'M_wY,$$ $$S_{y\hat{y},w}=(T-1)^{-1}Y'M_w\hat{Y}, $$ $$M_w=I_{T-1}-W(W'W)^{-1}W', $$ $$W=(\tau_{T-1},Y_{-1},\hat{Y}_{-1}),$$
"...and $Y=(y_2,...,y_T)'$", $\hat{Y}=(\hat{y}_2,...,\hat{y}_T)'$, $Y_{-1}=(y_1,...,y_{T-1})'$, $\hat{Y}_{-1}=(\hat{y}_1,...,\hat{y}_{T-1})'$ "and $\tau_{T-1}$ is a $(T-1)\times1$ vector of ones..." From another source I learned that $I_{T-1}$ is the identity matrix.
My main issue was notation. I understand that $y_t$ consists of ones and zeroes depending on whether the change between the actual value and its last observation was positive ($1$) or negative ($0$)? Same for the forecast $\hat{y}_t$. I am not sure how to understand the vectors $Y=(y_2,...,y_T)'$ and $Y'$ in the $S$ elements. I know that $'$ at the end of $(y_2,...,y_T)'$ means it is a column vector, so $Y'$ indicates it is a row vector? Also, $Y$ starts from $y_2$, which means I remove the first observation from the binary series, while in $Y_{-1}=(y_1,...,y_{T-1})$ I remove the last? That makes $W=(\tau_{T-1},Y_{-1},\hat{Y}_{-1})$ a $(T-1)\times3$ matrix, right?
Here is R code:
nwPT_test = function(actual, forecast){
  yt = actual    # assign actual to yt to make code shorter
  xt = forecast  # assign forecast to xt...
  # note: lag() is assumed to be dplyr::lag (base R's lag() behaves differently on plain vectors)
  delta_yt = as.matrix(cbind(ifelse(yt - lag(yt) > 0, 1, 0)[-1]))  # calc change of yt (delta)
  delta_xt = as.matrix(cbind(ifelse(xt - lag(xt) > 0, 1, 0)[-1]))  # calc change of xt (delta)
  nT = length(delta_yt)                  # number of Time periods
  Yt  = cbind(delta_yt[-1])              # Yt  = (y2,...,yT)
  Xt  = cbind(delta_xt[-1])              # Xt  = (x2,...,xT)
  Yt2 = as.vector(rbind(delta_yt[-nT]))  # Yt2 = (y1,...,yT-1)
  Xt2 = as.vector(rbind(delta_xt[-nT]))  # Xt2 = (x1,...,xT-1)
  teta = rep(1, nT - 1)                  # (T-1) vector of ones
  I = diag(nT - 1)                       # identity matrix
  W = cbind(teta, Yt2, Xt2)              # W matrix as in formula
  # Mw as in formula; solve() gives the matrix inverse (W'W)^{-1},
  # whereas (t(W) %*% W)^(-1) would be an element-wise reciprocal
  Mw = I - W %*% solve(t(W) %*% W) %*% t(W)
  # calculating the S elements as in the formula
  Syy.w = ((nT - 1)^(-1)) * t(Yt) %*% Mw %*% Yt
  Sxx.w = ((nT - 1)^(-1)) * t(Xt) %*% Mw %*% Xt
  Sxy.w = ((nT - 1)^(-1)) * t(Xt) %*% Mw %*% Yt
  Syx.w = ((nT - 1)^(-1)) * t(Yt) %*% Mw %*% Xt
  PT = (nT - 1) * (Syy.w^(-1) * Syx.w * Sxx.w^(-1) * Sxy.w)  # finally calculating PT
  p.value = 1 - pchisq(PT, df = 1)  # p-value from chi-square(1)
  # some code to make the output look nicer
  summary = c(PT, p.value)
  names(summary) = c("PT statistic", "p.value")
  summary
}
I managed to solve most issues from my previous thread. Here is the link: https://stats.stackexchange.com/questions/372776/confusions-about-pesaran-timmermann-test-2009-version?noredirect=1#comment701286_372776 It might help to understand the way I approached it. So far my code seems to give correct results.

Please answer only the questions in the current thread. I hope I updated my question enough for it not to be treated as a double post... And thanks again to mlofton for informing me about this great SE subsite.
|
I am trying to do something very simple in Mathematica 9. I want to play around with option pricing and for that I thought it best to use the new stochastic process functionality.
So, first of all I simulate one instance of a geometric brownian motion:
$$ \frac{dX_t}{X_t} = \mu\, dt + \sigma\, dW_t\\ dW_t \sim N(0, dt) $$
Which in Mathematica is:
proc = ItoProcess[
  \[DifferentialD]x[t]/x[t] == σ \[DifferentialD]w[t] + μ \[DifferentialD]t,
  x[t], {x, x0}, t, w \[Distributed] WienerProcess[]];
And here's an example of what I get, when I plot it, assuming $X_0 = 100$.
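For comparison (outside Mathematica), the same process can be simulated with the exact log-Euler update $X_{t+\Delta t} = X_t \exp\big((\mu-\tfrac{\sigma^2}{2})\Delta t + \sigma\sqrt{\Delta t}\,Z\big)$; the parameter values below are assumptions made for the sketch:

```python
import numpy as np

# NumPy sketch of a geometric Brownian motion path (hypothetical parameters)
rng = np.random.default_rng(1)
mu, sigma, x0 = 0.05, 0.2, 100.0
T, n = 1.0, 252
dt = T / n
z = rng.standard_normal(n)
steps = (mu - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z
X = x0 * np.exp(np.concatenate(([0.0], np.cumsum(steps))))  # X[0] == x0
```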
So, okay, when I create a plot of a RandomFunction of the process, I actually plot the TemporalData for $X_t$ and not $dX_t$. Cool, whatever.
But now I want to plot, say $f(dX_t)$ or $f(X_t)$, where I would like to define $f$ as I see fit. And this is where I hit a brick wall. I have tried looking for hints in the docs or for answers here, but there are no definitive ones or the ones that seem to work.
I also feel that I'm missing something fundamental here. Could somebody kindly suggest an answer or an avenue of inquiry?
|
DIFFICULT EUCLIDEAN GEOMETRY QUESTIONS ON CIRCLES
Geometry Problems with Solutions and Answers for Grade 12 (Free Mathematics Tutorials) — What is the measure of angle BOC where O is the center of the circle? Also: free questions and problems with answers for Primary Math (Grades 4 and 5).
A Hard Geometry Problem on circle - Stack Exchange. Ask Question. Asked 2 years, 4 months ago. MathJax is meant for complicated formulae that would be ambiguous or difficult to read as plain text. Hard Euclidean Geometry question. Answer excerpts:
Without loss of generality, we can consider $R=OC=OB=1$. From $\Delta OBC$: $\frac{OB}{\sin20}=\frac{BC}{\sin140}\Rightarrow BC=\frac{\sin140}{\sin20}$...
Join OA and AC. Angle $AOC = 2x$; angle $ABC = 60$ deg (centre angle and circumference angle); $OC=OA$ (radii), so triangle $OAC$ is equilateral and $AC=OA=OC$...
Hint: with a bit of angle-chasing you should be able to establish $\angle ADC=70$. You can then use the sine rule in triangles $ODB$ and $ODC$...
Euclidean Geometry: Circles - learnsetca Tangents drawn from a common point outside the circle are equal in length. Knowledge of geometry from previous grades will be integrated into questions in the exam. - Euclidean Geometry makes up of Maths P2 - If you have attempted to answer a question more than once, make sure you cross out the answer you do not
Geometry Problems with Solutions and Answers Geometry problems with solutions and answers for grade 11. Geometry Problems with Solutions and Answers. Grade 11 geometry problems with detailed solutions are presented. Problems. The two circles below are concentric (have same center). The radius of the large circle is 10 and that of the small circle is 6. Middle School Maths (Grades 6, 7
Super hard Euclidean Geometry - Stack Exchange Super hard Euclidean Geometry. Ask Question Asked 6 years, 4 months ago. I have puzzled over this problem from my book on innovative Euclidean Geometry for months. Browse other questions tagged geometry euclidean-geometry or ask your own question.[PDF]
EUCLIDEAN GEOMETRY: (±50 marks) - vocfm EUCLIDEAN GEOMETRY: (±50 marks) Grade 11 theorems: 1. The line drawn from the centre of a circle perpendicular to a chord bisects the chord. 2. The perpendicular bisector of a chord passes through the centre of the circle. 3. The angle subtended by an arc at the centre of a circle is double the size of
Circle Geometry | Euclidean Geometry | Siyavula 8.2 Circle geometry (EMBJ9) Terminology. The following terms are regularly used when referring to circles: Arc — a portion of the circumference of a circle. Chord — a straight line joining the ends of an arc. Circumference — the perimeter or boundary line of a circle.
World's Hardest Easy Geometry Problem This is the hardest problem I have ever seen that is, in a sense, easy. It really can be done using only elementary geometry. This is not a trick question. Here is a very small hint. Here is a small hint. These hints are not spoilers. There is a review of everything you need to know about elementary geometry [PDF]
MATHEMATICS WORKSHOP EUCLIDEAN GEOMETRY MATHEMATICS WORKSHOP EUCLIDEAN GEOMETRY TEXTBOOK GRADE 11 (Chapter 8) Presented by: Jurg Basson tangent s e c a n t d i a m e t e r c h or d arc r a d i u s sector. seg ment CHAPTER 8 EUCLIDEAN GEOMETRY BASIC CIRCLE TERMINOLOGY THEOREMS INVOLVING THE CENTRE OF A CIRCLE THEOREM 1 A In all questions, O is the centre. (a) Calculate the
What Are Euclidean and Non-Euclidean Geometry? Balloons, Triangles, and AnglesWhat Is Euclidean Geometry?What Is Non-Euclidean Geometry?Non-Euclidean Geometry in The Real WorldWrap UpA few months ago, my daughter got her first balloon at her first birthday party. Ever since that day, balloons have become just about the most amazing thing in her world. After her party, she decided to call her balloon “ba,” and now pretty much everything that’s round has also been dubbed “ba.” A ball? A “ba.” The Moon? Yep, also a “ba.\"Why did she decide that balloons—and every other round object—are so fascinating? I might be biased in this belief, but I’ve come to the conclusion that it’s..See more on quickanddirtytips
Related searches for difficult euclidean geometry question on euclidean geometryeuclidean geometry exampleeuclidean geometry pdfeuclidean geometry termscircle geometry questionseuclidean geometry quizleteuclidean geometry historyeuclidean geometry factsIncluding results for difficult euclidean geometry question on circles.Do you want results only for difficult eucledian geometry question on circles?
|
In a pure diffusion setting, you can equivalently write the no-calendar-arbitrage constraints:

In terms of implied volatility: for any given forward moneyness level, total implied variance should be non-decreasing in time; see Gatheral, top of page 4.

In terms of European option prices: see Gatheral, end of page 3.
The price-based constraint builds on the following lemma
[Lemma] If $X_t$ is a martingale, $L < \infty$ a real constant and $0 < t_1 < t_2$ two future times, then
$$E[(X_{t_2}-L)^+] \geq E [(X_{t_1}-L)^+] $$
[Proof]
\begin{align}
E[(X_{t_2}-L)^+ \vert \mathcal{F}_0 ] &= E[\ E[(X_{t_2}-L)^+ \ \vert \mathcal{F}_{t_1} \ ]\ \vert \mathcal{F}_0 ] \\
&\geq E[\ \left( E[X_{t_2}-L \ \vert \mathcal{F}_{t_1} \ ] \right)^+ \ \vert \mathcal{F}_0 ] \\
&= E[ (X_{t_1}-L)^+ \ \vert \mathcal{F}_0 ]
\end{align}
where we have used, in respective order:
Tower property of conditional expectation
Jensen's inequality ($f : x \rightarrow x^+$ is a convex function)
The fact that $X_t$ is a martingale
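As a quick sanity check of the lemma (my own sketch, not part of the original argument): if we assume $X_t$ is a lognormal martingale with $E[X_t]=1$, then $E[(X_t-L)^+]$ has a Black–Scholes closed form that is increasing in the total volatility $\sigma\sqrt{t}$, so the inequality can be verified deterministically:

```python
import math

def norm_cdf(x):
    # Standard normal CDF via the error function
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def expected_payoff(L, v):
    """E[(X - L)^+] for a lognormal martingale X = exp(v*Z - v^2/2), E[X] = 1."""
    if v <= 0.0:
        return max(1.0 - L, 0.0)
    d1 = (-math.log(L) + 0.5 * v * v) / v
    return norm_cdf(d1) - L * norm_cdf(d1 - v)

# assumed (illustrative) parameters
sigma, t1, t2, L = 0.2, 0.5, 1.0, 1.1
c1 = expected_payoff(L, sigma * math.sqrt(t1))
c2 = expected_payoff(L, sigma * math.sqrt(t2))
assert c2 >= c1 >= 0.0   # the lemma's inequality, t2 > t1
```

The monotonicity in total volatility is exactly what the lemma asserts for this particular martingale.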
In a more general setting, one can still use this lemma to derive price-based constraints. The only question is: what martingale $X_t$ should we consider?
[Proportional dividends]
In a pure diffusion setting, it made sense to use $X_t = S_t/F (0,t) $, because $dS_t/S_t = \mu_t dt + \sigma_t dW_t \Rightarrow S_t = F(0,t)X_t$, where $X_t=\mathcal{E}(\int_0^t \sigma_s dW_s)$ is indeed a martingale (Doléans-Dade exponential). Applying the lemma then gives:
\begin{align*}& E[(X_{t_2}-L)^+] \geq E [(X_{t_1}-L)^+] \\\iff & E\left[\left(\frac{S_{t_2}}{F(0,t_2)}-L\right)^+\right] \geq E \left[\left(\frac{S_{t_1}}{F(0,t_1)}-L\right)^+\right] \\\iff & \frac{1}{F(0,t_2)} E[(S_{t_2}-LF(0,t_2))^+] \geq \frac{1}{F(0,t_1)} E[(S_{t_1}-LF(0,t_1))^+] \\\iff & \frac{\tilde{C}(K_2,t_2)}{F(0,t_2)} \geq \frac{\tilde{C}(K_1,t_1)}{F(0,t_1)} \end{align*}where $\tilde{C}(K,T)$ denotes an
undiscounted call price and $K_1=LF(0,t_1)$, $K_2=LF(0,t_2)$. This is precisely Gatheral's result:$$ \frac{C_2}{K_2} \geq \frac{C_1}{K_1} $$(in his paper, he always uses undiscounted call prices and he chose $L=e^k$), since$$ \frac{K_2}{K_1} = \frac{F(0,t_2)}{F(0,t_1)} $$
[Cash & Proportional dividends]
In a more elaborate setting, it will depend on how you model dividends. Buehler for instance suggests a no-arbitrage pricing framework which can accommodate cash dividends, proportional dividends, and/or any mix of the two. In his model, it makes sense to use the martingale $X_t = (S_t-D_t)/(F (0,t) - D_t) $ where $D_t $ is related to the future dividend stream (all divs are assumed to be known in advance). Applying the lemma gives:
\begin{align*}& E[(X_{t_2}-L)^+] \geq E [(X_{t_1}-L)^+] \\\iff & E\left[\left(\frac{S_{t_2}-D_{t_2}}{F(0,t_2)-D_{t_2}}-L\right)^+\right] \geq E \left[\left(\frac{S_{t_1}-D_{t_1}}{F(0,t_1)-D_{t_1}}-L\right)^+\right] \\\iff & \frac{E[(S_{t_2}-(D_{t_2}+L(F(0,t_2)-D_{t_2})))^+] }{F(0,t_2)-D_{t_2}} \geq \frac{E[(S_{t_1}-(D_{t_1}+L(F(0,t_1)-D_{t_1})))^+]}{F(0,t_1)-D_{t_1}} \\\iff & \frac{\tilde{C}(K_2,t_2)}{F(0,t_2)-D_{t_2}} \geq \frac{\tilde{C}(K_1,t_1)}{F(0,t_1)-D_{t_1}}\end{align*}where $\tilde{C}(K,T)$ denotes an
undiscounted call price and $K_1=D_{t_1}+L(F(0,t_1)-D_{t_1})$, $K_2=D_{t_2}+L(F(0,t_2)-D_{t_2})$.
Observe that when $(D_t)_{t\geq0} = 0$, we fall back on Gatheral's result. In Buehler, $(D_t)_{t\geq0} = 0$ iff there are no cash dividends (meaning there could be either proportional dividends or no dividends at all). This is completely consistent with what we have said in the pure diffusion case.
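As an illustration (my own sketch: the lognormal dynamics for $X_t$ and all market data below are assumptions, not Buehler's calibration), the dividend-adjusted constraint can be checked numerically:

```python
import math

def norm_cdf(x):
    # Standard normal CDF via the error function
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def expected_payoff(L, v):
    # E[(X - L)^+] for a lognormal martingale with E[X] = 1 and total vol v
    d1 = (-math.log(L) + 0.5 * v * v) / v
    return norm_cdf(d1) - L * norm_cdf(d1 - v)

# hypothetical market inputs
F1, F2 = 102.0, 104.0      # forwards F(0,t1), F(0,t2)
D1, D2 = 1.0, 2.0          # Buehler dividend terms D_{t1}, D_{t2}
sigma, t1, t2, L = 0.25, 0.5, 1.0, 1.05

K1 = D1 + L * (F1 - D1)
K2 = D2 + L * (F2 - D2)
# S_t = D_t + (F(0,t) - D_t) X_t  =>  C~(K_i, t_i) = (F_i - D_i) E[(X - L)^+]
C1 = (F1 - D1) * expected_payoff(L, sigma * math.sqrt(t1))
C2 = (F2 - D2) * expected_payoff(L, sigma * math.sqrt(t2))
assert C2 / (F2 - D2) >= C1 / (F1 - D1)   # no calendar arbitrage
```

Setting `D1 = D2 = 0` recovers the pure-diffusion check with $K_i = L\,F(0,t_i)$.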
[Arbitrage opportunity]
Finally, note that the above inequalities describe the calendar arbitrage opportunities. In the second situation for instance, a portfolio where you are long $\tilde{C}(K=K_2,T=t_2)$ and short $(F(0,t_2)-D_{t_2})/(F(0,t_1)-D_{t_1})$ units of $\tilde{C}(K=K_1,T=t_1)$ should always have a non-negative value (we just showed that). If not, it is an arbitrage opportunity.
|
Ground state homoclinic solutions for a second-order Hamiltonian system
Department of Mathematics, Xiangnan University, Chenzhou, Hunan 42300, China
This paper is concerned with the second-order Hamiltonian system
$$ \ddot{u}-L(t)u+\nabla W(t, u) = 0, \qquad t\in {\mathbb{R}},\ u\in {\mathbb{R}}^{N}, $$
where $ L: \mathbb{R}\rightarrow {\mathbb{R}}^{N\times N} $ and $ W: {\mathbb{R}}\times {\mathbb{R}}^{N}\rightarrow {\mathbb{R}} $ are periodic in $ t $, under a spectral condition relating $ 0 $ to $ \sigma\left(-\frac{d^2}{dt^2} +L\right) $.

Keywords: Homoclinic solution, Hamiltonian system, periodic potentials, ground state homoclinic solution, variational method.

Mathematics Subject Classification: Primary: 34C37; Secondary: 58E05.

Citation: Xiaoping Wang. Ground state homoclinic solutions for a second-order Hamiltonian system. Discrete & Continuous Dynamical Systems - S, 2019, 12 (7): 2163-2175. doi: 10.3934/dcdss.2019139
|
Summer Term 2016
Please note that this term the seminars will be held on Thursdays, from 2pm to 3pm.
José Miguel Manzano, Compact stable surfaces with constant mean curvature in Killing submersions . Thursday 5th May, 2-3pm, Huxley 139.
Abstract: A Killing submersion is a Riemannian submersion from an orientable 3-manifold to an orientable surface, such that the fibres of the submersion are the integral curves of a Killing vector field without zeroes. The interest of this family of structures is the fact that it represents a common framework for a vast family of 3-manifolds, including the simply-connected homogeneous ones and the warped products with 1-dimensional fibres, among others.
In the first part of this talk we will discuss existence and uniqueness of Killing submersions in terms of some geometric functions defined on the base surface, namely the Killing length and the bundle curvature. We will show how these two functions and the metric in the base encode the geometry and topology of the total space of the submersion. In the second part, we will prove that if the base is compact and the submersion admits a global section, then it also admits a global minimal section. This gives a complete solution to the Bernstein problem (i.e., the classification of entire graphs with constant mean curvature) when the base surface is assumed compact. Finally we will talk about some results on compact orientable stable surfaces with constant mean curvature immersed in the total space of a Killing submersion. In particular, if they exist, then either (a) the base is compact and it is one of the above global minimal sections, or (b) the fibres are compact and the surface is a constant mean curvature torus.
This is based upon a joint work with Ana M. Lerma. It is available online at http://arxiv.org/abs/1604.00542.
Pieter Blue. Hidden symmetries and decay of fields outside black holes. Thursday 12th May, 2-3pm, Huxley 139
Abstract: I will discuss energy and Morawetz (or integrated local decay) estimates for fields outside black holes, in particular the Vlasov equation. This builds on earlier work for the wave and Maxwell equation. Much of the work on these problems in the last decade has used the vector-field method and its generalisations. One generalisation has focused on using symmetries, differential operators that take solutions of a PDE to solutions. In this context, a hidden symmetry is a symmetry that does not decompose into first-order symmetries coming from a smooth family of isometries of the underlying manifold. In this talk, I will build on applications of the vector-field method to the Vlasov equation to prove an integrated energy decay for the Vlasov equation outside a very slowly rotating Kerr black hole, and I will discuss some new features of the symmetry algebra for the Vlasov equation, which illustrate the difficulties in passing to pointwise-decay estimates for the Vlasov equation in this context.
Jacobus Portegies, Intrinsic Flat and Gromov-Hausdorff convergence. Thursday 15th June, 2-3pm, Clore Lecture Theatre (Huxley, ground floor) – please note the exceptional date and room.
Abstract: We show that for a noncollapsing sequence of closed oriented Riemannian manifolds with Ricci curvature uniformly bounded from below and diameter bounded above, Gromov-Hausdorff convergence essentially agrees with intrinsic flat convergence.
Gerasim Kokarev. Eigenvalue problems on minimal submanifolds: old and new. Thursday 16th June, 2-3pm, Huxley 139
Abstract: I will give a short survey of inequalities for Laplace eigenvalues on Euclidean domains, and discuss their versions for minimal submanifolds. I will report on the work in progress and will describe a number of new results, generalizing previous work by Li and Yau, and other authors.
Elena Mäder-Baumdicker. Willmore minimizing Klein bottles in Euclidean space. Thursday 23rd June, 2-3pm, Huxley 139
Abstract: I will present results concerning immersed Klein bottles in Euclidean $n$-space with low Willmore energy. Together with P. Breuning and J. Hirsch I proved that there is a smooth embedded Klein bottle that minimizes the Willmore energy among immersed Klein bottles when $n \geq 4$. I will briefly explain that the minimizer is probably already known: Lawson's bipolar $\tilde\tau_{3,1}$-Klein bottle, a minimal Klein bottle in $S^4$. If $n=4$, there are three distinct homotopy classes of immersed Klein bottles that are regularly homotopic to an embedding. One contains the above-mentioned minimizer. The other two are characterized by the property of having Euler normal number $+4$ or $-4$. I will explain that the minimum of the Willmore energy in these two classes is $8\pi$. Furthermore, there are infinitely many distinct embedded surfaces minimizing the Willmore energy in these classes. The proof is based on the twistor theory of the Euclidean four-space.
There will be no seminars on 19/05, 26/05, 02/06 and 09/06.
|
[The following includes a) two specific questions (at the end), b) an attempt to capture a dispositional concept (excitability) in geometric and physical terms.]
I assume that "excitability of a neuron" is a reasonable concept (and measure) to distinguish neurons: there are (as will be seen) neurons that are more or less excitable (in a quantifiable way).
What does "excitability" mean?
Excitability $E$ may be operationally defined as the sensitivity of a neuron to excitatory synaptic inputs. Mathematically stated: as the (decimal logarithm of the reciprocal of the) proportion of excitatory synapses that must be simultaneously active (i.e. generate an excitatory postsynaptic potential, EPSP) in order to collectively evoke an action potential.
Let $N$ be the number of all excitatory synapses of the neuron, and $n$ the minimal number of simultaneously active synapses needed to evoke an action potential:
$$E = \log \frac{N}{n}$$
If the proportion is 100% (all excitatory synapses must be active), $E = \log (1) = 0$; if it's only 10%, $E=1$ (higher excitability); if it's 1%, $E = 2$, and so on.
How to determine $E$ for a given neuron? With the operational definition given above:
Count the actual number $N$ of excitatory synapses.
Estimate the minimal number $n$ of simultaneously active synapses that is needed to evoke an action potential (by repeated experiments).
Especially the second number $n$ will be quite hard to determine experimentally. But there is an alternative way to estimate it, given a simplified model of the neuron:
All synapses have equal (mean) distance $r$ to the axon hillock (where the action potential is generated). The EPSP generated at a synapse is $u$. The EPSPs travelling from the synapse to the axon hillock have a constant decay constant $\alpha$. The threshold value for action potential generation is $\theta$.
When $m$ synapses simultaneously generate an EPSP of size $u$, these EPSPs will sum up at the axon hillock to
$$ U = m\times u\times 10^{-\alpha r}$$
An action potential will be generated only if $U \geq \theta$; that is, the minimal number of simultaneously active synapses needed to generate an action potential is
$$ n = 10^{\alpha r}\ \frac{\theta}{u}$$
With this we can estimate the excitability $E = \log\frac{N}{n} = \log 10^{-\alpha r}\frac{N\ u}{\theta}$ by
$$E = -\alpha r + \log(N) + \log \frac{u}{\theta} $$
$\alpha$, $u$ and $\theta$ are more like physical constants (more or less the same for all types of neurons?), while $r$ and $N$ depend heavily on the particular geometry/morphology of the neuron (in particular, longer and more strongly branched dendrites have more synapses).
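A quick numerical sketch of these formulas (all parameter values below are made up for illustration, not measured):

```python
import math

# Hypothetical parameters (illustrative only)
N = 10_000       # number of excitatory synapses
u = 0.5          # EPSP size at the synapse (mV)
theta = 15.0     # firing threshold at the axon hillock (mV)
alpha = 0.002    # decay constant (1/micrometre)
r = 150.0        # mean synapse-to-hillock distance (micrometres)

# Minimal number of simultaneously active synapses: n = 10^(alpha*r) * theta/u
n = 10 ** (alpha * r) * theta / u

# Excitability: E = -alpha*r + log10(N) + log10(u/theta)
E = -alpha * r + math.log10(N) + math.log10(u / theta)

# Consistency with the definition E = log10(N/n)
assert abs(E - math.log10(N / n)) < 1e-9
```

With these made-up values roughly 60 simultaneous EPSPs suffice, giving an excitability slightly above 2.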
My specific questions are (please feel free to answer only with "yes" or "no"):
Has the property of excitability (in the operational sense above) been investigated a) for single neurons, b) for morphological types of neurons?
Is there an observed significant correlation between morphological type and excitability? Note that larger dendritic trees have larger $r$ (decreasing $E$) but also larger $N$ (increasing $E$).
(Final remark: The relation between $r$ and $N$ of a neuron is partially reflected in the structure – especially the degree of branching – of its dendritic tree.)
|
Description
Given a set of $n$ points in $\ell_{1}$, how many dimensions are needed to represent all pairwise distances within a specific distortion? This dimension-distortion tradeoff question is well understood for the $\ell_{2}$ norm, where $O((\log n)/\epsilon^{2})$ dimensions suffice to achieve $1+\epsilon$ distortion. In sharp contrast, there is a significant gap between upper and lower bounds for dimension reduction in $\ell_{1}$. A recent result shows that distortion $1+\epsilon$ can be achieved with $n/\epsilon^{2}$ dimensions. On the other hand, the only lower bounds known are that distortion $\delta$ requires $n^{\Omega(1/\delta^2)}$ dimensions and that distortion $1+\epsilon$ requires $n^{1/2-O(\epsilon \log(1/\epsilon))}$ dimensions. In this work, we show the first near-linear lower bounds for dimension reduction in $\ell_{1}$. In particular, we show that $1+\epsilon$ distortion requires at least $n^{1-O(1/\log(1/\epsilon))}$ dimensions. Our proofs are combinatorial, but inspired by linear programming. In fact, our techniques lead to a simple combinatorial argument that is equivalent to the LP-based proof of Brinkman-Charikar for lower bounds on dimension reduction in $\ell_{1}$.
|
ISO Sensitivity (or ISO speed) is a measure of how strongly an image sensor and/or camera responds to light. The higher the sensitivity, the less light (smaller aperture and/or shorter exposure time) is required to capture a good quality image. Unfortunately there are several measures of sensitivity, and they are not consistent. Imatest calculates two of them: Saturation-based ISO sensitivity and Standard Output Sensitivity.

Exposure Index (EI) is a camera setting derived from one or more of the sensitivity measurements. It is used to determine the camera's exposure in response to a light level measurement. Exposure Index and Sensitivity are closely related and sometimes used interchangeably, but should be kept distinct. For example, one might say, "The camera's Saturation-based ISO sensitivity is 80 when the Exposure Index is set to 100." Increasing the Exposure Index:

increases the (analog) gain at the image sensor output, prior to digitization (A-to-D conversion), allowing the camera to operate with less light,
increases noise, degrading overall image quality,
causes the system to saturate at lower light levels (though the light level that saturates the sensor is unchanged), increasing both sensitivity measurements.

Measurements are most representative of the sensor when the camera is set to its minimum EI (though they may not be perfectly representative if the system saturates, i.e. reaches maximum pixel level, before the sensor itself saturates).
The following table summarizes the two sensitivity measurements supported by Imatest.
Saturation-based and Standard Output Sensitivity

Saturation-based Sensitivity (S_sat)
Measures: Sensitivity relative to the luminance level that saturates the sensor or system. When the image is exposed for a region with 18% reflectance with EI derived from S_sat, a luminance equivalent to 141% reflectance will saturate the system (41% headroom).
Affected by: Sensor saturation level and analog gain prior to A-to-D conversion (set by the camera's EI setting). Not affected by the tonal response curve (TRC).
Measurement accuracy: About ±10%. Best with straight gamma encoding (no shoulder), which can be obtained by decoding the RAW image with dcraw. May be decreased by a "shoulder" in the TRC. Degraded by underexposure; improved by slight to moderate overexposure (lightest patch at or near saturation).
Origin: Original ISO 12232 standard (1997).

Standard Output Sensitivity (S_SOS)
Measures: Sensitivity that results in a standard output level for the region used to determine the exposure. Exposure is typically based on a gray region with 18% (0.18) reflectance. The standard output level is a normalized pixel level of 0.18^(encoding gamma), i.e., pixel level = 116 (out of 255 maximum) for encoding gamma = 1/2.2.
Affected by: All signal processing factors, especially the "shoulder" of the TRC applied during RAW conversion to reduce the likelihood of sensor saturation. A TRC shoulder tends to increase S_SOS.
Measurement accuracy: About ±10%. Since S_SOS includes the effects of signal processing (the TRC, etc.), measurement accuracy is not affected by the TRC, but it may be degraded by "adaptive" processing, where different parts of the image are processed differently.
Origin: CIPA DC-004 (2004).
For measuring the luminance of transmissive charts that have density reference files (for calculations other than Exposure Index), note that the measured reference file densities include the base density. You must therefore measure the luminance of the lightbox without the chart, then apply the equation

Patch Luminance = Lightbox Luminance × 10^(−density)
Sensitivity is calculated in five Imatest Master modules: eSFR ISO, SFRplus, Color/Tone Interactive, Stepchart, and Colorcheck, when

the incident light level in lux is entered (a lux meter is described on the Test Lab page), and
EXIF data is available in the image file, or Aperture (F-stop number) and Exposure (time in seconds) are entered manually. Phil Harvey's ExifTool is strongly recommended; without it, EXIF data is only decoded for JPEG files.
Typically the incident lux level is entered in the input dialog box, and the sensitivity is displayed in one of the output plots and in the CSV output file, if it is written. Details vary slightly with the module.
Stepchart

The incident light level in lux is entered in a box near the lower-left of the Stepchart input dialog box. If it is blank or zero, sensitivity will not be calculated.

If EXIF data is available, Aperture (F-stop number) and Exposure (time in seconds) are displayed below Incident Lux. If these boxes are empty they must be entered manually for the sensitivity analysis to run.

Results (S_sat, S_SOS, and the incident lux level) are displayed in the upper-right of the Stepchart noise detail figure (the second figure when all are checked).

Colorcheck

The incident light level in lux is entered in a box near the lower-center of the Colorcheck input dialog box. If it is blank or zero, sensitivity will not be calculated. Results (S_sat, S_SOS, and the incident lux level) are displayed in the Density response plot in the upper-left of the Colorcheck noise detail figure (the second figure when all are checked).

SFRplus
The incident light level in lux is entered in the Settings area of the SFRplus settings & options window, which is opened from the Rescharts SFRplus setup window. If it is blank or zero, sensitivity will not be calculated.

If EXIF data is available, Aperture (F-stop number) and Exposure (time in seconds) are displayed in the Optional parameters area near the bottom of the Settings window. If these boxes are empty they must be entered manually for the sensitivity analysis to run.

Results (S_sat, S_SOS, and the incident lux level) are displayed in the lower right of the Density response plot in the Tonal response & gamma display.
Color/Tone Interactive and Color/Tone Auto
The incident light level in lux is entered in the Display area on the right of the Color/Tone Interactive window when Display is set to 7. Black & White density. If it is blank or zero, sensitivity will not be calculated.

To check the EXIF data (or to enter the Aperture and Exposure settings if EXIF data is absent), open the Color matrix and additional settings menu, which contains the fields for displaying and/or entering Aperture and Exposure.

Results (S_sat, S_SOS, and the incident lux level) are displayed in the lower-right of the Black & White density plot. When results are saved, they appear in the CSV file [root name]_multicharts.csv only if Black & White density has been displayed.

General background
The two key sensitivity measurements calculated by Imatest are both derived from the same equation.
\(\displaystyle \text{Sensitivity} = \frac{10}{H} = \frac{10}{I_{sensor}t}\) (1)
where H is the exposure at the image sensor in lux-seconds for the object (test chart patch) used to determine exposure, typically a region of 18% reflectance; I_sensor is the illuminance in lux at the sensor; and t is the exposure time in seconds. The key issues are how to specify the illuminance at the object plane (i.e., lux at the test chart) and the output signal (i.e., pixel level) corresponding to H, and the criteria for selecting H.
The relationship between test chart luminance and sensor illuminance can be derived from an equation in The manual of photography by Jacobson, Ward, Ray, Attridge, and Axford, Focal Press, Chapter 5, eqn. (5), p. 65,
\(\displaystyle I = \pi \frac{T L_{obj}}{4N^2} \) (2)
where I is the illuminance at the image (sensor) plane in lux, T is the lens transmittance fraction (generally assumed to be 0.9), L_obj is the luminance at the object plane (test chart) in candelas per square meter, and N is the f-stop (aperture).
The sensitivity standards contain two factors in addition to T that reduce the light passing through the lens: a vignetting factor v, assumed to be 0.98, and \(\cos^4(\theta)\), where the field angle \(\theta\) is assumed to be 10 degrees: \(\cos^4(10°) = \cos^4(\pi/18 \text{ radians}) = 0.9406\). This number is somewhat high for typical Imatest test conditions; using \(\theta = 6\) degrees, \(\cos^4(\theta) = 0.9783\). With these additional factors,

\(\displaystyle I = \pi T v L_{obj} \frac{\cos^4(\theta)}{4N^2} = \pi (0.9)(0.98)(0.9783)\frac{L_{obj}}{4N^2} = 0.6777 \frac{L_{obj}}{N^2}\) (3)
Plugging I back into equation (1) gives
\(\displaystyle \text{Sensitivity} = \frac{10}{H} = \frac{10}{It} = 10 \frac{N^2}{0.6777 L_{obj} t} = 14.76 \frac{N^2}{L_{obj} t} \) (4)
Note that for \(\theta = 10\) degrees, Sensitivity = \(15.4\frac{N^2}{L_{obj}t}\), in agreement with the OnSemi ISO Measurement document. Sensitivity has units of 1/(light level × exposure). This can be visualized by recalling that exposure is inversely proportional to light level and sensitivity, i.e., as light or sensitivity increases, exposure decreases.
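As a quick sketch (the f-stop, luminance, and exposure values below are illustrative, not from Imatest), equations (3) and (4) can be coded directly:

```python
import math

def iso_sensitivity(N, L_obj, t, theta_deg=6.0, T=0.9, v=0.98):
    """Eq. (4): Sensitivity = 10 / (I * t), with I from eq. (3).

    N: f-stop, L_obj: patch luminance (cd/m^2), t: exposure time (s).
    theta_deg, T, v are the standards' assumed field angle, lens
    transmittance and vignetting factor.
    """
    k = math.pi * T * v * math.cos(math.radians(theta_deg)) ** 4 / 4.0
    return 10.0 * N * N / (k * L_obj * t)

# The constant 10/k reproduces the 14.76 of eq. (4) for theta = 6 degrees
k6 = math.pi * 0.9 * 0.98 * math.cos(math.radians(6.0)) ** 4 / 4.0
assert abs(10.0 / k6 - 14.76) < 0.01
```

Changing `theta_deg` to 10 recovers the 15.4 constant mentioned above.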
Imatest modules that measure sensitivity analyze images of grayscale step charts, which consist of patches of known density d or reflectivity r, where \( d= -\log_{10}(r); \; r= 10^{-d}\). The X-Rite ColorChecker and the major charts supported by Color/Tone Interactive contain such patterns. For reference, pure white surfaces have r of about 90% (0.9), equivalent to d of about 0.05. The luminance of a chart patch is

\( L_{obj} = I r / \pi \) (5)

where I is the illuminance of the test chart in lux.
Reference: Basic Photographic Materials and Processes, Second Edition, by Stroebel, Current, Compton, and Zakia, Chapter 1, p. 27 (footnote): "Metric: A perfectly diffusely reflecting surface (100%) illuminated by 1 metercandle (… 1 lux) will reflect … 1/π candela per square meter."
Illuminance in Lux is easily measurable by an incident light meter such as the inexpensive BK Precision 615.
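Equation (5), combined with the density relation r = 10^(−d), can be sketched as a small helper (the 1000 lux value is an illustrative example, not a required test condition):

```python
import math

def patch_luminance(illuminance_lux, density):
    # Eq. (5): L_obj = I * r / pi, with reflectivity r = 10**(-density)
    return illuminance_lux * 10 ** (-density) / math.pi

# 1000 lux on an 18% gray patch (density = -log10(0.18) ≈ 0.745)
L18 = patch_luminance(1000.0, -math.log10(0.18))
assert abs(L18 - 1000.0 * 0.18 / math.pi) < 1e-9
```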
This measurement assumes the image is exposed using a standard gray card with 18% reflectivity (r = 0.18) and that the sensor saturates (reaches its maximum output) at 141% reflectivity (well above the 90% reflectivity of pure white) in order to provide some “headroom”. The patch (and corresponding luminance L_sat and saturation reflectance r_sat) where the sensor saturates is calculated by extrapolating the brightest unsaturated patches. (Saturated patches cannot be used directly because they contain no real information. In such cases the extrapolated saturation luminance cannot be larger than that of the first saturated patch.)
The equation is derived by assuming that L_obj is 18/141 of the value that saturates the sensor, \( L_{sat} = \frac{Ir_{sat}}{\pi}\), where r_sat, and hence L_sat, is determined by extrapolating patch pixel levels to locate the saturation point, i.e., at the test chart we must use
\(L_{obj} = \frac{0.18}{1.414} L_{sat} = 0.1273 L_{sat}\)
\(\displaystyle \text{ISO Sensitivity} = S_{sat} = 14.76 \frac{N^2}{L_{obj} t} = 116 \frac{N^2}{L_{sat}t} = 116\frac{\pi N^2}{I r_{sat} t} = 364.6 \frac{ N^2}{I r_{sat}t}\). (6)
Saturation-based ISO sensitivity is not affected by signal processing, though measurement accuracy can be strongly affected. For best accuracy RAW files are recommended.
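Equations (5) and (6) combine into a short computation. Below is a minimal sketch; the function and parameter names are assumptions, not Imatest's API.

```python
import math

def iso_sat(N, t, I, r_sat):
    """Saturation-based ISO sensitivity S_sat, per equation (6).

    N: f-stop; t: exposure time (s); I: chart illuminance (lux);
    r_sat: extrapolated saturation reflectance (e.g. 1.41 for 141%).
    """
    L_sat = I * r_sat / math.pi        # equation (5) at the saturation patch
    L_obj = (0.18 / 1.414) * L_sat     # 18% gray card with 141% headroom
    return 14.76 * N ** 2 / (L_obj * t)
```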
Standard Output Sensitivity S_SOS (from CIPA DC-004)
This measurement assumes the image is exposed using a standard gray card with 18% reflectivity (r = 0.18), and that the normalized pixel level for this region is 0.18^(1/gamma), where gamma is the display gamma corresponding to the color space. Many standard color spaces such as sRGB and Adobe RGB are designed for display with gamma = 2.2, so the normalized pixel level is 0.4586 (pixel level = 116 for 8-bit pixels where the maximum is 255). (0.461, or pixel level 118, is used in DC-004 because sRGB gamma is not exactly 2.2.) The patch density d_46 = −log₁₀(r_46) and luminance \( L_{obj} = L_{46}\), where \( L_{obj} = I r_{46} / \pi\), corresponding to this pixel level is found using a second-order polynomial fit to the log pixel levels as a function of patch density. If L_obj is the patch luminance,
\(\displaystyle \text{Standard Output Sensitivity} = S_{SOS} = 14.76 \frac{N^2}{L_{obj}t} = 46.37 \frac{N^2}{I r_{46}t}\) . (7)
SOS is strongly dependent on signal processing. It is increased when a “shoulder” is present in the tonal response curve (TRC). Shoulders are widely used to reduce the likelihood of highlight “burnout” (saturation). For files encoded with a straight gamma = 1/2.2 curve (what you get when you decode a RAW file with dcraw), \( S_{SOS} = 0.71 S_{sat}\), i.e., the image is assumed to saturate at 100% reflectivity (no headroom). When a shoulder is present, S_SOS is generally larger than S_sat.
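Equation (7) can be sketched the same way (again, the names are illustrative assumptions, not Imatest code):

```python
import math

def s_sos(N, t, I, r_46):
    """Standard Output Sensitivity, per equation (7).

    r_46 is the interpolated chart reflectance at the pixel level
    corresponding to the 18% gray card (0.4586 for gamma = 2.2).
    """
    L_obj = I * r_46 / math.pi            # equation (5)
    return 14.76 * N ** 2 / (L_obj * t)   # = 46.37 * N^2 / (I * r_46 * t)
```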
Headroom is the amount of exposure available between the exposure for a 100% reflective patch and system saturation. When Exposure Index (EI) is set to the saturation-based ISO sensitivity, S_sat, headroom is always Hdr_sat = 41.4% or 1/2 f-stop (or EV or Zone).
Standard Output Sensitivity S_SOS and its corresponding headroom Hdr_SOS are both strong functions of the response curve. Increasing the response “shoulder” (curvature near saturation) increases both S_SOS and Hdr_SOS. The formula is fairly simple:
\(Hdr_{SOS}(\text{f-stops}) = \log_2(S_{SOS} / S_{sat}) + 0.5\) (f-stops or EV or zones) (8)
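Equation (8) is a one-liner; checking it against the straight-gamma case (where S_SOS ≈ 0.71 S_sat) shows the headroom collapsing to roughly zero, as expected for a curve with no shoulder. A sketch with an assumed function name:

```python
import math

def headroom_sos_fstops(S_SOS, S_sat):
    """SOS headroom in f-stops (or EV, or zones), per equation (8)."""
    return math.log2(S_SOS / S_sat) + 0.5

# Straight gamma curve: S_SOS ~ 0.71 * S_sat, so headroom ~ 0 f-stops.
# A pronounced shoulder raises S_SOS and hence the headroom.
```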
dcraw input dialog box (appears when RAW files are opened), showing recommended settings for ISO sensitivity measurement. Auto white level must be unchecked. Demosaicing should be set to Normal RAW conversion. (We haven’t found the right combination of settings that produces reliable results with Bayer RAW files.) Output color space is normally set to sRGB. Output gamma should be set to sRGB or 2.2.
Q-14 and Colorcheck image, cropped and reduced
The determination of the saturation point is made by extrapolation or interpolation, depending on whether any of the brightest patches are saturated. Because RAW files converted with dcraw have a straight gamma curve (a straight line on a logarithmic scale, i.e., no “shoulder”), the saturation point can be determined with good accuracy.
The outputs below are for Stepchart runs on the full-sized version of the image shown cropped and reduced on the right.
These are the results for a RAW (CR2) file converted with dcraw. Note the straight-line tonal response (slope = gamma) on the upper left. As predicted, S_SOS is approximately 0.71 × S_sat, though there is some experimental error. Stepchart: density response and sensitivity from a RAW (CR2) file converted with dcraw.
The results below show the tonal response curve for the same exposure, but using the JPEG image instead of the RAW image. The “shoulder” (curvature) in the bright areas on the right of the curve makes it difficult to accurately estimate the saturation point, although it’s not a bad thing pictorially. The shoulder reduces the likelihood of highlight saturation, resulting in generally more pleasing images. The value of S_sat derived from the RAW image (above) is more reliable. Note that the Standard Output Sensitivity S_SOS = 110.8 has more than doubled. Headroom is \( \log_2 \bigl(\frac{110.8}{75}\bigr) + 0.5 = 1.06\) f-stops. Stepchart: density response and sensitivity from JPEG. Same horizontal scale.
Wikipedia – Film Speed The place to start, as usual.
OnSemi Image Sensor – ISO Measurement Describes the saturation-based ISO sensitivity measurement, but uses a different saturation level (106% reflectivity, relative to 18% used for determining exposure) than the other documents (141%, which gives greater “headroom”). Mentions a noise-based measurement, which is used infrequently.
CIPA DC-004: Sensitivity of digital cameras (July 27, 2004) Two definitions of camera sensitivity: Standard Output Sensitivity (SOS) and Recommended Exposure Index (REI). Published by the Camera & Imaging Products Association (Japan).
New Measures of the Sensitivity of a Digital Camera, Douglas A. Kerr, Aug. 30, 2007. A useful commentary on the different sensitivity measurements.
|
Hi, can someone provide me some self-study reading material for condensed matter theory? I've done QFT previously, for which I could happily read Peskin supplemented with David Tong. Can you please suggest some references along those lines? Thanks
@skullpatrol The second one was in my MSc and covered considerably less than my first and (I felt) didn't do it in any particularly great way, so distinctly average. The third was pretty decent - I liked the way he did things and was essentially a more mathematically detailed version of the first :)
2. A weird particle or state that is made of a superposition of a torus region with clockwise momentum and anticlockwise momentum, resulting in one that has no momentum along the major circumference of the torus but still nonzero momentum in directions that are not pointing along the torus
Same thought as you; however, I think the major challenge of such a simulator is the computational cost. GR calculations, with their highly nonlinear nature, might be more costly than a computation of a protein.
However I can see some ways of approaching it. Recall how Slereah was building some kind of spacetime database; that could be the first step. Next, one might look to machine learning techniques to help with the simulation by using the classifications of spacetimes, as machines are known to perform very well on sign problems, as a recent paper has shown
Since GR equations are ultimately a system of 10 nonlinear PDEs, it might be possible the solution strategy has some relation with the class of spacetime that is under consideration, thus that might help heavily reduce the parameters need to consider to simulate them
I just mean this: The EFE is a tensor equation relating a set of symmetric 4 × 4 tensors. Each tensor has 10 independent components. The four Bianchi identities reduce the number of independent equations from 10 to 6, leaving the metric with four gauge fixing degrees of freedom, which correspond to the freedom to choose a coordinate system.
@ooolb Even if that is really possible (I can always talk about things in a non-joking perspective), the issue is that 1) unlike other people, I cannot incubate my dreams for a certain topic due to Mechanism 1 (conscious desires have a reduced probability of appearing in dreams), and 2) for 6 years, my dream has yet to show any sign of revisiting the exact same idea, and there are no known instances of either sequel dreams or recurring dreams
@0celo7 I felt this aspect can be helped by machine learning. You can train a neural network with some PDEs of a known class with some known constraints, and let it figure out the best solution for some new PDE after say training it on 1000 different PDEs
Actually that makes me wonder, are the space of all coordinate choices more than all possible moves of Go?
enumaris: From what I understood from the dream, the warp drive shown here may be some variation of the Alcubierre metric with a global topology that has 4 holes in it, whereas the original Alcubierre drive, if I recall, doesn't have holes
orbit stabilizer: h bar is my home chat, because this is the first SE chat I joined. Maths chat is the 2nd one I joined, followed by periodic table, biosphere, factory floor and many others
Btw, since gravity is nonlinear, do we expect if we have a region where spacetime is frame dragged in the clockwise direction being superimposed on a spacetime that is frame dragged in the anticlockwise direction will result in a spacetime with no frame drag? (one possible physical scenario that I can envision such can occur may be when two massive rotating objects with opposite angular velocity are on the course of merging)
Well, I'm a beginner in the study of General Relativity, ok? My knowledge of the subject is based on books like Schutz, Hartle, Carroll and introductory papers. About quantum mechanics I have poor knowledge as yet.
So, what I meant by "gravitational double-slit experiment" is: is there a gravitational analogue of the double-slit experiment for gravitational waves?
@JackClerk the double-slit experiment is just interference of two coherent sources, where we get the two sources from a single light beam using the two slits. But gravitational waves interact so weakly with matter that it's hard to see how we could screen a gravitational wave to get two coherent GW sources.
But if we could figure out a way to do it then yes GWs would interfere just like light wave.
Thank you @Secret and @JohnRennie . But for conclude the discussion, I want to put a "silly picture" here: Imagine a huge double slit plate in space close to a strong source of gravitational waves. Then like water waves, and light, we will see the pattern?
So, if the source (like a black hole binary) is sufficiently far away, then in the regions of destructive interference spacetime would have a flat geometry, and then if we put a spherical object in this region the metric will become Schwarzschild-like.
Pardon, I just spent some naive-philosophy time here with these discussions
The situation was even more dire for Calculus and I managed!
This is a neat strategy I have found-revision becomes more bearable when I have The h Bar open on the side.
In all honesty, I actually prefer exam season! At all other times-as I have observed in this semester, at least-there is nothing exciting to do. This system of tortuous panic, followed by a reward is obviously very satisfying.
My opinion is that I need you, Kaumudi, to decrease the probability of the h bar having software-infrastructure conversations, which confuse me like hell and are why I took refuge in the maths chat a few weeks ago
(Not that I have questions to ask or anything; like I said, it is a little relieving to be with friends while I am panicked. I think it is possible to gauge how much of a social recluse I am from this, because I spend some of my free time hanging out with you lot, even though I am literally inside a hostel teeming with hundreds of my peers)
that's true. Though back in high school, regardless of the language, our teacher taught us to always indent our code to allow easy reading and troubleshooting. We were also taught the 4-space indentation convention
@JohnRennie I wish I can just tab because I am also lazy, but sometimes tab insert 4 spaces while other times it inserts 5-6 spaces, thus screwing up a block of if then conditions in my code, which is why I had no choice
I currently automate almost everything from job submission to data extraction, and later on, with the help of the machine learning group in my uni, we might be able to automate a GUI library search thingy
I can do all tasks related to my work without leaving the text editor (of course, such text editor is emacs). The only inconvenience is that some websites don't render in a optimal way (but most of the work-related ones do)
Hi to all. Does anyone know where I could write MATLAB code online (for free)? Apparently another one of my institution's great inspirations is to have a MATLAB-oriented computational physics course without having MATLAB on the university's PCs. Thanks.
@Kaumudi.H Hacky way: 1st thing is that $\psi\left(x, y, z, t\right) = \psi\left(x, y, t\right)$, so no propagation in $z$-direction. Now, in '$1$ unit' of time, it travels $\frac{\sqrt{3}}{2}$ units in the $y$-direction and $\frac{1}{2}$ units in the $x$-direction. Use this to form a triangle and you'll get the answer with simple trig :)
@Kaumudi.H Ah, it was okayish. It was mostly memory based. Each small question was of 10-15 marks. No idea what they expect me to write for questions like "Describe acoustic and optic phonons" for 15 marks!! I only wrote two small paragraphs...meh. I don't like this subject much :P (physical electronics). Hope to do better in the upcoming tests so that there isn't a huge effect on the gpa.
@Blue Ok, thanks. I found a way by connecting to the servers of the university (the program isn't installed on the PCs in the computer room, but by connecting to the university's server, i.e. running another environment remotely, I found an older version of MATLAB). But thanks again.
@user685252 No; I am saying that it has no bearing on how good you actually are at the subject - it has no bearing on how good you are at applying knowledge; it doesn't test problem solving skills; it doesn't take into account that, if I'm sitting in the office having forgotten the difference between different types of matrix decomposition or something, I can just search the internet (or a textbook), so it doesn't say how good someone is at research in that subject;
it doesn't test how good you are at deriving anything - someone can write down a definition without any understanding, while someone who can derive it, but has forgotten it probably won't have time in an exam situation. In short, testing memory is not the same as testing understanding
If you really want to test someone's understanding, give them a few problems in that area that they've never seen before and give them a reasonable amount of time to do it, with access to textbooks etc.
|
Learning Objectives
Calculate labor force percentages and the unemployment rate
Calculating the Unemployment Rate
Remember that the unemployed are those who are out of work and who are actively looking for a job. We can calculate the unemployment rate by dividing the number of unemployed people by the total number in the labor force, then multiplying by 100.
Figure 1 shows the three-way division of the over-16 adult population. In 2016, 62.8% of the adult population was in the labor force; that is, either employed or without a job but looking for work. Those in the labor force can be divided into the employed and the unemployed. These values are also shown in Table 1. The unemployment rate is calculated as follows:
[latex]\text{Unemployment rate}=\frac{\text{Unemployed people}}{\text{Total labor force}}\times{100}[/latex]
Table 1. U.S. Employment and Unemployment, 2016
Total adult population over the age of 16: 253.5 million
In the labor force: 159.1 million (62.8%)
  Employed: 151.4 million
  Unemployed: 7.7 million
Out of the labor force: 94.4 million (37.2%)
Source: www.bls.gov
Based on the data in Table 1, what’s the unemployment rate in 2016? In this example, the unemployment rate can be calculated as 7.7 million unemployed people divided by 159.1 million people in the labor force, which works out to a 4.8% rate of unemployment. Read on to walk through the steps of calculating this percentage.
Calculating Labor Force Percentages
So how do economists arrive at the percentages in and out of the labor force and the unemployment rate? We will use the values in Table 1 to illustrate the steps.
To determine the percentage in the labor force: Step 1. Divide the number of people in the labor force (159.1 million) by the total adult (working-age) population (253.5 million). Step 2. Multiply by 100 to obtain the percentage.
[latex]\begin{array}{l}\text{Percentage in the labor force}=\frac{159.1}{253.5}\\\text{Percentage in the labor force}=0.628\\\text{Percentage in the labor force}=62.8\text{ percent}\end{array}[/latex]
To determine the percentage out of the labor force: Step 1. Divide the number of people out the labor force (94.4 million) by the total adult (working-age) population (253.5 million). Step 2. Multiply by 100 to obtain the percentage.
[latex]\begin{array}{l}\text{Percentage out of the labor force}=\frac{94.4}{253.5}\\\text{Percentage out of the labor force}=0.372\\\text{Percentage out of the labor force}=37.2\text{ percent}\end{array}[/latex]
To determine the unemployment rate: Step 1. Divide the number of unemployed people (7.7 million) by the total labor force (159.1 million). Step 2. Multiply by 100 to obtain the rate.
[latex]\begin{array}{l}\text{Unemployment rate}=\frac{7.7}{159.1}\\\text{Unemployment rate}=0.0484\\\text{Unemployment rate}=4.8\text{ percent}\end{array}[/latex]
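The three calculations above can be reproduced in a few lines of Python. The variable names are ours; the figures come from Table 1.

```python
# Figures from Table 1 (2016, in millions)
adult_population = 253.5
labor_force = 159.1
unemployed = 7.7
out_of_labor_force = 94.4

participation_rate = labor_force / adult_population * 100        # 62.8%
out_of_force_rate = out_of_labor_force / adult_population * 100  # 37.2%
unemployment_rate = unemployed / labor_force * 100               # 4.8%

print(round(participation_rate, 1), round(out_of_force_rate, 1),
      round(unemployment_rate, 1))
```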
Try It
Hidden Unemployment
Even with the “out of the labor force” category, there are still some people who are mislabeled in the categorization of employed, unemployed, or out of the labor force. Some people who have only part-time or temporary jobs and who are looking for full-time, permanent employment are counted as employed, though they are not employed in the way they would like or need to be. Additionally, there are individuals who are underemployed. This includes those who are trained or skilled for one type or level of work but are working in a lower-paying job or one that does not utilize their skills. For example, an individual with a college degree in finance who is working as a sales clerk would be considered underemployed. They are, however, also counted in the employed group. All of these individuals fall under the umbrella of the term “hidden unemployment.” Discouraged workers, those who have stopped looking for employment and, hence, are no longer counted among the unemployed, also fall into this group.
Labor Force Participation Rate
Another important statistic is the labor force participation rate. This is the percentage of adults in an economy who are either employed or who are unemployed and looking for a job. So, using the data in Figure 1 and Table 1, those included in this calculation would be the 159.1 million individuals in the labor force. The rate is calculated by taking the number in the labor force, divided by the total adult population and multiplying by 100 to get the percentage. For the data from 2016, the labor force participation rate is 62.8%. In the United States the labor force participation rate is usually around 66-68%, though it has declined over the last decade. Reporting Employment and Unemployment The Establishment Payroll Survey
When the unemployment report comes out each month, the Bureau of Labor Statistics (BLS) also reports on the number of jobs created—which comes from the establishment payroll survey (EPS). The payroll survey is based on a survey of about 140,000 businesses and government agencies throughout the United States. It generates payroll employment estimates by the following criteria: all employees, average weekly hours worked, and average hourly, weekly, and overtime earnings. One of the criticisms of this survey is that it does not count the self-employed. It also does not make a distinction between new, minimum wage, part time or temporary jobs and full time jobs with “decent” pay.
How Is the U.S. Unemployment Data Collected?
The unemployment rate announced by the U.S. Bureau of Labor Statistics each month is based on the Current Population Survey (CPS), which has been carried out every month since 1940 by the U.S. Bureau of the Census. Great care is taken to make this survey representative of the country as a whole. The country is first divided into 3,137 areas. Then 729 of these areas are chosen to be surveyed. The 729 areas are then divided into districts of about 300 households each, and each district is divided into clusters of about four dwelling units. Every month, Census Bureau employees call about 15,000 of the four-household clusters, for a total of 60,000 households. Households are interviewed for four consecutive months, then rotated out of the survey for eight months, and then interviewed again for the same four months the following year, before leaving the sample permanently.
Based on this survey, unemployment rates are calculated by state, industry, urban and rural areas, gender, age, race or ethnicity, and level of education. A wide variety of other information is available, too. For example, how long have people been unemployed? Did they become unemployed because they quit, or were laid off, or their employer went out of business? Is the unemployed person the only wage earner in the family?
THE CPS and EPS
While the Current Population Survey (CPS) and the Establishment Payroll Survey (EPS) both provide reports about jobs, the CPS measures the percentage of the labor force that is unemployed, while the EPS measures the net change in jobs created for the month.
Criticisms of Measuring Unemployment
There are always complications in measuring the number of unemployed. For example, what about people who do not have jobs and would be available to work, but have gotten discouraged at the lack of available jobs in their area and stopped looking? Such people, and their families, may be suffering the pains of unemployment. But the survey counts them as out of the labor force because they are not actively looking for work. Other people may tell the Census Bureau that they are ready to work and looking for a job but, truly, they are not that eager to work and are not looking very hard at all. They are counted as unemployed, although they might more accurately be classified as out of the labor force. Still other people may have a job, perhaps doing something like yard work, child care, or cleaning houses, but are not reporting the income earned to the tax authorities. They may report being unemployed, when they actually are working.
Although the unemployment rate gets most of the public and media attention, economic researchers at the Bureau of Labor Statistics publish a wide array of surveys and reports that try to measure these kinds of issues and to develop a more nuanced and complete view of the labor market. It is not exactly a hot news flash that economic statistics are imperfect. Even imperfect measures like the unemployment rate, however, can still be quite informative, when interpreted knowledgeably and sensibly.
Glossary discouraged workers: those who have stopped looking for employment due to the lack of suitable positions available labor force participation rate: this is the percentage of adults in an economy who are either employed or who are unemployed and looking for a job underemployed: individuals who are employed in a job that is below their skills unemployment rate: the percentage of adults who are in the labor force and thus seeking jobs, but who do not have jobs
|
2007 Senior Project Archive Section Navigation
Titles are hyperlinked to pdf copies of the final project write-up. Course coordinator: Barry Balof
TITLE: Baire One Functions
AUTHOR: Johnny Hu ABSTRACT: This paper gives a general overview of Baire one functions, including examples as well as several interesting properties involving bounds, uniform convergence, continuity, and $F_\sigma$ sets. We conclude with a result on a characterization of Baire one functions in terms of the notion of first return recoverability, which is a topic of current research in analysis. ADVISOR: Bob Fontenot
TITLE: Upper bounds on the $L(2;1)$-labeling Number of Graphs with Maximum Degree $\Delta$
AUTHOR: Andrew Lum ABSTRACT: $L(2;1)$-labeling was first defined by Jerrold Griggs [Gr, 1992] as a way to use graphs to model the channel assignment problem proposed by Fred Roberts [Ro, 1988]. An $L(2;1)$-labeling of a simple graph $G$ is a nonnegative integer-valued function $f : V(G)\rightarrow \{0,1,2,\ldots\}$ such that, whenever $x$ and $y$ are two adjacent vertices in $V(G)$, then $|f(x)-f(y)|\geq 2$, and, whenever the distance between $x$ and $y$ is 2, then $|f(x)-f(y)|\geq 1$. The $L(2;1)$-labeling number of $G$, denoted $\lambda(G)$, is the smallest number $m$ such that $G$ has an $L(2;1)$-labeling with no label greater than $m$. Much work has been done to bound $\lambda(G)$ with respect to the maximum degree $\Delta$ of $G$ ([Cha, 1996], [Go, 2004], [Gr, 1992], [Kr, 2003], [Jo, 1993]). Griggs and Yeh [Gr, 1992] conjectured that $\lambda \leq \Delta^2$ when $\Delta \geq 2$.
In §1, we review the basics of graph theory. This section is intended for those with little or no background in graph theory and may be skipped as needed. In §2, we introduce the notion of $L(2;1)$-labeling. In §3, we give the labeling numbers for special classes of graphs. In §4, we use the greedy labeling algorithm to establish an upper bound for $\lambda$ in terms of $\Delta$. In §5, we use the Chang-Kuo algorithm to improve our bound. In §6, we prove the best known bound for general graphs.
ADVISOR: David Guichard
TITLE: Bijections on Riordan Objects (voted outstanding senior project)
AUTHOR: Jacob Menashe ABSTRACT: The Riordan Numbers are an integer sequence closely related to the well-known Catalan Numbers [2]. They count many mathematical objects and concepts. Among these objects are the Riordan Paths, Catalan Partitions, Interesting Semiorders, Specialized Dyck Paths, and Riordan Trees. That these objects have been shown combinatorially to be counted by the same sequence implies that a bijection exists between each pair. In this paper we introduce algorithmic bijections between each object and the Riordan Paths. Through function composition, we thus construct 10 explicit bijections: one for each pair of objects. ADVISOR: Barry Balof
TITLE: The Problem of Redistricting: the Use of Centroidal Voronoi Diagrams to Build Unbiased Congressional Districts
AUTHOR: Stacy Miller ABSTRACT: This paper is a development of the use of MacQueen’s method to draw centroidal Voronoi diagrams as a part of the redistricting process. We will use Washington State as an example of this method. Since centroidal Voronoi diagrams are inherently compact and can be created by an unbiased process, they could create congressional districts that are not only free from political gerrymandering but also appear to the general public as such. ADVISOR: Albert Schueller
TITLE: Signal Analysis
AUTHOR: David Ozog ABSTRACT: Signal processing is the analysis, interpretation, and manipulation of any time-varying quantity [1]. Signals of interest include sound files, images, radar, and biological signals. Potentials for application in this area are vast, and they include compression, noise reduction, signal classification, and detection of obscure patterns. Perhaps the most popular tool for signal processing is Fourier analysis, which decomposes a function into a sum of sinusoidal basis functions. For signals whose frequencies change in time, Fourier analysis has disadvantages which can be overcome by using a windowing process called the Short Term Fourier Transform. The windowing process can be improved further using wavelet analysis. This paper will describe each of these processes in detail, and will apply a wavelet analysis to Pasco weather data. This application will attempt to localize temperature fluctuations and how they have changed since 1970. ADVISOR: Doug Hundley
All of the documents available here are rendered in the Portable Document Format (PDF). A free PDF viewer is available from Adobe.com.
schuelaw@whitman.edu Last updated: Thu May 24 13:49:47 PDT 2007
|
A divide and conquer algorithm's work at a specific level can be simplified into the equation:
$\qquad \displaystyle O\left(n^d\right) \cdot \left(\frac{a}{b^d}\right)^k$
where $n$ is the size of the problem, $a$ is the number of subproblems per call, $b$ is the factor by which the problem size shrinks at each recursion, $k$ is the level, and $d$ is the exponent of the per-call work $O(n^d)$ (linear, quadratic, etc.).
The book claims that if the ratio $a/b^d$ is greater than one, the total work is dominated by the last term (the last level), but if it is less than one, the total is dominated by the first term (the first level). Could someone explain why this is true?
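Not an answer from the book, but the claim is easy to check numerically: the per-level work forms a geometric series with ratio $a/b^d$, and a geometric series is dominated, up to a constant factor, by its largest term: the last term when the ratio exceeds one, the first when it is below one. A small Python sketch (the function names are mine; constant factors are dropped):

```python
import math

def level_work(n, a, b, d, k):
    """Work at recursion level k: n^d * (a / b^d)^k, constants dropped."""
    return n ** d * (a / b ** d) ** k

def total_work(n, a, b, d):
    """Sum the per-level work over all log_b(n) + 1 levels."""
    levels = round(math.log(n, b)) + 1
    return sum(level_work(n, a, b, d, k) for k in range(levels))

# Ratio < 1 (a=2, b=2, d=2): the sum stays within 2x the first term, n^d.
# Ratio > 1 (a=4, b=2, d=1): the sum stays within 2x the last term,
# which works out to n^(log_b a).
```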
|
I'm very confused about this bound, please give me any suggestions on how to prove it. (Note: $a \ll b$ is just a neater way to write $a = O(b)$)
I am starting with the bound $$f(n) \ll \frac{n}{\log(n)^2}\prod_{p|n}\left(1-\frac{1}{p}\right)^{-1}$$
then I don't see where $\phi$ comes from in $$\frac{n}{\log(n)^2}\prod_{p|n}\left(1-\frac{1}{p}\right)^{-1} \ll \frac{\phi(n)}{\log(n)^2}$$ I know that $\phi(n) = n \prod_{p|n}\left(1-\frac{1}{p}\right)$ but I'm confused because of the $-1$ power.
Then $$\sum_{n\le x} f(n)^2 \ll \sum_{n\le x}\frac{\phi(n)^2}{\log(n)^2}$$ but I don't understand why it's not $\log(n)^4$ in the denominator (though I realize replacing it with a lower power is permitted, since that only weakens the bound).
And finally $$\sum_{n\le x}\frac{\phi(n)^2}{\log(n)^2} \ll \frac{x^3}{\log(x)^4}$$ and I have no idea how to get that last bound at all. I tried Abel summation, which didn't help, and I tried using $\frac{\varphi(n)\sigma(n)}{n^2} < 1$. I've searched a lot of lecture notes and looked in Apostol, and I don't see how to deduce it. One idea I had was that maybe it was a typo for $\log(x)^4$ in the denominator and they pulled that out, but that's not permitted since $n \le x$.
Thanks for any help.
|
Here is another geometric application of the integral: find the length of a portion of a curve. As usual, we need to think about how we might approximate the length, and turn the approximation into an integral.
We already know how to compute one simple arc length, that of a line segment. If the endpoints are $\ds P_0(x_0,y_0)$ and $\ds P_1(x_1,y_1)$ then the length of the segment is the distance between the points, $\ds \sqrt{(x_1-x_0)^2+(y_1-y_0)^2}$, from the Pythagorean theorem, as illustrated in figure 11.4.1.
Now if the graph of $f$ is “nice” (say, differentiable) it appears that we can approximate the length of a portion of the curve with line segments, and that as the number of segments increases, and their lengths decrease, the sum of the lengths of the line segments will approach the true arc length; see figure 11.4.2.
Now we need to write a formula for the sum of the lengths of the line segments, in a form that we know becomes an integral in the limit. So we suppose we have divided the interval $[a,b]$ into $n$ subintervals as usual, each with length $\Delta x =(b-a)/n$, and endpoints $\ds a=x_0$, $\ds x_1$, $\ds x_2$, …, $\ds x_n=b$. The length of a typical line segment, joining $\ds (x_i,f(x_i))$ to $\ds (x_{i+1},f(x_{i+1}))$, is $\ds\sqrt{(\Delta x )^2 +(f(x_{i+1})-f(x_i))^2}$. By the Mean Value Theorem (6.5.2), there is a number $\ds t_i$ in $\ds (x_i,x_{i+1})$ such that $\ds f'(t_i)\Delta x=f(x_{i+1})-f(x_i)$, so the length of the line segment can be written as $$ \sqrt{(\Delta x)^2 + (f'(t_i))^2\Delta x^2}= \sqrt{1+(f'(t_i))^2}\,\Delta x. $$ The arc length is then $$ \lim_{n\to\infty}\sum_{i=0}^{n-1} \sqrt{1+(f'(t_i))^2}\,\Delta x= \int_a^b \sqrt{1+(f'(x))^2}\,dx. $$ Note that the sum looks a bit different than others we have encountered, because the approximation contains a $\ds t_i$ instead of an $\ds x_i$. In the past we have always used left endpoints (namely, $\ds x_i$) to get a representative value of $f$ on $\ds [x_i,x_{i+1}]$; now we are using a different point, but the principle is the same.
To summarize, to compute the length of a curve on the interval $[a,b]$, we compute the integral $$\int_a^b \sqrt{1+(f'(x))^2}\,dx.$$ Unfortunately, integrals of this form are typically difficult or impossible to compute exactly, because usually none of our methods for finding antiderivatives will work. In practice this means that the integral will usually have to be approximated.
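To make the "approximate it" remark concrete, here is a small numerical sketch (not part of the text) that sums chord lengths exactly as in the derivation above; the function name and step count are my own choices.

```python
import math

def arc_length(f, a, b, n=100000):
    """Approximate the arc length of y = f(x) on [a, b] by summing
    the lengths of n straight chords, as in the derivation above."""
    dx = (b - a) / n
    total = 0.0
    for i in range(n):
        x = a + i * dx
        total += math.hypot(dx, f(x + dx) - f(x))
    return total

# A straight line y = 2x on [0, 1] has exact length sqrt(5).
print(arc_length(lambda x: 2 * x, 0, 1))  # 2.2360679...
```

For Ex 11.4.7, for instance, `arc_length(math.sin, 0, math.pi)` gives about 3.8202.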
Example 11.4.1 Let $\ds f(x) = \sqrt{r^2-x^2}$, the upper half circle of radius $r$. The length of this curve is half the circumference, namely $\pi r$. Let's compute this with the arc length formula. The derivative $f'$ is $\ds -x/\sqrt{r^2-x^2}$ so the integral is $$ \int_{-r}^r \sqrt{1+{x^2\over r^2-x^2}}\,dx =\int_{-r}^r \sqrt{r^2\over r^2-x^2}\,dx =r\int_{-r}^r \sqrt{1\over r^2-x^2}\,dx. $$ Using a trigonometric substitution, we find the antiderivative, namely $\ds \arcsin(x/r)$. Notice that the integral is improper at both endpoints, as the function $\ds \sqrt{1/(r^2-x^2)}$ is undefined when $x=\pm r$. So we need to compute $$ \lim_{D\to-r^+}\int_D^0 \sqrt{1\over r^2-x^2}\,dx + \lim_{D\to r^-}\int_0^D \sqrt{1\over r^2-x^2}\,dx. $$ This is not difficult, and has value $\pi$, so the original integral, with the extra $r$ in front, has value $\pi r$ as expected.
Exercises 11.4
Ex 11.4.1 Find the arc length of $\ds f(x)=x^{3/2}$ on $[0,2]$. (answer)
Ex 11.4.2 Find the arc length of $\ds f(x) = x^2/8-\ln x$ on $[1,2]$. (answer)
Ex 11.4.3 Find the arc length of $\ds f(x) = (1/3)(x^2 +2)^{3/2}$ on the interval $[0,a]$. (answer)
Ex 11.4.4 Find the arc length of $f(x)=\ln(\sin x)$ on the interval $[\pi/4,\pi/3]$. (answer)
Ex 11.4.5 Let $a>0$. Show that the length of $y=\cosh x$ on $[0,a]$ is equal to $\ds \int _0 ^a \cosh x\,dx$.
Ex 11.4.6 Find the arc length of $f(x)=\cosh x$ on $[0, \ln 2]$. (answer)
Ex 11.4.7 Set up the integral to find the arc length of $\sin x$ on the interval $[0,\pi]$; do not evaluate the integral. If you have access to appropriate software, approximate the value of the integral. (answer)
Ex 11.4.8 Set up the integral to find the arc length of $\ds y=xe^{-x}$ on the interval $[2,3]$; do not evaluate the integral. If you have access to appropriate software, approximate the value of the integral. (answer)
Ex 11.4.9 Find the arc length of $\ds y=e^x$ on the interval $[0,1]$. (This can be done exactly; it is a bit tricky and a bit long.) (answer)
|
I'm using tex4ht to convert heavily maths-loaded LaTeX files into HTML so I can serve them in a web app. I can successfully convert all equations to MathML and jsMath, but since the equations are not web-optimized, some of them are rendered wrongly or don't get rendered at all as they are in the original PDFs. So I decided to keep the maths as LaTeX, as I think MathJax can handle LaTeX equations better than jsMath or MathML.
I can manage to leave inline maths as they are but struggling with aligned standalone maths.
For example:
After a time $t$, the ground state $\ket{g}$ and the excited state $\ket{e}$ will each have accumulated a phase that is proportional to their energies:\begin{align}\ket{\psi(0)} \to \ket{\psi(t)} = \frac{e^{-iE_1 t/\hbar}}{\sqrt{2}} \ket{g} + \frac{e^{-iE_2 t/\hbar}}{\sqrt{2}} \ket{e}\, .\end{align}We can take out the factor $e^{-iE_1 t/\hbar}$ as a global unobservable phase, and obtain\begin{align}\label{eq:atomequator}\ket{\psi(t)} = \frac{1}{\sqrt{2}} \ket{g} + \frac{e^{-i(E_2 - E_1)t/\hbar}}{\sqrt{2}} \ket{e}\, .\end{align}
So I can get tex4ht not to convert $\ket{g}$ in the first line into an image, but I am having issues with maths within {align}.
I'm using michal-h21's .cfg file described in this answer: https://tex.stackexchange.com/a/165119/52068
Any help would be greatly appreciated.
PS: I'm not a LaTeX expert and did not write the documents myself, so I need to find a way to work with the LaTeX I have without modification, or with a minimal amount of modification.
|
I don’t understand why the following question
The decomposition of nitrosyl bromide $(\ce{NOBr})$ proceeds by the following reaction:
$$\ce{2 NOBr(g) <=> 2 NO(g) + Br2(g)} \qquad K = 0.0142$$
Calculate the $[\ce{NOBr}],$ $[\ce{NO}],$ and $[\ce{Br2}]$ when $\pu{10.0 mol}$ of nitrosyl bromide is placed in a $\pu{5.00 L}$ closed vessel and allowed to decompose.
has the answers that it does:
$$ \begin{align} [\ce{NOBr}] &= \pu{1.585 M} \\ [\ce{NO}] &= \pu{0.415 M} \\ [\ce{Br2}] &= \pu{0.207 M} \end{align} $$
From what I understand
$$K = \frac{\prod a_\mathrm{products}}{\prod a_\mathrm{reactants}},$$
where $a$ is the activity of the products/reactants. The activity of a gas is approximately equal to the partial pressure of the gas divided by a reference pressure (usually 1 atm). This yields
unitless quantities with the magnitude of the partial pressures of each gas. Thus
$$K = \frac{P_\ce{Br2}\cdot P_\ce{NO}^2}{P_\ce{NOBr}^2}$$
Converting from a partial pressure to a concentration can be done using the ideal gas equation
$$\frac{P}{RT} = \frac{n}{V} = M,$$
where $M$ is concentration. Thus,
$$K = \frac{[\ce{Br2}][\ce{NO}]^2\cdot RT}{[\ce{NOBr}]^2}$$
However, plugging the concentrations given into the solution of this problem doesn’t yield the given value of $K.$
Am I wrong in the way I am approaching this problem, or is the problem itself wrong? If the equilibrium constant was denoted $K_c$ instead of $K,$ the given solution would be correct, but I don’t think it’s correct to assume that $K_c$ is the same as $K.$
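For what it's worth, a quick numeric check (my own, assuming the book's $K$ is really the concentration-based $K_c$) confirms that the quoted answers satisfy both the mole balance and $K_c = 0.0142$:

```python
NOBr, NO, Br2 = 1.585, 0.415, 0.207   # M, the quoted answers

# Mole balance: start at 10.0 mol / 5.00 L = 2.00 M NOBr.
assert abs((2.00 - NOBr) - NO) < 1e-9   # NOBr consumed = NO formed
assert abs(Br2 - NO / 2) < 1e-2         # Br2 formed = half of NO (rounding)

Kc = (Br2 * NO**2) / NOBr**2
print(Kc)  # about 0.0142
```

Multiplying by $RT$ at any plausible temperature changes the value substantially, which supports the suspicion that the book's $K$ is $K_c$.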
|
I have two scenarios where I use arrows with a super-scripted asterisk: math mode and
tikz-cd diagrams. I would like to be able to show such an arrow in both scenarios such that the arrows look the same, i.e., with respect to positioning of the asterisk.
Consider the below MWE. This is how I would like the arrow to look.
MWE
\documentclass{article}
\usepackage[dvipsnames]{xcolor}
\usepackage{tikz-cd}
\newcommand*\dirinfsymname{Rightarrow}
\newcommand*\directdatacolourname{PineGreen}
\newcommand*\directdatacolour{\textcolor{\directdatacolourname}}
\newcommand*\dirinfsym{\mathbin{\directdatacolour{\Rightarrow}}}
\newcommand{\pathdirinfsym}[1][]{\mathrel{
  \vphantom{\dirinfsym{#1}}
  \smash{\dirinfsym{#1}}
  \vphantom{\to}^{\textcolor{PineGreen}{*}}}}
\begin{document}
$a \pathdirinfsym b$
\end{document}
This outputs:
Now, consider this MWE.
MWE
\documentclass{article}
\usepackage[dvipsnames]{xcolor}
\usepackage{tikz,tikz-cd}
\usetikzlibrary{shapes,fit}
\usetikzlibrary{positioning}
\usetikzlibrary{decorations.pathmorphing}
\newcommand*\dirinfsymname{Rightarrow}
\newcommand*\directdatacolourname{PineGreen}
\newcommand*\directdatacolour{\textcolor{\directdatacolourname}}
\newcommand*\dirinfsym{\mathbin{\directdatacolour{\Rightarrow}}}
\begin{document}
\begin{tikzcd}[
    column sep=small,
    cells={nodes={draw=black, ellipse, anchor=center, minimum height=2em}}]
  a \arrow[\dirinfsymname, \directdatacolourname, bend left]{rrrrr}{*}
  & a \arrow[\dirinfsymname, \directdatacolourname]{r}{*}
  & a
  & |[draw=none]|a\vphantom{1}
  & a
  & a
\end{tikzcd}
\end{document}
This outputs:
Notice how the two arrows have the asterisk positioned in the middle of the stem. I would like the asterisk positioned in the same position as in the first diagram.
Furthermore, I would like this to work for more than just \Rightarrow; I would like to be able to do the same for \rightarrow.
|
In some reactions, the rate is apparently independent of the reactant concentration. The rates of these zero-order reactions do not vary with increasing or decreasing reactant concentrations. This means that the rate of the reaction is equal to the rate constant, \(k\), of that reaction. This property differs from both first-order reactions and second-order reactions.
Origin of Zero Order Kinetics
Zero-order kinetics is always an artifact of the conditions under which the reaction is carried out. For this reason, reactions that follow zero-order kinetics are often referred to as pseudo-zero-order reactions. Clearly, a zero-order process cannot continue after a reactant has been exhausted. Just before this point is reached, the reaction will revert to another rate law instead of falling directly to zero.
There are two general conditions that can give rise to zero-order rates:
1. Only a small fraction of the reactant molecules are in a location or state in which they are able to react, and this fraction is continually replenished from the larger pool.
2. When two or more reactants are involved, the concentrations of some are much greater than those of others.
The first situation commonly occurs when a reaction is catalyzed by attachment to a solid surface (heterogeneous catalysis) or to an enzyme.
Example 1: Decomposition of Nitrous Oxide
Nitrous oxide will decompose exothermically into nitrogen and oxygen, at a temperature of approximately 575 °C
\[\ce{2N_2O ->[\Delta, \,Ni] 2N_2(g) + O_2(g)}\]
In the presence of a hot platinum wire (which acts as a catalyst), this reaction is zero-order, but it follows more conventional second-order kinetics when carried out entirely in the gas phase.
\[\ce{2N_2O -> 2N_2(g) + O_2(g)}\]
In this case, the \(N_2O\) molecules that react are limited to those that have attached themselves to the surface of the solid catalyst. Once all of the sites on the limited surface of the catalyst have been occupied, additional gas-phase molecules must wait until the decomposition of one of the adsorbed molecules frees up a surface site.
Enzyme-catalyzed reactions in organisms begin with the attachment of the substrate to the active site on the enzyme, leading to the formation of an
enzyme-substrate complex. If the number of enzyme molecules is limited in relation to substrate molecules, then the reaction may appear to be zero-order.
This is most often seen when two or more reactants are involved. Thus if the reaction
\[ A + B \rightarrow \text{products} \tag{1}\]
is first-order in both reactants so that
\[\text{rate} = k [A][B] \tag{2}\]
If \(B\) is present in great excess, then the reaction will appear to be zero order in \(B\) (and first order overall). This commonly happens when \(B\) is also the solvent that the reaction occurs in.
Differential Form of the Zeroth Order Rate Law
\[Rate = - \dfrac{d[A]}{dt} = k[A]^0 = k = constant \tag{3}\]
where \(Rate\) is the reaction rate and \(k\) is the reaction rate coefficient. For zero-order reactions, the units of the rate constant are always M/s; in higher-order reactions, \(k\) will have different units.
Integrated Form of the Zeroth Order Rate Law
Integration of the differential rate law yields the concentration as a function of time. Start with the general rate law equations
\[Rate = k[A]^n \tag{4}\]
First, write the differential form of the rate law with \(n=0\)
\[Rate = - \dfrac{d[A]}{dt} = k[A]^0 = k \tag{5}\]
then rearrange
\[{d}[A] = -kdt \tag{6}\]
Second, integrate both sides of the equation.
\[\int_{[A]_{0}}^{[A]} d[A] = - \int_{0}^{t} kdt \tag{7}\]
Third, solve for \([A]\). This provides the integrated form of the rate law.
\[[A] = [A]_0 -kt \tag{8}\]
The integrated form of the rate law allows us to find the population of reactant at any time after the start of the reaction.
Graphing Zero-order Reactions
\[[A] = -kt + [A]_0 \tag{9}\]
is in the form \(y = mx + b\), where the slope is \(m = -k\) and the \(y\)-intercept is \(b = [A]_0\).
Zero-order reactions are
only applicable for a very narrow region of time. Therefore, the linear graph shown below (Figure 2) is only realistic over a limited time range. If we were to extrapolate the line of this graph downward to represent all values of time for a given reaction, it would tell us that as time progresses, the concentration of our reactant becomes negative. We know that concentrations can never be negative, which is why zero-order reaction kinetics is applicable for describing a reaction for only brief window and must eventually transition into kinetics of a different order. Figure 2: (left) Concentration vs. time of a zero-order reaction. (Right) Concentration vs. time of a zero-order catalyzed reaction.
To understand where the above graph comes from, let us consider a catalyzed reaction. At the beginning of the reaction, and for small values of time, the rate of the reaction is constant; this is indicated by the blue line in Figure 2 (right). This situation typically happens when a catalyst is saturated with reactants. With respect to Michaelis-Menten kinetics, this point of catalyst saturation is related to the \(V_{max}\). As a reaction progresses through time, however, it is possible that less and less substrate will bind to the catalyst. As this occurs, the reaction slows and we see a tailing off of the graph (Figure 2; right). This portion of the reaction is represented by the dashed black line. In looking at this particular reaction, we can see that reactions are not zero-order under all conditions. They are only zero-order for a limited amount of time.
If we plot rate as a function of time, we obtain the graph below (Figure 3). Again, this only describes a narrow region of time. The slope of the graph is equal to k, the rate constant. Therefore, k is constant with time. In addition, we can see that the reaction rate is completely independent of how much reactant you put in.
Figure 3: Rate vs. time of a zero-order reaction.
Relationship Between Half-life and Zero-order Reactions
The half-life, \(t_{1/2}\), is a timescale in which each half-life represents the reduction of the initial population to 50% of its original state. We can represent the relationship by the following equation.
\[[A] = \dfrac{1}{2} [A]_o \tag{10}\]
Using the integrated form of the rate law, we can develop a relationship between zero-order reactions and the half-life.
\[[A] = [A]_o - kt \tag{11}\]
Substitute
\[\dfrac{1}{2}[A]_o = [A]_o - kt_{1/2} \tag{12}\]
Solve for \(t_{1/2}\)
\[t_{1/2} = \dfrac{[A]_o}{2k} \tag{13}\]
Notice that, for zero-order reactions, the half-life depends on the initial concentration of reactant and the rate constant.
Questions
1. Using the integrated form of the rate law, determine the rate constant \(k\) of a zero-order reaction if the initial concentration of substance A is 1.5 M and after 120 seconds the concentration of substance A is 0.75 M.
2. Using the substance from the previous problem, what is the half-life of substance A if its original concentration is 1.2 M?
3. If the original concentration is reduced to 1.0 M in the previous problem, does the half-life decrease, increase, or stay the same? If the half-life changes, what is the new half-life?
4. Given are the rate constants \(k\) of three different reactions:
Reaction A: \(k = 2.3\ \mathrm{M^{-1}\,s^{-1}}\)
Reaction B: \(k = 1.8\ \mathrm{M\,s^{-1}}\)
Reaction C: \(k = 0.75\ \mathrm{s^{-1}}\)
Which reaction represents a zero-order reaction?
5. True/False: If the rate of a zero-order reaction is plotted as a function of time, the graph is a straight line where \(rate = k\).
Answers
1. The rate constant \(k\) is 0.00625 M/s.
2. The half-life is 96 seconds.
3. Since this is a zero-order reaction, the half-life is dependent on the concentration. In this instance, the half-life is decreased when the original concentration is reduced to 1.0 M. The new half-life is 80 seconds.
4. Reaction B represents a zero-order reaction because its units are M/s. Zero-order reactions always have rate constants expressed in molars per unit of time; higher-order reactions require different units for the rate constant.
5. True. Using the rate law \(rate = k[A]^n\) with \(n\) equal to zero, the rate is equal to the rate constant \(k\).
Summary
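The first three answers can be checked numerically, using Eq. 8 for the rate constant and Eq. 13 for the half-lives (variable names are my own):

```python
# [A] = [A]0 - k t  (Eq. 8)  =>  k = ([A]0 - [A]) / t
k = (1.5 - 0.75) / 120       # M/s  -> 0.00625

# t_half = [A]0 / (2 k)      (Eq. 13)
t_half_12 = 1.2 / (2 * k)    # s, for [A]0 = 1.2 M  -> 96.0
t_half_10 = 1.0 / (2 * k)    # s, for [A]0 = 1.0 M  -> 80.0
print(k, t_half_12, t_half_10)
```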
The kinetics of any reaction depend on the reaction mechanism, or rate law, and the initial conditions. If we assume for the reaction A → Products that there is an initial concentration of reactant of \([A]_0\) at time \(t = 0\), and the rate law is an integral order in A, then we can summarize the kinetics of the zero-order reaction as follows:
Contributors
Rachael Curtis, Jessica Martin, David Cao
|
You have to distinguish between the distance the man swims, relative to the water around him, and the total distance the man travels, relative to an observer on the river bank. The total distance relative to an observer on the river bank is the distance the man swims measured relative to the water around him combined with the distance the water moves relative to the bank. If the man swims in a straight line the total distance will be the vector sum of the two distances (life gets more complicated if the man doesn't swim in a straight line).
If you're trying to minimise the crossing time, and you don't care where on the other bank you emerge, then you need to minimise the distance the man swims, because the time is this distance divided by the swimming velocity. This is done by swimming in a direction perpendicular to the bank.
You could have different criteria. For example you might want to minimise the total distance travelled. To achieve this the man would have to swim at a different angle that would depend on the speed of the river.
Response to comment:
The diagram below shows what happens as the man swims across the river:
I've drawn the man swimming at some arbitrary angle $\theta$ at a speed $v$. The river is flowing at a speed $V$, and the time the man takes to cross is $t$. The distance swum by the man is $d_m$ and the distance the water moves is $d_r$.
The key point is that the speed the river flows affects where the man emerges on the other side of the river, but it doesn't affect the time to cross. The time to cross is simply the distance swum, $d_m$, divided by the swimming speed, $v$:
$$ t = \frac{d_m}{v} $$
and by trigonometry the distance the man swims is related to the angle $\theta$ by:
$$ d_m = \frac{W}{\sin\theta} $$
so:
$$ t = \frac{W}{v \sin\theta} $$
Both $W$ and $v$ are constants, so to minimise the time you need to maximise $\sin\theta$, and the maximum value of $\sin\theta$ is 1 when $\theta$ = 90º i.e. perpendicular to the bank.
Response to response to comment:
If we take $x$ to be the direction along the river and $y$ the direction across it, the the time taken to cross is just:
$$ t = \frac{w}{U_y} $$
where $U$ is the total velocity and $U_y$ is its $y$ component. Because $U$ is the vector sum of $v$ and $V$, its $y$ component is simply:
$$ U_y = v_y + V_y $$
But the river is flowing in the $x$ direction i.e. $V_y$ is zero, and therefore $U_y$ = $v_y$ i.e. the $y$ component of the total velocity depends only on the man's swimming speed and not on the river speed. This is why the river speed doesn't affect the time to cross.
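The argument can be sketched numerically; the width and swimming speed below are arbitrary assumptions, and note that the river speed $V$ never enters the crossing time:

```python
import math

W, v = 100.0, 1.5   # assumed river width (m) and swimming speed (m/s)

def crossing_time(theta_deg):
    """t = W / (v sin(theta)); the river speed V does not appear."""
    return W / (v * math.sin(math.radians(theta_deg)))

times = {t: crossing_time(t) for t in (30, 45, 60, 90)}
print(times)  # the minimum is at theta = 90 degrees
```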
|
As mentioned in Wikipedia's biography, Shanks used Machin's formula$$ \pi = 16\arctan(\frac15) - 4\arctan(\frac1{239}) $$
The standard way to use that (and the various Machin-like formulas found later) is to compute the arctangents using the power series
$$ \arctan x = x - \frac{x^3}3 + \frac{x^5}5 - \frac{x^7}7 + \frac{x^9}9 - \cdots $$
Getting $\arctan(\frac15)$ to 707 digits requires about 500 terms calculated to that precision. Each requires two long divisions -- one to divide the previous numerator by 25, another to divide it by the denominator.
The series for $\arctan(\frac1{239})$ converges faster and only needs some 150 terms.
(You can tell how many terms you need because the series is alternating with absolutely decreasing terms: once you reach a term that is smaller than your desired precision, you can stop.)
The point of Machin-like formulas is that the series for $\arctan x$ converges faster the smaller $x$ is. We could just compute $\pi$ as $4\arctan(1)$, but the series converges
hysterically slowly when $x$ is as large as $1$ (and not at all if it is even larger). The trick embodied by Machin's formula is to express a straight angle as a sum/difference of the corner angles of (a small number of different sizes of) long and thin right triangles with simple integer ratios between the cathetes.
The arctangent gets easier to compute the longer and thinner each triangle is, and especially if the neighboring side is an integer multiple of the opposite one, which corresponds to angles of the form $\arctan\frac{1}{\text{something}}$. Then going from one numerator in the series to the next costs only a division, rather than a division
and a multiplication.
Machin observed that four copies of the $5$-$1$-$\sqrt{26}$ triangle make the same angle as a $1$-$1$-$\sqrt2$ triangle (whose angle is $\pi/4$) plus one $239$-$1$-$\sqrt{239^2+1}$ triangle. These facts can be computed exactly using the techniques displayed here.
Later workers have found better variants of Machin's idea, but if you're in prison without reference works, it's probably easiest to rediscover Machin's formula by remembering that some number of copies of $\arctan\frac1k$ for some fairly small $k$ adds up to something very close to 45°.
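For the record, a sketch of the computation Shanks would have done by hand, using exact rational arithmetic for the two series (the term counts are my own, chosen for double precision rather than 707 digits):

```python
import math
from fractions import Fraction

def arctan_recip(k, terms):
    """Partial sum of arctan(1/k) = 1/k - 1/(3 k^3) + 1/(5 k^5) - ..."""
    x = Fraction(1, k)
    return sum((-1) ** n * x ** (2 * n + 1) / (2 * n + 1)
               for n in range(terms))

# Machin: pi = 16 arctan(1/5) - 4 arctan(1/239)
pi_approx = 16 * arctan_recip(5, 25) - 4 * arctan_recip(239, 10)
print(float(pi_approx))  # 3.141592653589793
```

Scaling this up to 707 digits with exact rationals is precisely the long-division work described above.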
|
I have a question on the definition/motivation of the Virasoro algebra. Recall that the Virasoro algebra is the infinite-dimensional Lie algebra generated by elements $L_n$ $(n\in \mathbb{Z})$ and $c$ over $\mathbb{C}$ with relations $$ [L_m,L_n]=(m-n)L_{m+n}+\frac{c}{12}(m^3-m)\delta_{m+n,0}. $$ A typical explanation of this definition is the following.
Define vector fields $l_n=-z^{n+1}\frac{\partial}{\partial z}$ on $\mathbb{C}\setminus \{0\}$. They form a Lie algebra of infinitesimal conformal transformations $$ [l_m,l_n]=(m-n)l_{m+n}. $$ So the Virasoro algebra is a central extension of this algebra by $c$. $c$ is called the central charge.
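One can check the stated bracket mechanically by letting the fields act on monomials. Note this uses the convention $l_n=-z^{n+1}\partial_z$, so that $l_n(z^k)=-k\,z^{n+k}$; the little script below is my own sketch:

```python
def l(n, mono):
    """Action of l_n = -z^(n+1) d/dz on a monomial c * z^k."""
    c, k = mono
    return (-k * c, n + k)

def bracket(m, n, mono):
    """[l_m, l_n] = l_m l_n - l_n l_m acting on a monomial."""
    (a, p), (b, q) = l(m, l(n, mono)), l(n, l(m, mono))
    assert p == q  # both orderings land on the same power of z
    return (a - b, p)

# Verify [l_m, l_n] = (m - n) l_{m+n} on a range of monomials z^k.
for m in range(-3, 4):
    for n in range(-3, 4):
        for k in range(-5, 6):
            c, p = l(m + n, (1, k))
            assert bracket(m, n, (1, k)) == ((m - n) * c, p)
print("Witt relation [l_m, l_n] = (m - n) l_{m+n} verified")
```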
My questions are
1. How can one see that the Lie algebra above is associated with infinitesimal conformal transformations?
2. What is the central charge $c$, intuitively?
3. Why are we interested in such a central extension?
As to the second question, I don't have enough physics background to check what the central charge $c$ means in the physics literature.
At this point, I don't have any intuition and have trouble in digesting the concept. I would really appreciate your help.
|
Here is the question:
"A car travels round a bend which has radius $100~\text{m}$ and is banked at an angle of $20°$ to the horizontal. The car is travelling at a speed of $30 ~\text{m}\text{s}^{-1}$. What is the least possible value of the coefficient of friction if the car does not slip up the slope?"
The way my textbook says to answer it is to resolve the force of gravity and the friction $(F=μR)$ in the vertical and horizontal directions and to consider centripetal force to be completely horizontal.
Solving in this way will give you two equations (one from the vertical component and one from the horizontal component) which you can solve simultaneously to give the solution:
$μ = 0.416$
However, I tried answering it by instead resolving the centripetal force in the direction of the slope and then equating this component to the friction and the force of gravity down the slope as follows:
$F_\text{centripetal} = \frac{m(30^2)}{100} = 9m$ (in newtons, with $m$ the car's mass)
$F_\text{gravity} = mg$
then equating the component of centripetal force down the slope to the friction and the component of gravity down the slope (where I have used F=μR for friction):
$9m\cos(20°) = mg\sin(20°) + μmg\cos(20°)$
Giving the solution $μ = 0.554$
I used the same value for gravity as they did in their answer: $g = 9.8~\text{m}\text{s}^{-2}$
Could somebody please tell me why there is a difference in the answers? Thanks!
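For reference, here is my reading of the textbook's method in code (on the point of slipping up the slope, friction acts down it); it reproduces 0.416:

```python
import math

v, r, g = 30.0, 100.0, 9.8
t = math.radians(20)

# Horizontal: N sin(t) + mu N cos(t) = m v^2 / r
# Vertical:   N cos(t) - mu N sin(t) = m g
# Dividing the two eliminates N and m; solve for mu.
A = v ** 2 / (r * g)
mu = (A * math.cos(t) - math.sin(t)) / (math.cos(t) + A * math.sin(t))
print(round(mu, 3))  # 0.416
```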
|
Fill in each blank unshaded cell in the diagram below with a positive integer less than 100, such that every consecutive group of unshaded cells within a row or column is an arithmetic sequence.
This problem is from the USAMTS Round 3 problem set.
TL;DR. The solution is:
$$\begin{array}{rrrrrrrrrrrl} 03&06&09&12&15&18&&59&71&83&95&\\ &&13&18&&20&35&50&65&&&\\ 03&10&17&24&&22&&41&59&77&95&\\ 27&&&30&&24&&32&&&77&\\ 51&46&41&36&31&26&&23&35&47&59&\\ \end{array}$$
And I prove that no other solution exists.
Here is the proof:
$$\begin{array}{rrrrrrrrrrrl} 03&-&-&-&-&-&&59&-&-&-&\\ &&-&-&&-&-&-&-&&&\\ -&10&-&-&&-&&-&-&-&-&\\ -&&&-&&-&&-&&&-&\\ -&-&-&-&31&26&&-&-&-&59&\\ \end{array}$$
Filling from the $26$ and $31$:
$$\begin{array}{rrrrrrrrrrrl} 03&-&-&-&-&-&&59&-&-&-&\\ &&-&-&&-&-&-&-&&&\\ -&10&-&-&&-&&-&-&-&-&\\ -&&&-&&-&&-&&&-&\\ 51&46&41&36&31&26&&-&-&-&59&\\ \end{array}$$
Guessing what should go up from the $51$: the decrement can be at most $25$, because decrementing by $26$ or more would produce a negative number to the left of the $10$.
Let's try decrementing by $20$:
$$\begin{array}{rrrrrrrrrrrl} 03&-&-&-&-&-&&59&-&-&-&\\ &&-&??&&-&-&-&-&&&\leftarrow \text{Can't put -06, can't be negative.}\\ 11&10&09&08&&-&&-&-&-&-&\\ 31&&&22&&-&&-&&&-&\\ 51&46&41&36&31&26&&-&-&-&59&\\ \end{array}$$
Let's try decrementing by $19$:
$$\begin{array}{rrrrrrrrrrrl} 03&-&-&-&-&-&&59&-&-&-&\\ &&-&??&&-&-&-&-&&&\leftarrow \text{Can't put -12, can't be negative.}\\ 13&10&07&04&&-&&-&-&-&-&\\ 32&&&20&&-&&-&&&-&\\ 51&46&41&36&31&26&&-&-&-&59&\\ \end{array}$$
What does this mean?
Trying anything below $19$ would just produce negative numbers there.
Let's try decrementing by $21$:
$$\begin{array}{rrrrrrrrrrrl} 03&-&-&??&-&-&&59&-&-&-&\leftarrow \text{Can't put -06, can't be negative.}\\ &&-&00&&-&-&-&-&&&\\ 09&10&11&12&&-&&-&-&-&-&\\ 30&&&24&&-&&-&&&-&\\ 51&46&41&36&31&26&&-&-&-&59&\\ \end{array}$$
Let's try decrementing by $22$:
$$\begin{array}{rrrrrrrrrrrl} 03&02&01&00&??&-&&59&-&-&-&\leftarrow \text{Can't put -01, can't be negative.}\\ &&-&06&&-&-&-&-&&&\\ 07&10&13&16&&-&&-&-&-&-&\\ 29&&&26&&-&&-&&&-&\\ 51&46&41&36&31&26&&-&-&-&59&\\ \end{array}$$
Let's try decrementing by $23$:
$$\begin{array}{rrrrrrrrrrrl} 03&??&??&04&-&-&&59&-&-&-&\leftarrow \text{Can't put 3}\frac{1}{3}\text{and 3}\frac{2}{3}\text{, not integers.}\\ &&-&12&&-&-&-&-&&&\\ 05&10&15&20&&-&&-&-&-&-&\\ 28&&&28&&-&&-&&&-&\\ 51&46&41&36&31&26&&-&-&-&59&\\ \end{array}$$
Let's try decrementing by $25$:
$$\begin{array}{rrrrrrrrrrrl} 03&??&??&20&-&-&&59&-&-&-&\leftarrow \text{Can't put 8}\frac{2}{3}\text{and 14}\frac{1}{3}\text{, not integers.}\\ &&-&24&&-&-&-&-&&&\\ 01&10&19&28&&-&&-&-&-&-&\\ 26&&&32&&-&&-&&&-&\\ 51&46&41&36&31&26&&-&-&-&59&\\ \end{array}$$
So:
It must be decremented by $24$:
$$\begin{array}{rrrrrrrrrrrl} 03&06&09&12&15&18&&59&-&-&-&\\ &&13&18&&20&-&-&-&&&\\ 03&10&17&24&&22&&-&-&-&-&\\ 27&&&30&&24&&-&&&-&\\ 51&46&41&36&31&26&&-&-&-&59&\\ \end{array}$$
Now let's try something to the right of the $20$.
Let's try $21$:
$$\begin{array}{rrrrrrrrrrrl} 03&06&09&12&15&18&&59&-&-&-&\\ &&13&18&&20&21&22&23&&&\\ 03&10&17&24&&22&&??&-&-&-&\\ 27&&&30&&24&&??&&&-&\\ 51&46&41&36&31&26&&??&-&-&59&\leftarrow \text{Can't put -89, can't be negative.}\\ \end{array}$$
We can infer that:
For $22$, we get $-81$ in that same spot. $-73$ for $23$, $-65$ for $24$, $-57$ for $25$, $-49$ for $26$, $-41$ for $27$, $-33$ for $28$, $-29$ for $29$, $-21$ for $30$, $-13$ for $31$, $-05$ for $32$. Going for numbers smaller than $21$ will also be always negative. We can't try $47$ or higher because this would produce something too large at the end of the row starting with $20$ (putting $46$ produces $72$ and $98$).
So, let's try $33$:
$$\begin{array}{rrrrrrrrrrrl} 03&06&09&12&15&18&&59&-&-&-&\\ &&13&18&&20&33&46&59&&&\\ 03&10&17&24&&22&&33&-&-&-&\\ 27&&&30&&24&&20&&&-&\\ 51&46&41&36&31&26&&07&??&??&59&\leftarrow \text{Can't put 24}\frac{1}{3}\text{and 41}\frac{2}{3}\text{, not integers.}\\ \end{array}$$
Let's try $34$:
$$\begin{array}{rrrrrrrrrrrl} 03&06&09&12&15&18&&59&-&-&-&\\ &&13&18&&20&34&48&62&&&\\ 03&10&17&24&&22&&37&-&-&-&\\ 27&&&30&&24&&26&&&-&\\ 51&46&41&36&31&26&&15&??&??&59&\leftarrow \text{Can't put 29}\frac{2}{3}\text{and 44}\frac{1}{3}\text{, not integers.}\\ \end{array}$$
Let's try $35$:
$$\begin{array}{rrrrrrrrrrrl} 03&06&09&12&15&18&&59&-&-&-&\\ &&13&18&&20&35&50&65&&&\\ 03&10&17&24&&22&&41&-&-&-&\\ 27&&&30&&24&&32&&&-&\\ 51&46&41&36&31&26&&23&35&47&59&\\ \end{array}$$
Now to finish, we see that the cell in the middle row and last column must be odd; otherwise it is impossible to fill the cell under it. For it to be odd, all the remaining cells in the middle row must be odd as well.
So, let's try a $43$ under the $65$:
$$\begin{array}{rrrrrrrrrrrl} 03&06&09&12&15&18&&59&87&??&??&\leftarrow \text{Can't put 115 and 143, too high.}\\ &&13&18&&20&35&50&65&&&\\ 03&10&17&24&&22&&41&43&45&47&\\ 27&&&30&&24&&32&&&53&\\ 51&46&41&36&31&26&&23&35&47&59&\\ \end{array}$$
What can we try, then?
Replacing the $43$ with something lower will just make the cell above the $65$ go even higher. So we must replace it with something higher than $43$.
If we try $45$, the top-right cell will go to $137$, with $47$ will go to $131$, $49$ will go to $125$, $51$ will go to $119$, $53$ will go to $113$, $55$ will go to $107$, $57$ will go to $101$.
Then...
... let's try $59$:
$$\begin{array}{rrrrrrrrrrrl} 03&06&09&12&15&18&&59&71&83&95&\\ &&13&18&&20&35&50&65&&&\\ 03&10&17&24&&22&&41&59&77&95&\\ 27&&&30&&24&&32&&&77&\\ 51&46&41&36&31&26&&23&35&47&59&\\ \end{array}$$
And it is solved!
Do other solutions exist?
For the left part, no, because $27$ is the only number that fits above $51$ (the first guess). So let's see if some other number fits in the second and third guesses.
To the right of the $20$, let's try $36$:
$$\begin{array}{rrrrrrrrrrrl} 03&06&09&12&15&18&&59&-&-&-&\\ &&13&18&&20&36&52&68&&&\\ 03&10&17&24&&22&&45&-&-&-&\\ 27&&&30&&24&&38&&&-&\\ 51&46&41&36&31&26&&31&??&??&59&\leftarrow \text{Can't put 40}\frac{1}{3}\text{and 49}\frac{2}{3}\text{, not integers.}\\ \end{array}$$
Let's try $37$:
$$\begin{array}{rrrrrrrrrrrl} 03&06&09&12&15&18&&59&-&-&-&\\ &&13&18&&20&37&54&71&&&\\ 03&10&17&24&&22&&49&-&-&-&\\ 27&&&30&&24&&44&&&-&\\ 51&46&41&36&31&26&&39&??&??&59&\leftarrow \text{Can't put 45}\frac{2}{3}\text{and 52}\frac{1}{3}\text{, not integers.}\\ \end{array}$$
Let's try $38$:
$$\begin{array}{rrrrrrrrrrrl} 03&06&09&12&15&18&&59&-&-&-&\\ &&13&18&&20&38&56&74&&&\\ 03&10&17&24&&22&&53&-&-&-&\\ 27&&&30&&24&&50&&&-&\\ 51&46&41&36&31&26&&47&51&55&59&\\ \end{array}$$
What is going on?
In fact, to the right of the $20$ we must place a number which, when one is added to it, is a multiple of $3$; otherwise the last row would not be able to hold integers. We already know that numbers lower than $35$ produce negative numbers in the $8$th column and that numbers greater than $46$ produce numbers greater than $100$ in the row starting with $20$. So we may try only $35$, $38$, $41$ and $44$ in the place to the right of the $20$. Further, we know that $35$ is able to solve the puzzle.
Continuing with the $38$, we must put some number below the $74$ that does not produce something too large in the upper-right cell. The lower the number under the $74$ is, the higher the number above it is. But the higher it is, the higher the number at the end of the middle row will be as well. Further, this number must be odd; otherwise we get an even number at the end of the middle row, and the number just below it would not be an integer.
So, let's try $73$:
$$\begin{array}{rrrrrrrrrrrl} 03&06&09&12&15&18&&59&75&91&??&\leftarrow \text{Can't put 107, too high.}\\ &&13&18&&20&38&56&74&&&\\ 03&10&17&24&&22&&53&73&93&??&\leftarrow \text{Can't put 113, too high.}\\ 27&&&30&&24&&50&&&-&\\ 51&46&41&36&31&26&&47&51&55&59&\\ \end{array}$$
What does this mean?
If we try something smaller than $73$ below the $74$, the number at the top-right will grow. If we try something larger, then the number at the end of the middle row will grow. So it is impossible with $38$ to the right of the $20$.
If we try $41$, we get $83$ at the end of the row starting with $20$. If we try $44$, we get $92$. These numbers are far higher than $53$ and $59$, so either the middle or the top row would end with something higher than $100$. This proves that the number to the right of the $20$ must be $35$, so the second guess has a single solution.
Let's recall where we were before the third guess:
$$\begin{array}{rrrrrrrrrrrl} 03&06&09&12&15&18&&59&-&-&-&\\ &&13&18&&20&35&50&65&&&\\ 03&10&17&24&&22&&41&-&-&-&\\ 27&&&30&&24&&32&&&-&\\ 51&46&41&36&31&26&&23&35&47&59&\\ \end{array}$$
What can we try?
We already know that the number below the $65$ must be odd and can't be lower than $59$ ($59$ solves it), because that would make the top-right number higher than $100$. So we must try something higher than $59$.
What happens if we try $61$?
$$\begin{array}{rrrrrrrrrrrl} 03&06&09&12&15&18&&59&-&-&-&\\ &&13&18&&20&35&50&65&&&\\ 03&10&17&24&&22&&41&61&81&??&\leftarrow \text{Can't put 101, too high.}\\ 27&&&30&&24&&32&&&-&\\ 51&46&41&36&31&26&&23&35&47&59&\\ \end{array}$$
What does this mean?
If we increase the number below the $65$, the last number in the middle row only grows higher. This way, the third guess can be neither lower nor higher than $59$, so $59$ is the only number that works, and thus
the third guess has a single solution. So I have proved that there is only one solution, and that this solution is:
$$\begin{array}{rrrrrrrrrrrl} 03&06&09&12&15&18&&59&71&83&95&\\ &&13&18&&20&35&50&65&&&\\ 03&10&17&24&&22&&41&59&77&95&\\ 27&&&30&&24&&32&&&77&\\ 51&46&41&36&31&26&&23&35&47&59&\\ \end{array}$$
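The complete runs visible in this final grid can be checked mechanically. A minimal sketch in Python; the grouping of cells into runs is read off the grid above and is my assumption about its layout:

```python
def is_ap(seq):
    """True if seq (length >= 2) is an arithmetic progression."""
    d = seq[1] - seq[0]
    return all(b - a == d for a, b in zip(seq, seq[1:]))

# Complete runs readable from the solution grid above.
runs = [
    [3, 6, 9, 12, 15, 18],     # top row, left block (step 3)
    [59, 71, 83, 95],          # top row, right block (step 12)
    [20, 35, 50, 65],          # middle row, right of the gap (step 15)
    [3, 10, 17, 24],           # third row, left block (step 7)
    [41, 59, 77, 95],          # third row, right block (step 18)
    [51, 46, 41, 36, 31, 26],  # bottom row, left block (step -5)
    [23, 35, 47, 59],          # bottom row, right block (step 12)
]
assert all(is_ap(r) for r in runs)
```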
This treats each arithmetic progression as a sequence on its own.
Steps:
The given sequence at the bottom has to be filled in, i.e. 26, 31, up to 51 (E6 to E1). Then, since 51 has to be reached, the number two rows above has to be odd; otherwise the row just above the 51 cannot be filled with a whole number.
Considering the above, and also that if we enter a number above 51 in cell C1 then, since 10 is fixed at C2, the series (C1, C2, ...) would go negative, the number has to be lower than 51. It also cannot be higher than 19 or less than 1, for in both these scenarios the adjacent numbers would become negative. Once you choose a number and start filling, the story unfolds gradually, with just two constraints: no number can go below 0 or above 99. If you try various numbers for C1 you will notice that the constraints get violated for one of the cells (I tried many :) believe me, but feel free to try others; it either increases one number above 99 or decreases one below 0), so C1 has to be 3, which will fill the entire left side, leaving B6 at 20. Then the tricky part is managing the right side with the two 59 constraints.
One of the solution that I could find is :
Steps :
1. Begin with the given sequence at f5 and e5.
2. The number at d1 must be such that (d5-d1) is divisible by 4 and (d1-a1) is divisible by 3, so it must be 12 or 24. Using brute-force elimination, it comes out to be 12. The first half can then be solved easily.
3. Similarly, for the second half, the number at h5 must be such that (k5-h5) is divisible by 3 and (h1-h5) is divisible by 4, so it must be 11, 23, 35, ... Again using brute-force elimination, it is 23.
There is a unique solution:
To find it, consider
The value of $X$ can be at most $28$ (because of the $10$ in the same row), so $Y$ can be at most $16$. However, $Y\equiv 36\mod 4$ and $Y\equiv 3\mod 3$, so we must have $Y=12$. This allows us to fill in up to here:
Here $Y=3/2 X-10$. The sequence $59+P$, $A+Q$, $B+R$, $C+S$ is an arithmetic progression, and the first two terms are $2X$ and $2Y$, so we get $C+S=5X-60$.
Here is my answer. For explanation purposes I labeled the board like a chessboard: the x-axis is labeled a through k, and the y-axis is labeled 1 through 5.
My approach :
In e5 and f5 we have adjacent numbers, so we know the difference is 5. So work the row up to a5.
Since the number at a5 is odd, the number at a3 should also be odd! The number at a3 cannot be 9, 7 or 5, because then the number at d5 becomes negative. Once you get the number at a3 as 3, the rest (the left half of the board) can be worked out easily.
For the right half, let's assume the number in h5 is N, the delta in column h is y, and the delta in row 5 (from h5 to k5) is x. So
N = 59 + 4y
N = 59 + 3x
So we get 3x = 4y. Once you take x = 4 and y = 3, the rest can be worked out.
Note: There are multiple solutions to this which can be worked out by trying different values at a3 and different values of x and y.
I know that you asked for an intuitive explanation, but I'm afraid that if one wants to go beyond ron's comment, the technical aspects are somewhat necessary. It isn't incredibly important to understand the maths, but I have included it as I want to enable interested readers to go further, and it provides a basis for the last section, so I make no apology for it. I have also assumed some basic knowledge of NMR theory. [1]

Linewidths and relaxation
Following a pulse sequence, the spectrometer detects transverse magnetisation (i.e. magnetisation in the xy-plane). The signal that is detected is generally of the form $s(t) \propto \exp(\mathrm i \Omega t)\exp(-t/T_2)$, where $T_2$ is the transverse or spin-spin relaxation time.
If one takes the Fourier transform of $s(t)$, one gets the spectrum $S(\Omega)$, and it can be shown that the widths of the Lorentzian peaks thus obtained are proportional to $1/T_2$. To be precise, the linewidth at half-height $\Delta \nu_{1/2}$ is given by
$$\Delta \nu_{1/2} = \frac{1}{\pi T_2}$$
Therefore, fast transverse relaxation (which corresponds to small $T_2$) leads to broad lines, and slow relaxation leads to narrow lines.
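This relationship is trivial to evaluate; a small sketch (the example $T_2$ values are illustrative, not measured data):

```python
import math

def linewidth_hz(T2):
    """Full width at half height (Hz) of a Lorentzian line, given T2 in seconds."""
    return 1.0 / (math.pi * T2)

# Faster relaxation (smaller T2) means a broader line:
sharp = linewidth_hz(1.0)    # T2 = 1 s   -> ~0.32 Hz
broad = linewidth_hz(1e-3)   # T2 = 1 ms  -> ~320 Hz
```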
Quadrupoles and the electric field gradient
For quadrupolar nuclei ($I > 1/2$), a major source of relaxation arises from the interaction of the nuclear electric quadrupole moment with the electric field gradient at the nucleus.
An introduction to the above two concepts is as follows: a quadrupole moment reflects an asymmetry in the charge distribution of the nucleus, and can be thought of as two opposing dipoles placed next to each other. The electric field gradient refers to the derivative of the electric field $E$, which is itself the derivative of the electric potential $V$ (so, the field gradient is the second derivative of the potential). These derivatives are evaluated at the spatial position of the nucleus.
The electric field gradient therefore most generally has nine components, since two derivatives, each with respect to 3 axes, are taken. These are represented by $q_{ij}$ (where $i,j$ are one of $x,y,z$):
$$q_{ij} \equiv \left.\frac{\partial^2V}{\partial i\,\partial j}\right|_\text{at nucleus}$$
However, by a suitable transformation of coordinates, the cross derivatives $q_{ij}$ ($i \neq j$) can be set to zero (this essentially involves diagonalising the matrix $\mathbf{q}$). The remaining quantities $q_{xx}$, $q_{yy}$, $q_{zz}$ are referred to as the principal values, and without loss of generality we can adopt the convention $|q_{zz}| \geq |q_{yy}| \geq |q_{xx}|$. Since the electric potential obeys Laplace's equation $\nabla^2 V = 0$, this also means that $q_{xx} + q_{yy} + q_{zz}$ is necessarily equal to 0 (this is often referred to as "traceless", since the trace of the matrix $\mathbf{q}$ is 0). Therefore, there are only two independent parameters. These parameters are chosen to be $q_{zz}$, and $\eta$, a quantity called the biaxiality and defined as
$$\eta = \frac{q_{yy} - q_{xx}}{q_{zz}}.$$
Quadrupolar relaxation [2]
The interaction of the quadrupole moment with the electric field gradient contributes to the energy of the nucleus (exactly analogous to how an electric dipole moment interacts with an electric field). [3] This interaction is modulated by molecular tumbling. That is to say, as the molecule rotates in solution, the magnitude of this interaction changes as well. [4]
If this interaction varies at the frequency corresponding to the transition between two spin states, then it is possible to induce this transition, thereby leading to spin relaxation. (More strictly speaking, it has to contain a component that oscillates at the transition frequency.) In this respect, it is analogous to the more well-known case of how magnetic dipolar interactions between two nuclei can lead to relaxation, which is described quite thoroughly in Keeler's book. [1]

For spin-1/2 nuclei in solution-state NMR, dipolar interactions are the primary mechanism of relaxation, since these nuclei do not have quadrupole moments. For quadrupolar nuclei, though, quadrupolar relaxation is the primary mechanism, simply because this interaction term tends to have a large magnitude.
For our purposes, it is sufficient to use a derived result, which you seem to have already come across. I know that I am glossing over a lot, but as far as I can tell, the derivation of this is very involved: [5]
$$\frac{1}{T_1} = \frac{1}{T_2} = \frac{3\pi^2}{10}\frac{(2I+3)}{I^2(2I-1)} \left(\frac{e^2q_{zz}Q}{h}\right)^2 \left(1 + \frac{1}{3}\eta^2 \right) \tau_\mathrm{c}$$
Definitions:

- $T_1$ and $T_2$ are the spin-lattice (longitudinal) and spin-spin (transverse) relaxation times respectively; these only account for relaxation via the above mechanism and not for other possible mechanisms
- $I$ is the spin of the nucleus
- $e$ is the elementary charge, $\pu{1.602 \times 10^-19 C}$
- $q_{zz}$ is the main component of the electric field gradient, as explained above
- $Q$ is the quadrupole moment of the nucleus
- $h$ is Planck's constant, $\pu{6.626 \times 10^-34 J s}$
- $\eta$ is the biaxiality, defined above
- $\tau_\mathrm{c}$ is the correlation time, a constant that essentially measures the rate of molecular tumbling (see books in ref 1 for more information)

Please, no more maths
OK - the only remaining maths is to analyse the equation above and identify the factors which contribute to relaxation, and hence, linewidths. For the actual question, the relevant portion is $e^2q_{zz}Q/h$, which is often written as $\chi$ and termed the nuclear quadrupole coupling constant. Some representative values of $\chi$ are given in ref 2.
In particular, the value of $q_{zz}$ is mainly determined by the distribution of electrons close to the nucleus, and this provides an obvious link to the molecular geometry of the quadrupolar nucleus.
For nuclei in tetrahedral, octahedral, cubic, or spherical environments, $q_{zz}$ is, by symmetry, equal to zero. Therefore, quadrupolar relaxation by the above mechanism does not take place, $T_2$ is long, and linewidths are small. One sees very narrow linewidths for the central atoms in, for example, $\ce{Cl- (aq)}$ (neglecting solvation effects, spherical symmetry); $\ce{NMe4+}$ and $\ce{SO4^2-}$ (tetrahedral), and $\ce{[Co(CN)6]^3-}$ (octahedral). On the other hand, if such symmetry is lacking, then $q_{zz} \neq 0$, there is usually fast quadrupolar relaxation by the above mechanism, and linewidths are large.
$q_{zz}$ is directly tied to the distribution of electrons, not the distribution of bonds. Although there is nearly always a correlation, there are some interesting cases where this is not true; in $\ce{Et2N-NO2}$, for example, the $\ce{NO2}$ nitrogen has an extremely sharp peak, despite the local molecular symmetry not intuitively suggesting that this would be the case. [2] (The $\ce{Et2N}$ nitrogen, as expected, has a very broad peak.)
Since we went the full distance to obtain the above expression, we can also extract a bit more information from it. The rate of relaxation is proportional to the square of the nuclear quadrupole moment $Q$; some quadrupole moments are tabulated in ref 2. Nuclei with small quadrupole moments (namely deuterium, lithium-6, and cesium-133) have smaller linewidths and are much more amenable to NMR studies. The factor of $I^2$ in the denominator also means that higher spins tend to give sharper lines.
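To see how these factors trade off numerically, here is a sketch evaluating the relaxation-rate expression above. The parameter values are invented for illustration and are not data for any particular nucleus:

```python
import math

def quad_rate(I, chi_hz, eta, tau_c):
    """1/T1 = 1/T2 (s^-1) from the expression above, valid for I > 1/2.

    chi_hz is the quadrupole coupling constant e^2 q_zz Q / h in Hz,
    eta the biaxiality, tau_c the correlation time in seconds.
    """
    spin_factor = (2 * I + 3) / (I**2 * (2 * I - 1))
    return (3 * math.pi**2 / 10) * spin_factor * chi_hz**2 * (1 + eta**2 / 3) * tau_c

# Illustrative numbers: chi = 4 MHz, axially symmetric EFG, tau_c = 10 ps.
rate_spin1 = quad_rate(I=1.0, chi_hz=4e6, eta=0.0, tau_c=10e-12)
rate_spin52 = quad_rate(I=2.5, chi_hz=4e6, eta=0.0, tau_c=10e-12)
```

Consistent with the text, the higher spin gives a smaller rate (sharper line) for the same coupling constant, and halving $\chi$ cuts the rate by a factor of four.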
A bonus: coupling to quadrupolar nuclei
$\ce{^13C}$ NMR experiments are often run with proton decoupling - this means that when the carbon signal is collected, the protons are irradiated with a radiofrequency field. This has the effect of inducing rapid transitions between the two spin states of the protons, and each carbon-13 nucleus effectively "sees" an average of the two spin states, causing the C–H couplings to be lost.
Since quadrupolar relaxation also induces rapid transitions between spin states of quadrupolar nuclei, it leads to what is effectively a "self-decoupling" of all quadrupolar nuclei. However, the couplings can still be seen when the quadrupolar nuclei are in highly symmetrical environments: for example, the $\ce{^19F}$ spectrum of $\ce{[BrF6]+}$ is shown here. [6] Both $\ce{^79Br}$ and $\ce{^81Br}$ have roughly equal abundance (~50%) and a nuclear spin of $3/2$, so we expect to see two 1:1:1:1 quartets of equal intensity, one arising from $\ce{[^79BrF6]+}$ and one from $\ce{[^81BrF6]+}$.
A few other examples are presented on Hans Reich's page. Of course, these can be understood from the discussion above about how molecular symmetry affects quadrupolar relaxation.
In fact, because deuterium has a small quadrupole moment and undergoes slow quadrupolar relaxation, coupling to deuterium can often be seen. All organic chemists should recall that $\ce{CDCl3}$ appears as a 1:1:1 triplet - this is why!
Notes and references
1. For an exposition of basic NMR theory see either Keeler, Understanding NMR Spectroscopy, 2nd ed. (Wiley) or Hore, Nuclear Magnetic Resonance, 2nd ed. (OUP).
2. A necessarily mathematical, but still fairly accessible, introduction to the topic is provided in: Gerothanassis, I. P.; Tsanaktsidis, C. G. Nuclear electric quadrupole relaxation. Concepts Magn. Reson. 1996, 8 (1), 63–74. DOI: 10.1002/(SICI)1099-0534(1996)8:1<63::AID-CMR5>3.0.CO;2-N.
3. The full Hamiltonian is provided in Appendix A.7 of Levitt, Spin Dynamics, 2nd ed. (Wiley).
4. Abragam (ref 5) writes that this is best understood in terms of "a fluctuating electric field gradient acting on the quadrupole moment of the nucleus", and refers to: Bloembergen, N.; Purcell, E. M.; Pound, R. V. Relaxation Effects in Nuclear Magnetic Resonance Absorption. Phys. Rev. 1948, 73 (7), 679–712. DOI: 10.1103/PhysRev.73.679. (Non-paywall version from Harvard available here.) However, as far as I can tell, Bloembergen et al. simply describe the case for dipolar relaxation of spin-1/2 nuclei, and then write that "the interaction of the electric quadrupole moment of the deuteron with a fluctuating inhomogeneous electric field can bring about thermal relaxation. We omit the analysis of this process, which parallels closely the treatment of dipole-dipole interaction".
5. This equation is given without explanation in Günther, NMR Spectroscopy, 3rd ed. (Wiley). An explanation can be found in Abragam, The Principles of Nuclear Magnetism (OUP), including the conditions under which this equation holds true. Quadrupolar relaxation is dealt with on p 313, but going through the earlier sections is necessary to understand it. (I didn't do that.)
6. Gillespie, R. J.; Schrobilgen, G. J. Hexafluorobromine(VII) cation, $\ce{BrF6+}$: preparation of hexafluorobromine(1+) hexafluoroarsenate(1-) and hexafluorobromine(1+) undecafluorodiantimonate(1-) and characterization by fluorine-19 nuclear magnetic resonance and Raman spectroscopy. Inorg. Chem. 1974, 13 (5), 1230–1235. DOI: 10.1021/ic50135a043.
Let $x=\sum_{i=1}^{\infty}\delta_i2^{-i},\ \delta_i\in\{0,1\}$.
Is there an algorithm that converts the sequence $(\delta_0,\ \delta_1,\ ...)$ of the binary digits of $x$ to the sequence $[a_0;a_1,\ ...]$ of its continued fraction representation?
Yes, there is. The algorithm is due to Bill Gosper - he is considering the more general problem of doing linear fractional transformations with continued fractions - adding $2^{-i}$ is a special case. See also Liardet and Stambul, 1998 for a fancier (and probably more readable) explanation.
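For a finite prefix of the digit stream, the conversion is just the Euclidean algorithm applied to the resulting rational. A minimal sketch (this is not Gosper's streaming algorithm; it only gives the continued fraction of the truncated value, and partial quotients may change as more digits arrive):

```python
from fractions import Fraction

def cf_prefix(bits):
    """Continued fraction of the rational 0.b1 b2 ... bn (binary)."""
    x = Fraction(int("".join(map(str, bits)) or "0", 2), 2 ** len(bits)) if bits else Fraction(0)
    cf = []
    p, q = x.numerator, x.denominator
    while q:
        a, r = divmod(p, q)   # one Euclidean step per partial quotient
        cf.append(a)
        p, q = q, r
    return cf

# 0.101 (binary) = 5/8 = [0; 1, 1, 1, 2]
print(cf_prefix([1, 0, 1]))
```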
No, there is not. Already $a_0$ is impossible to determine just by reading finitely many digits (namely, it is $1$ iff $\delta_i=1$ for all $i$). The same goes for the subsequent terms of the continued fraction, mutatis mutandis.
(This should be considered rather as a comment on Emil's answer than as an answer in its own right; please feel free to downvote.)
After some thinking about Emil's negative result, I found a way to possibly "resurrect" the existence of an algorithm by introducing means of prediction and correction.
Emil's counterexample, in which $\delta_i=1$ for $i\ge i_0$, can be fixed by
As $x_{i+1}-x_i$ is zero in the case of equal digits, the calculated coefficients of the continued fraction in the case of a trailing sequence $(\delta_i=\delta_{i+1}=\ ...)$ will be those of $x$ and will not be modified after $i$ steps.
The above idea however doesn't resolve the case of rational $x$ in general; here the best-approximation property of continued fractions may be of use in predicting the true continued fraction of $x$: experiments suggest that $a_i=0$ for $i\ge i_0$ in the case of rational $x$, and that the value of one of the non-zero parameters $a_k\in\{a_1,\ ...,a_{i-1}\}$ keeps growing as more digits are processed, while the first $k-1$ parameters do not change after a sufficient number of digits have been processed.
In that case we can set $a_k:=\infty$, yielding $[0;a_1,\ ...,a_{k-1}]$ as the continued fraction of $x\in\mathbb{Q}$ once enough periods in the digit stream have been encountered. I am, however, not an expert in continued fractions, so I'm not sure whether my ideas are correct.
Hello, all!
I have a non-singular square polynomial matrix over $\mathbf{F} _q[x]$, $$\underset{l \times l}{G(x)} = \left( \begin{matrix} g _{0,0}(x) & g _{0,1}(x) & \ldots & g _{0,l-1}(x) \\\ \vdots & \vdots & \vdots & \vdots \\\ g _{l-1,0}(x) & g _{l-1,1}(x) & \ldots & g _{l-1,l-1}(x) \end{matrix} \right).$$ I call the roots of the equation $\det G(x) = 0$ the eigenvalues of $G(x)$. They can be found in some extension $\mathbf{F} _{q^r}$ of the finite field $\mathbf{F} _q$. I call a solution $\underset{l \times 1}{v _{i,j}}$ of the system of equations $G(\lambda _i) v _{i,j} = 0$ an eigenvector corresponding to the eigenvalue $\lambda _i$. So $v _{i,j}$ is the $j$-th eigenvector corresponding to the eigenvalue $\lambda _i$.
I suppose the eigenvalues of $G(x)$ have equal algebraic and geometric multiplicities.
My problem is to prove that if some $l \times 1$ vector of polynomials $r(x)$ satisfies $\underset{1 \times l}{r(\lambda _i)^T} \underset{l \times 1}{v _{i,j}} = 0$ $\forall i, j$, then it must belong to the row space of $G(x) = (\underset{1 \times l}{g_0(x)}, \ldots, \underset{1 \times l}{g_{l-1}(x)})$: that is, $r(x) = \sum_{t = 0}^{l-1} b_t(x) \cdot g_t(x)^T$ for some $b_t(x) \in \mathbf{F}_q[x]$. How can this be proved? What technique can be used for that?
Thank you!
CryptoDB Paper: Abelian varieties with prescribed embedding degree
Authors: David Freeman, Peter Stevenhagen, Marco Streng
URL: http://eprint.iacr.org/2008/061
Abstract: We present an algorithm that, on input of a CM-field $K$, an integer $k \ge 1$, and a prime $r \equiv 1 \bmod k$, constructs a $q$-Weil number $\pi \in \mathcal{O}_K$ corresponding to an ordinary, simple abelian variety $A$ over the field $\mathbb{F}_q$ of $q$ elements that has an $\mathbb{F}_q$-rational point of order $r$ and embedding degree $k$ with respect to $r$. We then discuss how CM-methods over $K$ can be used to explicitly construct $A$.
BibTeX:
@misc{eprint-2008-17738,
title={Abelian varieties with prescribed embedding degree},
booktitle={IACR Eprint archive},
keywords={public-key cryptography / pairing-friendly curves, embedding degree, abelian varieties, hyperelliptic curves, CM method, complex multiplication},
url={http://eprint.iacr.org/2008/061},
note={ dfreeman@math.berkeley.edu 13913 received 3 Feb 2008},
author={David Freeman and Peter Stevenhagen and Marco Streng},
year=2008
}
We have seen that some functions can be represented as series, which may give valuable information about the function. So far, we have seen only those examples that result from manipulation of our one fundamental example, the geometric series. We would like to start with a given function and produce a series to represent it, if possible.
Suppose that $\ds f(x)=\sum_{n=0}^\infty a_nx^n$ on some interval of convergence. Then we know that we can compute derivatives of $f$ by taking derivatives of the terms of the series. Let's look at the first few in general: $$\eqalign{ f'(x)&=\sum_{n=1}^\infty n a_n x^{n-1}=a_1 + 2a_2x+3a_3x^2+4a_4x^3+\cdots\cr f''(x)&=\sum_{n=2}^\infty n(n-1) a_n x^{n-2}=2a_2+3\cdot2a_3x +4\cdot3a_4x^2+\cdots\cr f'''(x)&=\sum_{n=3}^\infty n(n-1)(n-2) a_n x^{n-3}=3\cdot2a_3 +4\cdot3\cdot2a_4x+\cdots\cr }$$ By examining these it's not hard to discern the general pattern. The $k$th derivative must be $$\eqalign{ f^{(k)}(x)&=\sum_{n=k}^\infty n(n-1)(n-2)\cdots(n-k+1)a_nx^{n-k}\cr &=k(k-1)(k-2)\cdots(2)(1)a_k+(k+1)(k)\cdots(2)a_{k+1}x+{}\cr &\qquad {}+(k+2)(k+1)\cdots(3)a_{k+2}x^2+\cdots\cr }$$ We can shrink this quite a bit by using factorial notation: $$ f^{(k)}(x)=\sum_{n=k}^\infty {n!\over (n-k)!}a_nx^{n-k}= k!a_k+(k+1)!a_{k+1}x+{(k+2)!\over 2!}a_{k+2}x^2+\cdots $$ Now substitute $x=0$: $$f^{(k)}(0)=k!a_k+\sum_{n=k+1}^\infty {n!\over (n-k)!}a_n0^{n-k}=k!a_k,$$ and solve for $\ds a_k$: $$a_k={f^{(k)}(0)\over k!}.$$ Note the special case, obtained from the series for $f$ itself, that gives $\ds f(0)=a_0$.
So if a function $f$ can be represented by a series, we know just what series it is. Given a function $f$, the series $$\sum_{n=0}^\infty {f^{(n)}(0)\over n!}x^n$$ is called the Maclaurin series for $f$.
Example 13.10.1 Find the Maclaurin series for $f(x)=1/(1-x)$. We need to compute the derivatives of $f$ (and hope to spot a pattern). $$\eqalign{ f(x)&=(1-x)^{-1}\cr f'(x)&=(1-x)^{-2}\cr f''(x)&=2(1-x)^{-3}\cr f'''(x)&=6(1-x)^{-4}\cr f^{(4)}(x)&=4!(1-x)^{-5}\cr &\vdots\cr f^{(n)}(x)&=n!(1-x)^{-n-1}\cr }$$ So $${f^{(n)}(0)\over n!}={n!(1-0)^{-n-1}\over n!}=1$$ and the Maclaurin series is $$\sum_{n=0}^\infty 1\cdot x^n=\sum_{n=0}^\infty x^n,$$ the geometric series.
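We can check numerically that the partial sums of this series approach $1/(1-x)$ inside the interval of convergence $|x|<1$; a small sketch:

```python
def maclaurin_geometric(x, n_terms=50):
    # Partial sum of sum_{n>=0} x**n, the Maclaurin series of 1/(1-x).
    return sum(x**n for n in range(n_terms))

# At x = 0.5 the partial sum is very close to 1/(1 - 0.5) = 2.
approx = maclaurin_geometric(0.5)
```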
A warning is in order here. Given a function $f$ we may be able to compute the Maclaurin series, but that does not mean we have found a series representation for $f$. We still need to know where the series converges, and if, where it converges, it converges to $f(x)$. While for most commonly encountered functions the Maclaurin series does indeed converge to $f$ on some interval, this is not true of all functions, so care is required.
As a practical matter, if we are interested in using a series to approximate a function, we will need some finite number of terms of the series. Even for functions with messy derivatives we can compute these using computer software like Sage. If we want to know the whole series, that is, a typical term in the series, we need a function whose derivatives fall into a pattern that we can discern. A few of the most important functions are fortunately very easy.
Example 13.10.2 Find the Maclaurin series for $\sin x$.
The derivatives are quite easy: $f'(x)=\cos x$, $f''(x)=-\sin x$, $f'''(x)=-\cos x$, $\ds f^{(4)}(x)=\sin x$, and then the pattern repeats. We want to know the derivatives at zero: 1, 0, $-1$, 0, 1, 0, $-1$, 0,…, and so the Maclaurin series is $$ x-{x^3\over 3!}+{x^5\over 5!}-\cdots= \sum_{n=0}^\infty (-1)^n{x^{2n+1}\over (2n+1)!}. $$ We should always determine the radius of convergence: $$ \lim_{n\to\infty} {|x|^{2n+3}\over (2n+3)!}{(2n+1)!\over |x|^{2n+1}} =\lim_{n\to\infty} {|x|^2\over (2n+3)(2n+2)}=0, $$ so the series converges for every $x$. Since it turns out that this series does indeed converge to $\sin x$ everywhere, we have a series representation for $\sin x$ for every $x$.
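A quick numerical check of the partial sums against the built-in sine, as a sketch:

```python
import math

def maclaurin_sin(x, n_terms=20):
    # Partial sum of sum_{n>=0} (-1)^n x^(2n+1) / (2n+1)!
    return sum((-1)**n * x**(2 * n + 1) / math.factorial(2 * n + 1)
               for n in range(n_terms))

# Since the series converges for every x, a modest number of terms
# already matches math.sin to machine precision for moderate x.
err = abs(maclaurin_sin(1.0) - math.sin(1.0))
```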
Sometimes the formula for the $n$th derivative of a function $f$ is difficult to discover, but a combination of a known Maclaurin series and some algebraic manipulation leads easily to the Maclaurin series for $f$.
Example 13.10.3 Find the Maclaurin series for $x\sin(-x)$.
To get from $\sin x$ to $x\sin(-x)$ we substitute $-x$ for $x$ and then multiply by $x$. We can do the same thing to the series for $\sin x$: $$ x\sum_{n=0}^\infty (-1)^n{(-x)^{2n+1}\over (2n+1)!} =x\sum_{n=0}^\infty (-1)^{n}(-1)^{2n+1}{x^{2n+1}\over (2n+1)!} =\sum_{n=0}^\infty (-1)^{n+1}{x^{2n+2}\over (2n+1)!}. $$
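The manipulated series can be checked the same way against $x\sin(-x)$ directly; a sketch:

```python
import math

def xsin_neg_series(x, n_terms=20):
    # Partial sum of sum_{n>=0} (-1)^(n+1) x^(2n+2) / (2n+1)!
    return sum((-1)**(n + 1) * x**(2 * n + 2) / math.factorial(2 * n + 1)
               for n in range(n_terms))

err = abs(xsin_neg_series(0.7) - 0.7 * math.sin(-0.7))
```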
As we have seen, a general power series can be centered at a point other than zero, and the method that produces the Maclaurin series can also produce such series.
Example 13.10.4 Find a series centered at $-2$ for $1/(1-x)$.
If the series is $\ds\sum_{n=0}^\infty a_n(x+2)^n$ then looking at the $k$th derivative: $$k!(1-x)^{-k-1}=\sum_{n=k}^\infty {n!\over (n-k)!}a_n(x+2)^{n-k}$$ and substituting $x=-2$ we get $\ds k!3^{-k-1}=k!a_k$ and $\ds a_k=3^{-k-1}=1/3^{k+1}$, so the series is $$\sum_{n=0}^\infty {(x+2)^n\over 3^{n+1}}.$$ We've already seen this, in section 13.8.
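Since the ratio test gives convergence for $|x+2|<3$, partial sums of this series should match $1/(1-x)$ on that interval; a quick numerical check:

```python
def series_at_minus2(x, n_terms=60):
    # Partial sum of sum_{n>=0} (x + 2)^n / 3^(n + 1), valid for |x + 2| < 3.
    return sum((x + 2)**n / 3**(n + 1) for n in range(n_terms))

# At x = -1 this should be close to 1/(1 - (-1)) = 1/2.
approx = series_at_minus2(-1.0)
```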
Such a series is called the Taylor series for the function, and the general term has the form $${f^{(n)}(a)\over n!}(x-a)^n.$$ A Maclaurin series is simply a Taylor series with $a=0$.
Exercises 13.10
For each function, find the Maclaurin series or Taylor series centered at $a$, and the radius of convergence.
Ex 13.10.1 $\cos x$ (answer)
Ex 13.10.2 $\ds e^x$ (answer)
Ex 13.10.3 $1/x$, $a=5$ (answer)
Ex 13.10.4 $\ln x$, $a=1$ (answer)
Ex 13.10.5 $\ln x$, $a=2$ (answer)
Ex 13.10.6 $\ds 1/x^2$, $a=1$ (answer)
Ex 13.10.7 $\ds 1/\sqrt{1-x}$ (answer)
Ex 13.10.8 Find the first four terms of the Maclaurin series for $\tan x$ (up to and including the $\ds x^3$ term). (answer)
Ex 13.10.9 Use a combination of Maclaurin series and algebraic manipulation to find a series centered at zero for $\ds x\cos (x^2)$. (answer)
Ex 13.10.10 Use a combination of Maclaurin series and algebraic manipulation to find a series centered at zero for $\ds xe^{-x}$. (answer)
The trigonometric functions frequently arise in problems, and often it is necessary to invert the functions, for example, to find an angle with a specified sine. Of course, there are many angles with the same sine, so the sine function doesn't actually have an inverse that reliably "undoes'' the sine function. If you know that $\sin x=0.5$, you can't reverse this to discover $x$, that is, you can't solve for $x$, as there are infinitely many angles with sine $0.5$. Nevertheless, it is useful to have something like an inverse to the sine, however imperfect. The usual approach is to pick out some collection of angles that produce all possible values of the sine exactly once. If we "discard'' all other angles, the resulting function does have a proper inverse.
The sine takes on all values between $-1$ and $1$ exactly once on the interval $[-\pi/2,\pi/2]$. If we truncate the sine, keeping only the interval $[-\pi/2,\pi/2]$, as shown in figure 9.5.1, then this truncated sine has an inverse function. We call this the inverse sine or the arcsine, and write $y=\arcsin(x)$.
Recall that a function and its inverse undo each other in either order, for example, $\ds (\root3\of x)^3=x$ and $\ds \root3\of{x^3}=x$. This does not work with the sine and the "inverse sine'' because the inverse sine is the inverse of the truncated sine function, not the real sine function. It is true that $\sin(\arcsin(x))=x$, that is, the sine undoes the arcsine. It is not true that the arcsine undoes the sine, for example, $\sin(5\pi/6)=1/2$ and $\arcsin(1/2)=\pi/6$, so doing first the sine then the arcsine does not get us back where we started. This is because $5\pi/6$ is not in the domain of the truncated sine. If we start with an angle between $-\pi/2$ and $\pi/2$ then the arcsine does reverse the sine: $\sin(\pi/6)=1/2$ and $\arcsin(1/2)=\pi/6$.
What is the derivative of the arcsine? Since this is an inverse function, we can discover the derivative by using implicit differentiation. Suppose $y=\arcsin(x)$. Then $$\sin(y)=\sin(\arcsin(x))=x.$$ Now taking the derivative of both sides, we get $$\eqalign{ y'\cos y &= 1\cr y'={1\over \cos y}\cr }$$ As we expect when using implicit differentiation, $y$ appears on the right hand side here. We would certainly prefer to have $y'$ written in terms of $x$, and as in the case of $\ln x$ we can actually do that here. Since $\ds \sin^2y+\cos^2 y=1$, $\ds \cos^2y=1-\sin^2y=1-x^2$. So $\ds \cos y=\pm\sqrt{1-x^2 }$, but which is it—plus or minus? It could in general be either, but this isn't "in general'': since $y=\arcsin(x)$ we know that $-\pi/2\le y\le \pi/2$, and the cosine of an angle in this interval is always positive. Thus $\ds \cos y=\sqrt{1-x^2 }$ and $${d\over dx}\arcsin(x)={1\over \sqrt{1-x^2 }}.$$ Note that this agrees with figure 9.5.1: the graph of the arcsine has positive slope everywhere.
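This formula is easy to sanity-check numerically by comparing a central finite difference of $\arcsin$ against $1/\sqrt{1-x^2}$; a minimal sketch:

```python
import math

def arcsin_deriv_numeric(x, h=1e-6):
    # Central finite-difference estimate of the derivative of arcsin at x.
    return (math.asin(x + h) - math.asin(x - h)) / (2 * h)

def arcsin_deriv_formula(x):
    # The closed form derived above: 1 / sqrt(1 - x^2).
    return 1.0 / math.sqrt(1.0 - x**2)

err = abs(arcsin_deriv_numeric(0.3) - arcsin_deriv_formula(0.3))
```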
We can do something similar for the cosine. As with the sine, we must first truncate the cosine so that it can be inverted, as shown in figure 9.5.2. Then we use implicit differentiation to find that $${d\over dx}\arccos(x)={-1\over \sqrt{1-x^2 }}.$$ Note that the truncated cosine uses a different interval than the truncated sine, so that if $y=\arccos(x)$ we know that $0\le y\le \pi$. The computation of the derivative of the arccosine is left as an exercise.
Finally we look at the tangent; the other trigonometric functions also have "partial inverses'' but the sine, cosine and tangent are enough for most purposes. The tangent, truncated tangent and inverse tangent are shown in figure 9.5.3; the derivative of the arctangent is left as an exercise.
Exercises 9.5
Ex 9.5.1 Show that the derivative of $\arccos x$ is $\ds -{1\over \sqrt{1-x^2}}$.
Ex 9.5.2 Show that the derivative of $\arctan x$ is $\ds {1\over 1+x^2}$.
Ex 9.5.3 The inverse of $\cot$ is usually defined so that the range of arccot is $(0, \pi )$. Sketch the graph of $y=\arccot x$. In the process you will make it clear what the domain of arccot is. Find the derivative of the arccotangent. (answer)
Ex 9.5.4 Show that $\arccot x + \arctan x =\pi/2$.
Ex 9.5.5 Find the derivative of $\ds \arcsin(x^2)$. (answer)
Ex 9.5.6 Find the derivative of $\ds \arctan(e^x)$. (answer)
Ex 9.5.7 Find the derivative of $\ds \arccos (\sin x^3 )$. (answer)
Ex 9.5.8 Find the derivative of $\ds \ln( (\arcsin x )^2)$. (answer)
Ex 9.5.9 Find the derivative of $\ds \arccos e^x$. (answer)
Ex 9.5.10 Find the derivative of $\arcsin x + \arccos x$. (answer)
Ex 9.5.11 Find the derivative of $\ds \log _5 (\arctan (x^x ) )$. (answer)
Ex 9.5.12 Compute $\ds\int {\arcsec x\over x \sqrt{x^2 -1}}\,dx$
Ex 9.5.13 Compute $\ds\int{\ln(\arcsin x )\over\arcsin x \sqrt{1-x^2}}\,dx$
Ex 9.5.14 Compute $\ds\int_0^{\sqrt{2}}\Big(1+ x^3 + xe^{x^2 }-{1\over 1+x^2}\Big)\,dx$
Ex 9.5.15 Compute $\ds\int {dx\over 1+ 9x^2 }$
Ex 9.5.16 Find the equation of the tangent line to $f(x) =\arccsc x$ at $x=\pi/6$.
Ex 9.5.17 Let $$A=\Big\{(x,y)\mid {1\over2}\leq x \leq{\sqrt{3}\over2} , 0 \leq y \leq {1\over(1-x^2 )^{1/4}}\Big\}.$$ Sketch the region $A$. Let $S$ be the solid obtained from rotating $A$ about the $x$-axis. Compute the volume of $S$.
Heat Transfer in Deformed Solids
In a previous blog post, we presented the applications of conjugate heat transfer involving immobile solids. The case of immobile solids simplifies the heat equation to be solved and is often a good approximation to the temperature field. Today, we will complete the description of the physics that account for thermoelastic effects of the material when heat transfer and solid mechanics are coupled.
Material and Spatial Frames
Before going into the physics, we should briefly recall the system of frames used in COMSOL Multiphysics. When geometric nonlinearities are considered, the Solid Mechanics interface makes the distinction between material and spatial frames. The material frame expresses the physical quantities in the coordinates of the initial state \mathbf{X} = (X, Y, Z), while the spatial frame uses the coordinates \mathbf{x} = (x, y, z) of the current state.
The two figures below present the example of a square subjected to compressive strain. The square is ten centimeters long and its bottom-left corner is initially located at (X, Y) = (1~\textrm{cm}, 1~\textrm{cm}). It is then compressed by boundary loads at its left and right sides. This deformation modifies the position of almost all points of the square. For instance, the bottom-left corner moves to a new location, (x, y) = (1.54~\textrm{cm}, 0.82~\textrm{cm}).
A deformed square represented in material coordinates, initial state on the left and final state on the right. A deformed square represented in spatial coordinates, initial state on the left and final state on the right.
The material coordinates always refer to the same particle in time, which was initially at a given point (X, Y, Z). The momentum equation of Solid Mechanics is formulated in this coordinate system. On the other hand, a point (x, y, z) in spatial coordinates refers to any particle that would be located there at the current state. The heat equation is formulated in this coordinate system.
In these two frames, volume-related physical quantities have different values. For instance, without any mass source, the density in material coordinates remains constant before and after transformation, while the density in spatial coordinates changes according to the volume change. Hence, in order to couple an equation formulated on the material frame (structural mechanics) with another equation formulated on the spatial frame (heat transfer), these values need to be properly evaluated on each frame. The following table lists the conversions of some thermal physical quantities from the material to the spatial frame. These conversions involve the deformation gradient \mathbf{F} = {\partial \mathbf{x}} / {\partial \mathbf{X}} and its determinant, J. Both are evaluated using the displacement field computed by the Solid Mechanics interface.

Quantity | Material | Spatial
Temperature | T | T
Density | \rho_0 | \rho = J^{-1} \rho_0
Thermal conductivity tensor | \mathbf{k}_0 | \mathbf{k} = J^{-1} \mathbf{F} \mathbf{k}_0 \mathbf{F}^T
Thermoelastic damping | W_{\sigma, 0} = \boldsymbol{\alpha} T : \frac{\mathrm{d} \mathbf{S}}{\mathrm{d} t} | W_\sigma = J^{-1} W_{\sigma, 0}
Heat source | Q_0 | Q = J^{-1} Q_0

Conversion of thermal physical quantities from material to spatial frame.
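To make the frame conversions in the table concrete, here is a minimal sketch (not COMSOL code) that applies them to an assumed homogeneous deformation. The stretch factors and material values are illustrative only.

```python
import numpy as np

# Hypothetical homogeneous deformation: stretch by 1.2 in x, compress by 0.9 in y.
F = np.diag([1.2, 0.9, 1.0])       # deformation gradient F = dx/dX
J = np.linalg.det(F)               # volume ratio, here 1.08

rho0 = 7850.0                      # material-frame density (kg/m^3), assumed
rho = rho0 / J                     # spatial-frame density: rho = J^-1 * rho0

k0 = np.diag([44.5, 44.5, 44.5])   # material-frame conductivity (W/(m*K)), assumed
k = (F @ k0 @ F.T) / J             # spatial-frame tensor: k = J^-1 * F k0 F^T

print(J, rho, np.diag(k))
```

Note how a conductivity that is isotropic in the material frame becomes anisotropic in the spatial frame once the deformation is anisotropic, exactly as the figure below illustrates.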
These conversions also reflect the fact that stress and strain affect the heat transfer by modifying the geometrical configuration (represented in the spatial frame). For example, a stretched boundary is more likely to receive a higher amount of heat by radiation (Q_\mathrm{r} > Q_{\mathrm{r}, 0}), as shown below.
Radiative heat flux received at the top surface of a solid, initial state (left) and after stretching the top surface (right).
As another example, the thermal conductivity expression in the spatial frame, which usually relies on the initial-state value \bold{k}_0, involves the quantities \mathbf{F} and J related to the solid strain.
Modification of the thermal conductivity on the spatial frame after deformation of a solid.
Temperature-Dependent Stress and Strain
The equations of Solid Mechanics are defined in the material frame. They relate the displacement, \mathbf{u}, the second Piola-Kirchhoff stress tensor, \mathbf{S}, and the elastic strain tensor, \mathbf{E}_\mathrm{el}, by a linear momentum balance equation and a stress-strain relation:
(1)
(2)
Here, \mathbf{C} is the elasticity tensor, which is often defined from the Young’s modulus and Poisson’s ratio. It may depend on the temperature, as is the case for Carbon Steel 1020.
Young’s modulus of Carbon Steel 1020, depending on the temperature.
Without any plastic effects, the elastic strain tensor, \mathbf{E}_\mathrm{el}, carries the temperature dependence via the thermal strain tensor, \mathbf{E}_\mathrm{th}, according to:
(3)
(4)
(5)
The coefficient of thermal expansion, \boldsymbol{\alpha}, characterizes the ability of the material to contract and expand because of temperature variations. It is often scalar but may more generally take a tensor form. The table below shows a list of typical values of isotropic \boldsymbol{\alpha}.
Material | Coefficient of Thermal Expansion (10^{-6} K^{-1})
Acrylic plastic | 70
Aluminum | 23
Copper | 17
Nylon | 280
Silica glass | 0.55
Structural steel | 12.3

Coefficients of thermal expansion for some materials.
In addition, \boldsymbol{\alpha} can, itself, depend on the temperature as shown by the example below.
Coefficient of thermal expansion of Carbon Steel 1020, depending on the temperature.
As seen in these examples, the values of \boldsymbol{\alpha} are most often of the order of 10^{-5} K^{-1}. Hence, for \mathbf{E}_\mathrm{th} to become significant, a high temperature difference from the reference state is necessary. For instance, aluminum needs to reach about 500 K above the reference temperature to show a thermal elongation of only 1.2%.
Example of thermal expansion of a constrained aluminum beam heated 500 K, using a deformation scale of 1:1.
Note that in the formulation of Equations (3)-(5), the thermal strain is subtracted from the total strain. This is an appropriate approximation for small strains, which the thermal strains normally are, due to usually low values of \boldsymbol{\alpha}. The more accurate multiplicative formulation, valid for large thermal strains, is shown below but not discussed further. This formulation is used for the hyperelastic materials in COMSOL Multiphysics.
(6)
(7)
(8)
The Heat Equation for Deformed Solids
The heat equation is an energy balance equation deduced from the First Law of Thermodynamics. For solids, it takes the following form when formulated on the spatial frame:
(9)
The coupling term W_\sigma is the heat source due to compression or expansion of the solid and is defined by:
(10)
which, in the case of \boldsymbol{\alpha} being independent of temperature, reduces to:
(11)
Here, \boldsymbol{\alpha} is the same coefficient of thermal expansion as in \mathbf{E}_\mathrm{th}. The low value of \boldsymbol{\alpha}, as seen in the table above, has to be compensated for by high enough values of T\,{\mathrm{d} \mathbf{S}}/{\mathrm{d} t} to make W_\sigma a significant heat source, that is:
by a high temperature
by rapid and large variations of stress
We have now described four key contributions to the multiphysics coupling between Heat Transfer and Solid Mechanics:
The influence of strain and stress on thermal quantities and boundary heat fluxes in the material or spatial frames
The temperature dependence of the elasticity matrix
The temperature dependence of the elastic strain tensor via the thermal strain tensor
The heat source, W_\sigma, corresponding to thermoelastic damping in the solid
Next, we will illustrate the last two coupling contributions and show how to handle them in COMSOL Multiphysics with a couple of modeling examples.
Example 1: Thermal Stress in a Turbine Stator Blade
My colleague Nicolas previously described in more detail how to model thermal stress in a turbine stator blade. Here, we display only the results in order to show the effects of J_\mathrm{th}. Because this is a steady-state model, the thermoelastic damping, W_\sigma, can be ignored.
Temperature field on the blade surface, representation in the material frame.
Due to a hot environment, the temperature field shows values between 870 K and 1100 K, compared to the reference temperature of 300 K at which the stator blade initially takes its shape. Such high temperatures make the material more prone to thermal deformations. With an average coefficient of thermal expansion of about 1.2·10^{-5} K^{-1} and an average temperature of about 1070 K, \mathbf{E}_\mathrm{th} is around 0.9%.
The volume expansion, due to thermal effects, for large deformations is \Delta V/V_0 = J_\mathrm{th}-1 (where J_\mathrm{th} was introduced in Equation (8)). It is still a good approximation for a small strain, giving an expansion of around 2.80%. In postprocessing, the actual volume expansion is found to be 2.76%.
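The back-of-the-envelope figures quoted above can be reproduced in a few lines; the averages for \boldsymbol{\alpha} and temperature are the values given in the text, and the cubic form of J_\mathrm{th} follows the multiplicative formulation mentioned earlier.

```python
# Thermal strain and volume expansion for the stator blade example.
# alpha and the temperatures are the average values quoted in the text.
alpha = 1.2e-5            # average coefficient of thermal expansion (1/K)
T_ref, T_avg = 300.0, 1070.0
dT = T_avg - T_ref        # about 770 K above the reference state

E_th = alpha * dT         # linear thermal strain, about 0.9%
J_th = (1.0 + E_th) ** 3  # thermal volume ratio (multiplicative form)
print(E_th, J_th - 1.0)   # volume expansion of about 2.80%
```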
Temperature field and deformation of the stator blade, exaggerated plot with a scale factor of 3 for more visibility.
Example 2: Transient Analysis of a Bracket with Heat Transfer
The Bracket — Transient Analysis model is available both in the Structural Mechanics Module Model Library and the Model Gallery. In this model, the arms of the brackets move according to rapid time-dependent loads. Consequently, small variations of temperature should occur.
The existing model neglects these thermal effects, so we need to add a new Heat Transfer in Solids interface.
Then, we add the two multiphysics features below to couple the Heat Transfer in Solids and Solid Mechanics interfaces:
Thermal Expansion: This modifies the thermal strain tensor, \mathbf{E}_\mathrm{th}, applied on the whole bracket domain and accounts for the thermoelastic heat source, W_\sigma.
Temperature Coupling: This couples the temperature variable computed by the Heat Transfer in Solids interface with the Solid Mechanics interface.
The study can also be extended to 30 milliseconds to observe more load periods.
Starting from an isothermal profile of 20°C everywhere, the small temperature variations lead to a negligible thermal strain tensor. The main contribution to thermal effects is now the thermoelastic heat source due to rapid stress variations.
Temperature profile of the bracket over time, exaggerated plot with a scale factor of 10 for more visibility.
Differences of about 0.8 K can be observed between the extreme temperatures in the bracket. The heating and cooling is, as expected, concentrated at the corners, where the stress is highest and its variations are strongest.
Conclusion
The heat transfer in a deformed solid is numerically computed by solving the heat equation and the momentum balance equation. For practical reasons, we made the distinction between two systems of coordinates:
The material frame, where the equation of motion is formulated
The spatial frame, for the heat equation
Volume-related quantities in both frames have different values and need a conversion from each other, in particular for specific energies and density.
The two governing equations each contain coupling terms that make the solid motion dependent on the temperature and the heat transfer dependent on the solid deformation. As shown in the previous two examples, COMSOL Multiphysics provides appropriate functionalities to conveniently account for them.
When temperatures remain near the reference state and stress variations are not too rapid, these coupling effects are negligible. Otherwise, they should be included in the model formulation.
To delve deeper into this topic, you can download the files related to the models mentioned here and read a couple of related blog posts via the links in the section below.
Further Resources
Editor’s note: This blog post was updated on 7/23/2015 to be consistent with version 5.1 of COMSOL Multiphysics.
How to Model Residual Stresses Using COMSOL Multiphysics
Today, we will introduce the concept of residual stresses in structural mechanics and find out how to compute them by taking the example of a deep metal drawing process. First, we will explain how they can be computed and interpreted in a bending beam example with or without work hardening. Then, we will introduce a sheet metal forming model.
What Are Residual Stresses?
Residual stresses are self-equilibrating stresses that remain after performing the unloading of an elastic-plastic structure. During the manufacturing process of a mechanical part, residual stresses will be introduced. These will influence the part’s fatigue, failure, and even corrosion behaviors.
Indeed, uncontrolled residual stresses may cause a structure to fail prematurely. Although residual stresses may alter the performance, or even lead to the failure of manufactured products, some applications actually rely on them. For instance, brittle materials, such as glass in smartphone screens, are often manufactured so that compressive residual stresses are induced on the surface to avoid crack-tip propagation.
For these reasons, residual stresses play an important role in mechanical projects as a whole. Only through qualitative and quantitative analysis of these stresses is it possible to determine the most suitable machining processes for a given application. These types of analyses also help you discover the optimal amount of material to be used for their reliability or the most suitable shape that needs to be designed, in order to avoid malfunctions and failures.
Beam Under Pure Bending
Let’s consider the following slender beam with a rectangular cross section, depth a, and width b. The beam is fixed at the left-hand side and a bending moment is applied on the free end.
Computing Residual Stresses
Based on the beam theory, it turns out that the bending moment is constant in this case and the stress can be written as:
(1)
where I_z is the moment of inertia about the z-axis.
As M_\mathrm{b} increases, the beam first behaves in an elastic manner, but after reaching its yield moment, M_y, it begins to take on plastic behavior. This leads to an elastic-plastic cross section. Once the plastic zone has propagated through the entire cross section, the ultimate bending moment, M_\mathrm{ult}, that the beam can carry is determined. Here, it is assumed that the beam will collapse at such a moment and that it has a perfectly plastic behavior.
The outer fibers of the beam will reach the yield point first, while the core fibers remain elastic. Thus, the previous equation applied to the outer fibers of the beam provides the first yielding moment:
(2)
where \sigma_\mathrm{yield} is the yield stress.
Under an elastic-plastic moment, M_\mathrm{ep} < M_\mathrm{ult}, the plastic zone propagates through the thickness by a distance of h_\mathrm{p} at each side of the beam, as shown below.
Plastic zone penetration in a rectangular cross-section beam.
The total moment can be divided into an elastic part, M_e, and a plastic part, M_p, such that:
(3)
where I_\mathrm{e}=\frac{a(b-2h_\mathrm{p})^3}{12} is the elastic core moment of inertia about the z-axis.
Combining the last two expressions, we get the following:
(4)
When an elastic-perfectly plastic beam is unloaded from M_\mathrm{ep}, a state of residual stress, \sigma_\mathrm{r}, remains in the beam cross section. The beam attempts to recover its initial shape through the recovery of the elastic bending stress, \sigma_\mathrm{e}. Here, it is assumed that purely elastic unloading occurs after loading to M_\mathrm{ep}, corresponding to a state of elastic-plastic stress, \sigma. The residual stresses can be computed as the difference between the elastic-plastic stress and the purely elastic stress, that is, the stress you would have if plastic behavior were not involved.
(5)
The elastic bending theory gives the recovered elastic stress as:
(6)
Assuming a perfectly plastic behavior, the stress \sigma in the plastic zone (in other terms, \frac{b}{2}-h_\mathrm{p} \le |y| \le \frac{b}{2}) remains constant and equal to \sigma_\mathrm{yield}. Therefore, according to Equation (5), the residual stresses can be written as:
(7)
In the elastic zone (in other terms, 0 \le |y| \le \frac{b}{2}-h_\mathrm{p}), the beam theory provides the applied stress as:
(8)
Therefore, the residual stress is then deduced as:
(9)
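The residual stress profile derived above can be checked numerically. The sketch below assumes illustrative values for the cross section and yield stress, uses the standard split of the elastic-plastic moment into elastic-core and yielded-strip contributions (consistent with Equation (4)), and verifies that the residual stresses are self-equilibrating: zero net force and zero net moment, to quadrature accuracy.

```python
import numpy as np

# Elastic-perfectly plastic rectangular beam (depth a, width b), plastic zone
# of depth h_p on each side; numbers chosen so h_p = b/4 = 0.01 m as in the text.
a, b = 0.01, 0.04            # cross-section dimensions (m), assumed values
sigma_y = 250e6              # yield stress (Pa), assumed value
h_p = b / 4.0

I_z = a * b**3 / 12.0                      # full-section moment of inertia
I_e = a * (b - 2 * h_p)**3 / 12.0          # elastic-core moment of inertia
M_e = sigma_y * I_e / (b / 2 - h_p)        # elastic part of the moment
M_p = sigma_y * a * h_p * (b - h_p)        # plastic part (two yielded strips)
M_ep = M_e + M_p                           # total elastic-plastic moment

y = np.linspace(-b / 2, b / 2, 2001)
# Loaded stress: yield value in the plastic zone, linear in the elastic core.
sigma = np.where(np.abs(y) > b / 2 - h_p,
                 sigma_y * np.sign(y),
                 sigma_y * y / (b / 2 - h_p))
sigma_r = sigma - M_ep * y / I_z           # subtract the elastic recovery

def trapz(f, x):
    """Composite trapezoidal rule (avoids numpy version differences)."""
    return float(np.sum((f[1:] + f[:-1]) * np.diff(x)) / 2.0)

# Residual stresses are self-equilibrating: zero net force and zero net moment.
dF = trapz(sigma_r, y) * a
dM = trapz(sigma_r * y, y) * a
print(M_ep, dF, dM)
```

As a sanity check, M_\mathrm{ep} lands between the first-yield moment \sigma_\mathrm{yield} a b^2/6 and the ultimate moment \sigma_\mathrm{yield} a b^2/4, as it must.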
Note that after the external moment has been removed, the beam will still have some permanent displacement due to plastic deformation, but it will also have recovered some of the displacement that was present at the peak load. This springback effect is important when you want to achieve a controlled plastic deformation.
When modeling the beam in 2D, we could choose a plane stress assumption, taking Poisson's ratio \nu=0 to match the 1D beam theory, which does not account for the Poisson effect. In COMSOL Multiphysics, you can model 2D plane stress by selecting a 2D space dimension and choosing the Solid Mechanics interface.
Computing Residual Stresses in COMSOL Multiphysics
Here, we will show how to use the Solid Mechanics interface in 2D to compute the residual stresses in the beam cross section.
A snapshot of the 2D beam model using the Solid Mechanics interface.
According to the snapshot above, we define variables to evaluate the theoretical residual stresses we worked out in the section above. Those values will be used to compare the computed results with the theoretical ones.
The applied bending moment is ramped progressively. A Plasticity node is added to account for the uniaxial plastic behavior that may occur through the beam thickness. Plastic flow begins once \sigma_x reaches the critical value \sigma_\mathrm{yield}. Any fiber that has reached this value will remain at a constant state of stress during loading.
In the graph below, you can see the stress distribution along the Y-axis of the cross section. The applied bending moment has been computed from Equation (4) for a plastic zone with depth h_\mathrm{p}=\frac{b}{4}=0.01 \ \mathrm{m}. According to the blue curve, the COMSOL Multiphysics results match this value perfectly. The red curve represents the residual stresses after one loading-unloading cycle. It is worth noting that the residual stresses obtained may also be found by subtracting the elastic curve (green) from the elastic-perfectly plastic curve (blue).
Stress value after elastic-plastic loading, elastic loading, and unloading.
Equations (7) and (9) have been defined as variables and compared to the solution computed in COMSOL Multiphysics. As shown in the previous screenshot, you can create a “switch” using the if() operator, so that the two expressions representing the analytical residual stresses are gathered together in one expression. The next graph shows both analytical and computed residual stresses after two loading-unloading cycles.
Analytical vs. computed residual stresses.
COMSOL Multiphysics enables you to model the hysteresis cycle of a given material. In the case of perfectly plastic behavior, as depicted below, the second load cycle already provides a stable stress-strain response that is representative of each consecutive load cycle. For instance, you can use these load cycles to carry out a fatigue analysis.
Hysteresis behavior after three loading-unloading cycles.
Last but not least, let’s find out how strain-hardening behavior influences residual stresses and loading-unloading cycles. So far, we have been dealing with a perfectly plastic material: the yield stress remains constant, no matter the number of cycles or whether a tensile or a compressive load is applied. Equation (5) is only valid as long as reverse yielding does not occur. Since reverse plastic deformation during unloading has a negative effect on the performance, it is quite important to figure out under which conditions reverse yielding is likely to occur.
A ductile material that is subjected to an increasing stress in one direction (in tension, for instance) and then unloaded will behave differently when loaded in the reverse direction. It is found that the compressive yield stress is now lower than that measured in tension. This is called the Bauschinger effect. Similarly, an initial compression lowers the subsequent tensile yield stress. The figure below displays this effect over two stress cycles:
Hysteresis behavior with kinematic strain hardening.
Now, let’s move on to a more sophisticated mechanical process in which residual stresses are of great importance: the sheet metal forming process.
Die Metal Forming
Die forming is a widespread sheet metal forming manufacturing process. The workpiece, usually a metal sheet, is permanently reshaped around a die through plastic deformation by forming and drawing processes. A blankholder applies pressure to the blank, leading the metal sheet to flow against the die.
In order to avoid cracks, tears, wrinkles, and too much thinning and stretching, you can turn to simulations. They can also be useful to estimate and overcome the springback phenomenon. This refers to how the workpiece will attempt to recover its initial shape once the forming process is done and the forming tools are removed. Springback can lead the formed blank to reach an unexpected state of warping. To cope with this effect, the sheet can be over-bent. Thus, the die, punch, and blankholder must be manufactured not only to match the actual shape of the object, but also to allow for springback.
In this study, the sheet is made of aluminum. A Hill orthotropic elastoplastic material model with isotropic hardening is used to characterize the plastic deformation. It has been observed that metal sheets in the deep drawing process no longer behave isotropically; there tends to be less plastic deformation through the thickness. Therefore, in die forming and deep drawing of sheets, we need a kind of anisotropy where the sheet is isotropic in-plane and has an increased strength in the perpendicular direction, called transverse isotropy.
Below, we have illustrated the forming tools that are used in the process.
Forming tools: The die is shown in red, punch in blue, blankholder in pink, and the blank in gray.
As mentioned above, simulations can allow for handling several tasks that need to be taken into account whenever such a mechanical process is worked out. For instance, optimization of the corner radius of both the die and the punch can be carried out properly to prevent tearing of the metal sheet. It may also be useful to carry out simulations in order to get the clearance that is needed between the punch and the die, to avoid shearing or cutting of the metal blank.
One of the most challenging aspects is to figure out how much of the metal sheet should be over-bent. When the sheet has been formed, the residual stresses cause the material to spring back towards its initial position, so the sheet must be over-bent to achieve the desired bend angle. Therefore, you have to properly model residual stresses as not to over- or underestimate the springback phenomenon.
The two animations below show the sheet metal forming as well as the springback of the metal blank.
Representation in the RZ-plane of the springback phenomenon.
Simulation of sheet metal forming.
When subjecting the structure to other mechanical loads, the superposition of the residual stresses can reduce the reliability of the structure or even cause irreversible damage. Therefore, the residual stresses must be released as much as possible or be managed so that the structure can withstand the external loads that may be applied. The plot below shows the Hill effective residual stresses that remain around the bend regions after the deep-drawn cup process.
Conclusion and Further Reading
Today, we studied residual stresses in structural mechanics. We introduced a conventional definition, which was first applied to a bending beam example. We simulated this bending example using COMSOL Multiphysics and compared our results to the analytical solution from the beam theory. Then, we explored the importance of the residual stresses in a sheet metal forming example. We saw that any mechanical process induces residual stresses and particular care must be given to release them properly or, at least, be certain that they will not cause any damage.
I read that 'Euclidean distance is not a good distance in high dimensions'. I guess this statement has something to do with the curse of dimensionality, but what exactly? Besides, what is 'high dimensions'? I have been applying hierarchical clustering using Euclidean distance with 100 features. Up to how many features is it 'safe' to use this metric?
A great summary of non-intuitive results in higher dimensions comes from "A Few Useful Things to Know about Machine Learning" by Pedro Domingos at the University of Washington:
[O]ur intuitions, which come from a three-dimensional world, often do not apply in high-dimensional ones. In high dimensions, most of the mass of a multivariate Gaussian distribution is not near the mean, but in an increasingly distant “shell” around it; and most of the volume of a high-dimensional orange is in the skin, not the pulp. If a constant number of examples is distributed uniformly in a high-dimensional hypercube, beyond some dimensionality most examples are closer to a face of the hypercube than to their nearest neighbor. And if we approximate a hypersphere by inscribing it in a hypercube, in high dimensions almost all the volume of the hypercube is outside the hypersphere. This is bad news for machine learning, where shapes of one type are often approximated by shapes of another.
The article is also full of many additional pearls of wisdom for machine learning.
Another application, beyond machine learning, is nearest neighbor search: given an observation of interest, find its nearest neighbors (in the sense that these are the points with the smallest distance from the query point). But in high dimensions, a curious phenomenon arises: the ratio between the nearest and farthest points approaches 1, i.e. the points essentially become uniformly distant from each other. This phenomenon can be observed for a wide variety of distance metrics, but it is more pronounced for the Euclidean metric than, say, the Manhattan distance metric. The premise of nearest neighbor search is that "closer" points are more relevant than "farther" points, but if all points are essentially uniformly distant from each other, the distinction is meaningless.
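This distance concentration is easy to reproduce. The sketch below draws uniform random points in a hypercube and measures the relative gap between the farthest and nearest neighbors of a query point; the gap collapses as the dimension grows.

```python
import numpy as np

rng = np.random.default_rng(0)

def contrast(dim, n_points=1000):
    """Relative gap between farthest and nearest neighbor of a query point."""
    pts = rng.random((n_points, dim))   # uniform points in the unit hypercube
    q = rng.random(dim)                 # a random query point
    d = np.linalg.norm(pts - q, axis=1)
    return (d.max() - d.min()) / d.min()

for dim in (2, 10, 100, 1000):
    print(dim, contrast(dim))
```

In low dimensions the nearest neighbor is typically many times closer than the farthest point; in high dimensions the two distances differ by only a few tens of percent.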
From Charu C. Aggarwal, Alexander Hinneburg, Daniel A. Keim, "On the Surprising Behavior of Distance Metrics in High Dimensional Space":
It has been argued in [Kevin Beyer, Jonathan Goldstein, Raghu Ramakrishnan, Uri Shaft, "When Is 'Nearest Neighbor' Meaningful?"] that under certain reasonable assumptions on the data distribution, the ratio of the distances of the nearest and farthest neighbors to a given target in high dimensional space is almost 1 for a wide variety of data distributions and distance functions. In such a case, the nearest neighbor problem becomes ill defined, since the contrast between the distances to different data points does not exist. In such cases, even the concept of proximity may not be meaningful from a qualitative perspective: a problem which is even more fundamental than the performance degradation of high dimensional algorithms.
... Many high-dimensional indexing structures and algorithms use the [E]uclidean distance metric as a natural extension of its traditional use in two- or three-dimensional spatial applications. ... In this paper we provide some surprising theoretical and experimental results in analyzing the dependency of the $L_k$ norm on the value of $k$. More specifically, we show that the relative contrasts of the distances to a query point depend heavily on the $L_k$ metric used. This provides considerable evidence that the meaningfulness of the $L_k$ norm worsens faster with increasing dimensionality for higher values of $k$. Thus, for a given problem with a fixed (high) value for the dimensionality $d$, it may be preferable to use lower values of $k$. This means that the $L_1$ distance metric (Manhattan distance metric) is the most preferable for high dimensional applications, followed by the Euclidean metric ($L_2$). ...
The authors of the "Surprising Behavior" paper then propose using $L_k$ norms with $k<1$. They produce some results which demonstrate that these "fractional norms" exhibit the property of increasing the contrast between farthest and nearest points. This may be useful in some contexts, however there is a caveat: these "fractional norms" are not proper distance metrics because they violate the triangle inequality. If the triangle inequality is an important quality to have in your research, then fractional metrics are not going to be tremendously useful.
The notion of Euclidean distance, which works well in the two-dimensional and three-dimensional worlds studied by Euclid, has some properties in higher dimensions that are contrary to our (maybe just my) geometric intuition, which is also an extrapolation from two and three dimensions.
Consider a $4\times 4$ square with vertices at $(\pm 2, \pm 2)$. Draw four unit-radius circles centered at $(\pm 1, \pm 1)$. These "fill" the square, with each circle touching the sides of the square at two points, and each circle touching its two neighbors. For example, the circle centered at $(1,1)$ touches the sides of the square at $(2,1)$ and $(1,2)$, and its neighboring circles at $(1,0)$ and $(0,1)$. Next, draw a small circle centered at the origin that touches all four circles. Since the line segment whose endpoints are the centers of two osculating circles passes through the point of osculation, it is easily verified that the small circle has radius $r_2 = \sqrt{2}-1$ and that it touches the four larger circles at $(\pm r_2/\sqrt{2}, \pm r_2/\sqrt{2})$. Note that the small circle is "completely surrounded" by the four larger circles and thus is also completely inside the square. Note also that the point $(r_2,0)$ lies on the small circle. Notice also that from the origin, one cannot "see" the point $(2,0)$ on the edge of the square because the line of sight passes through the point of osculation $(1,0)$ of the two circles centered at $(1,1)$ and $(1,-1)$. Ditto for the lines of sight to the other points where the axes pass through the edges of the square.
Next, consider a $4\times 4 \times 4$ cube with vertices at $(\pm 2, \pm 2, \pm 2)$. We fill it with $8$ osculating unit-radius spheres centered at $(\pm 1, \pm 1, \pm 1)$, and then put a smaller osculating sphere centered at the origin. Note that the small sphere has radius $r_3 = \sqrt{3}-1 < 1$ and the point $(r_3,0,0)$ lies on the surface of the small sphere. But notice also that in three dimensions, one can "see" the point $(2,0,0)$ from the origin; there are no bigger spheres blocking the view as happens in two dimensions. These clear lines of sight from the origin to the points where the axes pass through the surface of the cube occur in all larger dimensions as well.
Generalizing, we can consider an $n$-dimensional hypercube of side $4$ and fill it with $2^n$ osculating unit-radius hyperspheres centered at $(\pm 1, \pm 1, \ldots, \pm 1)$ and then put a "smaller" osculating sphere of radius $$r_n = \sqrt{n}-1\tag{1}$$ at the origin. The point $(r_n,0,0, \ldots, 0)$ lies on this "smaller" sphere. But, notice from $(1)$ that when $n = 4$, $r_n = 1$ and so the "smaller" sphere has unit radius and thus really does not deserve the soubriquet of "smaller" for $n\geq 4$. Indeed, it would be better if we called it the "larger sphere" or just "central sphere". As noted in the last paragraph, there is a clear line of sight from the origin to the points where the axes pass through the surface of the hypercube. Worse yet, when $n > 9$, we have from $(1)$ that $r_n > 2$, and thus the point $(r_n, 0, 0, \ldots, 0)$ on the central sphere lies outside the hypercube of side $4$ even though it is "completely surrounded" by the unit-radius hyperspheres that "fill" the hypercube (in the sense of packing it). The central sphere "bulges" outside the hypercube in high-dimensional space. I find this very counter-intuitive because my mental translations of the notion of Euclidean distance to higher dimensions, using the geometric intuition that I have developed from the 2-space and 3-space that I am familiar with, do not describe the reality of high-dimensional space.
My answer to the OP's question "Besides, what is 'high dimensions'?" is $n \geq 9$.
It is a matter of
signal-to-noise. Euclidean distance, due to the squared terms, is particularly sensitive to noise; but even Manhattan distance and "fractional" (non-metric) distances suffer.
I found the studies in this article very enlightening:
Zimek, A., Schubert, E. and Kriegel, H.-P. (2012),
A survey on unsupervised outlier detection in high-dimensional numerical data. Statistical Analy Data Mining, 5: 363–387. doi: 10.1002/sam.11161
It revisits the observations made in, e.g., On the Surprising Behavior of Distance Metrics in High Dimensional Space by Aggarwal, Hinneburg and Keim, mentioned by @Pat. But it also shows how our synthetic experiments are misleading, and that in fact
high-dimensional data can become easier, if you have a lot of (redundant) signal and the new dimensions add little noise.
The last claim is probably most obvious when considering duplicate dimensions. Mapping your data set $x,y \rightarrow x,y,x,y,x,y,x,y,...,x,y$ increases representative dimensionality, but does not at all make Euclidean distance fail. (See also: intrinsic dimensionality)
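A sketch of why duplicated dimensions are harmless: every pairwise distance is scaled by the same constant, so all nearest-neighbor rankings survive (the sample points here are illustrative):

```python
import math

def euclid(p, q):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

def duplicate(p, copies=4):
    # x, y -> x, y, x, y, ... : higher representative dimensionality,
    # but identical intrinsic dimensionality.
    return p * copies

a, b = (0.0, 0.0), (1.0, 2.0)
base = euclid(a, b)
dup = euclid(duplicate(a), duplicate(b))
# Each squared distance is multiplied by `copies`, so every distance is
# scaled by the same factor sqrt(copies) and neighbor order is unchanged.
print(dup / base)  # 2.0 for copies=4
```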
So in the end, it still depends on your data. If you have a lot of useless attributes, Euclidean distance will become useless. If you could easily embed your data in a low-dimensional data space, then Euclidean distance should also work in the full dimensional space. In particular for
sparse data, such as TF vectors from text, this does appear to be the case: the data is of much lower dimensionality than the vector space model suggests.
Some people believe that cosine distance is better than Euclidean on high-dimensional data. I do not think so: cosine distance and Euclidean distance are
closely related; so we must expect them to suffer from the same problems. However, textual data where cosine is popular is usually sparse, and cosine is faster on sparse data - so for sparse data, there are good reasons to use cosine; and because the data is sparse, the intrinsic dimensionality is much less than the vector space dimension.
See also this reply I gave to an earlier question: https://stats.stackexchange.com/a/29647/7828
The best place to start is probably to read
On the Surprising Behavior of Distance Metrics in High Dimensional Space by Aggarwal, Hinneburg and Keim. There is a currently working link here (pdf), but it should be very google-able if that breaks. In short, as the number of dimensions grows, the relative Euclidean distance between a point in a set and its closest neighbour, and between that point and its furthest neighbour, changes in some non-obvious ways. Whether or not this will badly affect your results depends a great deal on what you're trying to achieve and what your data's like.
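The shrinking contrast between nearest and farthest distances is easy to simulate (a rough sketch; the sample size and seed are arbitrary choices of mine):

```python
import math
import random

def contrast(dim, n_points=200, seed=0):
    """Relative spread (d_max - d_min) / d_min of the distances from the
    origin to uniform random points in the unit cube [0, 1]^dim."""
    rng = random.Random(seed)
    dists = []
    for _ in range(n_points):
        p = [rng.random() for _ in range(dim)]
        dists.append(math.sqrt(sum(x * x for x in p)))
    return (max(dists) - min(dists)) / min(dists)

# In low dimension the nearest and farthest points differ a lot;
# in high dimension all distances concentrate around a common value.
print(contrast(2), contrast(500))
```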
Euclidean distance is very rarely a good distance to choose in Machine Learning and this becomes more obvious in higher dimensions. This is because most of the time in Machine Learning you are not dealing with a Euclidean Metric Space, but a Probabilistic Metric Space and therefore you should be using probabilistic and information theoretic distance functions, e.g. entropy based ones.
Humans like euclidean space because it's easy to conceptualize, furthermore it's mathematically easy because of linearity properties that mean we can apply linear algebra. If we define distances in terms of, say Kullback-Leibler Divergence, then it's harder to visualize and work with mathematically.
As an analogy, imagine a circle centred at the origin. Points are distributed evenly. Suppose a randomly-selected point is at (x1, x2). The Euclidean distance from the origin is ((x1)^2 + (x2)^2)^0.5
Now, imagine points evenly distributed over a sphere. That same point (x1, x2) will now probably be (x1, x2, x3). Since, in an even distribution, only a few points have one of the co-ordinates as zero, we shall assume that [x3 != 0] for our randomly-selected evenly-distributed point. Thus, our random point is most likely (x1, x2, x3) and not (x1, x2, 0).
The effect of this is: any random point is now at a distance of ((x1)^2 + (x2)^2 + (x3)^2)^0.5 from the origin of the 3-D sphere. This distance is larger than that for a random point near the origin of a 2-D circle. This problem gets worse in higher dimensions, which is why we choose metrics other than Euclidean distance when working in higher dimensions.
EDIT: There's a saying which I recall now: "Most of the mass of a higher-dimensional orange is in the skin, not the pulp", meaning that in higher dimensions
evenly distributed points are "nearer" (in Euclidean distance) to the boundary than to the origin.
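The "orange skin" claim follows directly from the volume ratio: the fraction of an $n$-ball's volume in the outer shell of relative thickness $\epsilon$ is $1-(1-\epsilon)^n$, which tends to 1. A sketch:

```python
def shell_mass_fraction(n, skin=0.1):
    """Fraction of an n-ball's volume lying within the outer `skin`
    fraction of its radius: 1 - (1 - skin)^n."""
    return 1 - (1 - skin) ** n

# In 3-D only ~27% of the volume is in the outer 10% shell;
# in 100-D essentially all of it is.
print(shell_mass_fraction(3), shell_mass_fraction(100))
```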
Side note: Euclidean distance is not TOO bad for real-world problems due to the 'blessing of non-uniformity', which basically states that for real data, your data is probably NOT going to be distributed evenly in the higher-dimensional space, but will occupy a small clustered subset of the space. This makes sense intuitively: if you're measuring 100 quantities about humans like height, weight, etc., an even distribution over the dimension space just does not make sense, e.g. a person with (height=65 inches, weight=150 lbs, avg_calorie_intake=4000), which is just not possible in the real world.
Another facet of this question is this:
Very often high dimensions in (machine-learning/statistical) problems are a result of over-constrained features.
Meaning the dimensions are NOT independent (or uncorrelated), but Euclidean metrics assume (at least) un-correlation and thus may not produce the best results.
So to answer your question: the number of "high dimensions" is related to how many features are inter-dependent, redundant, or over-constrained.
Additionally: It is a theorem by Csiszar (et al.) that Euclidean metrics are "natural" candidates for inference when the features are of certain forms
This paper may help you too: "Improved sqrt-cosine similarity measurement" (https://journalofbigdata.springeropen.com/articles/10.1186/s40537-017-0083-6). It explains why Euclidean distance is not a good metric in high-dimensional data and what the best replacement for it is. Euclidean distance is the L2 norm, and by decreasing the value of k in the Lk norm we can alleviate the problem of distance in high-dimensional data. You can find the references in this paper as well.
|
Ahlskog, M and Mukherjee, AK and Menon, Reghu (2001)
Low temperature conductivity of metallic conducting polymers. In: Synthetic Metals, 119 (1-3). pp. 457-458.
Abstract
In several metallic conducting polymers, both positive and negative temperature coefficient of resistivity (TCR) has been observed at low temperatures; and this can be easily tuned by disorder, pressure and magnetic field. This sign change in TCR is related to the sign change in 'm' [ $\sigma = \sigma_o + mT^\frac{1}{2} $ ] as a function of the resistivity ratio $[\rho_r \sim \rho(300 K) / \rho(1.4 K)]$ in both disorder-tuned and pressure-tuned samples of doped polypyrrole and poly(3-methyl thiophene). In both cases, the zero-crossing of 'm' occurs at resistivity ratio around 2. This shows that the TCR, sign of 'm' and the resistivity ratio are consistently related to each other in metallic conducting polymers.
|
This is a question I feel too stupid asking my professor about. I'm having a mental block remembering how this works even though I think I understood it at one point:
I know the following properties:
$$x(t) {\longrightarrow}\boxed{\textrm{LTI System}}{\longrightarrow} y(t) = x(t) \star h(t) \longleftrightarrow X(j\omega)H(j\omega)$$
$$x(t) = e^{(j\omega_0 t)} \overset{\mathcal F}{\longleftrightarrow}X(j\omega) = 2\pi\delta(\omega-\omega_0)$$
So why is this true:
$$x(t) = e^{(j\omega_0t)}{\longrightarrow}\boxed{\textrm{LTI}}{\longrightarrow} y(t) = e^{(j\omega_0t)}H(j\omega)$$
instead of this:
$$e^{(j\omega_0t)}{\longrightarrow}\boxed{\textrm{LTI}}{\longrightarrow} y(t) = 2\pi\delta(\omega-\omega_0)H(j\omega)$$
I know I'm missing something here or have some fundamental misunderstanding, but I can't seem to catch what it is. I'm taking a DSP course, but it's been quite a while since I took basic signals and systems. If anyone could help me out I'd really appreciate it.
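One way to see the consistency numerically is the discrete-time analogue of the eigenfunction property (the impulse response values below are my own arbitrary choice, not from the question): the exponential comes out *scaled* by the frequency response evaluated at $\omega_0$, whereas the $2\pi\delta(\omega-\omega_0)H(j\omega)$ expression is the frequency-domain pairing, not the time-domain output.

```python
import cmath

# Discrete-time sketch: for an LTI system with impulse response h,
# the input x[n] = e^{j w0 n} is an eigenfunction with eigenvalue
# H(e^{j w0}) = sum_k h[k] e^{-j w0 k}.
h = [0.5, 0.3, 0.2]   # arbitrary FIR impulse response (my choice)
w0 = 0.7

H_w0 = sum(hk * cmath.exp(-1j * w0 * k) for k, hk in enumerate(h))

def y(n):
    # Convolution sum for the steady-state output at time n
    return sum(hk * cmath.exp(1j * w0 * (n - k)) for k, hk in enumerate(h))

n = 10
x_n = cmath.exp(1j * w0 * n)
# The exponential is scaled by H at w0 -- it is not replaced by its transform.
print(abs(y(n) - H_w0 * x_n))  # ~ 0 up to floating-point error
```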
|
Can I be a pedant and say that if the question states that $\langle \alpha \vert A \vert \alpha \rangle = 0$ for every vector $\lvert \alpha \rangle$, that means that $A$ is everywhere defined, so there are no domain issues?
Gravitational optics is very different from quantum optics, if by the latter you mean the quantum effects of interaction between light and matter. There are three crucial differences I can think of: We can always detect uniform motion with respect to a medium by a positive result to a Michelson...
Hmm, it seems we cannot just superimpose gravitational waves to create standing waves
The above search is inspired by last night's dream, which took place in an alternate version of my 3rd year undergrad GR course. The lecturer talks about a weird equation in general relativity that has a huge summation symbol, and then talked about gravitational waves emitting from a body. After that lecture, I then asked the lecturer whether gravitational standing waves are possible, as I imagine the hypothetical scenario of placing a node at the end of the vertical white line
[The Cube] Regarding The Cube, I am thinking about an energy level diagram like this
where the infinitely degenerate level is the lowest energy level when the environment is also taken account of
The idea is that if the possible relaxations between energy levels is restricted so that to relax from an excited state, the bottleneck must be passed, then we have a very high entropy high energy system confined in a compact volume
Therefore, as energy is pumped into the system, the lack of direct relaxation pathways to the ground state plus the huge degeneracy at higher energy levels should result in a lot of possible configurations to give the same high energy, thus effectively create an entropy trap to minimise heat loss to surroundings
@Kaumudi.H there is also an addon that allows Office 2003 to read (but not save) files from later versions of Office, and you probably want this too. The installer for this should also be in \Stuff (but probably isn't if I forgot to include the SP3 installer).
Hi @EmilioPisanty, it's great that you want to help me clear out confusions. I think we have a misunderstanding here. When you say "if you really want to "understand"", I've thought you were mentioning at my questions directly to the close voter, not the question in meta. When you mention about my original post, you think that it's a hopeless mess of confusion? Why? Except being off-topic, it seems clear to understand, doesn't it?
Physics.stackexchange currently uses 2.7.1 with the config TeX-AMS_HTML-full which is affected by a visual glitch on both desktop and mobile version of Safari under latest OS, \vec{x} results in the arrow displayed too far to the right (issue #1737). This has been fixed in 2.7.2. Thanks.
I have never used the app for this site, but if you ask a question on a mobile phone, there is no homework guidance box, as there is on the full site, due to screen size limitations. I think it's a safe assumption that many students are using their phone to place their homework questions, in wh...
@0ßelö7 I don't really care for the functional analytic technicalities in this case - of course this statement needs some additional assumption to hold rigorously in the infinite-dimensional case, but I'm 99% that that's not what the OP wants to know (and, judging from the comments and other failed attempts, the "simple" version of the statement seems to confuse enough people already :P)
Why were the SI unit prefixes, i.e.\begin{align}\mathrm{giga} && 10^9 \\\mathrm{mega} && 10^6 \\\mathrm{kilo} && 10^3 \\\mathrm{milli} && 10^{-3} \\\mathrm{micro} && 10^{-6} \\\mathrm{nano} && 10^{-9}\end{align}chosen so that their exponents are multiples of 3?Edit: Although this questio...
the major challenge is how to restrict the possible relaxation pathways so that in order to relax back to the ground state, at least one lower rotational level has to be passed, thus creating the bottleneck shown above
If two vectors $\vec{A} =A_x\hat{i} + A_y \hat{j} + A_z \hat{k}$ and$\vec{B} =B_x\hat{i} + B_y \hat{j} + B_z \hat{k}$, have angle $\theta$ between them then the dot product (scalar product) of $\vec{A}$ and $\vec{B}$ is$$\vec{A}\cdot\vec{B} = |\vec{A}||\vec{B}|\cos \theta$$$$\vec{A}\cdot\...
@ACuriousMind I want to give a talk on my GR work first. That can be hand-wavey. But I also want to present my program for Sobolev spaces and elliptic regularity, which is reasonably original. But the devil is in the details there.
@CooperCape I'm afraid not, you're still just asking us to check whether or not what you wrote there is correct - such questions are not a good fit for the site, since the potentially correct answer "Yes, that's right" is too short to even submit as an answer
|
The diagram shows a British 50 pence coin.
The seven arcs $AB$, $BC$, . . . , $FG$, $GA$ are of equal length and each arc is formed from the circle of radius a having its centre at the vertex diametrically opposite the mid-point of the arc. Show that the area of the face of the coin is
$$\frac{a^2}{2}(\pi-7\tan\frac{\pi}{14})$$
How can I prove it?
|
Suppose $p \in A \cap B$. To show connectedness of $A \cup B$, write $A \cup B=U \cup V$, where $U$ and $V$ are open disjoint subsets of $A \cup B$. It suffices to show $U$ or $V$ is empty...
So where is $p$? It must be in $U$ or in $V$, say $p\in U$ for definiteness; by symmetry it doesn't matter (or rename letters in the following proof).
Then $A = (A \cap U) \cup (A \cap V)$ (simple set theory). Also, $U \cap A$ is open in $A$ and $V \cap A$ is open in $A$ too (the subspace topology w.r.t. a subspace topology is again the subspace topology). And these sets are still disjoint. And we
know $A$ is connected, so both sets cannot be non-empty at the same time, and we already know $p \in U \cap A \neq \emptyset$, so $V \cap A = \emptyset$.
The exact same argument (mutatis mutandis) can be made for $B$ (also connected) as well so $V \cap B=\emptyset$.
But then $$V = V \cap (A \cup B) = (V \cap A) \cup (V \cap B) = \emptyset \cup \emptyset = \emptyset$$
and we are done: $A \cup B$ is connected.
|
Let $1\leq p < \infty$ and consider a sequence $(x_{n})_{n}\subseteq L^{p}[0,1]$. Show the equivalence of:
$1.$ $x_{n} \xrightarrow{ w} 0$
$2.$ $\sup\limits_{n \in \mathbb N} \vert \vert x_{n}\vert\vert_{p}<\infty $ and $\int_{A}x_{n}(t)\,dt\xrightarrow{n \to \infty} 0$ for any Borel set $A$ on $[0,1]$.
for $1. \Rightarrow 2.$
note that for any $\ell \in (L^{p})^{*}$ we have $\ell(x_{n})\xrightarrow{n \to \infty} 0$ and hence:
$\sup\limits_{n \in \mathbb N}\vert \ell(x_{n})\vert<\infty$ for any $\ell \in (L^{p})^{*}$ but now for $1<p < \infty$ we know that $L^{p}$ is reflexive, so we can write: $\sup\limits_{n \in \mathbb N} \vert \vert x_{n}\vert\vert_{p}=\sup\limits_{n \in \mathbb N} \vert \vert Jx_{n}\vert\vert_{*}<\infty$ by the uniform boundedness principle and the fact that $\sup\limits_{n \in \mathbb N} \vert \ell (x_{n})\vert= \sup\limits_{n \in \mathbb N}\vert Jx_{n}(\ell)\vert$ by reflexiveness. But we have now only shown this for $1< p < \infty$, since $L^{1}$ is not reflexive. How do we show this when $p=1$?
For "$\int_{A}x_{n}(t)\,dt\xrightarrow{n \to \infty} 0$ for any Borel set $A$ on $[0,1]$", consider the constant function $1 \in L^{q}[0,1]$; note that because of Riesz representation there has to exist an $\ell \in (L^{p})^{*}$ so that $\ell(x)=\int 1\cdot x(t)\,dt$ for any $x\in L^{p}$. Then it is clear that $\ell(x_{n})=\int_{0}^{1}x_{n}(t)\,dt\xrightarrow{n \to \infty}0$ by assumption. But how can I show that $\int_{A}x_{n}(t)\,dt\xrightarrow{n \to \infty}0$ for any Borel set $A$? Particularly since I have no assumption on whether the $(x_{n})_{n}$ are positive functions?
for $2. \Rightarrow 1.$
note that for any $\ell \in (L^{p})^{*}$ we can find a unique $y \in L^{q}$ where $\frac{1}{p}+\frac{1}{q}=1$ so that:
Note that $\vert \ell(x_{n})\vert =\vert\int_{0}^{1}y(t)x_{n}(t)\,dt\vert\leq \sup\limits_{n \in \mathbb N} \vert \vert x_{n}\vert\vert_{p}\vert\vert y\vert\vert_{q}<\infty$. Can I conclude from $\int_{A}x_{n}(t)\,dt\xrightarrow{n \to \infty} 0$ for any Borel set $A$ that $x_{n} \to 0$ almost everywhere? I am not sure, as the assumption of positivity of the $x_{n}$ is once again missing. If I could use this and dominated convergence, then
$\lim\limits_{n\to \infty}\ell(x_{n})=\lim\limits_{n\to \infty}\int_{0}^{1}y(t)x_{n}(t)dt=0$ and we are done.
|
Change Points
Introduction
Page's (1954, 1955) classical formulation; Shiryaev (1963) and Lorden (1971) then developed it further.
One is concerned with sequential detection of a change-point, which represents a disruption in a continuous production process.
We then consider related problems in fixed samples, and return to sequential detection motivated by problems involving parallel streams of data subject to disruptions in some fraction of them. We do not discuss applications to finance.
Page's problem
Suppose $X_1,\ldots,X_m$ are independent observations.
for $j\le K$, they have the distribution $F_0$; for $j > K$, they have the distribution $F_1$,
where $F_i$ may be completely specified or may depend on unknown parameters.
Page’s solution and Barnard’s Suggestion
For sequential detection, Page (1954) suggested the stopping rule
$n$ should be $t$.
where $S_t$ is the $t$-th cumulative sum (CUSUM) of scores $Z(X_i)$.
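Page's rule is conventionally written as stopping at the first $t$ with $S_t - \min_{0\le k\le t} S_k \ge b$; a minimal sketch (the threshold and score values below are illustrative):

```python
def page_cusum_stop(z_scores, b):
    """Page's stopping rule in its standard form: stop at the first t with
    S_t - min_{0 <= k <= t} S_k >= b, where S_t is the CUSUM of scores.
    Returns the (1-based) stopping time, or None if never triggered."""
    s, running_min = 0.0, 0.0
    for t, z in enumerate(z_scores, start=1):
        s += z
        if s - running_min >= b:
            return t
        running_min = min(running_min, s)
    return None

# Scores drift negative under control, then shift positive at t = 6.
scores = [-0.5, -0.2, -0.4, -0.3, -0.1, 1.0, 1.2, 0.9, 1.1, 1.3]
print(page_cusum_stop(scores, b=3.0))  # -> 8
```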
Barnard (1959) discussed graphical methods for implementing Page's sequential procedure and suggested a modified procedure for the case of normally distributed random variables with a mean value subject to change from an initial value of 0. Letting $S_t = \sum_0^t X_i$, Barnard suggested the stopping rule
Note that if $F_1(F_0)$ denotes a normal distribution with unit variance and mean value equal to $\mu_1=\delta$ ($\mu_0=0$), the log-likelihood ratio at $n>K$ is
Maximization w.r.t. $\delta$ and $k < n$ leads to $\max_{0\le k < t}(S_t-S_k)^2/[2(t-k)]$, so Barnard’s suggestion can be described as stopping as soon as the generalized likelihood ratio statistic exceeds a suitable threshold.
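The maximization above can be computed directly from the cumulative sums; a minimal sketch (the data values are illustrative):

```python
def barnard_glr(x):
    """max over 0 <= k < t of (S_t - S_k)^2 / (2 (t - k)), with S_0 = 0:
    the generalized likelihood ratio statistic for a mean shift from 0
    in unit-variance normal observations."""
    s = [0.0]
    for xi in x:
        s.append(s[-1] + xi)
    t = len(x)
    return max((s[t] - s[k]) ** 2 / (2 * (t - k)) for k in range(t))

# A mean shift in the last few observations drives the statistic up.
print(barnard_glr([0.1, -0.2, 0.0, 2.0, 2.2, 1.8]))
```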
Shiryaev’s and Lorden’s contributions
Shiryaev (1963) considered the case of completely specified $F_0$ and $F_1$. He assumed that $K$ is random and used optimal stopping theory to describe an exact solution to a well-formulated Bayesian version of the problem, and he computed the Bayes solution in a continuous time formulation involving Brownian motion.
loss: $\mathbf 1_{\{K>n\}}+C(n-K)^+$; approximation: under a geometric prior, with $P(\text{a change in any bounded interval})$ vanishingly small
Lorden (1971) took a maximum likelihood approach, in the case of two completely specified distributions, leading to the stopping rule
Some related fixed sample problems
Having observed $X_1,\ldots,X_m$, suppose we are interested in testing the hypothesis that there is no change-point. The statistic suggested by Page was
which is the likelihood ratio statistic.
Hypothesis Testing When a Nuisance Parameter Is Only Present under the Alternative
“semi-linear” regression example:
where $f$ is nonlinear and $\theta$ can be multidimensional.
The hypothesis to be tested is that $\beta = 0$; under this hypothesis the parameter $\theta$ has no meaning. The special case $f_i(\theta) = (x_i-\theta)^+$ is in the spirit of a change-point problem, where the change occurs in the slope of a linear regression.
|
Definition:Division
Definition
Let $\struct {F, +, \times}$ be a field.
Let the zero of $F$ be $0_F$.
The operation of division is defined as: $\forall a, b \in F \setminus \set {0_F}: a / b := a \times b^{-1}$
where $b^{-1}$ is the multiplicative inverse of $b$.
The concept is usually seen in the context of the standard number fields:
Let $\struct {\Q, +, \times}$ be the field of rational numbers.
The operation of division is defined on $\Q$ as: $\forall a, b \in \Q \setminus \set 0: a / b := a \times b^{-1}$
where $b^{-1}$ is the multiplicative inverse of $b$ in $\Q$.
Let $\struct {\R, +, \times}$ be the field of real numbers.
The operation of division is defined on $\R$ as: $\forall a, b \in \R \setminus \set 0: a / b := a \times b^{-1}$
where $b^{-1}$ is the multiplicative inverse of $b$ in $\R$.
Let $\struct {\C, +, \times}$ be the field of complex numbers.
The operation of division is defined on $\C$ as: $\forall a, b \in \C \setminus \set 0: \dfrac a b := a \times b^{-1}$
where $b^{-1}$ is the multiplicative inverse of $b$ in $\C$.
Let $a, b \in \Z$ be integers such that $b \ne 0$.
From the Division Theorem:
$\exists_1 q, r \in \Z: a = q b + r, 0 \le r < \left|{b}\right|$
The process of finding $q$ and $r$ is known as (integer) division.
The operation of division can be denoted:
$a / b$, which is probably the most common in the general informal context
$\dfrac a b$, which is the preferred style on $\mathsf{Pr} \infty \mathsf{fWiki}$
$a \div b$, which is rarely seen outside grade school.
Specific Terminology
In the expression $c = a / b$:
The element $b$ is the divisor of $a$.
The element $a$ is the dividend.
The element $c$ is the quotient of $a$ (divided) by $b$.
Also see
Results about division can be found here.
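A computational sketch of the Division Theorem; note that Python's built-in `%` with a positive modulus already yields $0 \le r < \left|{b}\right|$:

```python
def division_theorem(a, b):
    """Return the unique (q, r) with a = q*b + r and 0 <= r < |b|,
    per the Division Theorem. Requires b != 0."""
    if b == 0:
        raise ZeroDivisionError("b must be nonzero")
    r = a % abs(b)       # Python's % with a positive modulus gives 0 <= r < |b|
    q = (a - r) // b
    return q, r

print(division_theorem(7, 3))    # (2, 1):  7 =  2*3 + 1
print(division_theorem(-7, 3))   # (-3, 2): -7 = -3*3 + 2
print(division_theorem(7, -3))   # (-2, 1): 7 = -2*(-3) + 1
```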
|
Subset Equivalences
Definitions
In the following:
$S \subseteq T$ denotes that $S$ is a subset of $T$
$S \cup T$ denotes the union of $S$ and $T$
$S \cap T$ denotes the intersection of $S$ and $T$
$S \setminus T$ denotes the set difference between $S$ and $T$
$\O$ denotes the empty set
$\mathbb U$ denotes the universal set
$\complement$ denotes set complement.

$S \subseteq T \iff S \cup T = T$
$S \subseteq T \iff S \cap T = S$
$S \subseteq T \iff S \setminus T = \O$
$S \subseteq T \iff S \cap \map \complement T = \O$
$S \subseteq T \iff \map \complement S \cup T = \mathbb U$
$S \subseteq T \iff \map \complement T \subseteq \map \complement S$
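These equivalences can be brute-force checked over all pairs of subsets of a small universal set; a sketch:

```python
import itertools

def check_equivalences(universe):
    """Verify each subset equivalence for every pair of subsets of `universe`."""
    subsets = [set(c) for r in range(len(universe) + 1)
               for c in itertools.combinations(universe, r)]
    for s in subsets:
        for t in subsets:
            sub = s <= t                      # S subseteq T
            assert sub == (s | t == t)        # S cup T = T
            assert sub == (s & t == s)        # S cap T = S
            assert sub == (s - t == set())    # S \ T = empty
            comp_s, comp_t = universe - s, universe - t
            assert sub == (s & comp_t == set())
            assert sub == (comp_s | t == universe)
            assert sub == (comp_t <= comp_s)  # contrapositive form
    return True

print(check_equivalences({1, 2, 3}))  # True
```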
|
Fitting Measured Data to Different Hyperelastic Material Models
Previously on the blog, we have discussed the need for appropriate measured data to fit the material parameters that correspond to a material model. We have also looked at typical experimental tests, considerations for operating conditions when choosing a material model, and an example of how to use your measured data directly in a nonlinear elastic model. Our focus today will be on how to fit your experimental data to different hyperelastic material models.
Curve Fitting in COMSOL Multiphysics
After obtaining our measured data, the question then becomes this: How can we estimate the material parameters required for defining the hyperelastic material models based on the measured data? One of the ways to do this in COMSOL Multiphysics is to fit a parameterized analytic function to the measured data using the Optimization Module.
In the section below, we will define analytical expressions for stress-strain relationships for two common tests — the
uniaxial test and the equibiaxial test. These analytical expressions will then be fitted to the measured data to obtain material parameters.
Isotropic, Nearly Incompressible Hyperelasticity
Characterizing the volumetric deformation of hyperelastic materials to estimate material parameters can be a rather intricate process. Oftentimes, perfect incompressibility is assumed in order to estimate the parameters. This means that after estimating material parameters from curve fitting, you would have to use a reasonable value for bulk modulus of the nearly incompressible hyperelastic material, as this property is not calculated.
Here, we will fit the measured data to several perfectly incompressible hyperelastic material models. We will start by reviewing some of the basic concepts of the nearly incompressible formulation and then characterize the stress measures for the case of perfect incompressibility.
For nearly incompressible hyperelasticity, the total strain energy density is presented as
where W_{iso} is the isochoric strain energy density and W_{vol} is the volumetric strain energy density. The second Piola-Kirchhoff stress tensor is then given by
where p_{p} is the volumetric stress, J is the volume ratio, and C is the right Cauchy-Green tensor.
You can expand the second term from the above equation so that the second Piola-Kirchhoff stress tensor can be equivalently expressed as
where \bar{I}_{1} and \bar{I}_{2} are invariants of the isochoric right Cauchy-Green tensor \bar{C} = J^{-2/3}C.
The first Piola-Kirchhoff stress tensor, P, and the Cauchy stress tensor, \sigma, can be expressed as a function of the second Piola-Kirchhoff stress tensor as
\begin{align}
P &= FS \\
\sigma &= J^{-1}FSF^{T}
\end{align}
Here, F is the deformation gradient.
Note: You can read more about the description of different stress measures in our previous blog entry “Why All These Stresses and Strains?“
The strain energy density and stresses are often expressed in terms of the stretch ratio \lambda. The
stretch ratio is a measure of the magnitude of deformation. In a uniaxial tension experiment, the stretch ratio is defined as \lambda = L/L_0, where L is the deformed length of the specimen and L_0 is its original length. In a multiaxial stress state, you can calculate principal stretches \lambda_a\;(a = 1,2,3) in the principal referential directions \hat{\mathbf{N}_a}, which are the same as the directions of the principal stresses. The stress tensor components can be rewritten in the spectral form as
S = \sum_{a} S_{a} \hat{\mathbf{N}_{a}} \otimes \hat{\mathbf{N}_{a}}
where S_{a} represents the principal values of the second Piola-Kirchhoff stress tensor and \hat{\mathbf{N}_{a}} represents the principal referential directions. You can represent the right Cauchy-Green tensor in its spectral form as
C = \sum_{a}\lambda_a^2 \hat{\mathbf{N}_a}\otimes\hat{\mathbf{N}_a}
where \lambda_a indicates the values of the principal stretches. This allows you to express the principal values of the second Piola-Kirchhoff stress tensor as a function of the principal stretches
Now, let’s consider the uniaxial and biaxial tension tests explained in the initial blog post in our Structural Materials series. For both of these tests, we can derive a general relationship between stress and stretch.
Under the assumption of incompressibility (J=1), the principal stretches for the uniaxial deformation of an isotropic hyperelastic material are given by \lambda_1 = \lambda and \lambda_2 = \lambda_3 = \lambda^{-1/2}.
The deformation gradient is then F = \mathrm{diag}(\lambda, \lambda^{-1/2}, \lambda^{-1/2}).
For uniaxial extension S_2 = S_3 = 0, the volumetric stress p_{p} can be eliminated to give
The isochoric invariants \bar{I}_{1_{uni}} and \bar{I}_{2_{uni}} can be expressed in terms of the principal stretch \lambda as
\begin{align*}
\bar{I}_{1_{uni}} &= \lambda^2+\frac{2}{\lambda} \\
\bar{I}_{2_{uni}} &= 2\lambda + \frac{1}{\lambda^2}
\end{align*}
Under the assumption of incompressibility, the principal stretches for the equibiaxial deformation of an isotropic hyperelastic material are given by \lambda_1 = \lambda_2 = \lambda and \lambda_3 = \lambda^{-2}.
For equibiaxial extension S_3 = 0, the volumetric stress p_{p} can be eliminated to give
The invariants \bar{I}_{1_{bi}} and \bar{I}_{2_{bi}} are then given by
\begin{align*}
\bar{I}_{1_{bi}} &= 2\lambda^2 + \frac{1}{\lambda^4} \\
\bar{I}_{2_{bi}} &= \lambda^4 + \frac{2}{\lambda^2}
\end{align*}
Stress Versus Principal Stretch for Incompressible Hyperelastic Material Models
Let’s now look at the stress versus stretch relationships for a few of the most common hyperelastic material models. We will consider the first Piola-Kirchhoff stress for the purpose of curve fitting.
Neo-Hookean
The total strain energy density for a Neo-Hookean material model is given by
where J_{el} is the elastic volume ratio and \mu is a material parameter that we need to compute via curve fitting. Under the assumption of perfect incompressibility and using equations (1) and (2), the first Piola-Kirchhoff stress expressions for the cases of uniaxial and equibiaxial deformation are given by
\begin{align*}
P_{1_{uniaxial}} &= \mu\left(\lambda-\lambda^{-2}\right)\\
P_{1_{biaxial}} &= \mu\left(\lambda-\lambda^{-5}\right)
\end{align*}
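A quick numerical sketch of these two expressions (the parameter value is illustrative); note that the small-strain uniaxial slope recovers E = 3&mu;, the Young's modulus of an incompressible material:

```python
def neo_hookean_P1(lam, mu, mode="uniaxial"):
    """First Piola-Kirchhoff (nominal) stress for an incompressible
    Neo-Hookean material under uniaxial or equibiaxial extension."""
    if mode == "uniaxial":
        return mu * (lam - lam ** -2)
    return mu * (lam - lam ** -5)

mu = 1.0
# The undeformed state (lambda = 1) is stress-free in both modes.
print(neo_hookean_P1(1.0, mu), neo_hookean_P1(1.0, mu, "equibiaxial"))
# Small strains: P / eps approaches 3*mu (E = 3*mu for incompressibility).
eps = 1e-6
print(neo_hookean_P1(1 + eps, mu) / eps)  # close to 3
```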
The stress versus stretch relationship for a few of the other hyperelastic material models are listed below. These can be easily derived through the use of equations (1) and (2), which relate stress and the strain energy density.
Mooney-Rivlin, Two Parameters
\begin{align*}
P_{1_{uniaxial}} &= 2\left(1-\lambda^{-3}\right)\left(\lambda C_{10}+C_{01}\right)\\
P_{1_{biaxial}} &= 2\left(\lambda-\lambda^{-5}\right)\left(C_{10}+\lambda^2 C_{01}\right)
\end{align*}
Here, C_{10} and C_{01} are Mooney-Rivlin material parameters.
Mooney-Rivlin, Five Parameters
\begin{align}
\begin{split}
P_{1_{uniaxial}}& = 2\left(1-\lambda^{-3}\right)\left(\lambda C_{10} + 2C_{20}\lambda\left(I_{1_{uni}}-3\right)+C_{11}\lambda\left(I_{2_{uni}}-3\right)\right.\\
& \quad \left.+C_{01}+2C_{02}\left(I_{2_{uni}}-3\right)+C_{11}\left(I_{1_{uni}}-3\right)\right)\\
P_{1_{biaxial}}& = 2\left(\lambda-\lambda^{-5}\right)\left(C_{10}+2C_{20}\left(I_{1_{bi}}-3\right)+C_{11}\left(I_{2_{bi}}-3\right)\right.\\
& \quad \left.+\lambda^2C_{01}+2\lambda^2C_{02}\left(I_{2_{bi}}-3\right)+\lambda^2 C_{11}\left(I_{1_{bi}}-3\right)\right)
\end{split}
\end{align}
Here, C_{10}, C_{01}, C_{20}, C_{02}, and C_{11} are Mooney-Rivlin material parameters.
Arruda-Boyce
\begin{align}
P_{1_{uniaxial}} &= 2\left(\lambda-\lambda^{-2}\right)\mu_0\sum_{p=1}^{5}\frac{p c_p}{N^{p-1}}I_{1_{uni}}^{p-1}\\
P_{1_{biaxial}} &= 2\left(\lambda-\lambda^{-5}\right)\mu_0\sum_{p=1}^{5}\frac{p c_p}{N^{p-1}}I_{1_{bi}}^{p-1}
\end{align}
Here, \mu_0 and N are Arruda-Boyce material parameters, and c_p are the first five terms of the Taylor expansion of the inverse Langevin function.
Yeoh
\begin{align}
P_{1_{uniaxial}} &= 2\left(\lambda-\lambda^{-2}\right)\sum_{p=1}^{3}p c_p \left(I_{1_{uni}}-3\right)^{p-1}\\
P_{1_{biaxial}} &= 2\left(\lambda-\lambda^{-5}\right)\sum_{p=1}^{3}p c_p \left(I_{1_{bi}}-3\right)^{p-1}
\end{align}
Here, the values of c_p are Yeoh material parameters.
Ogden
\begin{align}
P_{1_{uniaxial}} &= \sum_{p=1}^{N}\mu_p \left(\lambda^{\alpha_p-1} -\lambda^{-\frac{\alpha_p}{2}-1}\right)\\
P_{1_{biaxial}} &= \sum_{p=1}^{N}\mu_p \left(\lambda^{\alpha_p-1} -\lambda^{-2\alpha_p-1}\right)
\end{align}
Here, \mu_p and \alpha_p are Ogden material parameters.
Curve Fitting in COMSOL Multiphysics Using the Optimization Interface
Using the
Optimization interface in COMSOL Multiphysics, we will fit measured stress versus stretch data against the analytical expressions detailed in our discussion above. Note that the measured data we are using here is the nominal stress, which can be defined as the force in the current configuration acting on the original area. It is important that the measured data is fit against the appropriate stress measure. Therefore, we will fit the measured data against the analytical expressions for the first Piola-Kirchhoff stress expressions. The plot below shows the measured nominal stress (raw data) for uniaxial and equibiaxial tests for vulcanized rubber. Measured stress-strain curves by Treloar.
Let’s begin by setting up the model to fit the uniaxial Neo-Hookean stress to the uniaxial measured data. The first step is to add an
Optimization interface to a 0D model. Here, 0D implies that our analysis is not tied to a particular geometry.
Next, we can define the material parameters that need to be computed as well as the variable for the analytical stress versus stretch relationship. The screenshot below shows the parameters and variable defined for the case of a uniaxial Neo-Hookean material model.
Within the
Optimization interface, a Global Least-Squares Objective branch is added, where we can specify the measured uniaxial stress versus stretch data as an input file. Next, a Parameter Column and a Value Column are added. Here, we define lambda (stretch) as a measured parameter and specify the uniaxial analytical stress expression to fit against the measured data. We can also specify a weighting factor in the Column contribution weight setting. For detailed instructions on setting up the Global Least-Squares Objective branch, take a look at the Mooney-Rivlin Curve Fit tutorial, available in our Application Gallery.
We can now solve the above problem and estimate material parameters by fitting our uniaxial tension test data against the uniaxial Neo-Hookean material model. This is, however, rarely a good idea. As explained in Part 1 of this blog series, the seemingly simple test can leave many loose ends. Later on in this blog post, we will explore the consequence of material calibration based on just one data set.
Depending on the operating conditions, you can obtain a better estimate of material parameters through a combination of measured uniaxial tension, compression, biaxial tension, torsion, and volumetric test data. This compiled data can then be fit against analytical stress expressions for each of the applicable cases.
Here, we will use the equibiaxial tension test data alongside the uniaxial tension test data. Just as we have set up the optimization model for the uniaxial test, we will define another global least-squares objective for the equibiaxial test as well as corresponding parameter and value columns. In the second global least-squares objective, we will specify the measured equibiaxial stress versus stretch data file as an input file. In the value column, we will specify the equibiaxial analytical stress expression to fit against the equibiaxial test data.
The settings of the Optimization study step are shown in the screenshot below. The model tree branches have been manually renamed to reflect the material model (Neo-Hookean) and the two tests (uniaxial and equibiaxial). The optimization algorithm is a Levenberg-Marquardt solver, which is used to solve problems of the least-square type. The model is now set to optimize the sum of two global least-square objectives — the uniaxial and equibiaxial test cases.
The plot below depicts the fitted data against the measured data. Equal weights are assigned to both the uniaxial and equibiaxial least-squares fitting. It is clear that the Neo-Hookean material model with only one parameter is not a good fit here, as the test data is nonlinear and has one inflection point.
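As a side note, the same simultaneous least-squares idea can be sketched outside COMSOL. The snippet below is an illustrative Python sketch, not the blog's workflow: it fits a hypothetical one-parameter Neo-Hookean-type model to synthetic uniaxial and equibiaxial data (generated here with c1 = 0.2 plus noise, not Treloar's data), using SciPy's Levenberg-Marquardt solver and per-test weights analogous to the column contribution weights.

```python
import numpy as np
from scipy.optimize import least_squares

# First Piola-Kirchhoff stress for a one-parameter (Neo-Hookean-type) model
def P_uni(lam, c1):
    return 2.0 * c1 * (lam - lam ** -2)

def P_bi(lam, c1):
    return 2.0 * c1 * (lam - lam ** -5)

# Hypothetical measured data: synthetic, generated with c_true = 0.2 plus noise
rng = np.random.default_rng(0)
lam_u = np.linspace(1.1, 6.0, 20)
lam_b = np.linspace(1.1, 4.0, 15)
c_true = 0.2
s_u = P_uni(lam_u, c_true) + rng.normal(0.0, 0.01, lam_u.size)
s_b = P_bi(lam_b, c_true) + rng.normal(0.0, 0.01, lam_b.size)

w_u, w_b = 1.0, 1.0  # weights for the two tests (analogous to column weights)

def residuals(c):
    # stack the residuals of both tests so they are minimized simultaneously
    return np.concatenate([
        w_u * (P_uni(lam_u, c[0]) - s_u),
        w_b * (P_bi(lam_b, c[0]) - s_b),
    ])

fit = least_squares(residuals, x0=[1.0], method="lm")
```

Raising `w_b` relative to `w_u` reproduces the "unequal weights" experiments discussed below: the fitted parameter shifts toward the equibiaxial data.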
Fitted material parameters using the Neo-Hookean model. Equal weights are assigned to both data sets.
Fitting the curves while specifying unequal weights for the two tests will result in a slightly different fitted curve. Similar to the Neo-Hookean model, we will set up global least-squares objectives corresponding to Mooney-Rivlin, Arruda-Boyce, Yeoh, and Ogden material models. In our calculation below, we will include cases for both equal and unequal weights.
In the case of unequal weights, we will use a higher but arbitrary weight for the entire equibiaxial data set. It is possible that you may want to assign unequal weights only for a certain stretch range instead of the entire stretch range. If this is the case, we can split the particular test case into parts, using a separate
Global Least-Squares Objective branch for each stretch range. This will allow us to assign weights in correlation with different stretch ranges.
The plots below show fitted curves for different material models for equal and unequal weights that correspond to the two tests.
Left: Fitted material parameters using Mooney-Rivlin, Arruda-Boyce, and Yeoh models. In these cases, equal weights are assigned to both data sets. Right: Fitted material parameters using Mooney-Rivlin, Arruda-Boyce, and Yeoh models. Here, higher weight is assigned to the equibiaxial test data.
The Ogden material model with three terms fits both test data quite well for the case of equal weights assigned to both tests.
Fitted material parameters using the Ogden model with three terms.
If we only fit uniaxial data and use the computed parameters for plotting equibiaxial stress against the actual equibiaxial test data, we obtain the results in the plots below. These plots show the mismatch in the computed equibiaxial stress when compared to the measured equibiaxial stress. In material parameter estimation, it is best to perform curve fitting for a combination of different significant deformation modes rather than considering only one deformation mode.
Concluding Remarks
To find material parameters for hyperelastic material models, fitting the analytic curves may seem like a solid approach. However, the stability of a given hyperelastic material model may also be a concern. The criterion for determining material stability is known as
Drucker stability. According to Drucker's criterion, the incremental work associated with an incremental stress should always be greater than zero. If the criterion is violated, the material model will be unstable.
In this blog post, we have demonstrated how you can use the
Optimization interface in COMSOL Multiphysics to fit a curve to multiple data sets. An alternative method for curve fitting that does not require the Optimization interface was also a topic of discussion in an earlier blog post. Just as we have used uniaxial and equibiaxial tension data here for the purpose of estimating material parameters, you can also fit the measured data to shear and volumetric tests to characterize other deformation states.
For detailed step-by-step instructions on how to use the
Optimization interface for the purpose of curve fitting, take a look at the Mooney-Rivlin Curve Fit tutorial, available in our Application Gallery.
|
Forward-backward multiplicity correlations in pp collisions at √s = 0.9, 2.76 and 7 TeV
(Springer, 2015-05-20)
The strength of forward-backward (FB) multiplicity correlations is measured by the ALICE detector in proton-proton (pp) collisions at √s = 0.9, 2.76 and 7 TeV. The measurement is performed in the central pseudorapidity ...
Rapidity and transverse-momentum dependence of the inclusive J/$\mathbf{\psi}$ nuclear modification factor in p-Pb collisions at $\mathbf{\sqrt{\textit{s}_{NN}}}=5.02$ TeV
(Springer, 2015-06)
We have studied the transverse-momentum ($p_{\rm T}$) dependence of the inclusive J/$\psi$ production in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV, in three center-of-mass rapidity ($y_{\rm cms}$) regions, down to ...
Measurement of charm and beauty production at central rapidity versus charged-particle multiplicity in proton-proton collisions at $\sqrt{s}$ = 7 TeV
(Springer, 2015-09)
Prompt D meson and non-prompt J/$\psi$ yields are studied as a function of the multiplicity of charged particles produced in inelastic proton-proton collisions at a centre-of-mass energy of $\sqrt{s}=7$ TeV. The results ...
Coherent $\rho^0$ photoproduction in ultra-peripheral Pb-Pb collisions at $\mathbf{\sqrt{\textit{s}_{\rm NN}}} = 2.76$ TeV
(Springer, 2015-09)
We report the first measurement at the LHC of coherent photoproduction of $\rho^0$ mesons in ultra-peripheral Pb-Pb collisions. The invariant mass and transverse momentum distributions for $\rho^0$ production are studied ...
Inclusive, prompt and non-prompt J/ψ production at mid-rapidity in Pb-Pb collisions at √sNN = 2.76 TeV
(Springer, 2015-07-10)
The transverse momentum ($p_{\rm T}$) dependence of the nuclear modification factor $R_{\rm AA}$ and the centrality dependence of the average transverse momentum $\langle p_{\rm T} \rangle$ for inclusive J/ψ have been measured with ALICE for Pb-Pb collisions ...
|
Definition:Integer Division
Definition
Let $a, b \in \Z$ be integers such that $b \ne 0$.
From the Division Theorem:
$\exists_1 q, r \in \Z: a = q b + r, 0 \le r < \left|{b}\right|$

The process of finding $q$ and $r$ is known as (integer) division. For example:

$29 \div 8 = 3 \rem 5$
$1 \div \paren {-7} = 0 \rem 1$
$-2 \div \paren {-7} = 1 \rem 5$
$61 \div \paren {-7} = -8 \rem 5$
$-59 \div \paren {-7} = 9 \rem 4$
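As an aside, this convention is easy to implement. The helper below is illustrative (not part of the page): Python's floor division already gives $0 \le r < b$ when $b > 0$, and a one-step adjustment handles $b < 0$.

```python
def euclid_div(a, b):
    # q, r with a = q*b + r and 0 <= r < |b|
    q, r = a // b, a % b  # Python floors toward -inf, so r takes the sign of b
    if r < 0:             # only possible when b < 0; shift r by |b| = -b
        q, r = q + 1, r - b
    return q, r
```

Running it on the examples above reproduces each quotient-remainder pair, e.g. `euclid_div(-59, -7)` gives `(9, 4)`.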
|
Problem: The series $\sum_{n=1}^{\infty} a_n$ has positive terms and diverges. What can be said about the series $\sum_{n=1}^{\infty} \frac{a_n}{1+n^{2}a_n}$?
Approach:
First, I tried separating it into two cases: $a_n \to \infty$ and $a_n \to A$ (where $A$ is some constant value).
Tried the ratio test and got nowhere with that.
I think that the right approach is to check each case separately, but my hunch tells me that there is a workaround.
|
Let $V_\mathbb{R}$ denote the $\mathbb{R}$-vector space of binary quartic forms. The group $\operatorname{GL}_2(\mathbb{R})$ acts on $V_\mathbb{R}$ via the standard substitution action. That is, if $F(x,y) \in V_\mathbb{R}$ and $T = \begin{pmatrix} t_1 & t_2 \\ t_3 & t_4 \end{pmatrix} \in \operatorname{GL}_2(\mathbb{R})$, then $T$ sends $F$ to $F_T(x,y) = F(t_1 x + t_2 y, t_3 x + t_4 y)$. The action induced by the subgroup $\operatorname{GL}_2(\mathbb{Z})$ of $\operatorname{GL}_2(\mathbb{R})$ has two invariants, $I(F)$ and $J(F)$, which are algebraically independent and generate the ring of invariants under the action of $\operatorname{GL}_2(\mathbb{Z})$.
We will denote the height of $F$, as in Bhargava and Shankar's paper (see references below), by $H(F) = \max\{|I(F)|^3, J(F)^2/4\}$.
The action of $\operatorname{GL}_2(\mathbb{R})$ on $F \in V_\mathbb{R}$ has a stabilizer, which we will denote by $\operatorname{Aut}_\mathbb{R} (F)$.
We denote by $V_\mathbb{Z}$ the subset of $V_\mathbb{R}$ consisting of binary quartic forms with integer coefficients. We will say an element $U \in \operatorname{Aut}_\mathbb{R}(F)$ is
almost rational if it is of the form
$$\displaystyle U = U(\alpha, \beta, \gamma) = \frac{1}{\sqrt{D}} \begin{pmatrix} \beta & 2 \gamma \\ -2 \alpha & -\beta \end{pmatrix},$$
where $\alpha, \beta, \gamma$ are co-prime integers and $D = |\beta^2 - 4 \alpha \gamma|$. We will say that $U$ is
reduced if $f(x,y) = \alpha x^2 + \beta xy + \gamma y^2$ is a reduced binary quadratic form (in the sense of Gauss).
We will say that a binary quartic form $F$ of height less than $Z$ has a "large" stabilizer if $\operatorname{Aut}_\mathbb{R} (F)$ contains a reduced almost rational element $U(\alpha, \beta, \gamma)$ such that $D \gg Z^{1/6}$. How does one count the number of irreducible forms with "large" stabilizer of height up to $Z$?
There is a reason for the exponent of $1/6$. Consider the following family of forms fixed by the matrix
$$\displaystyle U(1, 0, D) = \frac{1}{\sqrt{D}} \begin{pmatrix} 0 & D \\ -1 & 0 \end{pmatrix}.$$
The family of forms $F$ fixed by $U(1, 0, D)$ are the forms of the shape
$$\displaystyle F(x,y) = a_4 x^4 + a_3 x^3 y + a_2 x^2 y^2 - a_3 Dxy^3 + a_4 D^2 y^4,$$
and the $I$-invariant of the generic form in this family is given by
$$\displaystyle I(F) = 12 a_4^2 D^2 + 3 a_3^2 D + a_2^2.$$
Since for every form $F$ in the family the height $H(F)$ is given by $H(F) = I(F)^3$ (one checks that every form in this family has non-negative discriminant), it follows that the height condition is equivalent to
$$\displaystyle 12a_4^2 D^2 + 3a_3^2 D + a_2^2 \leq Z^{1/3}.$$
Thus, if $D \gg Z^{1/6}$, then $a_4 = 0$, which means that all such forms are reducible, which we do not want to count. However, this argument does not seem to work for an arbitrary family, so I am wondering if there is a more subtle principle at work.
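As a quick sanity check (my own, using the classical formula $I = 12ae - 3bd + c^2$ for a binary quartic $ax^4 + bx^3y + cx^2y^2 + dxy^3 + ey^4$), the $I$-invariant of the family above can be verified symbolically:

```python
import sympy as sp

a2, a3, a4, D = sp.symbols('a2 a3 a4 D')

# coefficients of F(x,y) = a4 x^4 + a3 x^3 y + a2 x^2 y^2 - a3 D x y^3 + a4 D^2 y^4
a, b, c, d, e = a4, a3, a2, -a3 * D, a4 * D ** 2

# classical I-invariant of a binary quartic: I = 12ae - 3bd + c^2
I = 12 * a * e - 3 * b * d + c ** 2
# expands to 12 a4^2 D^2 + 3 a3^2 D + a2^2, matching the displayed formula
```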
Reference:
M. Bhargava, A. Shankar,
Binary quartic forms having bounded invariants, and the boundedness of the average rank of elliptic curves, Annals of Mathematics 181 (2015), 191-242.
|
Major Edit: The last attempt had a fatal flaw.
Let $J$ be the set of odd integers greater than $1$, and let $p_n(x)=x^n-nx+(n-1)$ for $n\in J$. Calculating derivatives $$p_n^{\prime}(x)=n(x^{n-1}-1)\text{ and }p^{\prime\prime}_n(x)=n(n-1)x^{n-2}$$ tells us that $p_n(-1)=2(n-1)$ is a local max, $p_n(1)=0$ is a local min, there are no other extrema, $p_n$ is strictly increasing on $(-\infty,-1]$, and $p_n(x)\geq0$ for all $x\geq a_n$, where $a_n$ is the least real root of $p_n$.
We will need this: $$p_n\left(-\frac{n+1}{n}\right)=\left(-\frac{n+1}n\right)^n-n\left(-\frac{n+1}n\right)+(n-1)\\=-\left(1+\frac1n\right)^n+2n>2n-e>0.$$
Thus $a_n<-\left(1+\frac1n\right)$. (This is the problem with my first attempt. The roots are bounded away from $-1$ by a sequence that doesn't converge particularly fast.)
We factor, using "$x^a-1=(x^{a-1}+x^{a-2}+\ldots+x+1)(x-1)$" once between the first and second lines and $n-1$ times between the third and fourth lines. $$p_n(x)=x^n-1-n(x-1)\\=\left(\left(\sum_{k=1}^nx^{n-k}\right)-n\right)(x-1)\\=\left(\sum_{k=1}^{n-1}(x^{n-k}-1)\right)(x-1)\\=\left(\sum_{k=0}^{n-2}(n-1-k)x^k\right)(x-1)^2.$$
Now we evaluate $$p_{n+2}(a_n)=\left(\sum_{k=0}^{n}(n+1-k)a_n^k\right)(a_n-1)^2\\=\left(a_n^2\left(\sum_{k=2}^{n}(n+1-k)a_n^{k-2}\right)+na_n+(n+1)\right)(a_n-1)^2\\=\left(a_n^2\left(\sum_{k=0}^{n-2}(n-1-k)a_n^{k}\right)+n a_n+(n+1)\right)(a_n-1)^2\\=(a_n^2\cdot0+n a_n+(n+1))(a_n-1)^2\\<\left(n\left(-\frac{n+1}{n}\right)+(n+1)\right)(a_n-1)^2=0.$$
Second to third line was a reindex. The cancellation from the third to the fourth line comes from recognizing that the bracketed sum is $p_n(a_n)(a_n-1)^{-2}=0$. Fourth line to fifth uses $a_n<-\frac{n+1}n$, which was established above.
Since $p_{n+2}$ is increasing, $a_{n+2}>a_n$. Thus, the sequence $a_n$ is strictly increasing. Let $a=\lim_{n\in J}a_n$. Since $a_n<-1$, $a\leq-1$. Because $a_n<a$, we have $p_n(a)>0$ for all $n\in J$. Thus $$\sqrt[n]{n(a-1)+1}<a\leq-1$$ for all $n\in J$. Letting $n\to\infty$ (keeping $n\in J$) gives $-1\leq a\leq-1$.
Define $q_n(x):=p_n(x+1)$ for each $n\in J$, and let $x_n$ be the least real root of $q_n$. Note that $$q_n(x)=(1+x)^n-(1+nx)=p_n(x+1)\geq0$$ iff $x\geq x_n$. Hence, these $x_n$ are the same numbers as in your question. Also, $x_n=a_n-1$. Since $a_n\to-1$, $x_n\to-2$ as $n\to\infty$ (keeping $n$ odd).
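The claims $a_3=-2$, monotonicity, and $a_n\to-1$ can also be checked numerically. The bisection sketch below is my own illustration; it exploits that $p_n$ is strictly increasing on $(-\infty,-1]$ with $p_n(-3)<0<p_n(-1)=2(n-1)$ for odd $n\ge3$.

```python
def p(n, x):
    # p_n(x) = x^n - n x + (n - 1)
    return x ** n - n * x + (n - 1)

def least_root(n, lo=-3.0, hi=-1.0, iters=200):
    # bisection; p_n is strictly increasing on (-inf, -1], so the sign
    # change between lo and hi brackets the least real root a_n
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if p(n, mid) < 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

For example, `least_root(3)` returns $-2$ (since $p_3(x)=(x-1)^2(x+2)$), and the roots creep up toward $-1$ as odd $n$ grows.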
|
(I apologize in advance if this question is unsuitable for MO. If so, please let me know and I will migrate it to MSE.)
Let $\sigma(M)$ be the sum of the divisors of the positive integer $M$. For example, $\sigma(6)=1+2+3+6=12$.
A number $N$ is called
perfect if $\sigma(N)=2N$.
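For concreteness, $\sigma$ and the perfect-number condition can be checked directly with a short (naive, illustrative) sketch:

```python
def sigma(m):
    # sum of all divisors of m (naive trial division, fine for small m)
    return sum(d for d in range(1, m + 1) if m % d == 0)

# the perfect numbers below 1000
perfect = [n for n in range(2, 1000) if sigma(n) == 2 * n]
```

This recovers $\sigma(6)=12$ and the perfect numbers $6$, $28$, $496$.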
Euler proved that an
odd perfect number $N$ must have the form $N=q^k n^2$ where $q$ is prime with $q \equiv k \equiv 1 \pmod 4$. We call $q$ the Euler prime of $N$.
Here are my questions:
(1) Who is credited with making the conjecture that $q$ is the largest prime factor of $N$?
(2) Additionally, to whom should I attribute the conjecture that $k=1$? Should it be Descartes, Frenicle, or Sorli?
(Added September 28, 2016, for (2): In an edit to this MO post, it is stated that (per Beasley), "Dris $\ldots$ refers to Descartes’ and Frenicle’s claim (that $k=1$) as Sorli’s conjecture; Dickson has documented Descartes’s conjecture as occurring in a letter to Marin Mersenne in 1638, with Frenicle’s subsequent observation occurring in 1657". As commented by Gerry, one would need to double-check Dickson's History of Number Theory to verify Beasley's statements.)
|
The probability of finding a particle at a point is always zero.
Recall that $\rho(x) = \lvert\psi(x)\rvert^2$ is a
probability density, not a probability, and so the probability to find the particle somewhere inside the interval $[a,b]$ is given by$$ P([a,b]) = \int_a^b \rho(x)\mathrm{d}x.$$Since points have zero measure ($\int_a^a \rho(x)\mathrm{d}x = 0$ regardless of $a$), this is always zero for single points. So it is not evident that there is any meaning to saying "the particle will never be found at $x_0$" because quantum mechanics only allows us to talk meaningfully about a region (however small) in which the particle can be found.
This is supported by the fact that the "eigenstates" $\lvert x\rangle$ of the position operator are not actual states since they are non-normalizable ($\langle x \vert x\rangle$ cannot be made finite/well-defined), so there is no actual measurement whose result could be $\lvert x\rangle$, a fully localized particle.
However, you are asking about the
nodes of the wavefunction of a particle trapped in a box. Indeed, even though you should not think of them as "points where the particle can never be found", the $n$-th excited state has $n$ of these nodes in its wavefunction.
garyp suggests an alternative interpretation of "the particle can never be found at $x_0$" in the comments that actually then is correct for the nodes:
For the nodes $n_i$, we have $$\lim_{a\to 0}\frac{P([n_i-a,n_i+a])}{P([x_0-a,x_0+a])} = 0$$for any $x_0$ that isn't a node itself. This means, in words, that if we take regions of equal size centered around the points $n_i$ and $x_0$ and shrink them, it becomes more and more likely to find the particle around $x_0$ compared to finding it around $n_i$, until in the limit the ratio becomes zero, suggesting it is infinitely more likely to find the particle "at" any other $x_0$ than it is to find it "at" $n_i$. Note that the latter part of this sentence should only be understood heuristically due to the actual probability of finding a particle at a point being zero as discussed at the beginning.
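garyp's ratio criterion is easy to illustrate numerically. The sketch below is my own, for a unit box and the first excited state $\psi_2(x)=\sqrt{2/L}\,\sin(2\pi x/L)$, which has an interior node at $x=L/2$; the probability ratio shrinks as the windows shrink.

```python
import numpy as np

Lbox = 1.0
# |psi_2|^2 for the first excited state; interior node at x = Lbox/2
psi2 = lambda x: (2.0 / Lbox) * np.sin(2.0 * np.pi * x / Lbox) ** 2

def P(lo, hi, pts=4001):
    # trapezoid-rule probability of finding the particle in [lo, hi]
    x = np.linspace(lo, hi, pts)
    y = psi2(x)
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

node, x0 = 0.5 * Lbox, 0.25 * Lbox  # a node and a non-node comparison point
ratios = [P(node - a, node + a) / P(x0 - a, x0 + a) for a in (0.1, 0.01, 0.001)]
# the ratios decrease toward 0 as a -> 0, as in the limit above
```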
|
What is the motivation for including the compactness and semi-simplicity assumptions on the groups that one gauges to obtain Yang-Mills theories? I'd think that these hypotheses lead to physically "nice" theories in some way, but I've never, even from a computational perspective, really given these assumptions much thought.
As Lubos Motl and twistor59 explain, a necessary condition for unitarity is that the Yang Mills (YM) gauge group $G$ with corresponding Lie algebra $g$ should be real and have a positive (semi)definite associative/invariant bilinear form $\kappa: g\times g \to \mathbb{R}$, cf. the kinetic part of the Yang Mills action. The bilinear form $\kappa$ is often chosen to be (proportional to) the Killing form, but that need not be the case.
If $\kappa$ is degenerate, this will induce additional zeromodes/gauge-symmetries, which will have to be gauge-fixed, thereby effectively diminishing the gauge group $G$ to a smaller subgroup, where the corresponding (restriction of) $\kappa$ is non-degenerate.
When $G$ is semi-simple, the corresponding Killing form is non-degenerate. But $G$ does not have to be semi-simple. Recall e.g. that $U(1)$ by definition is not a simple Lie group. Its Killing form is identically zero. Nevertheless, we have the following YM-type theories:
QED with $G=U(1)$.
the Glashow-Weinberg-Salam model for electroweak interaction with $G=U(1)\times SU(2)$.
I recommend that you read chapter 15.2 in "The Quantum Theory of Fields", Volume 2, by Steven Weinberg; he answers precisely your question.
Here is a short summary. In a gauge theory with algebra generators satisfying $$ [t_\alpha,t_\beta]=iC^\gamma_{\alpha\beta}t_\gamma $$ it can be checked that the field strength tensor $F^\beta_{\mu\nu}$ transforms as follows: $$ \delta F^\beta_{\mu\nu}=i\epsilon^\alpha C^\beta_{\gamma\alpha} F^\gamma_{\mu\nu} $$ We want to construct Lagrangians. A free-particle kinetic term must be a quadratic combination of $F^\beta_{\mu\nu}$, and Lorentz invariance and parity conservation restrict its form to $$ \mathcal{L}=-\frac{1}{4}g_{\alpha\beta}F^\alpha_{\mu\nu}F^{\beta\mu\nu} $$ where $g_{\alpha\beta}$ may be taken symmetric and must be taken real for the Lagrange density to be real as well. The Lagrangian above must be gauge-invariant, thus it must satisfy $$ \delta\mathcal{L}=\epsilon^\delta g_{\alpha\beta}F^\alpha_{\mu\nu}C^\beta_{\gamma\delta}F^{\gamma\mu\nu}=0 $$ for all $\epsilon^\delta$. In order not to impose any functional restrictions on the field strengths $F$, the matrix $g_{\alpha\beta}$ must satisfy the following condition: $$ g_{\alpha\beta}C^\beta_{\gamma\delta}=-g_{\gamma\beta}C^\beta_{\alpha\delta} $$ In short, the product $g_{\alpha\beta}C^\beta_{\gamma\delta}$ is anti-symmetric in $\alpha$ and $\gamma$. Furthermore, the rules of canonical quantization and the positivity properties of the quantum mechanical scalar product require that the matrix $g_{\alpha\beta}$ be positive-definite. Finally, one can prove that the following statements are equivalent:

1. There exists a real symmetric positive-definite matrix $g_{\alpha\beta}$ that satisfies the invariance condition above.
2. There is a basis for the Lie algebra in which the structure constants $C^\alpha_{\beta\gamma}$ are anti-symmetric not only in the lower indices $\beta$ and $\gamma$ but in all three indices $\alpha$, $\beta$ and $\gamma$.
3. The Lie algebra is the direct sum of commuting compact simple and $U(1)$ subalgebras.
The proof for the equivalence of these statements as well as a more in-detail presentation of the material can be found in the aforementioned book by S. Weinberg.
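As a small illustration of the equivalence (my own check, not part of Weinberg's proof), one can verify statement 2 and the sign of the Killing form for the compact algebra $su(2)$, whose structure constants are $C^\gamma_{\alpha\beta}=\epsilon_{\alpha\beta\gamma}$:

```python
import numpy as np

# structure constants of su(2): [t_a, t_b] = i eps_{abc} t_c
eps = np.zeros((3, 3, 3))
eps[0, 1, 2] = eps[1, 2, 0] = eps[2, 0, 1] = 1.0
eps[0, 2, 1] = eps[2, 1, 0] = eps[1, 0, 2] = -1.0

# Killing form K_{ab} = C^c_{ad} C^d_{bc} = eps_{adc} eps_{bcd}
K = np.einsum('adc,bcd->ab', eps, eps)
# for su(2), K = -2*I: negative definite, so g = -K is a valid
# positive-definite invariant metric, consistent with compactness
g = -K
```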
A proof for the equivalence for $g_{\alpha\beta}=\delta_{\alpha\beta}$ (actually the most common form) was given by M. Gell-Mann and S. L. Glashow in Ann. Phys. (N.Y.)
15, 437 (1961)
It's because you want the kinetic part of the Yang Mills action $$ \int Tr({\bf{F^2}}) dV$$ to be positive definite. To guarantee this, the Lie algebra inner product you're using (Killing form) needs to be positive definite. This is guaranteed if the gauge group is compact and semi-simple. (I'm not sure if it's only if $G$ is compact and semi-simple though. Maybe someone else could fill in this detail.)
|
Question:
Why is it important to make sure your entire unknown has vaporized? Why is it important to put a pinhole in the aluminum "cap"? If 0.750 g of a gas occupies 265 mL at 25 degree C and 680 mm Hg, what is the molar mass of the gas?
Molar mass:
Molar mass is an important parameter in stoichiometric calculations. It is defined as the mass of a substance per unit mole, i.e., the mass present in one mole of a particular compound.
Answer and Explanation:
STEP 1:
When a vapor is near the temperature at which it would liquefy, the particles slow down and begin to exert intermolecular forces on each other that are no longer negligible. In this situation, the vapor does not behave like an ideal gas. Thus, it is important to vaporize the whole of the unknown, so that the ideal gas law can be applied.
STEP 2:
It is important to put a pinhole, because the vapor from the unknown liquid should begin to exit the pinhole in the foil, after driving out all of the air in the flask.
STEP 3:
Given data are:
{eq}V=265 \enspace ml=0.265 \enspace L {/eq}
{eq}T=25^{\circ}C = 298 \enspace K {/eq}
{eq}P=680 \enspace mm \enspace Hg = \dfrac{680}{760} = 0.895\enspace atm {/eq}
{eq}R = 0.0821 \enspace L \cdot atm/(mol \cdot K) {/eq}
STEP 4:
Using ideal gas law:
{eq}PV=nRT {/eq}
{eq}n=\dfrac{PV}{RT} {/eq}
{eq}n= \dfrac{0.895 \times 0.265}{0.0821 \times 298} =0.0097 \enspace mole {/eq}
Now, Molar mass = {eq}\dfrac{Mass}{No \enspace of \enspace moles} {/eq}
Molar mass= {eq}\dfrac{0.75}{0.0097}= 77.32 \enspace g/mol. {/eq}
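The arithmetic of steps 3 and 4 can be reproduced in a few lines (an illustrative sketch, with the constants as used in the worked solution):

```python
m = 0.750          # g, mass of the vaporized unknown
V = 265 / 1000.0   # L
T = 25 + 273.0     # K (25 degrees C, rounded to 298 K as in the solution)
P = 680 / 760.0    # atm
R = 0.0821         # L*atm/(mol*K)

n = P * V / (R * T)  # moles, from PV = nRT
M = m / n            # molar mass in g/mol, about 77 g/mol
```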
|
In a question I found the recurrence relation $a_n=a_{n-2}+3^{(n-2)/2}$, and I am supposed to solve it using generating functions, but the $3^{(n-2)/2}$ term makes it difficult. Is there a solution for such relations? Or is my answer wrong?
Expand the equation:
\begin{align*} a_n&=a_{n-2}+3^{\frac{n-2}{2}} \\ &=a_{n-4}+3^{\frac{n-4}{2}} + 3^{\frac{n-2}{2}} \\ &=\sum_{i=1}^{\frac{n}{2}}3^{\frac{n}{2}-i} \\ &=3^{\frac{n}{2}}\sum_{i=1}^{\frac{n}{2}}3^{-i}\\ &= 3^{\frac{n}{2}}\times \frac12 \times\left(1-\left(\frac{1}{3}\right)^{\frac{n}{2}}\right)\\ &=3^{\frac{n}{2}}\times \frac12 - \frac12\,. \end{align*}
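The closed form can be sanity-checked against the recurrence; the sketch below assumes $a_0=0$ and even $n$, which the telescoping above implicitly uses:

```python
def a(n):
    # the recurrence a_n = a_{n-2} + 3^((n-2)/2), with a_0 = 0 assumed
    return 0 if n == 0 else a(n - 2) + 3 ** ((n - 2) // 2)

# closed form derived above: a_n = (3^(n/2) - 1) / 2
closed = lambda n: (3 ** (n // 2) - 1) // 2
```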
If you're looking for an asymptotic bound, it is easier to observe that on the $n^{th}$ level we get $3^{(n-2)/2}$ but on the $(n-2)^{th}$ level, we get $3^{(n-4)/2} = 3^{(n-2)/2} \cdot 3^{-1}$, for any $n$. So at any level, the children contribute a factor of 3 less than the parent, causing the recurrence to be "root dominated". That is to say, the first level dominates asymptotically. So the total value of $a_n$ is on the order of $O(3^{(n-2)/2}) = O(3^{n/2})$.
|
It might help to think of an example. A simple example is dust, specifically a collection of particles of a fixed rest mass, all at rest with respect to each other, and we can consider uniform dust, so they are equally spaced.
If that's the only thing in our universe, then there is no momentum or stress in the frame of the dust and the energy is just the rest-energy, so the mass density and the energy density are simply proportional.
So that's our example. Now let's look at the stress energy tensor. We have a $T^{\mu\nu}$ as the flux of four-momentum $p^{\mu}$ across a surface of constant $x^{\nu}$. A surface of constant $x^0=ct$ is a surface of constant $t$. A flux is a per-area thing. So you can imagine a bit of area/volume in the $t=const$ plane/hyperplane, say a rectangle/box with size $\Delta x \Delta y \Delta z$; if the box is bigger you get more flux. We can draw the worldlines of the particles and count how many pierce through this piece of the $t=const$ hypersurface, and once the piece is small enough, the result is proportional to the size of the volume. So that proportionality constant is the particle density. If we multiply that by the mass per particle, we are now counting mass that pierces that portion of the $t=const$ surface, and the proportionality constant is mass density. If we multiply that by $c^2$, we are now counting energy that pierces that portion of the $t=const$ surface, and the proportionality constant is energy density.
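The dust example can be made concrete with a short numerical sketch (my own, in units with $c=1$): in the dust rest frame only $T^{00}=\rho$ is nonzero, and a boost mixes the energy density with momentum density/flux components.

```python
import numpy as np

rho = 2.0                            # rest-frame energy density of the dust
u = np.array([1.0, 0.0, 0.0, 0.0])   # four-velocity in the dust rest frame

T = rho * np.outer(u, u)             # dust stress-energy: T^{mu nu} = rho u^mu u^nu
# rest frame: only T^{00} = rho (the energy density) is nonzero

# boost along x with speed v: the boosted frame sees energy density gamma^2 rho
# and an energy flux / momentum density gamma^2 rho v
v = 0.6
gam = 1.0 / np.sqrt(1.0 - v ** 2)
L = np.array([[gam, gam * v, 0, 0],
              [gam * v, gam, 0, 0],
              [0, 0, 1, 0],
              [0, 0, 0, 1]])
Tb = L @ T @ L.T
```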
It's a number that tells us how much of the $p^0$ component pierced a piece of a surface of constant $x^{\nu}$, divided by the size of the piece. Why is it called a flux?
Fluxes are (thing/area)/time. If you set up a surface of $x^1=const$ then you can make a piece of that surface with a $\Delta t$, a $\Delta y$ and a $\Delta z$ and see how much of your thing hits the piece of the $x^1=const$ surface inside your patch, and it is obviously proportional to the duration of the patch (it has to hit within the time interval) and the area of the patch. So the rate per area is a measure of the constant of proportionality between how many pierced the piece of the $x^1=const$ surface and the "volume" $\Delta t \Delta y \Delta z$ of the piece.
So flux is the version when you have a "volume" $\Delta t \Delta y \Delta z$ for a piece, and density (stuff per volume $\Delta x \Delta y \Delta z$) is the name when you have a piece of a $t=const$ surface. Rightly they are the same exact concept. So we either call them both densities or call them both fluxes or we call them density-flux or flux-density.
It is called a flux because it's the thing you multiply by the "volume" $\Delta t \Delta y \Delta z$ to get how much of the thing pierced your piece of the hypersurface $x^1=const$. When the "volume" is an actual volume $\Delta x \Delta y \Delta z$ then historically we called that constant a density before we knew that spacetime is legitimate.
Recognizing that flux is a rate per area, and that this generalizes to density for $t=const$ hypersurfaces, is all you need to understand it. Now you can do particle flux, mass flux, energy flux, etc.
edit
So that's what flux and density are, and they are the same concept. Let's address your specific questions one by one:
$T^{00}$ should be the flux of energy through space, right?
No, $T^{00}$ is the flux of energy through a surface of $t=const$, or more rightly $$\int\int\int_{t=const}T^{00}dx^1dx^2dx^3,$$ should give you how much energy passed through your region, so locally $T^{00}$ is a constant of proportionality that scales $\Delta x \Delta y \Delta z$ up to a little bit of energy. Historically we'd call it energy density, but in relativistic physics we call it a flux to acknowledge that there is nothing different in principle between that constant and the constant you multiply by $\Delta t \Delta y \Delta z$ to see how much stuff flows through an area $\Delta y \Delta z$ in a time interval $\Delta t$.
I think I'm not understanding is the usage of the word flux here.
A flux is a constant of proportionality you multiply by $\Delta t \Delta y \Delta z$ to see how much stuff flows through an area $\Delta y \Delta z$ in a time interval $\Delta t$, all in a surface of $x=const$. So to be fair to all directions of spacetime, you can pick any surface like $dx^{\nu}=const$ then take $c\Delta t \Delta x \Delta y \Delta z/ \Delta x^{\nu}$ to quantify how much of that infinite surface $dx^{\nu}=const$ you have and the constant of proportionality that you multiply $c\Delta t \Delta x \Delta y \Delta z/ \Delta x^{\nu}$ by is called the flux.
However, what all the resources I've looked at say is that the flux of energy through space would be the energy density, which isn't at all the equation above.
An integral with $dxdydz$ looks exactly like you are integrating a density, and a density on a $t=const$ surface is exactly a flux for $dx^{\nu}=const$ surface for the special case where $\nu=0$.
If $T^{00}$ were to be the energy density $\rho$, wouldn't it make more sense to change the definition of the stress energy tensor, such that:
The four-momentum $p^{\mu}$ is the flux of $T^{\mu\nu}$ through a surface of constant $dx^{\nu}$.
OK. Sometimes people call the flux the rate, the thing per area per time, or the thing per volume. But sometimes they call the thing integrated the flux. This is unfortunate. Confusing the two is like confusing an energy and an energy density. The stress-energy tensor is telling you the rate, the per volume quantity:
$$p^{\mu}=\int\int\int_{t=const} T^{\mu 0} dx dy dz.$$
Instead of choosing a $t=const$ surface and seeing how much $p^\mu$ crossed it, you could pick an $x=const$ surface and get:
$$p^{\mu}=\int\int\int_{x=const} T^{\mu 1} cdt dy dz.$$
Or you could pick a $y=const$ surface and get:
$$p^{\mu}=\int\int\int_{y=const} T^{\mu 2} cdt dx dz.$$
Or you could pick a $z=const$ surface and get:
$$p^{\mu}=\int\int\int_{z=const} T^{\mu 3} cdt dx dy.$$
In each case, the integral is telling you how much $p^\mu$ crosses your hypersurface. And it turns out this is enough to handle any hypersurface, in particular if you pick any surface that locally looks flat, you can combine these rates per area (or densities) $T^{\mu 0}$, $T^{\mu 1}$, $T^{\mu 2}$, and $T^{\mu 3}$ to find out how much $p^\mu$ flows across your arbitrary surface.
|
Or should we input $[1 \ 0]$ in each H gate, because we are applying H gates to just a qubit of state $|0\rangle$ each time?
Yes, when you have a two-qubit state (say you label the two qubits as $A$ and $B$ respectively), you need to apply the two Hadamard gates separately on each qubit's state. The final state will be the tensor product of the two "transformed" single-qubit states.
If your input is $|0\rangle_A\otimes|0\rangle_B$, the output will simply be $$\left(\frac{|0\rangle+|1\rangle}{\sqrt{2}}\right)_A\otimes\left(\frac{|0\rangle+|1\rangle}{\sqrt{2}}\right)_B$$
Alternative:
If the two input qubits are
entangled, the above method won't work since you won't be able to represent the input state as a tensor product of the states of the two qubits. So, I'm outlining a more general method here.
When two gates are in parallel, like in your case, you can consider the tensor product of the two gates and apply
that on the 2-qubit state vector. You'll end up with the same result.
$\frac{1}{\sqrt{2}}\begin{bmatrix}1&1\\1&-1\\ \end{bmatrix} \otimes \frac{1}{\sqrt{2}}\begin{bmatrix}1&1\\1&-1\\ \end{bmatrix} = \frac{1}{2}\begin{bmatrix}1&1&1&1\\1&-1&1&-1\\1&1&-1&-1\\1&-1&-1&1 \end{bmatrix}$
Now, on applying this matrix on the 2-qubit state $\begin{bmatrix}1\\0\\0\\0\end{bmatrix}$ you get:
$$\frac{1}{2}\begin{bmatrix}1&1&1&1\\1&-1&1&-1\\1&1&-1&-1\\1&-1&-1&1 \end{bmatrix} \begin{bmatrix}1\\0\\0\\0\end{bmatrix}=\begin{bmatrix}1/2\\1/2\\1/2\\1/2\end{bmatrix}$$
which is equivalent to $$\left(\frac{|0\rangle+|1\rangle}{\sqrt{2}}\right)_A\otimes\left(\frac{|0\rangle+|1\rangle}{\sqrt{2}}\right)_B$$
Justification
Tensor product of linear maps:
The tensor product also operates on linear maps between vector spaces.
Specifically, given two linear maps $S : V \to X$ and $T : W \to Y$
between vector spaces, the tensor product of the two linear maps $S$
and $T$ is the linear map $S \otimes T : V \otimes W \to X \otimes Y$
defined by $(S\otimes T)(v\otimes w) = S(v) \otimes T(w)$.
Thus, $$(\mathbf H|0\rangle_A) \otimes (\mathbf H|0\rangle_B) = (\mathbf H\otimes \mathbf H)(|0\rangle_A \otimes |0\rangle_B)$$
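Both routes are easy to check numerically. Here is a small NumPy sketch (the $2\times 2$ matrix is exactly the Hadamard matrix written above, and nothing here is specific to any quantum library):

```python
import numpy as np

# Hadamard gate in the computational basis
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

# Single qubit |0>
ket0 = np.array([1, 0])

# Method 1: transform each qubit, then take the tensor (Kronecker) product
per_qubit = np.kron(H @ ket0, H @ ket0)

# Method 2: build H (x) H first, then apply it to |00> = |0> (x) |0>
HH = np.kron(H, H)
combined = HH @ np.kron(ket0, ket0)

print(per_qubit)                         # [0.5 0.5 0.5 0.5]
print(np.allclose(per_qubit, combined))  # True
```

Both methods give the same $4$-vector with all entries $1/2$, matching the matrix calculation above.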
|
The Sharpe ratio tells us the amount of excess return we get for taking on each additional unit of portfolio standard deviation. $$\frac{\mu_p - r_f}{\sigma_p }$$
We are looking for the combination of the two risky assets with the highest Sharpe ratio ($P^*$). Once we do that, we can take linear combinations of that portfolio and the risk-less asset and form the Capital-Market Line. This is usually solved for numerically rather than analytically but it is possible to do so analytically, particularly in the two asset case.
A portfolio $p$ has an expected return of:$$\mu_p(w_A) = w_A \cdot \mu_A + (1 - w_A) \cdot \mu_B $$
and a standard deviation of:$$\sigma_p(w_A) = \sqrt{w^2_A \cdot \sigma^2_A + (1 - w_A)^2 \cdot \sigma^2_B + 2(1-w_A)w_A \sigma_{AB}} $$where $\sigma^2_A$ is the variance of asset $A$, $\sigma^2_B$ is the variance of asset $B$, and $\sigma_{AB}$ is their covariance. It therefore has a Sharpe Ratio of:
$$\frac{\mu_p(w_A) - r_f}{\sigma_p(w_A)} = \frac{w_A \cdot \mu_A + (1 - w_A) \cdot \mu_B - r_f}{\sqrt{w^2_A \cdot \sigma^2_A + (1 - w_A)^2 \cdot \sigma^2_B + 2(1-w_A)w_A \sigma_{AB}}}$$
To maximize this you'll want to solve:$$ \frac{d}{dw_A} \frac{\mu_p(w_A) - r_f}{\sigma_p(w_A)} = 0 $$$\Rightarrow w^{*}_A$ s.t. $P(w^{*}_A)=P^*$, and check that the second-order condition is met:$$ \frac{d^2}{dw^2_A} \frac{\mu_p(w_A) - r_f}{\sigma_p(w_A)} < 0$$
The algebra is a bit hairy but there is nothing tricky from here on out.
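If you'd rather sidestep the algebra, the maximization can be sketched numerically. The asset parameters below are made-up illustrative numbers, and a dense grid search stands in for solving the first-order condition:

```python
import numpy as np

# Illustrative (made-up) inputs: returns, volatilities, covariance, risk-free rate
mu_A, mu_B = 0.10, 0.06
sig_A, sig_B = 0.20, 0.12
cov_AB = 0.5 * sig_A * sig_B   # correlation of 0.5
r_f = 0.02

def sharpe(w):
    """Sharpe ratio of the portfolio with weight w in asset A, 1-w in asset B."""
    mu_p = w * mu_A + (1 - w) * mu_B
    var_p = w**2 * sig_A**2 + (1 - w)**2 * sig_B**2 + 2 * w * (1 - w) * cov_AB
    return (mu_p - r_f) / np.sqrt(var_p)

# Dense grid over w_A stands in for solving d(Sharpe)/dw_A = 0
w_grid = np.linspace(0.0, 1.0, 100001)
w_star = w_grid[np.argmax(sharpe(w_grid))]
print(w_star, sharpe(w_star))
```

For these inputs the grid search lands on the same weight as the closed-form tangency-portfolio formula, which is a useful cross-check on the algebra.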
|
I have reduced this problem (thanks @Mhenni) to the following (which needs to be proved):
$$\prod_{k=1}^n\frac{\Gamma(3k)\Gamma\left(\frac{k}{2}\right)}{2^k\Gamma\left(\frac{3k}{2}\right)\Gamma(2k)}=\prod_{k=1}^n\frac{2^k(1+k)\Gamma(k)\Gamma\left(\frac{3(1+k)}{2}\right)}{(1+3k)\Gamma(2k)\Gamma\left(\frac{3+k}{2}\right)}.$$
As you see it's quite a mess. Hopefully one can apply some gamma-identities and cancel some stuff out. I have evaluated both products for large numbers and I
know that the identity is true, I just need to learn how to manipulate those gammas.
|
Let $G \left(X, Y, E\right)$ be a bipartite graph with two equal-sized parts (that is, $|X|=|Y|=n$).
An
envy-free matching is a perfect matching between two subsets $X_1 \subseteq X$ and $Y_1 \subseteq Y$ such that no unmatched $x$ (that is, $x \in X \setminus X_1$) wants (i.e., is connected to) any matched $y$ (that is, $y \in Y_1$).
For example, in the following graph (where an edge from $x$ to $y$ means that $x$ wants $y$):
$x_1$ wants $y_2$; $x_2$ wants $y_1,y_2,y_3$; $x_3$ wants $y_2$
an envy-free matching is: $x_2 \to y_1$, since $x_1$ and $x_3$ don't want $y_1$ so they are not envious.
Note that an envy-free matching may be different from the
maximum-size matching. In the above graph, there is a maximum matching of size 2 ($x_2 \to y_1,x_1\to y_2$), but it is not envy-free because $x_3$ is envious.
In the following graph:
$x_1$ wants $y_2$; $x_2$ wants $y_2$
the only envy-free matching is the empty matching, since if $y_2$ is matched to $x_i$, then $x_{3-i}$ is envious.
My conjecture is that,
if every $y$ is wanted by at least one $x$, then there is a non-empty envy-free matching.
Can you prove or disprove it?
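For small graphs the conjecture can at least be sanity-checked by brute force. Here is a sketch that enumerates all candidate matchings (vertex indices are 0-based, so $x_1, y_1$ become index 0):

```python
from itertools import combinations, permutations

def has_nonempty_envy_free_matching(n, edges):
    """Brute-force check: does the bipartite graph have a non-empty
    envy-free matching?  edges is a set of (x, y) pairs, x, y in range(n)."""
    for size in range(1, n + 1):
        for X1 in combinations(range(n), size):
            for Y1 in combinations(range(n), size):
                for perm in permutations(Y1):
                    matching = list(zip(X1, perm))
                    if not all((x, y) in edges for x, y in matching):
                        continue  # not a perfect matching on (X1, Y1)
                    unmatched = set(range(n)) - set(X1)
                    # envy-free: no unmatched x wants any matched y
                    if not any((x, y) in edges for x in unmatched for y in Y1):
                        return True
    return False

# First example from the post: x2 -> y1 is envy-free
g1 = {(0, 1), (1, 0), (1, 1), (1, 2), (2, 1)}
print(has_nonempty_envy_free_matching(3, g1))   # True

# Second example: only the empty matching is envy-free
g2 = {(0, 1), (1, 1)}
print(has_nonempty_envy_free_matching(2, g2))   # False
```

This only tests particular instances, of course; it does not settle the conjecture, but it reproduces both examples above.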
|
I began to study Field Theory from Lang's book and I would be happy to discuss some questions which I am going to write down below:
Question 1. Suppose that $\mathbb{k}$-field, $E$ extension field of $\mathbb{k}$ and $\alpha_1,\alpha_2\in E$.
Am I right that $\mathbb{k}(\alpha_1,\alpha_2)=\mathbb{k}(\alpha_1)(\alpha_2)$?
The LHS is the smallest subfield in $E$ containing $\mathbb{k},\alpha_1$ and $\alpha_2$. The RHS is the smallest subfield in $E$ containing $\mathbb{k}(\alpha_1)$ and $\alpha_2$.
Proof: Since $\mathbb{k}(\alpha_1,\alpha_2)$ contains $\mathbb{k},\alpha_1,\alpha_2$ then it also contains $\mathbb{k}(\alpha_1)$ and $\alpha_2$ then it contains also $\mathbb{k}(\alpha_1)(\alpha_2)$. Hence $\mathbb{k}(\alpha_1)(\alpha_2)\subset\mathbb{k}(\alpha_1,\alpha_2)$.
Conversely, $\mathbb{k}(\alpha_1)(\alpha_2)$ contains $\mathbb{k},\alpha_1,\alpha_2$ then it immediately contains $\mathbb{k}(\alpha_1,\alpha_2)$. Thus we proved the equality. Is the proof correct?
Question 2. If $\alpha$ algebraic over $\mathbb{k}$, and $\mathbb{k}\subset F$ and $\mathbb{k}[\alpha],F\subset L$ then $\alpha$ is algebraic over $F$.
This proposition seems to me quite weird because the condition $\mathbb{k}[\alpha],F\subset L$ is extra.
Indeed, if $\alpha$ is algebraic over $\mathbb{k}$ then there exist $a_0,\dots,a_n\in \mathbb{k}$, $n\geq 1$ (not all of them zero) such that $a_0+a_1\alpha+\dots+a_n\alpha^n=0$; but since $\mathbb{k}\subset F$, it follows that $\alpha$ is algebraic over $F$.
Am I right? Maybe I am misunderstanding something?
Question 3: If $E=\mathbb{k}(\alpha_1,\dots,\alpha_n)$, and $F$ is an extension of $\mathbb{k}$, both $F,E$ contained in $L$, then $$EF=F(\alpha_1,\dots,\alpha_n).$$
Can anyone show how to prove this equality? And what does it mean?
Would be very grateful for detailed answers!
EDIT: How does it follow from question 1 that $k(\alpha_1,\dots,\alpha_{n-1})(\alpha_n)=k(\alpha_1,\dots,\alpha_{n-1},\alpha_n)$? I don't know how to prove it rigorously.
|
Suppose that $f:\mathbb{R}^n\to \mathbb{R}^m$ is of class $C^1$ and $Df(x_0)$ has rank $m$. Then show there is a whole neighborhood of $f(x_0)$ lying in the image of $f$.
My attempt: if $Df(x_0)$ is onto (rank $m$) and $n\leq m$, then I can use the Rank Theorem and justify that there exist open sets $V,W\subset \mathbb{R}^m$, $f(x_0)\in V$, and $\psi:V\to W$ such that: $$(\psi \circ f)(x_1,\cdots,x_n)=(x_1,\cdots,x_n,0,\cdots,0)$$ In particular, this means that $V\subset \operatorname{Im}(f)$.
Is this correct? What happens when $n>m$?
Thanks for your help.
|
In Classical Mechanics one usually considers the Lagrangian as $L = K - U$ where $K$ is the kinetic energy of the system and $U$ is the potential energy. One then gets the Euler-Lagrange equations and everything is fine: if we have a system we can plug in the kinetic energy and potential and find the Lagrangian for it.
The point is that I've already seen a different object: the Lagrangian density $\mathcal{L}$ which is on the other hand a $4$-form on space-time. The main difference being that the action is the integral of $L$ over time and the integral of $\mathcal{L}$ over all space-time.
The problem is that apart from that, no relation between $\mathcal{L}$ and other quantities is given at first. So for instance, in electrodynamics we have
$$\mathcal{L} = -\dfrac{1}{4\mu_0}F^{\alpha \beta}F_{\alpha\beta}-A_\alpha J^\alpha$$
Where $A$ is the $4$-potential and $F=dA$ is the electromagnetic tensor. It is not clear at first, why
this is the right Lagrangian density in the sense that it becomes a little hard to see where it comes from.
So, the Lagrangian itself is just $K-U$, but what about the Lagrangian density? How does one find it?
|
Assume we have $n$ double elements $a_1 \dots a_n$. We want to find out if two of the elements of the array are identical. And we have a hash function $h(x)$ which assigns each double value an integer between $1$ and $n$ and which can be calculate in $O(1)$ time. Let $m := \{(i,j) : a_j \neq a_i \text{ and } h(a_j) = h(a_i)\}.$
How can I check whether all $n$ elements are different in $O(n+|m|)$ time and $O(n)$ memory?
1) The naive approach would be to compare each of the $n$ elements against every other element, which requires $O(n^2)$ time.
2) A better way would be to sort the elements, which takes $O(n \cdot \log(n))$ time, and then check each adjacent pair of elements. In total it would be $O(n \cdot \log(n))$.
But I don't know how to solve this problem faster. I think neither approach 1) nor 2) can be improved further. Apparently I have to use the hash function somehow, but I don't see how.
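For what it's worth, here is a sketch of the intended bucket-based approach (Python for illustration; the toy hash function is an assumption, any $O(1)$ map into $1..n$ works). Bucketing costs $O(n)$, and the within-bucket comparisons cost $O(|m|)$ plus at most one early-terminating equal pair, giving $O(n+|m|)$ time and $O(n)$ memory overall:

```python
def all_distinct(a, h):
    """Return True iff all values in a are distinct.

    Buckets the n values by h in O(n), then compares only within buckets;
    each within-bucket comparison of unequal values is a pair in m, and an
    equal pair terminates immediately, so total time is O(n + |m|)."""
    n = len(a)
    buckets = [[] for _ in range(n)]   # hash values lie in 1..n
    for x in a:
        buckets[h(x) - 1].append(x)
    for bucket in buckets:
        for i in range(len(bucket)):
            for j in range(i + 1, len(bucket)):
                if bucket[i] == bucket[j]:
                    return False
    return True

# Toy hash for illustration (any O(1) map into 1..n would do)
data = [3.1, 2.7, 5.5, 3.1]
h = lambda x: int(x) % len(data) + 1
print(all_distinct(data, h))                   # False: 3.1 appears twice
print(all_distinct([3.1, 2.7, 5.5, 6.2], h))   # True
```

Note the bound only holds in expectation for a good hash; with an adversarial $h$, $|m|$ itself can be $\Theta(n^2)$, which is exactly why the running time is stated in terms of $|m|$.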
|
Suppose we are interested in a more detailed inventory of the colorings of an object, namely, instead of the total number of colorings we seek the number of colorings with a given number of each color.
Example 6.3.1 How many distinct ways are there to color the vertices of a regular pentagon modulo $D_5$ so that one vertex is red, two are blue, and two are green?
We can approach this as before, that is, the answer is $${1\over|D_5|}\sum_{\sigma\in D_5}|\fix(\sigma)|,$$ where $\fix(\sigma)$ now means the colorings with one red, two blues, and two greens that are fixed by $\sigma$. No longer can we use the simple expression of corollary 6.2.8.
The identity permutation fixes all colorings, so we need to know how many colorings of the pentagon use one red, two blues, and two greens. This is an easy counting problem: the number is ${5\choose2}{3\choose2}=30$.
If $\sigma$ is a non-trivial rotation, $|\fix(\sigma)|=0$, since the only colorings fixed by a rotation have all vertices the same color.
If $\sigma$ is a reflection, the single vertex fixed by $\sigma$ must be red, and then the remaining 2-cycles are colored blue and green in one of two ways, so $|\fix(\sigma)|=2$.
Thus, the number of distinct colorings is $${1\over10}(30+0+0+0+0+2+2+2+2+2)=4.$$
What we seek is a way to streamline this process, since in general the computations of $|\fix(\sigma)|$ can be tedious. We begin by recasting the formula of corollary 6.2.8.
Definition 6.3.2 The
type of a permutation $\sigma\in S_n$ is $\tau(\sigma)=(\tau_1(\sigma),\tau_2(\sigma),\ldots,\tau_n(\sigma))$, where $\tau_i(\sigma)$ is the number of $i$-cycles in the cycle form of $\sigma$.
Note that $\sum_{i=1}^n \tau_i(\sigma)=\#\sigma$. Now instead of the simple $${1\over|G|}\sum_{\sigma\in G} k^{\#\sigma}$$ let us write $${1\over|G|}\sum_{\sigma\in G} x_1^{\tau_1(\sigma)}x_2^{\tau_2(\sigma)} \cdots x_n^{\tau_n(\sigma)}.$$ If we substitute $x_i=k$ for every $i$, we get the original form of the sum, but the new version carries more information about each $\sigma$.
Suppose we want to know the number of colorings fixed by some $\sigma$ that use $i$ reds and $j$ blues, where of course $i+j=n$. Using ideas familiar from generating functions, consider the following expression: $$(r+b)^{\tau_1(\sigma)}(r^2+b^2)^{\tau_2(\sigma)}\cdots (r^n+b^n)^{\tau_n(\sigma)}.$$ If we multiply out, we get a sum of terms of the form $r^pb^q$, each representing some particular way of coloring the vertices of cycles red and blue so that the total number of red vertices is $p$ and the number of blue vertices is $q$, and moreover this coloring will be fixed by $\sigma$. When we collect like terms, the coefficient of $r^ib^j$ is the number of colorings fixed by $\sigma$ that use $i$ reds and $j$ blues. This means that the coefficient of $r^ib^j$ in $$\sum_{\sigma\in G} (r+b)^{\tau_1(\sigma)}(r^2+b^2)^{\tau_2(\sigma)}\cdots (r^n+b^n)^{\tau_n(\sigma)}$$ is $$\sum_{\sigma\in G} |\fix(\sigma)|$$ where $\fix(\sigma)$ is the set of colorings using $i$ reds and $j$ blues that are fixed by $\sigma$. Finally, then, the number of distinct colorings using $i$ reds and $j$ blues is this coefficient divided by $|G|$. This means that by multiplying out $${1\over |G|}\sum_{\sigma\in G}(r+b)^{\tau_1(\sigma)}(r^2+b^2)^{\tau_2(\sigma)}\cdots (r^n+b^n)^{\tau_n(\sigma)}$$ and collecting like terms, we get a list of the number of distinct colorings using any combination of reds and blues, each the coefficient of a different term; we call this the
inventory of colorings. If we substitute $r=1$ and $b=1$, we get the sum of the coefficients, namely, the total number of distinct colorings with two colors.
Definition 6.3.3 The
cycle index of $G$ is$${\cal P}_G={1\over |G|}\sum_{\sigma\in G}\prod_{i=1}^n x_i^{\tau_i(\sigma)}.$$
Example 6.3.4 Consider again example 6.2.6, in which we found the number of colorings of a square with two colors. The cycle index of $D_4$ is $${1\over8}( x_1^4+x_4^1+x_2^2+x_4^1+x_2^2+x_2^2+x_1^2x_2+x_1^2x_2 ) ={1\over8}x_1^4 + {1\over4}x_1^2x_2 + {3\over8}x_2^2 + {1\over4}x_4.$$ Substituting as above gives $${1\over8}(r+b)^4 + {1\over4}(r+b)^2(r^2+b^2) + {3\over8}(r^2+b^2)^2 + {1\over4}(r^4+b^4) =r^4 + r^3b + 2r^2b^2 + rb^3 + b^4.$$ Thus there is one all red coloring, one with three reds and one blue, and so on, as shown in figure 6.2.4.
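This expansion is easy to reproduce with a computer algebra system. A short SymPy sketch of the two-color substitution into the cycle index of $D_4$:

```python
from sympy import symbols, Rational, expand, simplify

r, b = symbols('r b')

# Two-color substitution x_i -> r**i + b**i in the cycle index of D_4
x = lambda i: r**i + b**i
cycle_index_D4 = Rational(1, 8) * (x(1)**4 + 2*x(4) + 3*x(2)**2 + 2*x(1)**2 * x(2))
inventory = expand(cycle_index_D4)

# Matches r^4 + r^3 b + 2 r^2 b^2 + r b^3 + b^4 from the text
print(inventory)
# Setting r = b = 1 recovers the total number of distinct 2-colorings
print(inventory.subs({r: 1, b: 1}))  # 6
```

Substituting $x_i = r^i + b^i + g^i$ instead reproduces the three-color inventory of example 6.3.5 the same way.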
There is nothing special about the use of two colors. If we want to use three colors, we substitute $r^i+b^i+g^i$ for $x_i$ in the cycle index, and for $k$ colors we substitute something like $c_1^i+c_2^i+c_3^i+\cdots+c_k^i$.
Example 6.3.5 Let's do the number of 3-colorings of the square. Since we already have the cycle index, we need only substitute $x_i=r^i+b^i+g^i$ and expand. We get $$\eqalign{ {1\over8}(r+b+g)^4 &+ {1\over4}(r+b+g)^2(r^2+b^2+g^2) + {3\over8}(r^2+b^2+g^2)^2 + {1\over4}(r^4+b^4+g^4)\cr ={}&b^4 + b^3g + b^3r + 2b^2g^2 + 2b^2gr + 2b^2r^2 + bg^3 + 2bg^2r + 2bgr^2 +\cr &br^3 + g^4 + g^3r + 2g^2r^2 + gr^3 + r^4.\cr }$$ So, for example, there are two squares with two blue vertices, one green, and one red, from the $b^2gr$ term.
Example 6.3.6 Consider again example 6.2.7, in which we counted the number of four-vertex graphs. Following that example, we get $${\cal P}_G={1\over 24}(x_1^6+6x_2x_4+8x_3^2+3x_1^2x_2^2+6x_1^2x_2^2),$$ and substituting for the variables $x_i$ gives $$r^6 + r^5b + 2r^4b^2 + 3r^3b^3 + 2r^2b^4 + rb^5 + b^6.$$ Recall that the "colors'' of the edges in this example are "included'' and "excluded''. If we set $b=1$ and $r=i$ (for "included'') we get $$i^6 + i^5 + 2i^4 + 3i^3 + 2i^2 + i + 1,$$ interpreted as one graph with 6 edges, one with 5, two with 4, three with 3, two with 2, one with 1, and one with zero edges, since $1=i^0$.
It is possible, though a bit difficult, to see that for $n$ vertices the cycle index is $$ {\cal P}_G=\sum_{\bf j}\prod_{k=1}^n{1\over k^{j_k} j_k!} \prod_{k=1}^{\lfloor n/2\rfloor}(x_{k} x_{2k}^{k-1})^{j_{2k}} \!\!\prod_{k=1}^{\lfloor (n-1)/2\rfloor}\!\!x_{2k+1}^{kj_{2k+1}} \prod_{k=1}^{\lfloor n/2\rfloor} x_k^{kC(j_k,2)} \!\!\!\!\prod_{1\le r< s\le n-1}\!\!\!\!\!\!\!\! x_{\lcm(r,s)}^{\gcd(r,s)j_rj_s}, $$ where the sums are over all partitions ${\bf j}=(j_1,j_2,\ldots,j_n)$ of $n$, that is, over all $\bf j$ such that $j_1+2j_2+3j_3+\cdots+nj_n=n$, and $C(m,2)={m\choose2}$. This is where the formula 6.2.1 comes from, substituting $x_i=2$ for all $i$.
With this formula and a computer it is easy to compute the inventory of $n$-vertex graphs when $n$ is not too large. When $n=5$, the inventory is $$ i^{10} + i^9 + 2i^8 + 4i^7 + 6i^6 + 6i^5 + 6i^4 + 4i^3 + 2i^2 + i+1. $$
Exercises 6.3
Ex 6.3.2 Using the previous exercise, write out a full inventory of colorings of the vertices of a regular tetrahedron induced by the rigid motions, with three colors, as in example 6.3.5. You may use Sage or some other computer algebra system.
Ex 6.3.4 Using the previous exercise, write out a full inventory of the graphs on five vertices, as in example 6.3.6. You may use Sage or some other computer algebra system.
|
D-meson nuclear modification factor and elliptic flow measurements in Pb–Pb collisions at $\sqrt {s_{NN}}$ = 5.02TeV with ALICE at the LHC
(Elsevier, 2017-11)
ALICE measured the nuclear modification factor ($R_{AA}$) and elliptic flow ($v_{2}$) of D mesons ($D^{0}$, $D^{+}$, $D^{*+}$ and $D_{s}^{+}$) in semi-central Pb–Pb collisions at $\sqrt{s_{NN}} = 5.02$ TeV. The increased ...
ALICE measurement of the $J/\psi$ nuclear modification factor at mid-rapidity in Pb–Pb collisions at $\sqrt{{s}_{NN}}=$ 5.02 TeV
(Elsevier, 2017-11)
ALICE at the LHC provides unique capabilities to study charmonium production at low transverse momenta ($p_{T}$). At central rapidity ($|y| < 0.8$), ALICE can reconstruct $J/\psi$ via their decay into two electrons down to zero ...
|
The exact answer depends on the exact kind of superposition you want. The answers by pyramids and Niel both give you something like
$$A\sum_{t=1}^n |\,\,f_t (x)\,\,\rangle \otimes |F_t\rangle$$
Here I've followed Niel in labelling the different functions $f_1$, $f_2$, etc., with $n$ as the total number of functions you want to superpose. Also I've used $F_t$ to denote some description of the function $f_t$ as a stored program. The $A$ is just whatever number needs to be there for the state to be normalized.
Note that this is not simply a superposition of the $f_t(x)$. It is entangled with the stored program. If you were to trace out the stored program, you'd just have a mixture of the $f_t(x)$. This means that the stored program could constitute 'garbage', which prevents interference effects that you might be counting on. Or it might not. It depends on how this superposition will be used in your computation.
If you want rid of the garbage, things get more tricky. For example, suppose what you want is a unitary $U$ that has the effect
$$U : \,\,\, | x \rangle \otimes |0\rangle^{\otimes N} \rightarrow A \sum_{t=1}^n |\,\,f_t (x)\,\,\rangle$$
for all possible inputs $x$ (which I am assuming are bit strings written in the computational basis). Note that I've also included some blank qubits on the input side, in case the functions have longer outputs than inputs.
From this we can very quickly find a condition that the functions must satisfy: since the input states form an orthogonal set, so must the outputs. This will put a significant restriction on the kinds of functions that can be combined in this way.
|
\[\def\bigtimes{\mathop{\vcenter{\huge\times}}}\]
Introduction
In a recent post, principles of Dynamic Programming were used to derive a recursive control algorithm for Deterministic Linear Control systems. The challenge with the approach used in that blog post is that it is only readily useful for Linear Control Systems with linear cost functions. What if, instead, we had a Nonlinear System to control, or a cost function with some nonlinear terms? Such a problem would be challenging to solve using the approach described in the former blog post.
In this blog post, we are going to cover a more general approximate Dynamic Programming approach that approximates the optimal controller by essentially discretizing the state space and control space. This approach will be shown to generalize to nonlinear problems, whether the nonlinearity comes from the dynamics or the cost function. While this approximate solution scheme is conveniently general in a mathematical sense, its limitations with respect to the Curse of Dimensionality will show why this approach cannot be used for every problem.
The Approximate Dynamic Programming Formulation Definitions
To approach approximating these Dynamic Programming problems, we must first start out with an applicable formulation. One of the first steps will be defining various items that will help make the work later more precise and understandable. The first two quantities are that of the complete State Space and Control Space. We can define those two spaces in the following manner:
\begin{align}
\mathcal{X} &= \bigtimes_{i=1}^{n} \lbrack x_{l}^{(i)}, x_{u}^{(i)}\rbrack \\ \mathcal{U} &= \bigtimes_{i=1}^{m} \lbrack u_{l}^{(i)}, u_{u}^{(i)}\rbrack \end{align}
where $\mathcal{X} \subset \mathbb{R}^{n}$ is the State Space, $x_{l}^{(i)}, x_{u}^{(i)}$ are the $i^{th}$ low and upper bounds of the State Space, $\mathcal{U} \subset \mathbb{R}^{m}$ is the Control Space, and $u_{l}^{(i)}, u_{u}^{(i)}$ are the $i^{th}$ low and upper bounds of the Control Space. Now these spaces represent the complete State Space and Control Space. To approximate the Dynamic Programming problem, though, we will instead discretize the State Space and Control Space into subspaces $\mathcal{X}_{D} \subset \mathcal{X}$ and $\mathcal{U}_{D} \subset \mathcal{U}$. We can thus define $\mathcal{X}_{D}$ and $\mathcal{U}_{D}$ in the following manner:
\begin{align}
\mathcal{X}_{D} &= \bigtimes_{i=1}^{n} L( x_{l}^{(i)}, x_{u}^{(i)}, N_i ) \label{xd} \\ \mathcal{U}_{D} &= \bigtimes_{i=1}^{m} L( u_{l}^{(i)}, u_{u}^{(i)}, M_i ) \label{ud} \\ L(a,b,N) &= \left \lbrace a + j \Delta : \Delta = \frac{b-a}{N-1}, j \in \lbrace 0, 1, 2, \cdots, N-1 \rbrace \right \rbrace \end{align}
What the formulation above shows is we generate a subset of both $\mathcal{X}$ and $\mathcal{U}$ by breaking up the bounds of the $i^{th}$ dimensions into pieces. With these definitions, we can proceed with the mathematical and algorithmic formulation of the problem!
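As a sketch, the grid function $L(a,b,N)$ is exactly NumPy's `linspace`, and the discretized spaces are Cartesian products of such one-dimensional grids. The bounds and resolutions below are illustrative:

```python
import numpy as np
from itertools import product

def L(a, b, N):
    """Evenly spaced grid on [a, b] with N points: {a + j*(b-a)/(N-1)}."""
    return np.linspace(a, b, N)

# Discretized 2-D state space X_D as a Cartesian product of 1-D grids
theta_grid = L(-np.pi, np.pi, 5)
dtheta_grid = L(-3 * np.pi, 3 * np.pi, 7)
X_D = list(product(theta_grid, dtheta_grid))
print(len(X_D))  # 35 = 5 * 7
```

The multiplicative growth of `len(X_D)` with each added dimension is exactly the Curse of Dimensionality discussed at the end of this post.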
Mathematical Formulation
To make a general (deterministic) control problem applicable to Dynamic Programming, it needs to fit within the following framework:
\begin{align}
\boldsymbol{x}_{k+1} &= f(\boldsymbol{x}_{k},\boldsymbol{u}_{k}) \\ % \mu_{k}^{*}(\boldsymbol{x}_{j}) &= \arg\min_{\hat{\boldsymbol{u}} \in \mathcal{U}_{D}} g_{k}(\boldsymbol{x}_{j},\hat{\boldsymbol{u}}) + V_{k+1}^{*}(f(\boldsymbol{x}_{j},\hat{\boldsymbol{u}}))\\ % V_{k}^{*}(\boldsymbol{x}_{j}) &= g_{k}(\boldsymbol{x}_{j},\mu_{k}^{*}(\boldsymbol{x}_{j})) + V_{k+1}^{*}(\boldsymbol{x}_{k+1})\\ % V_{k}^{*}(\boldsymbol{x}_{j}) &= g_{k}(\boldsymbol{x}_{j},\mu_{k}^{*}(\boldsymbol{x}_{j})) + V_{k+1}^{*}(f(\boldsymbol{x}_{j},\mu_{k}^{*}(\boldsymbol{x}_{j}))) \nonumber \\ % V_{N}^{*}(\boldsymbol{x}_{N}) &= g_{N}(\boldsymbol{x}_{N}) \end{align}
$\forall k \in \lbrace 1, 2, 3, \cdots, N-1 \rbrace$, and $\forall j \in \lbrace 1, 2, 3, \cdots, |\mathcal{X}_{D}| \rbrace $. Note as well that $\mu_{k}^{*}(\boldsymbol{x})$ is the optimal controller (or policy) at the $k^{th}$ timestep as a function of some state $\boldsymbol{x} \in \mathcal{X}_{D}$. The idea of the above formulation is we compute a cost at some terminal time, $t_{N}$, using the cost function $g_{N}(\cdot)$, and then work backwards in time recursively to gradually obtain the optimal policy for the problem at each timestep. With the mathematical formulation resolved, the next step is to put all of this into an algorithm!
The Algorithm
The algorithm can be defined in pseudocode using the following:
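A minimal Python sketch of the same backward recursion (the nearest-grid-point lookup used to evaluate $V_{k+1}^{*}$ at the propagated state is one simple choice; interpolation is another):

```python
import numpy as np

def solve_dp(X_D, U_D, f, g, g_N, N):
    """Backward dynamic programming over discretized grids.

    X_D: (S, n) array of grid states; U_D: (A, m) array of grid controls.
    f(x, u): next state; g(k, x, u): stage cost; g_N(x): terminal cost.
    Returns value tables V (N x S) and policies mu (N-1 x S, indices into U_D).
    """
    S = len(X_D)
    V = np.zeros((N, S))
    mu = np.zeros((N - 1, S), dtype=int)
    V[N - 1] = [g_N(x) for x in X_D]      # V_N^*(x) = g_N(x)

    def nearest(x):
        # Approximate V_{k+1}^* at f(x, u) by the closest grid point
        return int(np.argmin(np.linalg.norm(X_D - x, axis=1)))

    for k in range(N - 2, -1, -1):        # backward in time
        for j, x in enumerate(X_D):
            costs = [g(k, x, u) + V[k + 1][nearest(f(x, u))] for u in U_D]
            mu[k, j] = int(np.argmin(costs))
            V[k, j] = costs[mu[k, j]]
    return V, mu
```

Since $f$, $g$, and $g_N$ are supplied as callables, the same routine handles any nonlinear dynamics or cost function, which is the generality this post is after.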
With the algorithm defined above, one can translate it into code and apply it to some interesting problems! I have written a code in C++ to implement the above algorithm, which can be found at my Github. Assuming one has implemented the algorithm above, the next step is to try it out on solving some control problems! Let's take a look at an example.
Nonlinear Pendulum Control Problem Statement
The Nonlinear Pendulum control problem is one classically considered in most introductory control classes. The full nonlinear problem can be formulated with a nonlinear Second-Order Ordinary Differential Equation (ODE) in the following manner:
\begin{align}
\ddot{\theta}(t) + c \dot{\theta}(t) + \kappa \sin\left(\theta(t)\right) &= u \\ \theta(t_0) &= \theta_{0} \\ \dot{\theta}(t_0) &= \dot{\theta}_{0} \end{align}
where $u$ is a torque the controller can apply, and $c,\kappa$ are constants based on the exact pendulum system configuration. For this particular problem, we are going to try and build a controller that can invert the pendulum. Additionally, we are going to constrain the values for $u$ such that it is too weak to directly lift the pendulum up to an inverted position. The constraint is to, at least, make $|u| \lt \kappa$, ensuring it needs to find some different strategy to get the pendulum to an inverted position. Given we now have this problem statement, let’s make this problem solvable using the Approximate Dynamic Programming approach shown earlier!
Conversion to Dynamic Programming Formulation
First and foremost, we should take the dynamical system in the problem statement and convert it into the discrete dynamic equation Dynamic Programming requires. Our first step is to pose this problem as a system of First-Order differential equations in the following way:
\begin{align}
\dot{x_1} &= x_2 \\ \dot{x_2} &= u - c x_2 - \kappa \sin( x_1 ) \\ &\text{or} \nonumber \\ \dot{\boldsymbol{x}} &= F(\boldsymbol{x},u) \end{align}
where $\lbrack \theta,\dot{\theta}\rbrack ^{T} = \lbrack x_1,x_2\rbrack^{T} = \boldsymbol{x}$. We can then discretize this dynamical system, using Finite Differences, into one that can be used in Dynamic Programming. This is done using the below steps:
\begin{align}
\dot{\boldsymbol{x}} &= F(\boldsymbol{x},u) \nonumber \\ \frac{\boldsymbol{x}_{k+1} - \boldsymbol{x}_{k}}{\Delta t} &\approx F(\boldsymbol{x}_{k},u_k) \nonumber \\ \boldsymbol{x}_{k+1} &= \boldsymbol{x}_{k} + \Delta t F(\boldsymbol{x}_{k},u_k) \nonumber \\ \boldsymbol{x}_{k+1} &= f(\boldsymbol{x}_{k},u_k) \end{align}
where $f(\cdot,\cdot)$ becomes the discrete dynamical map that is used within the Dynamic Programming formulation. Now the next step is to define our discrete State and Control sets, $\mathcal{X}_{D}$ and $\mathcal{U}_{D}$ respectively, that will be used. These sets will be defined as the following for this problem:
\begin{align}
\mathcal{X}_{D} &= L(\theta_{min},\theta_{max},N_{\theta}) \bigtimes L(\dot{\theta}_{min},\dot{\theta}_{max},N_{\dot{\theta}}) \\ \mathcal{U}_{D} &= L(-u_{max},u_{max},N_{u}) \end{align}
where $\theta_{min},\theta_{max},\dot{\theta}_{min},\dot{\theta}_{max},u_{max}, N_{\theta},N_{\dot{\theta}},N_{u}$ will have exact values assigned based on the specific pendulum problem being solved. The last items needed to make this problem well posed is the cost functions needed to penalize different possible trajectories. For this problem, we will used the cost functions defined below:
\begin{align}
g_{N}(\boldsymbol{q}) = g_{N}(\theta,\dot{\theta}) &= Q_{f} (|\theta|-\pi)^2 + \dot{\theta}^2\\ g_{k}(\boldsymbol{q},u) = g_{k}(\theta,\dot{\theta},u) &= Q (|\theta|-\pi)^2 + Ru^2 \end{align}
where $g_{k}(\cdot,\cdot)$ is defined $\forall k \in \lbrace 1, 2, \cdots, N-1 \rbrace$ and $R, Q, Q_f$ are scalar weighting factors that can be defined depending on how smooth you want the control to be and how quickly you want the pendulum to become inverted. Now that all the items are defined so Dynamic Programming can be used, let’s solve this problem and see what we get!
Solution using Approximate Dynamic Programming
Based on the Dynamic Programming formulation above of the Nonlinear Pendulum Control problem, we can crank out an optimal controller (at each timestep) algorithmically. To test the approach, the implementation I wrote (which can be found at my Github) uses the following values for the parameters mentioned earlier:
\begin{align*}
N &= 80 \\ c &= 0.0 \\ \kappa &= 5.0 \\ \theta_{min} &= -\pi \\ \theta_{max} &= \pi \\ N_{\theta} &= 3000\\ \dot{\theta}_{min} &= -3 \pi \\ \dot{\theta}_{max} &= 3 \pi \\ N_{\dot{\theta}} &= 3000 \\ u_{max} &= 1.0 \\ N_{u} &= 5 \\ R &= 0 \\ Q &= 10 \\ Q_f &= 100 \end{align*}
Note that in the discrete dynamics, due to the discontinuity of the angle $\theta$ at $\theta = -\pi$ and $\theta = \pi$, the discrete dynamics actually need to be modified for the equation updating $\theta$ at each timestep. This equation can be updated to the following:
\begin{align}
\theta_{k+1} = B( \theta_{k} + \Delta t \dot{\theta}_{k} ) \end{align}
where $B(\theta)$ bounds the input angle $\theta$ to be between $-\pi$ and $\pi$ and is defined as the following:
\begin{align}
B(\theta) = \begin{cases} \theta \,- 2\pi & \text{if } \theta \gt \pi \\ \theta + 2\pi & \text{if } \theta \lt -\pi \\ \theta & \text{Otherwise} \end{cases} \end{align}
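A small sketch of the wrapped Euler update, with $c$ and $\kappa$ defaulting to the values used in this post:

```python
import numpy as np

def B(theta):
    """Wrap an angle into the interval [-pi, pi]."""
    if theta > np.pi:
        return theta - 2 * np.pi
    if theta < -np.pi:
        return theta + 2 * np.pi
    return theta

def pendulum_step(x, u, dt, c=0.0, kappa=5.0):
    """One explicit-Euler step of the pendulum, with the angle wrapped by B."""
    theta, dtheta = x
    theta_next = B(theta + dt * dtheta)
    dtheta_next = dtheta + dt * (u - c * dtheta - kappa * np.sin(theta))
    return theta_next, dtheta_next

# A state just below theta = pi swings across the discontinuity and wraps
print(pendulum_step((np.pi - 0.05, 1.0), 0.0, 0.1))
```

Without the wrap, a state crossing $\theta = \pi$ would leave $\mathcal{X}_{D}$ entirely, so this modification is what keeps the discretized dynamics on the grid.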
Given we use the modified dynamics for the pendulum, we can use the Approximate Dynamic Programming algorithm described earlier to produce an optimal controller shown below. Note that this controller is actually just the optimal controller found for the first timestep. However, since the cost function penalizes the pendulum for not being inverted throughout its whole trajectory, the controllers made via Dynamic Programming are actually individually capable of inverting and stabilizing the pendulum. Thus, one only really needs one of these optimal controllers to get the desired result.
The graphic below shows the value the controller produces for any given $\theta$ and $\dot{\theta}$ within $\mathcal{X}_{D}$. Yellow is a positive torque value for $u$, while blue is a negative torque value for $u$.
As we can see in the graphic above, the optimal controller produced using Dynamic Programming is extremely nonlinear. Looking at the result, it would be hard to think of a great way to even represent this controller using a finite approximation with some set of basis functions.
The result is also interesting due to the complexity of the controller and patterns produced for various values for $\theta$ and $\dot{\theta}$. While there is certainly analysis that could be done to further understand what the optimal controller is doing, it would probably just be better to get a glimpse at what this policy actually does via visualization. Below is a video showing how it performs!
Shortcomings of Algorithm
Now while the above algorithm has proven to produce some pretty awesome results, the practicality of the algorithm as-is is pretty small. For starters, the amount of space needed for storing the complete controller at each timestep is on the order of $O( N |\mathcal{X}_{D}| )$, while the algorithmic computation is on the order of $O(N |\mathcal{X}_{D}| |\mathcal{U}_{D}| )$. For low dimensional problems, this may not seem like a big deal, but both $|\mathcal{X}_{D}|$ and $|\mathcal{U}_{D}|$ blow up as dimensions increase due to the Curse of Dimensionality.
For example, given equations $\eqref{xd}$ and $\eqref{ud}$, we can compute the cardinality of $\mathcal{X}_{D}$ and $\mathcal{U}_{D}$ to be the following:
\begin{align}
|\mathcal{X}_{D}| &= \prod_{i=1}^{n} N_{i} \\ |\mathcal{U}_{D}| &= \prod_{i=1}^{m} M_{i} \end{align}
These cardinality results show that each dimension we add multiplies the size of the State and Control spaces, in turn making the values of $|\mathcal{X}_{D}|$ and $|\mathcal{U}_{D}|$ potentially huge! For example, if all we did was model a rocket in 3D, the state is 12 dimensions (or 13 if you use quaternions). Chopping up each dimension into just $10$ discrete pieces would make $|\mathcal{X}_{D}| = 10^{12}$ … which is way too huge a number to use practically, and $10$ discrete pieces per dimension is not even a lot! So even without looking at any discretized control space, this Approximate Dynamic Programming method proves impractical for a realistic problem.
Conclusion
Within this post, we saw a way to use Dynamic Programming and approximately tackle deterministic control problems… no matter how nonlinear the dynamics or cost functions are! We saw the algorithm described used to find a nonlinear optimal controller for a Nonlinear Pendulum and invert the pendulum. We also saw how impractical this method, as-is, can be for realistic problems of larger dimensionality.
While the dimensionality does become a problem for a variety of problems, there are fortunately still some problems that can be adequately solved using the above approach. For those looking for something more capable, those interested can investigate other Approximate Dynamic Programming techniques in the literature. Some related areas of potential interest is that of Reinforcement Learning, as these areas are attempting to solve the same problem but with more flexibility than traditional Dynamic Programming.
|
Apps for Teaching Mathematical Modeling of Tubular Reactors
The Tubular Reactor application is a tool where students can model a nonideal tubular reactor, including radial and axial variations in temperature and composition, and investigate the impact of different operating conditions. It also exemplifies how teachers can build tailored interfaces for problems that challenge the students’ imagination. The model and exercise are originally described in Scott Fogler’s book
Elements of Chemical Reaction Engineering. I wish I had access to this type of tool when I was a student!
Apps Simplify Teaching and Learning Mathematical Modeling Concepts
I still remember the calculus classes at engineering school where we first encountered partial differential equations. Despite the teacher’s efforts in trying to exemplify diffusion with the distance and the time it takes for a shark to detect your blood in the water if you cut yourself while diving, the rest of the course was mostly overshadowed by theorems. Theorems that could prove existence and uniqueness, for relatively simple problems, and by techniques such as variable separation and conformal mapping.
Apart from math theory and solving techniques, I realize now that what we really needed, in order to understand mathematical models, was to study the solution to the model equations and investigate this for different assumptions and conditions.
The Tubular Reactor with Jacket application in the COMSOL Multiphysics® software version 5.0 gives students the possibility to go from a mathematical model of a nonideal tubular reactor straight to the solution of the corresponding numerical model. The model is taken from an exercise in Scott Fogler’s book
Elements of Chemical Reaction Engineering, which is one of the most popular books in undergraduate and graduate courses in chemical reaction engineering.

Value to the Student
The mathematical model consists of an energy balance and a material balance described in an axisymmetric coordinate system. As a student, you can change the activation energy of the reaction, the thermal conductivity, and the heat of reaction in the reactor (see Step 2 in the figure above). The resulting solution gives the axial and radial conversion and temperature profiles in the reactor. For some data, the results from the simulation are not obvious, which means that the interpretation of the model results also becomes a problem-solving exercise.
Value to the Teacher
The Tubular Reactor app can be accessed by a teacher in the Application Builder. As a teacher, you can investigate how to include model and application documentation in an application’s user interface. You can also learn how to include user interface commands that allow the students to generate a report from each simulation. In addition, the application accessed in the Application Builder also shows you how to create menu bars, ribbons, ribbon tabs, form collections, and forms in an application’s user interface and how to link these user interface components with settings and results in the underlying embedded model.
The Tubular Reactor Application
The different steps in the exercise for the tubular reactor problem are reflected in the ribbon on Windows® operating systems or in the main toolbar on Linux® operating systems and Mac OS in the application’s user interface.
The natural first step is to read the documentation (see Step 1 in Figure 1 above). The students can then change the activation energy and the heat of reaction, as well as the thermal conductivity in the reactor in Step 2.
The third step is to compute the solution to the model equations (Step 3). This makes it possible for the students to analyze the solution in four different plots (Step 4): Two surface plots that show the temperature and the conversion of the reactant in the reactor, and two cut line plots that show the temperature and conversion of the reactant in the reactor along three different lines placed at three different
z-positions (see Figure 3 further down the page for an example). The four different plots are found under their respective tabs in a so-called form collection.
The last step is to generate a report (Step 5) that documents the model and the results from the simulation. In this case, the output is in Microsoft® Word® format, but you may also generate HTML reports.
For the teacher, the application builder tree and the member form preview in the Application Builder reveal the structure of the app (see Figure 2 below). The Main Window node (labeled 1 in the screenshot below) contains the child nodes that describe the file menu (2) and the ribbon (3). In Linux® operating systems, the ribbon is shown as a toolbar. It also contains a reference to the main form. The Form node (4) contains five forms in this case: One form that describes the main form and four forms that describe the different members in the graphics form collection. These four graphics form members correspond to the four plots mentioned above.
Figure 2. The Application Builder user interface that includes the application builder tree to the left and the preview of the included forms to the right. In between is the settings window for each selected form, declaration, method, library, or model nodes.
The text input widgets for the activation energy, the thermal conductivity, and the heat of reaction (5) are linked to the corresponding parameters in the embedded model. The range of values is also limited in order to provide a safe input range that does not produce garbage.
The Declarations node (6) includes the declaration of variables that are not defined in the embedded model. For instance, you can declare a string variable that displays a message in the user interface when the app is run based on a selection by the user. In this example, a string variable is created to show if the simulation results are updated or not (i.e., if the student changes the activation energy without re-solving the model equation, a string variable displayed in the graphics window is set to “*Not Updated”).
The application further contains a set of methods (7) that correspond to loading the model documentation, computing the results in a simulation, and generating the report. These methods are linked to the corresponding menus in the ribbon or in the main menu. The methods are graphically generated, but can then be edited manually using the method editor for further flexibility.
The Library node (8) contains files that are embedded in the application. In this example, we have a PDF-file that contains the application’s documentation linked to the corresponding ribbon menus.
The Tubular Reactor Model
The process described by the model is that for the exothermic reaction of propylene oxide with water to form propylene glycol. This reaction takes place in a tubular reactor equipped with a cooling jacket in order to avoid explosion (see the figure in the “Model Results” section below).
The reaction takes place in the liquid phase and in the presence of a solvent. The density of the reactor solution is therefore assumed to vary to a negligible extent despite variations in composition and temperature. Under these assumptions, it is possible to define a fully developed velocity profile along the radius of the reactor.
The model equations describe the conservation of material and energy. The dependent variables are the concentration,
c, and the temperature, T, in the reactor. The material and energy equations are defined along two independent variables: the variable for the radial direction, r, and along the axial direction, z. These equations form a system of two coupled partial differential equations (PDE), along r and z.
The boundary conditions define the concentration and temperature at the inlet of the reactor. At the outlet, the outwards flux of material and energy is dominated by advection and is described accordingly. At the reactor wall, the heat flux is proportional to the temperature difference between the reactor and the cooling jacket.
(1)

\[\begin{array}{l}
\nabla \cdot \left( { - D\nabla c} \right) + \nabla c \cdot {\bf{u}} + {k_f}c = 0\\
c = {c_0}\quad \text{at inlet};\quad \left( {\left( { - D\nabla c} \right) + c{\bf{u}}} \right) \cdot {\bf{n}} = c{\bf{u}} \cdot {\bf{n}}\quad \text{at outlet}\\
\\
\nabla \cdot \left( { - k\nabla T} \right) + \rho {C_p}\nabla T \cdot {\bf{u}} + {k_f}c\,\Delta H = 0\\
T = {T_0}\quad \text{at inlet};\quad \left( {\left( { - k\nabla T} \right) + \rho {C_p}T\,{\bf{u}}} \right) \cdot {\bf{n}} = \rho {C_p}T\,{\bf{u}} \cdot {\bf{n}}\quad \text{at outlet}\\
 - k\nabla T \cdot {\bf{n}} = {s_a}h\left( {{T_j} - T} \right)\quad \text{at reactor walls}
\end{array}\]
Model Results
The results from the simulation are quite interesting. For example, the conversion profiles along the radial cut lines display a minimum and a maximum, as seen in Figure 3 below. In Fogler’s book, one of the tasks for the student is to explain these profiles.
Here, we can reveal that the profile is explained by the combination of the exothermic reaction, the advective term, and the cooling from the jacket.
In the middle of the reactor, the large flow velocity reduces the conversion, since the reactants reach far into the reactor before they react. This is labeled 1 in Figure 3.
Figure 3. Cut lines plot of the conversion in the reactor along the radial direction at different axial positions: Inlet, half axial location, and outlet.
Closer to the wall, the flow rate decreases and the conversion then increases, since the temperature is still relatively high far from the jacket wall, which also gives a high reaction rate (2).
However, as we get even closer to the wall, the conversion starts to decrease due to the cooling of the jacket, which decreases the reaction rate (3) in the figure above.
At the reactor wall, the cooling is very efficient, which should decrease the conversion even more. However, the conversion increases slightly, since there is no advection of reactants at the wall. In other words, the space time for the volume elements that travel at the wall is very high, since the flow is zero at the wall (4). The reactants are therefore consumed to a larger extent.
Applications in Teaching
The Tubular Reactor example shows how to create a dedicated user interface based on a model — an application — where students can build an intuitive connection between a physical description of a reactor and the implications of this description in the operation of the reactor. An important component in this exercise is that the results are not obvious; the interpretation of the results requires some thinking.
The Application Builder provides a user-friendly tool for the teacher to graphically create application interfaces. It allows teachers to concentrate on the exercise itself rather than investing time in explaining software tools or programming interfaces in the traditional way. They can focus on generating simulation results that trigger thinking.
The students get more challenging and entertaining exercises that focus on the problem, not on the technicalities of running simulation software.
Next Steps

Intrigued? Learn more about the Application Builder on the 5.0 Release Highlights page. Download the Tubular Reactor Jacket app.

Microsoft and Windows are either registered trademarks or trademarks of Microsoft Corporation in the United States and/or other countries.
Linux is a registered trademark of Linus Torvalds.
Mac OS is a trademark of Apple Inc., registered in the U.S. and other countries.
|
Show that $\mathbb{Z}[x]=\lbrace \sum_{i=0}^{n}{a_ix^i}:a_i \in \mathbb{Z}, n \geq 0 \rbrace$ is not a principal ideal ring. I know the definition of principal ideal ring is that every ideal is generated by a single element. So my aim here is to find an ideal which is not generated by a single element. But I fail to locate such ideal. Can anyone help me in finding such ideal?
Hint: What about the ideal $\langle2,x\rangle$?
To answer your comment, a principal ideal is just a "set of multiples" of the generating element. Thus every element of a principal ideal has the generator as a divisor, and usually has similar properties. For example, in $\mathbb{Z}[x]$, the ideal $\langle2\rangle$ is the set of polynomials where every coefficient is even, and $\langle x\rangle$ is the set of polynomials with zero constant term. However, $\langle 2,x\rangle$ is made up of all polynomials with even constant term. It doesn't feel as if polynomials with this property have a shared divisor (if they did, what would it be?), so this ideal feels like it's not principal.
In a way, there are too many restrictions on the elements of the ideal for it to be generated by a single element. (The constant term must be even, a restriction inherited from both generators.) To illustrate the idea of "too many restrictions" further, consider the ideals $I_2=\langle 2,x\rangle$, $I_3=\langle 4,2x,x^2 \rangle$, $I_4=\langle 8,4x,2x^2,x^3\rangle, \dots$ It is a fact (that I recall being reasonably hard to prove) that $I_n$ is generated by $n$ elements and no fewer. This is because of the increase in the number of restrictions on which elements of $\mathbb{Z}[x]$ can be in each ideal.
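For completeness, here is one way to make the hint rigorous (a sketch of my own, not part of the original answer):

```latex
\textbf{Claim.} $I=\langle 2,x\rangle$ is not principal in $\mathbb{Z}[x]$.

\textbf{Sketch.} Suppose $I=\langle f\rangle$ for some $f\in\mathbb{Z}[x]$.
Since $2\in I$, we have $f\mid 2$ in $\mathbb{Z}[x]$, so $\deg f=0$ and
$f\in\{\pm 1,\pm 2\}$. If $f=\pm 2$, then $x\in\langle f\rangle$ forces
$x=\pm 2g$ for some $g\in\mathbb{Z}[x]$, which is impossible by comparing
leading coefficients. If $f=\pm 1$, then $\langle f\rangle=\mathbb{Z}[x]$,
but every element of $I$ has even constant term, so $1\notin I$.
Either way we reach a contradiction. $\square$
```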
Alternatively, if $R$ is a PID, then any (non-zero) prime ideal is maximal (i.e. it is dimension $1$). Note that $(x)$ is a prime ideal of $\mathbb{Z}[x]$ which is not maximal--this follows because $\mathbb{Z}[x]/(x)\cong\mathbb{Z}$ is a domain but not a field.
This further shows that if $R[x]$ is a PID, then $R$ is a field. The converse is also true, because $R[x]$ will posses a euclidean function ($\deg$).
Hint $\ $ A nonzero ideal $\rm\,I\,$ in a PID is generated by any element of $\rm\,I\,$ having the least number of prime factors among all elements of $\rm\,I.\:$ Hence if $\rm\,I\,$ contains nonassociate primes then $\rm\: I = (1)$. Counterexamples abound in $\rm\,R = \Bbb Z[x],\:$ i.e. it is easy to find primes $\rm\:p,q\in R\:$ with $\rm\:(p,q)\ne (1).$
|
No long explanation is needed,
What would happen if I were to allow one end of a rope to fall past the event horizon of a black hole while I held the other end?
Would I be able to pull it out? Would the rope feel extremely (infinitely?) heavy?
What would happen if I were to allow one end of a rope to fall past the event horizon of a black hole while I held the other end?
As usual, this is in the context of a Schwarzschild black hole.
First, outside the horizon, an object with constant radial coordinate 'feels' a constant proper acceleration, i.e., an accelerometer (think of a weight scale) attached to the object gives a constant, non-zero value.
Second, the proper acceleration increases without bound as the radial coordinate approaches the value of the Schwarzschild radius.
Now, imagine that there is a rope extending from some fixed radius inward to the horizon. The weight of each section of rope increases without bound as it approaches the horizon. Do you see the essential problem here?
When talking about black holes, you need to take into account time dilation. As you lower a rope into an event horizon, you will see time for the end of the rope slow down. You will not be able to say at some point: "Now the rope has crossed the event horizon", because you would need to wait indefinitely.
The rope, on the other hand (or some observer you placed there), will look back at you and see you age very fast. Then, just before it crosses the event horizon, it will witness the entire (probably infinite) lifetime of the universe.
One should notice, that due to gravitational redshift, the end of the rope will not be visible with the naked eye. Conversely, the observer at the end of the rope will see a blueshift, so near the event horizon, if nothing else kills him, there are still gamma rays from the background radiation.
This isn't exactly an answer to your question, because as it stands your question can't be answered, but I thought I'd post this because the answer really surprised me.
Firstly, the reason your question can't be answered is that you can never get your rope below the event horizon. From the perspective of an observer stationary with respect to the black hole, anything dropped into it takes an infinite time to even reach the event horizon, let alone cross it. So you could not find yourself holding one end of a rope that had its other end below the horizon - not even if you waited an infinite time.
But provided the bottom end of the rope is above the event horizon then it's perfectly reasonable to ask what force you feel holding the end of the rope, and it's also perfectly reasonable to ask what happens to this force in the limit of reaching the event horizon. So let's do this.
But the force on a rope is hard to calculate because the mass is distributed evenly along its length. To keep things simple replace the rope by some mass $m$ dangling on the end of a weightless rope. With this setup calculating the force is easy.
Suppose the mass $m$ is at a distance $r$ from the centre of a black hole of mass $M$. Twistor59's answer to the question What is the weight equation through general relativity? tells us that relative to a shell observer hovering at a distance $r$ the gravitational acceleration is:
$$ a_{shell} = \frac{GM}{r^2} \frac{1}{\sqrt{1 - \frac{r_s}{r}}} $$
where $r_s$ is the radius of the event horizon. But relative to you standing a large distance from the black hole the time of the shell observer is dilated by a factor of:
$$ t_r = \frac{1}{\sqrt{1 - \frac{r_s}{r}}} $$
And the acceleration you measure far from the black hole is $a_{shell}$ divided by this factor squared so:
$$\begin{align} a &= \frac{GM}{r^2} \frac{1}{\sqrt{1 - \frac{r_s}{r}}} \left( 1 - \frac{r_s}{r} \right) \\ &= \frac{GM}{r^2} \sqrt{1 - \frac{r_s}{r}} \end{align}$$
And the force is simply the acceleration multipled by the mass of your weight $m$:
$$\begin{align} F &= \frac{GMm}{r^2} \sqrt{1 - \frac{r_s}{r}} \\ &= F_N \sqrt{1 - \frac{r_s}{r}} \end{align}$$
where $F_N$ is the force predicted by Newtonian gravity, i.e. the force you'd measure in the absence of relativistic effects.
So the force you would feel is actually less than you'd expect from Newtonian gravity and indeed the force goes to zero as the weight approaches the event horizon. To illustrate this I've graphed the force you would feel compared to the force predicted by Newton's equation:
The force is in units of $GMm$. At distances of around four times the event horizon radius and greater, the force is similar to the one calculated by Newton's equation, but as the weight approaches the event horizon the force you feel peaks (at $r = 1.25\,r_s$ for this expression) and then falls to zero at the horizon.
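To see the shape of the curve without the figure, here is a small numerical sketch (my addition, not part of the original answer) of $F = (GMm/r^2)\sqrt{1-r_s/r}$ in units where $GMm = 1$ and $r_s = 1$:

```python
# Numerical sketch (my addition) of the dangling-weight force
# F = (GMm/r^2) * sqrt(1 - r_s/r), in units with G*M*m = 1 and r_s = 1.
import math

def force(r, r_s=1.0):
    """Force measured far away while a weight hovers at radius r > r_s."""
    return (1.0 / r**2) * math.sqrt(1.0 - r_s / r)

def newtonian(r):
    """Newtonian comparison force, GMm/r^2 in the same units."""
    return 1.0 / r**2

# Locate the maximum on a fine grid outside the horizon.
grid = [1.0 + 0.001 * k for k in range(1, 5000)]
r_peak = max(grid, key=force)
print(r_peak)                        # lands near 1.25
print(force(4.0) / newtonian(4.0))   # ~0.866: already near Newtonian at 4 r_s
```

The grid maximum lands near $r \approx 1.25\,r_s$, and the ratio to the Newtonian force at $4\,r_s$ is already about $0.87$, consistent with the description of the graph.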
In order not to fall straight in, you would have to be orbiting the black hole very quickly, in fact near the speed of light, since by definition the event horizon is the surface from inside which not even light can escape. (Edit: as John Rennie commented, hovering in a rocket is also an option.)
So imagine you are whizzing around at nearly the speed of light. You lower your rope of unlimited strength toward the event horizon... wait, you're in orbit, which compensates for gravity... so you'd have to throw it.
Edit:
As the end of the rope gets closer to the event horizon, it would start to be pulled in by gravity. As it gets closer and closer, the force will increase without limit. That's right, there's no amount of force that could get the rope to just touch the event horizon and hold it there. In short, you would be pulled in.
I'm not even sure how to explain the spatial distortions due to relativity in that orbit. But that's a different question anyway. (Edit: And John Rennie mentioned in the comments that such an orbit would have to be at 3x the event horizon radius to be stable.)
I have an explicit calculation of the tension in the rope in section 8.1, example 5 of my GR book ("A rope dangling in a Schwarzschild spacetime"), which is free online: http://www.lightandmatter.com/genrel/ . I'll just sketch the main results here. Suppose we have a bucket hanging on the end of a rope in the Schwarzschild spacetime. The tension $T$ in the rope obeys the differential equation
$$0=T'+(f'/f)T-(f'/f)\mu,$$
where primes denote differentiation with respect to the Schwarzschild coordinate $r$, $f=\sqrt{1-2m/r}$, and $\mu$ is the mass per unit length. We get a finite result for $\lim_{r\rightarrow\infty}T$, even when the bucket is brought arbitrarily close to the horizon. (The solution in this case is just $T = T_\infty /f$, where $T_\infty$ is the tension at $r = \infty$.) However, this is misleading without the caveat that for $\mu < T$, the speed of transverse waves in the rope is greater than $c$, which is not possible for any known form of matter, since it would violate the null energy condition. For realistic forms of matter, the rope will break above the horizon.
This makes sense because the exterior of the black hole is causally disconnected from the interior.
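As a quick sanity check (my addition, not from the book): for constant $\mu$ the equation can be rewritten as $(fT)'=\mu f'$, whose general solution is $T=\mu+C/f$; setting $\mu=0$ recovers the quoted $T=T_\infty/f$. A sympy sketch of the verification:

```python
# Verify (my addition) that T = mu + C/f solves 0 = T' + (f'/f)T - (f'/f)mu
# for the Schwarzschild factor f = sqrt(1 - 2m/r) and constant mu.
import sympy as sp

r, m, mu, C = sp.symbols('r m mu C', positive=True)
f = sp.sqrt(1 - 2*m/r)

# Candidate solution; C plays the role of T_infinity when mu = 0,
# since f -> 1 as r -> infinity.
T = mu + C / f

residual = sp.diff(T, r) + (sp.diff(f, r)/f)*T - (sp.diff(f, r)/f)*mu
print(sp.simplify(residual))  # 0
```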
|
A second order differential equation is one containing the second derivative. These are in general quite complicated, but one fairly simple type is useful: the second order linear equation with constant coefficients.
Example 19.5.1 Consider the initial value problem $\ddot y-\dot y-2y=0$, $y(0)=5$, $\dot y(0)=0$. We make an inspired guess: might there be a solution of the form $\ds e^{rt}$? This seems at least plausible, since in this case $\ds\ddot y$, $\ds\dot y$, and $y$ all involve $\ds e^{rt}$.
If such a function is a solution then $$\eqalign{ r^2 e^{rt}-r e^{rt}-2e^{rt}&=0\cr e^{rt}(r^2-r-2)&=0\cr (r^2-r-2)&=0\cr (r-2)(r+1)&=0,\cr} $$ so $r$ is $2$ or $-1$. Not only are $\ds f=e^{2t}$ and $\ds g=e^{-t}$ solutions, but notice that $\ds y=Af+Bg$ is also, for any constants $A$ and $B$: $$\eqalign{ (Af+Bg)''-(Af+Bg)'-2(Af+Bg)&=Af''+Bg''-Af'-Bg'-2Af-2Bg\cr &=A(f''-f'-2f)+B(g''-g'-2g)\cr &=A(0)+B(0)=0.\cr} $$ Can we find $A$ and $B$ so that this is a solution to the initial value problem? Let's substitute: $$ 5=y(0)=Af(0)+Bg(0)=Ae^0+Be^0=A+B $$ and $$0=\dot y(0)=Af'(0)+Bg'(0)=A2e^{0}+B(-1)e^0=2A-B.$$ So we need to find $A$ and $B$ that make both $5=A+B$ and $0=2A-B$ true. This is a simple set of simultaneous equations: solve $B=2A$, substitute to get $5=A+2A=3A$. Then $A=5/3$ and $B=10/3$, and the desired solution is $\ds (5/3)e^{2t}+(10/3)e^{-t}$. You now see why the initial condition in this case included both $y(0)$ and $\dot y(0)$: we needed two equations in the two unknowns $A$ and $B$
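The hand calculation can be confirmed with a computer algebra system; a short sympy sketch (my addition) reproduces Example 19.5.1:

```python
# Check Example 19.5.1 with sympy: solve y'' - y' - 2y = 0 with
# y(0) = 5, y'(0) = 0, and compare with (5/3)e^{2t} + (10/3)e^{-t}.
import sympy as sp

t = sp.symbols('t')
y = sp.Function('y')

sol = sp.dsolve(y(t).diff(t, 2) - y(t).diff(t) - 2*y(t), y(t),
                ics={y(0): 5, y(t).diff(t).subs(t, 0): 0})
expected = sp.Rational(5, 3)*sp.exp(2*t) + sp.Rational(10, 3)*sp.exp(-t)
print(sp.simplify(sol.rhs - expected))  # 0
```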
You should of course wonder whether there might be other solutions; the answer is no. We will not prove this, but here is the theorem that tells us what we need to know:
Theorem 19.5.2 Given the differential equation $\ds a\ddot y+b\dot y+cy=0$, $a\not=0$, consider the quadratic polynomial $ax^2+bx+c$, called the
characteristic polynomial. Using the quadratic formula, this polynomial always has one or two roots, call them $r$ and $s$. The general solution of the differential equation is:
(a) $\ds y=Ae^{rt}+Be^{st}$, if the roots $r$ and $s$ are real numbers and $r\not=s$.
(b) $\ds y=Ae^{rt}+Bte^{rt}$, if $r=s$ is real.
(c) $\ds y=A\cos(\beta t)e^{\alpha t}+B\sin(\beta t)e^{\alpha t}$, if the roots $r$ and $s$ are complex numbers $\alpha+\beta i$ and $\alpha-\beta i$.
Example 19.5.3 Suppose a mass $m$ is hung on a spring with spring constant $k$. If the spring is compressed or stretched and then released, the mass will oscillate up and down. Because of friction, the oscillation will be damped: eventually the motion will cease. The damping will depend on the amount of friction; for example, if the system is suspended in oil the motion will cease sooner than if the system is in air. Using some simple physics, it is not hard to see that the position of the mass is described by this differential equation: $\ds m\ddot y+b\dot y+ky=0$. Using $m=1$, $b=4$, and $k=5$ we find the motion of the mass. The characteristic polynomial is $x^2+4x+5$ with roots $(-4\pm\sqrt{16-20})/2=-2\pm i$. Thus the general solution is $\ds y=A\cos(t)e^{-2t}+B\sin(t)e^{-2t}$. Suppose we know that $y(0)=1$ and $\dot y(0)=2$. Then as before we form two simultaneous equations: from $y(0)=1$ we get $1=A\cos(0)e^0+B\sin(0)e^0=A$. For the second we compute $$\dot y=-2Ae^{-2t}\cos(t)+Ae^{-2t}(-\sin(t))-2Be^{-2t}\sin(t)+ Be^{-2t}\cos(t),$$ and then $$2=-2Ae^0\cos(0)-Ae^0\sin(0)-2Be^0\sin(0)+Be^0\cos(0) =-2A+B.$$ So we get $A=1$, $B=4$, and $\ds y=\cos(t)e^{-2t}+4\sin(t)e^{-2t}$.
Here is a useful trick that makes this easier to understand: We have $\ds y=(\cos t+4\sin t)e^{-2t}$. The expression $\cos t+4 \sin t$ is a bit reminiscent of the trigonometric formula $\cos(\alpha-\beta)=\cos(\alpha)\cos(\beta)+\sin(\alpha)\sin(\beta)$ with $\alpha=t$. Let's rewrite it a bit as $$\sqrt{17}\left({1\over\sqrt{17}}\cos t + {4\over\sqrt{17}}\sin t\right).$$ Note that $\ds (1/\sqrt{17})^2+(4/\sqrt{17})^2=1$, which means that there is an angle $\beta$ with $\ds \cos\beta=1/\sqrt{17}$ and $\ds \sin\beta=4/\sqrt{17}$ (of course, $\beta$ may not be a "nice'' angle). Then $$\cos t+4\sin t = \sqrt{17}\left(\cos t\cos \beta+\sin\beta\sin t\right) =\sqrt{17}\cos(t-\beta).$$ Thus, the solution may also be written $\ds y=\sqrt{17}e^{-2t}\cos(t-\beta)$. This is a cosine curve that has been shifted $\beta$ to the right; the $\ds \sqrt{17}e^{-2t}$ has the effect of diminishing the amplitude of the cosine as $t$ increases; see figure 19.5.1. The oscillation is damped very quickly, so in the first graph it is not clear that this is an oscillation. The second graph shows a restricted range for $t$.
Other physical systems that oscillate can also be described by such differential equations. Some electric circuits, for example, generate oscillating current.
Example 19.5.4 Find the solution to the initial value problem $\ds\ddot y-4\dot y+4y=0$, $y(0)=-3$, $\dot y(0)=1$. The characteristic polynomial is $x^2-4x+4=(x-2)^2$, so there is one root, $r=2$, and the general solution is $\ds Ae^{2t}+Bte^{2t}$. Substituting $t=0$ we get $-3=A+0=A$. The first derivative is $\ds 2Ae^{2t}+2Bte^{2t}+Be^{2t}$; substituting $t=0$ gives $1=2A+0+B=2A+B=2(-3)+B=-6+B$, so $B=7$. The solution is $\ds -3e^{2t}+7te^{2t}$.
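A sympy check of this repeated-root case (my addition):

```python
# Check Example 19.5.4: y'' - 4y' + 4y = 0, y(0) = -3, y'(0) = 1
# should give y = -3 e^{2t} + 7 t e^{2t}.
import sympy as sp

t = sp.symbols('t')
y = sp.Function('y')

sol = sp.dsolve(y(t).diff(t, 2) - 4*y(t).diff(t) + 4*y(t), y(t),
                ics={y(0): -3, y(t).diff(t).subs(t, 0): 1})
expected = (-3 + 7*t)*sp.exp(2*t)
print(sp.simplify(sol.rhs - expected))  # 0
```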
Exercises 19.5
Ex 19.5.1 Verify that the function in part (a) of theorem 19.5.2 is a solution to the differential equation $\ds a\ddot y+b\dot y+cy=0$.
Ex 19.5.2 Verify that the function in part (b) of theorem 19.5.2 is a solution to the differential equation $\ds a\ddot y+b\dot y+cy=0$.
Ex 19.5.3 Verify that the function in part (c) of theorem 19.5.2 is a solution to the differential equation $\ds a\ddot y+b\dot y+cy=0$.
Ex 19.5.4 Solve the initial value problem $\ds\ddot y-\omega^2y=0$, $y(0)=1$, $\ds\dot y(0)=1$, assuming $\omega\not=0$. (answer)
Ex 19.5.5 Solve the initial value problem $\ds2\ddot y+18y=0$, $y(0)=2$, $\ds\dot y(0)=15$. (answer)
Ex 19.5.6 Solve the initial value problem $\ds \ddot y+6\dot y +5y=0$, $y(0)=1$, $\ds\dot y(0)=0$. (answer)
Ex 19.5.7 Solve the initial value problem $\ds\ddot y-\dot y-12y=0$, $y(0)=0$, $\ds\dot y(0)=14$. (answer)
Ex 19.5.8 Solve the initial value problem $\ds\ddot y+12\dot y+36y=0$, $y(0)=5$, $\ds\dot y(0)=-10$. (answer)
Ex 19.5.9 Solve the initial value problem $\ds\ddot y-8\dot y+16y=0$, $y(0)=-3$, $\ds\dot y(0)=4$. (answer)
Ex 19.5.10 Solve the initial value problem $\ds\ddot y+5y=0$, $y(0)=-2$, $\ds\dot y(0)=5$. (answer)
Ex 19.5.11 Solve the initial value problem $\ds\ddot y+y=0$, $y(\pi/4)=0$, $\ds\dot y(\pi/4)=2$. (answer)
Ex 19.5.12 Solve the initial value problem $\ds\ddot y+12\dot y+37y=0$, $y(0)=4$, $\ds\dot y(0)=0$. (answer)
Ex 19.5.13 Solve the initial value problem $\ds\ddot y+6\dot y+18y=0$, $y(0)=0$, $\ds\dot y(0)=6$. (answer)
Ex 19.5.18 A mass-spring system $\ds m\ddot y+b\dot y+ky$ has $k=29$, $b=4$, and $m=1$. At time $t=0$ the position is $y(0)=2$ and the velocity is $\dot y(0)=1$. Find $y(t)$. (answer)
Ex 19.5.19 A mass-spring system $\ds m\ddot y+b\dot y+ky$ has $k=24$, $b=12$, and $m=3$. At time $t=0$ the position is $y(0)=0$ and the velocity is $\dot y(0)=-1$. Find $y(t)$. (answer)
Ex 19.5.20 Consider the differential equation $\ds a\ddot y + b\dot y=0$, with $a$ and $b$ both non-zero. Find the general solution by the method of this section. Now let $\ds g=\dot y$; the equation may be written as $\ds a\dot g+bg=0$, a first order linear homogeneous equation. Solve this for $g$, then use the relationship $\ds g=\dot y$ to find $y$.
Ex 19.5.21 Suppose that $y(t)$ is a solution to $\ds a\ddot y+b\dot y+cy=0$, $y(t_0)=0$, $\ds\dot y(t_0)=0$. Show that $y(t)=0$.
|
I understand how to get the proper Maclaurin series representation for $\cos x$, but I'm having trouble understanding the following part conceptually:
I get $\cos x$ as $\sum_{n=0}^\infty (-1)^n\frac{x^{2n}}{(2n)!}$ but,
Can the Maclaurin series of $\cos x$ also be $\sum_{n=0}^\infty (-1)^n\frac{x^n}{n!}$?
I'm confused because even though the odd powers of this functions are going to $0$, wouldn't it still be valid to include them in our maclaurin series? Furthermore, why do we omit terms if they are $0$?
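One way to see why the zero terms are usually omitted: compute the coefficients directly and observe that all the odd-degree ones vanish. A short sympy sketch (my addition):

```python
# The odd-degree Maclaurin coefficients of cos x are exactly zero,
# which is why the series is usually written over even powers only.
import sympy as sp

x = sp.symbols('x')
series = sp.cos(x).series(x, 0, 8).removeO()
coeffs = [series.coeff(x, n) for n in range(8)]
print(coeffs)  # [1, 0, -1/2, 0, 1/24, 0, -1/720, 0]
```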
|
Functional PCA

PCA and eigenanalysis
Define the covariance function $v(s,t)$ by

$$v(s,t) = \frac{1}{N}\sum_{i=1}^{N} x_i(s)\,x_i(t).$$

Each of the principal component weight functions $\xi_j(s)$ satisfies the equation

$$\int v(s,t)\,\xi(t)\,dt = \rho\,\xi(s)$$

for an appropriate eigenvalue $\rho$. The left side is an integral transform $V$ of the weight function $\xi$ defined by

$$(V\xi)(s) = \int v(s,t)\,\xi(t)\,dt.$$

The integral transform is called the covariance operator $V$. Therefore we may also express the eigenequation directly as

$$V\xi = \rho\xi,$$

where $\xi$ is now an eigenfunction rather than an eigenvector.
There is an important difference between the multivariate and functional eigenanalysis problems, concerning the maximum number of different eigenvalue-eigenfunction pairs. In the multivariate case this maximum is the number of variables $p$; in the functional case it is instead limited by the number of curves: if the $N$ centered functions $x_i$ are not otherwise linearly dependent, the operator $V$ will have rank $N-1$, and there will be only $N-1$ nonzero eigenvalues.
Computation
Suppose we have a set of $N$ curves $x_i$, and that preliminary steps such as curve registration and the possible subtraction of the mean curve from each (curve centering) have been completed. Let $v(s,t)$ be the sample covariance function of the observed data. In all cases, convert the continuous functional eigenanalysis problem to an approximately equivalent matrix eigenanalysis task.
Discretizing the functions
A simple approach is to discretize the observed functions $x_i$ to a fine grid of $n$ equally spaced values $s_j$ that span the interval $\cal T$. This produces eigenvalues and eigenvectors satisfying

$$V u = \lambda u$$

for $n$-vectors $u$.
The sample variance-covariance matrix $V=N^{-1}X'X$ will have elements $v(s_j,s_k)$ where $v(s,t)$ is the sample covariance function. Given any function $\xi$, let $\tilde \bxi$ be the $n$-vector of values $\xi(s_j)$. Let $w=T/n$ where $T$ is the length of the interval $\cal T$. Then for each $s_j$,

$$V\xi(s_j) = \int v(s_j,s)\,\xi(s)\,ds \approx w\sum_{k=1}^{n} v(s_j,s_k)\,\xi(s_k),$$

so the functional eigenequation $V\xi=\rho\xi$ has the approximate discrete form

$$w V \tilde\bxi = \rho\,\tilde\bxi.$$
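A minimal numpy sketch of this discretization route (my illustration; the curves and grid are made up for the example):

```python
# Discretized functional PCA sketch: sample N centered curves on an equally
# spaced grid, form the covariance matrix V, and solve w V xi = rho xi.
import numpy as np

rng = np.random.default_rng(0)
N, n, T = 50, 200, 1.0
s = np.linspace(0.0, T, n)
w = T / n  # quadrature weight

# Synthetic curves: random combinations of two smooth modes, then centered.
X = (rng.normal(size=(N, 1)) * np.sin(2*np.pi*s)
     + 0.3 * rng.normal(size=(N, 1)) * np.cos(2*np.pi*s))
X -= X.mean(axis=0)

V = X.T @ X / N                   # V[j, k] approximates v(s_j, s_k)
rho, Xi = np.linalg.eigh(w * V)   # eigenpairs of the discrete operator
rho, Xi = rho[::-1], Xi[:, ::-1]  # sort eigenvalues in descending order
Xi /= np.sqrt(w)                  # normalize so that w * sum(xi^2) = 1
print(rho[:3])                    # two dominant modes, then ~0
```

Because the synthetic data lie in a two-dimensional function space, only the first two eigenvalues are (numerically) nonzero, illustrating the rank limitation discussed above.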
Basis function expansion of the functions
One way of reducing the eigenequation \eqref{eq:8.9} to discrete or matrix form is to express each function $x_i$ as a linear combination of known basis functions $\phi_k$.
Suppose that each function has basis expansion
Write in vector-form,
where the coefficient matrix $\C$ is $N\times K$. In matrix terms, the variance-covariance function is
Define the order $K$ symmetric matrix $\W$ to have entries
or $\W=\int \bphi\bphi’$. For the orthonormal Fourier series, $\W=\I$. Now suppose that an eigenfunction $\xi$ for the eigenequation \eqref{eq:8.9} has an expansion
or in matrix form, $\xi(s) = \bphi(s)’\b$. This yields
and \eqref{eq:8.9} becomes
Since this equation must hold for all $s$, this implies the purely matrix equation
and the constraint $\Vert\xi\Vert=1$ implies that $\b’\W\b=1$. Define $\u=\W^{1/2}\b$, solve the equivalent symmetric eigenvalue problem
and compute $\b=\W^{-1/2}\u$ for each eigenvector.
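A parallel sketch for the basis-expansion route, with a synthetic coefficient matrix and a hypothetical non-orthonormal basis (so $\W\neq\I$); everything here is invented for illustration, but the symmetric reduction follows the steps above.

```python
import numpy as np

# Sketch of the basis-expansion eigenanalysis (synthetic C, hypothetical W).
rng = np.random.default_rng(1)
N, K = 30, 5
C = rng.standard_normal((N, K))          # coefficient matrix, N x K
C = C - C.mean(axis=0)                   # centering in coefficient space

A = rng.standard_normal((K, K))
W = A @ A.T + K * np.eye(K)              # Gram matrix int(phi phi'), SPD

d, Q = np.linalg.eigh(W)                 # W^{1/2} via spectral decomposition
W_half = Q @ np.diag(np.sqrt(d)) @ Q.T
W_half_inv = Q @ np.diag(1.0 / np.sqrt(d)) @ Q.T

# Symmetric problem: N^{-1} W^{1/2} C'C W^{1/2} u = rho u, then b = W^{-1/2} u.
M = W_half @ (C.T @ C / N) @ W_half
rho, U = np.linalg.eigh(M)
rho, U = rho[::-1], U[:, ::-1]           # decreasing eigenvalue order
B = W_half_inv @ U                       # columns are coefficient vectors b
```

Each eigenfunction is then $\xi(s)=\bphi(s)’\b$ with $\b$ a column of `B`, normalized so that $\b’\W\b=1$.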
Two special cases:
- the basis is orthonormal, so that $\W=\I$;
- we view the observed functions $x_i$ as their own basis expansions, so that $\C=\I$.
|
Answer
The radius of the curve is approximately 136.4 feet.
Work Step by Step
We can convert the angle to radians: $\theta = (42.0^{\circ})(\frac{\pi~rad}{180^{\circ}}) = 0.733~rad$. Since the chord is approximately equal to the arc length, we can use the chord length $d$ to find the approximate value of the radius: $d \approx \theta ~r$, so $r \approx \frac{d}{\theta} \approx \frac{100~ft}{0.733~rad} \approx 136.4~ft$. The radius of the curve is approximately 136.4 feet.
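A quick check of the arithmetic (a throwaway sketch, not part of the solution):

```python
import math

# Chord d = 100 ft subtending a central angle of 42.0 degrees; with the
# chord approximated by the arc length, r = d / theta.
theta = math.radians(42.0)   # about 0.733 rad
r = 100.0 / theta
```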
|
Let $\Omega\subset\mathbb{R}^n$ be a bounded open set. Let us say it has a Lipschitz boundary.
Consider the Laplacian $\Delta$ in the classical sense. Suppose $\Delta u=\frac{\partial^2}{\partial x_1^2}u+\dotsb+\frac{\partial^2}{\partial x_n^2}u$ is bounded.
Q: Can we say $u\in C^1(\Omega)$? Does it depend on the dimension $n$? Can we claim the smoothness recursively, i.e., if $\Delta^m u$: bounded, then,...etc?
I was pondering about the relations of partial differentiability and continuity, and got confused.
Bounded partial derivatives imply continuity says "If all partial derivatives of f are bounded, then f is continuous on E.", but we cannot apply this argument recursively as we do not have the "cross term" $\frac{\partial^2}{\partial x_i\partial x_j}$.
We have $\Delta u\in L^2(\Omega)$ as $\Delta u$ is bounded on a bounded region $\Omega$. However, we cannot use the result Sobolev Space $W^{2,2}\cap W^{1,2}_0$ norm equivalence to say $u\in H^2(\Omega)$, because 1. $u$ does not necessarily vanish on the boundary, and 2. we are not sure if $\frac{\partial^2}{\partial x_1^2}u+\dotsb+\frac{\partial^2}{\partial x_n^2}u+(\text{partial derivatives of cross terms})u$ are bounded.
Aha, from
Equivalent Norms on Sobolev Spaces, $\Delta u\in L^2(\Omega)$ is enough to say $u\in H^2(\Omega)$.
But one thing is I do not know if we have the same kind of equivalence for $m>3$, and another thing is resorting to Sobolev embedding does not sound like a good idea as it depends on the dimension heavily.
I wonder if I could show this directly.
|
Image Denoising and Other Multidimensional Variational Problems
We previously discussed how to solve 1D variational problems with the COMSOL Multiphysics® software and implement complex domain and boundary conditions using a unified constraint enforcement framework. Here, we extend the discussion to multiple dimensions, higher-order derivatives, and multiple unknowns with what we hope will be an enjoyable example: variational image denoising. We conclude this blog series on variational problems with some recommendations for further study.
Variational Problems in Higher Spatial Dimensions
We have considered 1D problems and looked for a function u(x) that minimizes the functional
Now, let’s consider higher spatial dimensions while limiting ourselves to first-order derivatives. Consider the variational problem of minimizing
defined over a fixed domain \Omega.
For a neighboring function, u+\epsilon\hat{u}, this becomes
As with the 1D case, the necessary first-order optimality condition is
which means
To obtain variational derivatives, we have been forming what is essentially a function of the scalar parameter \epsilon and using single-variable calculus methods. A quicker, formal way is to use the variational derivative notation as
for fixed domains.
Here, \delta u corresponds to \hat{u} in our previous notation. Notice that since the domain is fixed, we consider a variation of the function and its derivatives only. If the domain can vary with the solution, the variational derivative will have an extra contribution coming from the boundary variation.
Image Denoising: A Multidimensional Variational Problem
In recent decades, variational methods have yielded powerful and rigorous techniques for image processing, such as denoising, deblurring, inpainting, and segmentation. Today, we will use denoising as an example. One technique for image denoising is called
total variation minimization. Say you have image data, u_o, that has been corrupted by noise. The image has speckles (sometimes called “salt-and-pepper noise”). You want to recover as much of the original image given by u as possible. As such, the image has to be denoised.
An image with erroneous details will have high variation; therefore, the denoised image should have as little variation as possible. A model after Tikhonov measures this total variation as
Minimizing just the total variation will aggressively smooth and return a solution close to a constant, losing legitimate details in the image. To prevent this issue, we also want to minimize the difference between the input data, u_o, and the solution, u, given by a fidelity term
Now that we have multiple, potentially conflicting objectives, let’s attempt a compromise by introducing the functional
where the regularization parameter \mu determines the emphasis on detail versus denoising (this is a user-specified positive number).
We are now ready to derive the first-order optimality condition, as discussed in the previous section. Requiring a vanishing variational derivative, we get
To demonstrate this process, we import an image into COMSOL Multiphysics and add a random noise to corrupt it. Here is what we get after ruining an image of a goose provided by my colleague:
A test image deliberately corrupted by random noise.
The weak form above is given in vector notation. For use in computation, let’s write it out in Cartesian coordinates and leave out the common factor 2.
This can be entered into the COMSOL® software as shown in the screenshot below. We keep the data on the edge as is by using the Dirichlet boundary condition u=uo on all boundaries, and we use a regularization parameter of 1e6. In the simpler algorithms, the regularization parameter is determined by trial and error. We increase the parameter if the resulting image is missing relevant details and reduce it if the result is deemed too noisy.
Specifying a variational problem in 2D.
Below is the denoised image, shown along with the original image — not bad for a rudimentary model!
Denoised image (left) and original image (right).
The Tikhonov regularization smooths too much and does not preserve geometric features such as edges and corners. The so-called ROF model with a functional given by
does better in this regard.
The first-order necessary condition for optimality, obtained by setting the variational derivative to zero, as done so far, is
Notice that the use of the absolute value in the functional results in a highly nonlinear problem. Also, the denominator |\nabla u| can go to zero in numerical iterations, giving a division-by-zero error. To prevent this, we can add a small positive number to it. In COMSOL Multiphysics, we often use the floating-point relative accuracy eps, which is about 2.2204 \times 10^{-16}.
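As an aside, the same ideas can be tried outside of COMSOL. The sketch below runs gradient descent on a discretized Tikhonov functional \sum |\nabla u|^2 + \mu \sum (u - u_o)^2 for a small synthetic image; the image, step size, \mu, iteration count, and the periodic boundary handling are illustrative choices of ours, not values from the text.

```python
import numpy as np

# Hedged numerical sketch: gradient descent on the discrete Tikhonov
# functional E(u) = sum |grad u|^2 + mu * sum (u - u_o)^2.
rng = np.random.default_rng(2)
n = 32
x, y = np.meshgrid(np.linspace(0, 1, n), np.linspace(0, 1, n))
clean = (x + y < 1).astype(float)                   # synthetic two-tone image
u_o = clean + 0.2 * rng.standard_normal((n, n))     # corrupted data

u = u_o.copy()
mu, step = 10.0, 0.1                                # illustrative parameters
for _ in range(200):
    # 5-point Laplacian with periodic wrap-around (the text uses Dirichlet)
    lap = (np.roll(u, 1, 0) + np.roll(u, -1, 0)
           + np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4 * u)
    u += step * (lap - mu * (u - u_o))              # descend along -dE/du

noise_before = np.mean((u_o - clean) ** 2)
noise_after = np.mean((u - clean) ** 2)
```

For the ROF functional the update would instead involve \nabla \cdot (\nabla u / (|\nabla u| + eps)), with the eps guard just described.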
The ROF model preserves edges but reportedly causes the so-called “staircasing effect”. Including higher-order derivatives in the functional helps avoid this.
Variational Problems with Higher-Order Derivatives
High-fidelity image processing and other subjects involve variational problems with high-order derivatives. A traditional subject in this respect is the analysis of elastic beams and plates. For example, in Euler beam theory, a beam with Young’s modulus E and cross-sectional moment of inertia I, loaded with lateral load f, bends to minimize the total potential energy
For small deformations, the analysis neglects the change of the domain, thus the variational formulation is to find u such that
Notice that we do not differentiate the Young’s modulus in the variational derivative because we are considering a linearly elastic material where material properties are independent of deformation. For nonlinear materials, the contribution to the variational derivative from material properties has to be included. This functionality is built into the Nonlinear Structural Materials Module, an add-on product to COMSOL Multiphysics.
The inclusion of higher-order derivatives in the functional does not introduce any conceptual change to the way we find the first-order optimality criteria, but it does have computational implications. Our finite element interpolation functions, picked in the
Discretization section of the Weak Form PDE Settings window, need a polynomial order at least as high as the highest-order spatial derivative in the variational form. For example, for the beam problem, there is a second-order derivative in the variational form, so we cannot pick a linear shape function. If we did, all of the second derivatives in our equation would uniformly vanish. We have to use quadratic or higher-order shape functions.
Variational Problems with Multiple Unknowns
So far, we have considered minimizing functionals that depend on only one unknown, u. Often, the functionals contain more than one unknown. Say we have a second unknown, v, and a functional
The first variation of this functional is
Here, we assume the two fields, u and v, are independent of each other and, as such, we can take independent variations in both variables. Sometimes, this is not the case: There can be a constraint between the variables.
The easiest constraint between variables is when one is given explicitly in terms of the other, as in v = g(u). In such cases, we can opt to eliminate v from the problem by considering the functional
Generally, we have constraints of the form g(u,v,\dotsc)=0 and we cannot algebraically invert this expression. In such cases, we use the techniques considered in Part 3 of the blog series, which is about constraint enforcement. For example, for the Lagrange multiplier method, we have an augmented functional given by
where \lambda = \lambda(x,y,z) for a distributed constraint or
where \lambda is a constant for a global constraint.
We use the same
Weak Form PDE interface to specify such multifield problems. The question is: Do we use a single interface or an interface for each unknown? It depends on the relationship between the unknowns. On one hand, if u and v are different components of the same physical vector, such as displacement or velocity, we can use the same interface and specify the number of dependent variables in the Dependent Variables section. On the other hand, if u represents temperature and v represents electric potential, we can employ different PDE interfaces. If, for some compelling reason, we have to use different discretizations or scales for different components of the same vector unknown, we can use multiple interfaces just as well. When we have multiple unknowns, we have to carefully choose the interpolation functions for each field (as discussed in a previous blog post on element orders in multiphysics models).
Concluding Remarks
Variational methods provide a unified framework to model a plethora of scientific problems. The
Weak Form PDE interface, included in COMSOL Multiphysics, enables you to extend the functionality of the COMSOL® software by bringing in your own variational problems. In this series, we have demonstrated this power by solving problems that include finding the shape of soap films, planning paths for a hiker around a lake, and repairing corrupted images.
Bear in mind that several important partial differential equations do not come from minimizing a functional. The Navier-Stokes equation is one example. You can still use the
Weak Form PDE interface to solve such problems after deriving their weak forms.
An important part of solving variational problems is specifying constraints. The previous three blog posts in this series deal with the mathematical formulation and numerical analysis of constraint enforcement.
A warning here is that we have only considered necessary conditions for optimality. Vanishing first-order derivatives are not sufficient for minima. First-order derivatives vanish on maxima as well. In the examples discussed in this series, we consider well-known problems where we know that the solution provides a minimum. When you are working on novel problems, make sure to check the second-order optimality criteria as well as the existence and uniqueness of the minimum (maximum) before cranking up the computation. For this and other more involved analytical and numerical aspects of variational problems, the following references are some of my personal favorites:
A classic text on calculus of variations:
I.M. Gelfand and S.V. Fomin (English translation by R.A. Silverman), Calculus of Variations, Dover Publications, Inc., 1963.
More recent texts that discuss several engineering problems:
K.W. Cassel, Variational Methods with Applications in Science and Engineering, Cambridge University Press, 2013.
J.W. Brewer, Engineering Analysis in Applied Mechanics, Taylor & Francis, 2002.
For constraint enforcement strategies:
D.G. Luenberger, Introduction to Linear and Nonlinear Programming, Addison-Wesley Publishing Company, 1973.
S.S. Rao, Engineering Optimization: Theory and Practice, John Wiley & Sons Inc., 2009.
Hopefully, this series has given you a taste of modeling variational problems using COMSOL Multiphysics, especially when what you want to model is not already built into the software.
Have fun, and feel free to contact us with any questions via the button below:
|
Title: On u-deformed Kottwitz's involution modules
报告人:胡峻 教授(北京理工大学)
地点:玉泉校区工商楼105
摘要: Let (W,S) be a Coxeter system and $\ast$ an automorphism of W with order $\leq 2$ and $S^{\ast}=S$.
Lusztig and Vogan have introduced a u-deformed version $M_u$ of Kottwitz's involution module over the Iwahori--Hecke algebra $H_{u}(W)$ with Hecke parameter $u^2$. Lusztig has conjectured that $M_u$ is isomorphic to the left ideal of $H_{u}(W)$ generated by $\sum_{w^*=w\in W}u^{-\ell(w)}T_w$. In this talk I will present some recent progress on this conjecture and its generalization to more general ground base ring.
|
There isn't a single way in which one can approach a discrete optimization problem using Differential Evolution (DE).
Widespread techniques listed under the Discrete Differential Evolution label aren't DE-specific.
You can allow variables to take values in a continuous range and use penalty functions to enforce integer values:
$$ \bar{f}(w) = f(w) - \sum_i{k_i \cdot (w_i - \operatorname{round}(w_i))^2} $$
$w$ is the vector of parameters (chromosome values), $f: \mathbb R^n \rightarrow \mathbb R$ the basic fitness function (here assuming "greater is better"), $k$ a problem-specific scaling vector, $\bar{f}(\cdot)$ the "penalized" fitness function.
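A minimal sketch of the penalty approach; the fitness function and scaling vector below are toy choices of ours:

```python
import numpy as np

# Toy fitness ("greater is better"); the optimum of f itself is w = (2, 5, 1).
def f(w):
    return -np.sum((w - np.array([2.0, 5.0, 1.0])) ** 2)

k = np.array([10.0, 10.0, 10.0])   # problem-specific scaling (illustrative)

def f_bar(w):
    # penalized fitness: punish each gene by its squared distance to an integer
    return f(w) - np.sum(k * (w - np.round(w)) ** 2)
```

The DE loop itself is unchanged; only `f_bar` is evaluated in place of `f`.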
In this way the DE algorithm (DE/rand/1) stays the same:
$$\begin{align}X_{j,r2}^G - X_{j,r3}^G & = \{2,2,3,0,4,2\} - \{1,2,3,3,0,1\} = \{1,0,0,-3,4,1\} \\F \cdot (X_{j,r2}^G - X_{j,r3}^G) & = 0.5 \cdot \{1,0,0,-3,4,1\} = \{0.5,0,0,-1.5,2,0.5\} \\V_{j,i}^{G+1} & = \{4,1,3,2,2,0\} + \{0.5,0,0,-1.5,2,0.5\} = \{4.5,1,3,0.5,4,0.5\}\end{align}$$
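The mutation step can be reproduced in a few lines (vector values taken from the worked example above; F = 0.5):

```python
import numpy as np

# DE/rand/1 mutation: donor V = X_r1 + F * (X_r2 - X_r3)
F = 0.5
X_r1 = np.array([4, 1, 3, 2, 2, 0], dtype=float)
X_r2 = np.array([2, 2, 3, 0, 4, 2], dtype=float)
X_r3 = np.array([1, 2, 3, 3, 0, 1], dtype=float)
V = X_r1 + F * (X_r2 - X_r3)   # donor vector
```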
The trial vector $U$ is obtained via crossover between the donor vector $V_{j,i}^{G+1}$ and a target vector $X$:
$$U_{j,i}^{G+1} = \operatorname{crossover}(V_{j,i}^{G+1}, X_{j,i}^{G})$$
The target vector is compared with the trial vector and the better of the two is admitted to the next generation.
This is the recommended procedure with the R DEoptim package (via the optional fnMap parameter).
You can round all the real-valued parameters before evaluating the fitness function:
$$\bar{f}(w) = f(\operatorname{round}(w))$$
(round acts as a repair operator)
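A sketch of the rounding repair, again with a hypothetical fitness function of our own:

```python
import numpy as np

def f(w):
    # toy fitness, illustrative only
    return -np.sum((w - np.array([2.3, 4.8])) ** 2)

def f_bar(w):
    # round acts as a repair operator: the fitness only ever sees integers
    return f(np.round(w))
```

Any two real-valued chromosomes that round to the same integer point receive the same fitness.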
This is the technique used by Mathematica's functions NMinimize / NMaximize with the options Method → "DifferentialEvolution" and Element[w, Integers].
There are also many variations of DE named something-Discrete-DE:
- Binary Discrete Differential Evolution: the solution of a problem is represented as a binary string instead of a real-valued vector.
- Real Value based Discrete Differential Evolution: introduces forward/backward transformations to map integers into real numbers and vice versa.
- Exchange based Discrete Differential Evolution: here the crossover operator doesn't change, but mutation, being the primary operator acting on elements of a vector in continuous space, is replaced.
- ...
So you should specify what form of Discrete DE you're interested in for a step by step example.
Meanwhile, A Comparative Study of Discrete Differential Evolution on Binary Constraint Satisfaction Problems by Qingyun Yang (2008 IEEE Congress on Evolutionary Computation) is a good starting point with many references.
|
Calculus
Calculus is a branch of mathematics which helps us understand changes between values that are related by a function. For example, if you had one formula telling how much money you got every day, calculus would help you understand related formulas, like how much money you have in total, and whether you are getting more money or less than you used to. All these formulas are functions of time, and so that is one way to think of calculus -- studying functions of time. There are two different types of calculus. Differential calculus divides things into small (different) pieces and tells us how they change from one moment to the next, while integral calculus joins (integrates) the small pieces together and tells us how much of something is made, overall, by a series of changes. Calculus is used in many different areas of study such as physics, astronomy, biology, engineering, economics, medicine and sociology.
The word
Calculus comes from the Latin language, meaning "small stone".
History
In the 1670s and 1680s, Sir Isaac Newton in England and Gottfried Leibniz in Germany figured out calculus at the same time, working separately from each other. Newton wanted to have a new way to predict where to see planets in the sky, because astronomy had always been a popular and useful form of science, and knowing more about the motions of the objects in the night sky was important for navigation of ships. Leibniz wanted to measure the space (area) under a curve (a line which is not straight). Many years later, the two men argued over who discovered it first. Scientists from England supported Newton, but scientists from the rest of Europe supported Leibniz. Most mathematicians today agree that both men share the credit equally. Some parts of modern calculus come from Newton, such as its uses in physics. Other parts come from Leibniz, such as the symbols used to write it.
They were not the first people to use mathematics to describe the physical world—Aristotle and Pythagoras came earlier, and so did Galileo, who said that mathematics was the language of science. But they were the first to design a system that describes how things change over time and can predict how they will change in the future.
Differential calculus Differential calculus is the process of finding out the rate of change of a variable compared to another variable. It can be used to find the speed of a moving object or the slope of a curve, figure out the maximum or minimum points of a curve, or find answers to problems in the electricity and magnetism areas of physics, among many other uses.
Many amounts can be variables, which can change their value unlike numbers such as 5 or 200. Some examples of variables are distance and time. The speed of an object is how far it travels in a particular time. So if a town is 80 kilometres (50 miles) away and a person in a car gets there in one hour, they have traveled at an average speed of 80 kilometres (50 miles) per hour. But this is only an average—they may have been traveling faster at some times (on a highway) and slower at others (at a traffic light or on a small street where people live). Imagine a driver trying to figure out a car's speed using only its odometer (distance meter) and clock, without a speedometer!
Until calculus was invented, the only way to work this out was to cut the time into smaller and smaller pieces, so the average speed over the smaller time would get closer and closer to the actual speed at a point in time. This was a very long and hard process and had to be done each time people wanted to work something out.
A very similar problem is to find the slope (how steep it is) at any point on a curve. The slope of a
straight line is easy to work out—it is simply how much it goes up ( y or vertical) divided by how much it goes across ( x or horizontal). On a curve, though, the slope is a variable (has different values at different points) because the line bends. But if the curve was to be cut into very, very small pieces, the curve at the point would look almost like a very short straight line. So to work out its slope, a straight line can be drawn through the point with the same slope as the curve at that point. If it is done exactly right, the straight line will have the same slope as the curve, and is called a tangent. But there is no way to know (without very complicated mathematics) whether the tangent is exactly right, and our eyes are not accurate enough to be certain whether it is exact or simply very close.
What Newton and Leibniz found was a way to work out the slope (or the speed in the distance example) exactly using simple and logical rules. They divided the curve into an infinite number of very small pieces. They then chose points on either side of the range they were interested in and worked out tangents at each. As the points moved closer together towards the point they were interested in, the slope
approached a particular value as the tangents approached the real slope of the curve. They said that this particular value it approached was the actual slope.
Let's say we have a function
y = f( x). f is short for function, so this equation means "y is a function of x". This tells us that how high y is on the vertical axis depends on what x (the horizontal axis) is at that time. For example with the equation y = x², we know that if x is 1, then y will be 1; if x is 3, then y will be 9; if x is 20, then y will be 400.
The exact slope at a point can be written as a limit, where h is a very small change in x:
[math]\lim_{h\rightarrow0} \frac{f(x+h) - f(x)}{h}[/math]
If we use y = x², the
derivative produced using this method is 2 x, or 2 multiplied by x. So we know without having to draw any tangent lines that at any point on the curve f(x) = x², the derivative f'(x) (marked with an apostrophe) will be 2 x at any point. This process of working out a slope using limits is called differentiation, or finding the derivative.
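Here is a short computer check (a Python sketch, not part of the original article) that these slopes really do get closer and closer to 2x -- here 2 × 3 = 6 -- as h gets smaller:

```python
# Slope of f(x) = x**2 at x = 3, using smaller and smaller values of h.
def f(x):
    return x ** 2

x = 3.0
slopes = [(f(x + h) - f(x)) / h for h in (0.1, 0.01, 0.001)]
# each slope is closer to the derivative 2*x = 6 than the one before it
```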
Leibniz came to the same result, but called h "dx", which means "a tiny amount of x". He called the resulting change in f(x) "dy", which means "a tiny amount of y". Leibniz's notation is used by more books because it is easy to understand when the equations become more complicated. In Leibniz notation:
[math]\frac{dy}{dx} = f'(x)[/math]
Mathematicians have grown this basic theory to make simple algebra rules which can be used to find the derivative of almost any function.
Main idea of calculus
The main idea in calculus is called the "Fundamental Theorem of Calculus". This main idea says that the two calculus processes, differential and integral calculus, are opposites. That is, a person can use differential calculus to undo an integral calculus process. Also, a person can use integral calculus to undo a differential calculus method. This is just like using division to "undo" multiplication, or addition to "undo" subtraction.
In a single sentence, the Fundamental Theorem runs something like this: "The derivative of the integral of a function
f is the function itself".
Demonstration of main idea of calculus
How to use integral calculus to find areas
The method integral calculus uses to find areas of shapes is to break the shape up into many small boxes, and add up the area of each of the boxes. This gives an approximation to the area. If the boxes are made narrower and narrower, then there are more and more of them, and the area of all the boxes becomes very close to the area of the shape. One of the main ideas of calculus is that we can imagine having an infinite number of these boxes, each infinitely narrow, and then we would have the exact area of the shape.
Other uses of calculus
- How waves move. Waves are very important in the natural world. For example, sound and light can be thought of as waves.
- Where heat moves, like in a house. This is useful for architecture (building houses), so that the house can be as cheap to heat as possible.
- How very small things like atoms act.
- How fast something will fall, also known as gravity.
- How machines work, also known as mechanics.
- The path of the moon as it moves around the earth. Also, the path of the earth as it moves around the sun, and any planet or moon moving around anything in space.
|
In this section, we define what is arguably the single most important function in all of mathematics. We have already noted that the function $\ln x $ is injective, and therefore it has an inverse.
Definition 9.3.1 The inverse function of $\ln(x)$ is $y=\exp(x)$, called the
natural exponential function.
The domain of $\exp(x) $ is all real numbers and the range is $(0, \infty)$. Note that because $\exp(x)$ is the inverse of $\ln(x)$, $\exp (\ln x) =x$ for $x>0$, and $\ln (\exp x) = x$ for all $x$. Also, our knowledge of $\ln(x)$ tells us immediately that $\exp(1) = e$, $\exp(0) = 1$, $\ds\lim _{x\to\infty} \exp x =\infty$, and $\ds\lim_{x\to -\infty } \exp x = 0$.
Theorem 9.3.2 $\ds {d\over dx}\exp(x) = \exp(x)$.
Proof. By the Inverse Function Theorem (9.1.17), $\exp(x)$ has a derivative everywhere. The theorem also tells us what the derivative is. Alternately, we may compute the derivative using implicit differentiation: Let $y=\exp x $, so $\ln y =x $. Differentiating with respect to $x$ we get $\ds {1\over y} {dy\over dx} =1$. Hence, ${dy\over dx} = y =\exp x$.
Corollary 9.3.3 Since $\exp x >0 $, $\exp x $ is an increasing function whose graph is concave up.
Corollary 9.3.4 The general antiderivative of $\exp x $ is $\exp x + C $.
Of course, the word "exponential'' already has a mathematical meaning, and this meaning extends in a natural way to the exponential function $\exp(x)$.
Lemma 9.3.5 For any rational number $q$, $\exp(q) = e^q$.
Proof. Let $y=e^q $. Then $\ln y = \ln (e^q ) = q \ln e = q$, and so $y= \exp(q)$.
In view of this lemma, we usually write $\exp(x)$ as $\ds e^x$ for any real number $x$. Conveniently, it turns out that the usual laws of exponents apply to $\ds e^x$.
Theorem 9.3.6 (a) $\ds e^{x+y} = e^x e^y $
(b) $\ds e^{x-y} = e^x/e^y$
(c) $\ds (e^x )^q = e^{xq} $
Proof. Parts (b) and (c) are left as exercises. For part (a), $\ln (e^x e^y) =\ln e^x + \ln e^y = x +y$, so $e^x e^y = e^{x+y }$.
Example 9.3.7 Solve $\ds e^{4x+5} - 3 =0$ for $x$.
If $\ds e^{4x+5} - 3 =0$ then $\ds 4x+5 =\ln 3$ and so $\ds x={\ln 3 -5\over 4}$.
Example 9.3.8 Find the derivative of $f(x) =e^{x^3 } \sin (4x)$.
By the product and chain rules, $f'(x) =3x^2 e^{x^3 } \sin (4x) + 4 e^{x^3 } \cos(4x)$.
Example 9.3.9 Evaluate $\int x e^{x^2 } dx $.
Let $u=x^2$, so $du = 2x\,dx$. Then $$\int x e^{x^2}\,dx={1\over2}\int e^u\,du= {1\over2}e^u= {1\over2}e^{x^2}+C.$$
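As a quick numerical sanity check of this example (an aside, not part of the text), a midpoint Riemann sum of $xe^{x^2}$ over $[0,1]$ should agree with the antiderivative ${1\over2}e^{x^2}$ evaluated between the endpoints:

```python
import math

def g(x):
    return x * math.exp(x ** 2)

n = 100000
# midpoint Riemann sum of g on [0, 1]
riemann = sum(g((i + 0.5) / n) / n for i in range(n))
# antiderivative (1/2) e^{x^2} evaluated from 0 to 1
exact = 0.5 * (math.e - 1.0)
```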
Exercises 9.3
Ex 9.3.1 Prove parts (b) and (c) of theorem 9.3.6.
Ex 9.3.2 Solve $\ln (1+ \sqrt{x} ) = 6 $ for $x$.
Ex 9.3.3 Solve $\ds e^{x^2} = 8$ for $x$.
Ex 9.3.4 Solve $\ln (\ln (x) ) = 1 $ for $x$.
Ex 9.3.5 Sketch the graph of $\ds f(x) = e^{4x-5 }+ 6 $.
Ex 9.3.6 Sketch the graph of $f(x) =3e^{x+6} -4 $.
Ex 9.3.7 Find the equation of the tangent line to $f(x) =e^x $ at $x= a $.
Ex 9.3.8 Compute the derivative of $f(x) = 3x^2 e^{5x-6} $.
Ex 9.3.9 Compute the derivative of $\ds f(x)= e^x -\left( 1+ x +{x^2\over 2} + {x^3\over3!} + \cdots + {x^n\over n!}\right)$.
Ex 9.3.10 Prove that $e^x > 1 $ for $x\geq 0$. Then prove that $e^x > 1+ x $ for $x\geq 0 $.
Ex 9.3.11 Using the previous two exercises, prove (using mathematical induction) that $\ds e^x > 1+ x +{x^2\over 2} + {x^3\over3!} + \cdots + {x^n\over n!}=\sum_{k=0 }^n {x^k\over k!}$ for $x\geq 0 $.
Ex 9.3.12 Use the preceding exercise to show that $e> 2.7$.
Ex 9.3.13 Differentiate $\ds {e^{kx}+ e^{-kx}\over 2} $ with respect to $x$.
Ex 9.3.14 Compute $\ds\lim_{x\to\infty} {e^x + e^{-x}\over e^x -e^{-x}}$.
Ex 9.3.15 Integrate $5x^4 e^{x^5}$ with respect to $x$.
Ex 9.3.16 Compute $\ds \int_0^{\pi/3} \cos (2x) e^{\sin 2x}\,dx$.
Ex 9.3.17 Compute $\ds \int {e^{1/x^2}\over x^3}\,dx$.
Ex 9.3.18 Let $\ds F(x) = \int_0^{e^x} e^{t^4}\,dt$. Compute $F'(0)$.
Ex 9.3.19 If $f(x) =e^{kx}$ what is $f^{(940)}(x)$?
|
Erdős cardinals
The $\alpha$-Erdős cardinals were introduced by Erdős and Hajnal in [1] and arose out of their study of partition relations. A cardinal $\kappa$ is $\alpha$-Erdős for an infinite limit ordinal $\alpha$ if it is the least cardinal $\kappa$ such that $\kappa\rightarrow (\alpha)^{\lt\omega}_2$ (if any such cardinal exists).
For infinite cardinals $\kappa$ and $\lambda$, the partition property $\kappa\to(\lambda)^n_\gamma$ asserts that for every function $F:[\kappa]^n\to\gamma$ there is $H\subseteq\kappa$ with $|H|=\lambda$ such that $F\upharpoonright[H]^n$ is constant. Here $[X]^n$ is the set of all $n$-elements subsets of $X$. The more general partition property $\kappa\to(\lambda)^{\lt\omega}_\gamma$ asserts that for every function $F:[\kappa]^{\lt\omega}\to\gamma$ there is $H\subseteq\kappa$ with $|H|=\lambda$ such that $F\upharpoonright[H]^n$ is constant for every $n$, although the value of $F$ on $[H]^n$ may be different for different $n$. Indeed, if $\kappa$ is $\alpha$-Erdős for some infinite ordinal $\alpha$, then $\kappa\rightarrow (\alpha)^{\lt\omega}_\lambda$ for all $\lambda<\kappa$ (Silver's PhD thesis).
The $\alpha$-Erdős cardinal is precisely the least cardinal $\kappa$ such that for any language $\mathcal{L}$ of size less than $\kappa$ and any structure $\mathcal{M}$ with language $\mathcal{L}$ and domain $\kappa$, there is a set of indiscernibles for $\mathcal{M}$ of order-type $\alpha$.
A cardinal $\kappa$ is called Erdős if and only if it is $\alpha$-Erdős for some infinite limit ordinal $\alpha$. Because there exists at most one $\alpha$-Erdős cardinal, the notations $\eta_\alpha$ and $\kappa(\alpha)$ are sometimes used to denote the $\alpha$-Erdős cardinal.
Different terminology (Baumgartner, 1977): an infinite cardinal $κ$ is $ω$-Erdős if for every club $C$ in $κ$ and every function $f : [C]^{<ω} → κ$ that is regressive (i.e. $f(a) < \min(a)$ for all $a$ in the domain of $f$) there is a subset $X ⊂ C$ of order type $ω$ that is homogeneous for $f$ (i.e. $f ↾ [X]^n$ is constant for all $n < ω$). Schmerl, 1976 (theorem 6.1) showed that the least cardinal $κ$ such that $κ → (ω)_2^{<ω}$ has this property, if it exists.[2]
Facts
- $\eta_\alpha<\eta_\beta$ whenever $\alpha<\beta$, and $\eta_\alpha\geq\alpha$. [3]
With Baumgartner definition:[2]
- Every $ω$-Erdős cardinal is inaccessible.
- If $η$ is an $ω$-Erdős cardinal then $η → (ω)_α^{<ω}$ for every cardinal $α < η$.
- If $α ≥ 2$ is a cardinal and there is a cardinal $η$ such that $η → (ω)_α^{<ω}$, then the least such cardinal $η$ is an $ω$-Erdős cardinal (and is greater than $α$).
Simple conclusions from the last two facts:
- The statement “there is an $ω$-Erdős cardinal” is equivalent to the statement $∃_η\, η → (ω)_2^{<ω}$.
- The statement “there is a proper class of $ω$-Erdős cardinals” is equivalent to the statement $∀_α ∃_η\, η → (ω)_α^{<ω}$.
Erdős cardinals and the constructible universe:
- $\omega_1$-Erdős cardinals imply that $0^\sharp$ exists and hence there cannot be $\omega_1$-Erdős cardinals in $L$. [4]
- $\alpha$-Erdős cardinals are downward absolute to $L$ for $L$-countable $\alpha$. More generally, $\alpha$-Erdős cardinals are downward absolute to any transitive model $M$ of ZFC for $M$-countable $\alpha$. [5]
Relations with other large cardinals:
Every Erdős cardinal is inaccessible. (Silver's PhD thesis)
Every Erdős cardinal is subtle. [6]
$\eta_\omega$ is a stationary limit of ineffable cardinals. [7]
$η_ω$ is a limit of virtually rank-into-rank cardinals. [8]
The existence of $\eta_\omega$ implies the consistency of a proper class of $n$-iterable cardinals for every $1\leq n<\omega$.[9]
For an additively indecomposable ordinal $λ ≤ ω_1$, $η_λ$ (the least $λ$-Erdős cardinal) is a limit of $λ$-iterable cardinals, and if there is a $λ+1$-iterable cardinal, then there is a $λ$-Erdős cardinal below it.[8]
The consistency strength of the existence of an Erdős cardinal is stronger than that of the existence of an $n$-iterable cardinal for every $n<\omega$ and weaker than that of the existence of $0^{\#}$.
The existence of a proper class of Erdős cardinals is equivalent to the existence of a proper class of almost Ramsey cardinals. The consistency strength of this is weaker than a worldly almost Ramsey cardinal, but stronger than an almost Ramsey cardinal. The existence of an almost Ramsey cardinal is stronger than the existence of an $\omega_1$-Erdős cardinal. [10]
A cardinal $\kappa$ is Ramsey precisely when it is $\kappa$-Erdős. (Baumgartner definition)
The existence of non-remarkable weakly remarkable cardinals is equiconsistent with the existence of an $ω$-Erdős cardinal (equivalent assuming $V=L$):[2] every $ω$-Erdős cardinal is a limit of non-remarkable weakly remarkable cardinals, and if $κ$ is a non-remarkable weakly remarkable cardinal, then some ordinal greater than $κ$ is an $ω$-Erdős cardinal in $L$.
Weakly Erdős and greatly Erdős
(Information in this section from [10])
Suppose that $κ$ has uncountable cofinality, $\mathcal{A}$ is a $κ$-structure, $X ⊆ κ$, and $t_\mathcal{A}(X) = \{ α ∈ κ : α$ is a limit ordinal and there exists a set $I ⊆ α ∩ X$ of good indiscernibles for $\mathcal{A}$ cofinal in $α \}$. Using this one can define a hierarchy of normal filters $\mathcal{F}_\alpha$, potentially for all $α < κ^+$; these are generated by suprema of sets of nested indiscernibles for structures $\mathcal{A}$ on $κ$ using the basic $t_\mathcal{A}(X)$ operation above. A cardinal $κ$ is
weakly $α$-Erdős when $\mathcal{F}_\alpha$ is non-trivial.
$κ$ is
greatly Erdős iff there is a non-trivial normal filter $\mathcal{F}$ on $κ$ such that $\mathcal{F}$ is closed under $t_\mathcal{A}(X)$ for every $κ$-structure $\mathcal{A}$. Equivalently (for $κ$ of uncountable cofinality): $\mathcal{G} = \bigcup_{\alpha < \kappa^+} \mathcal{F}_\alpha \not\ni \varnothing$; equivalently, $κ$ is weakly $α$-Erdős for all $α < κ^+$;
and (for inaccessible $κ$ and any choice $⟨ f_β : β < κ^+ ⟩$ of canonical functions for $κ$):
$\{γ < κ : f_β (γ) ⩽ o_\mathcal{A} (γ)\} \neq \varnothing$ for all $β < κ^+$ and $κ$-structures $\mathcal{A}$ such that $\mathcal{A} \models ZFC$
Relations:
If $κ$ is a $2$-weakly Erdős cardinal then $κ$ is almost Ramsey.
If $κ$ is virtually Ramsey then $κ$ is greatly Erdős.
There are stationarily many completely ineffable, greatly Erdős cardinals below any Ramsey cardinal.
References:
1. Erdős, Paul and Hajnal, András. On the structure of set-mappings. Acta Math. Acad. Sci. Hungar. 9:111--131, 1958.
2. Wilson, Trevor M. Weakly remarkable cardinals, Erdős cardinals, and the generic Vopěnka principle. 2018.
3. Kanamori, Akihiro. The Higher Infinite: Large Cardinals in Set Theory from Their Beginnings. Second edition (paperback reprint of the 2003 edition), Springer-Verlag, Berlin, 2009.
4. Silver, Jack. Some applications of model theory in set theory. Ann. Math. Logic 3(1):45--110, 1971.
5. Silver, Jack. A large cardinal in the constructible universe. Fund. Math. 69:93--100, 1970.
6. Jensen, Ronald and Kunen, Kenneth. Some combinatorial properties of $L$ and $V$. Unpublished, 1969.
7. Jech, Thomas J. Set Theory. The third millennium edition, revised and expanded. Springer-Verlag, Berlin, 2003.
8. Gitman, Victoria and Schindler, Ralf. Virtual large cardinals.
9. Gitman, Victoria. Ramsey-like cardinals. The Journal of Symbolic Logic 76(2):519--540, 2011.
10. Sharpe, Ian and Welch, Philip. Greatly Erdős cardinals with some generalizations to the Chang and Ramsey properties. Ann. Pure Appl. Logic 162(11):863--902, 2011.
|
I have $d$ normal distributions, $N_1(\mu_1, \sigma_1^2), \dots, N_d(\mu_d, \sigma_d^2)$. We pick one of the $d$ distributions, each with probability $\frac{1}{d}$ of being picked, and generate a sample $s_0$. What is the probability that it was generated from the distribution $N_1$?
I think this is related to the Bayes theorem. I have tried the following. I set a variable, $X$, which takes the index of the chosen distribution. Let there be a variable $s$ representing a sample from any of the $d$ distributions. I know :
$P(X=1) = \cdots = P(X=d) = \frac{1}{d}$. $(s \mid X=k) \sim N(\mu_k, \sigma_k^2)$ for all $k \in \{1, \dots, d\}$. $P (X=1 \mid s=s_0) = \frac{\frac{1}{d}\cdot P(s=s_0 \mid X=1)}{\sum_{i=1}^d \frac{1}{d} \cdot P(s=s_0 \mid X=i)} = \frac{P(s=s_0 \mid X=1)}{\sum_{i=1}^d P(s=s_0 \mid X=i)} $
However, I don't know the probability of a single sample as area under the PDF evaluated at a single point is 0. How do I proceed?
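One way to see past this obstacle: in the Bayes ratio above, each $P(s = s_0 \mid X = i)$ can be replaced by the component's density evaluated at $s_0$, since the infinitesimal widths $ds$ cancel between numerator and denominator. A minimal stdlib-only sketch of this computation (the component parameters below are made-up example values):

```python
import math

def normal_pdf(x, mu, sigma):
    """Density of N(mu, sigma^2) evaluated at x."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def posterior(s0, params):
    """P(X = k | s = s0) under a uniform 1/d prior over the components.

    params: list of (mu, sigma) pairs; returns the list of posteriors.
    The 1/d prior cancels, so only the densities at s0 matter.
    """
    densities = [normal_pdf(s0, mu, sigma) for mu, sigma in params]
    total = sum(densities)
    return [d / total for d in densities]

# Hypothetical example: two components N(0, 1) and N(3, 1), sample s0 = 0.
probs = posterior(0.0, [(0.0, 1.0), (3.0, 1.0)])
print(probs)  # the first component dominates, since s0 = 0 is far from mu = 3
```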
|
For a constant N, what value of x will maximize the cosine (or any trig) function?
\begin{equation} 1 = \cos{(Nx)} \end{equation}
I am looking for the exact general solution, not just the trivial one, because \begin{equation} \frac{\arccos{(1)}}{N} = x = 0 \end{equation}
For example, WolframAlpha.com states that if N = 19.013, then \begin{equation}x = \frac{2000 \pi n}{19013} , \quad n \in \mathbb{Z}\end{equation} How was that solution calculated?
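The quoted answer follows from the periodicity of cosine: $\cos(Nx) = 1$ exactly when $Nx = 2\pi n$ for some integer $n$, i.e. $x = 2\pi n / N$; writing $N = 19.013 = 19013/1000$ gives $x = 2000\pi n / 19013$. A quick sketch of this reduction using exact rationals:

```python
import math
from fractions import Fraction

N = Fraction(19013, 1000)  # N = 19.013 written exactly

# cos(N*x) = 1 exactly when N*x = 2*pi*n for integer n, i.e. x = (2/N)*pi*n.
coeff = Fraction(2, 1) / N  # x = coeff * pi * n
print(coeff)  # 2000/19013, matching WolframAlpha's 2000*pi*n/19013

# Numerical sanity check for a few integer values of n.
for n in range(5):
    x = 2 * math.pi * n / float(N)
    assert abs(math.cos(float(N) * x) - 1.0) < 1e-9
```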