Ms Alexandra Kiňová (23) is expecting Czechia's first naturally born quintuplets (a package of 5 babies) on Sunday morning (tomorrow; update: they're out fine), which would mean that we match the achievement of the most fertile U.S. state – Utah – from last week. Cool anniversary: In late January, we celebrated the 30th anniversary of the announcement of the discovery of the W-boson. Today, we celebrate the 30th anniversary of the Z-boson. They were discoveries comparably important to the recent discovery of the God particle. Sport: Viktoria Pilsen defeated Hradec, a much weaker team, 3-to-0 in the last round, so we won the top soccer league for the 2nd time (after 2011). Because the Pilsner ice-hockey team has won the top league as well, Pilsen became the 2nd town in Czechia after Prague to collect both titles in the same year (correction: wrong, 3rd town, Ostrava did it in 1981). The Daily Mail tells us that the pregnancy has been easy so far. Doctors were still talking about "twins" in January and "quadruplets" in April. The probability that a birth produces \(n\)-tuplets goes like \(1/90^{n-1}\) or so, but the decrease slows down relative to this formula for really large \(n\). In physics, quintuplets are rare, too. By quintuplets, we mean five-dimensional irreducible representations of groups. Correct me if I am wrong but I think that among the simple Lie groups, only \(SU(2)=SO(3)\), \(USp(4)=SO(5)\), and \(SU(5)\) have irreducible five-dimensional representations. Let's look at them because looking at all quintuplets in group theory and physics is a rather unusual direction of approach to a subset of the wisdom contained across the structure of maths and physics. First, \(SU(2)\). That's a three-dimensional group of \(2\times 2\) complex matrices \(M\) obeying \(MM^\dagger={\bf 1}\) and \(\det M=1\). 
The basic isomorphisms behind spinors imply that this group is the same as the group \(SO(3)\) of rotations of the three-dimensional space except that the matrices \(+M\) and \(-M\) have to be identified. The irreducible representations of \(SU(2)\) are labeled by the spin \(j\) which must be either a non-negative integer or a positive half-integer (only the former may also be interpreted as proper representations of \(SO(3)\); the latter change their sign after a 360-degree rotation). Because the \(z\)-projection goes from \(m=-j\) to \(m=+j\) with the spacing equal to one, the representation is \((2j+1)\)-dimensional. The \(j=0\) representation is the trivial singlet that doesn't transform at all; the \(j=1/2\) representation is the two-dimensional pseudoreal spinor; the \(j=1\) representation is equivalent to the usual 3-dimensional vector; the \(j=3/2\) representation is a gravitino-like four-dimensional "spinvector". And finally, the \(j=2\) representation is the traceless symmetric tensor. What do I mean by that? Imagine that you consider the tensor product \(V\otimes W\) of two copies of the three-dimensional vector space \(V=W=\RR^3\). The tensor product is composed of objects \(T_{ij}\) where \(i,j\) are vector indices: it's composed of tensors. Clearly, such a tensor has \(3\times 3 = 9\) independent components. They can be split into several pieces:\[ {\bf 3}\otimes {\bf 3} = {\bf 5} \oplus {\bf 1}\oplus {\bf 3} \] The identity \(3\times 3 = 5+1+3\) is the consistency check that verifies that the representations above have the right dimensions, but the boldface identity above says more than just the arithmetic claim about the integers: the two sides are representations of whole groups and the identity says that they're transforming in equivalent ways under all elements of the group. Why is this decomposition right? 
Well, the tensor \(T_{ij}\) may be divided into the symmetric tensor part which is 6-dimensional and the antisymmetric tensor part which is 3-dimensional (the latter is equal to \(\epsilon_{ijk}v_k\), i.e. equivalent to some vector \(v_k\)). However, the 6-dimensional symmetric tensor isn't an irreducible representation of \(SO(3)\). The trace \[ \sum_{i=1}^3 T_{ii} \] is independent of the coordinate system, i.e. invariant under rotations, and may be separated from the 6-dimensional representation. The trace may be set to zero by removing it, i.e. by considering\[ T^\text{traceless part}_{ij} = T_{ij} - \frac 13 \delta_{ij} T_{kk} \] and such a traceless tensor has 5 independent components; it is a quintuplet. The quadrupole moment tensor is one of the most famous applications of this 5-dimensional object. You could think it's just an accident that this number 5 is equal to the number of integers between \(m=-2\) and \(m=+2\); you could claim that the agreement is pure numerology, an agreement between the dimensions of two representations. But it is more than numerology: the representations are completely equivalent. The translation between the components \(T_{ij}\) of the (complexified) traceless tensor and the five complex amplitudes \(c_m\) for \(-2\leq m\leq 2\) is nothing else than a linear change of basis. It has to be so because for every \(j\), the irreducible representation of \(SU(2)\) is unique. Now, let's talk about \(SO(5)\). Clearly, this group of rotations of the 5-dimensional space has a 5-dimensional vector representation consisting of \(v_i\). But what some readers aren't aware of is that the group \(SO(5)\) may also be identified with a \(\ZZ_2\) quotient of a spinor-based group, namely \(USp(4)\). What is this group? It's the unitary (U) symplectic (Sp) group of complex \(4\times 4\) matrices \(M\) that obey\[ MM^\dagger = M^\dagger M = 1, \quad M A M^T = A. \] Both conditions have to be satisfied. 
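The \(9=5+1+3\) decomposition of the two-index tensor described above is easy to verify numerically. Here is a minimal numpy sketch (the variable names and the random sample tensor are my own illustration):

```python
import numpy as np

# Decompose a generic 3x3 tensor T_ij into SO(3) irreducible pieces:
# antisymmetric part (3), trace part (1), symmetric traceless part (5).
rng = np.random.default_rng(0)
T = rng.standard_normal((3, 3))

antisym = (T - T.T) / 2                      # 3 independent components
trace_part = np.trace(T) / 3 * np.eye(3)     # 1 component
sym_traceless = (T + T.T) / 2 - trace_part   # 5 independent components

# The three pieces sum back to T, and the last piece is indeed traceless.
assert np.allclose(antisym + trace_part + sym_traceless, T)
assert abs(np.trace(sym_traceless)) < 1e-12
# Component counts: 3 + 1 + 5 = 9 = 3 * 3.
```

The symmetric traceless piece is exactly the quadrupole-like quintuplet mentioned in the text.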
The first condition is the well-known unitarity condition, effectively meaning that \(s_i^* s_i\) is kept invariant (it's the squared Pythagorean length of the vector computed with the absolute values). The other condition is equivalent to keeping the antisymmetric cross-like product of two vector-like objects \(s_i A_{ij} t_j\) invariant where \(A_{ij}\) are elements of the (non-singular) antisymmetric matrix \(A\) above. Note that in this invariant, there is no complex conjugation. Simple linear redefinitions of the 4 complex components \(s_i\) may always translate your convention for \(A\) to mine, which is \[ A = \text{block-diag} \zav{ \pmatrix{0&+1\\-1&0}, \pmatrix{0&+1\\-1&0} } \] You just arrange the right number of the "simplest nonzero antisymmetric matrices" along the (block) diagonal. The two conditions (unitary and symplectic) may then be seen to imply that \(M\) is composed of \(2\times 2\) blocks of the form\[ \pmatrix{ \alpha&+\beta\\ -\beta^*&\alpha^*},\quad \alpha,\beta\in\CC \] and the addition and matrix-multiplication rules for such matrices are the same as the addition and multiplication rules for the quaternions \(\HHH\). So the group \(USp(2N)\) may also be called \(U(N,\HHH)\), the unitary group over quaternions. In particular, \(USp(4)=U(2,\HHH)\). Such a quaternionization is possible for all pseudoreal representations. So the fundamental representation of \(USp(4)\) is complex-4-dimensional but pseudoreal, i.e. equivalent to its complex conjugate, and it may be viewed as a spinor of \(SO(5)\). It is no coincidence that \(4\) in \(USp(4)\) is a power of two. How do you get the five-dimensional \(j=1\) vector out of these four-dimensional spinors? Note that for \(SO(3)\sim SU(2)\), we had\[ {\bf 2}\otimes{\bf 2} = {\bf 3}\oplus {\bf 1}. 
\] The tensor product of two spinors produced a vector (triplet; also the symmetric part of the tensor with two spinor indices) and a singlet (the antisymmetric part of the tensor with two 2-valued indices). Similarly, here we have\[ {\bf 4}\otimes{\bf 4} = {\bf 5}\oplus {\bf 1}\oplus {\bf 10}. \] The decomposition of \(4\times 4 = 16\) into \(6+10\) is the usual decomposition of a "tensor with two spinor indices" into the antisymmetric part and the symmetric part, respectively. The symmetric part may be identified with the antisymmetric tensor with two vector indices; note that \(5\times 4/(2\times 1) = 10\). And it is the antisymmetric part that further decomposes into the two irreducible pieces \({\bf 5}\oplus{\bf 1}\): because the invariant of the symplectic groups is the antisymmetric \(a_{ij}\), rather than the symmetric \(\delta_{ij}\) we had for the orthogonal groups, it is the antisymmetric part whose "trace" may be separated. By tensor multiplying \({\bf 4}\) with copies of itself, we may obtain all representations of \(USp(4)\) and \(SO(5)\) by picking pieces of the decomposed tensor products. That's what we mean by saying that the representation \({\bf 4}\) is "fundamental". Whenever an even number of these \({\bf 4}\) factors appears in the tensor product, we obtain honest representations of \(SO(5)\) that are invariant under 360-degree rotations and all these representations may also be given a natural description in terms of tensors with vector indices. Finally, the special unitary group \(SU(5)\) has an obvious 5-dimensional complex representation. It is a genuinely complex one, i.e. a representation inequivalent to its complex conjugate:\[ {\bf 5}\neq \overline{\bf 5} \] This representation (and its complex conjugate, of course) is important in the simplest grand unified models in particle physics. One may say that \(SU(5)\) is an obvious extension of the QCD colorful group \(SU(3)\). 
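The quaternionic block structure and the \(4\otimes 4\) bookkeeping above can be checked in a few lines; a numpy sketch (the sample values of \(\alpha,\beta\) are my own choice):

```python
import numpy as np
from math import comb

# A sample quaternionic 2x2 block [[a, b], [-conj(b), conj(a)]],
# normalized so that |a|^2 + |b|^2 = 1.
a, b = 0.6 + 0.3j, 0.2 - 0.7j
norm = np.sqrt(abs(a) ** 2 + abs(b) ** 2)
a, b = a / norm, b / norm

M = np.array([[a, b], [-np.conj(b), np.conj(a)]])
A = np.array([[0, 1], [-1, 0]])                 # the antisymmetric invariant

assert np.allclose(M @ M.conj().T, np.eye(2))   # unitary
assert np.allclose(M @ A @ M.T, A)              # symplectic

# Dimension bookkeeping for 4 x 4 = 16 of USp(4) ~ SO(5):
assert comb(4, 2) == 5 + 1      # antisymmetric part (6) splits as 5 + 1
assert comb(4 + 1, 2) == 10     # symmetric part has 10 components
assert comb(5, 2) == 10         # = antisymmetric tensor with two vector indices
```

Any normalized block of this form passes both tests, which is the \(USp(2)\cong SU(2)\) version of the unitary-plus-symplectic condition quoted in the text.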
We keep the first three colors (red, green, blue, so to say) and add two more colors that are interpreted as two lepton species from the same generation. The full collection of fifteen 2-component left-handed spinors per generation (they describe quarks and leptons; a Dirac spinor is composed of two 2-component spinors; the right-handed neutrino is not included among the fifteen) is interpreted as \[ {\bf 5}\oplus\overline{\bf 10}, \] the direct sum of the fundamental quintuplet of \(SU(5)\) we have already mentioned and the antisymmetric "tensor" with \(5\times 4/(2\times 1)=10\) components. Note that the counting of the components is the same as it was for the representation of \(SO(5)\) above. However, the 10-dimensional representation of \(SU(5)\) is a complex one, inequivalent to its complex conjugate (I won't explain why the bar appears in the decomposition above; it's a technicality). The list of 15 spinors may be extended to 16, \(10+5+1\), if we add one right-handed neutrino, and this \({\bf 16}\) is then the spinor representation of \(SO(10)\), a somewhat larger group that is capable of serving as the grand unified group (it is no accident that 16 is a power of two: that's what spinors always do). The number 5 may be thought of as the first "irregular" integer of a sort, but it is still small and special enough and is therefore linked to many special things in maths and physics. In maths, five is special because the square root of five appears in the golden ratio; and a pentagram may be constructed with a pair of compasses and a ruler (these two facts are actually related). Quadrupole moments, moments of inertia, five-dimensional rotations, and grand unifications are among the physical topics in which 5-dimensional representations are used as "elementary building blocks". I hope that Ms Kiňová's birth will be as smooth as her pregnancy.
Quick Overview The Greatest Integer Function is also known as the Floor Function. It is written as $$f(x) = \lfloor x \rfloor$$. The value of $$\lfloor x \rfloor$$ is the largest integer that is less than or equal to $$x$$. Definition The Greatest Integer Function is defined as $$\lfloor x \rfloor$$ = the largest integer that is less than or equal to $$x$$. In mathematical notation we would write this as $$ \lfloor x\rfloor = \max\{m\in\mathbb{Z}\mid m\leq x\} $$ The notation "$$m\in\mathbb{Z}$$" means "$$m$$ is an integer". Examples Example 1---Basic Calculations Evaluate the following. $$\lfloor 2.7\rfloor$$ $$\lfloor -1.3 \rfloor$$ $$\lfloor 8\rfloor$$ If we examine a number line with the integers and 2.7 plotted on it, we see that the largest integer that is less than or equal to 2.7 is 2. So $$\lfloor 2.7\rfloor = 2$$. If we examine a number line with the integers and -1.3 plotted on it, we see that the largest integer that is less than or equal to -1.3 is -2, so $$\lfloor -1.3\rfloor = -2$$. Since $$\lfloor x\rfloor$$ is the largest integer that is less than or equal to $$x$$, we know $$\lfloor 8\rfloor = 8$$. Graphing the Greatest Integer Function To understand the behavior of this function, in terms of a graph, let's construct a table of values. $$ \begin{array}{|c|c|} \hline x & \lfloor x \rfloor\\ \hline -1.5 & -2\\ -1.25 & -2\\ -1 & -1\\ -0.75 & -1\\ -0.5 & -1\\ -0.25 & -1\\ 0 & 0\\ 0.25 & 0\\ 0.5 & 0\\ 0.75 & 0\\ 1 & 1\\ 1.25 & 1\\ 1.5 & 1\\ \hline \end{array} $$ The table shows us that the function jumps up to the next integer whenever the $$x$$-value reaches an integer, which produces the characteristic step-shaped graph. Example 2 Sketch a graph of $$y = \left\lfloor \frac 1 2x \right\rfloor$$. Solution We know what the basic graph should look like, so we just need to understand how the factor of $$\frac 1 2$$ is going to affect things. We can do this in two ways: we can make a table of values, or we can interpret this as a transformation. 
TABLE $$ \begin{array}{|c|c|c|} \hline x & \frac 1 2 x & \left\lfloor \frac 1 2 x\right\rfloor\\[6pt] \hline -2 & -1 & -1\\[6pt] -1.5 & -0.75 & -1\\[6pt] -1 & -0.5 & -1\\[6pt] -0.5 & -0.25 & -1\\[6pt] 0 & 0 & 0\\[6pt] 0.5 & 0.25 & 0\\[6pt] 1 & 0.5 & 0\\[6pt] 1.5 & 0.75 & 0\\[6pt] 2 & 1 & 1\\[6pt] \hline \end{array} $$ We notice from the table that the function values jump to the next value when $$x$$ is an even integer. TRANSFORMATION We can interpret $$y = \left\lfloor \frac 1 2 x\right\rfloor$$ as a horizontal stretch which doubles the length of each piece. Solving Equations There is a formula that can help us when working with equations that involve the floor function. $$\lfloor x\rfloor = m\qquad\mbox{if and only if}\qquad m \leq x < m + 1$$ (remember, $$m$$ is an integer!) So, for example, $$\lfloor x\rfloor = 8$$ if and only if $$8 \leq x < 9$$. Example 3 Solve the equation $$\lfloor 2x + 5\rfloor = 9$$. Step 1 Rewrite the equation using the inequality. $$9 \leq 2x + 5 < 10$$ Step 2 Solve the inequality. $$ \begin{align*} 9 & \leq 2x + 5 < 10\\ 9 - 5 & \leq 2x < 10 - 5\\ 4 & \leq 2x < 5\\ \frac 4 2 & \leq x < \frac 5 2\\[6pt] 2 & \leq x < 2.5 \end{align*} $$ Answer: In interval notation, the equation is true for $$x \in [2, 2.5)$$. Example 4 Solve the equation $$\lfloor 1.25 + \lfloor x\rfloor\rfloor = 12$$. Step 1 Replace $$\lfloor x\rfloor$$ with $$u$$. This is called a "change of variable" and it will make the equation easier to work with. $$ \begin{align*} \lfloor 1.25 + \lfloor x\rfloor\rfloor & = 12\\[6pt] \lfloor 1.25 + u\rfloor & = 12 \end{align*} $$ Step 2 Replace the equation with the corresponding inequality, with $$m = 12$$, and solve the inequality. 
$$ \begin{align*} 12 & \leq 1.25 + u < 13\\[6pt] 10.75 & \leq u < 11.75\\[6pt] 10.75 & \leq \lfloor x \rfloor < 11.75 \end{align*} $$ Since $$\lfloor x \rfloor$$ is an integer, the only way to satisfy the above inequalities is for $$\lfloor x \rfloor = 11$$. Step 3 Determine the value of $$x$$. Again, using the inequalities, we know $$ 11 \leq x < 12 $$ Answer: $$ 11 \leq x < 12 $$
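The basic values and both worked examples can be brute-force checked with Python's `math.floor`; the grids below are my own choice of test points:

```python
import math

# Basic values of the greatest integer (floor) function.
assert math.floor(2.7) == 2
assert math.floor(-1.3) == -2   # floor rounds toward minus infinity, not toward 0
assert math.floor(8) == 8

# Example 3: floor(2x + 5) = 9 should hold exactly for x in [2, 2.5).
xs = [i / 1000 for i in range(1500, 3500)]              # grid over [1.5, 3.5)
sol = [x for x in xs if math.floor(2 * x + 5) == 9]
assert min(sol) == 2.0 and max(sol) < 2.5

# Example 4: floor(1.25 + floor(x)) = 12 should hold exactly for x in [11, 12).
ys = [i / 100 for i in range(1000, 1300)]               # grid over [10, 13)
sol2 = [y for y in ys if math.floor(1.25 + math.floor(y)) == 12]
assert min(sol2) == 11.0 and max(sol2) < 12
```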
Let $G$ be a finite group. For a prime number $p$, let us call $G$ an elementary $p$-group iff $\exp G=p$. I know that all elementary $2$-groups are abelian, and I also know the construction of non-abelian elementary $p$-groups of order $p^3$ for every odd prime number $p$. My question is: can we list all the elementary $p$-groups for each $p\in\mathbb P$? Consider the group $U(n,\mathbb{Z}_p)$ consisting of $n\times n$ upper triangular matrices over the field $\mathbb{Z}_p$ of order $p$, in which the diagonal entries are $1$. For simplicity, consider $n\leq p$, which forces $U(n,\mathbb{Z}_p)$ to be a $p$-group of exponent $p$. I think the determination of the subgroups of this group is still open, so the answer to your question could be "NO". It may not be a complete answer, but those groups can be constructed inductively using semi-direct products (hence they cannot be too complicated). Take $G$ to be an elementary $p$-group. Take any maximal subgroup $M$ of $G$; then we know that $M$ must be normal and $[G:M]=p$. From this it follows that $G/M$ is a group of order $p$, hence cyclic of order $p$. Furthermore, it is clear that $M$ must be an elementary $p$-group as well. Hence we have decomposed $G$ into an extension: $$1\rightarrow M\rightarrow G\rightarrow \frac{\mathbb{Z}}{p\mathbb{Z}}\rightarrow 1$$ Now I claim that the extension splits: take $1$ to be a generator of $\frac{\mathbb{Z}}{p\mathbb{Z}}$ and $g_0\in G$ such that the coset $g_0M$ maps to $1$, and consider the function $$s:\frac{\mathbb{Z}}{p\mathbb{Z}}\rightarrow G, \qquad k\mapsto g_0^k.$$ This is well defined because $g_0$ has order $p$ (the exponent of $G$ is $p$), and it is a group morphism which splits the extension above. Hence: $$G\text{ is isomorphic to }M\rtimes s\left(\frac{\mathbb{Z}}{p\mathbb{Z}}\right)$$ Hence any elementary $p$-group of order $p^n$ is a semi-direct product of an elementary $p$-group of order $p^{n-1}$ and $\frac{\mathbb{Z}}{p\mathbb{Z}}$.
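The claim that $U(n,\mathbb{Z}_p)$ has exponent $p$ when $n\leq p$ can be spot-checked for $p=n=3$ by enumerating all $27$ elements; a small numpy sketch (my own verification, not part of the answer):

```python
import itertools
import numpy as np

# All upper unitriangular 3x3 matrices over Z_3: the group U(3, Z_3).
p = 3
elements = [np.array([[1, a, b], [0, 1, c], [0, 0, 1]])
            for a, b, c in itertools.product(range(p), repeat=3)]
assert len(elements) == p ** 3          # |U(3, Z_3)| = 27

# Every element raised to the p-th power is the identity mod p,
# so the exponent of the group divides (and here equals) p ...
I = np.eye(3, dtype=int)
for M in elements:
    assert np.array_equal(np.linalg.matrix_power(M, p) % p, I)

# ... yet the group is non-abelian (the Heisenberg group mod 3).
X = np.array([[1, 1, 0], [0, 1, 0], [0, 0, 1]])
Y = np.array([[1, 0, 0], [0, 1, 1], [0, 0, 1]])
assert not np.array_equal((X @ Y) % p, (Y @ X) % p)
```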
I have an issue with an extremely elementary problem. Consider the differential equation $y' + \cot(x) y = 1$. Obviously, one can use an integrating factor of $e^{\int \cot(x) dx} = e^{\ln(\sin(x)) } $ (the arbitrary constant would cancel out) $= \sin(x)$ to solve the differential equation, obtaining the correct answer $y = - \cot(x) + C \csc(x)$. However, the assertion $ \int \cot(x) dx = \ln(\sin(x)) +C $ is true only modulo subtle things involving branches of $\ln$ in the complex plane. Restricted to the real line, we use $\int \cot (x) dx = \ln |\sin(x)| +C$. If we do the above method, we get an integrating factor not of $\sin(x)$ but of $|\sin(x)|$: $$ |\sin(x)| y' + \cot(x) | \sin(x)| y = |\sin(x)|$$ Indeed, $\frac{d}{dx} |\sin(x)| = \cot(x) | \sin(x)|$, so this is a valid alternate choice of integrating factor. Proceeding, we have $$ \frac{d}{dx} ( |\sin(x)| y ) = |\sin(x)| $$ $$ y = \frac{\int |\sin(x)| dx}{|\sin(x)|}$$ But this is not equal to the correct answer of $- \cot(x) + C \csc(x)$! What is going on? EDIT: It seems every calculus solution manual ever is wrong. EDIT 2: Alternatively, it seems every introductory calculus textbook, including Stewart, gives the wrong definition of "general solution".
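Independently of the $|\sin(x)|$ subtlety raised above, one can at least confirm numerically that $y = -\cot(x) + C\csc(x)$ does satisfy $y' + \cot(x) y = 1$ on $(0,\pi)$; a finite-difference sketch (step size and sample points are my choices):

```python
import math

# Residual of the ODE y' + cot(x) y - 1 for y = -cot(x) + C csc(x),
# with y' approximated by a central difference.
def residual(x, C, h=1e-6):
    y = lambda t: -math.cos(t) / math.sin(t) + C / math.sin(t)
    yprime = (y(x + h) - y(x - h)) / (2 * h)
    return yprime + (math.cos(x) / math.sin(x)) * y(x) - 1

for C in (0.0, 2.5, -1.0):
    for x in (0.5, 1.0, 2.0):
        assert abs(residual(x, C)) < 1e-5   # residual vanishes up to h^2 error
```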
I am trying to find an easy way to compute the limit as $x \to 0$ of $$f(x) = \frac{\sqrt{1+\tan(x)} - \sqrt{1+\sin(x)}}{x^3}$$ from first principles (i.e. without using l'Hôpital's rule). I have gone as far as boiling down the problem to computing the limit as $x \to 0$ of $$\frac{1 - \cos(x)}{x^2}$$ I thought about using the small-angle approximation for cosine, which indeed gives the right answer but doesn't seem very formal. Any hint? Also, my working was fairly long, so if you have a straightforward way to compute the limit of $f(x)$ I would love to hear it :)
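For orientation, both limits can be sanity-checked numerically: $(1-\cos x)/x^2 \to 1/2$, and multiplying $f$ by its conjugate gives $f(x) = \frac{\tan x - \sin x}{x^3(\sqrt{1+\tan x}+\sqrt{1+\sin x})} \to \frac{x^3/2}{2x^3} = \frac14$. A quick check (sample points and tolerances are mine):

```python
import math

# Numerical spot-check of the two limits as x -> 0.
g = lambda x: (1 - math.cos(x)) / x**2
f = lambda x: (math.sqrt(1 + math.tan(x)) - math.sqrt(1 + math.sin(x))) / x**3

assert abs(g(1e-4) - 0.5) < 1e-6     # (1 - cos x)/x^2 -> 1/2
assert abs(f(1e-2) - 0.25) < 5e-3    # f(x) -> 1/4
```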
$\dfrac{\sin{A}+\cos{A}}{\sin{A}-\cos{A}} = \dfrac{5}{3}$ is the given trigonometric equation and we have to find the value of the trigonometric expression $\dfrac{7\tan{A}+2}{2\tan{A}+7}$. The trigonometric expression is in terms of $\tan{A}$ but we don’t know its value. So, it’s essential to find the value of $\tan{A}$ first, and it can be done by solving the given trigonometric equation. The trigonometric equation can be solved in two different ways to find the value of $\tan{A}$. If you are a beginner, you can get the value of $\tan{A}$ by using the cross-multiplication method. $\dfrac{\sin{A}+\cos{A}}{\sin{A}-\cos{A}}$ $\,=\,$ $\dfrac{5}{3}$ $\implies$ $3 \times (\sin{A}+\cos{A})$ $\,=\,$ $5 \times (\sin{A}-\cos{A})$ $\implies$ $3\sin{A}+3\cos{A}$ $\,=\,$ $5\sin{A}-5\cos{A}$ $\implies$ $5\cos{A}+3\cos{A}$ $\,=\,$ $5\sin{A}-3\sin{A}$ $\implies$ $8\cos{A}$ $\,=\,$ $2\sin{A}$ $\implies$ $\dfrac{8}{2} \,=\, \dfrac{\sin{A}}{\cos{A}}$ $\implies$ $\dfrac{\sin{A}}{\cos{A}} \,=\, \dfrac{8}{2}$ According to the ratio (quotient) trigonometric identity of the sin and cos functions, the ratio of $\sin{A}$ to $\cos{A}$ is equal to $\tan{A}$. $\implies$ $\tan{A} \,=\, \require{cancel} \dfrac{\cancel{8}}{\cancel{2}}$ $\,\,\, \therefore \,\,\,\,\,\,$ $\tan{A} \,=\, 4$ The value of $\tan{A}$ is equal to $4$. Now, substitute the value of $\tan{A}$ into the trigonometric expression to evaluate it. If you are an advanced learner, you can solve the trigonometric equation by the componendo and dividendo rule. 
$\implies$ $\dfrac{\sin{A}+\cos{A}+\sin{A}-\cos{A}}{\sin{A}+\cos{A}-(\sin{A}-\cos{A})}$ $\,=\,$ $\dfrac{5+3}{5-3}$ $\implies$ $\dfrac{\sin{A}+\sin{A}+\cos{A}-\cos{A}}{\sin{A}+\cos{A}-\sin{A}+\cos{A}}$ $\,=\,$ $\dfrac{8}{2}$ $\implies$ $\dfrac{\sin{A}+\sin{A}+\cos{A}-\cos{A}}{\sin{A}-\sin{A}+\cos{A}+\cos{A}}$ $\,=\,$ $\dfrac{8}{2}$ $\implies$ $\require{cancel} \dfrac{2\sin{A}+\cancel{\cos{A}}-\cancel{\cos{A}}}{\cancel{\sin{A}}-\cancel{\sin{A}}+2\cos{A}}$ $\,=\,$ $\require{cancel} \dfrac{\cancel{8}}{\cancel{2}}$ $\implies$ $\dfrac{2\sin{A}}{2\cos{A}}$ $\,=\,$ $4$ $\implies$ $\require{cancel} \dfrac{\cancel{2}\sin{A}}{\cancel{2}\cos{A}}$ $\,=\,$ $4$ $\implies$ $\dfrac{\sin{A}}{\cos{A}}$ $\,=\,$ $4$ $\,\,\, \therefore \,\,\,\,\,\,$ $\tan{A} \,=\, 4$ It’s true that the value of $\tan{A}$ is equal to $4$. Substitute the value of $\tan{A}$ in the trigonometric expression and then simplify it mathematically. $\dfrac{7\tan{A}+2}{2\tan{A}+7}$ $\,=\,$ $\dfrac{7(4)+2}{2(4)+7}$ $=\, \dfrac{28+2}{8+7}$ $=\, \dfrac{30}{15}$ $=\, \require{cancel} \dfrac{\cancel{30}}{\cancel{15}}$ $=\, 2$
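Both methods land on $\tan{A}=4$ and a final value of $2$; the whole chain is easy to confirm in floating point:

```python
import math

# Pick the angle with tan A = 4 and check the given equation holds,
# then evaluate the target expression.
A = math.atan(4)
ratio = (math.sin(A) + math.cos(A)) / (math.sin(A) - math.cos(A))
assert abs(ratio - 5 / 3) < 1e-12      # the given equation is satisfied

t = math.tan(A)
assert abs((7 * t + 2) / (2 * t + 7) - 2) < 1e-12   # (28 + 2)/(8 + 7) = 2
```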
The following LaTeX code looks pretty straightforward to me: \begin{eqnarray*}& \lambda_l \mathbf{l}\cdot\mathbf{r} & = & \lambda_r \mathbf{l}\cdot\mathbf{r} \nonumber \\\therefore & \mathbf{l}\cdot\mathbf{r} & = & 0 \nonumber\end{eqnarray*} but it results in the error:

! Missing $ inserted.
<inserted text> $
l.78 ..._l \mathbf{l}\cdot\mathbf{r} & = & \lambda_r \mathbf{l}\cdot\mathbf{...

If I remove enough ampersands, I get a clean execution, but then I lose the alignment I am trying to get. Is there a problem with the implicit first empty item on the first line, or is it caused by something else?
Firstly, some definitions. Pre-image resistance: given a hash value $h$, it should be computationally infeasible to find a message $m$ such that $h=Hash(m)$. Consider storing the hashes of passwords on the server; e.g., an attacker will try to find a valid password for your account. Second pre-image resistance: given a message $m_1$, it should be computationally infeasible to find another message $m_2$ such that $m_1 \neq m_2$ and $Hash(m_1)=Hash(m_2)$. This corresponds to producing a forgery of a given message. Collision resistance: it should be hard to find two inputs that hash to the same output, i.e. $a$ and $b$ such that $H(a)= H(b)$, $a \neq b.$ 0) How do hashes really ensure uniqueness? As David said in his answer, they don't ensure uniqueness. To see this, consider a simple hash (imitating only the compression); $$H':\{0,1\}^{20} \rightarrow \{0,1\}^{1}$$$$x \mapsto x \pmod 2$$ By definition, all the even numbers have $0$ as their hash value and all the odd numbers have $1$. Another way to see this is the pigeonhole principle: the input space is larger than the output space, therefore there exists at least one hash value corresponding to more than one message. So, there is no uniqueness. But finding another one, a collision, must be computationally infeasible. a) As far as I understand, hashes are just long alphanumeric strings. Hash outputs are bits, just bits. How you represent them or transmit them is up to the developer. b) If one computes hashes across all documents, keys, information, files, etc., over and over again, it's simply a matter of time until the same combination comes up again for different information. What you describe is called a hash collision. By the definition of a hash, it is inevitable, but finding one must be computationally infeasible. If your hash function comes to be considered weak, or a new attack appears, you must change it, as happened with MD5 and SHA-1. $$H:\{0,1\}^* \rightarrow \{0,1\}^l$$ As one can see, for the $2^l$ possible hash outputs there are finitely many (since we cannot process infinitely long messages) but vastly more possible inputs. 
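The pigeonhole argument can be made concrete by truncating a real hash to a tiny output; a Python sketch (the 8-bit truncation is my toy construction, not a real hash design):

```python
import hashlib

# Truncate SHA-256 to its first byte: an 8-bit "hash" with 256 possible values.
def tiny_hash(data: bytes) -> int:
    return hashlib.sha256(data).digest()[0]

# Hash 300 distinct inputs into 256 buckets: collisions are guaranteed.
seen = {}
collisions = []
for i in range(300):
    h = tiny_hash(str(i).encode())
    if h in seen:
        collisions.append((seen[h], i))   # a colliding pair of inputs
    else:
        seen[h] = i

# At least 300 - 256 = 44 collisions, purely by the pigeonhole principle.
assert len(collisions) >= 44
```

With the full 256-bit output the same collisions still exist in principle; they are just computationally out of reach.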
SHA3-512 has only $l=512$ output bits. If the message space is just 1024 bits, then for a given hash value $h$ there are on average $2^{1024}/2^{512} = 2^{512}$ possible input values that have $h$ as their hash value. Picking one at random, you will have a $1/2^{512}$ probability of matching the hash, as long as the hash function behaves randomly. There is an interesting random hash collision on MD4 on eMule. c) This might be an impractical test given size and possibility, yes, but is it this that makes hashes powerful: that it's practically impossible to recreate a hash, so that for any foreseeable endeavor it's fine? Or is there some element that I'm missing that reduces even that extremely low probability to zero? In the design of hash functions it is required that finding a pre-image, a second pre-image, or a collision be computationally infeasible. But there is always a negligible chance that an attacker finds one, as in the MD4 case. Is there any design constraint that prevents a very powerful computer from back-calculating the original data from a hash? Or is it simply that the design is so complex that it's a futile exercise to dream of such a large computer required for this task? Hash functions are, by design, not invertible functions, unlike permutations. They achieve this by: bit dependency (each bit of the output depends on every bit of the input), avalanching (a single bit change in the input must change $\approx$ half of the output bits, randomly), and non-linearity (which prevents attacks based on techniques for solving linear systems). The attacker must find either a pre-image or a second pre-image. A powerful entity can search all possible inputs to match the given hash. Tools like rainbow tables and hashcat may not be as powerful as you imagine, but they are at the edge of practical computing. If somehow you find an input that works for the hash value, there is no way to determine that this is the original one, the pre-image. If your powerful entity is a quantum computer, don't worry. D. J. 
Bernstein put it: "Anyone afraid of quantum hash collision algorithms already has much more to fear from non-quantum hash-collision algorithms." Quantum computers reduced the generic complexity of finding a hash collision from $2^{b/2}$ to $2^{b/3}$; non-quantum computers already achieved $2^{b/3}$ cost in less time, with the rho machine. What's stopping a hacker or malicious middleman from hacking open the software program or library that creates this "hash" and then using that library to create hashes of his own, or mislabelling some target company's hash with his own pointing to their version of the file? Especially since many applications, languages, and developers use hashing independently; whichever is most weakly secured, we can use that to take on the rest? Nothing except the hardness of finding a collision. If somehow an attacker is able to find a collision, they can exploit it. Recently, an identical-prefix collision attack on SHA-1 was carried out on PDF files to create malicious valid PDFs.
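The avalanche property mentioned above (one flipped input bit changes roughly half of the output bits) can be observed directly with SHA-256; the messages below are my arbitrary choice:

```python
import hashlib

# One-bit input difference: 'd' (0x64) and 'e' (0x65) differ in the last bit.
m1, m2 = b"hello world", b"hello worle"

h1 = int.from_bytes(hashlib.sha256(m1).digest(), "big")
h2 = int.from_bytes(hashlib.sha256(m2).digest(), "big")

# Hamming distance between the two 256-bit digests.
diff = bin(h1 ^ h2).count("1")
# For a well-behaved hash this should be near 128; allow a wide statistical band.
assert 64 < diff < 192
```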
Assume you want to find the derivative with respect to the ($p \times p$) matrix $X$ of $$ \frac{\partial}{\partial X} || X - A ||_1 $$ where $A$ is a ($p \times p$) matrix. How can I do it? Define $$\eqalign{ Y &= (X-A) \cr G &= {\rm signum}(Y) \cr B &= {\rm abs}(Y) = Y\odot G \cr }$$ where the functions are applied element-wise. Then find the differential and gradient of the norm as $$\eqalign{ \phi &= 1:B = 1:Y\odot G = G:Y \cr d\phi &= G:dY = G:dX \cr \frac{\partial\phi}{\partial X} &= G = {\rm signum}\big(X-A\big) \cr\cr }$$ In the above, the symbols {$\,:\,, \odot$} are used to denote the {Frobenius, Hadamard} products, respectively, and $1$ denotes the all-ones matrix. Also note that signum has a discontinuity at zero, where its value jumps between $-1$ and $+1$.
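Reading $\|X-A\|_1$ as the entrywise 1-norm (as the answer does), the result $\partial\phi/\partial X = \mathrm{signum}(X-A)$ is easy to confirm with finite differences away from the discontinuity; a numpy sketch with offsets of my own choosing (no zero entries):

```python
import numpy as np

# phi(X) = sum |X_ij - A_ij|; its gradient should be signum(X - A)
# wherever no entry of X - A vanishes.
rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
D = np.array([[0.7, -1.2, 0.3],
              [-0.5, 2.0, -0.9],
              [1.1, -0.4, 0.8]])      # generic nonzero offsets
X = A + D

phi = lambda M: np.abs(M - A).sum()

# Central-difference gradient, entry by entry.
eps = 1e-6
G = np.zeros_like(X)
for i in range(3):
    for j in range(3):
        E = np.zeros((3, 3)); E[i, j] = eps
        G[i, j] = (phi(X + E) - phi(X - E)) / (2 * eps)

assert np.allclose(G, np.sign(X - A), atol=1e-6)
```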
The third hep-th paper in today's listings is very interesting. OK, more seriously, they study matrix models which are clearly relevant for full-blown definitions of quantum gravity. Lots of descriptions of vacua (or superselection sectors) of quantum gravity are given by \(U(N)\) or \(SU(N)\) gauge theories in various numbers of dimensions – that includes the \(AdS_5\)-like vacua in AdS/CFT and the BFSS matrix theory. Some half-supersymmetric subsets of operators in such theories are fully understood etc. A non-supersymmetric case of a matrix model is the "old matrix model", just a Hamiltonian with a natural kinetic and potential term written for bosonic degrees of freedom arranged in a Hermitian matrix. One may cleverly solve it, see that the eigenvalues of the matrix effectively behave as particles, and those particles happen to be fermions. That treatment may be shown to be dual to the Liouville theory etc. Koch and Berenstein study the seemingly simpler, fermionic version of that matrix model,\[ S = \int dt\,{\rm tr} \zzav { \bar\psi \cdot iD_t \cdot \psi - m\bar\psi \cdot \psi } \] A very simple theory of a massive fermion in 0+1 dimensions extended to a whole matrix of fermions. The solution of this problem for any \(N\) must be sort of analogous to the bosonic case with eigenvalues. OK, there are some gauge-invariant operators. The simplest way to make operators gauge-invariant is to trace a product of fields. Here, there is just one field, \(\psi\), so you may take the trace of the \(n\)-th power of this \(\psi\) (only odd powers are nonzero), or a product of such traces. That's a basis of gauge-invariant operators here. Alternatively, you may think about a single trace only but allow the traces over various representations \(R\) of \(U(N)\). Representations of \(U(N)\) may be labeled by Young diagrams. So there's another nice basis of the gauge-invariant operators whose elements coincide with Young diagrams. 
This Young diagram basis is really the "Schur function basis". Koch and Berenstein show that these two bases are actually exactly the same, up to the normalization of each basis vector. The translation is done by hooks in the Young diagrams. A hook is an (upside-down-inverted) L-shaped sequence of boxes. You may divide every diagram into hooks, starting from the left upper corner; see e.g. this hooky Wikipedia page. It's surprising that the identities showing that the basis vectors are the same, up to a normalization, are said to be newly discovered. So the single fermionic matrix model simplifies. For years, I have thought that the proper mastery of many such matrix models, their operators in assorted regimes, and their clever generalizations is important for a complete understanding of a theory of everything – or the most universal possible description of string/M-theory or quantum gravity. Note that the word Schur appears in 7 TRF blog posts so far, if I include this one. In some of them, I tried to make the Schur bases relevant for the ER-EPR correspondence. There's another hypothetical (my) connection to quantum gravity that needs to be refined. Note that Vafa et al. wrote about quantum Calabi-Yaus and classical crystals or quantum foam where the partition sum may be written as a sum over generalized, 3D "Young diagrams" labeling topologies of Calabi-Yau three-folds. I believe that there is some generalization involving the two-complex-dimensional topologies and that matrix models of the Koch–Berenstein style are dual to a theory in a two-complex-dimensional world volume – a generalized definition of a world sheet.
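Hook lengths of a Young diagram are completely mechanical to compute; here is a small Python sketch, together with the classic hook length formula for the number of standard Young tableaux (my own illustration, not a computation from the paper):

```python
from math import factorial

# Hook length of cell (i, j): arm + leg + 1,
# i.e. shape[i] - j + conj[j] - i - 1, where conj is the conjugate partition.
def hooks(shape):
    conj = [sum(1 for r in shape if r > j) for j in range(shape[0])]
    return [[shape[i] - j + conj[j] - i - 1 for j in range(shape[i])]
            for i in range(len(shape))]

# Hook length formula: #SYT(shape) = n! / product of all hook lengths.
def num_syt(shape):
    n = sum(shape)
    prod = 1
    for row in hooks(shape):
        for h in row:
            prod *= h
    return factorial(n) // prod

assert hooks([3, 2]) == [[4, 3, 1], [2, 1]]
assert num_syt([3, 2]) == 5      # 5!/(4*3*1*2*1) = 120/24
```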
Permanent of an $m \times n$-matrix $A = \left\Vert a_{ij} \right\Vert$: the function $$ \mathrm{per}(A) = \sum_\sigma a_{1\sigma(1)}\cdots a_{m\sigma(m)} $$ where the $a_{ij}$ are elements of a commutative ring and the summation is over all one-to-one mappings $\sigma$ from $\{1,\ldots,m\}$ into $\{1,\ldots,n\}$. If $m=n$, then $\sigma$ ranges over all permutations, and the permanent is a particular case of the Schur matrix function (cf. Immanant) $$ d_\chi^H (A) = \sum_{\sigma\in H} \chi(\sigma) \prod_{i=1}^n a_{i\sigma(i)} $$ for $H \subseteq S_n$, where $\chi$ is a character of degree 1 on the subgroup $H$ (cf. Character of a group) of the symmetric group $S_n$ (one obtains the determinant for $H=S_n$, $\chi =\pm 1$, in accordance with the parity of $\sigma$). The permanent is used in linear algebra, probability theory and combinatorics. In combinatorics, the permanent can be interpreted as follows: the number of systems of distinct representatives for a given family of subsets of a finite set is the permanent of the incidence matrix of the incidence system related to this family. The main interest is in the permanent of a matrix consisting of zeros and ones (a $(0,1)$-matrix), of a matrix containing non-negative real numbers, in particular doubly-stochastic matrices (in which the sum of the elements in any row and any column is 1), and of a complex Hermitian matrix. The basic properties of the permanent include a theorem on expansion (the analogue of Laplace's theorem for determinants) and the Binet–Cauchy theorem, which gives a representation of the permanent of the product of two matrices as a sum of products of permanents formed from the cofactors. For the permanents of complex matrices it is convenient to use representations as scalar products in the symmetry classes of completely-symmetric tensors (see, e.g., [3]). 
One of the most effective methods for calculating permanents is provided by Ryser's formula: $$ \mathrm{per}(A) = \sum_{t=0}^{n-1} (-1)^t \sum_{X \in \Gamma_{n-t}} \prod_{i=1}^m r_i(X) $$ where $\Gamma_k$ is the set of submatrices of dimension $m \times k$ of the matrix $A$, $r_i = r_i(X)$ is the sum of the elements of the $i$-th row of $X$, and $i,k=1,\ldots,m$. As it is complicated to calculate permanents, estimating them is important. Some lower bounds are given below. a) If $A$ is a $(0,1)$-matrix with $r_i(A) \ge t$, $i=1,\ldots,m$, then $$ \mathrm{per}(A) \ge \frac{t!}{(t-m)!} $$ for $t \ge m$, and $$ \mathrm{per}(A) \ge t! $$ if $t < m$ and $\mathrm{per}(A) > 0$. b) If $A$ is a $(0,1)$-matrix of order $n$, then $$ \mathrm{per}(A) \ge \prod_{i=1}^n \{ r_i^* + i - n \} $$ where $r_1^* \ge \cdots \ge r_n^*$ are the sums of the elements in the rows of $A$ arranged in non-increasing order and $\{ r_i^* + i - n \} = \max(0, r_i^* + i - n )$. c) If $A$ is a positive semi-definite Hermitian matrix of order $n$, then $$ \mathrm{per}(A) \ge \frac{n!}{s(A)^n} \prod_{i=1}^n |r_i|^2 $$ where $s(A) = \sum_{i,j} a_{ij}$ if $s(A) > 0$. Upper bounds for permanents: 1) For a $(0,1)$-matrix $A$ of order $n$, $$ \mathrm{per}(A) \le \prod_{i=1}^n (r_i!)^{1/r_i} \ . $$ 2) For a completely-indecomposable matrix $A$ of order $n$ with non-negative integer elements, $$ \mathrm{per}(A) \le 2^{s(A)-2n} + 1 \ . $$ 3) For a complex normal matrix $A$ with eigenvalues $\lambda_1,\ldots,\lambda_n$, $$ |\mathrm{per}(A)| \le \frac{1}{n}\sum_{i=1}^n |\lambda_i|^n \ . $$ The most familiar problem in the theory of permanents was van der Waerden's conjecture: The permanent of a doubly-stochastic matrix of order $n$ is bounded from below by $n!/n^n$, and this value is attained only for the matrix composed of fractions $1/n$. A positive solution to this problem was obtained in [4]. Among the applications of permanents one may mention relationships to certain combinatorial problems (cf. 
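Ryser's inclusion–exclusion idea is most often coded in the equivalent square-matrix form $\mathrm{per}(A) = (-1)^n \sum_{\emptyset \ne S \subseteq \{1,\ldots,n\}} (-1)^{|S|} \prod_{i=1}^n \sum_{j\in S} a_{ij}$ (a standard restatement; the rectangular version above reduces to it for $m=n$). A sketch, with `per_ryser` a hypothetical helper name:

```python
from itertools import combinations
from math import prod

def per_ryser(A):
    """Permanent of a square matrix by inclusion-exclusion over
    column subsets S:
      per(A) = (-1)^n * sum_S (-1)^|S| * prod_i sum_{j in S} a_ij."""
    n = len(A)
    total = 0
    for r in range(1, n + 1):
        for S in combinations(range(n), r):
            total += (-1) ** r * prod(sum(row[j] for j in S) for row in A)
    return (-1) ** n * total

# The all-ones n x n matrix has permanent n!; the doubly-stochastic
# matrix with all entries 1/n attains van der Waerden's bound n!/n^n.
print(per_ryser([[1] * 4] * 4))   # 24
```

Grouping subsets by size as above still visits all $2^n - 1$ subsets; with Gray-code ordering of the subsets the row sums can be updated incrementally, giving the usual $O(2^n n)$ cost.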
Combinatorial analysis), such as the "problème des rencontres" and the "problème d'attachement" (or "hook problem" ), and also to the Fibonacci numbers, the enumeration of Latin squares and Steiner triple systems (cf. Steiner system), and to the derivation of the number of $1$-factors and linear subgraphs of a graph, while doubly-stochastic matrices are related to certain probability models. There are interesting physical applications of permanents, of which the most important is the dimer problem, which arises in research on the adsorption of di-atomic molecules in surface layers: The permanent of a $(0,1)$-matrix of a simple structure expresses the number of ways of combining the atoms in the substance into di-atomic molecules. There are also applications of permanents in statistical physics, the theory of crystals and physical chemistry. References [1] H.J. Ryser, "Combinatorial mathematics", Wiley & Math. Assoc. Amer. (1963) Zbl 0112.24806 [2] V.N. Sachkov, "Combinatorial methods in discrete mathematics", Moscow (1977) (In Russian); translated by V. Kolchin: Encyclopedia of Mathematics and Its Applications 55, Cambridge University Press (1995) Zbl 0845.05003 [3] H. Minc, "Permanents", Addison-Wesley (1978) [4] G.P. Egorichev, "The solution of van der Waerden's problem on permanents", Krasnoyarsk (1980) (In Russian); Adv. Math. 42 (1981) 299-305. Zbl 0478.15003 [5] D.I. Falikman, "Proof of the van der Waerden conjecture regarding the permanent of a doubly stochastic matrix" Math. Notes, 29 : 6 (1981) pp. 475–479; Mat. Zametki, 29 : 6 (1981) pp. 931–938. Zbl 0475.15007 Comments The solution of the van der Waerden conjecture was obtained simultaneously and independently in 1979 by both D.I. Falikman, [5], and G.P. Egorichev, [4], [a4]. For some details cf. also [a2]–[a5]. References [a1] D.E. Knuth, "A permanent inequality" Amer. Math. Monthly, 88 (1981) pp. 731–740 [a2] J.C. 
Lagarias, "The van der Waerden conjecture: two Soviet solutions" Notices Amer. Math. Soc., 29 : 2 (1982) pp. 130–133 [a3] J.H. van Lint, "Notes on Egoritsjev's proof of the van der Waerden conjecture" Linear Algebra Appl., 39 (1981) pp. 1–8 [a4] G.P. Egorychev [G.P. Egorichev], "The solution of van der Waerden's problem for permanents" Adv. in Math., 42 : 3 (1981) pp. 299–305 [a5] J.H. van Lint, "The van der Waerden conjecture: Two proofs in one year" Math. Intelligencer, 4 (1982) pp. 72–77 [a6] R.M. Wilson, "Non-isomorphic triple systems" Math. Zeitschr., 135 (1974) pp. 303–313 [a7] A. Schrijver, "A short proof of Minc's conjecture" J. Comb. Theory (A), 25 (1978) pp. 80–83 [a8] H. Minc, "Nonnegative matrices", Wiley (1988) How to Cite This Entry: Permanent. Encyclopedia of Mathematics. URL: http://www.encyclopediaofmath.org/index.php?title=Permanent&oldid=35216
Convergent Sequence is Cauchy Sequence/Normed Division Ring/Proof 1 Theorem Let $\struct {R, \norm {\,\cdot\,}}$ be a normed division ring. Let $\sequence {x_n}$ be a sequence in $R$ which converges to a limit $l \in R$. Then $\sequence {x_n}$ is a Cauchy sequence. Proof Let $\epsilon > 0$. Then also $\dfrac \epsilon 2 > 0$. Because $\sequence {x_n}$ converges to $l$, we have: $\exists N: \forall n > N: \norm {x_n - l} < \dfrac \epsilon 2$. So if $m > N$ and $n > N$, then: \[ \norm {x_n - x_m} = \norm {x_n - l + l - x_m} \le \norm {x_n - l} + \norm {l - x_m} \qquad \text{(Triangle Inequality)} \] \[ < \frac \epsilon 2 + \frac \epsilon 2 = \epsilon \qquad \text{(by choice of } N\text{)}. \] Thus $\sequence {x_n}$ is a Cauchy sequence. $\blacksquare$
1. Observation of a peaking structure in the J/psi phi mass spectrum from B± → J/psi phi K± decays. Physics Letters B, ISSN 0370-2693, 06/2014, Volume 734, pp. 261 - 281. "A peaking structure in the J/psi phi mass spectrum near threshold is observed in B± → J/psi phi K± decays, produced in pp collisions at root s = 7 TeV..." Subjects: PHYSICS, NUCLEAR | ASTRONOMY & ASTROPHYSICS | PHYSICS, PARTICLES & FIELDS | Physics - High Energy Physics - Experiment | Physics | High Energy Physics - Experiment | scattering [p p] | J/psi --> muon+ muon | experimental results | Particle Physics - Experiment | Nuclear and High Energy Physics | Phi --> K+ K | vertex [track data analysis] | CERN LHC Coll | B+ --> J/psi Phi K | Peaking structure | hadronic decay [B] | Integrated luminosity | Data sample | final state [dimuon] | mass enhancement | width [resonance] | (J/psi Phi) [mass spectrum] | Breit-Wigner [resonance] | 7000 GeV-cms | leptonic decay [J/psi] | PHYSICS OF ELEMENTARY PARTICLES AND FIELDS. Journal Article. 2. Chemical Communications, ISSN 1359-7345, 01/2012, Volume 48, 
Issue 6, pp. 811 - 813. "The spin echo is the single most important building block in modern NMR spectroscopy, but echo modulation by scalar couplings J can severely complicate its..." Subjects: SYSTEMS | TRANSVERSE RELAXATION RATES | SPECTROSCOPY | CHEMISTRY, MULTIDISCIPLINARY | Couplings | Modulation | Blocking | Scalars | Spectra | Nuclear magnetic resonance | NMR spectroscopy | Organic chemistry | Chemical Sciences. Journal Article. 3. Physics Letters B, ISSN 0370-2693, 12/2015, Volume 751, Issue C, pp. 63 - 80. "An observation of the decay and a comparison of its branching fraction with that of the decay has been made with the ATLAS detector in proton–proton collisions..." Subjects: PARTICLE ACCELERATORS. Journal Article. 4. Journal of High Energy Physics, ISSN 1126-6708, 2012, Volume 2012, Issue 5. Journal Article. 5. Prompt and non-prompt $J/\psi$ elliptic flow in Pb+Pb collisions at $\sqrt{s_{NN}} = 5.02$ TeV with the ATLAS detector. The European Physical Journal C, ISSN 1434-6044, 9/2018, Volume 78, Issue 9, pp. 1 - 23. "The elliptic flow of prompt and non-prompt $J/\psi$ was measured in the dimuon decay channel in Pb+Pb collisions at $\sqrt{s_{NN}}=5.02$..." 
Subjects: Nuclear Physics, Heavy Ions, Hadrons | Measurement Science and Instrumentation | Nuclear Energy | Quantum Field Theories, String Theory | Physics | Elementary Particles, Quantum Field Theory | Astronomy, Astrophysics and Cosmology | PHYSICS OF ELEMENTARY PARTICLES AND FIELDS. Journal Article. 6. Physics Letters B, ISSN 0370-2693, 06/2014, Volume 734, pp. 261 - 281. "A peaking structure in the mass spectrum near threshold is observed in decays, produced in pp collisions at collected with the CMS detector at the LHC. The..." Journal Article. 7. Measurement of the prompt J/ψ pair production cross-section in pp collisions at √s = 8 TeV with the ATLAS detector. The European Physical Journal C: Particles and Fields, ISSN 1434-6052, 2017, Volume 77, Issue 2, pp. 1 - 34. Journal Article. 8. Measurement of the differential cross-sections of prompt and non-prompt production of J/ψ and ψ(2S) in pp collisions at √s = 7 and 8 TeV with the ATLAS detector. European Physical Journal C, ISSN 1434-6044, 2016, Volume 76, Issue 5, pp. 1 - 47. (Article written by a very large number of authors; only the first-listed author is referenced, together with the name of the collaboration group, if any, and...) 
Subjects: scattering [p p] | inclusive production | Polarization | Subatomic Physics | Quarkonium Production | J-Psi | High Energy Physics - Experiment | Hadronic Collisions | Hadroproduction | hadroproduction [psi] | Journal Article | Science & Technology | Chi(C) | meson | Cross-sections | Engineering (miscellaneous) | transverse momentum [meson] | Subatomär fysik | S=1.8 TeV | ATLAS detector; LHC; proton-proton | Heavy-Quarkonium | Physics and Astronomy (miscellaneous) | P(P)Over-Bar Collisions | Physik | experimental results | rapidity | CERN LHC Coll | geometrical [acceptance] | hadroproduction [J/psi] | PP interactions | ATLAS detector | Física | measured [differential cross section] | 7000 GeV-cms, 8000 GeV-cms | Gluons | rapidity dependence | colliding beams [p p] | transverse momentum dependence | Prompt | differential cross section | PHYSICS OF ELEMENTARY PARTICLES AND FIELDS | Fysik | Physical Sciences | Naturvetenskap | Natural Sciences. Journal Article. 9. Suppression of non-prompt J/psi, prompt J/psi, and Upsilon(1S) in PbPb collisions at root s(NN) = 2.76 TeV. Journal of High Energy Physics, ISSN 1029-8479, 05/2012, Issue 5. "Yields of prompt and non-prompt J/psi, as well as Upsilon(1S) mesons, are measured by the CMS experiment via their mu(+)mu(-) decays in PbPb and pp collisions..." Subjects: P(P)OVER-BAR COLLISIONS | CROSS-SECTIONS | PERSPECTIVE | MOMENTUM | ROOT-S=7 TEV | LHC | COLLABORATION | QUARK-GLUON PLASMA | PP COLLISIONS | NUCLEUS-NUCLEUS COLLISIONS | Heavy Ions | PHYSICS, PARTICLES & FIELDS. Journal Article. 10. Prompt and non-prompt $J/\psi$ and $\psi(2S)$ suppression at high transverse momentum in 5.02 TeV Pb+Pb collisions with the ATLAS experiment. The European Physical Journal C, ISSN 1434-6044, 9/2018, Volume 78, Issue 9, pp. 1 - 28. "A measurement of $J/\psi$ and $\psi(2S)$ production is presented. It is based on a data sample from Pb+Pb collisions at..." Subjects: Nuclear Physics, Heavy Ions, Hadrons | Measurement Science and Instrumentation | Nuclear Energy | Quantum Field Theories, String Theory | Physics | Elementary Particles, Quantum Field Theory | Astronomy, Astrophysics and Cosmology. Journal Article. 11. Measurement of the differential cross-sections of inclusive, prompt and non-prompt J/ψ production in proton–proton collisions at √s = 7 TeV. Nuclear Physics, Section B, ISSN 0550-3213, 2011, Volume 850, Issue 3, pp. 
387 - 444. "The inclusive production cross-section and fraction of $J/\psi$ mesons produced in $b$-hadron decays are measured in proton–proton collisions at $\sqrt{s} = 7$ TeV with the ATLAS detector at..." Journal Article. 12. Study of the $B_c^+ \rightarrow J/\psi D_s^+$ and $B_c^+ \rightarrow J/\psi D_s^{*+}$ decays with the ATLAS detector. The European Physical Journal C, ISSN 1434-6044, 1/2016, Volume 76, Issue 1, pp. 1 - 24. "The decays $B_c^+ \rightarrow J/\psi D_s^+$ and $B_c^+ \rightarrow J/\psi D_s^{*+}$ are studied with the ATLAS..." Subjects: Nuclear Physics, Heavy Ions, Hadrons | Measurement Science and Instrumentation | Nuclear Energy | Quantum Field Theories, String Theory | Physics | Elementary Particles, Quantum Field Theory | Astronomy, Astrophysics and Cosmology. Journal Article.
Given: $P(x)=x^4 -4(m+2)x^2 + m^2$ has 4 real roots. Show: the sum of the 3 least possible integer values of $m$ is zero. This is a question asked in an entrance exam (Colégio Naval 92). My attempt: Let $x^2=y$ to get $Q(y)=y^2 -4(m+2)y + m^2$ and find conditions on $m$ for real roots of $Q(y)$. The discriminant is $$\vartriangle=16(m+2)^2-4m^2=4\left(4(m^2+4m+4)-m^2\right)=4(3m^2+16m+16).$$ Therefore, for real roots, we need $\vartriangle\ge 0$, i.e. $$3m^2+16m+16\ge 0\Leftrightarrow 9m^2+48m+48\ge 0\Leftrightarrow(3m+8)^2\ge 16,$$ leading to the conditions on $m$: (a) $3m+8\ge 4$ or (b) $3m+8\le -4$, or, developing both expressions, (a) $m\ge -4/3$ or (b) $m\le -4$, for $\vartriangle\ge 0$. But that is not enough. Four real roots for $P(x)$ require that all roots of $Q(y)$ are not only real, but positive. Therefore, we need, in addition, $$\frac{1}{2}\left(4(m+2)\pm 2\sqrt{3m^2+16m+16}\right)=2(m+2)\pm \sqrt{3m^2+16m+16}\ge 0.$$ My difficulty is in how to solve this last condition for positive roots of $Q(y)$. Once I have the set of values of $m$ from this condition, the next step would be to intersect this set with the previous condition for real roots, $m\ge -4/3$ or $m\le -4$. With the resulting set I would check that the sum of the 3 least integers is zero. Hints or full answers are welcome. I really don't know if my approach is the most appropriate way to solve this problem.
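A quick numerical cross-check of the claimed answer, assuming roots are counted with multiplicity (my illustration, using numpy's companion-matrix root finder; `real_root_count` is a hypothetical helper name):

```python
import numpy as np

def real_root_count(m, tol=1e-7):
    """Number of (approximately) real roots of x^4 - 4(m+2)x^2 + m^2,
    counted with multiplicity."""
    roots = np.roots([1, 0, -4 * (m + 2), 0, m * m])
    return int(sum(abs(z.imag) < tol for z in roots))

# Integer values of m giving four real roots, in increasing order:
good = [m for m in range(-10, 10) if real_root_count(m) == 4]
print(good[:3], sum(good[:3]))   # the three least values sum to 0
```

Note the borderline case $m=0$: then $P(x)=x^2(x^2-8)$ has a double root at $x=0$, so it only counts if roots are taken with multiplicity — which is exactly what makes the three least integer values sum to zero.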
Absolutely continuous measures A concept in measure theory (see also Absolute continuity). If $\mu$ and $\nu$ are two measures on a σ-algebra $\mathcal{B}$ of subsets of $X$, we say that $\nu$ is absolutely continuous with respect to $\mu$ if $\nu (A) =0$ for any $A\in\mathcal{B}$ such that $\mu (A) =0$. The absolute continuity of $\nu$ with respect to $\mu$ is denoted by $\nu\ll\mu$. If the measure $\nu$ is finite, i.e. $\nu (X) <\infty$, the property $\nu\ll\mu$ is equivalent to the following stronger statement: for any $\varepsilon>0$ there is a $\delta>0$ such that $\nu (A)<\varepsilon$ for every $A$ with $\mu (A)<\delta$. This definition can be generalized to signed measures $\nu$ and even to vector-valued measures $\nu$. Some authors generalize it further to vector-valued $\mu$'s: in that case the absolute continuity of $\nu$ with respect to $\mu$ amounts to the requirement that $\nu (A) = 0$ for any $A\in\mathcal{B}$ such that $|\mu| (A)=0$, where $|\mu|$ is the total variation of $\mu$ (see Signed measure for the relevant definition). The Radon-Nikodym theorem characterizes the absolute continuity of $\nu$ with respect to $\mu$ by the existence of a function $f\in L^1 (\mu)$ such that $\nu = f \mu$, i.e. such that \[ \nu (A) = \int_A f\,\mathrm{d}\mu \qquad \text{for every } A\in\mathcal{B}. \] A corollary of the Radon-Nikodym theorem, the Hahn decomposition theorem, characterizes signed measures as differences of nonnegative measures. We refer to Signed measure for more on this topic. Two measures which are mutually absolutely continuous are sometimes called equivalent. 
Radon-Nikodym decomposition If $\mu$ is a nonnegative measure on a $\sigma$-algebra $\mathcal{B}$ and $\nu$ is another measure on the same $\sigma$-algebra (which might be a signed measure, or even take values in a finite-dimensional vector space), then $\nu$ can be decomposed in a unique way as $\nu=\nu_a+\nu_s$ where $\nu_a$ is absolutely continuous with respect to $\mu$; $\nu_s$ is singular with respect to $\mu$, i.e. there is a set $A$ of $\mu$-measure zero such that $\nu_s (X\setminus A)=0$ (this property is often denoted by $\nu_s\perp \mu$). This decomposition is called the Radon-Nikodym decomposition by some authors and the Lebesgue decomposition by others. The same decomposition holds even if $\nu$ is a signed measure or, more generally, a vector-valued measure. In these cases the property $\nu_s (X\setminus A)=0$ is replaced by $\left|\nu_s\right| (X\setminus A)=0$, where $\left|\nu_s\right|$ denotes the total variation measure of $\nu_s$ (we refer to Signed measure for the relevant definition). Comments A set of non-zero measure that has no subsets of smaller, but still positive, measure is called an atom of the measure. When considering the $\sigma$-algebra $\mathcal{B}$ of Borel sets in Euclidean space with the Lebesgue measure $\lambda$ as reference measure, it is a common mistake to claim that the singular part of a second measure $\nu$ must be concentrated on points which are atoms. A singular measure may be atomless, as is shown by the measure concentrated on the standard Cantor set which puts zero mass on each gap of the set and mass $2^{-n}$ on the intersection of the set with each interval of generation $n$ (this measure is also the distributional derivative of the Cantor ternary function or devil's staircase). When some canonical measure $\mu$ is fixed (such as the Lebesgue measure on $\mathbb R^n$ or its subsets or, more generally, the Haar measure on a topological group), one says that $\nu$ is absolutely continuous, meaning that $\nu\ll\mu$. 
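A concrete illustration of the decomposition (my example, not part of the original entry): on $X=\mathbb{R}$ with Lebesgue measure $\lambda$, take

```latex
\[
\nu(A) \;=\; \int_A e^{-|x|}\,dx \;+\; \delta_0(A),
\]
where $\delta_0$ is the Dirac measure at the origin. Then
\[
\nu_a(A) = \int_A e^{-|x|}\,dx \ll \lambda,
\qquad
\nu_s = \delta_0 \perp \lambda,
\]
since $\delta_0$ is concentrated on the $\lambda$-null set $\{0\}$,
and the Radon-Nikodym density of $\nu_a$ is $f(x) = e^{-|x|} \in L^1(\lambda)$.
```

Replacing $\delta_0$ by the Cantor measure described below gives a singular part $\nu_s$ that is atomless, illustrating the warning in the Comments.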
How to Cite This Entry: Absolutely continuous measures. Encyclopedia of Mathematics. URL: http://www.encyclopediaofmath.org/index.php?title=Absolutely_continuous_measures&oldid=27274
I have the following categorical data:

          Control   Treatment
    c1     285441       33296
    c2      40637        4187
    c3     737113       97433
    c4      34036        3993

In other words, I have 2 multinomial distributions with 4 categories each. In effect, I would like to test whether or not the treatment changes the distribution of the category mix (c1, c2, c3, c4). A quick glance at the data shows the following proportions for control and treatment respectively. For example, I calculate $p_{c1} = \frac{285441}{285441 + 40637 +737113 + 34036}$. Control (to 3 decimal places): $p_{c1} = .260, p_{c2} = .037, p_{c3} = .672, p_{c4} = .031$. Treatment (to 3 decimal places): $p_{t1} = .240, p_{t2} = .030, p_{t3} = .701, p_{t4} = .029$. Now it seems to me that, while there are some differences in the relative distribution of categories between control and treatment, this difference is not super drastic. So I'm going to run a chi-square test for homogeneity at significance level $\alpha = .0005$ (yes, I know, a very small alpha). In our case, the degrees of freedom is 3, so we reject if we get a chi-square statistic $>17.7299$. Under the null hypothesis, we expect the control and treatment to be the same. We calculate the MLE $\hat{p_1}$, which is the probability of landing in category 1. It is calculated as follows. $\hat{p_{1}} = \frac{285441 + 33296}{\text{total count of both control and treatment}} \approx 0.258$. I'll calculate the first term of my $\chi^2$ statistic (expected count for control in category 1), which I denote $E_{c1}$. The observation, denoted $O_{c1}$, is 285441. Hence, we have $E_{c1} = \text{total count control} \times \hat{p_{1}} \approx 282919.389$. The first term of the chi-square statistic is given to be $ \frac{(E_{c1} - O_{c1})^2}{E_{c1}} = 22.475$. So from the first term alone, we are already in the rejection region. I redid my calculations and I compute my statistic (following the same approach above) to be $542.5772$. 
I don't trust my numbers, so I was hoping to verify that I am not misusing the chi-square test for homogeneity or making some idiot computational error. Hopefully, that clarifies the question more. Thanks!
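The arithmetic can be reproduced end-to-end in a few lines of plain Python (no stats library needed), using the table from the question:

```python
# Observed counts from the question: rows are categories c1..c4,
# columns are (control, treatment).
observed = [
    (285441, 33296),
    (40637, 4187),
    (737113, 97433),
    (34036, 3993),
]

col_totals = [sum(col) for col in zip(*observed)]  # control and treatment totals
grand = sum(col_totals)
row_totals = [a + b for a, b in observed]          # per-category totals

chi2 = 0.0
for row_total, row in zip(row_totals, observed):
    for col_total, obs in zip(col_totals, row):
        exp = col_total * row_total / grand        # E = N_column * p_hat_category
        chi2 += (obs - exp) ** 2 / exp

dof = (len(observed) - 1) * (2 - 1)                # (rows-1)*(cols-1) = 3
print(round(chi2, 4), dof)   # approximately 542.58, 3
```

This confirms the hand computation: the statistic far exceeds the critical value 17.7299 at 3 degrees of freedom, so homogeneity is rejected even at $\alpha = .0005$ — with samples this large, even the modest differences in proportions are highly significant.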
In signal processing, cross-correlation is a measure of similarity of two waveforms as a function of a time-lag applied to one of them. This is also known as a sliding dot product or sliding inner-product. It is commonly used for searching a long signal for a shorter, known feature. It has applications in pattern recognition, single particle analysis, electron tomographic averaging, cryptanalysis, and neurophysiology. For continuous functions $f$ and $g$, the cross-correlation is defined as: $(f \star g)(t)\ \stackrel{\mathrm{def}}{=} \int_{-\infty}^{\infty} f^*(\tau)\ g(\tau+t)\,d\tau$, whe... That seems like what I need to do, but I don't know how to actually implement it... how wide of a time window is needed for the $Y_{t+\tau}$? And how on earth do I load all that data at once without it taking forever? And is there a better or other way to see if shear strain does cause temperature increase, potentially delayed in time? Link to the question: Learning roadmap for picking up enough mathematical know-how in order to model "shape", "form" and "material properties"? Alternatively, where could I go in order to have such a question answered? @tpg2114 For reducing the number of data points when calculating the time correlation, you can run two copies of exactly the same simulation in parallel, separated by the time lag dt. Then there is no need to store all snapshots and spatial points. @DavidZ I wasn't trying to justify its existence here, just merely pointing out that because there were some numerics questions posted here, some people might think it okay to post more. I still think marking it as a duplicate is a good idea, then probably an historical lock on the others (maybe with a warning that questions like these belong on Comp Sci?) 
The x axis is the index in the array -- so I have 200 time steps. Each one is equally spaced, 1e-9 seconds apart. The black line is $\frac{dT}{dt}$ and doesn't have an axis -- I don't care what the values are. The solid blue line is the abs(shear strain) and is valued on the right axis. The dashed blue line is the result from scipy.signal.correlate and is valued on the left axis. So what I don't understand: 1) Why is the correlation value negative when they look pretty positively correlated to me? 2) Why is the result from the correlation function 400 time steps long? 3) How do I find the lead/lag between the signals? Wikipedia says the argmin or argmax of the result will tell me that, but I don't know how, because I don't know how the result is indexed in time. Related: Why don't we just ban homework altogether? Banning homework: vote and documentation. We're having some more recent discussions on the homework tag. A month ago, there was a flurry of activity involving a tightening up of the policy. Unfortunately, I was really busy after th... So, things we need to decide (but not necessarily today): (1) do we implement John Rennie's suggestion of having the mods not close homework questions for a month, (2) do we reword the homework policy, and how, (3) do we get rid of the tag. I think (1) would be a decent option if we had >5 3k+ voters online at any one time to do the small-time moderating. 
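Regarding questions 2) and 3) above: a full cross-correlation of two length-$N$ arrays has $2N-1$ points — one per possible lag — and the lag is read off by indexing the argmax with a matching lag axis. A minimal numpy sketch with synthetic data standing in for the strain and temperature series (the same convention applies to scipy.signal.correlate for real signals):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(200)                       # e.g. shear strain
shift = 5
y = np.concatenate([np.zeros(shift), x[:-shift]])  # "temperature": x delayed 5 steps

c = np.correlate(y, x, mode="full")    # 2*200 - 1 = 399 points, one per lag
lags = np.arange(-(len(x) - 1), len(y))  # lag axis matching c
best = int(lags[np.argmax(c)])           # positive: y lags x
print(len(c), best)   # 399 5
```

So a "400-ish"-point output from two 200-point series is exactly the full set of 399 lags, and the peak at lag +5 recovers the built-in 5-sample delay.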
Between the HW being posted and (finally) being closed, there's usually some <1k poster who answers the question. It'd be better if we could do it quickly enough that no answers get posted until the question is clarified to satisfy the current HW policy. For the SHO, our teacher told us to scale $$p\rightarrow \sqrt{m\omega\hbar} ~p$$ $$x\rightarrow \sqrt{\frac{\hbar}{m\omega}}~x$$ and then define the following $$K_1=\frac 14 (p^2-q^2)$$ $$K_2=\frac 14 (pq+qp)$$ $$J_3=\frac{H}{2\hbar\omega}=\frac 14(p^2+q^2)$$ The first part is to show that $$Q \ ... Okay. I guess we'll have to see what people say, but my guess is the unclear part is what constitutes homework itself. We've had discussions where some people equate it to the level of the question and not the content, or where "where is my mistake in the math" is okay if it's advanced topics but not for mechanics. Part of my motivation for wanting to write a revised homework policy is to make explicit that any question asking "Where did I go wrong?" or "Is this the right equation to use?" (without further clarification) or "Any feedback would be appreciated" is not okay. @jinawee oh, that I don't think will happen. In any case that would be an indication that homework is a meta tag, i.e. a tag that we shouldn't have. So anyway, I think suggestions for things that need to be clarified -- what is homework and what is "conceptual". I.e., 
is it conceptual to be stuck when deriving the distribution of microstates because somebody doesn't know what Stirling's Approximation is? Some have argued that is on topic even though there's nothing really physical about it, just because it's 'graduate level'. Others would argue it's not on topic because it's not conceptual. How can one prove that $$ \operatorname{Tr} \log \mathcal{A} =\int_{\epsilon}^\infty \frac{\mathrm{d}s}{s} \operatorname{Tr}e^{-s \mathcal{A}},$$ for a sufficiently well-behaved operator $\mathcal{A}$? How (mathematically) rigorous is the expression? I'm looking at the $d=2$ Euclidean case, as discuss... I've noticed that there is a remarkable difference between me in a selfie and me in the mirror. Left-right reversal might be part of it, but I wonder what is the r-e-a-l reason. Too bad the question got closed. And what about selfies in the mirror? (I didn't try yet.) @KyleKanos @jinawee @DavidZ @tpg2114 So my take is that we should probably do the "mods only 5th vote" -- I've already been doing that for a while, except for that occasional time when I just wipe the queue clean. Additionally, what we can do instead is go through the closed questions and delete the homework ones as quickly as possible, as mods. Or maybe that can be a second step. If we can reduce the visibility of HW, then the tag becomes less of a bone of contention. @jinawee I think if someone asks, "How do I do Jackson 11.26," it certainly should be marked as homework. But if someone asks, say, "How is source theory different from qft?" it certainly shouldn't be marked as homework. @Dilaton because that's talking about the tag. And like I said, everyone has a different meaning for the tag, so we'll have to phase it out. There's no need for it if we are able to swiftly handle the main-page closeable homework clutter. @Dilaton also, have a look at the top-voted answers on both. Afternoon folks. 
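As an aside on the quoted trace identity: as stated it is missing a sign and an $\mathcal{A}$-independent (divergent as $\epsilon\to 0$) constant. For a positive operator it follows eigenvalue by eigenvalue from the standard expansion of the exponential integral:

```latex
\[
\int_\epsilon^\infty \frac{ds}{s}\, e^{-s\lambda}
= E_1(\epsilon\lambda)
= -\gamma - \log(\epsilon\lambda) + O(\epsilon\lambda),
\qquad \lambda > 0,
\]
so, summing over the eigenvalues $\lambda_i$ of a positive operator
$\mathcal{A}$ with a well-defined heat trace,
\[
\int_\epsilon^\infty \frac{ds}{s}\, \operatorname{Tr} e^{-s\mathcal{A}}
= -\operatorname{Tr}\log\mathcal{A}
  - (\gamma + \log\epsilon)\,\operatorname{Tr}\mathbf{1}
  + O(\epsilon).
\]
```

The $\epsilon$-dependent constant is the usual UV divergence that gets absorbed into the normalization of the functional determinant $\det\mathcal{A} = e^{\operatorname{Tr}\log\mathcal{A}}$.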
I tend to ask questions about perturbation methods and asymptotic expansions that arise in my work over on Math.SE, but most of those folks aren't too interested in these kinds of approximate questions. Would posts like this be on topic at Physics.SE? (my initial feeling is no because its really a math question, but I figured I'd ask anyway) @DavidZ Ya I figured as much. Thanks for the typo catch. Do you know of any other place for questions like this? I spend a lot of time at math.SE and they're really mostly interested in either high-level pure math or recreational math (limits, series, integrals, etc). There doesn't seem to be a good place for the approximate and applied techniques I tend to rely on. hm... I guess you could check at Computational Science. I wouldn't necessarily expect it to be on topic there either, since that's mostly numerical methods and stuff about scientific software, but it's worth looking into at least. Or... to be honest, if you were to rephrase your question in a way that makes clear how it's about physics, it might actually be okay on this site. There's a fine line between math and theoretical physics sometimes. MO is for research-level mathematics, not "how do I compute X" user54412 @KevinDriscoll You could maybe reword to push that question in the direction of another site, but imo as worded it falls squarely in the domain of math.SE - it's just a shame they don't give that kind of question as much attention as, say, explaining why 7 is the only prime followed by a cube @ChrisWhite As I understand it, KITP wants big names in the field who will promote crazy ideas with the intent of getting someone else to develop their idea into a reasonable solution (c.f., Hawking's recent paper)
Difference between revisions of "Absolute continuity" m (added missing space to "vice versa", fixed upper limit in integral) m (Lebesgue measure changed from calligraphic L to lambda, minor spacing fixes)
Revision as of 13:30, 30 July 2012 Contents Absolute continuity of the Lebesgue integral Describes a property of absolutely Lebesgue integrable functions. Consider the Lebesgue measure $\lambda$ on the $n$-dimensional Euclidean space and let $f\in L^1 (\mathbb R^n, \lambda)$. Then for every $\varepsilon>0$ there is a $\delta>0$ such that \[ \left|\int_A f (x) \rd\lambda (x)\right| < \varepsilon \qquad \text{for every measurable set } A \text{ with } \lambda (A) < \delta. \] This property can be generalized to measures $\mu$ on a $\sigma$-algebra $\mathcal{B}$ of subsets of a space $X$ and to functions $f\in L^1 (X, \mu)$. Absolute continuity of measures A concept in measure theory. If $\mu$ and $\nu$ are two measures on a $\sigma$-algebra $\mathcal{B}$ of subsets of $X$, we say that $\nu$ is absolutely continuous with respect to $\mu$ if $\nu (A) =0$ for any $A\in\mathcal{B}$ such that $\mu (A) =0$. This definition can be generalized to signed measures $\nu$ and even to vector-valued measures $\nu$. Some authors generalize it further to vector-valued $\mu$'s: in that case the absolute continuity of $\nu$ with respect to $\mu$ amounts to the requirement that $\nu (A) = 0$ for any $A\in\mathcal{B}$ such that $|\mu| (A)=0$, where $|\mu|$ is the total variation of $\mu$ (see Signed measure for the relevant definition).
The Radon-Nikodym theorem characterizes the absolute continuity of $\nu$ with respect to $\mu$ by the existence of a function $f\in L^1 (\mu)$ such that $\nu = f \mu$, i.e. such that \[ \nu (A) = \int_A f\rd\mu \qquad \text{for every } A\in\mathcal{B}. \] A corollary of the Radon-Nikodym theorem, the Hahn decomposition theorem, characterizes signed measures as differences of nonnegative measures. We refer to Signed measure for more on this topic. Absolute continuity of a function A function $f:I\to \mathbb R$, where $I$ is an interval of the real line, is said to be absolutely continuous if for every $\varepsilon> 0$ there is $\delta> 0$ such that, for any $a_1<b_1<a_2<b_2<\ldots < a_n<b_n \in I$ with $\sum_i |a_i -b_i| <\delta$, we have \[ \sum_i |f(a_i)-f (b_i)| <\varepsilon. \] This notion can be easily generalized to the case when the target of the function is a metric space. An absolutely continuous function is always continuous. Indeed, if the interval of definition is open, then the absolutely continuous function has a continuous extension to its closure, which is itself absolutely continuous. A continuous function might not be absolutely continuous, even if the interval $I$ is compact. Take for instance the function $f:[0,1]\to \mathbb R$ such that $f(0)=0$ and $f(x) = x \sin x^{-1}$ for $x>0$. The space of absolutely continuous (real-valued) functions is a vector space. A characterization of absolutely continuous functions on an interval might be given in terms of Sobolev spaces: a continuous function $f:I\to \mathbb R$ is absolutely continuous if and only if its distributional derivative is an $L^1$ function (if $I$ is bounded, this is equivalent to requiring $f\in W^{1,1} (I)$). Vice versa, for any function with $L^1$ distributional derivative there is an absolutely continuous representative, i.e. an absolutely continuous $\tilde{f}$ such that $\tilde{f} = f$ a.e. The latter statement can be proved using the absolute continuity of the Lebesgue integral.
An absolutely continuous function is differentiable almost everywhere and its pointwise derivative coincides with the generalized one. The fundamental theorem of calculus holds for absolutely continuous functions, i.e. if we denote by $f'$ its pointwise derivative, we then have \begin{equation}\label{e:fundamental} f (b)-f(a) = \int_a^b f' (x)\rd x \qquad \forall a<b\in I. \end{equation} In fact this is yet another characterization of absolutely continuous functions. Differentiability almost everywhere does not imply absolute continuity: a notable example is the Cantor ternary function or devil's staircase. Though such a function is differentiable almost everywhere, it fails to satisfy \ref{e:fundamental} (indeed the generalized derivative of the Cantor ternary function is a measure which is not absolutely continuous with respect to the Lebesgue measure). An absolutely continuous function maps a set of measure zero into a set of measure zero, and a measurable set into a measurable set. Any continuous function of finite variation which maps each set of measure zero into a set of measure zero is absolutely continuous. Any absolutely continuous function can be represented as the difference of two absolutely continuous non-decreasing functions. References [AmFuPa] L. Ambrosio, N. Fusco, D. Pallara, "Functions of bounded variation and free discontinuity problems". Oxford Mathematical Monographs. The Clarendon Press, Oxford University Press, New York, 2000. MR1857292 Zbl 0957.49001 [Bo] N. Bourbaki, "Elements of mathematics. Integration", Addison-Wesley (1975) pp. Chapt. 6;7;8 (Translated from French) MR0583191 Zbl 1116.28002 Zbl 1106.46005 Zbl 1106.46006 Zbl 1182.28002 Zbl 1182.28001 Zbl 1095.28002 Zbl 1095.28001 Zbl 0156.06001 [DS] N. Dunford, J.T. Schwartz, "Linear operators. General theory", 1, Interscience (1958) MR0117523 [Bi] P. Billingsley, "Convergence of probability measures", Wiley (1968) MR0233396 Zbl 0172.21201 [Ha] P.R. Halmos, "Measure theory", v.
Nostrand (1950) [He] E. Hewitt, K.R. Stromberg, "Real and abstract analysis" , Springer (1965) [KF] A.N. Kolmogorov, S.V. Fomin, "Elements of the theory of functions and functional analysis" , 1–2 , Graylock (1957–1961). [Ma] P. Mattila, "Geometry of sets and measures in euclidean spaces". Cambridge Studies in Advanced Mathematics, 44. Cambridge University Press, Cambridge, 1995. MR1333890 Zbl 0911.28005 [Ro] H.L. Royden, "Real analysis" , Macmillan (1968) [Ru] W. Rudin, "Principles of mathematical analysis" , McGraw-Hill (1953) [Ta] A.E. Taylor, "General theory of functions and integration" , Blaisdell (1965) How to Cite This Entry: Absolute continuity. Encyclopedia of Mathematics.URL: http://www.encyclopediaofmath.org/index.php?title=Absolute_continuity&oldid=27248
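The devil's-staircase counterexample discussed above is easy to probe numerically. The sketch below (my own, using the standard ternary-digit construction, not code from the article) evaluates the Cantor function, which climbs from $0$ to $1$ even though its derivative vanishes almost everywhere, so $\int_0^1 f'(x)\rd x = 0 \ne f(1)-f(0)$:

```python
def cantor(x, depth=30):
    """Cantor ternary function on [0, 1], via ternary digits of x.

    Ternary digit 2 becomes binary digit 1; the first digit 1 lands x
    in a removed middle third, where the function is constant.
    """
    total, scale = 0.0, 0.5
    for _ in range(depth):
        x *= 3
        if x >= 2:
            total += scale  # ternary digit 2 -> binary digit 1
            x -= 2
        elif x >= 1:
            return total + scale  # flat on this middle-third interval
        scale /= 2
    return total

# f(0) = 0, f(1) = 1, yet f' = 0 almost everywhere
print(cantor(0.0), cantor(1.0))
print(cantor(1/3))  # the function equals 1/2 on [1/3, 2/3]
```

The first print shows the total climb of $1$; since the derivative is $0$ off the measure-zero Cantor set, the fundamental-theorem identity \ref{e:fundamental} fails for this function.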
Cristian Rivera Articles written in Pramana – Journal of Physics Volume 73 Issue 6 December 2009 pp 961-968 We propose to replace Newton's constant $G_{N}$ by another constant $G_{2}$, as if the gravitational force fell off with a $1/r$ law instead of $1/r^{2}$; so we describe a system of natural units with $G_{2}$, $c$ and $\hbar$. We adjust the value of $G_{2}$ so that the fundamental length $L = L_{\text{Pl}}$ is still the Planck length, and so $G_{N} = L \times G_{2}$. We argue for this system as (1) it would express length, time and mass without square roots; (2) $G_{2}$ is in principle disentangled from gravitation, as in $(2 + 1)$ dimensions there is no field outside the sources, so $G_{2}$ would be truly universal; (3) modern physics is not necessarily tied up to $(3 + 1)$-dim. scenarios and (4) extended objects with $p = 2$ (membranes) play an important role both in M-theory and in F-theory, which distinguishes three $(2, 1)$ dimensions. As an alternative we consider also the clash between gravitation and quantum theory; the suggestion is that non-commutative geometry $[x_{i} , x_{j}] = \Lambda^{2} \theta_{ij}$ would cure some infinities and improve black hole evaporation. Then the new length $\Lambda$ shall determine, among other things, the gravitational constant $G_{N}$.
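The "no square roots" claim in point (1) is pure dimensional analysis and can be checked mechanically. The sketch below is my own check, not from the paper; it uses plain symbols as stand-ins for kg, m, s:

```python
from sympy import symbols

kg, m, s = symbols('kg m s', positive=True)

F = kg * m / s**2     # dimensions of force
hbar = kg * m**2 / s  # dimensions of action
c = m / s

G2 = F * m / kg**2    # from F = G2 m1 m2 / r   (1/r force law)
GN = F * m**2 / kg**2 # from F = GN m1 m2 / r^2 (Newton)

# With G2 a length scale needs no square root; with GN it does,
# since the usual Planck length is sqrt(hbar GN / c^3).
print(hbar * G2 / c**3)   # a length
print(hbar * GN / c**3)   # a length squared
```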
This question already has an answer here: Let $(X,\tau)$ be a topological space. Consider $X^2$ with the product topology. Show that $X$ is Hausdorff iff the diagonal $D = \{(x,y) \in X^2 \mid x=y\}$ is a closed subset of $X^2$. Hint: Convince yourself that for $x \neq y$, finding disjoint open neighborhoods $U_x$ and $U_y$ is the same as finding an open neighborhood of $(x,y) \in X \times X$ which does not meet the diagonal.
The question was asked by xport in the comments to Mike's solution as to whether this solution is unique. There are no doubt many ways to case bash an answer to this, and here is one, proving that Mike's solution is indeed unique. Let $x_i$ be the number of times the digit $i$ appears on the paper, then we must have $$ \sum_{i=0}^9 x_i = 20 \quad \text{and} \quad \sum_{i=0}^9 ix_i = 45+20=65,$$ since there are $20$ digits on the completed paper and the first column sums to $45.$ All the digits appear at least once, therefore $x_i \ge 1$ for all $i$, and $x_0=1$ (the only $0$ on the paper is the one in the first column, since every count is at least $1$). Thus from the above equations we obtain $$\sum_{i=1}^9 x_i = 19 \quad (1) \quad \text{and} \quad \sum_{i=2}^9 (i-1)x_i = 46. \quad (2)$$ Since $\sum_{i=1}^8 i = 36$ it follows from $(2)$ that $(i-1)x_i \le 46 - (36-(i-1)),$ and so $$ x_i \le \left \lfloor \frac{i+9}{i-1} \right \rfloor \quad \text{for} \quad i \ge 2.$$ Hence $$ x_6 \le 3, \, x_7 \le 2, \, x_8 \le 2 \quad \text{and} \quad x_9 \le 2. \quad (3)$$ Now suppose $x_i > 1$ for some $i > 2.$ Then $i$ occurs at least twice on the paper but only once in the first column, so $x_j=i$ for some $j$; if the only such $j$ were $i$ itself we would have $x_i=i$ with $i$ occurring exactly twice, forcing $i=2.$ Hence $x_j=i$ for some $j \ne i.$ And further suppose $j \ge 2,$ then from $(2)$ we obtain $$(i-1)x_i + (j-1)i + (36-(i-1)-(j-1)) \le 46,$$ which gives $$ x_i \le \frac{2i+8}{i-1} - j.$$ Hence when $j \ge 2$ we must have $$x_6 \le 2 \quad \text{and} \quad x_7=x_8=x_9=1. \quad (4)$$ We now do some case bashing. From $(3)$ we know $x_9 \le 2,$ so suppose $x_9=2.$ Then $(4)$ implies $x_1=9,$ which again by $(4)$ implies $x_7=x_8=1,$ since if $x_1=9,$ then $x_1$ cannot equal $7$ or $8.$ Putting $x_1=9, x_7=1, x_8=1$ and $x_9=2$ in $(1)$ and $(2)$ we obtain $$x_2+x_3+ \cdots + x_6=6$$ and $$x_2 + 2x_3 + \cdots + 5x_6 = 17.$$ The former clearly only has solution $\lbrace 1,1,1,1,2 \rbrace,$ which satisfies the latter equation only when $x_3=2$ and the other $x_i$ are $1.$ But $3$ does not appear twice in this "solution."
Hence we have a contradiction and so $$x_9=1.$$ Similarly, suppose $x_8=2.$ Then $(4)$ implies $x_1=8,$ which again by $(4)$ implies $x_7=x_9=1,$ since if $x_1=8,$ then $x_1$ cannot equal $7$ or $9.$ Putting $x_1=8, x_7=1, x_9=1$ and $x_8=2$ in $(1)$ and $(2)$ we obtain $$x_2+x_3+ \cdots + x_6=7$$ and $$x_2 + 2x_3 + \cdots + 5x_6 = 18.$$ The former equation only has solutions $\lbrace 1,1,1,1,3 \rbrace$ (which does not satisfy the latter equation) and $\lbrace 1,1,1,2,2 \rbrace,$ which only satisfies the latter equation when $x_2=x_3=2$ and the other $x_i$ are $1.$ But again $3$ does not appear twice in this "solution." Hence we have a contradiction and so $$x_8=1.$$ Similarly, suppose $x_7=2.$ Then $(4)$ implies $x_1=7,$ which again by $(4)$ implies $x_8=x_9=1,$ since if $x_1=7,$ then $x_1$ cannot equal $8$ or $9.$ Putting $x_1=7, x_8=1, x_9=1$ and $x_7=2$ in $(1)$ and $(2)$ we obtain $$x_2+x_3+ \cdots + x_6=8$$ and $$x_2 + 2x_3 + \cdots + 5x_6 = 19.$$ The former equation only has solutions $\lbrace 1,1,1,1,4 \rbrace$ (which does not satisfy the latter equation), $\lbrace 1,1,2,2,2 \rbrace$ (which also does not satisfy the latter equation) and $\lbrace 1,1,1,2,3 \rbrace,$ which only satisfies the latter equation when $x_2=3, x_3=2, x_4=1, x_5=1$ and $x_6=1,$ and this is the solution given in Mike's answer.
Hence we have a unique solution when $x_7=2,$ otherwise we must have $x_7=x_8=x_9=1.$ To complete the proof we need only show that when $x_7=1$ we must have $x_6=1.$ That would give us at least six $1$'s on the paper, implying that one of the numbers $\lbrace 6,7,8,9 \rbrace$ must be repeated, which is a contradiction since we have $x_6=x_7=x_8=x_9=1.$ So suppose $x_7=1$ and $x_6=3$ (the maximum possible from $(3)$). Then $(4)$ implies $x_1=6.$ And since $x_8=x_9=1,$ from $(1)$ and $(2)$ we obtain $$x_2+ x_3 + x_4 + x_5=7$$ and $$x_2 + 2x_3 + 3x_4 + 4x_5 = 10.$$ These have no solutions since the latter equation implies $x_2=x_3=x_4=x_5=1,$ which does not satisfy the former equation. So suppose $x_7=1$ and $x_6=2.$ Then since $x_8=x_9=1,$ from $(1)$ and $(2)$ we obtain $$x_1+x_2+ x_3 + x_4 + x_5=14 \quad (5) $$ and $$x_2 + 2x_3 + 3x_4 + 4x_5 = 15. \quad (6)$$ Now $(6)$ implies $x_5 \le 2,$ but if $x_5=2$ then we must have $x_2 \ge 3$ (since $x_5=x_6=2$ there are at least $3$ occurrences of $2$) and so the LHS of $(6)$ must be at least $16,$ a contradiction. Hence $x_5=1.$ But now we have $x_0=1, x_5=1, x_6=2, x_7=1, x_8=1$ and $x_9=1,$ and since $x_6=2$ some number must occur $6$ times, but we do not have enough undetermined $x_i$ for this to occur unless $x_1=6.$ Substituting $x_1=6$ and $x_5=1$ in $(5)$ and $(6)$ we obtain $$x_2+ x_3 + x_4=7 \quad (7) $$ and $$x_2 + 2x_3 + 3x_4 = 11. \quad (8)$$ Subtracting $(7)$ from $(8)$ we obtain $$x_3+2x_4=4. \quad (9)$$ However, since we cannot have any more $1$'s (as $x_1=6$) we must have $x_3 \ge 2$ and $x_4 \ge 2,$ which contradicts $(9).$ Thus we have shown that when $x_7=1$ we must have $x_6=1,$ which as noted above completes the proof that the unique solution is given by $x_0=1, x_1=7, x_2=3, x_3=2, x_4=1, x_5=1, x_6=1, x_7=2, x_8=1$ and $x_9=1.$
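The case analysis above can be corroborated by brute force. The sketch below is my own check, not part of the answer; it assumes, as the argument does, that every count is a single digit, so the paper holds exactly the digits $0$ through $9$ plus the ten counts, and it confirms the solution is unique:

```python
from collections import Counter

def compositions(total, parts, lo=1, hi=9):
    """All tuples (x_1, ..., x_parts) with lo <= x_i <= hi summing to total."""
    if parts == 1:
        if lo <= total <= hi:
            yield (total,)
        return
    # prune so the remaining parts can still reach the target sum
    for first in range(max(lo, total - hi * (parts - 1)),
                       min(hi, total - lo * (parts - 1)) + 1):
        for rest in compositions(total - first, parts - 1, lo, hi):
            yield (first,) + rest

solutions = []
for xs in compositions(19, 9):          # x_1 + ... + x_9 = 19, each >= 1
    x = (1,) + xs                       # x_0 = 1, as argued above
    digits = list(range(10)) + list(x)  # first column 0..9 plus the ten counts
    counts = Counter(digits)
    if all(counts[i] == x[i] for i in range(10)):
        solutions.append(x)

print(solutions)  # -> [(1, 7, 3, 2, 1, 1, 1, 2, 1, 1)]
```

The single surviving tuple matches the solution derived by the case bash.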
A topological space $X$ is said to be star compact if whenever $\mathscr{U}$ is an open cover of $X$, there is a compact subspace $K$ of $X$ such that $X = \mathrm{st}(K,\mathscr{U})$, where $\mathrm{st}(K, \mathscr{U})= \bigcup \{ U \in \mathscr{U}: U \cap K \neq \emptyset \}$. We recursively define $\mathrm{st}^n$ for $n=0,1,2,\ldots$ by $$\begin{align} \mathrm{st}^0(K, \mathscr{U}) &= K \\ \mathrm{st}^{n+1}(K, \mathscr{U}) &= \bigcup \{ U \in \mathscr{U} : U \cap \mathrm{st}^n(K, \mathscr{U}) \neq \emptyset \} \end{align}$$ Definition: A space $X$ is said to be $\omega$-starcompact if for every open cover $\mathscr{U}$ of $X$, there is some $n \in \mathbb{N}^{+}$ and some finite subset $B$ of $X$ such that $\mathrm{st}^{n}(B, \mathscr{U}) = X$. Let $(X, \tau)$ be $\omega$-starcompact and $\tau^{*} \subset \tau$. Is $(X, \tau^{*})$ an $\omega$-starcompact space?
I outlined a proof by Bjorn Poonen at http://www.artofproblemsolving.com/Forum/viewtopic.php?p=2450856#p2450856 . As Nils Matthes has noticed, computing the kernel is not necessary to solve Hartshorne's problem, though (in my opinion) it is more interesting than the problem itself. $\newcommand{\kk}{\mathbb{k}}$$\newcommand{\Ker}{\operatorname{Ker}}$For the sake of self-containedness, let me repost the proof linked above here (with improved notations). I begin by restating the problem: Theorem 1. Let $\kk$ be a commutative ring with $1$. Let $n$ and $m$ be two nonnegative integers. Let $M=\left\{0,1,\ldots ,m\right\}$ and $N=\left\{0,1,\ldots ,n\right\}$. Let $R$ be the polynomial ring $\kk\left[Z_{i,j} \mid i \in M \text{ and } j \in N \right]$. Let $S$ be the polynomial ring $\kk\left[x_0, x_1, \ldots, x_m, y_0, y_1, \ldots, y_n \right]$. Let $\phi : R \to S$ be the unique $\kk$-algebra homomorphism that sends each $Z_{i,j}$ to $x_i y_j$. Let $W$ be the ideal of $R$ generated by all elements of the type $Z_{a,b} Z_{c,d} - Z_{a,d} Z_{c,b}$ with $a \in M$, $b \in N$, $c \in M$ and $d \in N$. Then, $\Ker \phi = W$. To prove this, we shall use the following easy algebraic lemma: Lemma 2. Let $C$ be a $\kk$-module. Let $A$ and $B$ be two submodules of $C$ such that $C=A+B$. Let $\psi$ be a $\kk$-module map from $C$ to another $\kk$-module $D$ such that $\psi\mid_A$ is injective and $\Ker \psi \supseteq B$. Then, $\Ker \psi = B$. Proof of Lemma 2. Let $c \in \Ker \psi$. Thus, $c \in \Ker \psi \subseteq C = A + B$; hence, we can write $c$ in the form $c = a + b$ for some $a \in A$ and $b \in B$. Consider these $a$ and $b$. We have $b \in B \subseteq \Ker \psi$, so that $\psi\left(b\right) = 0$. Applying the map $\psi$ to the equality $c = a + b$, we obtain $\psi\left(c\right) = \psi\left(a + b\right) = \psi\left(a\right) + \psi\left(b\right)$ (since $\psi$ is a $\kk$-module map). 
Comparing this with $\psi\left(c\right) = 0$ (which follows from $c \in \Ker \psi$), we obtain $0 = \psi\left(a\right) + \underbrace{\psi\left(b\right)}_{= 0} = \psi\left(a\right)$, so that $\psi\left(a\right) = 0 = \psi\left(0\right)$. Since $\psi\mid_A$ is injective, this entails $a = 0$ (because both $a$ and $0$ belong to $A$). Thus, $c = \underbrace{a}_{=0} + b = b \in B$. Now, forget that we fixed $c$. We thus have shown that $c \in B$ for each $c \in \Ker \psi$. Thus, $\Ker \psi \subseteq B$. Combining this with $\Ker \psi \supseteq B$, we obtain $\Ker \psi = B$. This proves Lemma 2. $\blacksquare$ Proof of Theorem 1 (Bjorn Poonen) (sketched). We notice that $\Ker \phi\supseteq W$ is very easy to prove (in fact, a trivial computation shows that $\Ker \phi$ contains $Z_{a,b}Z_{c,d}-Z_{a,d}Z_{c,b}$ for all $a$, $b$, $c$, $d$). We order the set $M\times N$ lexicographically. If $k$ is a nonnegative integer, then $S_k$ shall denote the symmetric group consisting of all permutations of $\left\{1,2,\ldots,k\right\}$.Every $k$-tuple $\left(\left(a_1,b_1\right),\left(a_2,b_2\right),\ldots ,\left(a_k,b_k\right)\right) \in \left(M\times N\right)^k$ and every permutation $\sigma \in S_k$ satisfy \begin{equation}Z_{a_1,b_1}Z_{a_2,b_2}\cdots Z_{a_k,b_k} \equiv Z_{a_1,b_{\sigma 1}}Z_{a_2,b_{\sigma 2}}\cdots Z_{a_k,b_{\sigma k}} \mod W .\label{darij.pf.thm1.1}\tag{1}\end{equation} (In fact, this is obvious from the definition of $W$ when $\sigma$ is a transposition, and hence, by induction, it also holds for every permutation $\sigma$, because every permutation is a composition of transpositions.) 
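The "trivial computation" showing $\Ker \phi \supseteq W$ can be spelled out with a computer-algebra check. This sketch is mine (for the small case $m = n = 2$, chosen arbitrarily), not part of the proof:

```python
import sympy as sp

m, n = 2, 2  # small test case
x = sp.symbols(f'x0:{m + 1}')
y = sp.symbols(f'y0:{n + 1}')

# phi sends Z_{i,j} to x_i * y_j
phi_Z = {(i, j): x[i] * y[j] for i in range(m + 1) for j in range(n + 1)}

# every generator Z_{a,b} Z_{c,d} - Z_{a,d} Z_{c,b} of W maps to zero under phi
for a in range(m + 1):
    for b in range(n + 1):
        for c in range(m + 1):
            for d in range(n + 1):
                image = phi_Z[a, b] * phi_Z[c, d] - phi_Z[a, d] * phi_Z[c, b]
                assert sp.expand(image) == 0

print("all generators of W lie in Ker phi")
```

Of course this only verifies the easy inclusion; the content of Theorem 1 is the reverse one, which is what Lemma 2 and the submodule $T$ deliver.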
Now, let $T$ be the $\kk$-submodule of $\kk\left[Z_{i,j}\mid \left(i,j\right)\in M\times N\right]$ generated by all products of the form $Z_{a_1,c_1}Z_{a_2,c_2}\cdots Z_{a_k,c_k}$ with $k$ being a nonnegative integer and $\left(\left(a_1,c_1\right),\left(a_2,c_2\right),\ldots,\left(a_k,c_k\right)\right)\in \left(M\times N\right)^k$ being a $k$-tuple satisfying $a_1\leq a_2\leq \cdots\leq a_k$ and $c_1\leq c_2\leq \cdots\leq c_k$. It is easy to see that the map $\left.\phi\mid_T\right. : T \to \kk\left[x_0,x_1,\ldots,x_m,y_0,y_1,\ldots,y_n\right]$ is injective. (In fact, if $\left(\left(a_1,c_1\right),\left(a_2,c_2\right),\ldots,\left(a_k,c_k\right)\right)\in \left(M\times N\right)^k$ is such a $k$-tuple, then\begin{align}\left(\phi\mid_T\right)\left(Z_{a_1,c_1}Z_{a_2,c_2}\cdots Z_{a_k,c_k}\right)&= \phi\left(Z_{a_1,c_1}Z_{a_2,c_2}\cdots Z_{a_k,c_k}\right) \\&= x_{a_1}y_{c_1}x_{a_2}y_{c_2}\cdots x_{a_k}y_{c_k} \\&=x_{a_1}x_{a_2}\cdots x_{a_k}y_{c_1}y_{c_2}\cdots y_{c_k}\end{align}is a monomial from which we can recover the $k$-tuple $\left(a_1,a_2,\ldots ,a_k\right)$ up to order and the $k$-tuple $\left(c_1,c_2,\ldots ,c_k\right)$ up to order; but since the order of each of these two $k$-tuples is predetermined by the condition that $a_1\leq a_2\leq \cdots\leq a_k$ and $c_1\leq c_2\leq \cdots\leq c_k$, we can therefore recover these two $k$-tuples completely; hence, the map $\phi\mid_T$ sends distinct monomials to distinct monomials, and thus is injective.) Next we are going to show that: \begin{equation}\text{every monomial in $\kk\left[Z_{i,j}\mid \left(i,j\right)\in M\times N\right]$ lies in $T+W$.}\label{darij.pf.thm1.2}\tag{2}\end{equation} [ Proof of \eqref{darij.pf.thm1.2}: Let $\mu$ be any monomial in $\kk\left[Z_{i,j}\mid \left(i,j\right)\in M\times N\right]$.
Then, $\mu=Z_{a_1,b_1}Z_{a_2,b_2}\cdots Z_{a_k,b_k}$ for some nonnegative integer $k$ and some $k$-tuple $\left(\left(a_1,b_1\right),\left(a_2,b_2\right),\ldots,\left(a_k,b_k\right)\right)\in \left(M\times N\right)^k$ such that $\left(a_1,b_1\right)\leq\left(a_2,b_2\right)\leq \cdots\leq \left(a_k,b_k\right)$. Consider such a $k$ and such a $k$-tuple $\left(\left(a_1,b_1\right),\left(a_2,b_2\right),\ldots,\left(a_k,b_k\right)\right)$. Since $\left(a_1,b_1\right)\leq\left(a_2,b_2\right)\leq \cdots\leq \left(a_k,b_k\right)$, we have $a_1\leq a_2\leq \cdots\leq a_k$ (since our order is lexicographic). Clearly there exists a permutation $\sigma\in S_k$ such that $b_{\sigma 1}\leq b_{\sigma 2}\leq \cdots \leq b_{\sigma k}$. Consider such a $\sigma$. Let $c_i=b_{\sigma i}$ for every $i\in\left\{1,2,\ldots,k\right\}$. Hence, the chain of inequalities $b_{\sigma 1}\leq b_{\sigma 2}\leq \cdots \leq b_{\sigma k}$ rewrites as $c_1\leq c_2\leq \cdots\leq c_k$. Also,\begin{align}\mu&= Z_{a_1,b_1}Z_{a_2,b_2}\cdots Z_{a_k,b_k} \\&\equiv Z_{a_1,b_{\sigma 1}}Z_{a_2,b_{\sigma 2}}\cdots Z_{a_k,b_{\sigma k}}\qquad \left( \text{by \eqref{darij.pf.thm1.1}} \right) \\&= Z_{a_1,c_1} Z_{a_2,c_2} \cdots Z_{a_k,c_k} \mod W\end{align}(since $b_{\sigma i}=c_i$ for every $i\in\left\{1,2,\ldots,k\right\}$). But since $a_1\leq a_2\leq \cdots\leq a_k$ and $c_1\leq c_2\leq \cdots\leq c_k$, we have $Z_{a_1,c_1} Z_{a_2,c_2} \cdots Z_{a_k,c_k}\in T$ (by the definition of $T$), so this rewrites as follows:\begin{align}\mu&\equiv \left(\text{an element of }T\right)\mod W .\end{align}In other words, $\mu\in T+W$. Since this holds for every monomial $\mu$ in $\kk\left[Z_{i,j}\mid \left(i,j\right)\in M\times N\right]$, this proves \eqref{darij.pf.thm1.2}.] 
Since the monomials in $\kk\left[Z_{i,j}\mid \left(i,j\right)\in M\times N\right]$ generate the $\kk$-module $\kk\left[Z_{i,j}\mid \left(i,j\right)\in M\times N\right]$, and since $T+W$ is a submodule of this $\kk$-module, we obtain $\kk\left[Z_{i,j}\mid \left(i,j\right)\in M\times N\right]=T+W$ from \eqref{darij.pf.thm1.2}. Applying Lemma 2 to $C=\kk\left[Z_{i,j}\mid \left(i,j\right)\in M\times N\right]$, $A=T$, $B=W$ and $\psi=\phi$, we thus conclude that $\Ker \phi = W$. This proves Theorem 1. $\blacksquare$ There is yet another way to prove Theorem 1 -- namely, by revealing it to be a particular case of the Second Fundamental Theorem of Invariant Theory for GL. See https://mathoverflow.net/questions/202005/a-vector-version-of-the-segre-embedding-what-is-the-kernel-of-the-ring-map for this generalization. (Another place where this generalization appears with proof is Theorem 5.1 of J. Désarménien, Joseph P. S. Kung, Gian-Carlo Rota, Invariant Theory, Young Bitableaux, and Combinatorics, unofficial re-edition 2017; you just need to set $d = 1$, and realize that every standard $\left(\mathcal{X},\mathcal{U}\right)$-bideterminant of shape strictly longer than $\left(d\right)$ contains at least one row of length $\geq 2$, which is easily seen to place it inside the ideal $W$.)
Brachistochrone is Cycloid/Proof 2 Theorem Proof $v = \dfrac {\d s} {\d t}$ where $\d s$ is an infinitesimal arc length element: $\d s = \sqrt{1 + y'^2} \rd x$ Hence: $v = \sqrt {1 + y'^2} \dfrac {\d x} {\d t}$ $\dfrac {m v^2} 2 + m g y = E$ where $E$ is a constant of motion. To determine $E$ use the following initial conditions: $\tuple {\map x 0, \map y 0} = \mathbf 0$ $\tuple {\map {\dfrac {\d x} {\d t} } 0, \map {\dfrac {\d y} {\d t} } 0} = \mathbf 0$ Then it follows that: $E = 0$ and: $v = \sqrt {-2 g y}$ $\displaystyle T = \int_a^b \frac {\sqrt {1 + y'^2} } {\sqrt {-2 g y} } \rd x$ Application of Euler's Equation yields: $\dfrac {\sqrt {1 + y'^2} } {\sqrt {-2 g y} } - y' \dfrac {2 y'} {2 \sqrt {-2 g y} \sqrt {1 + y'^2} } = c$ or: $\sqrt C = \sqrt {-y \paren {1 + y'^2} }$ where $C = \dfrac 1 {2 c^2 g}$ The aforementioned differential equation can be rearranged to: $\dfrac {\d x} {\d y} = \pm \sqrt {\dfrac {-y} {y + C} }$ Since we want to describe a downwards sliding bead, we have: $\dfrac {\d y} {\d x} \le 0$ This differential equation can be solved for $\map x y$ in the following way:

\begin{align} x &= -\int \sqrt{\frac {-y} {y + C} } \rd y \\ &= \int \frac {\sqrt {C u - 1} } {u^2} \rd u && \text{substituting } u = \frac 1 {C + y} \\ &= -\frac {\sqrt {C u - 1} } u + \frac C 2 \int \frac {\d u} {u \sqrt {C u - 1} } && \text{Primitive of } \frac {\sqrt {a x + b} } {x^2} \\ &= -\frac {\sqrt {C u - 1} } u + C \map \arctan {\sqrt {C u - 1} } + C_1 && \text{Primitive of } \frac 1 {x \sqrt {a x + b} } \\ &= -\sqrt {-y \paren {C + y} } + C \map \arctan {\sqrt {\frac {-y} {C + y} } } + C_1 \end{align}

From the initial condition $\tuple {\map x 0, \map y 0} = \mathbf 0$ it follows that: $C_1 = 0$ Now substitute: $\sqrt {\dfrac {-y} {C + y} } = \tan \theta$ which can be solved for $y$:

\begin{align} \sqrt {\frac {-y} {C + y} } &= \tan \theta \\ \leadsto \quad \frac {-y} {C + y} &= \tan^2 \theta \\ \leadsto \quad -1 - \frac C y &= \frac 1 {\tan^2 \theta} \\ \leadsto \quad y &= -C \sin^2 \theta \\ &= -\frac C 2 \paren {1 - \map \cos {2 \theta} } && \text{Square of Sine} \\ &= -\frac C 2 \paren {1 - \cos \phi} && \text{introducing the parameter } \phi = 2 \theta \end{align}

Substitution into the expression for $x$ results in:

\begin{align} x &= C \paren {\theta - \frac {\map \sin {2 \theta} } 2 } \\ &= C \paren {\frac \phi 2 - \frac {\sin \phi} 2} \\ &= \frac C 2 \paren {\phi - \sin \phi} \end{align}

At the endpoint: $\intlimits {\dfrac {\d y} {\d x} } {x = b} {} = 0$ For parametric equations we can rewrite this as: $\dfrac {\d y} {\d x} = \dfrac {\d y} {\d \phi} \paren {\dfrac {\d x} {\d \phi} }^{-1}$ We need to find to which $\phi$ the point $x = b$ corresponds. Notice that: $\paren {\dfrac {\d y} {\d \phi} = 0 \land \dfrac {\d x} {\d \phi} \ne 0} \implies \paren {\dfrac {\d y} {\d x} = 0}$ Therefore: $\dfrac {\d y} {\d \phi} = -\dfrac C 2 \map \sin \phi$ and this derivative vanishes if: $\phi = \pi n, n \in \Z$ Similarly: $\dfrac {\d x} {\d \phi} = \dfrac C 2 \paren {1 - \map \cos \phi}$ which vanishes if: $\phi = 2 \pi n, n \in \Z$ By comparing both conditions on $\phi$ we limit the set of solutions to $\phi = \pi + 2 \pi n, n \in \Z$. We choose the nearest appropriate value corresponding to $x = b > 0$.
Then substitution into the expression for $x$ results in: $b = \dfrac {\pi C} 2$ and hence: $x = \dfrac b \pi \paren {\phi - \sin \phi}$ $y = -\dfrac b \pi \paren {1 - \map \cos \phi}$ This is the form of a cycloid, portrayed upside down. $\blacksquare$ Isaac Newton interpreted the problem as a direct challenge to his abilities, and (despite being out of practice) solved it in the evening before going to bed. He published his solution anonymously, but Bernoulli recognised whose solution it was, and commented: I recognise the lion by his print. With justice we admire Huygens because he first discovered that a heavy particle slides down to the bottom of a cycloid in the same time, no matter where it starts. But you will be petrified with astonishment when I say that this very same cycloid, the tautochrone of Huygens, is also the brachistochrone we are seeking.
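As a numerical aside (not part of the proof), one can check that the cycloid really beats the straight chord between the same endpoints. Along the cycloid, $\d t = \d s / v$ works out to the constant $\sqrt{C/(2g)} \rd \phi$, so the descent to $\phi = \pi$ takes $\pi \sqrt{C/(2g)}$; the chord time is a one-dimensional integral. The parameter value $C = 2$ below is an arbitrary choice:

```python
import numpy as np
from scipy.integrate import quad

g = 9.81
C = 2.0           # arbitrary cycloid parameter (an assumption for illustration)
phi_end = np.pi   # the bead reaches x = b at phi = pi

# endpoint of the cycloid x = (C/2)(phi - sin phi), y = -(C/2)(1 - cos phi)
x1 = 0.5 * C * (phi_end - np.sin(phi_end))
y1 = -0.5 * C * (1 - np.cos(phi_end))

# along the cycloid, ds/v reduces to the constant sqrt(C/(2g)) dphi
T_cycloid = np.sqrt(C / (2 * g)) * phi_end

# straight chord y = k x from the origin to (x1, y1), with v = sqrt(-2 g y)
k = y1 / x1
T_chord, _ = quad(lambda x: np.sqrt(1 + k**2) / np.sqrt(-2 * g * k * x), 0, x1)

print(T_cycloid, T_chord)  # the cycloid descent is shorter
```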
Marginal Product of Capital Marginal product of capital (MPK) is the incremental increase in total production that results from a one-unit increase in capital while keeping all other inputs constant. Identifying the marginal product of capital is important because firms take investment decisions by comparing their marginal product of capital with their cost of capital. When the marginal product of capital is higher than the cost of capital, it makes sense to increase production by adding capital, but as soon as the marginal product of capital falls below the cost of capital, adding any more capital reduces the firm's profit. An economy's total production is often modeled by the Cobb-Douglas production function with constant returns to scale. Under the Cobb-Douglas model, an economy's total production (Y) depends on its total factor productivity (A), its stock of labor (L) and capital (K) and the responsiveness of Y to each input: $$ \text{Y}=\text{A}\times \text{K}^\alpha\times \text{L}^{\text{1}-\alpha} $$ α is the output elasticity of capital and 1 - α is the output elasticity of labor. When an economy has constant returns to scale, any increase in capital while keeping labor constant results in diminishing marginal product of capital, as illustrated in the example below. Calculation The marginal product of capital of an economy represented by the Cobb-Douglas production function can be calculated using the following formula: $$ \text{MPK}=\alpha\times \text{A}\times \text{K}^{\alpha-\text{1}}\times \text{L}^{\text{1}-\alpha}=\alpha\times\frac{\text{Y}}{\text{K}} $$ The equations above are derived using a bit of calculus, i.e. by differentiating the Cobb-Douglas function with respect to K while keeping L constant.
$$ \text{MPK}=\frac{\partial \text{Y}}{\partial \text{K}}=\frac{\partial \text{A} \text{K}^\alpha \text{L}^{\text{1}-\alpha}}{\partial \text{K}} $$ $$ \text{MPK}=\alpha\times \text{AK}^{\alpha-\text{1}}\text{L}^{\text{1}-\alpha} $$ Moving $\text{K}^{\alpha-\text{1}}$ to the denominator gives us the following expression: $$ \text{MPK}=\alpha\times \text{A}\times\frac{\text{L}^{\text{1}-\alpha}}{\text{K}^{\text{1}-\alpha}} $$ Multiplying and dividing the right-hand side of the above equation by $\text{K}^\alpha$, we get: $$ \text{MPK}=\alpha\times \text{A}\times\frac{\text{L}^{\text{1}-\alpha}}{\text{K}^{\text{1}-\alpha}}\times\frac{\text{K}^\alpha}{\text{K}^\alpha} $$ A bit of rearrangement: $$ \text{MPK}=\alpha\times\frac{\text{A}\times \text{K}^\alpha\times \text{L}^{\text{1}-\alpha}}{\text{K}^{\text{1}-\alpha+\alpha}} $$ The numerator exactly equals Y and the denominator reduces to K: $$ \text{MPK}=\alpha\times\frac{\text{Y}}{\text{K}} $$ Example and Graph Let's consider an economy which produces only baby strollers and whose production is represented by the following equation: $$ \text{Y}=\text{2,000}\times \text{K}^{\text{0.5}}\times \text{L}^{\text{0.5}} $$ The following table shows the total number of baby strollers produced when the number of manufacturing plants increases but the trained labor available to operate them remains constant.

Number of Plants   Number of Workers   Total Production   Marginal Product
0                  500                 -                  -
1                  500                 44,721             44,721
2                  500                 63,246             18,524
3                  500                 77,460             14,214
4                  500                 89,443             11,983
5                  500                 100,000            10,557
6                  500                 109,545            9,545
7                  500                 118,322            8,777

The following chart shows total production on the y-axis and capital, K, on the x-axis. The total production curve gets flatter as more and more capital is added while keeping labor constant. This is because when capital increases without an associated increase in labor, there are not enough people to operate the machines, and hence the increase in production is lower. by Obaidullah Jan, ACA, CFA
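The table can be reproduced in a few lines (a sketch using the article's numbers); it also evaluates the closed-form $\text{MPK} = \alpha Y / K$ from the derivation, which the one-unit differences approach as K grows:

```python
A, alpha, L = 2000, 0.5, 500  # the stroller economy: Y = 2000 * K^0.5 * L^0.5

prev = 0.0
for K in range(1, 8):
    Y = A * K**alpha * L**(1 - alpha)
    mp_discrete = Y - prev  # marginal product as a one-unit difference
    mpk = alpha * Y / K     # MPK = alpha * Y / K from the derivation
    print(f"{K}  {Y:10.0f}  {mp_discrete:10.0f}  {mpk:10.0f}")
    prev = Y
```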
Mathematics - Functional Analysis and Mathematics - Metric Geometry Abstract The following strengthening of the Elton-Odell theorem on the existence of $(1+\epsilon)$-separated sequences in the unit sphere $S_X$ of an infinite dimensional Banach space $X$ is proved: there exist an infinite subset $S\subseteq S_X$ and a constant $d>1$ satisfying the property that for every $x,y\in S$ with $x\neq y$ there exists $f\in B_{X^*}$ such that $d\leq f(x)-f(y)$ and $f(y)\leq f(z)\leq f(x)$, for all $z\in S$. Comment: 15 pages, to appear in Bulletin of the Hellenic Mathematical Society

Satyric drama; shepherds; literary tradition; epic Abstract The space and the pastoral characters of the only fully preserved satyr play, Euripides’ Cyclops, have been studied in relation to their Homeric hypotext, even though this line of research has not been applied to the fragmentary satyr plays. This article argues that the pastoral elements found in Aeschylus and Sophocles draw on a literary tradition in which the portrait of the shepherd and the landscape descend from epic poetry, especially Homer’s.

Given a finite dimensional Banach space X with dim X = n and an Auerbach basis of X, it is proved that there exists a set D of n + 1 linear combinations (with coordinates 0, -1, +1) of the members of the basis, so that each pair of distinct elements of D has distance greater than one. Comment: 15 pages.
To appear in MATHEMATIKA

Antigone; Sophocles; Alberto de Zavalía; El límite; dictatorship; freedom; Argentine civil wars Abstract Because she resonates with many of the tensions experienced in Argentina throughout the twentieth century, the figure of Oedipus’s daughter, as has happened in other Western literatures, attracted great interest in this Latin American country. From a significant list of five Argentine recreations of Antigone produced in this period, we will focus only on Alberto de Zavalía’s El límite (1958), which symbolically links events of nineteenth-century Argentine history to the Antigone myth in order to surreptitiously interpret a present marked by successive wars and dictatorships which, as the author says in the preface of his work, always begin mortally wounded, because freedom is reborn from its own ashes. Fernández-Biggs, Braulio and García-Huidobro, Joaquín Subjects Antigone; Sophocles; Latin America; Forced disappearance; Leopoldo Marechal; Griselda Gambaro; Jorge Huertas; Daniela Cápona; Juan Carlos Villavicencio Abstract This paper analyzes five Latin American versions of Antigone, not frequently considered. The plays exhibit significant differences with regard to the original, most of them related to painful events in the recent history of the continent.
However, the plays have in common with Sophocles’ original the permanent anthropological features that have made Antigone a classic, expressed in five fundamental ideas: place, time, transcendence, the conflict of legal codes, and Antigone’s psychological state.

Antigone, Mário Sacramento, Sophocles, Reception of Greek tragedy, and Portuguese theatre Abstract Under the influence of Sophocles’ Antigone, Mário de Sacramento wrote a homonymous play, published on its own in 1959 in vol. XIX, no. 186 of the Revista Vértice and included the following year in the tetralogy entitled Teatro Anatómico. In this one-act play, Sophocles’ tragedy of the same name serves as a metatheatrical device of critical-reflexive character, in which the intertextual dialogue with the ancient tragic text promotes a dramatic reading of the unfortunate fate of the survivors of a French family, victims of the German occupation in the Second World War, who, like the last Labdacids, confront the suffering of extreme situations dictated by insoluble conflicts of the human condition. In this «dramatic essay» by Mário de Sacramento, the protagonist is a French woman, Ivonne, who, in the time of the Maquis, chooses “Antigone” as her code name. This study presents an analysis of the influence exerted by Sophocles’ Antigone on this «dramatic essay», at the level of the characterization of the dramatis personae and the development of the action, grounded in a critical reflection on the motivations of Oedipus’s daughter and the tragic meaning of her actions.

Truth-Telling, Literature, and Domínio/Área Científica::Humanidades Abstract Throughout the Western literary tradition, Antigone maintains a place of honour in the narration of power struggles. In recent times, her strenuous opposition to Creon’s absolute power inevitably recalls the role of resistance within the twentieth century’s totalitarian context.
However, the heroine’s juxtaposition to Creon undergoes a significant change in contemporary literary versions of typical Antigonean acts. In particular, Orwell’s Nineteen Eighty-Four and Wolf’s Cassandra show a situation similar to the polarized setting of Sophocles’ scene, but with a very different formulation of the dynamics between the parties. In the light of Michel Foucault’s analysis of power structures, this new relationship can be read as an attempt, on the resister’s part, at re-subjectification. One of the fundamental practices of this process is truth-telling, analyzed by the late Foucault in its classical formulation of parrhesia. By applying the philosopher’s breakdown of this concept to the endeavour performed by the novels’ protagonists, the political value of parrhesia emerges as both a form of resistance and a requirement for any anti-totalitarian setting. However, the pervasiveness of power binds truth-telling to a necessary process of “care of the self” leading to self-knowledge: a process that seems to be available only to elite groups. In the aftermath of last century’s totalitarianism, these Antigones descend to their death in order to deliver a powerful message of resistance, which is deeply personal and political, external and internal. Their main question to us remains: what kind of Antigones do we want for our society?
Agamemnon of the Iliad; tragedy; ethical progression; human level of action/divine level Abstract In the Iliad, the main heroic characters undergo a transition from a culture in which mankind is left in the hands of the gods, or of an unquestionable fate, to a position in which the human being assumes his responsibility within an interaction between the human and the divine levels, a position in accordance with which the heroes of Aeschylus, as well as those of Sophocles and Euripides, will behave under their respective dramatic premises.

Pain; Philoctetes; Aesthetic representation; Human significance Abstract Sophocles’ Philoctetes is the paradigm for assessing pain and its aesthetic representation. The morally educative function of art led eighteenth-century Germany to debate the expression of pain: should it be contained or externalized? Personal attitudes vary according to endurance, circumstances, social rules, and integration into cultural frames of reference. Pain does not consist only in the signals transmitted from the lesion to the brain. It involves the totality of one’s being, in its psychological, affective, moral and volitional tonalities. It breaks the rhythm of existence, disfigures identity, disarticulates discourse. Untransferable, it generates limitations. A singular ordeal, it exposes the universal possibility of suffering. It changes one’s understanding of reality. It raises the essential ontological question: the meaning of a living being consigned to suffering and annihilation.
Teixeira, Cláudia, Pimentel, Cristina, and Morão, Paula Subjects Cândido Lusitano, Translation, Oedipus, and Medea Abstract This paper discusses the reception and the principles that underlie Cândido Lusitano’s translations of Sophocles’ and Seneca’s Oedipus and of Euripides’ and Seneca’s Medea, which are still in manuscript. We will give special attention to the introductory notes which preface the works listed, and will try to assess the information they provide on the translations and its suitability to the normative elements of neoclassical aesthetics.

Always with explicit political objectives, António Sérgio recreated the myth of Antigone several times (1930, c. 1950, 1958), adjusting it to the object of his contestation. In the third and last version, written to rebel against the Estado Novo authoritarian regime which, in the fraudulent presidential elections of 1958, had successfully led its candidate to power, Sérgio resorts to the minimal essential elements of the tragic plot, which already incorporated the suitable and necessary rhetoric of protest and liberty. Keywords: António Sérgio, Antigone, Sophocles, Aristotle, minimal, allegory, political contestation, Humberto Delgado, salazarismo.

The Alexandros of Euripides is a tragedy based on the motif of the child exposed at birth, as in Sophocles’ Oedipus Rex or the legend of Cyrus told by Herodotus. This motif is intertwined in the tragedy with a series of episodes — the athletic contest, the victory and crowning of the victor, the anagnorisis — which characterise Alexandros as a potential tyrannos and which display a drama of political consequences. In the historical context of the representation — 415 B.C.
— it is possible to find some analogies between Alexandros and Alcibiades, the latter having won the Olympic games the year before and appearing to be as handsome and powerful as the protagonist of the tragedy, though the substantial political meaning of the play (and trilogy) lies elsewhere: in the difficulty of governing a city where participation becomes competition and where excellence poses a threat to the principle of equality and risks distorting itself through unrestrained eros and lust. Keywords: Alexandros; Euripides; motif of the child exposed at birth; Alcibiades; tragedy and politics.
@egreg It does this "I just need to make use of the standard hyphenation function of LaTeX, except "behind the scenes", without actually typesetting anything." (if not typesetting includes typesetting in a hidden box) it doesn't address the use case that he said he wanted that for @JosephWright ah yes, unlike the hyphenation near box question, I guess that makes sense, basically can't just rely on lccode anymore. I suppose you don't want the hyphenation code in my last answer by default? @JosephWright anyway if we rip out all the auto-testing (since mac/windows/linux come out the same anyway) but leave in the .cfg possibility, there is no actual loss of functionality if someone is still using a vms tex or whatever I want to change the tracking (space between the characters) for a sans serif font. I found that I can use the microtype package to change the tracking of the smallcaps font (\textsc{foo}), but I can't figure out how to make \textsc{} a sans serif font. @DavidCarlisle -- if you write it as "4 May 2016" you don't need a comma (or, in the u.s., want a comma). @egreg (even if you're not here at the moment) -- tomorrow is international archaeology day: twitter.com/ArchaeologyDay , so there must be someplace near you that you could visit to demonstrate your firsthand knowledge. @barbarabeeton I prefer May 4, 2016, for some reason (don't know why actually) @barbarabeeton but I have another question maybe better suited for you please: If a member of a conference scientific committee writes a preface for the special issue, can the signature say John Doe \\ for the scientific committee or is there a better wording? @barbarabeeton overrightarrow answer will have to wait, need time to debug \ialign :-) (it's not the \smash what did it) on the other hand if we mention \ialign enough it may interest @egreg enough to debug it for us. @DavidCarlisle -- okay. are you sure the \smash isn't involved?
i thought it might also be the reason that the arrow is too close to the "M". (\smash[t] might have been more appropriate.) i haven't yet had a chance to try it out at "normal" size; after all, \Huge is magnified from a larger base for the alphabet, but always from 10pt for symbols, and that's bound to have an effect, not necessarily positive. (and yes, that is the sort of thing that seems to fascinate @egreg.) @barbarabeeton yes I edited the arrow macros not to have relbar (ie just omit the extender entirely and just have a single arrowhead) but it still overprinted when in the \ialign construct but I'd already spent too long on it at work so stopped, may try to look this weekend (but it's uktug tomorrow) if the expression is put into an \fbox, it is clear all around. even with the \smash. so something else is going on. put it into a text block, with \newline after the preceding text, and directly following before another text line. i think the intention is to treat the "M" as a large operator (like \sum or \prod, but the submitter wasn't very specific about the intent.) @egreg -- okay. i'll double check that with plain tex. but that doesn't explain why there's also an overlap of the arrow with the "M", at least in the output i got. personally, i think that that arrow is horrendously too large in that context, which is why i'd like to know what is intended. @barbarabeeton the overlap below is much smaller, see the righthand box with the arrow in egreg's image, it just extends below and catches the serifs on the M, but the overlap above is pretty bad really @DavidCarlisle -- i think other possible/probable contexts for the \over*arrows have to be looked at also. this example is way outside the contexts i would expect. and any change should work without adverse effect in the "normal" contexts. @DavidCarlisle -- maybe better take a look at the latin modern math arrowheads ... @DavidCarlisle I see no real way out.
The CM arrows extend above the x-height, but the advertised height is 1ex (actually a bit less). If you add the strut, you end up with too big a space when using other fonts. MagSafe is a series of proprietary magnetically attached power connectors, originally introduced by Apple Inc. on January 10, 2006, in conjunction with the MacBook Pro at the Macworld Expo in San Francisco, California. The connector is held in place magnetically so that if it is tugged — for example, by someone tripping over the cord — it will pull out of the socket without damaging the connector or the computer power socket, and without pulling the computer off the surface on which it is located. The concept of MagSafe is copied from the magnetic power connectors that are part of many deep fryers... has anyone converted from LaTeX -> Word before? I have seen questions on the site but I'm wondering what the result is like... and whether the document is still completely editable etc after the conversion? I mean, if the doc is written in LaTeX, then converted to Word, is the Word editable? I'm not familiar with Word, so I'm not sure if there are things there that would just get goofed up or something. @baxx never use word (have a copy just because but I don't use it;-) but have helped enough people with things over the years, these days I'd probably convert to html latexml or tex4ht then import the html into word and see what comes out You should be able to cut and paste mathematics from your web browser to Word (or any of the Microsoft Office suite). Unfortunately at present you have to make a small edit but any text editor will do for that. Given x=\frac{-b\pm\sqrt{b^2-4ac}}{2a} Make a small html file that looks like <!... @baxx all the convertors that I mention can deal with document \newcommand to a certain extent. if it is just \newcommand\z{\mathbb{Z}} that is no problem in any of them, if it's half a million lines of tex commands implementing tikz then it gets trickier.
@baxx yes but they are extremes but the thing is you just never know, you may see a simple article class document that uses no hard-looking packages then get halfway through and find \makeatletter followed by several hundred lines of tricky tex macros copied from this site that are over-writing latex format internals.
Suppose we have a tower of field extensions: $F \subset E \subset K \subset \overline{F}$. Is it true in general that $|G(K/F)| = |G(K/E)| \cdot |G(E/F)|$? I was able to verify some specific examples, like $\mathbb{Q}(\sqrt[3]{2}, \omega)$ for $x^3-2$ and another extension, but how could I show that this holds in general for all such towers of extensions?
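A sketch of the standard argument, under the assumption that $K/F$ is finite and Galois (so $K/E$ is automatically Galois) and that $E/F$ is also Galois, so that the order of each Galois group equals the degree of the corresponding extension:

$$|G(K/F)| = [K:F] = [K:E]\,[E:F] = |G(K/E)| \cdot |G(E/F)|$$

The middle equality is the tower law for degrees of field extensions; without the Galois hypothesis on $E/F$, one only has $|G(E/F)| \leq [E:F]$ and the product formula can fail.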
Lower attic From Cantor's Attic Revision as of 09:18, 28 December 2011

Welcome to the lower attic, where we store the comparatively smaller notions of infinity. Roughly speaking, this is the realm of countable ordinals and their friends.

- $\omega_1$, the first uncountable ordinal, and the other uncountable cardinals of the middle attic
- The ordinals of infinite time Turing machines, including
  - $\Sigma$ = the supremum of the accidentally writable ordinals
  - $\omega_1^x$
- admissible ordinals
- $\Gamma$
- Church-Kleene $\omega_1^{ck}$, the supremum of the computable ordinals
- $\epsilon_0$ and the hierarchy of $\epsilon_\alpha$ numbers
- the small countable ordinals, those below $\epsilon_0$
- Hilbert's hotel
- $\omega$, the smallest infinity
- down to the subattic, containing very large finite numbers
Problem: I need to find the leading order term in an expansion whose leading order behavior is a priori unknown. I can of course go with Series and try different orders, say Series[f[x],{x,x0,n}] for different $n$; however, I would like to find a more elegant way. Also, each calculation takes very long (almost an hour), so this is not an efficient way (it takes so long because my function is complicated and involves special functions, mostly hypergeometric functions). There is already a very elegant answer to this question here: Series with a specified number of terms. Basically, it replaces the variable $x$ with its series data version, say $x+O[x]^2$, and then truncates the extra terms by multiplying by another SeriesData object. For example, (x^2 Sin[a x]^2 /. x -> Series[x, {x, 0, 1}]) (1 + O[x]) gives the correct leading order term $a^2 x^4$. My problem with this approach is that replacing $x$ with its SeriesData version gives incorrect results for symbolic powers. For example, let us consider the function $\left(-x^2\right)^a \sin (x)$. We see that FullSimplify[((-x^2)^a Sin[x] /. x -> Series[x, {x, 0, 1}]) (1 + O[x]) /. a ->1/2, x > 0] gives the incorrect result -x^2+O[x]^3, whereas the correct result is I x^2+O[x]^3, as can be obtained via FullSimplify[Series[(-x^2)^a Sin[x], {x, 0, 1}] /. a -> 1/2, x > 0] This problem can be avoided, at least in this particular example, if one does not use the SeriesData replacement but only chops off higher order terms with the command: FullSimplify[((-x^2)^a Sin[x]) (1 + O[x]) /. a -> 1/2, x > 0] Also, the problem does not arise if the symbolic coefficient is given from the beginning, since (-x^2)^(-1/2) /. x -> Series[x, {x, 0, 1}] yields $$ \begin{cases} \left(\frac{1}{\sqrt{-x^2}}\right)^*+O\left(x^1\right) & -\Im\left(x^2\right)<0 \\ \frac{1}{\sqrt{-x^2}}+O\left(x^1\right) & \text{True} \end{cases}$$ accounting for the possibly complex nature of $x$, whereas the same command fails for a symbolic power, as (-x^2)^a /.
x -> Series[x, {x, 0, 1}] directly yields $$(-x)^{2 a} \left(1+O\left(x^1\right)\right) $$ Questions Is it a bug that Mathematica evaluates $(-x^2)^a$ to $(-x)^{2a}$ when $x$ is replaced with the SeriesData form $x+O[x]^2$? If it is not a bug, then why does this feature not commute with substituting a symbolic power? (I am under the impression that $a\to 1/2$ should commute with $x\to x+O[x]^2$.) I tried wrapping with Assuming, giving the information $x>0$ etc. in $Assumptions, and using $Pre = Refine etc., but none of them helped. Does anyone have any idea that I can use to ensure that replacing $x$ with SeriesData works correctly? My main requirement is actually finding the leading order term. Would only multiplying by $(1+O[x])$ at the end work as well? (It works for this small example, but my function is a long expression including tens of hypergeometric functions, so I am uncertain how well this method would work.) Does anyone know an alternative way of finding the leading order term in a series expansion with symbolic exponents? I am using version 11.2. Is the situation the same in other versions? (I plan to upgrade once 11.3 is released this month.)
All topologies induced by a norm on finite dimensional spaces (of the same dimension) are equivalent to the standard topology on $\mathbb R^n$, so it is very much up to you how to choose your topology, noting that $\dim V^*=\dim V$. If you'd like, a dual vector in $V^*$ can be identified with some vector $v=(v_1, \dots, v_n) \in V$ via the dual basis construction (not canonically). One can then define the usual open ball topology induced by the norm $\|v\|:=\sqrt{v_1^2+ \dots+v_n^2}$. A slightly more sophisticated approach, more amenable to generalization, is the idea of the "continuous dual," which consists of the linear functions $f:V \to k$ that are continuous (for finite dimensional spaces, this is every linear function). Here, the topology would be induced by the operator norm $$\sup_{v \in V,\, v \neq 0} \frac{\|f(v)\|}{\|v\|}$$ where $\| \cdot\|$ is any norm on $V$. All in all: under the usual identification of a dual vector space with the original space, closed sets look exactly how you would expect them to.
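The operator norm above can be illustrated numerically. A sketch in pure Python, with an arbitrarily chosen vector w defining the functional f(v) = w · v on $\mathbb R^4$ (the dimension and sample count are illustrative choices, not from the answer):

```python
import math
import random

random.seed(0)
n = 4
w = [random.gauss(0, 1) for _ in range(n)]  # defines the functional f(v) = w . v

def f(v):
    return sum(wi * vi for wi, vi in zip(w, v))

def norm(v):
    return math.sqrt(sum(vi * vi for vi in v))

# Estimate sup_{v != 0} |f(v)| / ||v|| by sampling random directions
best = 0.0
for _ in range(100000):
    v = [random.gauss(0, 1) for _ in range(n)]
    best = max(best, abs(f(v)) / norm(v))

# By Cauchy-Schwarz the supremum is the Euclidean norm of w, attained at v = w
print(best, norm(w))
```

The sampled maximum stays below ‖w‖ and approaches it, consistent with the identification of the dual norm of f with the Euclidean norm of w.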
Here’s an interesting little problem that came across my desk this afternoon: how much time is \(10!\) seconds? Is it a duration that is best measured in seconds? days? centuries? And, perhaps more importantly, what is the best way of figuring it out? Think about it for a minute, then check below the fold for the answer, which is a surprisingly round number! One possible way to answer the question is to multiply out \(10!\) using a calculator or other device, then multiply by the appropriate conversion factors to first convert to minutes, then hours, then days, and so on, until something tractable pops out. On the other hand, \(10!\) is a large number, and multiplying it out is impractical without a calculator. Moreover, there is something inelegant about all of that computation. My approach, which uses unit analysis and prime factorization, is as follows: \[\begin{aligned} 10!\text{ sec} &= \frac{10!\text{ sec}}{1} \cdot \frac{1\text{ min}}{60\text{ sec}} \cdot \frac{1\text{ hr}}{60\text{ min}} \cdot \frac{1\text{ day}}{24\text{ hr}} \cdot \frac{1\text{ week}}{7\text{ day}} && \text{(1)}\\ &= \frac{10!}{1\cdot (2\cdot 2\cdot 3\cdot 5)\cdot (2\cdot 2\cdot 3\cdot 5) \cdot (2\cdot 2\cdot 2\cdot 3) \cdot 7}\text{ weeks} && \text{(2)}\\ &= \frac{10!}{(5\cdot 2)\cdot (3\cdot 3)\cdot (2\cdot 2\cdot 2)\cdot (7)\cdot (5)\cdot (2\cdot 2)\cdot (3)\cdot (2)\cdot 1}\text{ weeks} && \text{(3)}\\ &= \frac{10\cdot 9\cdot 8\cdot 7\cdot 6\cdot 5\cdot 4\cdot 3\cdot 2\cdot 1}{10\cdot 9\cdot 8\cdot 7\cdot 5\cdot 4\cdot 3\cdot 2\cdot 1}\text{ weeks} && \text{(4)}\\ &= 6\text{ weeks}. \end{aligned}\] In line (1), we write out the conversion factors that we are going to need. Initially, I wasn’t sure how many would be necessary, but I figured that having a 7 in the denominator might be useful (since \(10!\) has a factor of 7), so I went out to weeks. It turns out that this was the Right Thing to Do™.
Next, we cancel all of the units (seconds in the numerator cancel seconds in the denominator, and so on), leaving only weeks in the numerator and no units in the denominator. We also factor the terms in the denominator, which will prove to be useful in the next step. The result is given in line (2), where each set of grouped terms represents one of the conversion factors. The number \(10!\) is the product of all of the natural numbers up to 10, i.e. 1, 2, …, 10. Since all of those numbers must be factors of the numerator, we try to group the terms in the denominator so that we get some cancelation. This rearrangement of the denominator is given in line (3), then simplified in line (4). Finally, we simplify, obtaining 6 weeks!
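For the skeptical, the unit-analysis answer is easy to confirm by brute force; a quick Python check:

```python
import math

seconds = math.factorial(10)         # 10! = 3,628,800 seconds
seconds_per_week = 60 * 60 * 24 * 7  # 604,800 seconds in a week
print(seconds / seconds_per_week)    # 6.0 -- exactly six weeks
```

The division comes out exact, just as the prime-factorization argument predicts.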
Equivalence of Definitions of Local Basis/Local Basis for Open Sets Implies Neighborhood Basis of Open Sets Theorem Let $T = \struct {S, \tau}$ be a topological space. Let $x$ be an element of $S$. Let $\mathcal B$ be a set of open sets of $T$, each containing $x$, satisfying: $\forall U \in \tau: x \in U \implies \exists H \in \mathcal B: H \subseteq U$ Then $\mathcal B$ satisfies: every neighborhood of $x$ contains a set in $\mathcal B$. Proof Let $N$ be a neighborhood of $x$. Then, by definition of neighborhood, there exists $U \in \tau$ such that $x \in U$ and $U \subseteq N$. By assumption, there exists $H \in \mathcal B$ such that $H \subseteq U$. From Subset Relation is Transitive, $H \subseteq N$. The result follows. $\blacksquare$
NCERT Solutions for Class 11 Chemistry Chapter 6 Chemical Thermodynamics is provided here. This topic is extremely important in chemistry, both for Class 11 and Class 12 and for competitive exams like JEE and NEET. Students must have a good knowledge of the topic in order to excel in the examination. We at BYJU’S provide the NCERT Solutions for Class 11 Chemistry Chemical Thermodynamics PDF, which comprises answers to the textbook questions, exemplar problems, MCQs, short answer questions, HOTS, tips and tricks. Students can download the solutions in PDF format for free.

Introduction to Chemical Thermodynamics Thermodynamics is a branch of science that deals with the relationship between heat and other forms of energy. The part of the universe where observations are made is called the system; the surroundings are the part of the universe excluding the system. Based on the exchange of energy and matter, systems are classified into 3 types: closed, open and isolated. In a closed system, only energy can be exchanged with the surroundings. In an open system, energy as well as matter can be exchanged with the surroundings. In an isolated system, neither energy nor matter can be exchanged with the surroundings. This was a brief introduction to Chemical Thermodynamics.

Subtopics included in Class 11 Chemistry Chapter 6 Chemical Thermodynamics: Thermodynamic Terms; The System and the Surroundings; Types of the System; The State of the System; Internal Energy as a State Function; Applications; Work; Enthalpy, H; Measurement of ΔU and ΔH: Calorimetry; Enthalpy Change, ΔrH of a Reaction (Reaction Enthalpy); Enthalpies for Different Types of Reactions; Spontaneity; Gibbs Energy Change and Equilibrium.

Class 11 Chemistry Chapter 6 Chemical Thermodynamics Important Questions

Q-1: A thermodynamic state function is _______.
1. A quantity which depends upon temperature only.
2. A quantity which determines pressure-volume work.
3. A quantity which is independent of path.
4.
A quantity which determines heat changes.
Ans: (3) A quantity which is independent of path. Reason: State functions like pressure, volume and temperature depend only on the state of the system and not on the path.

Q-2: Which of the following is the correct condition for an adiabatic process?
1. q = 0
2. w = 0
3. \(\Delta p = 0\)
4. \(\Delta T = 0\)
Ans: (1) q = 0 Reason: For an adiabatic process the heat transfer is zero, i.e. q = 0.

Q-3: The value of enthalpy for all elements in their standard state is _____.
(1) Zero (2) < 0 (3) Different for every element (4) Unity
Ans: (1) Zero

Q-4: For combustion of methane, \(\Delta U^{\Theta }\) is -Y \(kJ mol^{-1}\). Then the value of \(\Delta H^{\Theta }\) is ____.
(a) > \(\Delta U^{\Theta }\) (b) = \(\Delta U^{\Theta }\) (c) = 0 (d) < \(\Delta U^{\Theta }\)
Ans: (d) < \(\Delta U^{\Theta }\)
Reason: \(\Delta H^{\Theta } = \Delta U^{\Theta } + \Delta n_{g}RT\); with \(\Delta U^{\Theta }\) = -Y \(kJ mol^{-1}\) and \(\Delta n_{g}\) negative for the combustion of methane, \(\Delta H^{\Theta } = (-Y) + \Delta n_{g}RT \Rightarrow \Delta H^{\Theta } < \Delta U^{\Theta }\).

Q-5: For methane, dihydrogen and graphite the enthalpies of combustion at 298 K are -890.3 \(kJ mol^{-1}\), -285.8 \(kJ mol^{-1}\) and -393.5 \(kJ mol^{-1}\) respectively. Find the enthalpy of formation of methane gas.
(a) -52.27 \(kJ mol^{-1}\) (b) 52 \(kJ mol^{-1}\) (c) +74.8 \(kJ mol^{-1}\) (d) -74.8 \(kJ mol^{-1}\)
Ans: (d) -74.8 \(kJ mol^{-1}\)
CH4(g) + 2O2(g) \(\rightarrow\) CO2(g) + 2H2O(g)
C(s) + O2(g) \(\rightarrow\) CO2(g)
2H2(g) + O2(g) \(\rightarrow\) 2H2O(g)
Target: C(s) + 2H2(g) \(\rightarrow\) CH4(g)
\(\Delta _{f}H\) = [-393.5 + 2(-285.8) - (-890.3)] \(kJ mol^{-1}\) = -74.8 \(kJ mol^{-1}\)

Q-6: A reaction, X + Y \(\rightarrow\) U + V + q (heat), has a positive entropy change. Then the reaction ____.
(a) will be possible at low temperature only (b) will be possible at high temperature only (c) will be possible at any temperature (d) won’t be possible at any temperature
Ans: (c) will be possible at any temperature
Reason: For a spontaneous reaction, \(\Delta G\) should be negative, where \(\Delta G\) = \(\Delta H\) - T\(\Delta S\). As given in the question, \(\Delta H\) is negative (as heat is evolved) and \(\Delta S\) is positive. Therefore \(\Delta G\) is negative at every temperature, so the reaction will be possible at any temperature.

Q-7: In a process, the system absorbs 801 J of heat and the work done by the system is 594 J. Find \(\Delta U\) for the given process.
Ans: As per the first law of thermodynamics, \(\Delta U\) = q + W, where
W = work done on the system = -594 J (as work is done by the system)
q = +801 J (positive as heat is absorbed)
Now, \(\Delta U\) = 801 + (-594) = 207 J

Q-8: The reaction given below was carried out in a bomb calorimeter, and at 298 K we get \(\Delta U\) = -753.7 \(kJ mol^{-1}\). Find \(\Delta H\) at 298 K.
NH2CN(g) + (3/2)O2(g) \(\rightarrow\) N2(g) + CO2(g) + H2O(l)
Ans: \(\Delta H\) is given by \(\Delta H = \Delta U + \Delta n_{g}RT\) ………………(1), where \(\Delta n_{g}\) = change in the number of moles of gas and \(\Delta U\) = change in internal energy.
Here, \(\Delta n_{g} = \sum n_{g}(products) - \sum n_{g}(reactants)\) = (1 + 1) - (1 + 1.5) = -0.5 mol.
T = 298 K, \(\Delta U\) = -753.7 \(kJmol^{-1}\), R = \(8.314\times 10^{-3}kJmol^{-1}K^{-1}\)
Now, from (1): \(\Delta H = (-753.7 kJmol^{-1}) + (-0.5mol)(298K)(8.314\times 10^{-3}kJmol^{-1}K^{-1})\) = -753.7 - 1.2 = -754.9 \(kJmol^{-1}\)

Q-9: Calculate the heat (in kJ) required for 50.0 g of aluminium to raise its temperature from \(45^{\circ}C\) to \(65^{\circ}C\). The molar heat capacity of aluminium is 24 \(J mol^{-1}K^{-1}\).
Ans: Expression for heat (q): \(q = nC_{p}\Delta T\) ………………….(a), where \(\Delta T\) = change in temperature, \(C_{p}\) = molar heat capacity, and n = number of moles of substance (molar mass of Al = 27 g/mol).
From (a): \(q = (\frac{50}{27}mol)(24 Jmol^{-1}K^{-1})(20K)\) = 888.88 J ≈ 0.89 kJ

Q-10: Calculate \(\Delta H\) for the transformation of 1 mole of water into ice from \(10^{\circ}C\) to \((-10)^{\circ}C\).
\(\Delta _{fus}H = 6.03\ kJmol^{-1}\) at \(0^{\circ}C\). \(C_{p}[H_{2}O_{(l)}] = 75.3 J\;mol^{-1}K^{-1}\), \(C_{p}[H_{2}O_{(s)}] = 36.8 J\;mol^{-1}K^{-1}\). Ans: \(\Delta H_{total}\) is the sum of the changes given below: (a) the energy change when 1 mole of water cools from \(10^{\circ}C\) to \(0^{\circ}C\); (b) the energy change when 1 mole of water at \(0^{\circ}C\) freezes to 1 mole of ice at \(0^{\circ}C\); (c) the energy change when 1 mole of ice cools from \(0^{\circ}C\) to \((-10)^{\circ}C\). So \(\Delta H_{total} = C_{p}[H_{2}O_{(l)}]\Delta T + \Delta H_{freezing} + C_{p}[H_{2}O_{(s)}]\Delta T\) = (75.3 \(J mol^{-1}K^{-1}\))(0 \(-\) 10) K + (\(-6.03\times 1000\) \(J mol^{-1}\)) + (36.8 \(J mol^{-1}K^{-1}\))(\(-10-0\)) K = \(-753\) \(J mol^{-1}\) \(-\) 6030 \(J mol^{-1}\) \(-\) 368 \(J mol^{-1}\) = \(-7151\) \(J mol^{-1}\) = \(-7.151\) \(kJ mol^{-1}\). Thus, the required enthalpy change for the given transformation is \(-7.151\) \(kJ mol^{-1}\).

Q-11: The enthalpies of formation of CO2(g), CO(g), N2O4(g) and N2O(g) are \(-393\) \(kJ mol^{-1}\), \(-110\) \(kJ mol^{-1}\), 9.7 \(kJ mol^{-1}\) and 81 \(kJ mol^{-1}\) respectively. Then \(\Delta _{r}H\) = _____ for N2O4(g) + 3CO(g) \(\rightarrow\) N2O(g) + 3CO2(g). Ans: "\(\Delta _{r}H\) for any reaction is defined as the difference between the \(\Delta _{f}H\) values of the products and the \(\Delta _{f}H\) values of the reactants." \(\Delta _{r}H = \sum \Delta _{f}H (products) - \sum \Delta _{f}H (reactants)\) Now, for N2O4(g) + 3CO(g) \(\rightarrow\) N2O(g) + 3CO2(g), substituting the given values: \(\Delta _{r}H\) = [{81 \(kJ mol^{-1}\) + 3(\(-393\)) \(kJ mol^{-1}\)} \(-\) {9.7 \(kJ mol^{-1}\) + 3(\(-110\)) \(kJ mol^{-1}\)}], so \(\Delta _{r}H\) = \(-777.7\) \(kJ mol^{-1}\).

Q-12: The enthalpy of combustion of C to CO2 is \(-393.5\) \(kJ mol^{-1}\). Determine the heat released on the formation of 37.2 g of CO2 from dioxygen and carbon.
Ans: The formation of carbon dioxide from dioxygen and carbon is given as: C(s) + O2(g) \(\rightarrow\) CO2(g); \(\Delta _{f}H\) = \(-393.5\) \(kJ mol^{-1}\). 1 mole of CO2 = 44 g, so the heat released during the formation of 44 g of CO2 is 393.5 kJ. Therefore, the heat released during the formation of 37.2 g of CO2 is \(\frac{-393.5\,kJ}{44\,g}\times 37.2\,g\) = \(-332.69\) kJ.

Q-13: N2(g) + 3H2(g) \(\rightarrow\) 2NH3(g); \(\Delta _{r}H^{\Theta }\) = \(-92.4\) \(kJ mol^{-1}\). The standard enthalpy of formation of ammonia gas is _____. Ans: "The standard enthalpy of formation of a compound is the enthalpy change that takes place during the formation of 1 mole of the substance, in its standard state, from its constituent elements in their standard states." Dividing the chemical equation given in the question by 2, we get (1/2)N2(g) + (3/2)H2(g) \(\rightarrow\) NH3(g). Therefore, the standard enthalpy of formation of ammonia gas = (1/2)\(\Delta _{r}H^{\Theta }\) = (1/2)(\(-92.4\) \(kJ mol^{-1}\)) = \(-46.2\) \(kJ mol^{-1}\).

Q-14: Determine the standard enthalpy of formation of CH3OH(l) from the data given below: CH3OH(l) + (3/2)O2(g) \(\rightarrow\) CO2(g) + 2H2O(l); \(\Delta _{r}H^{\Theta }\) = \(-726\) \(kJ mol^{-1}\). C(s) + O2(g) \(\rightarrow\) CO2(g); \(\Delta _{c}H^{\Theta }\) = \(-393\) \(kJ mol^{-1}\). H2(g) + (1/2)O2(g) \(\rightarrow\) H2O(l); \(\Delta _{f}H^{\Theta }\) = \(-286\) \(kJ mol^{-1}\). Ans: The formation reaction is C(s) + 2H2(g) + (1/2)O2(g) \(\rightarrow\) CH3OH(l) …………………………(i). By Hess's law, \(\Delta _{f}H^{\Theta }\)[CH3OH(l)] = \(\Delta _{c}H^{\Theta } + 2\Delta _{f}H^{\Theta } - \Delta _{r}H^{\Theta }\) = (\(-393\) \(kJ mol^{-1}\)) + 2(\(-286\) \(kJ mol^{-1}\)) \(-\) (\(-726\) \(kJ mol^{-1}\)) = (\(-393 - 572 + 726\)) \(kJ mol^{-1}\) = \(-239\) \(kJ mol^{-1}\). Thus, \(\Delta _{f}H^{\Theta }\)[CH3OH(l)] = \(-239\) \(kJ mol^{-1}\).

Q-15: Calculate \(\Delta H\) for the process CCl4(g) \(\rightarrow\) C(g) + 4Cl(g) and determine the value of the C–Cl bond enthalpy in CCl4(g). \(\Delta _{vap}H^{\Theta }\)(CCl4) = 30.5 \(kJ mol^{-1}\). \(\Delta _{f}H^{\Theta }\)(CCl4) = \(-135.5\) \(kJ mol^{-1}\).
\(\Delta _{a}H^{\Theta }\)(C) = 715 \(kJ mol^{-1}\), where \(\Delta _{a}H^{\Theta }\) is the enthalpy of atomisation; \(\Delta _{a}H^{\Theta }\)(Cl2) = 242 \(kJ mol^{-1}\). Ans: The chemical equations corresponding to the given enthalpies are: (1) CCl4(l) \(\rightarrow\) CCl4(g); \(\Delta _{vap}H^{\Theta }\) = 30.5 \(kJ mol^{-1}\) (2) C(s) \(\rightarrow\) C(g); \(\Delta _{a}H^{\Theta }\) = 715 \(kJ mol^{-1}\) (3) Cl2(g) \(\rightarrow\) 2Cl(g); \(\Delta _{a}H^{\Theta }\) = 242 \(kJ mol^{-1}\) (4) C(s) + 2Cl2(g) \(\rightarrow\) CCl4(l); \(\Delta _{f}H^{\Theta }\) = \(-135.5\) \(kJ mol^{-1}\). \(\Delta H\) for the process CCl4(g) \(\rightarrow\) C(g) + 4Cl(g) is then (2) + 2\(\times\)(3) \(-\) (1) \(-\) (4): = (715 \(kJ mol^{-1}\)) + 2(242 \(kJ mol^{-1}\)) \(-\) (30.5 \(kJ mol^{-1}\)) \(-\) (\(-135.5\) \(kJ mol^{-1}\)). Therefore \(\Delta H = 1304\,kJmol^{-1}\). The value of the C–Cl bond enthalpy in CCl4(g) = \(\frac{1304}{4}kJmol^{-1}\) = 326 \(kJ mol^{-1}\).

Q-16: For an isolated system, \(\Delta U\) = 0; what will be \(\Delta S\)? Ans: \(\Delta S\) is positive; \(\Delta S\) > 0. Since \(\Delta U\) = 0 for an isolated system, \(\Delta S\) must be positive for a change to occur, and as a result the process will be spontaneous.

Q-17: The following reaction takes place at 298 K: 2X + Y \(\rightarrow\) Z, with \(\Delta H\) = 400 \(kJ mol^{-1}\) and \(\Delta S\) = 0.2 \(kJ mol^{-1}K^{-1}\). Find the temperature at which the reaction becomes spontaneous, considering \(\Delta S\) and \(\Delta H\) to be constant over the entire temperature range. Ans: \(\Delta G = \Delta H - T\Delta S\). If the reaction is at equilibrium, \(\Delta G\) = 0 and T = \((\Delta H - \Delta G)\frac{1}{\Delta S}\) = \(\frac{\Delta H}{\Delta S}\) = 400 \(kJ mol^{-1}\)/0.2 \(kJ mol^{-1}K^{-1}\). Therefore, T = 2000 K. Thus, for spontaneity, \(\Delta G\) must be negative, which requires T > 2000 K.

Q-18: 2Cl(g) \(\rightarrow\) Cl2(g). What are the signs of \(\Delta S\) and \(\Delta H\) for the above reaction? Both \(\Delta S\) and \(\Delta H\) have a negative sign. Ans: The reaction given in the question represents the formation of a Cl2 molecule from Cl atoms.
As a bond is formed in the given reaction, energy is released, so \(\Delta H\) is negative. Also, 2 moles of chlorine atoms have more randomness than 1 mole of chlorine molecules, so the randomness decreases; thus \(\Delta S\) is negative.

Q-19: 2X(g) + Y(g) \(\rightarrow\) 2D(g), with \(\Delta U^{\Theta }\) = \(-10.5\) kJ and \(\Delta S^{\Theta }\) = \(-44.1\) \(JK^{-1}\). Determine \(\Delta G^{\Theta }\) for the given reaction, and predict whether the reaction can occur spontaneously or not. Ans: For 2X(g) + Y(g) \(\rightarrow\) 2D(g), \(\Delta n_{g}\) = 2 \(-\) 3 = \(-1\) mol. Putting the value of \(\Delta U^{\Theta }\) into the expression \(\Delta H^{\Theta } = \Delta U^{\Theta } + \Delta n_{g}RT\): \(\Delta H^{\Theta }\) = (\(-10.5\) kJ) + (\(-1\))(\(8.314\times 10^{-3}kJK^{-1}mol^{-1}\))(298 K) = \(-10.5\) kJ \(-\) 2.48 kJ, so \(\Delta H^{\Theta }\) = \(-12.98\) kJ. Putting the values of \(\Delta S^{\Theta }\) and \(\Delta H^{\Theta }\) into the expression \(\Delta G^{\Theta } = \Delta H^{\Theta } - T\Delta S^{\Theta }\): \(\Delta G^{\Theta }\) = \(-12.98\) kJ \(-\) (298 K)(\(-44.1\times 10^{-3}\) \(kJK^{-1}\)) = \(-12.98\) kJ + 13.14 kJ, so \(\Delta G^{\Theta }\) = 0.16 kJ. As \(\Delta G^{\Theta }\) is positive, the reaction won't occur spontaneously.

Q-20: Find the value of \(\Delta G^{\Theta }\) for a reaction whose equilibrium constant is 10, given that T = 300 K and R = \(8.314\times 10^{-3}kJK^{-1}mol^{-1}\). Ans: \(\Delta G^{\Theta }\) = \(-2.303RT\log K_{eq}\) = \(-\)(2.303)(\(8.314\times 10^{-3}kJK^{-1}mol^{-1}\))(300 K)(\(\log 10\)) = \(-5744.14\) \(Jmol^{-1}\) = \(-5.744\) \(kJmol^{-1}\).

Q-21: What can be said about the thermodynamic stability of NO(g), given (1/2)N2(g) + (1/2)O2(g) \(\rightarrow\) NO(g); \(\Delta _{r}H^{\Theta } = 90kJmol^{-1}\) and NO(g) + (1/2)O2(g) \(\rightarrow\) NO2(g); \(\Delta _{r}H^{\Theta } = -74kJmol^{-1}\)? Ans: The positive value of \(\Delta _{r}H\) shows that heat is absorbed during the formation of NO(g) from O2 and N2. The product, NO(g), has more energy than the reactants. Thus, NO(g) is unstable.
The negative value of \(\Delta _{r}H\) shows that heat is evolved during the formation of NO2(g) from O2(g) and NO(g). The product, NO2(g), gets stabilised with minimum energy. Thus, unstable NO(g) converts into stable NO2(g).

Q-22: Determine \(\Delta S\) of the surroundings when 1 mole of H2O(l) is formed under standard conditions, given \(\Delta _{f}H^{\Theta } = -286kJmol^{-1}\). Ans: \(\Delta _{f}H^{\Theta } = -286kJmol^{-1}\) means that this amount of heat is evolved during the formation of 1 mole of H2O(l). Thus, the same heat will be absorbed by the surroundings: \(q_{surr}\) = +286 \(kJmol^{-1}\). Now, \(\Delta S_{surr}\) = \(q_{surr}/T\) = \(\frac{286kJmol^{-1}}{298K}\). Therefore, \(\Delta S_{surr} = 959.73Jmol^{-1}K^{-1}\).
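Several of the answers above (Q-4, Q-8 and Q-19) lean on the same relation, \(\Delta H = \Delta U + \Delta n_{g}RT\). As a quick numerical sanity check of that arithmetic, here is a minimal Python sketch using the values from Q-8 and Q-19; the function name is mine:

```python
R = 8.314e-3  # gas constant in kJ mol^-1 K^-1
T = 298.0     # temperature in K

def delta_H(delta_U, delta_n_g, T=T):
    """Convert an internal-energy change to an enthalpy change: dH = dU + dn_g*R*T."""
    return delta_U + delta_n_g * R * T

# Q-8: NH2CN(g) + (3/2)O2(g) -> N2(g) + CO2(g) + H2O(l); dn_g = 2 - 2.5 = -0.5
print(round(delta_H(-753.7, -0.5), 1))   # -754.9 kJ mol^-1, as in the answer

# Q-19: 2X(g) + Y(g) -> 2D(g); dn_g = 2 - 3 = -1
print(round(delta_H(-10.5, -1), 2))      # -12.98 kJ, as in the answer
```

Note that \(\Delta n_g\) counts only gaseous species, which is why the liquid water in Q-8 drops out of the mole count.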
ä is in the extended latin block and n is in the basic latin block so there is a transition there, but you would have hoped \setTransitionsForLatin would have not inserted any code at that point as both those blocks are listed as part of the latin block, but apparently not.... — David Carlisle 12 secs ago @egreg you are credited in the file, so you inherit the blame:-) @UlrikeFischer I was leaving it for @egreg to trace but I suspect the package makes some assumptions about what is safe, it offers the user "enter" and "exit" code for each block but xetex only has a single insert, the interchartoken at a boundary; the package isn't clear what happens at a boundary if the exit of the left class and the entry of the right are both specified, nor if anything is inserted at boundaries between blocks that are contained within one of the meta blocks like latin. Why do we downvote to a total vote of -3 or even lower? Weren't we a welcoming and forgiving community with the convention to only downvote to -1 (except for some extreme cases, like e.g., worsening the site design in every possible aspect)? @Skillmon people will downvote if they wish, and given that the rest of the network regularly downvotes, lots of new users will not know or not agree with a "-1" policy; I don't think it was ever really that regularly enforced, just that a few regulars regularly voted for bad questions to top them up if they got a very negative score. I still do that occasionally if I notice one. @DavidCarlisle well, when I was new there was like never a question downvoted to more than (or less?) -1. And I liked it that way. My first question on SO got downvoted to -infty before I deleted it and fixed my issues on my own. @DavidCarlisle I meant the total. Still the general principle applies: when you're new and your question gets downvoted too much this might cause the wrong impression.
@DavidCarlisle oh, subjectively I'd downvote that answer 10 times, but objectively it is not a good answer and might get a downvote from me, as you don't provide any reasoning for that, and I think that there should be a bit of reasoning with the opinion based answers, some objective arguments why this is good. See for example the other Emacs answer (still subjectively a bad answer), that one is objectively good. @DavidCarlisle and that other one got no downvotes. @Skillmon yes but many people just join for a while and come from other sites where downvoting is more common so I think it is impossible to expect there is no multiple downvoting, the only way to have a -1 policy is to get people to upvote bad answers more. @UlrikeFischer even harder to get than a gold tikz-pgf badge. @cis I'm not in the US but.... "Describe", while it does have a technical meaning close to what you want, is almost always used more casually to mean "talk about". I think I would say "Let k be a circle with centre M and radius r" @AlanMunn definitions.net/definition/describe gives a Webster's definition: to represent by drawing; to draw a plan of; to delineate; to trace or mark out; as, to describe a circle by the compasses; a torch waved about the head in such a way as to describe a circle If you are really looking for alternatives to "draw" in "draw a circle" I strongly suggest you hop over to english.stackexchange.com and create an account there and ask ... at least the number of native speakers of English will be bigger there and the gamification aspect of the site will ensure someone will rush to help you out. Of course there is also a chance that they will repeat the advice you got here: to use "draw". @0xC0000022L @cis You've got identical responses here from a mathematician and a linguist. And you seem to have an idea that because a word is informal in German, its translation in English is also informal. This is simply wrong.
And formality shouldn't be an aim in and of itself in any kind of writing. @0xC0000022L @DavidCarlisle Do you know the book "The Bronstein" in English? I think that's a good example of archaic mathematician language. But it is still possibly harder. Probably depends heavily on the translation. @AlanMunn I am very well aware of the differences of word use between languages (and my limitations in regard to my knowledge and use of English as a non-native speaker). In fact, words in different (related) languages sharing the same origin are kind of a hobby of mine. Needless to say, more than once the contemporary meanings didn't match 100%. However, your point about formality is well made. A book - in my opinion - is first and foremost a vehicle to transfer knowledge. No need to complicate matters by trying to sound ... well, overly sophisticated (?) ... The following MWE with showidx and imakeidx: \documentclass{book} \usepackage{showidx} \usepackage{imakeidx} \makeindex \begin{document} Test\index{xxxx} \printindex \end{document} generates the error: ! Undefined control sequence. <argument> \ifdefequal{\imki@jobname }{\@idxfile }{}{... @EmilioPisanty ok I see. That could be worth writing to the arXiv webmasters as this is indeed strange. However, it's also possible that the publishing of the paper got delayed; AFAIK the timestamp is only added later to the final PDF. @EmilioPisanty I would imagine they have frozen the epoch settings to get reproducible pdfs, not necessarily that helpful here but..., anyway it is better not to use \today in a submission as you want the authoring date not the date it was last run through tex and yeah, it's better not to use \today in a submission, but that's beside the point - a whole lot of arXiv eprints use the syntax and they're starting to get wrong dates @yo' it's not that the publishing got delayed.
arXiv caches the pdfs for several years but at some point they get deleted, and when that happens they only get recompiled when somebody asks for them again and, when that happens, they get imprinted with the date at which the pdf was requested, which then gets cached Does any of you on linux have issues running for foo in *.pdf ; do pdfinfo $foo ; done in a folder with suitable pdf files? My box says pdfinfo does not exist, but it clearly does when I run it on a single pdf file. @EmilioPisanty that's a relatively new feature, but I think they have a new enough tex, but not everyone will be happy if they submit a paper with \today and it comes out with some arbitrary date like 1st Jan 1970 @DavidCarlisle add \def\today{24th May 2019} in the INITEX phase and recompile the format daily? I agree, too much overhead. They should simply add "do not use \today" in these guidelines: arxiv.org/help/submit_tex @yo' I think you're vastly over-estimating the effectiveness of that solution (and it would not solve the problem with 20+ years of accumulated files that do use it) @DavidCarlisle sure. I don't know what the environment looks like on their side so I won't speculate. I just want to know whether the solution needs to be on the side of the environment variables, or whether there is a tex-specific solution @yo' that's unlikely to help with prints where the class itself calls from the system time. @EmilioPisanty well the environment vars do more than tex (they affect the internal id in the generated pdf or dvi and so produce reproducible output), but you could as @yo' showed redefine \today or the \year, \month, \day primitives on the command line @EmilioPisanty you can redefine \year \month and \day which catches a few more things, but same basic idea @DavidCarlisle could be difficult with inputted TeX files. It really depends on at which phase they recognize which TeX file is the main one to proceed.
And as their workflow is pretty unique, it's hard to tell which way is even compatible with it. "beschreiben", engl. "describe", comes from the mathematical technical language of the 16th century, that is, from Middle High German, and roughly means "construct". And that from the original meaning: describe as "making a curved movement". In the literary style of the 19th to the 20th century and in the GDR, this language was used. You can have that in English too: scribe (verb), to score a line on with a pointed instrument, as in metalworking https://www.definitions.net/definition/scribe @cis Yes, as @DavidCarlisle pointed out, there is a very technical mathematical use of 'describe' which is what the German version means too, but we both agreed that people would not know this use, so using 'draw' would be the most appropriate term. This is not about trendiness, just about making language understandable to your audience. Plan figure. The barrel circle over the median $s_b = |M_b B|$, which subtends the angle $\alpha$, also contains an isosceles triangle $M_b P B$ with the base $|M_b B|$ and the angle $\alpha$ at the point $P$. The altitude to the base of the isosceles triangle bisects both $|M_b B|$ at $M_{s_b}$ and the angle $\alpha$ at the apex. \par The centroid $S$ divides the medians in the ratio $2:1$, with the longer part lying on the side of the vertex. The point $A$ lies on the barrel circle and on a circle $\bigodot(S,\frac23 s_a)$ described about $S$ of radius…
For the following reaction: $$\ce{NiO2(s) + 4 H+(aq) + 2 Ag(s) -> Ni^2+(aq) + 2H2O(l) + 2Ag+(aq)}$$ $$ E^\circ = 2.48\ \mathrm V$$ Calculate the $\mathrm{pH}$ of the solution if $E = 2.23\ \mathrm V$ and $[\ce{Ag+}] = [\ce{Ni^2+}] = 0.023\ \mathrm{mol/l}$. My effort: So, I know that there is the Nernst equation: $$E=E^\circ-\left(0.0592/n\right)\log Q$$ Where $E^\circ$ = standard cell potential, $E$ = cell potential for non-standard conditions, and at non-standard pressures or concentrations, $$Q = \frac{[\text{products}]}{[\text{reactants}]}$$ I know that for this problem, $n = 4\ \mathrm{mol}$ $\ce{e-}$, and that $$Q = \frac{[0.023]^3}{[\ce{H+}]^4}$$ So: $$2.23 = 2.48 - \frac{0.0592}{4}\log\left(\frac{0.023^3}{[\ce{H+}]^4}\right)\tag{1}$$ $$0.25 = \frac{0.0592}{4}\log\left(\frac{0.023^3}{[\ce{H+}]^4}\right)\tag{2}$$ $$1.00 = \frac{0.0592}{1}\log\left(\frac{0.023^3}{[\ce{H+}]^4}\right)\tag{3}$$ $$1.00 = 0.0592\log\left(\frac{0.023^3}{[\ce{H+}]^4}\right)\tag{4}$$ $$\frac{1.00}{0.0592} = \log\left(\frac{0.023^3}{[\ce{H+}]^4}\right)\tag{5}$$ $$\frac{1.00}{0.0592} = \log(0.023^3)-\log\left([\ce{H+}]^4\right)\tag{6}$$ $$\frac{1.00}{0.0592} - \log(0.023^3) = -\log\left([\ce{H+}]^4\right)\tag{7}$$ $$\frac{1.00}{0.0592} - \log(0.023^3) = -4\log\left([\ce{H+}]\right)\tag{8}$$ $$\frac{\frac{1.00}{0.0592}-\log(0.023^3)}{4} = -\log\left([\ce{H+}]\right)\tag{9}$$ $$\mathrm{pH} = -\log\left([\ce{H+}]\right)\tag{10}$$ $$\frac{\frac{1.00}{0.0592}-\log\left(0.023^3\right)}{4} = 5.45\tag{11}$$ Therefore, $$\mathrm{pH} = 5.45\tag{12}$$ It even gives me practice versions for other variations of this problem, and yet I still always get a wrong answer using the same methods. Would someone be so kind as to point out any errors I may have made while calculating this answer?
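As a quick arithmetic check of the algebra in steps (1)–(11) above (taking the poster's $n = 4$ and their $Q$ at face value rather than judging the chemistry), a short Python sketch:

```python
import math

E_std, E = 2.48, 2.23
n = 4             # electrons, as assumed in the working above
conc = 0.023      # [Ag+] = [Ni2+] in mol/l

# Solve E = E0 - (0.0592/n) * log10(Q) with Q = conc^3 / [H+]^4 for pH = -log10([H+])
log_Q = (E_std - E) * n / 0.0592       # = log10(conc^3) + 4*pH
pH = (log_Q - 3 * math.log10(conc)) / 4
print(round(pH, 2))  # 5.45, matching step (11)
```

So the algebra from (1) to (11) is internally consistent; if the graded answer still disagrees, the issue lies in the inputs to the Nernst equation rather than in the rearrangement.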
I know I can find the orbit radius of a satellite from the equation: $$r=\sqrt[3]{\frac{T^2GM}{4 \pi^2}}$$ but what determines the orbit period $T$? If I assume a geosynchronous orbit, would that simply mean the orbit period is the same as how long the planet takes to turn? What is a safe orbit radius / period for a satellite that would, for example, send a lander to the planet? The reason I ask is that I'm looking for $a$ in this first equation: $$\Delta V=\sqrt{\frac{\mu_s}{r_1}\left(\sqrt{\frac{2r_2}{r_1+r_2}}-1 \right)^2+\frac{2 \mu_1}{a_1}}-\sqrt{\frac{\mu_1}{a_1}}+\sqrt{\frac{\mu_s}{r_2}\left(\sqrt{\frac{2r_1}{r_1+r_2}}-1 \right)^2+\frac{2 \mu_2}{a_2}}-\sqrt{\frac{\mu_2}{a_2}}$$ $$\Delta v=v \ln \frac{m_0}{m_1}$$ In addition, what role does the mass of the satellite play in this? If someone could tell me how to calculate it that would be great, thanks
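For the first equation, here is a minimal numeric sketch of the geosynchronous case, with Earth values assumed for $GM$ and the sidereal day. Note the satellite's own mass cancels out of the two-body orbit equation, which is why it never appears:

```python
import math

GM = 3.986004e14   # Earth's gravitational parameter, m^3/s^2
T = 86164.1        # one sidereal day in seconds (geosynchronous period)

# r = cbrt(T^2 * GM / (4 * pi^2))
r = (T**2 * GM / (4 * math.pi**2)) ** (1 / 3)
print(round(r / 1e3))  # about 42164 km, the familiar geostationary radius
```

For a different planet, swap in that planet's $GM$ and rotation period; "geosynchronous" indeed just means $T$ equals the planet's (sidereal) rotation period.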
Tool for converting complex numbers into exponential notation form and vice versa by calculating the values of the modulus and the principal argument of the complex number. Complex Number Exponential Form - dCode Tag(s) : Arithmetics, Geometry The exponential notation of a complex number $ z $ of argument $ \theta $ and modulus $ r $ is: $$ z = r \operatorname{e}^{i \theta} $$ Example: $ z = 1+i $ has modulus $ \sqrt{2} $ and argument $ \pi/4 $, so its complex exponential form is $ z = \sqrt{2} e^{i\pi/4} $ Euler's formula applied to a complex number connects the cosine and the sine with complex exponential notation: $$ e^{i\theta } = \cos {\theta} + i \sin {\theta} $$ with $ \theta \in \mathbb{R} $ The conversion of cartesian coordinates into polar coordinates for a complex number $ z = a + ib $ (with $ (a, b) $ the cartesian coordinates) amounts precisely to writing this number in complex exponential form in order to retrieve the modulus $ r $ and the argument $ \theta $ (with $ (r, \theta) $ the polar coordinates). If the complex number has no imaginary part: $ e^{i0} = e^{0} = 1 $ or $ e^{i\pi} = \cos(\pi) + i\sin(\pi) = -1 $ If the complex number has no real part: $ e^{i(\pi/2)} = \cos{\pi/2} + i\sin{\pi/2} = i $ or $ e^{i(-\pi/2)} = \cos{-\pi/2} + i\sin{-\pi/2} = -i $
That alternate form, I think, makes things even more confusing than the standard form. Since the matrix involved is circulant, the affine part of the AES S-box can be represented as $$b_o = b_i \oplus (b_i \lll 1) \oplus (b_i \lll 2) \oplus (b_i \lll 3) \oplus (b_i \lll 4) \oplus 99\,,$$where $\oplus$ is xor and $\lll$ is bit left rotation. So far so good. But mathematically, it is awkward to describe rotation as "moving bits". So instead, we can treat a byte as a polynomial with coefficients modulo 2, that is, $\mathbb{F}_{2}[x]$, and a byte is $x^7\cdot c_7 + \ldots + x \cdot c_1 + c_0$, where the $c_i$ coefficients are the bits of the byte. What happens when we multiply by $x$? This is simply a left shift operation:$$x(x^7\cdot c_7 + \ldots + x \cdot c_1 + c_0) = x^8\cdot c_7 + \ldots + x^2 \cdot c_1 + x\cdot c_0 + 0\,.$$Now, if we reduce this polynomial by $x^8 + 1$, which means that we replace $x^8$ by $1$ wherever available, we get$$1\cdot c_7 + \ldots + x^2 \cdot c_1 + x\cdot c_0 + 0 = x^7 \cdot c_6 + \ldots + x^2 \cdot c_1 + x\cdot c_0 + c_7 \,,$$which is precisely a rotation left by $1$. You can verify that the same principle works for any rotation value: multiplying by $x^k$ and reducing by $x^8 + 1$ rotates the polynomial by $k$ positions. Therefore, we can understand the affine transformation of the AES S-box as a multiplication in the ring $\mathbb{F}_{2}[x]/(x^8 + 1)$:$$b_o = b_i \cdot (1 + x + x^2 + x^3 + x^4) + 99 \in \mathbb{F}_{2}[x]/(x^8 + 1)\,.$$In other words, this modulus only exists to describe bit rotation cleanly, in an algebraic way. Converted to decimal, this becomes $b_o = b_i \cdot 31 \bmod 257 \oplus 99 $, but that form, to me, loses all its descriptive value.
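A small Python sketch of the two equivalent views above (the rotation form versus carry-less multiplication by $1+x+x^2+x^3+x^4$ reduced mod $x^8+1$); the helper names are mine:

```python
def rotl8(b, k):
    """Rotate an 8-bit value left by k positions."""
    return ((b << k) | (b >> (8 - k))) & 0xFF

def affine_rot(b):
    """AES S-box affine map in rotation form."""
    return b ^ rotl8(b, 1) ^ rotl8(b, 2) ^ rotl8(b, 3) ^ rotl8(b, 4) ^ 0x63

def affine_poly(b):
    """Same map as carry-less multiplication by 0b11111 (= 1+x+x^2+x^3+x^4),
    reduced mod x^8 + 1 (bit k >= 8 folds onto bit k - 8), then plus 0x63."""
    prod = 0
    for k in range(5):            # multiply by x^0 + x^1 + ... + x^4, carry-less
        prod ^= b << k
    reduced = (prod & 0xFF) ^ (prod >> 8)   # reduction mod x^8 + 1
    return reduced ^ 0x63

assert all(affine_rot(b) == affine_poly(b) for b in range(256))
print(hex(affine_rot(0)))   # 0x63 -- consistent with S-box(0) = 0x63
```

The single XOR-fold in `affine_poly` suffices because the carry-less product of two bytes here never exceeds 12 bits, so no bit needs to be folded twice.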
An under-appreciated point occurred to me while preparing for my Coursera class and my comments on Daniel Greenwald, Martin Lettau and Sydney Ludvigson's nice paper "Origins of Stock Market Fluctuations" at the last NBER EFG meeting. The answer is, it depends on the horizon and the measure. 100% of the variance of price-dividend ratios corresponds to expected return (discount rate) shocks, and none to dividend growth (cash flow) shocks. 50% of the variance of one-year returns corresponds to cashflow shocks. And 100% of long-run price variation corresponds to cashflow shocks, not expected return shocks. These facts all coexist. I think there is some confusion on the point. If nothing else, this makes for a good problem set question. A quick review: The most basic VAR for asset returns is \[ \Delta d_{t+1} = b_d \times dp_{t}+\varepsilon_{t+1}^{d} \] \[ dp_{t+1} = \phi \times dp_{t} +\varepsilon_{t+1}^{dp} \] Using only dividend yields dp, dividend growth is basically unforecastable, \( b_d \approx 0\) and \( \phi\approx0.94 \), and the shocks are conveniently uncorrelated. The behavior of returns follows from the identity that you need more dividends or a higher price to get a return, \[ r_{t+1}\approx-\rho dp_{t+1}+dp_{t}+\Delta d_{t+1} \] (This is the Campbell-Shiller return approximation, with \(\rho \approx 0.96\).) Thus, the implied regression of returns on dividend yields, \[ r_{t+1} = b_r \times dp_{t}+\varepsilon_{t+1}^{r} \] has \(b_r = (1-\rho\phi)+0 = 1-0.96\times0.94 = 0.1\) and a shock negatively correlated with dividend yield shocks and positively correlated with dividend growth shocks. Three propositions: 1. The variance of p/d is 100% risk premiums, 0% cashflow shocks. But 2. The variance of returns is 50% due to risk premiums, 50% due to cashflows. Why are returns and p/d so different? Current cash flow shocks affect returns. But a shock to dividends, when prices rise at the same time, does not affect the dividend-price ratio.
(This is the essence of the Campbell-Ammer return decomposition.) The third proposition is less familiar: 3. The long-run variance of stock market values (and returns) is 100% due to cash flow shocks and none to expected return or discount rate shocks. This is related to a point made by Fama and French in their Equity Premium paper. Long-run average returns are driven by long-run dividend growth plus the average value of the dividend yield. The difference in valuation -- higher prices for a given set of dividends -- can affect returns in a sample, as higher prices for a given set of dividends boost returns. But that mechanism can't last. (Avdis and Wachter have a nice recent paper formalizing this point.) It's related to a similar point made often by Bob Shiller: long-run investors should buy stocks for the dividends. A little more generality, as this is the new bit. \[ p_{t+k}-p_t = dp_{t+k}-dp_t + \sum_{j=1}^{k}\Delta d _{t+j} \] \[ p_{t+k}-p_t = (\phi^{k}-1)dp_t + \sum_{j=1}^{k}\phi^{k-j} \varepsilon^{dp}_{t+j} + \sum_{j=1}^{k} \varepsilon^d _{t+j} \] \[ var(p_{t+k}-p_t) = \frac{(1-\phi^{k})^2}{1-\phi^2} \sigma^2(\varepsilon^{dp}) + \frac{(1-\phi^{2k})}{1-\phi^2} \sigma^2(\varepsilon^{dp}) + k\sigma^2(\varepsilon^d) \] \[var(p_{t+k}-p_t) = 2\frac{(1-\phi^{k})}{1-\phi^2} var(\varepsilon^{dp}_{t+1}) + k var(\varepsilon^d_{t+j})\] So you can see the last bit takes over. It doesn't take over as fast as you might think. Here's a graph using sample values. At a one-year horizon, it's just about 50/50. The dividend shocks eventually take over, at rate 1/k. But at 50 years, it's still about 80/20. Exercise for the interested reader/finance professor looking for problem set questions: do the same thing for long-horizon returns, \( r_{t+1}+r_{t+2}+...+r_{t+k} \), using \(r_{t+1} = -\rho dp_{t+1} + dp_t + \Delta d_ {t+1} \). It's not so pretty, but you can get a closed-form expression here too, and again dividend shocks take over in the long run.
Be forewarned, the long run return has all sorts of pathological properties. But nobody holds assets forever, without eating some of the dividends. Disclaimer: Notice I have tried to say "associated with" or "correspond to" and not "caused by" here! This is just about facts. The facts have just as easy a "behavioral" interpretation about fads and bubbles in prices as they do a "rationalist" interpretation. Exercise 2: Write the "behavioralist" and then "rationalist" introduction / interpretation of these facts. Hint: they reverse cause and effect about prices and expected returns, and whether people in the market have rational expectations about expected returns.
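The long-horizon variance decomposition is easy to reproduce numerically. A sketch in Python, with \(\phi = 0.94\) as in the text; the two shock volatilities are illustrative placeholders of my choosing, set roughly equal, which is what makes the one-year split come out near 50/50:

```python
phi = 0.94       # dividend-yield autocorrelation (from the text)
sig_dp = 0.15    # sd of dp shocks -- illustrative assumption
sig_d = 0.15     # sd of dividend-growth shocks -- illustrative assumption

def dp_share(k):
    """Fraction of var(p_{t+k} - p_t) associated with expected-return (dp) shocks,
    using var = 2(1 - phi^k)/(1 - phi^2) * sig_dp^2 + k * sig_d^2."""
    dp_term = 2 * (1 - phi**k) / (1 - phi**2) * sig_dp**2
    d_term = k * sig_d**2
    return dp_term / (dp_term + d_term)

print(round(dp_share(1), 2))    # ~0.51: about 50/50 at one year
print(round(dp_share(50), 2))   # ~0.25: dividend shocks dominate, but only at rate 1/k
```

With equal shock volatilities the shares are independent of the common volatility level, so only \(\phi\) and the horizon matter for the shape of the curve.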
Prove that \(\displaystyle{D}\) is dense in \(\displaystyle{X}\) if, and only if, for each continuous function \(\displaystyle{f:X\longrightarrow \mathbb{R}}\) the following holds: \(\displaystyle{f(x)=0\,,\forall\,x\in D\implies f=\mathbb{O}}\). Now assume the converse. A different definition of density in a metric space is the following: "$D$ is dense iff every open set in $X$ intersects $D$ non-trivially". So assume $D$ is not dense and pick an open set not intersecting $D$. Since we're working in a metric space, there exist an $x$ and an $\epsilon>0$ such that $B_{\epsilon} (x) \cap D = \emptyset$. How can we use this information? Things like Urysohn's lemma come to mind... Indeed, Urysohn gives a continuous function with $f(X -B_{\epsilon} (x)) = 0$ and $f (\bar{B}_{\epsilon/2}(x) ) = 1$, and we are done. (Every metric space is normal.) However, things here are much easier! Just define $A= \bar{B}_{\epsilon/2}(x)$ and $B= B_{\epsilon}(x) ^{\mathsf{c}}$. Notice $D \subset B$ and let $$f(x)= \frac{dist(x,B)} {dist(x,A) +dist(x,B)}.$$ This $f$ does the job, so we get home without any heavy machinery. Nikos Here is another proof: Suppose that \(\displaystyle{D}\) is not dense in \(\displaystyle{\left(X,d\right)}\), that is, \(\displaystyle{\overline{D}\neq X}\). Then there exists \(\displaystyle{y\in X}\) such that \(\displaystyle{d(y,D)>0}\). The function \(\displaystyle{f:X\longrightarrow \mathbb{R}\,,f(x)=d(x,D)}\) is continuous and \(\displaystyle{f(x)=0\,,\forall\,x\in D\subseteq \overline{D}}\). According to the hypothesis, \(\displaystyle{f=\mathbb{O}}\), a contradiction, since \(\displaystyle{f(y)>0}\).
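A tiny numeric illustration of the second proof on $X = [0,2]$ with the usual metric (my own example, not from the post): take $D = [0, 0.5] \cup [1.5, 2]$, which is not dense, and check that $f(x) = d(x, D)$ vanishes on $D$ yet is not identically zero.

```python
def dist_to_D(t):
    """Distance from t to D = [0, 0.5] union [1.5, 2], inside X = [0, 2]."""
    if t <= 0.5 or t >= 1.5:
        return 0.0                    # t already lies in D
    return min(t - 0.5, 1.5 - t)      # distance to the nearest endpoint of D

assert dist_to_D(0.3) == 0.0          # f vanishes on D ...
assert dist_to_D(1.0) == 0.5          # ... but f(1) = d(1, D) = 0.5 > 0
```

So $f$ is a continuous function that is zero on all of $D$ without being the zero function, exactly the witness the contrapositive needs.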
Now showing items 1-10 of 20 Measurement of electrons from beauty hadron decays in pp collisions at $\sqrt{s}$ = 7 TeV (Elsevier, 2013-04-10) The production cross section of electrons from semileptonic decays of beauty hadrons was measured at mid-rapidity (|y| < 0.8) in the transverse momentum range 1 < pT < 8 GeV/c with the ALICE experiment at the CERN LHC in ... Multiplicity dependence of the average transverse momentum in pp, p-Pb, and Pb-Pb collisions at the LHC (Elsevier, 2013-12) The average transverse momentum <$p_T$> versus the charged-particle multiplicity $N_{ch}$ was measured in p-Pb collisions at a collision energy per nucleon-nucleon pair $\sqrt{s_{NN}}$ = 5.02 TeV and in pp collisions at ... Directed flow of charged particles at mid-rapidity relative to the spectator plane in Pb-Pb collisions at $\sqrt{s_{NN}}$=2.76 TeV (American Physical Society, 2013-12) The directed flow of charged particles at midrapidity is measured in Pb-Pb collisions at $\sqrt{s_{NN}}$=2.76 TeV relative to the collision plane defined by the spectator nucleons. Both, the rapidity odd ($v_1^{odd}$) and ... Long-range angular correlations of π, K and p in p–Pb collisions at $\sqrt{s_{NN}}$ = 5.02 TeV (Elsevier, 2013-10) Angular correlations between unidentified charged trigger particles and various species of charged associated particles (unidentified particles, pions, kaons, protons and antiprotons) are measured by the ALICE detector in ... Anisotropic flow of charged hadrons, pions and (anti-)protons measured at high transverse momentum in Pb-Pb collisions at $\sqrt{s_{NN}}$=2.76 TeV (Elsevier, 2013-03) The elliptic, $v_2$, triangular, $v_3$, and quadrangular, $v_4$, azimuthal anisotropic flow coefficients are measured for unidentified charged particles, pions, and (anti-)protons in Pb–Pb collisions at $\sqrt{s_{NN}}$ = ...
Measurement of inelastic, single- and double-diffraction cross sections in proton-proton collisions at the LHC with ALICE (Springer, 2013-06) Measurements of cross sections of inelastic and diffractive processes in proton--proton collisions at LHC energies were carried out with the ALICE detector. The fractions of diffractive processes in inelastic collisions ... Transverse Momentum Distribution and Nuclear Modification Factor of Charged Particles in p-Pb Collisions at $\sqrt{s_{NN}}$ = 5.02 TeV (American Physical Society, 2013-02) The transverse momentum ($p_T$) distribution of primary charged particles is measured in non-single-diffractive p-Pb collisions at $\sqrt{s_{NN}}$ = 5.02 TeV with the ALICE detector at the LHC. The $p_T$ spectra measured ... Mid-rapidity anti-baryon to baryon ratios in pp collisions at $\sqrt{s}$ = 0.9, 2.76 and 7 TeV measured by ALICE (Springer, 2013-07) The ratios of yields of anti-baryons to baryons probe the mechanisms of baryon-number transport. Results for anti-proton/proton, anti-$\Lambda/\Lambda$, anti-$\Xi^{+}/\Xi^{-}$ and anti-$\Omega^{+}/\Omega^{-}$ in pp ... Charge separation relative to the reaction plane in Pb-Pb collisions at $\sqrt{s_{NN}}$= 2.76 TeV (American Physical Society, 2013-01) Measurements of charge dependent azimuthal correlations with the ALICE detector at the LHC are reported for Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV. Two- and three-particle charge-dependent azimuthal correlations ... Multiplicity dependence of two-particle azimuthal correlations in pp collisions at the LHC (Springer, 2013-09) We present the measurements of particle pair yields per trigger particle obtained from di-hadron azimuthal correlations in pp collisions at $\sqrt{s}$=0.9, 2.76, and 7 TeV recorded with the ALICE detector. The yields are ...
While I agree completely with poncho's answer, this other viewpoint might be useful. Specifically, I think a better comparison isn't between $\mathbb{Z}_p^*$ and $\mathbb{R}^*$, but between $\mathbb{Z}_p^*$ and $S^1$. We can view $S^1 \cong \{z\in\mathbb{C} \mid |z| = 1\}$. It's not hard to show that any $z\in S^1$ can be written as $z = \exp(2\pi i t)$ for $t\in\mathbb{R}$ (we don't strictly need the factor $2\pi$ here, but it's traditional). Since $\exp(2\pi i t)$ is periodic in $t$, it's in fact enough to have $t\in[0,1)$. This has an obvious group structure, in that:$$\exp(2\pi i t_0)\exp(2\pi i t_1) = \exp(2\pi i (t_0+t_1))$$If we're making the restriction that $t_i\in[0,1)$, then we have to take $t_0+t_1 \bmod 1$, but this is fairly standard. More than just having an obvious group structure, we actually have that any $\mathbb{Z}_p^*$ injects into it. Specifically, we always have:$$\phi_p:\mathbb{Z}_p^*\to S^1,\quad \phi_p(x) = \exp(2\pi i x/(p-1))$$Here, the $p-1$ in the denominator is because $|\mathbb{Z}_p^*| = p-1$. We can define the discrete logarithm problem for both of these groups in the standard way (here, it's important to restrict $t_i\in[0, 1)$ if we want a unique answer). Then, we can relate these problems to each other via the aforementioned injection. Through this image, we see that $S^1$ is "continuous" in the sense that it takes up the full circle, but the image of $\mathbb{Z}_p^*$ in $S^1$ will always be "discrete" --- there will always be "some space" between points (they can't get arbitrarily close).
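The "discrete image" claim can be seen concretely by computing the image of $\mathbb{Z}_p^*$ under the map $\phi_p$ exactly as written above. A small sketch (the choice $p=7$ is an arbitrary illustration):

```python
import cmath

def phi(x, p):
    """The injection written above: x maps to exp(2*pi*i*x/(p-1))."""
    return cmath.exp(2j * cmath.pi * x / (p - 1))

p = 7
image = [phi(x, p) for x in range(1, p)]

# Every image point lands on the unit circle S^1 ...
on_circle = all(abs(abs(z) - 1) < 1e-12 for z in image)

# ... but the image is discrete: distinct points stay a fixed chord apart
# (for p = 7 the points are 6th roots of unity, adjacent gap 2*sin(pi/6) = 1).
min_gap = min(abs(z - w) for i, z in enumerate(image) for w in image[i + 1:])
```

So the image of $\mathbb{Z}_7^*$ occupies six equally spaced points on the circle, never getting arbitrarily close, whereas $S^1$ itself fills the whole circle.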
2019-09-12 16:43 Pending/LHCb Collaboration Pending LHCB-FIGURE-2019-008.- Geneva : CERN, 10 Detailed record - Similar records 2019-09-10 11:06 Smog2 Velo tracking efficiency/LHCb Collaboration The LHCb fixed-target programme is facing a major upgrade (Smog2) for Run 3 data taking, consisting of the installation of a confinement cell for the gas covering $z \in [-500, -300] \, mm $. Such a displacement of the $p$-gas collisions with respect to the nominal $pp$ interaction point requires a detailed study of the reconstruction performance. [...] LHCB-FIGURE-2019-007.- Geneva : CERN, 10 - 4. Fulltext: LHCb-FIGURE-2019-007_2 - PDF; LHCb-FIGURE-2019-007 - PDF; Detailed record - Similar records 2019-09-09 14:37 Background rejection study in the search for $\Lambda^0 \rightarrow p^+ \mu^- \overline{\nu}$/LHCb Collaboration A background rejection study has been made using LHCb Simulation in order to investigate the capacity of the experiment to distinguish between $\Lambda^0 \rightarrow p^+ \mu^- \overline{\nu}$ and its main background $\Lambda^0 \rightarrow p^+ \pi^-$. Two variables were explored, and their rejection power was estimated by applying selection criteria. [...] LHCB-FIGURE-2019-006.- Geneva : CERN, 09 - 4. Fulltext: PDF; Detailed record - Similar records 2019-09-06 14:56 Tracking efficiencies prior to alignment corrections from 1st Data challenges/LHCb Collaboration These plots show the first results on tracking efficiencies, before application of alignment corrections, as obtained from the 1st data challenge tests. In this challenge, several tracking detectors (the VELO, SciFi and Muon) have been misaligned and the effects on the tracking efficiencies are studied. [...] LHCB-FIGURE-2019-005.- Geneva : CERN, 2019 - 5.
Fulltext: PDF; Detailed record - Similar records 2019-09-06 11:34 Detailed record - Similar records 2019-09-02 15:30 First study of the VELO pixel 2 half alignment/LHCb Collaboration A first look into the 2 half alignment for the Run 3 Vertex Locator (VELO) has been made. The alignment procedure has been run on a minimum bias Monte Carlo Run 3 sample in order to investigate its functionality [...] LHCB-FIGURE-2019-003.- Geneva : CERN, 02 - 4. Fulltext: VP_alignment_approval - TAR; VELO_plot_approvals_VPAlignment_v3 - PDF; Detailed record - Similar records 2019-07-29 14:20 Detailed record - Similar records 2019-07-09 09:53 Variation of VELO Alignment Constants with Temperature/LHCb Collaboration A study of the variation of the alignment constants has been made in order to investigate the variations of the LHCb Vertex Locator (VELO) position under different set temperatures between $-30^\circ$ and $-20^\circ$. Alignment for both the translations and rotations of the two halves and of the modules, with certain constraints on the module positions, was performed for each run, which corresponds to a different temperature [...] LHCB-FIGURE-2019-001.- Geneva : CERN, 04 - 4. Fulltext: PDF; Related data file(s): ZIP; Detailed record - Similar records
I found a limits equation $$\lim_{n \to \infty}\left(1-\frac{\lambda}{n}\right)^n=e^{-\lambda}$$ How can I get the result of $e^{-\lambda}$? Normally, we can use $$\lim_{x \to \infty}\left(1+\frac{n}{x}\right)^x=e^n$$ And how can I get $e^n$? You may know that (sometimes this is used as a definition of $e$) $$\lim_{n\to\infty}\left(1+\frac1n\right)^n=e $$ Taking $k$th powers, $k\in\Bbb N$, we obtain $$e^k=\lim_{n\to\infty}\left(1+\frac1{n}\right)^{nk}=\lim_{n\to\infty}\left(1+\frac k{nk}\right)^{nk}.$$ The latter limit is the limit of a subsequence of $\lim_{n\to\infty}\left(1+\frac k{n}\right)^{n}$, hence this also converges to $e^k$, once we know it converges at all. In fact, the same method shows that more generally $$\lim_{n\to\infty}\left(1+\frac {ak}n\right)^n =\left(\lim_{n\to\infty}\left(1+\frac {a}n\right)^n\right)^k$$ for $k\in\Bbb N$ and arbitrary $a$ (provided both limits exist). As a consequence, $$\lim_{n\to\infty}\left(1+\frac {a}n\right)^n=e^a\qquad \text{for all }a\in\Bbb Q_{\ge0}.$$ Finally, using $(1-\frac1n)^n(1+\frac1n)^n=(1-\frac1{n^2})^n$, you can show that the same also holds for $a=-1$ and hence also for all $a\in\Bbb Q$. It is actually this; $\lambda$ here is just a variable: $$\lim_{n \to \infty}\left(1-\frac{x}{n}\right)^n=e^{-x}$$ So, for example $$\lim_{n \to \infty}\left(1-\frac{5}{n}\right)^n=e^{-5}$$
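The limit is also easy to check numerically; a quick sketch with the illustrative value $\lambda = 5$, showing the gap between $(1-\lambda/n)^n$ and $e^{-\lambda}$ shrinking as $n$ grows:

```python
import math

lam = 5.0  # illustrative choice of lambda
errors = [abs((1 - lam / n) ** n - math.exp(-lam)) for n in (10**2, 10**4, 10**6)]
# The error decreases roughly like 1/n as n grows.
```

For `n = 10**6` the value already agrees with `math.exp(-5)` to better than one part in a million.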
Tokyo Journal of Mathematics Tokyo J. Math. Volume 36, Number 1 (2013), 269-287. On the Fourth Moment of the Epstein Zeta Functions and the Related Divisor Problem Abstract In this paper, we study the fourth moment of the Epstein zeta function $\zeta (s;Q)$ associated to an $n\times n$ positive definite symmetric matrix $Q$ ($n\geq 4$) on the line $\mathrm{Re}(s)=\frac{n-1}{2}$. We prove that the integral $\int _{0}^{T}|\zeta (\frac{n-1}{2}+it;Q)|^{4}dt$ is $O(T(\log T)^{4})$ if $Q$ satisfies some conditions. As an application, we consider the divisor problem with respect to the coefficients of the Dirichlet series of Epstein zeta functions. Certain estimates for the error term of the sum of the Dirichlet coefficients are obtained by combining our results with Fomenko's estimates for $\zeta (\frac{n-1}{2}+it;Q)$. Article information Source Tokyo J. Math., Volume 36, Number 1 (2013), 269-287. Dates First available in Project Euclid: 22 July 2013 Permanent link to this document https://projecteuclid.org/euclid.tjm/1374497524 Digital Object Identifier doi:10.3836/tjm/1374497524 Mathematical Reviews number (MathSciNet) MR3112388 Zentralblatt MATH identifier 1355.11027 Citation SONO, Keiju. On the Fourth Moment of the Epstein Zeta Functions and the Related Divisor Problem. Tokyo J. Math. 36 (2013), no. 1, 269--287. doi:10.3836/tjm/1374497524. https://projecteuclid.org/euclid.tjm/1374497524
Arens-Fort Space is not First-Countable Theorem Let $T = \left({S, \tau}\right)$ be the Arens-Fort space. Then $T$ is not a first-countable space. Proof Assume that $T$ is first-countable. Then $\left({0, 0}\right)$ has a countable local basis $B_0 = \left\{{U_i}\right\}_{i=1}^\infty$, where each $U_i$ is a neighborhood of $\left({0, 0}\right)$. We first show that each $U_i \in B_0$ contains a point $\left({n_i, m_i}\right)$ such that both $n_i > i$ and $m_i > i$. Suppose there does not exist a point $\left({n_i, m_i}\right) \in U_i$ such that $n_i > i$ and $m_i > i$. Then: $\forall \left({n, m}\right) \in U_i: n \le i$ or $m \le i$ For $k \in \Z_{\ge 0}$, let $S_k$ be defined as: $S_k = \left\{{m: \left({k, m}\right) \notin U_i}\right\}$ First suppose that there does not exist some $\left({n, m}\right) \in U_i$ with $n > i$. Then: $k > i \implies S_k = \Z_{\ge 0}$ so $S_k$ is infinite for infinitely many $k$, contradicting the definition of the Arens-Fort space. So we have shown that: $\exists \left({n, m}\right) \in U_i: n > i$ Now let $n > i$. By the assumption above, every point of $U_i$ in this row has $m \le i$, so $\left\{{m: m > i}\right\} \subseteq S_n$. Thus $S_n = \left\{{m: \left({n, m}\right) \notin U_i} \right\}$ is infinite for all $n > i$. Again, this contradicts the definition of the Arens-Fort space. So, for each $U_i \in B_0$ there is a point $\left({n_i, m_i}\right) \in U_i$ such that both $n_i > i$ and $m_i > i$. Now let the set $E$ be constructed as: $E := S \setminus \left\{{\left({n_i, m_i}\right)}\right\}_{i=1}^\infty$ We now prove that $E$ is a neighborhood of $\left({0, 0}\right)$. By the definition of the Arens-Fort space, it is enough to show that: $\forall i \in \N: S_i = \left\{ {m: \left({i, m}\right) \notin E}\right\}$ is finite. We have that: \(\displaystyle S_i\) \(=\) \(\displaystyle \left\{ {m: \left({i, m}\right) \notin E}\right\}\) \(\displaystyle \) \(=\) \(\displaystyle \left\{ {m: \left({i, m}\right) \in \left\{ {\left({n_j, m_j}\right)}\right\}_{j=1}^\infty}\right\}\) But we defined $n_j > j$. So $j > i \implies n_j > j > i$, and hence $\left({i, m}\right)$ can equal $\left({n_j, m_j}\right)$ only for $j \le i$. Thus $S_i \subseteq \left\{{m_j}\right\}_{j \mathop = 1}^i$ is finite. However, $\left({n_i, m_i}\right) \in U_i$ while $\left({n_i, m_i}\right) \notin E$, so $U_i \not\subseteq E$ for every $i$. Therefore there exists no $U_i \subseteq E$, contradicting the assumption that $B_0$ is a local basis at $\left({0, 0}\right)$. From this contradiction it follows that $T$ cannot be first-countable. $\blacksquare$
Module Name¶ These pages are written using reStructuredText that allows emphasis, strong, literal and many more styles. You can add a reference [A1], include equations like: \[V(x) = \left(\frac{1-\eta}{\sigma\sqrt{2\pi}}\right) \cdot \exp\left({-\frac{x^2}{2\sigma^2}}\right) + \eta \cdot \frac{\sigma}{2\pi} \cdot \frac{1}{x^2 + \left(\frac{\sigma}{2}\right)^2}\] or \[I_{white} = \int_{E_{1}}^{E_{2}} I(\theta,E) \cdot F(E)\,dE.\] and tables: Member Type Example first ordinal 1st second ordinal 2nd third ordinal 3rd More examples: \[X(e^{j\omega }) = \sum_{n=-\infty}^{\infty} x(n)e^{ - j\omega n}\] Warning Warning text. Note Note text. Features¶ List here the module features Contribute¶ Documentation: https://github.com/decarlof/project/tree/master/doc Issue Tracker: https://github.com/decarlof/project/docs/issues Source Code: https://github.com/decarlof/project/project
SCIENTIFIC PROGRAMS AND ACTIVITIES September 23, 2019 A conference of junior researchers in the areas of PDE and Dynamical Systems will be held April 28-29 at the Fields Institute, Toronto. The goal is to encourage scientific exchange, and to create an opportunity for mathematicians in an early stage of their career to get to know each other and each other's work. ABSTRACTS We present some dynamical and spectral results about the quasi-periodic Schroedinger equation. We will emphasise on a -------------------------------------------- The spiral is one of Nature's more ubiquitous shapes: it can be seen in various media, from galactic geometry to cardiac tissue. In the literature, very specific models are used to explain some of the observed incarnations of these dynamic entities. Barkley first noticed that the range of possible spiral behaviour is caused by the Euclidean symmetry that these models possess. In experiments however, the physical domain is never perfectly Euclidean. The heart, for instance, is finite, anisotropic and littered with inhomogeneities. To capture this loss of symmetry and as a result model the physical situation with a higher degree of accuracy, LeBlanc and Wulff introduced forced Euclidean symmetry-breaking (FESB) in the analysis, via two basic types of perturbations: translational symmetry-breaking (TSB) and rotational symmetry-breaking (RSB) terms. They show that phenomena such as anchoring and quasi-periodic meandering can be explained by combining Barkley's insight with FESB. In this talk, we provide a characterization of spiral anchoring by studying the effects of full FESB, combining RSB terms with simultaneous TSB terms. Let C be a smooth closed curve of length 2 Pi in three dimensions, and let k(s) be its curvature, regarded as a function of arclength. This curve determines the one-dimensional Schrodinger operator H_C=-d^2/ds^2 + k^2(s) acting on the space of square integrable 2 Pi-periodic functions.
A natural conjecture is that the lowest spectral value e(C) is bounded below by 1 for any curve (the value is attained when C is a circle). In recent joint work with L. E. Thomas we study a family of curves that includes the circle and for which e(C)=1 as well. We show that the curves in this family are local minimizers, i.e., e(C) increases under small perturbations leading away from the family. In the talk, I will explain our interest in the problem, describe how such Schrodinger operators appear in various problems in Mathematical Physics, and sketch the proof of our result. ------------------------------------------- In this paper we use a mathematical model to study the effect of a cell-cycle specific drug on the development of cancer, including the immune response. The cancer cells are split into the mitotic phase (M-phase), the quiescent phase (G0-phase) and the interphase (G1; S; G2 phases). We include a time delay for the passage through the interphase. The immune cells interact with all cells and the drug is assumed to be M-phase specific. We study analytically and numerically the stability of the cancer-free equilibrium and we show that the M-phase specific drug does not change its stability. Nevertheless, the M-phase drug significantly reduces cancer growth. Moreover we find oscillations through a Hopf bifurcation. Finally, we use the model to discuss the efficiency of cell synchronization before treatment (synchronization method). Positivity of the Lyapunov exponent is essential in proving that the eigenfunction of the discrete Schrodinger equation decays exponentially. In this talk, I will define a notion of "typical" potential, and show that for a "typical" C^3 potential, the Lyapunov exponent is positive for all energies. ------------------------------------------------------------------- I will discuss application of the Lyapunov-Schmidt reduction method to construct two-pulse solutions in the fifth order Korteweg-de Vries equation.
Stability of the two-pulse solutions is investigated numerically. It turns out that one half of the two-pulse solutions is stable and one half is unstable. We study the variable bottom generalized Korteweg-de Vries (bKdV) equation $\partial_t u = -\partial_x(\partial_x^2 u + f(u) - b(t,x)u)$, where f is a nonlinearity and b is a small, bounded and slowly varying function related to the varying depth of a channel of water. Many variable coefficient KdV-type equations, including the variable coefficient, variable bottom KdV equation, can be rescaled into the bKdV. We study the long time behaviour of solutions with initial conditions close to a stable, b=0 solitary wave. We prove that for long time intervals, such solutions have the form of the solitary wave, whose centre and scale evolve according to a certain dynamical law involving the function b(t,x), plus an H^1-small fluctuation. We analyze the nonlinear parabolic problem $u_t= \Delta u - \frac{\lambda f(x)}{(1+u)^2}$ on a bounded domain $\Omega$ of $R^N$ with Dirichlet boundary conditions. This equation models the dynamic deflection of a simple electrostatic Micro-Electromechanical System (MEMS) device, consisting of a thin dielectric elastic membrane with boundary supported at $0$ above a rigid ground plate located at $-1$. Here $f(x) \geq 0$ characterizes the varying dielectric permittivity profile. When a voltage --represented here by $\lambda$-- is applied, the membrane deflects towards the ground plate and a snap-through (touchdown) may occur when it exceeds a certain critical value $\lambda ^*$. Applying analytical and numerical techniques, the existence of $\lambda ^*$ is established together with rigorous bounds. We show the existence of at least one steady-state when $\lambda < \lambda ^*$ (and when $\lambda =\lambda ^*$ in dimension N < 8), while none is possible for $\lambda >\lambda ^*$.
More refined properties of steady states, such as regularity, stability, uniqueness, multiplicity, energy estimates and compactness results, are shown to depend on the dimension of the ambient space and on the permittivity profile. For the dynamic case, the membrane globally converges to its unique maximal steady-state when $\lambda \leq \lambda ^*$; on the other hand, if $\lambda >\lambda ^*$ the membrane must touchdown at finite time, and touchdown cannot take place at the location where the permittivity profile vanishes. This is joint work with Nassif Ghoussoub at UBC. ------------------------------------------------------------------- Weak solutions of the Navier-Stokes equation are suitable if they satisfy a localized version of the energy inequality. The interest in this notion is that the partial regularity theorems of Scheffer, and of Caffarelli, Kohn and Nirenberg, apply to suitable weak solutions of the Navier-Stokes equations in three spatial dimensions, limiting the parabolic Hausdorff dimension of their singular set. In this paper we discuss the class of suitable weak solutions and some of its properties. We show that the weak solutions obtained by the approximation method of Leray are suitable, as are weak solutions obtained by the super-viscosity approximation. However it is not known whether the weak solutions obtained by Hopf's method of Galerkin approximation are suitable. For the problem on the 3D torus we give a new estimate of weak solutions which has some bearing on this question. ------------------------------------------------------------------- ------------------------------------------------------------------- In recent years, exceptional discretizations of nonlinear PDEs have been constructed to support translationally invariant kinks and solitary waves, i.e. families of nonlinear waves centered at arbitrary points between the lattice sites.
It has been suggested that the translationally invariant stationary solutions may persist as traveling solutions for small velocities. I will explain analysis and numerical algorithms which can be used to study and test the existence of traveling wave solutions. In this talk, we will consider 1-dimensional Nonlinear Schrodinger Equations with a periodic potential and will study the stability properties of periodic solutions. We will show how, by exploiting the symmetries of the problem, we can develop a simple sufficient criterion that guarantees the existence of modulational instability. In the case of small amplitude solutions bifurcating from the band edges of the linear problem, we show that the lower band edges are unstable in the focusing case, while the upper band edges are unstable in the defocusing case. This is joint work with Jared C. Bronski. ----------------------------------------------------------------- We study the bifurcation of time-periodic differential equations with delay, depending on a parameter. The complete bifurcation analysis is performed explicitly, using Floquet-multipliers, spectral projection and center manifold reduction. The results are extended to the case of strong resonance. Numerous examples are given to illustrate our ------------------------------------------------------------------- The nonlinear Schroedinger equation (NLS) with a periodic, varying dispersion coefficient models the dynamics of light in dispersion-managed communication systems and mode-locked lasers. The dispersion-managed nonlinear Schroedinger equation (DMNLS) is an averaged version of NLS which restores some symmetries that are lost in NLS when the dispersion coefficient is not constant. I will discuss these symmetries, the corresponding conservation laws, and modes of the linearized DMNLS.
I will also discuss how these linearized modes can be utilized to guide importance-sampled Monte-Carlo simulations of rare events in dispersion-managed lightwave systems subject to noise. This study is pertinent because the performance of lightwave systems is limited by the occurrence of rare events. Experimental observations of the reflection of weak shock waves off a thin wedge show a pattern that closely resembles Mach reflection, in which the incident, reflected, and Mach shocks meet at a triple point. However, the von Neumann theory of shock reflection shows that a triple point configuration, consisting of three shocks and a contact discontinuity meeting at a point, is impossible for sufficiently weak shocks. This apparent conflict between theory and experiment has been a long-standing puzzle, and is often referred to as the triple point, or von Neumann, paradox. We present numerical solutions of a two-dimensional Riemann problem for the nonlinear wave system that is analogous to the reflection of weak shocks off thin wedges. The solutions contain a remarkably complex structure: there is a sequence of triple points and supersonic In this talk I will prove low-regularity global well-posedness for the 1d Zakharov (Z) and 1d, 2d, and 3d Klein-Gordon-Schroedinger system (KGS), which are systems in two variables (u, n). Z is known to be locally well-posed in (u, n) \in L^{2} \times H^{-1/2} and KGS is known to be locally well-posed in (u, n) \in L^{2} \times L^{2}. I will show that Z and KGS are globally well-posed in these spaces, respectively, by using an available conservation law for the L^{2} norm of u and controlling the growth of n via the estimates in the local theory. This is joint work with Jim Colliander and Justin Holmer. ------------------------------------------------------------------- Lattice Differential Equations (LDEs) are systems of coupled ODEs.
They arise naturally in a diverse range of fields, such as in modeling of biological systems and descriptions of materials at the atomic/molecular level. Traveling Waves (TWs) represent a fundamental solution class of these problems. The theory and computation of TWs in LDEs have only begun to receive a great deal of attention over the last 25 years. The analysis of such problems is quite difficult, owing to the presence of mixed-type functional differential equations (differential equations with delays and advances present). I will present a new approach we have developed which allows us to approximate the functional differential equations by a closed system of ODEs. By applying this technique to several example problems, I will show how this new formulation allows one to infer many characteristics of the traveling waves in the lattice differential equation, using standard dynamical systems analysis. ------------------------------------------------------------------- The formation of singularities in reaction-diffusion equations arises in the motion by mean curvature flow, vortex dynamics in superconductors and mathematical biology. Our result shows, under certain conditions on the datum, the asymptotic behavior of the solution before the singularity forms. Moreover, we give a remainder estimate. -------------------------------------------------------------------
So, in general, how to choose the number of terms in Taylor (Maclaurin) series to evaluate the limit? To answer your general question about how to approach problems like these, it's useful to know the concept of equivalence of functions. Given two functions $f$ and $g$ defined and nonzero in a neighbourhood of $a$, we write $f \sim g$, and say $f$ and $g$ are equivalent, if $f(x)/g(x) \to 1$ as $x \to a$. This is an equivalence relation. The condition $f \sim g$ is equivalent to $f = g + o(g)$. Importantly, if $f_1 \sim f_2$ and $g_1 \sim g_2$, then $f_1 g_1 \sim f_2 g_2$, and $f_1^{\alpha} \sim f_2^{\alpha}$. Also, $o(f_1) = o(f_2)$ and $O(f_1) = O(f_2)$. Consequently, any time you are evaluating the limit of a product or quotient of several functions (or powers of functions), your goal should be to replace each factor with a simple equivalent function. For example, if $f \sim f_1, g \sim g_1, h \sim h_1$, then $f^3 g^2/h^{5/3} \sim f_1^3 g_1^2/h_1^{5/3}$. In your case, you would like to find simple equivalents for the factors $\sin x$ and $(\cos x)^{1/2} - (\cos x)^{1/3}$. In general, a simple equivalent for a function near a point $a$ will be given by the first nonzero term in its Taylor series. Therefore, in the present case where $x \to 0$, you will use $\sin x \sim x$, and you will want to have enough terms of the series for $(\cos x)^{1/2}$ and $(\cos x)^{1/3}$ that you know the first nonzero term in the series for $(\cos x)^{1/2} - (\cos x)^{1/3}$. (Strictly speaking, since your denominator is equivalent to $x^2$ and you only want the limit rather than a simple equivalent, you only need these series up to the second order. But it's best to understand the general principle I've stated.) 
We have $$(\cos x)^{1/2} = (1 - x^2/2 + o(x^2))^{1/2} = 1 + \frac{1}{2}(-x^2/2 + o(x^2)) + o(- x^2/2 + o(x^2)) = 1 - x^2/4 + o(x^2),$$ $$(\cos x)^{1/3} = (1 - x^2/2 + o(x^2))^{1/3} = 1 + \frac{1}{3}(-x^2/2 + o(x^2)) + o(- x^2/2 + o(x^2)) = 1 - x^2/6 + o(x^2),$$ hence$$(\cos x)^{1/2} - (\cos x)^{1/3} = -x^2/12 + o(x^2) \sim -x^2/12.$$Therefore$$\frac{(\cos x)^{1/2} - (\cos x)^{1/3}}{\sin^2 x} \sim \frac{-x^2/12}{x^2} = -1/12.$$This proves that your limit is $-1/12$.
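As a quick sanity check of the value $-1/12$, one can evaluate the quotient at a small $x$:

```python
import math

def ratio(x):
    """The quotient ((cos x)^(1/2) - (cos x)^(1/3)) / sin^2(x)."""
    c = math.cos(x)
    return (c ** 0.5 - c ** (1 / 3)) / math.sin(x) ** 2

# For small x the quotient is already very close to -1/12 = -0.08333...
approx = ratio(1e-3)
```

Evaluating at `x = 1e-3` agrees with $-1/12$ to about four decimal places, consistent with the $o(x^2)$ error term.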
I am trying to write down a formal proof. Attempt: Firstly, we settle on the notations and conventions: $A$, $B$, $C$ are all $m\times n$ matrices, where $m$ is fixed and $n$ is arbitrary, over the ground field $F$. $e_1^r(A): \text{ multiplication of the $r^{th}$ row of $A$ by any $0\neq c \in F$}$ $e_2^r(A): \text{ replacement of the $r^{th}$ row by ($r^{th}$ row $+\,c\cdot s^{th}$ row), any $c\in F$, $r\neq s$}$ $e_3^r(A): \text{ interchange of the $r^{th}$ and $s^{th}$ rows}$. Let the inverses of these elementary row operations be denoted by $ f_1^r(A),f_2^r(A),f_3^r(A)$ respectively, all of which are themselves elementary row operations. We wish to prove the symmetry of the relation by induction; to avoid a clash with the matrix dimension $m$, we induct on the number of operations $k$. $P(k):$ If $B$ is obtained from $A$ after any $k$ row operations, then $A$ can be obtained from $B$ by $k$ elementary row operations. $P(1):$ For any $e_u^r(A)=B\implies f_u^r(e^r_u(A))=f_u^r(B)\implies A=f_u^r(B) $, $u \in \{1,2,3\}$. Let $P(k)$ hold. To prove $P(k+1)$ holds: Let $e_{u_{k+1}}^{r_{k+1}}(...e_{u_2}^{r_2}(e_{u_1}^{r_1}(A))...)=B\implies f_{u_{k+1}}^{r_{k+1}}( e_{u_{k+1}}^{r_{k+1}}(...e_{u_2}^{r_2}(e_{u_1}^{r_1}(A))...))= f_{u_{k+1}}^{r_{k+1}}(B)= e_{u_{k}}^{r_{k}}(...e_{u_2}^{r_2}(e_{u_1}^{r_1}(A))...) $. Now, $P(k)$ being true, $P(k+1)$ is true as well. The reflexive and transitive properties are evident. [For transitivity: $B=$ $s$ row operations on $A$, $C=$ $t$ row operations on $B$. Composition of mappings is associative, so $C= t$ row operations $((s$ row operations on $A))$.] For reflexivity, we take $e_1^1(A)$ and set $c=1$. Is the proof correct? Please verify.
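The base case $P(1)$, that every elementary row operation is undone by an elementary row operation of the same type, can be sketched in code. The function names `e1`, `e2`, `e3` mirror the notation above; this is only an illustration, not part of the proof:

```python
from fractions import Fraction

def e1(A, r, c):
    """Multiply row r by a nonzero scalar c."""
    B = [row[:] for row in A]
    B[r] = [c * v for v in B[r]]
    return B

def e2(A, r, s, c):
    """Add c times row s to row r (r != s)."""
    B = [row[:] for row in A]
    B[r] = [v + c * w for v, w in zip(B[r], B[s])]
    return B

def e3(A, r, s):
    """Interchange rows r and s."""
    B = [row[:] for row in A]
    B[r], B[s] = B[s], B[r]
    return B

A = [[Fraction(1), Fraction(2)], [Fraction(3), Fraction(4)]]
# Each operation is inverted by an elementary operation of the same type:
assert e1(e1(A, 0, Fraction(5)), 0, Fraction(1, 5)) == A   # f_1 = e_1 with 1/c
assert e2(e2(A, 0, 1, Fraction(7)), 0, 1, Fraction(-7)) == A  # f_2 = e_2 with -c
assert e3(e3(A, 0, 1), 0, 1) == A                          # f_3 = e_3 itself
```

Exact `Fraction` arithmetic is used so the round-trip equalities hold exactly, matching the algebraic statement $f_u^r(e_u^r(A)) = A$.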
Start with a gauge theory with Chern-Simons action $$ S[A] = \frac{k}{4 \pi} \text{Tr} \intop_{M} \left( A \wedge dA + \frac{2}{3} A \wedge A \wedge A \right) $$ and a Wilson loop observable in the fundamental representation for a knot $K$ $$ W_K[A] = \text{Tr} \, \vec{\exp} \intop_K A^a \cdot \frac{i {\sigma^a}}{2}. $$ Attempt to calculate $$ \left< W_K \right>_k = \int DA \, e^{i S[A]} W_K[A] $$ by running a Metropolis-Hastings simulation on the lattice and plugging in the lattice regularizations of $S[A]$ and $W_K[A]$. Naively, we expect the result to be $$ \left< W_K \right>_q = J_K(q) $$ where $J_K(q)$ is the Jones polynomial of the knot $K$ and $$ q = \exp \left(\frac{2\pi i}{k + 2}\right). $$ Question 1: has it been confirmed to work? Question 2: how does the framing anomaly manifest itself? The lattice regularization of $W_K$ is straightforward, and doesn't depend on framing. Question 3: OK, it might be too naive to expect the approach above to give finite results. Instead, do what Witten suggests: don't calculate $W_K$; instead restrict the monodromies around the lattice plaquettes which intersect the knot to the $SU(2)$ orbit corresponding to the fundamental irrep, and calculate the vacuum amplitude. Has it been confirmed to work? How does the framing anomaly manifest itself?
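One practical caveat about the Metropolis-Hastings part of the question: $e^{iS[A]}$ is an oscillatory phase, not a positive weight, so a textbook Metropolis chain cannot sample it directly (the usual sign problem; one would need reweighting or a Euclidean action). For reference, here is a minimal Metropolis-Hastings sketch that samples a real, positive weight $e^{-S}$ for a toy one-variable "action". Everything below is a hypothetical illustration, not a lattice Chern-Simons simulation:

```python
import math
import random

def metropolis(action, x0=0.0, steps=50000, delta=1.5, seed=1):
    """Sample configurations with weight exp(-action(x)) via Metropolis-Hastings."""
    rng = random.Random(seed)
    x, samples = x0, []
    for _ in range(steps):
        xp = x + rng.uniform(-delta, delta)  # symmetric random-walk proposal
        # Accept with probability min(1, exp(S(x) - S(x'))).
        if rng.random() < math.exp(min(0.0, action(x) - action(xp))):
            x = xp
        samples.append(x)
    return samples

# Toy "action": S(x) = x^2/2, so exp(-S) is a standard Gaussian.
samples = metropolis(lambda x: 0.5 * x * x)
mean = sum(samples) / len(samples)
```

For the toy action the chain reproduces the Gaussian's mean 0 and variance 1; for the complex weight $e^{iS[A]}$ this acceptance rule has no direct analogue, which is one reason the naive proposal in the question is delicate.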
I know that both the average and worst case complexity of binary search is O(log n) and I know how to prove the worst case complexity is O(log n) using recurrence relations. But how would I go about proving that the average case complexity of binary search is O(log n)? I think most textbooks will provide you with a good proof. For me, I can show the average case complexity as follows. Assume a uniform distribution of the position of the value that one wants to find in an array of size $n$. For the case of 1 read, the position must be the middle one, so there is a probability of $\frac{1}{n}$ for this case. For the case of 2 reads, one will read the middle position and then 1 of the 2 middle positions of the 2 sub-arrays. This probability is $\frac{2}{n}$. For the case of 3 reads, there are $2*2$ positions which result in this cost, as you go into the 4 sub-arrays of the first 2 sub-arrays. The probability for this cost is $\frac{2^2}{n}$ ... For the case of $x$ reads, the probability for this case is $\frac{2^{x-1}}{n}$. For the average case, the number of reads will be $\sum\limits_{i=1}^{\log(n)} \frac{i2^{i-1}}{n} = \frac{1}{n} \sum\limits_{i=1}^{\log(n)} i2^{i-1}$ Now you can approximate the sum by an integral: $\int\limits_{1}^{\log(n)} x 2^x dx$ can be calculated and bounded by $\log(n)*2^{\log(n)} = n\log(n)$, so the sum is $O(n\log(n))$ and, after dividing by $n$, the average is $O(\log(n))$. This is a very good method that applies in many cases. Another way to see it is that $i2^{i-1} < \log(n) * 2^{i-1}$. Then the formula above is bounded by $\frac{\log(n)}{n} \sum\limits_{i=1}^{\log(n)} 2^{i-1}$ The summation part is actually $\frac{1 - 2^{\log(n)}}{1 - 2} = 2^{\log(n)} - 1 = n - 1$, which is definitely less than $n$; multiplying this with $\frac{\log(n)}{n}$ gives you what you want, $\log(n)$. So you get the bound as you want, $O(\log(n))$.
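The bound is easy to check empirically: counting the probes binary search makes for every possible target position and averaging gives a value within a small constant of $\log_2 n$. A sketch, assuming successful searches over an array holding the distinct keys $0..n-1$:

```python
def comparisons(n, target):
    """Number of probes binary search makes to find `target` in [0, n)."""
    lo, hi, c = 0, n - 1, 0
    while lo <= hi:
        mid = (lo + hi) // 2
        c += 1
        if mid == target:
            return c
        if mid < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return c

n = 1 << 16
avg = sum(comparisons(n, t) for t in range(n)) / n
# avg is close to log2(n) - 1 = 15, matching the (1/n) * sum(i * 2^(i-1)) analysis.
```

For $n = 2^{16}$ the average comes out near 15 while the worst case is 17 probes, illustrating that the average sits just below $\log_2 n = 16$.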
Tool to compute continued fractions. A continued fraction is the representation of a number $N$ as a sequence of integers $(a_0, a_1, \ldots, a_n)$ such that $N = a_0 + 1/(a_1 + 1/(a_2 + 1/(\cdots + 1/a_n)))$. Continued fraction expansion is closely related to the Euclidean division algorithm, as used for the GCD. Example: for the fraction approximating pi, \( 355/113 = 3.14159292035\ldots \) $$ 355 = 3 \times 113 + 16 \\ 113 = 7 \times 16 + 1 \\ 16 = 16 \times 1 + 0 $$ The continued fraction is [3,7,16]. Some continued fraction expansions are infinite. The clearest way to display one is with cfrac: $$ e=2+\cfrac{1}{1+\cfrac{1}{2+\cfrac{1}{ 1+\cfrac{1}{1+\cfrac{1}{4+\cfrac{1}{1+\cfrac{1}{1+\cfrac{1}{6+\cdots}}}}}}}} $$ But the shortest way is to write $$ e = [2 ; 1, 2, 1, 1, 4, 1, 1, 6, \cdots] $$ The best-known continued fractions are: - Square Root of 2: \( \sqrt{2} = [1;2,2,2,2,2,\cdots] \) - Golden Ratio: \( \Phi = [1;1,1,1,1,1,\cdots] \)
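A minimal sketch of the expansion via repeated Euclidean division, together with the reverse direction (function names are mine, not dCode's):

```python
from fractions import Fraction

def continued_fraction(num, den):
    """Continued fraction [a0, a1, ..., an] of num/den via Euclidean division."""
    terms = []
    while den:
        a, r = divmod(num, den)   # one step of Euclidean division
        terms.append(a)
        num, den = den, r
    return terms

def from_terms(terms):
    """Rebuild the exact fraction from its continued-fraction terms."""
    value = Fraction(terms[-1])
    for a in reversed(terms[:-1]):
        value = a + 1 / value
    return value

cf = continued_fraction(355, 113)   # the pi approximation from the example
```

Running the example reproduces the division steps shown above: `cf` is `[3, 7, 16]`, and `from_terms(cf)` recovers 355/113 exactly.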
There are two concepts you need to know. What is equilibrium? Not all reactions proceed until one of the reactants is consumed completely. Some proceed to a certain point where the system stops evolving; the concentrations of all the species remain constant, in what is called a chemical equilibrium. This was first observed experimentally, and theories then appeared to explain it. The most important quantity is the equilibrium constant, a quotient of the equilibrium concentrations. If you think of an equilibrium as two opposite reactions, this constant measures which one is prevalent. If we have: $$\ce{aA + bB <=> cC + dD}$$ Then the equilibrium constant K is: $$K = \frac{[C]^c \times [D]^d}{[A]^a \times [B]^b}$$ This means that if you mix A and B, or C and D, or either pair with more reactants added, no matter what the initial mixture is, the final concentrations of all species must be such that K comes out the same. So if K is big, at any given time there is more of C and D than of A and B: the direct reaction (to the right) is prevalent. If K is small, the opposite reaction (to the left) is prevalent. So yes, it is possible to have the reactants and the products of a reaction present simultaneously in the same place. How? That is concept number two. Equilibrium is dynamic. I told you to picture an equilibrium as two opposite reactions. That is because in reality those two reactions are happening continuously! Indeed, in your example: $$\ce{CO3^2- + H2O <=> HCO3^- + OH-}$$ A certain amount of carbonate ions are reacting with water to give $\ce{HCO3^-}$ and $\ce{OH-}$, while the exact same amount of other, different $\ce{HCO3^-}$ ions are reacting with $\ce{OH-}$ to give water and carbonate. Now, equilibrium isn't instantaneous. When you throw reactants together, the more favorable reaction happens very quickly while the opposite one barely happens at all.
As products accumulate, the two rates become more comparable until they are exactly equal and the overall reaction stops: the concentrations stop changing even though molecules are still reacting all the time. Conclusion To simplify, you can think of the process you described as a two-step process. First, $\ce{CO3^2-}$ gives $\ce{HCO3^-}$ and $\ce{OH-}$ and reaches equilibrium (equation 1 that you wrote). Then the $\ce{HCO3^-}$ reacts and reaches equilibrium, taking the products of the first step as initial conditions. Carbonate in water$$ \ce{CO3^2- + H2O <=> HCO3^- + OH-}\qquad K_1 = 2.13 \times 10^{-4} $$ Okay, so here we see that most $\ce{CO3^2-}$ remains as-is, producing only a small amount of $\ce{HCO3^-}$ and $\ce{OH-}$. Everybody reacts $\ce{HCO3^-}$ has many options. It can undergo the same reaction as before, reversed:$$ \ce{HCO3^- + OH- <=> CO3^2- + H2O}\qquad K_2 = 1/K_1 = 4700^\mathbf{*} $$It can also react further:$$ \ce{HCO3^- + H2O <=> H2CO3 + OH-}\qquad K_3 = 2.22 \times 10^{-8}\\ \ce{HCO3^- <=> CO3^{2-} + H+}\qquad K_4 = 4.7 \times 10^{-11} $$ Here we see that the only significant reaction is the reverse of the first one (the others also happen, but to a minimal extent). But we said before that an equilibrium consists of two simultaneous opposite reactions. Thus the concentrations remain largely as the first equilibrium set them, essentially unaffected by the other reactions with their very small K. The amount of $\ce{OH-}$ produced is much bigger than what was originally in the water, $$\ce{H2O <=> H+ + OH-} \quad K_w = 10^{-14}$$ due to the values of the equilibrium constants ($K_w = 10^{-14} \ll K_1 = 2.13 \times 10^{-4}$). So $\ce{CO3^2-}$ is indeed a basic salt. $\mathbf{^*}$ Easy to understand if you look at the generic expression for K.
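To see how the numbers work out, here is a sketch that solves the first equilibrium for an assumed initial carbonate concentration of 0.1 M (the concentration is my own choice for illustration; $K_1$ is the value quoted above). Writing $x$ for the amount reacted, $K_1 = x^2/(c - x)$, which we can solve by bisection:

```python
import math

def solve_equilibrium(K, c, tol=1e-15):
    """Solve K = x^2 / (c - x) for 0 < x < c by bisection.

    The left side is increasing in x, so plain bisection works.
    """
    lo, hi = 0.0, c
    while hi - lo > tol:
        x = (lo + hi) / 2
        if x * x / (c - x) > K:
            hi = x          # too much product: x is too large
        else:
            lo = x
    return (lo + hi) / 2

K1 = 2.13e-4
c = 0.1                       # assumed initial [CO3^2-] in mol/L
x = solve_equilibrium(K1, c)  # equilibrium [HCO3^-] = [OH^-]
pOH = -math.log10(x)
pH = 14 - pOH                 # comes out clearly basic, around pH 11.6
```

Only a few percent of the carbonate reacts, yet the resulting $\ce{OH^-}$ concentration dwarfs the $10^{-7}$ M coming from water autoionization, which is exactly the "basic salt" conclusion above.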
You are probably thinking about undecidable languages which are computably enumerable. Otherwise, the diagonalization technique described in answers to similar questions would provide simple counterexamples. If you don't care about the computably enumerable part, then I would say your question is simply a duplicate of one of the similar questions. The halting problem is at least as strong as any such language, because you can just fix the string for which you want to know whether it is in the language, define a Turing machine which enumerates the strings in the language until it finds the given string and then stops, and then ask the halting oracle whether that Turing machine will stop. So are there such languages which are weaker than the halting problem and still undecidable? Yes, there are (at least ZFC claims that there are, and most other reasonable formal systems will agree). Can I write down such a language? No, not at the moment. Does anybody know how to write down such a language? Maybe; I don't know. Why do I write an answer if I can't really provide an explicit one? Because I want to point out a connection to formal systems. A language strong enough to complement a given axiom system (one able to talk about Turing machines, like Peano arithmetic (PA), for example) to a consistent set of axioms complete for $\Pi^0_1$ sentences (i.e. for the halting problem) provides enough computational power to construct a model of the given axiom system, and the set of $\Pi^0_1$ sentences of any model gives such a language. The following answer by Noah Schweber first defines what it means to complement PA (as an example): A set $A$ of (indices for) $\Pi^0_1$ sentences is plausible if $PA\cup A\cup\{\neg \varphi: \varphi\in\Pi^0_1\setminus A\}$ is consistent. And then it proceeds to use Rosser's trick to show I claim that every plausible set $A$ is of PA degree i.e. to construct a model of the given axiom system (PA in this case).
And if you had a stronger axiom system like ZFC, then being plausible with respect to ZFC would define a ZFC degree, which is at least as strong as the PA degree. (I guess there are languages of PA degree which are not of ZFC degree, but in this case I don't even know whether ZFC claims this.)
The orthogonal group, consisting of all proper and improper rotations, is generated by reflections. Every proper rotation is the composition of two reflections, a special case of the Cartan–Dieudonné theorem. Yeah, it does seem unreasonable to expect a finite presentation. Let (V, b) be an n-dimensional, non-degenerate symmetric bilinear space over a field with characteristic not equal to 2. Then every element of the orthogonal group O(V, b) is a composition of at most n reflections. Why is the evolute of an involute of a curve $\Gamma$ the curve $\Gamma$ itself? Definition from wiki: The evolute of a curve is the locus of all its centres of curvature. That is to say that when the centre of curvature of each point on a curve is drawn, the resultant shape will be the evolute of th... Player $A$ places $6$ bishops wherever he/she wants on the chessboard with an infinite number of rows and columns. Player $B$ places one knight wherever he/she wants. Then $A$ makes a move, then $B$, and so on... The goal of $A$ is to checkmate $B$, that is, to attack the knight of $B$ with a bishop in ... Player $A$ chooses two queens and an arbitrary finite number of bishops on the $\infty \times \infty$ chessboard and places them wherever he/she wants. Then player $B$ chooses one knight and places him wherever he/she wants (but of course, the knight cannot be placed on the fields which are under attack ... The invariant formula for the exterior derivative: why would someone come up with something like that? It looks really similar to the formula for the covariant derivative of a tensor along a vector field, but otherwise I don't see why it would be a natural thing to come up with. The only place I have used it is in deriving the Poisson bracket of two one-forms. This means starting at a point $p$, flowing along $X$ for time $\sqrt{t}$, then along $Y$ for time $\sqrt{t}$, then backwards along $X$ for the same time, backwards along $Y$ for the same time, leads you to a place different from $p$.
And up to second order, flowing along $[X, Y]$ for time $t$ from $p$ will lead you to that place. Think of evaluating $\omega$ on the edges of the truncated square and taking a signed sum of the values. You'll get the value of $\omega$ on the two $X$ edges, whose difference (after taking a limit) is $Y\omega(X)$; the value of $\omega$ on the two $Y$ edges, whose difference (again after taking a limit) is $X \omega(Y)$; and on the truncation edge it's $\omega([X, Y])$. Carefully taking care of the signs, the total value is $X\omega(Y) - Y\omega(X) - \omega([X, Y])$. So the value of $d\omega$ on the Lie square spanned by $X$ and $Y$ = the signed sum of the values of $\omega$ on the boundary of that Lie square. An infinitesimal version of $\int_M d\omega = \int_{\partial M} \omega$. But I believe you can actually write down a proof like this, by doing $\int_{I^2} d\omega = \int_{\partial I^2} \omega$ where $I^2$ is the little truncated square I described and taking $\text{vol}(I^2) \to 0$. For the general case $d\omega(X_1, \cdots, X_{n+1}) = \sum_i (-1)^{i+1} X_i \omega(X_1, \cdots, \hat{X_i}, \cdots, X_{n+1}) + \sum_{i < j} (-1)^{i+j} \omega([X_i, X_j], X_1, \cdots, \hat{X_i}, \cdots, \hat{X_j}, \cdots, X_{n+1})$ says the same thing, but on a big truncated Lie cube. Let's do bullshit generality. Let $E$ be a vector bundle on $M$ and $\nabla$ a connection on $E$. Remember that this means it's an $\Bbb R$-bilinear operator $\nabla : \Gamma(TM) \times \Gamma(E) \to \Gamma(E)$, denoted $(X, s) \mapsto \nabla_X s$, which is (a) $C^\infty(M)$-linear in the first factor and (b) $C^\infty(M)$-Leibniz in the second factor. Explicitly, (b) is $\nabla_X (fs) = X(f)s + f\nabla_X s$. You can verify that this in particular means it's pointwise defined in the first factor: to evaluate $\nabla_X s(p)$ you only need $X(p) \in T_p M$, not the full vector field. That makes sense, right?
You can take the directional derivative of a function at a point in the direction of a single vector at that point. Suppose that $G$ is a group acting freely on a tree $T$ via graph automorphisms; let $T'$ be the associated spanning tree. Call an edge $e = \{u,v\}$ in $T$ essential if $e$ doesn't belong to $T'$. Note: it is easy to prove that if $u \in T'$, then $v \notin T'$ (this follows from uniqueness of paths between vertices). Now, let $e = \{u,v\}$ be an essential edge with $u \in T'$. I am reading through a proof, and the author claims that there is a $g \in G$ such that $g \cdot v \in T'$. Why is that? My thought was to try to show that $\operatorname{orb}(u) \neq \operatorname{orb}(v)$ and then use the fact that the spanning tree contains exactly one vertex from each orbit. But I can't seem to prove that $\operatorname{orb}(u) \neq \operatorname{orb}(v)$... @Albas Right, more or less. So it defines an operator $d^\nabla : \Gamma(E) \to \Gamma(E \otimes T^*M)$, which takes a section $s$ of $E$ and spits out $d^\nabla(s)$, a section of $E \otimes T^*M$, which is the same as a bundle homomorphism $TM \to E$ ($V \otimes W^* \cong \text{Hom}(W, V)$ for vector spaces). So what is this homomorphism $d^\nabla(s) : TM \to E$? Just $d^\nabla(s)(X) = \nabla_X s$. This might be complicated to grok at first, but basically think of it as currying: making a bilinear map a linear one, like in linear algebra. You can replace $E \otimes T^*M$ just by the Hom-bundle $\text{Hom}(TM, E)$ in your head if you want. Nothing is lost. I'll use the latter notation consistently if that's what you're comfortable with. (Technical point: note how contracting $X$ in $\nabla_X s$ made a bundle homomorphism $TM \to E$, but contracting $s$ in $\nabla s$ only gave a map $\Gamma(E) \to \Gamma(\text{Hom}(TM, E))$ at the level of spaces of sections, not a bundle homomorphism $E \to \text{Hom}(TM, E)$. This is because $\nabla_X s$ is pointwise defined in $X$ and not in $s$.) @Albas So this fella is called the exterior covariant derivative.
Denote $\Omega^0(M; E) = \Gamma(E)$ ($0$-forms with values in $E$, aka functions on $M$ with values in $E$, aka sections of $E \to M$), and denote $\Omega^1(M; E) = \Gamma(\text{Hom}(TM, E))$ ($1$-forms with values in $E$, aka bundle homomorphisms $TM \to E$). Then this is the level-0 exterior derivative $d : \Omega^0(M; E) \to \Omega^1(M; E)$. That's what a connection is: a level-0 exterior derivative for a bundle-valued theory of differential forms. That's what taking the derivative of a section of $E$ along a vector field on $M$ means: applying the connection. So what's $\Omega^k(M; E)$ for higher $k$? Define it as $\Omega^k(M; E) = \Gamma(\text{Hom}(TM^{\wedge k}, E))$, just the space of alternating multilinear bundle homomorphisms $TM \times \cdots \times TM \to E$. Note how if $E$ is the trivial bundle $M \times \Bbb R$ of rank 1, then $\Omega^k(M; E) = \Omega^k(M)$, the usual space of differential forms. Alright, so to verify that $d^2 \neq 0$ indeed, let's just do the computation: $(d^2s)(X, Y) = d(ds)(X, Y) = \nabla_X ds(Y) - \nabla_Y ds(X) - ds([X, Y]) = \nabla_X \nabla_Y s - \nabla_Y \nabla_X s - \nabla_{[X, Y]} s$. Voila, the Riemann curvature tensor. Well, that's what it is called when $E = TM$, so that $s = Z$ is some vector field on $M$. In general this is the bundle curvature. Here's a point. What is $d\omega$ for $\omega \in \Omega^k(M; E)$ "really"? What would, for example, having $d\omega = 0$ mean? Well, the point is, $d : \Omega^k(M; E) \to \Omega^{k+1}(M; E)$ is a connection operator on $E$-valued $k$-forms on $M$. So $d\omega = 0$ would mean that the form $\omega$ is parallel with respect to the connection $\nabla$. Let $V$ be a finite dimensional real vector space, $q$ a quadratic form on $V$ and $Cl(V,q)$ the associated Clifford algebra, with the $\Bbb Z/2\Bbb Z$-grading $Cl(V,q)=Cl(V,q)^0\oplus Cl(V,q)^1$.
We define $P(V,q)$ as the group of elements of $Cl(V,q)$ with $q(v)\neq 0$ (under the identification $V\hookrightarrow Cl(V,q)$) and $\mathrm{Pin}(V)$ as the subgroup of $P(V,q)$ with $q(v)=\pm 1$. We define $\mathrm{Spin}(V)$ as $\mathrm{Pin}(V)\cap Cl(V,q)^0$. Is $\mathrm{Spin}(V)$ the set of elements with $q(v)=1$? Torsion only makes sense on the tangent bundle, so take $E = TM$ from the start. Consider the identity bundle homomorphism $TM \to TM$... you can think of this as an element of $\Omega^1(M; TM)$. This is called the "soldering form"; it comes tautologically when you work with the tangent bundle. You'll also see this thing appearing in symplectic geometry; I think they call it the tautological 1-form there. (The cotangent bundle is naturally a symplectic manifold.) Yeah. So let's give this guy a name, $\theta \in \Omega^1(M; TM)$. Its exterior covariant derivative $d\theta$ is a $TM$-valued $2$-form on $M$, explicitly $d\theta(X, Y) = \nabla_X \theta(Y) - \nabla_Y \theta(X) - \theta([X, Y])$. But $\theta$ is the identity operator, so this is $\nabla_X Y - \nabla_Y X - [X, Y]$. The torsion tensor!! So I was reading about this thing called the Poisson bracket. With the Poisson bracket you can give the space of all smooth functions on a symplectic manifold a Lie algebra structure. And then you can show that a symplectomorphism must also preserve the Poisson structure. I would like to calculate the Poisson Lie algebra for something like $S^2$. Something cool might pop up. If someone has the time to quickly check my result, I would appreciate it. Let $X_{1},\ldots,X_{n} \sim \Gamma(2,\,\frac{2}{\lambda})$. Is $\mathbb{E}\left[\frac{1}{2}\left(\frac{X_{1}+\cdots+X_{n}}{n}\right)^2\right] = \frac{1}{n^2\lambda^2}+\frac{2}{\lambda^2}$ ? Apparently there are metrizable Baire spaces $X$ such that $X^2$ not only is not Baire, but has a countable family $D_\alpha$ of dense open sets such that $\bigcap_{\alpha<\omega}D_\alpha$ is empty. @Ultradark I don't know what you mean, but you seem down in the dumps, champ.
I am trying to show that if $d$ divides $24$, then $S_4$ has a subgroup of order $d$. The only proof I could come up with is a brute force proof. It actually wasn't too bad. E.g., orders $2$, $3$, and $4$ are easy (just take the subgroup generated by a 2-cycle, 3-cycle, and 4-cycle, respectively); $d=8$ is Sylow's theorem; for $d=12$, take $A_4$; for $d=24$, take $S_4$. The only case that presented a semblance of trouble was $d=6$. But the group generated by $(1,2)$ and $(1,2,3)$ does the job. My only quibble with this solution is that it doesn't seem very elegant. Is there a better way? In fact, the action of $S_4$ on its three 2-Sylows by conjugation gives a surjective homomorphism $S_4 \to S_3$ whose kernel is a $V_4$. This $V_4$ can be thought of as the sub-symmetries of the cube which act on the three pairs of faces {{top, bottom}, {right, left}, {front, back}}. Clearly these are the 180 degree rotations along the $x$, $y$ and $z$-axes. But composing the 180 rotation along the $x$ with a 180 rotation along the $y$ gives you a 180 rotation along the $z$, indicative of the $ab = c$ relation in Klein's 4-group. Everything about $S_4$ is encoded in the cube, in a way. The same can be said of $A_5$ and the dodecahedron, say.
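The order-6 case can be checked mechanically; a small sketch (helper names are mine) that closes a generating set under composition:

```python
from itertools import product

def compose(p, q):
    """Composition of permutations given as tuples: apply q first, then p."""
    return tuple(p[q[i]] for i in range(len(p)))

def generated_subgroup(gens):
    """Close a set of permutations under composition (finite, so this suffices)."""
    identity = tuple(range(len(gens[0])))
    group = {identity}
    frontier = {identity}
    while frontier:
        new = {compose(g, h) for g, h in product(gens, frontier)} - group
        group |= new
        frontier = new
    return group

# Permutations of {0,1,2,3} (0-indexed versions of (1,2) and (1,2,3)):
transposition = (1, 0, 2, 3)   # swaps the first two points
three_cycle = (1, 2, 0, 3)     # cycles the first three points
H = generated_subgroup([transposition, three_cycle])
```

`H` comes out with 6 elements, all fixing the last point: it is the copy of $S_3$ inside $S_4$, confirming the claim about $\langle(1,2),(1,2,3)\rangle$.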
Short Version Can I get a quantile of an expression like this? \begin{equation} \sum_{k=1}^{n} A_k\exp(\mathcal{N}(t_k\mu-\sigma\sqrt{t_k}/2,\sigma\sqrt{t_k})) \end{equation} I know I can do it for one term of the summation as stated here; however, I would like to know whether that also works for the whole summation. Long Version Say I have a periodic investment; in a really simplified case, the user deposits $A_1$, $A_2$ and $A_3$ as in the following image. The deposits $A_1$, $A_2$ and $A_3$ happen at $t_1$, $t_2$ and $t_3$ respectively, with $t$ the period of time between consecutive transactions. The return over a period of time $t$ is $\mu$ and the volatility is $\sigma$. I want to know how much I have at time $t_3$ with a probability of $90\%$. My variables are normally distributed. This means, for example, that in order to project $A_1$ to the times $t_1$, $t_2$ and $t_3$ we have the following equations: \begin{equation} X_{A_1,t_1}\sim A_1\exp(\mathcal{N}(0t\mu-\sigma\sqrt{0t}/2,0\sigma))=A_1\\ X_{A_1,t_2}\sim A_1\exp(\mathcal{N}(t\mu-\sigma\sqrt{1t}/2,\sigma\sqrt{t}))\\ X_{A_1,t_3}\sim A_1\exp(\mathcal{N}(2t\mu-\sigma\sqrt{2t}/2,\sigma\sqrt{2t})) \end{equation} In the first equation you have exactly the same money because your deposit was at $t_1$, the same time that we analyze. Now, the quantile of a normal distribution is $\Phi^{-1}_P=\mu+\sqrt{2}\sigma \operatorname {erf} ^{-1}(2P-1)$, where $P \in [0,1]$, in our case $0.9$ because we want the $90\%$ level, and $\operatorname {erf}$ is the error function. Thus, we can calculate the quantile of $A_1$ at $t_3$ as follows: \begin{equation} \Phi^{-1}_{A_1,t_3|P}= A_1\exp(2t\mu-\sigma\sqrt{2t}/2+\sqrt{2}\sigma\sqrt{2t} \operatorname {erf} ^{-1}(2P-1)) \end{equation} Some reference to this equation here. My question is: can I sum all the quantiles at $t_3$? Or, how can I obtain the value of the quantile at $t_3$?
I tried the following equation: \begin{equation} \Phi^{-1}_{t_3|P}=\Phi^{-1}_{A_1,t_3|P}+\Phi^{-1}_{A_2,t_3|P}+\Phi^{-1}_{A_3,t_3|P} \end{equation} As shown before, the last term of the equation is just $A_3$, but the others depend on the time and the quantile. This equation is not right: it disagrees with a Monte Carlo simulation I ran to verify the results. I'm almost certain I must not sum quantiles the way I did, but I cannot find the proper resource to verify the equations.
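A quick simulation makes the failure concrete. The sketch below uses parameter values of my own choosing (not the OP's data) and, for simplicity, two *independent* lognormal terms grown to the same date: the 90% quantile of the sum comes out visibly smaller than the sum of the individual 90% quantiles. (Quantiles are additive only for comonotonic variables, and the terms here, whether independent as in this sketch or partially dependent as in the real portfolio, are not comonotonic, which is consistent with the dissimilarity against Monte Carlo.)

```python
import math
import random
from statistics import NormalDist

rng = random.Random(42)

# Illustrative parameters (my own choices):
mu, sigma, p, n = 0.05, 0.4, 0.9, 200_000
z = NormalDist().inv_cdf(p)   # Phi^{-1}(0.9), same as sqrt(2)*erfinv(2p-1)

# Exact 90% quantiles of each lognormal term, then their sum
# (this is the additivity the question attempted).
q1 = math.exp(mu + sigma * z)
q2 = math.exp(2 * mu + sigma * math.sqrt(2) * z)
sum_of_quantiles = q1 + q2

# Monte Carlo 90% quantile of the *sum* of the two independent terms.
sums = sorted(
    math.exp(rng.gauss(mu, sigma)) + math.exp(rng.gauss(2 * mu, sigma * math.sqrt(2)))
    for _ in range(n)
)
q_of_sum = sums[int(p * n)]
```

With these numbers `q_of_sum` lands noticeably below `sum_of_quantiles`, so the correct route is to estimate the quantile of the summed portfolio directly (by simulation, or by an approximation for sums of lognormals), not to add per-deposit quantiles.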
Combination Theorem for Cauchy Sequences/Product Rule Theorem Let $\struct {R, \norm {\,\cdot\,}}$ be a normed division ring. Let $\sequence {x_n}$, $\sequence {y_n}$ be Cauchy sequences in $R$. Then: $\sequence {x_n y_n}$ is a Cauchy sequence. Proof Since Cauchy sequences are bounded, there exist $K_1, K_2$ such that $\norm {x_n} \le K_1$ and $\norm {y_n} \le K_2$ for $n = 1, 2, 3, \ldots$. Let $K = \max \set {K_1, K_2}$. Then both sequences are bounded by $K$. Let $\epsilon > 0$ be given. Then $\dfrac \epsilon {2K} > 0$. Since $\sequence {x_n}$ is a Cauchy sequence, we can find $N_1$ such that: $\forall n, m > N_1: \norm {x_n - x_m} < \dfrac \epsilon {2K}$ Similarly, since $\sequence {y_n}$ is a Cauchy sequence, we can find $N_2$ such that: $\forall n, m > N_2: \norm {y_n - y_m} < \dfrac \epsilon {2K}$ Now let $N = \max \set {N_1, N_2}$. Then if $n, m > N$, both the above inequalities hold. Thus for all $n, m > N$:
$$\begin{aligned}
\norm {x_n y_n - x_m y_m} &= \norm {x_n y_n - x_n y_m + x_n y_m - x_m y_m} \\
&\le \norm {x_n y_n - x_n y_m} + \norm {x_n y_m - x_m y_m} && \text{Axiom (N3) of norm (Triangle Inequality)} \\
&= \norm {x_n \paren {y_n - y_m} } + \norm {\paren {x_n - x_m} y_m} \\
&\le \norm {x_n} \cdot \norm {y_n - y_m} + \norm {x_n - x_m} \cdot \norm {y_m} && \text{Axiom (N2) of norm (Multiplicativity)} \\
&\le K \cdot \norm {y_n - y_m} + \norm {x_n - x_m} \cdot K && \text{since both sequences are bounded by } K \\
&< K \cdot \dfrac \epsilon {2K} + \dfrac \epsilon {2K} \cdot K = \dfrac \epsilon 2 + \dfrac \epsilon 2 = \epsilon
\end{aligned}$$
Hence: $\sequence {x_n y_n}$ is a Cauchy sequence in $R$. $\blacksquare$
I want to calculate the volume of this region by using cylindrical coordinates: $$T:\ z \le 2-x^2-y^2, \quad z^2 \ge x^2+y^2$$ First I want to ask: I don't see the equation of a circle here, I mean I don't have, for example, $x^2+y^2=1$ or $x^2+y^2=2x$, etc. So this is my first confusion, and the second one is that I am not sure I am getting the limits of integration right. The limits for $z$ are obvious: I just substitute the parameters $x=r\cos\theta$, $y=r\sin\theta$ into the equations. To find the limits for $r$, is it right to intersect the two surfaces? Doing so I get $r=2-\cos\theta-\sin\theta$, so I want to ask if this means that I must search for where $(\sin\theta,\cos\theta)$ are negative on the unit circle to get the bounds for $\theta$? I found the following limits: $$r \le z \le 2-r^2 \\ 0 \le r \le 2-\cos\theta-\sin\theta \\ \pi \le\theta \le \frac{3\pi}{2}$$ Taking the upper nappe of the cone, the solid lies between $z = \sqrt{x^2+y^2}$ below and $z = 2-x^2-y^2$ above. The intersection of the two surfaces is found by solving $$ \sqrt{x^2+y^2}=2-x^2-y^2, $$ i.e. $r = 2 - r^2$, whose positive root is $r=1$, which is the circle $$ x^2+y^2=1 $$ Thus the region of integration is the disk $$ 0\le r \le 1, \quad 0\le \theta \le 2\pi $$ And the integrand (the height of the solid) is $$2-r^2-r$$ HINT Note that you have $$ \sqrt{x^2+y^2} \le z \le 2 - x^2-y^2, $$ and the projection of this into the $xy$-plane occurs where the boundaries intersect, i.e. when $$ \sqrt{x^2+y^2} = 2 - x^2 - y^2 \iff x^2+y^2=1, $$ so we will have to integrate this over the disk $D$ of radius $1$. Can you now set up the integration?
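With the OP's own $z$-limits $r \le z \le 2-r^2$ over the unit disk, the volume evaluates to $\int_0^{2\pi}\!\int_0^1 (2 - r^2 - r)\, r\, dr\, d\theta = 2\pi\left(1 - \tfrac14 - \tfrac13\right) = \tfrac{5\pi}{6}$. A quick midpoint-rule check (the discretization is mine):

```python
import math

def volume_midpoint(n=2000):
    """Midpoint rule for V = int_0^{2pi} int_0^1 (2 - r^2 - r) r dr dtheta.

    The theta integral is trivial (the integrand is independent of theta),
    so only the radial integral is discretized.
    """
    dr = 1.0 / n
    total = 0.0
    for i in range(n):
        r = (i + 0.5) * dr
        total += (2 - r * r - r) * r * dr
    return 2 * math.pi * total

V = volume_midpoint()
exact = 5 * math.pi / 6
```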
The Annals of Probability, Volume 42, Number 6 (2014), 2417-2453. On the local time of random processes in random scenery. Abstract: Random walks in random scenery are processes defined by $Z_{n}:=\sum_{k=1}^{n}\xi_{X_{1}+\cdots+X_{k}}$, where basically $(X_{k},k\ge1)$ and $(\xi_{y},y\in\mathbb{Z})$ are two independent sequences of i.i.d. random variables. We assume here that $X_{1}$ is $\mathbb{Z}$-valued, centered and with finite moments of all orders. We also assume that $\xi_{0}$ is $\mathbb{Z}$-valued, centered and square integrable. In this case H. Kesten and F. Spitzer proved that $(n^{-3/4}Z_{[nt]},t\ge0)$ converges in distribution as $n\to\infty$ toward some self-similar process $(\Delta_{t},t\ge0)$ called Brownian motion in random scenery. In a previous paper, we established that $\mathbb{P}(Z_{n}=0)$ behaves asymptotically like a constant times $n^{-3/4}$, as $n\to\infty$. We extend here this local limit theorem: we give a precise asymptotic result for the probability for $Z$ to return to zero simultaneously at several times. As a byproduct of our computations, we show that $\Delta$ admits a bi-continuous version of its local time process which is locally Hölder continuous of order $1/4-\delta$ and $1/6-\delta$, respectively, in the time and space variables, for any $\delta>0$. In particular, this gives a new proof of the fact, previously obtained by Khoshnevisan, that the level sets of $\Delta$ have Hausdorff dimension a.s. equal to $1/4$. We also get the convergence of every moment of the normalized local time of $Z$ toward its continuous counterpart.
First available in Project Euclid: 30 September 2014. Permanent link: https://projecteuclid.org/euclid.aop/1412083629. DOI: 10.1214/12-AOP808. MR3265171. Zbl 1307.60148. Subjects: Primary 60F05 (central limit and other weak theorems), 60F17 (functional limit theorems; invariance principles), 60G15 (Gaussian processes), 60G18 (self-similar processes), 60K37 (processes in random environments). Citation: Castell, Fabienne; Guillotin-Plantard, Nadine; Pène, Françoise; Schapira, Bruno. On the local time of random processes in random scenery. Ann. Probab. 42 (2014), no. 6, 2417-2453.
This question already has an answer here: Let $$a_n=\left(1-\dfrac{1}{\sqrt2}\right)\dots \left(1-\dfrac{1}{\sqrt{n+1}}\right),n\ge1$$ Then find $\lim_{n\to \infty} a_n$. How can I proceed? I am stuck at the first step. Please help. Note that $a_n>0$ and $$\lim_{n\to\infty}\log a_n=\sum_{k=1}^\infty \log \left(1-\dfrac{1}{\sqrt{k+1}}\right)$$ diverges to $-\infty$ by the comparison test with a divergent Riemann series: $$\log \left(1-\dfrac{1}{\sqrt{k+1}}\right)\sim_\infty -\dfrac{1}{\sqrt{k}}$$ hence $$\lim_{n\to\infty} a_n=0$$ Hint: if you consider $b_n=\ln a_n$, what can you say about the behavior of the sequence $(b_n)_{n\geq 2}$?
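A numerical check of the limit (the cut-off values of $n$ are my own choices); the partial products collapse toward 0 quickly, as the $-\sum 1/\sqrt{k}$ comparison predicts:

```python
import math

def a(n):
    """a_n = prod_{k=1}^{n} (1 - 1/sqrt(k+1))."""
    prod = 1.0
    for k in range(1, n + 1):
        prod *= 1 - 1 / math.sqrt(k + 1)
    return prod

values = [a(10), a(100), a(1000)]
# Strictly decreasing, positive, and already astronomically small at n = 1000
# (log a_1000 is roughly -2*sqrt(1000), i.e. about -60 and falling).
```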
Okay, I'll give a group-theoretic answer, since that was requested, which goes most of the way to a proof. I'm assuming you mean a 3x3x3 Rubik's cube. For larger cubes, the proof is just a little bit less convenient, because these cubes don't form groups (or more precisely, the action of the group of all moves on the cube on the states of the cube is not free). Basically everything still holds though for all cubes larger than 3x3x3 when talking specifically about the corner pieces, except that if the number of layers is even nothing fixes the orientation. Edge pieces and centers can get a bit more complicated though. Also, throughout, I'll use the notation $\mathbb Z_n = \mathbb Z / n \mathbb Z$ for the cyclic group of order n. There are a number of groups one could be referring to when using the term "cube group". Here, by the "full cube group" $G$, I mean the set of all states of the cube that can be reached by disassembling it and reassembling it, keeping all the center pieces fixed in position. In order to define the product, we'll identify it as a subset of the permutation group on the stickers. Note that each corner/edge sticker is distinguishable from all 47 other stickers. This is because on the 3x3x3 cube, each cubie has a unique collection of sticker colors. Hence, each state $\Sigma$ corresponds uniquely to a permutation $\sigma_\Sigma \in S_{48}$, where $\sigma_\Sigma(s_i)$ is the sticker in the position that sticker $s_i$ would be in the state $\Sigma_0$ corresponding to the solved cube. The product structure on the full cube group is induced by this map. As a trivial consequence of this definition, the identity of the cube group is the solved cube position $\Sigma_0$. (If this multiplication looks a bit backwards to you, it's because I want my permutation group to act on the right, which is primarily since the standard notation of cube moves (e.g. R U R' U R U2 representing the Sune) is read left-to-right.) There are $8! 
\; 3^8$ ways of putting in the 8 corner pieces (because each can be rotated any of 3 ways, and they can be permuted), and independently $12! \; 2^{12}$ ways of putting in the 12 edge pieces. Hence, the full cube group has order $8! \; 3^8 \; 12! \; 2^{12}$. This group is too big for cubing purposes. When cubing, there are 6 basic moves you can make, specifically, U, D, L, R, F and B (as well as their inverses, which are typically denoted by primes e.g. U'). The cube group $H$ is the subgroup of the full cube group generated by these 6 permutations, i.e. which can be written as words in these 6 letters and their inverses. These represent the states of the cube which are reachable with a finite sequence of turns. On this subgroup, the product operation is especially easy to understand. Let $a_1 \cdots a_n$ be any word in the generators which takes you from $\Sigma_0$ to $\Sigma_1$, and $b_1 \cdots b_m$ any word taking you from $\Sigma_0$ to $\Sigma_2$. Their product $\Sigma_1 \Sigma_2$ is the state you reach from the sequence of moves $a_1 \cdots a_n b_1 \cdots b_m$. Now that we have this identification, I'll be a bit cavalier in equating moves with states, but it should always be clear in context what is meant. We want to prove that a particular state $\Sigma'$, that with a single corner turned by $120^\circ$, is not reachable, which is to say that it is not in the cube group (though it is in the full cube group). To do this, we'll define a homomorphism $\varphi: G \rightarrow \mathbb Z_3$. We'll then show that $\varphi(a_i) = 0$ for any one of the 6 generators $a_i$. Since any state $\Sigma$ in the cube group can be written as a word in the generators, that means that $\varphi(\Sigma) = 0$. That is to say that $\varphi$ is an invariant under the cube group. However, we can compute $\varphi(\Sigma') \ne 0$, which shows that $\Sigma' \not \in H$. 
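For concreteness, the orders mentioned above can be computed directly. The factor of 12 relating $|G|$ and $|H|$ combines this mod-3 corner invariant with two further invariants, an edge-flip invariant mod 2 and a permutation-parity invariant; that $3 \cdot 2 \cdot 2 = 12$ is the whole story is a standard fact I'm quoting here, not proving:

```python
import math

# |G| = 8! * 3^8 * 12! * 2^12: all reassemblies with centers fixed.
full_cube_group = math.factorial(8) * 3**8 * math.factorial(12) * 2**12

# |H| = |G| / 12: the three invariants cut the full group by 3 * 2 * 2.
cube_group = full_cube_group // 12
```

`cube_group` comes out to the familiar 43,252,003,274,489,856,000 reachable positions of the 3x3x3 cube.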
Aside: I'll just point out that this sort of approach of defining invariants that are preserved under a set of transformations is very common in mathematics; for instance, point-set topology could be characterized broadly as the study of invariants of topological spaces under homeomorphisms. Here, we're dealing with an algebraic invariant. To be a bit fancier, we're trying to define a surjective homomorphism $\varphi$ so that the composition $H \hookrightarrow G \overset{\varphi}{\twoheadrightarrow} \mathbb Z_3$ is the trivial homomorphism. Keep in mind though that this isn't an exact sequence; I haven't required that the kernel of $\varphi$ is exactly $H$. It isn't a chain complex either, at least not in standard parlance, since $H$ and $G$ are nonabelian groups. To define the invariant, label each of the corner cubie stickers in the solved state with a number 0, 1, or 2, according to this diagram of a net of the cube: Define $\varphi(\Sigma)$ to be the total number on the stickers on the up and down faces mod 3 in the state $\Sigma$. This of course gives a (surjective) map from the full cube group to the group $\mathbb Z/3\mathbb Z$, which is not clearly a homomorphism. I won't give a full proof that it is, but I'll take you most of the way there. You may wonder where this particular choice comes from. In fact, the only reason I chose it is because it makes it very easy to see that $\varphi(a_i) = 0$ for each generator $a_i$. The up and down moves don't change any of the numbers at all. For the other 4 moves, the numbers do change, but they change the same way for all 4. Hence, you only need to check one position, and it's easy to check $\varphi(F) = 0$. For the proof I give below, it's a bit more convenient to check that the inverses of the generators also preserve $\varphi$, which only requires checking one more state $\varphi(F')=0$.
This ambiguity in the choice of corner-sticker labels for the solved state turns out to be important for proving that $\varphi$ is a homomorphism. Specifically, you'll want to prove the following lemma: Any choice of numbering will define the same function $\varphi$ provided it satisfies two criteria: Each cubie contains each of the numbers 0, 1, 2, exactly once, and they are in that cyclic order when read counter-clockwise around the vertex. The solved state still satisfies $\varphi(\Sigma_0)=0$ for this choice of numbering. This isn't too hard to prove, but I'll skip it here. Once you can prove that, the homomorphism property of $\varphi$ comes basically for free. For simplicity, I'll just prove that the restriction $\varphi |_H$ is a homomorphism (specifically the trivial homomorphism), which is all we need. The proof is by induction. The base case is the identity. Suppose we can show that for every word of length $n$ in the generators, $\varphi(a_1 \cdots a_n) = \varphi(a_1) \cdots \varphi(a_n) = 0$. Consider an arbitrary word $a_1 \cdots a_n a_{n+1}$. Now note that for $\Sigma = a_1 \cdots a_n$, we could define a different numbering on the cube, specifically, the one we would get by first doing $\Sigma$, then numbering the cube as in the above figure (and then of course returning it to the initial solved state). By inductive hypothesis, $\varphi(\Sigma^{-1}) = \varphi (a_n^{-1} \cdots a_1^{-1}) = 0$. But $\Sigma^{-1}$ has the same numbering in the original scheme as $\Sigma_0$ in the new one, because to get to $\Sigma_0$ from the above we just undo the moves we did before by applying $\Sigma^{-1}$. Thus, this alternate numbering pattern fulfills both requirements of the lemma (the second one is clear), so it gives a different method to compute the same function $\varphi$.
However, then the state $a_1 \cdots a_n a_{n+1} = \Sigma a_{n+1}$, by construction in the new sticker-numbering, will have the same numbering as if you started with the original sticker-numbering in the figure above and did the single turn $a_{n+1}$, which we know will still give $\varphi(a_1 \cdots a_{n+1}) = 0$. Hence, for any reachable state $\Sigma \in H$, we'll have $\varphi(\Sigma) = 0$, so $\varphi |_{H} = 0$. That's all you need, but I'll just say that proving that $\varphi$ is a homomorphism on $G$ isn't too much harder. You have to choose a different generating set though, since the generators for the cube group don't generate the full cube group. The generators you want to choose are basically "swap corners i and i+1" and "twist corner 1 by $120 ^\circ$". Those generate all possible configurations of the corner pieces (you should prove this). Of course, you also need to include things which generate the edge permutations, though these aren't important at all since we only care about the corners. The proof will proceed much the same as above, with a bit of special consideration taken for the "twist corner 1" generator. Now, once you know $\varphi(\Sigma) = 0$ whenever $\Sigma \in H$, you just have to compute $\varphi(\Sigma')$ for the state with a single corner turned. This is easily seen to be either 1 or 2 depending on which direction you twist the single corner. So $\Sigma' \not \in H$, and we're done with the proof. As you can see, it's not really that hard, though it is very detailed and long. I'll take a moment to mention some nice group theoretical stuff that we're actually not far off from proving. Note that as I said before, the sequence of homomorphisms $H \hookrightarrow G \twoheadrightarrow \mathbb Z_3$ is not exact. However, we can basically completely understand the group structure of the cube group.
It is a normal subgroup of the full cube group, and the quotient is given by $H \hookrightarrow G \twoheadrightarrow \mathbb Z_3 \times \mathbb Z_2 \times \mathbb Z_2$ (which is exact). The latter factors of $\mathbb Z_2$ are also describable simply in terms of cube positions. The first represents edge orientation; a state with a single edge flipped is not solvable. The second represents odd permutations. A state with 2 corner pieces (or 2 edge pieces, but not both) swapped, but with all other pieces in the right position, is not solvable. It's impossible to permute a single pair of pieces, though it is possible to permute 2 pairs simultaneously. For any cube state, when solving it, you'll either eventually come to a fully solved cube, or you'll reach something which has a single corner twisted and/or a single edge flipped and/or a single pair of either corners or edges swapped. In any of these latter cases, the cube is not solvable and will need to be disassembled to return it to the solved state. From the above, the cube group $H$ is an index 12 subgroup of the full cube group $G$, which means that 1/12 of the positions will be solvable. The cube group $H$ thus has order $8! \; 3^7 \; 12! \; 2^{10}$. Furthermore, if you know a bit of group theory, it's pretty easy to see that the full cube group is isomorphic to $\mathbb Z_3 \wr S_8 \times \mathbb Z_2 \wr S_{12}$. The edge and corner orientations are just the diagonal homomorphisms of their respective wreath products (which incidentally is the easiest way to prove the above lemma by showing that this gives you the same homomorphism), while the permutation parity is a bit more complicated to understand. Once you've understood these fully, you can compute that the cube group is isomorphic to $(\mathbb Z_3^7 \times \mathbb Z_2^{11}) \rtimes ((A_8 \times A_{12}) \rtimes \mathbb Z_2)$. Finally, since it's relevant to my interests, I'll briefly note that much of what I said here still works for higher dimensional cubes.
See, for instance, this MO question.
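The corner-orientation bookkeeping in the argument above can also be simulated. The sketch below tracks only the corner pieces, with an assumed (non-standard) slot numbering and assumed 4-cycles for each face turn; all that matters for the argument is that U and D add no twist while each side turn adds twists summing to 0 mod 3.

```python
import random

# State of the corners only: perm[slot] = piece in that slot,
# ori[slot] = twist (mod 3) of the piece in that slot.
# MOVES maps each face turn to (4-cycle of corner slots, twist added per slot).
# The slot numbering and cycles are an assumed labeling, not a standard one.
MOVES = {
    "U": ((0, 1, 2, 3), (0, 0, 0, 0)),  # U and D never twist corners
    "D": ((4, 7, 6, 5), (0, 0, 0, 0)),
    "F": ((0, 4, 5, 1), (1, 2, 1, 2)),  # side turns twist by +1/-1 alternately
    "B": ((2, 6, 7, 3), (1, 2, 1, 2)),
    "L": ((0, 3, 7, 4), (1, 2, 1, 2)),
    "R": ((1, 5, 6, 2), (1, 2, 1, 2)),
}

def turn(state, move):
    perm, ori = state
    cycle, twist = MOVES[move]
    new_perm, new_ori = list(perm), list(ori)
    for i, slot in enumerate(cycle):
        src = cycle[i - 1]                       # piece moves src -> slot
        new_perm[slot] = perm[src]
        new_ori[slot] = (ori[src] + twist[i]) % 3
    return tuple(new_perm), tuple(new_ori)

random.seed(0)
state = (tuple(range(8)), (0,) * 8)              # solved cube
for m in random.choices(list(MOVES), k=300):
    state = turn(state, m)
assert sum(state[1]) % 3 == 0                    # phi vanishes on the cube group

twisted = (tuple(range(8)), (1,) + (0,) * 7)     # one corner turned 120 degrees
assert sum(twisted[1]) % 3 != 0                  # phi(Sigma') != 0: unreachable
```

Each entry in `MOVES` adds twists summing to a multiple of 3, which is exactly why the total twist mod 3 is invariant under any word in the generators.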
I am solving a question whose first item is to demonstrate the Banach Fixed Point Theorem, and the second item is as follows: Show that for any parameter $t \in \mathbb{R}$ the system $$ \begin{cases} x = \frac{1}{2}\sin(x+y) + t - 1 \\ y = \frac{1}{2}\cos(x-y) - t + \frac{1}{2} \end{cases} $$ has a unique solution that depends continuously on the parameter $t$. Now, it seems clear to me that the question asks one to apply the just-proved Banach Fixed Point Theorem to find the fixed points of the map $$ F(x, y) = (\frac{1}{2}\sin(x+y) + t - 1, \frac{1}{2}\cos(x-y) - t + \frac{1}{2}) $$ However, I am not able to show that such a map is a contraction. Any hints will be most appreciated. Thanks in advance.
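One hedged hint: in the Euclidean norm the Jacobian of $F$ is $\frac{1}{2}\begin{pmatrix}\cos(x+y)&\cos(x+y)\\-\sin(x-y)&\sin(x-y)\end{pmatrix}$, whose singular values work out to $|\cos(x+y)|/\sqrt{2}$ and $|\sin(x-y)|/\sqrt{2}$, both at most $1/\sqrt{2}<1$, so $F$ is a contraction with constant $1/\sqrt{2}$. A numerical sketch of the resulting fixed-point iteration (not a proof; the helper names are mine):

```python
import math

def F(x, y, t):
    # the map from the question; a fixed point of F solves the system
    return (0.5 * math.sin(x + y) + t - 1.0,
            0.5 * math.cos(x - y) - t + 0.5)

def solve(t, tol=1e-12, max_iter=1000):
    # Banach iteration: with contraction constant 1/sqrt(2), the error
    # shrinks geometrically, so convergence is fast from any start.
    x = y = 0.0
    for _ in range(max_iter):
        nx, ny = F(x, y, t)
        if math.hypot(nx - x, ny - y) < tol:
            return nx, ny
        x, y = nx, ny
    raise RuntimeError("iteration did not converge")

x, y = solve(0.7)
fx, fy = F(x, y, 0.7)
assert abs(x - fx) < 1e-9 and abs(y - fy) < 1e-9   # (x, y) is a fixed point
```

Varying `t` slightly and watching the solution move continuously also illustrates the second half of the claim.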
I am attempting to solve the following CES Utility Function problem: However, I am running into issues when I get to 3). For 1) I have $K = \left(\frac{\omega p_1}{p_2}\right)^{\frac{1}{\rho+1}}$ For 2) I get $X_2^M = \frac{m}{\frac{p_1}{K}+p_2} $ For 3) I find $\lambda^* = (K^\rho + \omega)^{-\frac{1}{\rho}-1} \cdot \omega \cdot p_2^{-1} $ and $v(p_1, p_2, m) = \left(\left(\frac{m}{p_1+Kp_2}\right)^{-\rho} + \omega\left(\frac{mK}{p_1+Kp_2}\right)^{-\rho}\right)^{-1/\rho}$ I then divide $\lambda^*$ by $v(p_1, p_2, m)$, but when I do so I can't seem to fully cancel out $p_1,p_2,m,K$ and $\rho$, which I believe I would have to do to prove that they are proportional. I'm not sure if the issue is with my $\lambda^*$, my $v(p_1,p_2,m)$, or both... Additionally, for 6) how does one demonstrate homogeneity of a given degree?
The group completion (aka Grothendieck group) of an abelian monoid $M$ is an abelian group $G(M)$ with a homomorphism $\iota:M \to G(M)$ of monoids satisfying the following universal property: for every homomorphism $f:M \to H$ whose target is an abelian group there exists a unique homomorphism $\phi:G(M)\to H$ of groups such that $\phi\circ\iota=f$. By the universal property of $G$, every homomorphism $f:M \to N$ of monoids uniquely gives rise to a group homomorphism $G(f):G(M) \to G(N)$. Of course, this concept is used to define $K$-theory: if $\mathcal V_X$ is the abelian monoid of the isomorphism classes of real vector bundle over a topological space $X$, then $KO(X):=G(\mathcal V_X)$. In general, the homomorphism $\iota:M \to G(M)$ is not injective. (e.g. $M=\mathcal V_{\mathbb{CP}(1)}$.) Questions: Is there any criterion which implies (or is equivalent to) the injectivity of the map $\iota: M\to G(M)$? How about the cases when $M=\mathcal V_X$? Is there any criterion for a homomorphism $f:M \to N$ of monoid which implies the injectivity of the group homomorphism $G(f):G(M) \to G(N)$? How about the cases when $f:=F^*:\mathcal V_X \to \mathcal V_Y$? (Here $F:Y \to X$ is a continuous map.)
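A small computational illustration (a sketch; the monoid and helper names are mine, not from the question): one standard criterion is that $\iota$ is injective if and only if $M$ is cancellative, i.e. $a+c=b+c$ implies $a=b$, since $\iota(a)=\iota(b)$ exactly when $a+k=b+k$ for some $k$. The code below builds $G(M)$ for a finite monoid as pairs $(a,b)$ modulo $(a,b)\sim(c,d) \iff \exists k:\ a+d+k=c+b+k$, and shows the collapse for a non-cancellative example.

```python
from itertools import product

# Grothendieck group of a finite commutative monoid given by an addition map:
# elements of G(M) are classes of pairs (a, b), thought of as "a - b",
# with (a, b) ~ (c, d) iff a + d + k == c + b + k for some k in M.
def grothendieck_classes(elements, add):
    def related(p, q):
        (a, b), (c, d) = p, q
        return any(add(add(a, d), k) == add(add(c, b), k) for k in elements)
    classes = []
    for p in product(elements, repeat=2):
        for cls in classes:
            if related(p, cls[0]):   # ~ is an equivalence relation on pairs
                cls.append(p)
                break
        else:
            classes.append([p])
    return classes

# Non-cancellative example: M = {0, 1} with saturating addition (1 + 1 = 1).
# Then [1] + [1] = [1] forces [1] = 0 in G(M), so iota(1) = iota(0).
assert len(grothendieck_classes([0, 1], lambda a, b: min(a + b, 1))) == 1

# Cancellative example: Z/3Z under addition; iota is injective, G(M) = Z/3Z.
assert len(grothendieck_classes([0, 1, 2], lambda a, b: (a + b) % 3)) == 3
```

For $M=\mathcal V_X$ this criterion says $\iota$ is injective exactly when stably isomorphic bundles are isomorphic, which is the property that fails for $\mathcal V_{\mathbb{CP}(1)}$.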
The definition of limit of a sequence I always encountered was of the form: DEFINITION: Limit of a Sequence: Let $\{x_n\}_{n\in\mathbb{N}}$ be some sequence of real numbers, (i.e. $x:\mathbb{N}\to\mathbb{R}$) We say that: $lim_{n\to\infty}x_n=L$ if and only if $\forall 0 < \epsilon \in\mathbb{R} ,\exists N\in\mathbb{N}, \forall N<n\in\mathbb{N}, |x_n - L| < \epsilon $ And the theorem of arithmetic laws for limits of sequences was always of the form: THEOREM: Limits of Sequences under Arithmetic Operations: If $\{x_n\}_{n\in\mathbb{N}}$ and $\{y_n\}_{n\in\mathbb{N}}$ are two sequences (i.e. $x$ and $y$ are functions: $x:\mathbb{N}\to\mathbb{R}$ and $y:\mathbb{N}\to\mathbb{R}$) and $X,Y\in\mathbb{R}$ are two real numbers such that $lim_{n\to\infty}x_n=X$ and $lim_{n\to\infty}y_n=Y$ then the following are true: $lim_{n\to\infty}(x_n+y_n)=X+Y$ $lim_{n\to\infty}(x_n-y_n)=X-Y$ $lim_{n\to\infty}(x_n\times y_n)=X\times Y$ If in addition $\forall n\in\mathbb{N}, y_n \neq 0$ and $Y\neq 0$ then $lim_{n\to\infty}(\frac{x_n}{y_n})=\frac{X}{Y}$ Now the problem is that this definition and theorem do not allow one to handle limits such as the one below in a truly rigorous manner: $lim_{n\to\infty}\frac{n-1}{2n-6} = \frac{1}{2}$ Suppose that we try to solve this limit by using the standard definition and theorem as given above: Define two sequences (i.e. two functions from $\mathbb{N}$ to $\mathbb{R}$) $\{x_n\}_{n\in\mathbb{N}}$ and $\{y_n\}_{n\in\mathbb{N}}$ such that $\forall n\in\mathbb{N}, x_n \triangleq 1-\frac{1}{n}$ and $\forall n\in\mathbb{N}, y_n \triangleq 2-\frac{6}{n}$ where the symbol $\triangleq$ means "equal by definition". (The sequences are well defined since for all $n\in\mathbb{N}$, the denominator in each of those two sequences does not vanish.) Now, we can show that for $n = 3$ we have $y_3 = 2-\frac{6}{3}=0$ and that $\forall 4\leq n\in\mathbb{N}, y_n \neq 0$. Also it is clear that $lim_{n\to\infty}x_n=1$ and that $lim_{n\to\infty} y_n = 2 \neq 0$.
Now in order to claim that $lim_{n\to\infty}\frac{x_n}{y_n}=\frac{1}{2}$ we have to change the condition of part 4. of the theorem (that is $\forall n\in\mathbb{N},y_n\neq 0$), and say that it is enough that for almost all $n\in\mathbb{N}$ we have $y_n \neq 0$, i.e. that it is enough that $\exists N\in\mathbb{N},\forall N<n\in\mathbb{N}, y_n \neq 0$ in order to conclude that (*) $lim_{n\to\infty}\frac{x_n}{y_n}=\frac{1}{2}$. Now since it can be shown that $\forall 4\leq n\in\mathbb{N}, \frac{n-1}{2n-6} = \frac{x_n}{y_n}$, we can conclude by (*) that $lim_{n\to\infty}\frac{n-1}{2n-6} = \frac{1}{2}$ as was to be shown. Now the problem is that the limit $lim_{n\to\infty}\frac{n-1}{2n-6} = \frac{1}{2}$ itself also does not conform to the original definition of limit of a sequence, which requires the sequence to be defined for every natural number $n\in\mathbb{N}$, because for $n=3$ the denominator is $0$. Now the problem is, in order to tackle limits like this we have to "update" the definition of limit of a sequence so that it will be able to handle rigorously limits of sequences like the one above, or limits like this one $lim_{n\to\infty}\frac{3n^2+n-1}{5n^3-2n^2+n-4}$, which may have denominators that are equal to zero for a finite number of natural numbers, and thus the sequence may not be defined for all $n\in\mathbb{N}$. What definition of limit should we take? Maybe this one? UPDATED DEFINITION: Limit of a Sequence: Let $A\in\mathscr{P}(\mathbb{N})$ (the power set of $\mathbb{N}$) be some subset of the natural numbers with cardinality $|A|=\aleph_0$ that satisfies $\exists K\in\mathbb{N},\forall K<n\in\mathbb{N},n\in A$, And let $\{x_n\}_{n\in A}$ be some sequence of real numbers (i.e.
$x:A\to\mathbb{R}$), We say that: $lim_{A\ni n\to\infty}x_n=L$ if and only if $\forall 0 < \epsilon \in\mathbb{R} ,\exists N\in\mathbb{N}, \forall N<n\in A, |x_n - L| < \epsilon $ IMPORTANT NOTE: The condition $\exists K\in\mathbb{N},\forall K<n\in\mathbb{N},n\in A$ must be included, otherwise we may get a subsequential limit (i.e. a limit of some subsequence), and the limit will not be uniquely defined. (Take for example the sequence $\{(-1)^n\}$.) This new definition will probably be able to handle this kind of limit rigorously. The only downside is that many theorems that were already proven for the original definition must now be reproved again. Also we must prove that the set over which the limit is taken does not matter; in other words, we must prove that the limit is independent of the underlying set. I will state it formally as a theorem which I haven't tried to prove yet: NEW THEOREM: The limit (if it exists) is independent of the underlying set Let $A_1\in\mathscr{P}(\mathbb{N})$ be some subset of the natural numbers with cardinality $|A_1|=\aleph_0$ that satisfies $\exists K_1\in \mathbb{N}, \forall K_1<n\in \mathbb{N}, n \in A_1$, Let $A_2\in\mathscr{P}(\mathbb{N})$ be some subset of the natural numbers with cardinality $|A_2|=\aleph_0$ that satisfies $\exists K_2\in \mathbb{N}, \forall K_2<n\in \mathbb{N}, n \in A_2$, Let $\{x_n\}_{n\in A_1}$ and $\{y_n\}_{n\in A_2}$ be two sequences such that $\forall n\in A_1 \cap A_2, x_n = y_n$.
And let $L\in\mathbb{R}$ be some real number, Then: $lim_{A_1\ni n\to\infty}x_n = L $ if and only if $\lim_{A_2\ni n\to \infty}y_n = L$ We can also restate and prove (I haven't proved it yet, but the proof is supposed to be straightforward) the theorem of arithmetic laws of limits of sequences: UPDATED THEOREM: Limits of Sequences under Arithmetic Operations: Let $A\in\mathscr{P}(\mathbb{N})$ be some subset of the natural numbers with cardinality $|A|=\aleph_0$ that satisfies $\exists K\in \mathbb{N}, \forall K<n\in \mathbb{N}, n \in A$, Let $\{x_n\}_{n\in A}$ and $\{y_n\}_{n\in A}$ be two sequences (i.e. $x$ and $y$ are functions: $x:A\to\mathbb{R}$ and $y:A\to\mathbb{R}$), And let $X,Y\in\mathbb{R}$ be two real numbers such that $lim_{A\ni n\to\infty}x_n=X$ and $lim_{A\ni n\to\infty}y_n=Y$ then the following are true: $lim_{A\ni n\to\infty}(x_n+y_n)=X+Y$ $lim_{A\ni n\to\infty}(x_n-y_n)=X-Y$ $lim_{A\ni n\to\infty}(x_n\times y_n)=X\times Y$ If in addition $\forall n\in A, y_n \neq 0$ and $Y\neq 0$ then $lim_{A\ni n\to\infty}(\frac{x_n}{y_n})=\frac{X}{Y}$ Now, for example, let's try to prove that $lim_{n\to\infty}\frac{3n^2+n-1}{5n^3-2n^2+n-4} = 0$ by using the new definitions and theorems: Let $B\in\mathscr{P}(\mathbb{R})$ be defined as $B=\{x\in\mathbb{R}|5x^3-2x^2+x-4=0\}$. Since a cubic polynomial has at most three real roots, this set has at most $3$ elements (i.e. $0\leq |B| \leq 3$). Now, define $A\in\mathscr{P}(\mathbb{N})$ as $A = \mathbb{N}-B$. It is clear that $|A| = \aleph_0$ since $B$ includes only a finite number of elements or none at all. Also, it is clear that $\exists K\in \mathbb{N},\forall K<n\in \mathbb{N}, n\in A$, (Just take $K=1$ if $B=\emptyset$, or choose any $K\in\mathbb{N}$ that satisfies $\max(B)<K$ if $B\neq \emptyset$). Now, define two sequences (i.e.
two functions from $A$ to $\mathbb{R}$) $\{x_n\}_{n\in A}$ and $\{y_n\}_{n\in A}$ such that $\forall n\in A, x_n \triangleq \frac{3}{n}+\frac{1}{n^2}-\frac{1}{n^3}$ and $\forall n\in A, y_n \triangleq 5 - \frac{2}{n}+\frac{1}{n^2}-\frac{4}{n^3}$. (The sequences are well defined since for all $n\in A$, the denominator in each of those two sequences does not vanish.) Now, it is clear that $lim_{A\ni n\to\infty}x_n=0$ and that $lim_{A \ni n\to\infty} y_n = 5 \neq 0$. Now by using the new form of the theorem of arithmetic laws for limits of sequences, we can conclude that (*) $lim_{A \ni n\to\infty}\frac{x_n}{y_n}=0$. Now since it can be shown that $\forall n\in A, \frac{3n^2+n-1}{5n^3-2n^2+n-4} = \frac{x_n}{y_n}$, we can conclude by (*) that $lim_{A \ni n\to\infty}\frac{3n^2+n-1}{5n^3-2n^2+n-4}=0$ as was to be shown. Q.E.D. Thanks for any ideas. Also, suggestions/references for books (or papers/articles) that treat limits in such a manner will be appreciated. Thanks a lot.
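As a numerical complement to the first example (a sketch, not part of the proof): for $a_n=\frac{n-1}{2n-6}$ one can compute $|a_n-\frac{1}{2}|=\frac{1}{n-3}$ for $n\ge 4$, so an explicit threshold $N$ exists for every $\varepsilon$ even though $a_3$ is undefined.

```python
import math

def a(n):
    # the sequence from the question; undefined at n = 3 (zero denominator)
    assert n != 3, "a_3 is undefined"
    return (n - 1) / (2 * n - 6)

# |a_n - 1/2| = 1/(n - 3) for n >= 4, so N = ceil(1/eps) + 3 is a valid threshold.
for eps in (1e-1, 1e-3, 1e-6):
    N = math.ceil(1 / eps) + 3
    assert all(abs(a(n) - 0.5) < eps for n in range(N + 1, N + 1000))
```

This matches the "eventual domain" point of the updated definition: only the tail of the sequence, beyond the finitely many bad indices, matters.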
I'm quite confused by this transformation, and am trying to gain fluency in moving back and forth between these objects. I understand that a second order dyadic Cartesian tensor can be represented as the sum of rank 0, 1, and 2 irreducible spherical tensors. I also know that the crucial point of extending this process to spherical tensors of higher rank is the fact that $r^lY_l^m$ is a homogeneous polynomial in $(x,y,z)$ of order $l$. My understanding of why this is the crux, while shaky, is because this polynomial is in fact a spherical tensor of rank $l$, proven by showing that it transforms properly under rotations. That is my background, here is where I'm trying to apply it. Consider the traceless symmetric Cartesian tensor $$T_{ij} = \frac{S_iS_j+S_jS_i}{2}-\frac{\delta_{ij}}{3}\vec{S}\cdot\vec{S}$$ with the components of $\vec{S}$ satisfying the angular momentum commutation relations $$[S_i,S_j] = \sum_{k=1}^{3}i\hbar\epsilon_{ijk}S_k.$$ How am I to construct the five components of the related spherical tensor $T_m^{2}$? Since the components of $S$ do not commute, I think it is appropriate to first determine $T_2^2$, and then use the commutation relation $$[J_-,T_q^{(k)}] = \sqrt{(k-q+1)(k+q)}T_{q-1}^k$$ to determine the others. Would this be a valid approach to constructing the spherical tensor? If so, I'm a bit confused about determining what $T_2^2$ ought to be. I know that I'll be using the fact that $r^2Y_2^2$ is a homogeneous polynomial, but that connection is the main source of my confusion. Thanks for any insight! EDIT - Add-on question It has been clarified that $T_q^k = r^kY_k^q \sim (x+iy)^k$ can be extended to generating spherical tensors from a vector operator $\vec{A}$, satisfying $[A_i,A_j]=0$, with $T_k^k = \alpha(A_x+i A_y)^k$. $\alpha$ is the normalization constant. From here, the lower elements can be generated using the stated lowering commutation identity.
The add-on question here is the following: For a vector operator $\vec{S}$ that satisfies the stated angular momentum commutation relations, does the above generating relationship still hold for the highest indexed element if the rank of the spherical tensor is even? In other words, is $T_{2k}^{2k} = (S_x+i S_y)^{2k}$ still the proper highest ordered element of the spherical tensor? How about if the rank is odd: $T_{2k+1}^{2k+1} = (S_x+i S_y)^{2k+1}$?
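A concrete numerical check of this construction for the smallest nontrivial case, spin $s=1$ (a sketch with $\hbar=1$; taking $T_2^2=S_+^2$ is a conventional normalization choice): start from $T_2^2=(S_x+iS_y)^2$ and lower twice with the stated commutation identity. The $q=0$ component then comes out proportional to the Cartesian component $T_{zz}=S_z^2-\frac{1}{3}\vec S\cdot\vec S$, as one would hope.

```python
import numpy as np

# Spin-1 matrices (hbar = 1) in the |m = 1, 0, -1> basis.
sq2 = 1 / np.sqrt(2)
Sx = np.array([[0, sq2, 0], [sq2, 0, sq2], [0, sq2, 0]], dtype=complex)
Sy = np.array([[0, -1j * sq2, 0], [1j * sq2, 0, -1j * sq2],
               [0, 1j * sq2, 0]], dtype=complex)
Sz = np.diag([1.0, 0.0, -1.0]).astype(complex)
Sp, Sm = Sx + 1j * Sy, Sx - 1j * Sy
S2 = Sx @ Sx + Sy @ Sy + Sz @ Sz        # Casimir: s(s+1) I = 2 I

comm = lambda A, B: A @ B - B @ A

# Lower from the top component: [S-, T^2_q] = sqrt((k-q+1)(k+q)) T^2_{q-1}.
T22 = Sp @ Sp
T21 = comm(Sm, T22) / np.sqrt(1 * 4)    # q = 2 -> 1: sqrt((2-2+1)(2+2)) = 2
T20 = comm(Sm, T21) / np.sqrt(2 * 3)    # q = 1 -> 0: sqrt((2-1+1)(2+1)) = sqrt(6)

# T20 should be proportional to the zz component of the Cartesian tensor.
Tzz = Sz @ Sz - S2 / 3
assert np.allclose(T20, np.sqrt(6) * Tzz)
# Intermediate check: T21 = -(Sz S+ + S+ Sz), the symmetrized mixed component.
assert np.allclose(T21, -(Sz @ Sp + Sp @ Sz))
```

The same lowering bookkeeping works for any spin; only the explicit matrices change.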
Let $\beta > 0$ and $S_{0}=0$, and let $S_{n}=\xi_{1}+\dots+\xi_{n}$, $n \geq 1$, be a random walk with i.i.d. increments $\{\xi_{n}\}$ having a common distribution $P(\xi_{1}=-1)=1-C_{\beta}$ and $P(\xi_{1}>t)=C_{\beta}e^{-t^{\beta}}$, $t \geq 0$, where $C_{\beta} \in (0,1)$ is such that $E\xi_{1}=-1/2$. Let $M= \sup_{n \geq 0}S_{n}$. Now, the question is: for which values of $\beta > 0$ is the main reason for $M$ to be large that there is a single large summand $\xi_{n}$ for some $n$? There is a hint: one has to identify the range of $\beta$ for which the distribution of $\xi_{1}$ is heavy-tailed. First, I tried to understand why 'answering' the hint answers the main question. So if we show that the distribution of $\xi_{1}$ is heavy-tailed (for some $\beta$'s), then for those values of $\beta$, $\xi_{2},\dots,\xi_{n}$ come from the same heavy-tailed distribution. Thus for some $n$, one of the $\xi_{n}$'s will be somehow extreme, i.e. large, causing the sum to be large. Is that logic correct? I know that, generally speaking, heavy-tailed distributions are probability distributions whose tails are not exponentially bounded. Still, I'm not really sure what I have to do in order to show for which $\beta$ the distribution of $\xi_{1}$ is heavy-tailed. In general, how does one show that a distribution is heavy-tailed?
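To connect the hint to a computation (a sketch; the function name is mine): a distribution is heavy-tailed when $e^{\lambda t}\,P(\xi_1>t)\to\infty$ for every $\lambda>0$, i.e. the tail is not exponentially bounded. With the tail $C_\beta e^{-t^\beta}$ this happens exactly when $\beta<1$, since then $\lambda t - t^\beta \to +\infty$ for any fixed $\lambda>0$.

```python
# Log of exp(lambda * t) * P(xi_1 > t), dropping the constant C_beta:
# positive and growing means the tail beats every exponential bound.
def log_tail_ratio(beta, lam, t):
    return lam * t - t ** beta

lam, t = 0.01, 1e6
assert log_tail_ratio(0.5, lam, t) > 0   # beta < 1: heavy tail (ratio blows up)
assert log_tail_ratio(1.5, lam, t) < 0   # beta > 1: exponentially bounded tail
```

For $\beta<1$ the increments are in fact subexponential (Weibull-type tails), which is the regime where the "single big jump" heuristic for $M=\sup_n S_n$ of a negative-drift walk is known to hold.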
The limit is of a function in a variable $x$ and the sine function; here $x$ represents an angle. $\displaystyle \lim_{x \,\to\, 0}{\dfrac{\sin{2x}+3x}{4x+\sin{6x}}}$ The form of the function recalls the standard formula for the limit of the quotient of $\sin{x}$ by $x$ as $x$ tends to $0$. So, let's try to transform the function into the form of this formula. Factor $x$ out of both terms in the numerator and denominator of the function. This is our first step towards the required form. $= \,\,\,$ $\displaystyle \lim_{x \,\to\, 0}{\dfrac{x\Big(\dfrac{\sin{2x}}{x}+3\Big)}{x\Big(4+\dfrac{\sin{6x}}{x}\Big)}}$ $= \,\,\,$ $\displaystyle \require{cancel} \lim_{x \,\to\, 0}{\dfrac{\cancel{x}\Big(\dfrac{\sin{2x}}{x}+3\Big)}{\cancel{x}\Big(4+\dfrac{\sin{6x}}{x}\Big)}}$ $= \,\,\,$ $\displaystyle \lim_{x \,\to\, 0}{\dfrac{\dfrac{\sin{2x}}{x}+3}{4+\dfrac{\sin{6x}}{x}}}$ Apply the limit to the numerator and the denominator of the function as per the quotient rule of limits. $= \,\,\,$ $\dfrac{\displaystyle \lim_{x \,\to\, 0}{\Bigg[\dfrac{\sin{2x}}{x}}+3\Bigg]}{\displaystyle \lim_{x \,\to\, 0}{\Bigg[4+\dfrac{\sin{6x}}{x}\Bigg]}}$ The limit distributes over the two terms in the numerator and in the denominator as per the addition rule of limits. $= \,\,\,$ $\dfrac{\displaystyle \lim_{x \,\to\, 0}{\dfrac{\sin{2x}}{x}} + \lim_{x \,\to\, 0}{3}}{\displaystyle \lim_{x \,\to\, 0}{4}+\lim_{x \,\to\, 0}{\dfrac{\sin{6x}}{x}}}$ There is no variable in the second term of the numerator or in the first term of the denominator, and the limit of a constant as the variable approaches zero is that same constant. $= \,\,\,$ $\dfrac{\displaystyle \lim_{x \,\to\, 0}{\dfrac{\sin{2x}}{x}} + 3}{4+\displaystyle \lim_{x \,\to\, 0}{\dfrac{\sin{6x}}{x}}}$ The angle in the sine function is $2x$ but its denominator is $x$. Adjust the numerator and denominator so that the two match.
Apply the same procedure to the denominator as well. $= \,\,\,$ $\dfrac{\displaystyle \lim_{x \,\to\, 0}{\dfrac{2\sin{2x}}{2x}} + 3}{4+\displaystyle \lim_{x \,\to\, 0}{\dfrac{6\sin{6x}}{6x}}}$ $= \,\,\,$ $\dfrac{2\displaystyle \lim_{x \,\to\, 0}{\dfrac{\sin{2x}}{2x}}+3}{4+6\displaystyle \lim_{x \,\to\, 0}{\dfrac{\sin{6x}}{6x}}}$ In the previous step, we arranged for the angle in each sine function to match its denominator. If $x \to 0$, then $2x \to 2 \times 0$, i.e. $2x \to 0$. Similarly, if $x \to 0$, then $6x \to 6 \times 0$, i.e. $6x \to 0$. $= \,\,\,$ $\dfrac{2\displaystyle \lim_{2x \,\to\, 0}{\dfrac{\sin{2x}}{2x}}+3}{4+6\displaystyle \lim_{6x \,\to\, 0}{\dfrac{\sin{6x}}{6x}}}$ By the rule that the ratio of $\sin{x}$ to $x$ tends to one as $x$ approaches zero, the limit of each of these quotients is one. $= \,\,\,$ $\dfrac{2 \times 1+3}{4+6 \times 1}$ $= \,\,\,$ $\dfrac{2+3}{4+6}$ $= \,\,\,$ $\dfrac{5}{10}$ $= \,\,\,$ $\dfrac{1}{2}$
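A quick numerical confirmation of the result (a sketch; by the Taylor expansions of the sines the error is $O(x^2)$, so it shrinks quickly):

```python
import math

# (sin 2x + 3x) / (4x + sin 6x) -> 1/2 as x -> 0
f = lambda x: (math.sin(2 * x) + 3 * x) / (4 * x + math.sin(6 * x))
for x in (1e-1, 1e-3, 1e-5):
    assert abs(f(x) - 0.5) < x   # the error shrinks at least linearly here
```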
Let $f\in L(A,\mu_x\otimes\mu_y)$ be a summable function on $A\subset X\times Y$ where $(X\times Y,\mu_x\otimes\mu_y)$ is the product of measure spaces $(X,\mu_x)$ and $(Y,\mu_y)$. Then Fubini's theorem says that $$\int_A f(x,y) d\mu_x\otimes\mu_y=\int_X\Bigg(\int_{A_x}f(x,y)d\mu_y\Bigg) d\mu_x=\int_Y\Bigg(\int_{A_y}f(x,y)d\mu_x\Bigg) d\mu_y.$$ From the proof in Kolmogorov-Fomin's Introductory Real Analysis I seem to understand, or misunderstand, that the integrands $\int_{A_x}f(x,y)d\mu_y$ and $\int_{A_y}f(x,y)d\mu_x$ exist everywhere. Nevertheless, further on in their original Элементы теории функций и функционального анализа [Elements of the Theory of Functions and Functional Analysis], when talking about the Fredholm operator, it is said (p. 461 here) that $\int_{[a,b]}K(s,t)\varphi(t)d\mu_t$, where $\varphi\in L_2[a,b]$ and $K\in L_2([a,b]^2)$, exists for almost all $s\in[a,b]$. Do $\int_{A_x}f(x,y)d\mu_y$ and $\int_{A_y}f(x,y)d\mu_x$ respectively exist for all $x\in X$ and $y\in Y$, or do they not? Thanks to anybody for any answer!
A new class of languages of infinite words is introduced, called the \emph{max-regular languages}, extending the class of $\omega$-regular languages. The class has two equivalent descriptions: in terms of automata (a type of deterministic counter automaton), and in terms of logic (weak monadic second-order logic with a bounding quantifier). Effective translations between the logic and automata are given. Extracting the Kolmogorov Complexity of Strings and Sequences from Sources with Limited Independence (Marius Zimand) An infinite binary sequence has randomness rate at least $\sigma$ if, for almost every $n$, the Kolmogorov complexity of its prefix of length $n$ is at least $\sigma n$. It is known that for every rational $\sigma \in (0,1)$, on one hand, there exist sequences with randomness rate $\sigma$ that cannot be effectively transformed into a sequence with randomness rate higher than $\sigma$ and, on the other hand, any two independent sequences with randomness... \emph{Partition functions}, also known as \emph{homomorphism functions}, form a rich family of graph invariants that contain combinatorial invariants such as the number of $k$-colourings or the number of independent sets of a graph and also the partition functions of certain ``spin glass'' models of statistical physics such as the Ising model. Building on earlier work by Dyer and Greenhill (2000) and Bulatov and Grohe (2005), we completely classify the computational complexity of partition functions. Our... Endowing computers with the ability to apply commonsense knowledge with human-level performance is a primary challenge for computer science, comparable in importance to past great challenges in other fields of science such as the sequencing of the human genome. The right approach to this problem is still under debate. Here we shall discuss and attempt to justify one approach, that of {\it knowledge infusion}. This approach is based on the view that the fundamental objective...
Message Ferrying is a mobility assisted technique for working around the disconnectedness and sparsity of Mobile ad hoc networks. One of the important questions which arise in this context is to determine the routing of the ferry, so as to minimize the buffers used to store data at the nodes in the network. We introduce a simple model to capture the ferry routing problem. We characterize {\em stable} solutions of the system and provide efficient approximation algorithms for the {\sc... Asymptotically Optimal Lower Bounds on the NIH-Multi-Party Information Complexity of the AND-Function and Disjointness (Andre Gronemeier) Here we prove an asymptotically optimal lower bound on the information complexity of the $k$-party disjointness function with the unique intersection promise, an important special case of the well known disjointness problem, and the AND$_k$-function in the number in the hand model. Our $\Omega(n/k)$ bound for disjointness improves on an earlier $\Omega(n/(k \log k))$ bound by Chakrabarti {\it et al.}~(2003), who obtained an asymptotically tight lower bound for one-way protocols, but failed to do so... The covering and boundedness problems for branching vector addition systems are shown complete for doubly-exponential time. The relation of constant-factor approximability to fixed-parameter tractability and kernelization is a long-standing open question. We prove that two large classes of constant-factor approximable problems, namely~$\textsc{MIN F}^+\Pi_1$ and~$\textsc{MAX NP}$, including the well-known subclass~$\textsc{MAX SNP}$, admit polynomial kernelizations for their natural decision versions. This extends results of Cai and Chen (JCSS 1997), stating that the standard parameterizations of problems in~$\textsc{MAX SNP}$ and~$\textsc{MIN F}^+\Pi_1$ are fixed-parameter tractable, and complements recent research on problems that do not admit...
In this paper, mostly consisting of definitions, we revisit the models of security protocols: we show that the symbolic and the computational models (as well as others) are instances of a same generic model. Our definitions are also parametrized by the security primitives, the notion of attacker and, to some extent, the process calculus. The game tree languages can be viewed as an automata-theoretic counterpart of parity games on graphs. They witness the strictness of the index hierarchy of alternating tree automata, as well as the fixed-point hierarchy over binary trees. We consider a game tree language of the first non-trivial level, where Eve can force that 0 repeats from some moment on, and its dual, where Adam can force that 1 repeats from some moment on. Both these... We study the management of buffers and storages in environments with unpredictably varying prices in a competitive analysis. In the economical caching problem, there is a storage with a certain capacity. For each time step, an online algorithm is given a price from the interval $[1,\alpha]$, a consumption, and possibly a buying limit. The online algorithm has to decide the amount to purchase from some commodity, knowing the parameter $\alpha$ but without knowing how the... We address the problem of alternating simulation refinement for concurrent timed games (\TG). We show that checking timed alternating simulation between \TG is \EXPTIME-complete, and provide a logical characterization of this preorder in terms of a meaningful fragment of a new logic, \TAMTLSTAR. \TAMTLSTAR is an action-based timed extension of standard alternating-time temporal logic \ATLSTAR, which allows quantifying over strategies where the designated player is not responsible for blocking time. While for full \TAMTLSTAR, model-checking \TG is undecidable, we... The group isomorphism problem asks whether two given groups are isomorphic or not.
Whereas the case where both groups are abelian is well understood and can be solved efficiently, very little is known about the complexity of isomorphism testing for nonabelian groups. In this paper we study this problem for a class of groups corresponding to one of the simplest ways of constructing nonabelian groups from abelian groups: the groups that are extensions of an... We introduce a new technique proving formula size lower bounds based on the linear programming bound originally introduced by Karchmer, Kushilevitz and Nisan (1995) and the theory of stable set polytope. We apply it to majority functions and prove their formula size lower bounds improved from the classical result of Khrapchenko (1971). Moreover, we introduce a notion of unbalanced recursive ternary majority functions motivated by a decomposition theory of monotone self-dual functions and give integrally... This paper gives a brief overview of computation models for data stream processing, and it introduces a new model for multi-pass processing of multiple streams, the so-called \emph{mp2s-automata}. Two algorithms for solving the set disjointness problem with these automata are presented. The main technical contribution of this paper is the proof of a lower bound on the size of memory and the number of heads that are required for solving the set disjointness problem with... The ambiguity of a nondeterministic finite automaton (NFA) $N$ for input size $n$ is the maximal number of accepting computations of $N$ for an input of size $n$. For all $k,r \in \mathbb{N}$ we construct languages $L_{r,k}$ which can be recognized by NFA's with size $k \cdot$poly$(r)$ and ambiguity $O(n^k)$, but $L_{r,k}$ has only NFA's with exponential size, if ambiguity $o(n^k)$ is required. In particular, a hierarchy for polynomial ambiguity is obtained, solving a long... 
For a given (terminating) term rewriting system one can often estimate its \emph{derivational complexity} indirectly by looking at the proof method that established termination. In this spirit we investigate two instances of the interpretation method: \emph{matrix interpretations} and \emph{context dependent interpretations}. We introduce a subclass of matrix interpretations, denoted as \emph{triangular matrix interpretations}, which induce polynomial derivational complexity, and establish tight correspondence results between a subclass of context dependent interpretations and restricted triangular matrix interpretations.... We describe a simple iterative method for proving a variety of results in combinatorial optimization. It is inspired by Jain's iterative rounding method (FOCS 1998) for designing approximation algorithms for survivable network design problems, augmented with a relaxation idea in the work of Lau, Naor, Salavatipour and Singh (STOC 2007) on designing an approximation algorithm for its degree bounded version. At the heart of the method is a counting argument that redistributes tokens from... Preface -- IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (2009) Ravi Kannan & K. Narayan Kumar This volume contains the proceedings of the 29th international conference on the Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2009), organized under the auspices of the Indian Association for Research in Computing Science (IARCS) at the Indian Institute of Technology, Kanpur, India. We report on our experiences in redesigning Scala's collection libraries, focussing on the role that type systems play in keeping software architectures coherent over time. Type systems can make software architecture more explicit but, if they are too weak, can also cause code duplication.
We show that code duplication can be avoided using two of Scala's type constructions: higher-kinded types and implicit parameters and conversions. Randomness extractors are efficient algorithms which convert weak random sources into nearly perfect ones. While such purification of randomness was the original motivation for constructing extractors, these constructions turn out to have strong pseudorandom properties which have found applications in diverse areas of computer science and combinatorics. We will highlight some of the applications, as well as recent constructions achieving near-optimal extraction. Priced timed automata are emerging as useful formalisms for modeling and analysing a broad range of resource allocation problems. In this extended abstract, we highlight recent (un)decidability results related to priced timed automata as well as point to a number of open problems. In a context of $\omega$-regular specifications for infinite execution sequences, the classical B\"uchi condition, or repeated liveness condition, asks that an accepting state is visited infinitely often. In this paper, we show that in a probabilistic context it is relevant to strengthen this infinitely often condition. An execution path is now accepting if the \emph{proportion} of time spent on an accepting state does not go to zero as the length of the path goes to... If computational complexity is the study of what makes certain computational problems inherently difficult to solve, an important contribution of descriptive complexity in this regard is the separation it provides between the specification of a decision problem and the structure against which this specification is checked. The formalisation of these two aspects leads to tools for studying them as sources of complexity, and on the one hand leads to results in the characterisation of complexity...
We consider concurrent systems that can be modelled as $1$-safe Petri nets communicating through a fixed set of buffers (modelled as unbounded places). We identify a parameter $\ben$, which we call ``benefit depth'', formed from the communication graph between the buffers. We show that for our system model, the coverability and boundedness problems can be solved in polynomial space assuming $\ben$ to be a fixed parameter, that is, the space requirement is $f(\ben)p(n)$, where $f$...
Impact Factor 2019: 0.808 The journal Asymptotic Analysis fulfills a twofold function. It aims at publishing original mathematical results in the asymptotic theory of problems affected by the presence of small or large parameters on the one hand, and at giving specific indications of their possible applications to different fields of natural sciences on the other hand. Asymptotic Analysis thus provides mathematicians with a concentrated source of newly acquired information which they may need in the analysis of asymptotic problems. Authors: Anders, I. Article Type: Research Article Abstract: We prove the existence of non‐decaying real solutions of the (2+1)‐dimensional Gardner equation, vanishing when x\to +\infty . We give asymptotic formulae for the solutions in terms of infinite series of asymptotic solitons with curved lines of constant phase and varying amplitude for \vert t\vert \to\infty . Citation: Asymptotic Analysis, vol. 19, no. 3‐4, pp. 185-207, 1999 Authors: Kerler, Charlotte Article Type: Research Article Abstract: We consider self‐adjoint, strictly elliptic, short‐range perturbations of the Laplacian with variable coefficients, that operate on square integrable functions defined in an exterior domain. According to the limiting absorption principle, the resolvent is extended to the positive real line, which is the continuous spectrum of these operators. In suitable weighted spaces, where the weight also depends on the grade of differentiation, we show that the extended resolvent is strongly differentiable w.r.t. the spectral parameter. Moreover, we obtain estimates for the derivatives of the extended resolvent in these weighted spaces, which are locally uniform in the spectral parameter. Citation: Asymptotic Analysis, vol. 19, no. 3‐4, pp.
209-232, 1999 Article Type: Research Article Abstract: Scattering theory tells how solutions of one abstract Schrödinger equation (of the form {\rm i}({\rm d}u/{\rm d}t)=Hu with H=H^*) are asymptotic to solutions of another (in principle simpler) abstract Schrödinger equation. We extend this theory to inhomogeneous problems of the form {\rm i}({\rm d}u)/({\rm d}t)=Hu+h(t) , with special emphasis on factored equations of the form \prod_{j=1}^N (({\rm d}/{\rm d}t)-{\rm i}A_j)u(t) = h(t), where A_1,\ldots, A_N are commuting selfadjoint operators. As a special case, corresponding to N=4 and two‐space scattering, we conclude that every solution u(\cdot, t) of the inhomogeneous elastic wave equation in the exterior of a bounded star shaped obstacle is of the form u=v+w+z, where v(\cdot, t) solves the free (homogeneous) elastic wave equation with no obstacle, w(\cdot,t) is determined by the (rather general) inhomogeneity, and z(\cdot,t)={\rm o}(1) as t\to \pm \infty. Some of the results are presented in a more general Banach space context. Keywords: Scattering, d'Alembert's formula, factored equations, elastic waves, Duhamel's principle, unitary groups, (C_0) semigroups, asymptotics Citation: Asymptotic Analysis, vol. 19, no. 3‐4, pp. 233-252, 1999 Article Type: Research Article Abstract: We study the resonances associated to the transmission problem for a strictly convex obstacle provided that the speed of propagation of the waves in the interior of the obstacle is strictly greater than the speed in the exterior. We prove that there are no resonances in a region of the form {\rm Im}\,z\leqslant C_1|z|^{-1},\ |{\rm Re}\,z|\geqslant C_2>0 . Using this we obtain some uniform estimates on the decay of the local energy. Citation: Asymptotic Analysis, vol. 19, no. 3‐4, pp. 253-265, 1999 Article Type: Research Article Citation: Asymptotic Analysis, vol. 19, no. 3‐4, pp.
267-288, 1999 Authors: Tate, Tatsuya Article Type: Research Article Abstract: The purpose of this paper is to estimate the rate of decay of the off‐diagonal asymptotics in Sunada (preprint) in the case where the Hamilton flow is of Anosov type. To this end, we use the central limit theorem for transitive Anosov flows (Sinai, Soviet Math. Dokl. 1 (1960), 983–987; Ratner, Israel J. Math. 16 (1973), 181–197; Zelditch, Comm. Math. Phys. 160 (1994), 81–92). It is also shown that if the Hamilton flow has homogeneous Lebesgue spectrum, then the measure {\rm d}m_{A} associated with a pseudodifferential operator A , which is introduced by Zelditch (J. Funct. Anal. 140 (1996), 68–86), is absolutely continuous with respect to Lebesgue measure. Citation: Asymptotic Analysis, vol. 19, no. 3‐4, pp. 289-296, 1999 Authors: Anné, Colette Article Type: Research Article Abstract: We consider the equation of linear elasticity on a general Riemannian manifold with boundary, and prove a formula relating the counting functions of the Neumann and the Dirichlet problem to the counting function of the Dirichlet‐to‐Neumann operator. Namely, the difference of the two counting functions at \alpha equals the number of negative eigenvalues of the Dirichlet‐to‐Neumann operator related to the resolvent at \alpha . We then apply this formula to bounded domains of Riemannian symmetric spaces of non‐compact type in the homogeneous case of elasticity (i.e., when the Lamé functions \lambda , \mu are constant). The conclusion is that the difference of the two counting functions is greater than or equal to 1 under one of the following hypotheses: either the rank of the symmetric space is greater than or equal to 2, or the rank is 1 but the dimension of the nilpotent part is smaller than {8\mu}/({\lambda+2\mu}) . The Euclidean space is an example of the first case, but even in that situation the conclusion we draw is new.
Keywords: Elasticity, boundary conditions, spectrum, symmetric spaces Citation: Asymptotic Analysis, vol. 19, no. 3‐4, pp. 297-316, 1999 Article Type: Research Article Citation: Asymptotic Analysis, vol. 19, no. 3‐4, pp. 317-341, 1999
You can use the so-called "binomial encoding", described in one of my answers on math.stackexchange. You have to consider, though, whether it will be worthwhile. The naive encoding takes $n \log_2 k$ bits, and the best encoding saves about $O(\log n)$ bits (the best encoding uses $\log_2 \frac{n!}{f!^k}$ bits). Plug in your actual numbers to see what kind of saving you can expect. Edit: I can't find my supposed post on math.stackexchange, so here is a summary of binomial coding (or rather multinomial coding), as it applies in this case. For a vector $x$ of length $k$, define$$ w(x) = \frac{(\sum_i x_i)!}{\prod_i x_i!}. $$Note that for $x \neq 0$,$$ w(x) = \sum_{i\colon x_i > 0} w(x-e_i), $$where $e_i$ is the $i$th basis vector. For example, if $k = 2$ and $x_1,x_2>0$ then$$ w(x_1,x_2) = w(x_1-1,x_2) + w(x_1,x_2-1). $$This is just Pascal's identity$$ \binom{x_1+x_2}{x_1} = \binom{x_1+x_2-1}{x_1-1} + \binom{x_1+x_2-1}{x_1}. $$ For every vector $x$ and every $i$ such that $x_i > 0$, define$$ w(x,i) = \sum_{j < i\colon x_j > 0} w(x-e_j). $$Given a sequence $\sigma$ of letters in $\{1,\ldots,k\}$, let $H(\sigma)$ be its histogram, a vector of length $k$. For a non-empty word $\sigma \neq \epsilon$, let $\sigma_1$ be the first symbol, and $\sigma_{>1}$ be the rest, so that $\sigma = \sigma_1 \sigma_{>1}$. We are now ready to define the encoding:$$ C(\sigma) = \begin{cases} w(H(\sigma),\sigma_1) + C(\sigma_{>1}) & \sigma \neq \epsilon \\ 0 & \sigma = \epsilon. \end{cases} $$$C(\sigma)$ is an integer between $0$ and $\frac{n!}{f!^k}-1$. Here is an example, with $k=f=2$. Let us encode $\sigma = 0101$:$$\begin{align*}C(0101) &= w((2,2),0) + w((1,2),1) + w((1,1),0) + w((0,1),1) \\ &=0 + w((0,2)) + 0 + 0 = 1.\end{align*}$$You can check that the inverse images of $0,1,2,3,4,5$ are$$0011,0101,0110,1001,1010,1100.$$These are just all the solutions in increasing lexicographic order. How do we decode? By reversing the encoding procedure. We know the initial histogram $H_1$.
Given an input $c_1$, we find the unique symbol $\sigma_1$ such that $w(H_1,\sigma_1) \leq c_1 < w(H_1,\sigma_1+1)$. We then put $H_2 = H_1 - e_{\sigma_1}$, $c_2 = c_1 - w(H_1,\sigma_1)$, and continue. To test your understanding, try to decode $0101$ back from its encoding $1$.
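The encoding and decoding described above can be transcribed directly into Python. This is just a sketch of the definitions (the function names are mine), with the worked example $C(0101)=1$ as a check:

```python
from math import factorial

def w(x):
    # multinomial coefficient: (sum_i x_i)! / prod_i x_i!
    total = factorial(sum(x))
    for xi in x:
        total //= factorial(xi)
    return total

def encode(sigma, k):
    # C(sigma): at each step add w(H, sigma_1), the sum of w(H - e_j) over j < sigma_1
    hist = [sigma.count(s) for s in range(k)]
    code = 0
    for s in sigma:
        for j in range(s):
            if hist[j] > 0:
                hist[j] -= 1
                code += w(hist)   # w(H - e_j)
                hist[j] += 1
        hist[s] -= 1              # consume the symbol: H <- H - e_s
    return code

def decode(code, hist):
    # reverse the procedure: pick the symbol whose cumulative weight brackets the code
    hist = list(hist)
    out = []
    for _ in range(sum(hist)):
        acc = 0
        for s in range(len(hist)):
            if hist[s] == 0:
                continue
            hist[s] -= 1
            ws = w(hist)          # w(H - e_s)
            hist[s] += 1
            if code < acc + ws:
                out.append(s)
                code -= acc
                hist[s] -= 1
                break
            acc += ws
    return out

print(encode([0, 1, 0, 1], 2))   # 1, matching the worked example
print(decode(1, [2, 2]))         # [0, 1, 0, 1]
```

Decoding the codes $0$ through $5$ for the histogram $(2,2)$ reproduces the six words $0011,0101,\dots,1100$ in lexicographic order, as claimed above.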
The classical open mapping theorem for Banach spaces states that if $T:X \to Y$ is a continuous surjective linear map, then it is open. I have attempted to essentially "adapt" the proof for Lie groups: Let $G,H$ be connected Lie groups (embedded in $\mathbb R^n$; I've used second-countability and completeness so far). If $\phi: G \to H$ is a surjective Lie group homomorphism, then the image of an open neighborhood $U$ of the identity in $G$ is again a neighborhood with nonempty interior about $1 \in H$. Pick some neighborhood $U$ of $1 \in G$. We want to pick some open ball $V$ in $U$ so that the closure of $V$ is compact and contained in $U$. Hence, $$G=\bigcup_{x \in G} x V,$$ but we can choose countably many such $x \in G$. We can call this collection $\{x_n\}$, and consider the image of $G$ under $\phi$, which is all of $H$. In particular, $H=\bigcup_{n \in \mathbb N}\phi(x_n \overline{V})$. But since $\phi$ is a homomorphism, this is the same thing as considering a whole bunch of $\phi(x_n)\phi(\overline{V})$, whose images are compact and hence closed. By Baire category, one of these guys has nonempty interior, say $\phi(x_n)\phi(\overline{V})$. But then $\phi(\overline{V})$ has nonempty interior, since multiplication by an element is a homeomorphism. But then $\phi(U)$ has nonempty interior. From here, one can finish by first showing that this implies that there is a basis $\mathcal U$ about the origin such that the image of each element is a neighborhood of the identity in $H$, which is sufficient to show the main theorem, since we can then use the homomorphism property to show that for every open set $W$ about $w \in G$, we have $f(w) \in \mathrm{Int}(f(W))$, proving the theorem. Is my proof correct? I've seen one reference, "An Open Mapping Theorem for Topological Groups", but this ultimately just redirects to a source that I cannot find. Is there a better way to show this theorem?
I'm not looking for maximal generality (weakest hypotheses) just yet, only to understand why this theorem might be true. A sufficient answer in my eyes is either a convincing argument for why my proof fails (including something in the way of "is this idea recoverable?") or some affirmation that it is indeed correct.
The value of cos of 30 degrees can be derived mathematically in three ways. One of them is a trigonometric approach and the other two are geometric methods. You must know the direct relation between the sides of a right triangle when its angle is $30$ degrees. According to the properties of a right triangle, the length of the opposite side is half the length of the hypotenuse when the angle of the right triangle is $\dfrac{\pi}{6}$. The exact value of $\cos{(30^\circ)}$ is evaluated theoretically on the basis of this property. The lengths of the opposite side and the hypotenuse are known, but the length of the adjacent side is unknown in this case. It is essential to find it in order to find the value of $\cos{\Big(\dfrac{\pi}{6}\Big)}$. So, use the Pythagorean theorem to find the length of the adjacent side. ${OP}^2 \,=\, {PQ}^2+{OQ}^2$ In $\Delta QOP$, the lengths of the hypotenuse and the opposite side are $d$ and $\dfrac{d}{2}$ respectively. $\implies d^2 = {\Big(\dfrac{d}{2}\Big)}^2+{OQ}^2$ $\implies {OQ}^2 = d^2-\dfrac{d^2}{4}$ $\implies {OQ}^2 = \dfrac{3d^2}{4}$ $\implies OQ = \sqrt{\dfrac{3d^2}{4}} = \dfrac{\sqrt{3}d}{2}$ $\implies \dfrac{OQ}{d} = \dfrac{\sqrt{3}}{2}$ In this case, $d$ represents the length of the hypotenuse $(OP)$. $\implies \dfrac{OQ}{OP} = \dfrac{\sqrt{3}}{2}$ $\implies \dfrac{Length \, of \, Adjacent \, side}{Length \, of \, Hypotenuse} = \dfrac{\sqrt{3}}{2}$ The angle of $\Delta QOP$ is $\dfrac{\pi}{6}$ and this ratio represents $\cos{(30^\circ)}$ as per the definition of the trigonometric ratio cosine.
$\therefore \,\,\, \cos{(30^\circ)} = \dfrac{\sqrt{3}}{2}$ Therefore, it is derived that the exact value of cos of $30$ degrees in fraction form is $\dfrac{\sqrt{3}}{2}$ and its value in decimal form is $0.8660254037\ldots$ $\cos{(30^\circ)} = \dfrac{\sqrt{3}}{2} = 0.8660254037\ldots$ The value of cosine of $\dfrac{\pi}{6}$ can also be calculated geometrically by constructing a right triangle with a $30$-degree angle using geometric tools. The geometric construction produces a right triangle, called $\Delta HGI$, with an angle of $30$ degrees. The value of $\cos{\Big(\dfrac{\pi}{6}\Big)}$ can then be calculated from this triangle. $\cos{(30^\circ)} = \dfrac{Length \, of \, Adjacent \, side}{Length \, of \, Hypotenuse}$ $\implies \cos{(30^\circ)} \,=\, \dfrac{GI}{GH}$ In this example, the length of the hypotenuse is taken to be $7.5 \, cm$ but the length of the adjacent side is unknown. However, it can be measured with a ruler. Measure the length of the adjacent side and you will observe that it is approximately $6.5 \, cm$. Now, calculate the value of $\cos{(30^\circ)}$: $\implies \cos{(30^\circ)} \,=\, \dfrac{GI}{GH} = \dfrac{6.5}{7.5}$ $\,\,\, \therefore \,\,\,\,\,\, \cos{(30^\circ)} \,=\, 0.8666666666\ldots$ The value of $\cos{(30^\circ)}$ can also be evaluated exactly in trigonometry by the cos squared identity. The exact value of $\cos{\Big(\dfrac{\pi}{6}\Big)}$ is calculated by substituting the value of sin 30 degrees into this formula.
$\cos{(30^\circ)} \,=\, \sqrt{1-\sin^2{(30^\circ)}}$ $\implies \cos{(30^\circ)} \,=\, \sqrt{1-{\Big(\dfrac{1}{2}\Big)}^2}$ $\implies \cos{(30^\circ)} \,=\, \sqrt{1-\dfrac{1}{4}}$ $\implies \cos{(30^\circ)} \,=\, \sqrt{\dfrac{3}{4}}$ $\,\,\, \therefore \,\,\,\,\,\, \cos{(30^\circ)} \,=\, \dfrac{\sqrt{3}}{2}$ According to both the theoretical geometric and trigonometric methods, it is proved that the value of $\cos{\Big(\dfrac{\pi}{6}\Big)}$ is $\dfrac{\sqrt{3}}{2}$, whose approximate value is $0.8660254037\ldots$ The value of $\cos{(30^\circ)}$ obtained by the practical geometric method is $0.8666666666\ldots$ Comparing the two, the practical value differs slightly from the value obtained by the theoretical and trigonometric approaches. This is because the length of the adjacent side was only measured approximately. However, the approximate values of $\cos{\Big(\dfrac{\pi}{6}\Big)}$ agree.
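As a quick numerical cross-check of the derivations above (a sketch; the variable names are mine), both the Pythagorean computation and the cos squared identity can be reproduced in a few lines of Python:

```python
import math

# Right triangle with hypotenuse d and a 30-degree angle:
# the opposite side is d/2, so the adjacent side follows from Pythagoras.
d = 1.0
opposite = d / 2
adjacent = math.sqrt(d**2 - opposite**2)       # = d*sqrt(3)/2

cos30_geometric = adjacent / d                 # adjacent / hypotenuse
cos30_identity = math.sqrt(1 - math.sin(math.pi / 6)**2)

print(cos30_geometric)                                       # 0.8660254037844386
print(math.isclose(cos30_geometric, math.sqrt(3) / 2))       # True
print(math.isclose(cos30_identity, math.cos(math.pi / 6)))   # True
```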
Suppose $S: K \times M \to T$ is a secure MAC. ($K$ = key space, $M$ = message space, $T = \{0,1\}^n$ = tag space.) $S$ being secure means that no matter what messages $m_1,...,m_q$ we throw at $S$ to get back tags $t_1,...,t_q$: $\{ (m_1,t_1),...,(m_q,t_q) \}$ we cannot subsequently find another message-tag pair $(m,t)$ that is not in the above set. In other words, we cannot find a forgery $(m,t)$. Let's create a new MAC $S'$ based on $S$: $S'(k,m) = (S(k,m), S(k,0^n))$ It is clear by example that $S'$ is not secure. The attack is simple: throw at $S'$ some $m \neq 0^n$, and extract the value $s = S(k,0^n)$ out of the tag. Then, the message $0^n$ and tag $(s,s)$ is our forgery. Here is the paradox. I believe I can prove that if $S'$ can be forged, then so can $S$. This would prove that if $S$ is secure (as we assumed), $S'$ is secure! There must be something wrong with my proof, so my question is: what is wrong with the following proof? Assume $S'$ can be forged. In other words, we have thrown at $S'$ messages $m_1,...,m_q$ and gotten back the tags $t_1 = S'(k,m_1), ..., t_q = S'(k,m_q)$ and subsequently obtained a forgery message-tag pair $(m, t = S'(k,m))$ such that $(m, S'(k,m)) \notin$ $\{ (m_1,S'(k,m_1)),...,(m_q,S'(k,m_q)) \} $ In other words, $(m, (S(k,m),S(k,0^n))) \notin$ $\{ (m_1,(S(k,m_1),S(k,0^n))),...,(m_q,(S(k,m_q),S(k,0^n)))\}$ (*) Then it seems obvious that the following is our forgery on $S$: $(m, S(k,m)) \notin$ $\{ (m_1,S(k,m_1)),...,(m_q,S(k,m_q)) \} $ because $(m, S(k,m))$ being an element in that set would contradict (*). That's the end of my proof. There must be something wrong with it. The question is: what?
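To make the forgery attack on $S'$ concrete, here is a toy instantiation in Python. Using HMAC-SHA256 as the underlying MAC $S$ is my own choice purely for illustration, not something from the question:

```python
import hmac, hashlib, os

N = 32  # tag/block length in bytes; 0^n is modeled as N zero bytes

def S(k, m):
    # stand-in for the abstract secure MAC S (HMAC-SHA256, chosen for illustration)
    return hmac.new(k, m, hashlib.sha256).digest()

def S_prime(k, m):
    # the modified MAC from the question: S'(k, m) = (S(k, m), S(k, 0^n))
    return (S(k, m), S(k, b"\x00" * N))

def verify_prime(k, m, tag):
    expected = S_prime(k, m)
    return hmac.compare_digest(tag[0], expected[0]) and \
           hmac.compare_digest(tag[1], expected[1])

k = os.urandom(32)

# The attack: query any m != 0^n and read s = S(k, 0^n) out of the second tag
# component...
t = S_prime(k, b"hello")
s = t[1]
# ...then (0^n, (s, s)) verifies even though 0^n was never queried to S'.
print(verify_prime(k, b"\x00" * N, (s, s)))  # True
```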
Logarithmic spiral
Approximate solution
The logarithmic spiral is self-similar. As a consequence, the tangents make the same angle with the position vector at each point of the spiral. You can use that to draw a reasonable approximation of the spiral simply by taking unit steps in the direction of the current tangent. Perhaps it's easiest to look at this in terms of complex numbers: $$z(\theta)=ae^{(i+b)\theta}=a\left(e^{i+b}\right)^\theta\\\frac{\mathrm dz}{\mathrm d\theta} = ae^{(i+b)\theta}(i+b)$$ So the direction vector is the position vector multiplied by $(i+b)$, which means the angle between them is $$\alpha=\arctan\frac{1}{b}=\operatorname{arccot}b $$ So the above approach would be something like this (written in sage code):

def logspiral1(s):
    # step direction: position vector rotated by the constant angle alpha
    hypot = sqrt(1 + b^2)
    sinA = 1/hypot
    cosA = b/hypot
    x = 1
    y = 0
    for c in range(100):
        yield (x, y)
        f = s/sqrt(x^2 + y^2)   # normalize, then scale to step length s
        dx = f*(cosA*x - sinA*y)
        dy = f*(sinA*x + cosA*y)
        x += dx
        y += dy

This looks like a log spiral if taken all by itself, but since errors accumulate it will diverge considerably from the parametric equation.
Exact solution
To avoid that, you should probably compute the new $\theta$ in terms of the distance you want to cover. Doing so accurately would mean integrating curve length.
$$\int_{\theta_1}^{\theta_2}\left\lvert ae^{(i+b)\theta}(i+b)\right\rvert\,\mathrm d\theta$$ According to Wolfram Alpha this can be computed using $$\int\left\lvert ae^{(i+b)\theta}(i+b)\right\rvert\;\mathrm d\theta=\frac{\lvert a(b+i)\rvert}{b}e^{b\theta}$$ so you want $$s=\frac{\lvert a(b+i)\rvert}{b}\left(e^{b\theta_2}-e^{b\theta_1}\right) \\e^{b\theta_2}=\frac{sb}{\lvert a(b+i)\rvert}+e^{b\theta_1} \\\theta_2=\frac1b\log\left(\frac{sb}{\lvert a(b+i)\rvert}+e^{b\theta_1}\right)$$

def logspiral2(s):
    # advance theta so that consecutive points are exactly arc length s apart
    theta = 0
    h = s*b/(a*sqrt(1 + b^2))   # = s*b/|a(b+i)| for a > 0
    for c in range(100):
        yield (a*cos(theta)*exp(b*theta), a*sin(theta)*exp(b*theta))
        theta = log(h + exp(b*theta))/b

The first, approximate approach is drawn in red in the following figure, the second, exact one in green. The continuous line behind the green points is a simple parametric plot of the exact solution.
Archimedean spiral
Attempted exact solution
So let's do the same for the Archimedean spiral, again using Wolfram Alpha. $$z(\theta)=r\theta e^{i\theta} \\\frac{\mathrm dz}{\mathrm d\theta} = r(1+i\theta)e^{i\theta} \\\int\left\lvert r(1+i\theta)e^{i\theta}\right\rvert\;\mathrm d\theta = \int r\sqrt{1+\theta^2}\;\mathrm d\theta=\frac r2\left(\theta\sqrt{1+\theta^2}+\operatorname{arsinh}\theta\right)$$ That last term is a real beast: it includes $\theta$ both inside and outside the transcendental function $\operatorname{arsinh}$ (the inverse hyperbolic sine), which means we probably won't be able to solve this expression for $\theta$ except perhaps numerically.
Approximate solution
You could again do the thing we did above and take unit steps in the tangent direction.
Since the tangent direction angle depends on $\theta$, you'd have to compute it:

def archspiral1(s):
    x = 0
    y = 0
    for c in range(100):
        yield (x, y)
        theta = float(sqrt(x^2 + y^2)/r)   # recover theta from the radius
        f = float(s/sqrt(1 + theta^2))
        dx = f*(1*x - theta*y)
        dy = f*(theta*x + 1*y)
        if theta == 0:
            dx = s   # at the origin the update degenerates, so step along x
        x += dx
        y += dy

Exact curve but approximate distances
Or you could consider the change in $\theta$ induced by such a step in the tangent direction. If you look at the derivative, you notice that the radial component of that distance is $r$ while the tangential (to the circle) component is $r\theta$. So you have something like $$s = x\sqrt{1+\theta^2}\qquad r\left(\theta_2-\theta_1\right) = x \\\theta_2=\theta_1+\frac s{r\sqrt{1+\theta_1^2}}$$

def archspiral2(s):
    theta = s/2
    for c in range(100):
        yield (r*theta*cos(theta), r*theta*sin(theta))
        theta += s/(r*sqrt(1 + theta^2))
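As a sanity check on the exact logarithmic-spiral stepping, here is a plain-Python port of logspiral2 (the parameter values are my own examples). Consecutive points are spaced exactly arc length $s$ apart along the curve, so the straight-line chords between them should fall short of $s$ only by $O(s^3)$:

```python
import math

a, b = 1.0, 0.2   # example spiral parameters

def logspiral2(s, n=200):
    # points on z = a*e^((i+b)*theta), spaced exactly arc length s apart
    theta = 0.0
    h = s * b / (a * math.sqrt(1 + b * b))   # = s*b/|a(b+i)| for a > 0
    pts = []
    for _ in range(n):
        r = a * math.exp(b * theta)
        pts.append((r * math.cos(theta), r * math.sin(theta)))
        theta = math.log(h + math.exp(b * theta)) / b
    return pts

pts = logspiral2(0.05)
chords = [math.dist(p, q) for p, q in zip(pts, pts[1:])]
# every chord is within a tiny tolerance of the arc-length step s = 0.05
print(max(abs(c - 0.05) for c in chords) < 1e-4)  # True
```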
I want to pick two papers on the arXiv today. In the first one, authors from the Indian Association for the Cultivation of Science (I couldn't resist writing the cute name of this institution) point out that Belle-II, a Japanese B-factory experiment that began to take limited data one year ago and has been taking all the data since early 2019, may observe a smoking gun for a class of supersymmetric theories that recently looked very intriguing to many physicists, for lots of reasons. They are models with the extra \(L_\mu-L_\tau\) gauge symmetry which may be good enough to explain the masses of generations of leptons, dark matter, the baryon asymmetry, and the discrepancy in the muon magnetic moment. Belle-II could see the reaction\[ e^+ + e^- \to \gamma Z' \to \gamma+E_{\rm miss} \] where \(Z'\) is the new gauge boson, \(E_{\rm miss}\) denotes missing energy, and the reaction is possible due to the kinetic mixing of \(Z'\) with the photon. Looking at some of the nearly-highest-energy boxes, Belle-II could discover the \(Z'\) boson even if it were too heavy to be accessible by the LHC. This is an example of a cheaper experiment that could beat the "brute force energy frontier" colliders such as the LHC or FCC – but the price you pay is that such reactions are very special and you must hope that a rather particular scenario is picked by Mother Nature; otherwise you see nothing. Now, a fascinating Slovenian-UPenn-SmallerBoston-Asian hep-th stringy paper. It seems that, in order not to be beaten by savages on the street (a category that recently welcomed Sheldon Lee Glashow again, who's been hibernating as a string theory hater for over 20 years), Cvetič and collaborators say "exact chiral spectrum of the Standard Model" when they actually mean the "exact chiral spectrum of the Minimal Supersymmetric Standard Model". Well, I trust my fists so I still think it's right to brag about the supersymmetric adjective.
And yes, they find a way to construct one quadrillion F-theory models, a natural class that automatically produces the right spectrum of the MSSM at low energies. Recall that in the 1980s, people found 5 originally separated superstring theories in 10 spacetime dimensions. Type I, IIA, IIB, heterotic \(E_8\times E_8\), and heterotic \(SO(32)\). In the mid 1990s, it was realized that all of them are connected with each other and their limits with a strong string coupling constant, previously considered to be a mysterious empire full of dragons, got fully understood. In particular, type I and \(SO(32)\) heterotic strings were found to be S-dual to each other. The strong coupling dual of one produces the weak coupling limit of the other. On the other hand, type IIA string theory and \(E_8\times E_8\) heterotic strings produce something new in the strong coupling limit – an eleven-dimensional theory. In the two cases, the new 11th dimension looks like a circle or a line interval with two domain walls carrying the \(E_8\), respectively. The strong coupling limit of type IIB string theory, the last, fifth one, turned out to be the same type IIB string theory. In fact, this type IIB string theory may be visualized as a 12-dimensional theory, F-theory, with two tiny dimensions compactified on the torus \(T^2\). The shape (ratio of sides and the tilting angle) is called the complex structure \(\tau\) and values of \(\tau\) related by the \(SL(2,\ZZ)\) group are geometrically equivalent. In particular, the exchange of the "two sides" of the two-torus or the \(\tau\to-1/\tau\) operation is responsible for the self-S-duality, the equivalence between the weak and strong limits from the beginning of this paragraph. Because \(\tau\) is a complex scalar field in type IIB string theory (the dilaton plus the RR-axion), the shape of the two-torus may be a variable function of the 10 spacetime dimensions. 
In effect, type IIB vacua with the variable dilaton and axion may be geometrized as "more geometric", 12-dimensional geometries. The 12-dimensional spacetime has toroidal, two-dimensional fibers, and a 10-dimensional base. Out of these 10 dimensions, 3+1 remain large and most people know them. The remaining 6 are compactified. So F-theory compactifications – a more geometric way to describe type IIB string vacua – are mostly given by an 8-dimensional geometry that may be described as a 2-toroidal fibration over a 6-dimensional base. Note that F-theory stands for Father Theory. If you are a progressive who believes that men and fathers are politically incorrect, please believe me that F-theory actually stands for a Fudged-up Frigid Feminist instead. OK, the full 8-dimensional space – the fibration – has to be a Calabi-Yau manifold (a four-fold where four counts the complex dimensions which is appropriate because all the geometers doing this stuff are truly complex ones). Fine. Sometimes the 8-dimensional topologies produce the MSSM spectrum. So far some "large families" with this intriguing property were found whose number of elements was below one million or so. Cvetič et al. increase this record by a factor of billions. They suddenly construct a quadrillion MSSMs. And I really mean three-generation models with the gauge group \([SU(3)\times SU(2)\times U(1)] / \ZZ_6\). And yes, this F-theory research is quite a precise and rigorous branch of physics close to mathematics which is why the folks generally don't overlook the \(\ZZ_6\) quotient. Things must work and do work really precisely here. If the likes of Sheldon Glashow got one million dollar for a single Standard Model, the laws of proportionality imply that Cvetič et al. deserve sextillions of dollars. What are their F-theory geometries? They basically say that all fibrations are good enough and only the base has to obey some condition. 
The three-fold base \(B_3\) must have a non-rigid anti-canonical irreducible divisor obeying their equation 16, a D3-brane tadpole cancellation of a sort\[ n_{\rm D3} = 12+ \frac 58 \overline {\mathcal K}^3 -\frac{45}{2 \overline {\mathcal K}^3 }\in \{0,1,2,\dots\} \] It's not even a very constraining equation. It just says that some number must be a non-negative integer. With this single condition, you get the exact spectrum of the MSSM. Isn't it amazing? A subset of these bases are "weak Fano toric threefolds" – please don't assume that I am as familiar with those as e.g. with "beer" – and those are encoded by 3D reflexive polytopes \(\Delta\). 4319 good enough polytopes are known for this construction. Great. Isn't it amazing? The spectrum of the Standard Model or MSSM looks rather ugly and artificial, and we normally describe it by many isolated sentences or many conditions. Cvetič et al. may derive them more or less from a single condition constraining an 8-dimensional geometry. I think it's obvious that this may be rather important – and it could be called progress. The crackpot movement recently attacked Ptolemy, his friends (including Nima Arkani-Hamed), and the epicycles again. I think that only brainwashed laymen describe "epicycles" as something that should be hated. Epicycles were an old version of "Fourier analysis" that described astronomy in the precise, state-of-the-art way from ancient Greece to the era of Kepler. They worked great and in some sense, they were "right" because they only claimed to produce a good enough description, not an "explanation" of why the planetary orbits are what they are. One might argue that because there's something "more instinctive" about the epicycles relative to the ellipses constrained by Kepler's laws, the epicycles and epicycles on epicycles were an unavoidable chapter in the history of science. All people who have drawn patterns with the Inspiro/Spirograph as kids (I did) must understand where I am coming from.
So yes, I think that people who just mock them are brainwashed morons who don't get it. Nevertheless, Kepler's description with ellipses was progress because it made a real explanation of "why those orbits are what they are", an explanation given by Newton's laws, more accessible. But I want to say one more thing. The crackpots love to present the "epicycles and epicycles on epicycles" as something terrible – and I just discussed why this demonization is a sign of one's misunderstanding. But these organized crackpots also love to say that "epicycles and epicycles on epicycles" are analogous to string theory. Both statements are wrong and they are just totally indefensible insults that don't work at all. In fact, the more sensible analogy would be exactly the opposite one. Epicycles are analogous to quantum field theories with manually added (fields and) operators in the Lagrangian. On the other hand, epicycles were replaced with... Kepler's ellipses. And indeed, the elaborate packages of the Standard Model's operators may also be replaced with... elliptically fibered 8-dimensional manifolds. Even the word "ellipses" appears in both cases. (In the complex geometry jargon, two-tori are called elliptic curves because the elliptic functions appear there, and those had previously appeared in some analyses of ellipses.) Ellipses normally represent progress because they elevate us from some man-made ad hoc description that just happens to be precise enough to a deeper, more geometrically natural understanding of what's going on – to a deeper construction that has fewer independently moving parts, so to say. The transition from Ptolemy to Kepler seems exactly analogous to the transition from effective quantum field theories to the geometric structures in F-theory or string theory in general! That's why intelligent people work hard to think in terms of string theory – and why people saying bad things about string theory are just stupid scum. And that's the memo.
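A footnote to the epicycle–Fourier remark above: a single epicycle riding on a deferent, with the two circular motions run in opposite senses, already traces an exact origin-centered ellipse, because \(z(t)=Re^{i\omega t}+re^{-i\omega t}\) is just a two-term Fourier series. A minimal check with toy numbers of my own:

```python
import cmath

def epicycle(R, r, omega, t):
    """Deferent of radius R plus one counter-rotating epicycle of radius r."""
    return R * cmath.exp(1j * omega * t) + r * cmath.exp(-1j * omega * t)

R, r, omega = 3.0, 1.0, 2.0
for k in range(100):
    z = epicycle(R, r, omega, 0.07 * k)
    # the orbit satisfies the ellipse equation with semi-axes R+r and R-r
    assert abs((z.real / (R + r)) ** 2 + (z.imag / (R - r)) ** 2 - 1.0) < 1e-9
```

Adding further counter- and co-rotating terms is literally adding Fourier modes, which is why stacked epicycles could fit any periodic orbit to the accuracy of the day.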
M. Oren and S. K. Nayar proposed a reflectance model of rough diffuse surfaces and two approximate functions (we call them "full O-N" and "qualitative O-N" respectively in this article) in 1993 [link] [link]. Full O-N approximates the model very well, but it is complex and uses many computationally expensive functions. Qualitative O-N, which is widely used in the CG community, is simple, but it has some problems. For example: I introduce a slightly modified version of the qualitative O-N. It features: We use θ to denote polar angles and φ to denote azimuth angles on the surfaces. Using vector notation, the proposed formula is described as follows. A and B are constants which depend on the roughness of the surfaces. The following formula can be used to match the result with full O-N: It violates the energy conservation law, \( \int \D \omega_i L \le 1 \), when \( \rho \gt 0.97 \). However, full O-N also has this problem. The proposed formula simply consists of a linear combination of the diffuse term and the non-diffuse term, so I propose an artificial but useful parameterization of σ. Here σ' has a simple meaning: the mixing ratio (non-diffuse term) / (diffuse term). The normalization factor is determined to keep the overall intensity at \( V \cdot N = 0 \), which is a maximal point of the non-diffuse term; therefore it never violates the energy conservation law. Comparison between full O-N (blue), qualitative O-N (green) and proposed (red) at \( \rho = 0.8, \sigma = \pi / 4 \). The worst case is \( \theta_i \simeq \frac{\pi}{2} \wedge \theta_r \simeq \frac{\pi}{2} \wedge \phi \simeq \frac{\pi}{2} \). But if we suppose that the directions of the surfaces are uniformly distributed in a scene, the area of each surface on the screen is proportional to \( \cos \theta_i \cos \theta_r \), so the worst case occupies a relatively small region. We can evaluate \( \cos \phi \) by projecting N and V onto the surface.
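For reference, the widely used qualitative O-N approximation that this article modifies can be written out in a few lines. This is a sketch of the standard published form with my own function and variable names, not the article's proposed formula:

```python
import math

def qualitative_oren_nayar(theta_i, theta_r, phi, sigma, rho=1.0):
    """Qualitative Oren-Nayar reflected radiance for unit irradiance.

    theta_i / theta_r: polar angles of the incident / reflected directions,
    phi: azimuth difference between them, sigma: roughness, rho: albedo."""
    s2 = sigma * sigma
    A = 1.0 - 0.5 * s2 / (s2 + 0.33)
    B = 0.45 * s2 / (s2 + 0.09)
    alpha = max(theta_i, theta_r)
    beta = min(theta_i, theta_r)
    return (rho / math.pi) * math.cos(theta_i) * (
        A + B * max(0.0, math.cos(phi)) * math.sin(alpha) * math.tan(beta)
    )
```

For σ = 0 this reduces to the Lambertian ρ cos θᵢ / π, which is one quick sanity check; the `max(0, cos φ)` clamp is the source of the discontinuity artifacts discussed in this article.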
Therefore, the proposed formula is described in spherical coordinates as follows: On the other hand, qualitative O-N can be written in the following form: As you can see, the proposed formula is very similar to qualitative O-N. Dark rings appear on the spheres at the borders of \( s = 0 \). Qualitative O-N has a 1st-order discontinuity at \( \theta_r = 0 \), whereas the proposed formula has a 3rd-order discontinuity. The dark rings completely disappear with the proposed formula. s is always non-positive when \( L \cdot V \le 0 \) (see the definition of s in vector notation), hence qualitative O-N falls back to Lambertian even if \( \sigma \ne 0 \), whereas the proposed formula doesn't. \( L_\text{iON} \) satisfies the following physical requirements. Moreover, when the domains of \( (\theta, \phi) \) are extended to negative values, \( L_\text{iON}( \theta_r, \theta_i, \phi ) \) satisfies the following equalities without any modification. This may imply that the combinations of trigonometric functions used in this formula are natural. I implemented the proposed formula in Blender Cycles. Comparison between qualitative O-N and the proposed formula at \( A = B \ ( \sigma' = 1.0 ) \).
Yasuhiro Fujii <y-fujii at mimosa-pudica.net>
Principle of Finite Induction/One-Based

Theorem

Let $S \subseteq \N_{>0}$. Suppose that:

$(1): \quad 1 \in S$

$(2): \quad \forall n \in \N_{>0} : n \in S \implies n + 1 \in S$

Then:

$S = \N_{>0}$

Proof 1

Consider $\N$ defined as a naturally ordered semigroup. The result follows directly from Principle of Mathematical Induction for Naturally Ordered Semigroup: General Result. $\blacksquare$

Proof 2

Aiming for a contradiction, suppose $S \ne \N_{>0}$, and consider:

$T = \N_{>0} \setminus S$

By assumption $T$ is non-empty, so by the Well-Ordering Principle it has a smallest element. Let this smallest element be denoted $a$. We have been given that $1 \in S$, so $1 \notin T$. So: $a > 1$ and so: $0 < a - 1 < a$. As $a$ is the smallest element of $T$, it follows that: $a - 1 \notin T$. That means $a - 1 \in S$. But then by hypothesis: $\paren {a - 1} + 1 \in S$. But: $\paren {a - 1} + 1 = a$ and so $a \notin T$. This contradicts our assumption that $a \in T$. Hence it follows that $T$ can have no elements at all. That is: $\N_{>0} \setminus S = \O$. That is: $S = \N_{>0}$ $\blacksquare$

Sources

1971: Allan Clark: Elements of Abstract Algebra: Chapter $1$: Properties of the Natural Numbers: $\S 20$
1971: Robert H. Kasriel: Undergraduate Topology: $\S 1.17$: Finite Induction and Well-Ordering for Positive Integers: $17.1$
1982: P.M. Cohn: Algebra Volume 1 (2nd ed.): Chapter $2$: Integers and natural numbers: $\S 2.1$: The integers: $\mathbf{I}$
First I transform the S-box into a row matrix: if I have a 4×4 S-box with 16 elements, I turn it into a 1×16 row matrix, then substitute the binary values in each column and multiply the matrix by the Möbius transformation matrix. Then, how do I transform the S-box, which is in binary form, into decimal form? Also, how do I calculate the nonlinearity of a certain element of that S-box? I'm not sure if I fully understand your question. "How do I transform the S-box which is in binary form into decimal form?": Changing numbers between binary and hexadecimal (or decimal, if you really wanted to) is no different for numbers that are used with S-boxes than for any other numbers. "Also, how do I calculate the nonlinearity of a certain element of that S-box?": As far as I am aware, nonlinearity is defined for functions from $\mathbb{F}_2^n$ to $\mathbb{F}_2^m$. In the context of S-boxes, we typically consider the nonlinearity of a full S-box, and not of a single element. Having said that, let $S : \mathbb{F}_2^n \rightarrow \mathbb{F}_2^m$ be our S-box. Then the nonlinearity $\mathcal{N}$ of $S$ is usually defined in terms of the linearity $\mathcal{L}$ of $S$, which is in turn defined in terms of the bias $\mathcal{E}$ of Boolean functions. Here is the full definition, where $S_b$ denotes the Boolean component functions of $S$ and $\varphi_a$ denotes $x \mapsto a \cdot x$: $\mathcal{N}(S) = 2^{n-1} - \frac{1}{2}\mathcal{L}(S) = 2^{n-1} - \frac{1}{2} \max_{a \in \mathbb{F}_2^{n},\, b \in \mathbb{F}_2^{m} \setminus \{0\}} \left|\mathcal{E}(S_b + \varphi_a)\right|$. In this answer, I elaborated on how to compute this bias $\mathcal{E}(S_b + \varphi_a)$ to compute the values in the linear approximation table of $S$, including a fully worked-out example; I'll try to avoid repeating that here. Once all values in the linear approximation table are computed, the largest absolute value (ignoring the trivial case of $a=b=0$) is called the linearity. Using the definition, it is then straightforward to calculate the nonlinearity.
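The whole computation is small enough to spell out directly. A sketch with my own helper names, using the convention that the input mask `a` runs over all values while the output mask `b` is nonzero:

```python
def nonlinearity(sbox, n, m):
    """Nonlinearity 2^(n-1) - L/2, where L is the largest absolute Walsh
    coefficient of any component function b.S(x) xor a.x with b != 0."""
    linearity = 0
    for b in range(1, 1 << m):          # nonzero output masks
        for a in range(1 << n):         # all input masks
            walsh = sum(
                -1 if bin((sbox[x] & b) ^ (x & a)).count("1") % 2 else 1
                for x in range(1 << n)
            )
            linearity = max(linearity, abs(walsh))
    return (1 << (n - 1)) - linearity // 2
```

As sanity checks: the identity S-box is itself linear, so its nonlinearity is 0, while an optimal 4-bit S-box such as PRESENT's comes out at nonlinearity 4.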
I’d like to give a simple account of what I call the hierarchy of logical expressivity for fragments of classical propositional logic. The idea is to investigate and classify the expressive power of fragments of the traditional language of propositional logic, with the five familiar logical connectives listed below, by considering subsets of these connectives and organizing the corresponding sublanguages of propositional logic into a hierarchy of logical expressivity. conjunction (“and”), denoted $\wedge$ disjunction (“or”), denoted $\vee$ negation (“not”), denoted $\neg$ conditional (“if…, then”), denoted $\to$ biconditional (“if and only if”), denoted $\renewcommand\iff{\leftrightarrow}\iff$ With these five connectives, there are, of course, precisely thirty-two ($32=2^5$) subsets, each giving rise to a corresponding sublanguage, the language of propositional assertions using only those connectives. Which sets of connectives are equally as expressive or more expressive than which others? Which sets of connectives are incomparable in their expressivity? How many classes of expressivity are there? Before continuing, let me mention that Ms. Zoë Auerbach (CUNY Graduate Center), one of the students in my logic-for-philosophers course this past semester, Survey of Logic for Philosophers, at the CUNY Graduate Center in the philosophy PhD program, had chosen to write her term paper on this topic. She has kindly agreed to make her paper, “The hierarchy of expressive power of the standard logical connectives,” available here, and I shall post it soon. To focus the discussion, let us define what I call the (pre)order of logical expressivity on sets of connectives. Namely, for any two sets of connectives, I define that $A\leq B$ with respect to logical expressivity, just in case every logical expression in any number of propositional atoms using only connectives in $A$ is logically equivalent to an expression using only connectives in $B$. 
Thus, $A\leq B$ means that the connectives in $B$ are collectively at least as expressive as the connectives in $A$, in the sense that with the connectives in $B$ you can express any logical assertion that you were able to express with the connectives in $A$. The corresponding equivalence relation $A\equiv B$ holds when $A\leq B$ and $B\leq A$, and in this case we shall say that the sets are expressively equivalent, for this means that the two sets of connectives can express the same logical assertions. The full set of connectives $\{\wedge,\vee,\neg,\to,\iff\}$ is well-known to be complete for propositional logic in the sense that every conceivable truth function, with any number of propositional atoms, is logically equivalent to an expression using only the classical connectives. Indeed, already the sub-collection $\{\wedge,\vee,\neg\}$ is fully complete, and hence expressively equivalent to the full collection, because every truth function can be expressed in disjunctive normal form, as a disjunction of finitely many conjunction clauses, each consisting of a conjunction of propositional atoms or their negations (and hence altogether using only disjunction, conjunction and negation). One can see this easily, for example, by noting that for any particular row of a truth table, there is a conjunction expression that is true on that row and only on that row. For example, the expression $p\wedge\neg r\wedge s\wedge \neg t$ is true on the row where $p$ is true, $r$ is false, $s$ is true and $t$ is false, and one can make similar expressions for any row in any truth table. Simply by taking the disjunction of such expressions for suitable rows where a $T$ is desired, one can produce an expression in disjunctive normal form that is true in any desired pattern (use $p\wedge\neg p$ for the never-true truth function). Therefore, every truth function has a disjunctive normal form, and so $\{\wedge,\vee,\neg\}$ is complete. 
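The row-by-row DNF construction described above is short enough to execute. A sketch, where the function name and the sample truth function are my own illustrative choices:

```python
from itertools import product

def dnf_from_table(table, atoms):
    """Build a DNF formula (as an eval-able string) realizing `table`,
    a dict mapping each 0/1 assignment tuple to the desired truth value."""
    clauses = []
    for assign, val in table.items():
        if val:  # one conjunction clause per row where a T is desired
            lits = [a if bit else f"(not {a})" for a, bit in zip(atoms, assign)]
            clauses.append("(" + " and ".join(lits) + ")")
    if not clauses:  # the never-true function: use p and not p
        return f"({atoms[0]} and (not {atoms[0]}))"
    return " or ".join(clauses)

atoms = ("p", "q", "r")
# an arbitrary target: true exactly when two of the three atoms are true
table = {assign: sum(assign) == 2 for assign in product([0, 1], repeat=3)}
formula = dnf_from_table(table, atoms)
for assign, val in table.items():
    env = dict(zip(atoms, map(bool, assign)))
    assert eval(formula, {}, env) == val
```

Since the target truth function was arbitrary, this is exactly the completeness argument for $\{\wedge,\vee,\neg\}$ in executable form.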
Pressing this further, one can eliminate either $\wedge$ or $\vee$ by systematically applying the de Morgan laws $$p\vee q\quad\equiv\quad\neg(\neg p\wedge\neg q)\qquad\qquad p\wedge q\quad\equiv\quad\neg(\neg p\vee\neg q),$$ which allow one to reduce disjunction to conjunction and negation or reduce conjunction to disjunction and negation. It follows that $\{\wedge,\neg\}$ and $\{\vee,\neg\}$ are each complete, as is any superset of these sets, since a set is always at least as expressive as any of its subsets. Similarly, because we can express disjunction with negation and the conditional via $$p\vee q\quad\equiv\quad \neg p\to q,$$ it follows that the set $\{\to,\neg\}$ can express $\vee$, and hence also is complete. From these simple observations, we may conclude that each of the following fourteen sets of connectives is complete. In particular, they are all expressively equivalent to each other. $$\{\wedge,\vee,\neg,\to,\iff\}$$ $$\{\wedge,\vee,\neg,\iff\}\qquad\{\wedge,\to,\neg,\iff\}\qquad\{\vee,\to,\neg,\iff\}\qquad \{\wedge,\vee,\neg,\to\}$$ $$\{\wedge,\neg,\iff\}\qquad\{\vee,\neg,\iff\}\qquad\{\to,\neg,\iff\}$$ $$\{\wedge,\vee,\neg\}\qquad\{\wedge,\to,\neg\}\qquad\{\vee,\to,\neg\}$$ $$\{\wedge,\neg\}\qquad \{\vee,\neg\}\qquad\{\to,\neg\}$$ Notice that each of those complete sets includes the negation connective $\neg$. If we drop it, then the set $\{\wedge,\vee,\to,\iff\}$ is not complete, since each of these four connectives is truth-preserving, and so any logical expression made from them will have a $T$ in the top row of the truth table, where all atoms are true. In particular, these four connectives collectively cannot express negation $\neg$, and so they are not complete. Clearly, we can express the biconditional as two conditionals, via $$p\iff q\quad\equiv\quad (p\to q)\wedge(q\to p),$$ and so $\{\wedge,\vee,\to,\iff\}$ is expressively equivalent to $\{\wedge,\vee,\to\}$.
And since disjunction can be expressed from the conditional with $$p\vee q\quad\equiv\quad ((p\to q)\to q),$$ it follows that the set is expressively equivalent to $\{\wedge,\to\}$. In light of $$p\wedge q\quad\equiv\quad p\iff(p\to q),$$ it follows that $\{\to,\iff\}$ can express conjunction and hence is also expressively equivalent to $\{\wedge,\vee,\to,\iff\}$. Since $$p\vee q\quad\equiv\quad(p\wedge q)\iff(p\iff q),$$ it follows that $\{\wedge,\iff\}$ can express $\vee$ and hence also $\to$, because $$p\to q\quad\equiv\quad q\iff(q\vee p).$$ Similarly, using $$p\wedge q\quad\equiv\quad (p\vee q)\iff(p\iff q),$$ we can see that $\{\vee,\iff\}$ can express $\wedge$ and hence also is expressively equivalent to $\{\wedge,\vee,\iff\}$, which we have argued is equivalent to $\{\wedge,\vee,\to,\iff\}$. For these reasons, the following sets of connectives are expressively equivalent to each other. $$\{\wedge,\vee,\to,\iff\}$$ $$\{\wedge,\vee,\to\}\qquad\{\wedge,\vee,\iff\}\qquad \{\vee,\to,\iff\}\qquad \{\wedge,\to,\iff\}$$ $$\{\wedge,\iff\}\qquad \{\vee,\iff\}\qquad \{\to,\iff\}\qquad \{\wedge,\to\}$$ And as I had mentioned, these sublanguages are strictly less expressive than the full language, because these four connectives are all truth-preserving and therefore unable to express negation. The set $\{\wedge,\vee\}$, I claim, is unable to express any of the other fundamental connectives, because $\wedge$ and $\vee$ are each false-preserving, and so any logical expression built from $\wedge$ and $\vee$ will have $F$ on the bottom row of the truth table, where all atoms are false. Meanwhile, $\to,\iff$ and $\neg$ are not false-preserving, since they each have $T$ on the bottom row of their defining tables. Thus, $\{\wedge,\vee\}$ lies strictly below the languages mentioned in the previous paragraph in terms of logical expressivity. 
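Each of the displayed equivalences is a finite truth-table check, so they can all be verified mechanically. A sketch, with helper names of my own:

```python
from itertools import product

def same(f, g):
    """Truth-table equality for binary connectives."""
    return all(f(p, q) == g(p, q) for p, q in product([False, True], repeat=2))

IMP = lambda p, q: (not p) or q   # the conditional
IFF = lambda p, q: p == q         # the biconditional

assert same(lambda p, q: p or q,    lambda p, q: IMP(IMP(p, q), q))        # v from ->
assert same(lambda p, q: p and q,   lambda p, q: IFF(p, IMP(p, q)))        # ^ from ->, <->
assert same(lambda p, q: p or q,    lambda p, q: IFF(p and q, IFF(p, q)))  # v from ^, <->
assert same(lambda p, q: IMP(p, q), lambda p, q: IFF(q, q or p))           # -> from v, <->
assert same(lambda p, q: p and q,   lambda p, q: IFF(p or q, IFF(p, q)))   # ^ from v, <->
```

All five assertions pass, confirming the chain of expressive equivalences claimed above.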
Meanwhile, using only $\wedge$ we cannot express $\vee$, since any expression in $p$ and $q$ using only $\wedge$ will have the property that any false atom will make the whole expression false (this uses the associativity of $\wedge$), and $p\vee q$ does not have this feature. Similarly, $\vee$ cannot express $\wedge$, since any expression using only $\vee$ is true if any one of its atoms is true, but $p\wedge q$ is not like this. For these reasons, $\{\wedge\}$ and $\{\vee\}$ are both strictly weaker than $\{\wedge,\vee\}$ in logical expressivity. Next, I claim that $\{\vee,\to\}$ cannot express $\wedge$, and the reason is that the logical operations of $\vee$ and $\to$ each have the property that any expression built from them has at least as many $T$’s as $F$’s in the truth table. This property is true of any propositional atom, and if $\varphi$ has the property, so does $\varphi\vee\psi$ and $\psi\to\varphi$, since these expressions will be true at least as often as $\varphi$ is. Since $\{\vee,\to\}$ cannot express $\wedge$, this language is strictly weaker than $\{\wedge,\vee,\to,\iff\}$ in logical expressivity. Actually, since as we noted above $$p\vee q\quad\equiv\quad ((p\to q)\to q),$$ it follows that $\{\vee,\to\}$ is expressively equivalent to $\{\to\}$. Meanwhile, since $\vee$ is false-preserving, it cannot express $\to$, and so $\{\vee\}$ is strictly less expressive than $\{\vee,\to\}$, which is expressively equivalent to $\{\to\}$. Consider next the language corresponding to $\{\iff,\neg\}$. I claim that this set is not complete. This argument is perhaps a little more complicated than the other arguments we have given so far. What I claim is that both the biconditional and negation are parity-preserving, in the sense that any logical expression using only $\neg$ and $\iff$ will have an even number of $T$’s in its truth table.
This is certainly true of any propositional atom, and if true for $\varphi$, then it is true for $\neg\varphi$, since there are an even number of rows altogether; finally, if both $\varphi$ and $\psi$ have even parity, then I claim that $\varphi\iff\psi$ will also have even parity. To see this, note first that this biconditional is true just in case $\varphi$ and $\psi$ agree, either having the pattern T/T or F/F. If there are an even number of times where both are true jointly T/T, then the remaining occurrences of T/F and F/T will also be even, by considering the T’s for $\varphi$ and $\psi$ separately, and consequently, the number of occurrences of F/F will be even, making $\varphi\iff\psi$ have even parity. If the pattern T/T is odd, then also T/F and F/T will be odd, and so F/F will have to be odd to make up an even number of rows altogether, and so again $\varphi\iff\psi$ will have even parity. Since conjunction, disjunction and the conditional do not have even parity, it follows that $\{\iff,\neg\}$ cannot express any of the other fundamental connectives. Meanwhile, $\{\iff\}$ is strictly less expressive than $\{\iff,\neg\}$, since the biconditional $\iff$ is truth-preserving but negation is not. And clearly $\{\neg\}$ can express only unary truth functions, since any expression using only negation has only one propositional atom, as in $\neg\neg\neg p$. So both $\{\iff\}$ and $\{\neg\}$ are strictly less expressive than $\{\iff,\neg\}$. Lastly, I claim that $\iff$ is not expressible from $\to$. If it were, then since $\vee$ is also expressible from $\to$, we would have that $\{\vee,\iff\}$ is expressible from $\to$, contradicting our earlier observation that $\{\to\}$ is strictly less expressive than $\{\vee,\iff\}$, as this latter set can express $\wedge$, but $\to$ cannot, since every expression in $\to$ has at least as many $T$’s as $F$’s in its truth table. 
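Both inexpressibility arguments — the T-majority property of $\{\vee,\to\}$ and the even-parity property of $\{\iff,\neg\}$ — can also be confirmed by brute force, by computing every two-atom truth table each set generates. A sketch; `closure` is my own helper:

```python
from itertools import product

ROWS = list(product([0, 1], repeat=2))   # assignments to (p, q)
P = tuple(p for p, q in ROWS)            # truth table of the atom p
Q = tuple(q for p, q in ROWS)            # truth table of the atom q

def closure(unary, binary):
    """All two-atom truth tables expressible with the given connectives."""
    funcs = {P, Q}
    while True:
        new = {tuple(op(a) for a in f) for op in unary for f in funcs}
        new |= {tuple(op(a, b) for a, b in zip(f, g))
                for op in binary for f in funcs for g in funcs}
        if new <= funcs:
            return funcs
        funcs |= new

AND = (0, 0, 0, 1)  # rows ordered (0,0), (0,1), (1,0), (1,1)

# {v, ->}: conjunction is never generated, and every table has >= as many T's as F's
c1 = closure([], [lambda a, b: a | b, lambda a, b: (1 - a) | b])
assert AND not in c1 and all(sum(f) >= 2 for f in c1)

# {<->, ~}: every generated table has an even number of T's (only 8 of the 16 tables)
c2 = closure([lambda a: 1 - a], [lambda a, b: 1 if a == b else 0])
assert all(sum(f) % 2 == 0 for f in c2) and len(c2) == 8
```

The eight tables in the second closure are exactly the affine truth functions of two atoms, which is the standard way of summarizing the parity argument.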
These observations altogether establish the hierarchy of logical expressivity shown in the diagram displayed above. It is natural, of course, to want to extend the hierarchy of logical expressivity beyond the five classical connectives. If one considers all sixteen binary logical operations, then Greg Restall has kindly produced the following image, which shows how the hierarchy we discussed above fits into the resulting hierarchy of expressivity. This diagram shows only the equivalence classes, rather than all $65536=2^{16}$ sets of connectives. If one wants to go beyond merely the binary connectives, then one lands at Post’s lattice, pictured below (image due to Emil Jeřábek), which is the countably infinite (complete) lattice of logical expressivity for all sets of truth functions, using any given set of Boolean connectives. Every such set is finitely generated.
I'm trying to prove that, given $(u_n)_n \in \mathbb{C}^\mathbb{N}$ satisfying $ u_{n+1}-u_n = o(\frac{1}{n})$, the following holds: $$ \lim_{n\to\infty} \frac{u_1+...+u_n}{n} = a \in \mathbb{C} \implies \lim_{n\to\infty} u_n = a$$ It is the converse of the Cesàro mean theorem. A hint is given: rewrite the average using $u_n$ and $a_k = u_k - u_{k-1}$ (with $a_1=u_1$), which I have figured out to be: $$u_1+...+u_n = u_n + \sum\limits_{k=1}^{n-1} \sum\limits_{i=1}^k a_i$$ What I've tried is rewriting this double sum, which I've found to be $o\big(\ln((n+1)!)\big)$. Other than that, most of my attempts at solving this have proven futile.
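For what it's worth, here is one standard route (a sketch of the usual argument, not necessarily the intended one): compare $u_n$ with the Cesàro mean directly via the summation-by-parts identity

```latex
u_n-\frac{u_1+\dots+u_n}{n}
 \;=\;\frac1n\sum_{k=1}^{n}\left(u_n-u_k\right)
 \;=\;\frac1n\sum_{k=1}^{n}\sum_{i=k}^{n-1}a_{i+1}
 \;=\;\frac1n\sum_{i=1}^{n-1}i\,a_{i+1}.
```

Since $u_{n+1}-u_n=o(1/n)$ means $n\,a_{n+1}\to 0$, the terms $i\,a_{i+1}$ tend to $0$, and the right-hand side is (up to the harmless factor $(n-1)/n$) their Cesàro mean, which tends to $0$ by the direct Cesàro theorem. Hence $u_n-\frac{u_1+\dots+u_n}{n}\to 0$, and so $u_n\to a$.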
With the first week behind us, the class is starting to settle into something of a rhythm. Things are going by very quickly, but the students seem more comfortable with the homework system, and after two quizzes have a better idea about what to expect from me. And we are finally getting to the good stuff: derivatives! What I Taught I opened the class by writing a limit problem on the board and encouraged the students to work on it while I handed back quizzes and gave them study guides for the first exam. I spent a little bit of time going over the quiz, then we worked out the problem that I had written on the board. This particular problem was similar to the one discussed on day 4, with many of the same subtleties, but it went much better this time. At this point in the course, we had pretty much exhausted the introductory study of limits, and were ready to move on to the derivative. I started by asking a question: given the graph of a function and a particular point \(P\) on that graph, can you find the slope of the line tangent to the graph through \(P\)? I illustrated the idea with a picture, and demonstrated that an approximation to a tangent line can be made by considering a secant line that passes through \(P\) and some other arbitrary point \(Q\). The closer that \(Q\) is to \(P\), the better the approximation should be. We know how to compute the slope of \(\overline{PQ}\), and we have a good mathematical interpretation of what it means to choose \(Q\) “as close as possible” to \(P\). Thus combining the slope formula from high school algebra with the new concept of limits, we can compute the slope of the tangent line. And thus the derivative is defined! Using this geometric idea, I defined the derivative of a function \(f\) at a point \(a\) to be the slope of the tangent line through \(a\), given by \[ \text{slope of the tangent line} = \lim_{x\to a} \frac{f(x)-f(a)}{x-a}.
\] This looked familiar to most of the students, since we wrote something similar on the first day of class, though without having defined the concept of a limit yet. We worked an example, and I gave an alternative definition, i.e. \[ \text{slope of the tangent line} = \lim_{h\to 0} \frac{f(a+h)-f(a)}{h}. \] I demonstrated how the two formulae are related, and discussed how they could be interpreted (as a slope, as an instantaneous rate of change, as an instantaneous velocity, and so on). Next, I returned to the ball-dropping problem from the first day of class. I asked the students to look at their notes and recall our solution, then used the derivative to obtain the same solution, making the point that the solution from day 1 was a numerical approximation, while the current solution was exact (but they matched, so our numerics must have been pretty good!). I then used this problem as a jumping-off point to discuss the idea of the derivative function. The derivative defined above gives the slope of a single tangent line, while the derivative function gobbles up any value \(x\) and spits out the slope of the tangent line through \(x\). I made the point that, in our example problem, we could pick any \(x\) we liked, and the computations would be identical. One student asked, “The computations will be the same, but the numbers will be different? Ah-ha!” Yay! Right on the money, clearly stated, and obviously understood! At this point, my plan was to define differentiability (at a point and on an interval), then go through three basic situations where things could “go wrong.” I stated the definition, but then a student asked, “So a continuous function is differentiable?” Another great question! While that was my “What can go wrong? (part 2),” I decided to address it immediately, and stated the theorem that differentiability implies continuity.
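The secant-to-tangent idea is easy to see numerically. A sketch, where the falling-ball function \(h(t)=4.9t^2\) is my own stand-in for the course's day-1 example:

```python
def secant_slope(f, a, h):
    """Slope of the secant line through (a, f(a)) and (a+h, f(a+h))."""
    return (f(a + h) - f(a)) / h

height = lambda t: 4.9 * t * t  # free-fall distance (meters) after t seconds
# the secant slopes approach the tangent slope h'(1) = 9.8 as h shrinks
for h in (1.0, 0.1, 0.001, 1e-6):
    print(h, secant_slope(height, 1.0, h))
```

Here the error is exactly \(4.9h\), so halving the gap between \(Q\) and \(P\) halves the error, which is the "numerical approximation vs. exact derivative" point in miniature.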
This allowed me to talk about the contrapositive, converse, and inverse a little bit, and to note that discontinuous implies not differentiable by “reversing” the statement of the theorem. I then returned to the rest of my “What can go wrong?” spiel and demonstrated corners and cusps (with \(|x|\)), and mentioned the existence of vertical tangents. Finally, I introduced higher order derivatives (with their notation), and mentioned that acceleration was the second derivative of position with respect to time, and finished the lecture with a quick mention of Leibniz’s notation (we’ll spend more time on that later). What Worked Honestly, I felt like the whole lecture gelled quite well. I managed to get through a lot of material (probably too much, but the pacing is the pacing) in a manner that was, I think, clear and effective. There are a couple of things that I think worked exceptionally well, however: I started with an example that worked through a couple of tricky algebraic subtleties. One of these had to do with sign, and I made a little mistake in canceling negatives at one point. A student pointed it out, we fixed it, and all of my students spent the remainder of the lecture looking for my sign errors. Three different students stopped me at different times in the lecture to point out purported errors, indicating to me that they were engaged, which is a victory. There were several great questions (two of which are highlighted above). Though I may have moved off of my intended train of argument a few times, and probably spent more time on tangential matters than I would have liked (I had to skip a proof because of this), I am quite happy that my students were both paying attention and, seemingly, understanding. All-in-all, I feel like this was one of the better lectures that I have given. Engagement was high, participation was high, and I enjoyed the back and forth.
What Didn’t Work Yeah, so, this one is a bit personal: I mentioned above that I made a sign error in my first example. This is not something that didn’t work—in fact, handled correctly, it is exactly what should happen. The problem was not the mistake, but my reaction to it. When one is in front of a class of students with well-prepared lecture notes, it is easy to believe that one has not made any mistakes, and to assume that the error is the student’s. The particular mistake that I made was this: \[ \frac{-a}{-b-c} = \frac{a}{b-c}, \] (though complicated by expressions with more substance than \(a\), \(b\), and \(c\)). A student pointed out the problem, but I misunderstood—I assumed that he was asking about the sign in front of \(b\), which I had correctly canceled! Because I had made a snap judgment, I didn’t wait for him to finish his question. While he eventually made his point, it was much more difficult than it should have been. Again, it is easy to quickly jump to conclusions when you are the “expert,” but this is problematic. I know that I tend to come across as somewhat arrogant to some students (I believe that one student called me “overly smug” on an evaluation once), and I know that part of the problem is snap judgments like the one displayed above. The key, I think, as with everything in the classroom, is to slow down. I have worked hard to give more time for students to think after I ask them questions, and I have made an effort to stop at the end of each sentence or equation that I write on the board so that students can catch up, but I really need to work on letting my students finish their questions before I start answering. So, let’s try this: whenever a student speaks, I am going to mentally count to five and take a deep breath.
I have a question in which the person asking has identified that the total sum of 11 comes up more often than a sum of 12 in the rolling of three dice, and this is strange as they both have the same number of possible combinations of 3 numbers that make up 11 and 12. Clearly 11 can be made up of combinations of: (6,4,1), (6,3,2), (5,5,1), (5,4,2), (5,3,3), (4,4,3). And I argued that when we have (a,b,c) with a, b, c pairwise distinct, then we have 3! possible orderings; when exactly two of a, b, c are equal, we have 3; and when $a=b=c$ we only have one. I then argued that 12 has: (6,5,1), (5,5,2), (4,4,4), (5,4,3), (6,3,3), (6,4,2), and because of the (4,4,4) triplet, which only has one possible ordering, the probability of a 12 is slightly lower than the probability of an 11. The question asks me to 'introduce a probability space' to tackle this, and I'm a bit confused as to how to approach it. What I've put together: $\Omega=\{1,2,3,4,5,6\}^3$, $\mathscr F=\mathscr P(\Omega)$, $\mathbb P: \mathscr F\to[0,1],\ \mathbb P(x)=\frac{\operatorname{card}(x)}{216}$, where card is the cardinality of the set x. Or should the function be piecewise, depending on the different values of a, b, c in the triplets? Can I define the function like this instead? Let the triplet (a,b,c) correspond to the outcome of the toss of each die, with a, b, c $\in \{1,2,3,4,5,6\}$. Let f be a function mapping the power set to [0,1] and x=(a,b,c). Case 1 ($a=b=c$): f(x) = 1/216. Case 2 ($a=b$, $a\ne c$): f(x) = 3/216. Case 3 (a, b, c pairwise distinct): f(x) = 3!/216 = 6/216.
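The counting argument can be confirmed by brute force over the 216 equally likely ordered outcomes of $\Omega=\{1,\dots,6\}^3$:

```python
from itertools import product
from collections import Counter

# all 6^3 = 216 equally likely ordered outcomes of three dice
counts = Counter(sum(dice) for dice in product(range(1, 7), repeat=3))
print(counts[11], counts[12])  # ordered outcomes summing to 11 and to 12
```

This gives 27 and 25, so $P(11)=27/216 > 25/216 = P(12)$, matching the (4,4,4) argument: the uniform measure lives on ordered triples, and the per-pattern weights 1/216, 3/216, 6/216 are what each unordered triple contributes.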
Let $ k \in \mathbb{N}$. Let $ I \subset \mathbb{R}$ be an open interval. Let $ f : I \rightarrow \mathbb{R}$ be a $k$-times continuously differentiable function with $f'(x) \not= 0 $ for all $x \in I $. Show that: $1)$ $f$ is injective. $2)$ $f(I)$ is an open interval. $3)$ the inverse function $ f^{-1} : f(I) \rightarrow I $ is $k$-times continuously differentiable. So $1)$ and $2)$ weren't a problem, but I need help with $3)$. I've already shown $1)$, $2)$, that $f^{-1}$ is continuous, and that $f^{-1}$ is differentiable. Many thanks in advance
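A sketch of the usual induction for $3)$, given what has already been shown: once $f^{-1}$ is known to be differentiable, the inverse-function rule

```latex
\left(f^{-1}\right)'(y) \;=\; \frac{1}{f'\!\left(f^{-1}(y)\right)}
```

expresses $(f^{-1})'$ as the composition $y \mapsto 1/f'(f^{-1}(y))$. Now argue by induction on $j$: if $f^{-1}$ is $j$-times continuously differentiable with $j < k$, then $f' \circ f^{-1}$ is $C^j$ (a composition of the $C^{k-1}$ map $f'$ with a $C^j$ map, and $j \le k-1$) and nowhere zero, so its reciprocal $(f^{-1})'$ is $C^j$, which makes $f^{-1}$ itself $C^{j+1}$. Starting from $j = 0$ and stopping at $j + 1 = k$ gives that $f^{-1}$ is $C^k$.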
Recently the question If $\frac{d}{dx}$ is an operator, on what does it operate? was asked on mathoverflow. It seems that some users there objected to the question, apparently interpreting it as an elementary inquiry about what kind of thing a differential operator is, and on this interpretation, I would agree that the question would not be right for mathoverflow. And so the question was closed down (and then reopened, and then closed again…. sigh). (Update 12/6/12: it was opened again, and so I’ve now posted my answer over there.) Meanwhile, I find the question to be more interesting than that, and I believe that the OP intends the question in the way I am interpreting it, namely, as a logic question, a question about the nature of mathematical reference, about the connection between our mathematical symbols and the abstract mathematical objects to which we take them to refer. And specifically, about the curious form of variable binding that expressions involving $dx$ seem to involve. So let me write here the answer that I had intended to post on mathoverflow: ————————- To my way of thinking, this is a serious question, and I am not really satisfied by the other answers and comments, which seem to answer a different question than the one that I find interesting here. The problem is this. We want to regard $\frac{d}{dx}$ as an operator in the abstract senses mentioned by several of the other comments and answers. In the most elementary situation, it operates on functions of a single real variable, returning another such function, the derivative. And the same for $\frac{d}{dt}$. The problem is that, described this way, the operators $\frac{d}{dx}$ and $\frac{d}{dt}$ seem to be the same operator, namely, the operator that takes a function to its derivative, but nevertheless we cannot seem freely to substitute these symbols for one another in formal expressions.
For example, if an instructor were to write $\frac{d}{dt}x^3=3x^2$, a student might object, “don’t you mean $\frac{d}{dx}$?” and the instructor would likely reply, “Oh, yes, excuse me, I meant $\frac{d}{dx}x^3=3x^2$. The other expression would have a different meaning.” But if they are the same operator, why don’t the two expressions have the same meaning? Why can’t we freely substitute different names for this operator and get the same result? What is going on with the logic of reference here? The situation is that the operator $\frac{d}{dx}$ seems to make sense only when applied to functions whose independent variable is described by the symbol “x”. But this collides with the idea that what the function is at bottom has nothing to do with the way we represent it, with the particular symbols that we might use to express which function is meant. That is, the function is the abstract object (whether interpreted in set theory or category theory or whatever foundational theory), and is not connected in any intimate way with the symbol “$x$”. Surely the functions $x\mapsto x^3$ and $t\mapsto t^3$, with the same domain and codomain, are simply different ways of describing exactly the same function. So why can’t we seem to substitute them for one another in the formal expressions? The answer is that the syntactic use of $\frac{d}{dx}$ in a formal expression involves a kind of binding of the variable $x$. Consider the issue of collision of bound variables in first order logic: if $\varphi(x)$ is the assertion that $x$ is not maximal with respect to $\lt$, expressed by $\exists y\ x\lt y$, then $\varphi(y)$, the assertion that $y$ is not maximal, is not correctly described as the assertion $\exists y\ y\lt y$, which is what would be obtained by simply replacing the occurrence of $x$ in $\varphi(x)$ with the symbol $y$. 
For the intended meaning, we cannot simply syntactically replace the occurrence of $x$ with the symbol $y$, if that occurrence of $x$ falls under the scope of a quantifier. Similarly, although the functions $x\mapsto x^3$ and $t\mapsto t^3$ are equal as functions of a real variable, we cannot simply syntactically substitute the expression $x^3$ for $t^3$ in $\frac{d}{dt}t^3$ to get $\frac{d}{dt}x^3$. One might even take the latter as a kind of ill-formed expression, without further explanation of how $x^3$ is to be taken as a function of $t$. So the expression $\frac{d}{dx}$ causes a binding of the variable $x$, much like a quantifier might, and this prevents free substitution in just the way that collision does. But the case here is not quite the same as the way $x$ is a bound variable in $\int_0^1 x^3\ dx$, since $x$ remains free in $\frac{d}{dx}x^3$, but we would say that $\int_0^1 x^3\ dx$ has the same meaning as $\int_0^1 y^3\ dy$. Of course, the issue evaporates if one uses a notation, such as the $\lambda$-calculus, which insists that one be completely explicit about which syntactic variables are to be regarded as the independent variables of a functional term, as in $\lambda x.x^3$, which means the function of the variable $x$ with value $x^3$. And this is how I take several of the other answers to the question, namely, that the use of the operator $\frac{d}{dx}$ indicates that one has previously indicated which of the arguments of the given function is to be regarded as $x$, and it is with respect to this argument that one is differentiating. In practice, this is almost always clear without much remark. For example, our use of $\frac{\partial}{\partial x}$ and $\frac{\partial}{\partial y}$ seems to manage very well in complex situations, sometimes with dozens of variables running around, without adopting the onerous formalism of the $\lambda$-calculus, even if that formalism is what these solutions are essentially really about. 
Meanwhile, it is easy to make examples where one must be very specific about which variables are the independent variable and which are not, as Todd mentions in his comment to David’s answer. For example, cases like $$\frac{d}{dx}\int_0^x(t^2+x^3)dt\qquad \frac{d}{dt}\int_t^x(t^2+x^3)dt$$ are surely clarified for students by a discussion of the usage of variables in formal expressions and more specifically the issue of bound and free variables.
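The $\lambda$-calculus point can be made concrete in code (my own sketch, not from the original post): once functions are first-class values, the derivative operator acts on functions rather than on expressions, and the name of the bound variable becomes irrelevant.

```python
# A crude numerical derivative operator: function in, function out.
# Central differences with a hypothetical step size h; illustrative only.
def D(f, h=1e-6):
    return lambda a: (f(a + h) - f(a - h)) / (2 * h)

cube_x = lambda x: x**3   # "x maps to x^3"
cube_t = lambda t: t**3   # "t maps to t^3" -- the very same function

df = D(cube_x)
dg = D(cube_t)
# df and dg are computed identically: D never sees the variable's name,
# so the d/dx-versus-d/dt distinction cannot even be expressed here.
```

Both `df(2)` and `dg(2)` approximate $3\cdot 2^2 = 12$, which is exactly the observation that the bound-variable name carries no mathematical content once the binding is explicit.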
The answer is that Bob and Alice each calculate $(Q_a Q_b^{-1})^{\alpha \beta}$. Alice computes the quantity $Q_a$, Bob computes $Q_b$, Alice computes $(Q_a Q_b^{-1})^\alpha$ and sends that to Bob, who can compute $c = (Q_a Q_b^{-1})^{\alpha \beta}$. Let's step through the protocol, keeping in mind the OTR reference (assuming that you do the tests, which I'm leaving out because they clutter things): $\text{Alice} \rightarrow \text{Bob}$: Alice picks random $a$ and $\alpha$, sends $g_{2a} = h^{a}$ and $g_{3a} = h^{\alpha}$ to Bob. $\text{Bob} \rightarrow \text{Alice}$: Bob picks random $b$ and $\beta$, computes $g \equiv g_{2a}^{b}$ and $\gamma \equiv g_{3a}^{\beta}$, picks random $s$, sends $P_b \equiv \gamma^s$, $Q_b = h^s g^y$, $g_{2b} = h^b$, and $g_{3b} = h^\beta$ back to Alice. Note: at this point, Bob has computed the shared $g$ and $\gamma$. $\text{Alice} \rightarrow \text{Bob}$: Alice also computes $g = g_{2b}^{a}$ and $\gamma = g_{3b}^{\alpha}$, picks random $r$, computes $P_a \equiv \gamma^r$, $Q_a \equiv h^r g^x$, and $R_a = (Q_a Q_b^{-1})^{\alpha}$, sends $P_a, Q_a, R_a$. $\text{Bob} \rightarrow \text{Alice}$: Bob computes $R_b \equiv (Q_a Q_b^{-1})^{\beta}$, computes $c = R_{ab} = R_a^{\beta}$, checks whether $c = P_a P_b^{-1}$, sends $R_b$. $\text{Alice}$: Alice computes $c = R_{ab} = R_b^{\alpha}$ and also checks $c = P_a P_b^{-1}$. Bob can compute $c$ in the fourth step because he computed $Q_b$ in step 2 and received $R_a$ from Alice in step 3.
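The exchange above can be sketched numerically (a toy sketch: the group parameters `p` and `h` are insecure placeholders, while real OTR uses the 1536-bit MODP group and adds the zero-knowledge checks that were left out of the walkthrough):

```python
import random

# Toy Socialist Millionaire Protocol run.  x is Alice's secret, y is
# Bob's; the final check passes iff x == y, without revealing them.
p = 2**127 - 1          # Mersenne prime as a placeholder modulus (NOT secure)
h = 3                   # assumed generator for this sketch

def rand_exp():
    return random.randrange(2, p - 2)

def smp(x, y):
    # Step 1: Alice picks a, alpha and would send h^a, h^alpha
    a, alpha = rand_exp(), rand_exp()
    g2a, g3a = pow(h, a, p), pow(h, alpha, p)
    # Step 2: Bob picks b, beta, s; forms the shared g and gamma
    b, beta, s = rand_exp(), rand_exp(), rand_exp()
    g, gamma = pow(g2a, b, p), pow(g3a, beta, p)
    Pb, Qb = pow(gamma, s, p), (pow(h, s, p) * pow(g, y, p)) % p
    g2b, g3b = pow(h, b, p), pow(h, beta, p)
    # Step 3: Alice recomputes the same g and gamma, picks r
    r = rand_exp()
    g_a, gamma_a = pow(g2b, a, p), pow(g3b, alpha, p)
    Pa, Qa = pow(gamma_a, r, p), (pow(h, r, p) * pow(g_a, x, p)) % p
    Ra = pow(Qa * pow(Qb, -1, p) % p, alpha, p)
    # Step 4: Bob raises Ra to beta, giving c = (Qa/Qb)^(alpha*beta)
    c = pow(Ra, beta, p)
    # c equals Pa/Pb exactly when g^(x-y) cancels, i.e. when x == y
    return c == (Pa * pow(Pb, -1, p)) % p
```

Running `smp(x, x)` succeeds, while `smp(x, y)` with different secrets fails except with negligible probability, mirroring the $g^{x-y}$ term in the algebra above.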
Egg Drop Project This is the classic egg drop experiment. Students try to build a structure that will prevent a raw egg from breaking when dropped from a significant height. They should think about creating a design that reduces the energy transferred to the egg shell as potential energy is converted to kinetic energy. Some ways to do this would be to decrease the final speed of the egg using air resistance, increase the duration of the collision using some sort of cushion, transfer the energy into something else, or whatever else they can think of! Each group of students gets the following: 2 balloons, 2 small paper cups, 4 straws, 1 sq ft of cellophane, 4 rubber bands, 4 popsicle sticks, 2 ft of tape, 1 egg (not provided). Subjects Covered: Forces, Impulse, Energy Conservation. Supplies: Provided by requester: one egg for each student group, floor covering (e.g., newspaper, tarp). Provided by us: consumed supplies (balloons, small paper cups, straws, cellophane, rubber bands, popsicle sticks, tape) and reusable supplies (scissors). Physics Behind the Demo The egg hitting the ground is a collision between the Earth and the egg. When collisions occur, two properties of the colliding bodies are changed and/or transferred: their energy and momentum. This change and transfer is mediated by one or many forces. If the force is too strong, it can cause the shell of the egg to crack and break. Momentum Transfer and Impulse (no Calculus) Starting with the definition of force, and knowing that acceleration is just the change in velocity over the change in time, $$ \textbf{F}=ma=m\cdot{\frac{\Delta v}{\Delta t}} $$ If we move the $\Delta t $ to the left side of the equation, we can see how force is related to momentum: $$ \textbf{F} \cdot{\Delta t}=m \cdot{\Delta v}$$ This means that the force multiplied by the change in time, or duration of a collision, is equal to the mass multiplied by the change in velocity. 
Momentum (p) is defined as the mass multiplied by the velocity, so the right side is the change in momentum. This change in momentum is the impulse (J): $$ \textbf{J}= \textbf{F} \cdot{\Delta t}=\Delta \textbf{p}$$ Momentum Transfer and Impulse (Calculus) In Progress
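The impulse relation can be put into numbers (illustrative values only; the egg mass, drop height, and stopping times below are assumptions, not part of the original write-up):

```python
import math

# F * dt = m * dv, so for a fixed change in momentum, stretching the
# collision time dt lowers the average force F on the egg.
m = 0.06                      # egg mass in kg (assumed)
v = math.sqrt(2 * 9.81 * 3)   # impact speed after a 3 m drop, from v^2 = 2gh

def avg_force(dt):
    # average force needed to bring the egg to rest in time dt
    return m * v / dt

hard = avg_force(0.001)   # hard floor: ~1 ms stop
soft = avg_force(0.05)    # cushioned landing: ~50 ms stop
```

Stretching the stop from 1 ms to 50 ms cuts the average force by a factor of 50, which is exactly what the cushion designs are exploiting.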
It's hard to say just from the sheet music, not having an actual keyboard here. The first line seems difficult; I would guess that the second and third are playable. But you would have to ask somebody more experienced. Having a few experienced users here: do you think that limsup could be a useful tag? I think there are a few questions concerned with the properties of limsup and liminf. Usually they're tagged limit. @Srivatsan it is unclear what is being asked... Is the inner or outer measure of $E$ meant by $m\ast(E)$ (then the question whether it works for non-measurable $E$ has an obvious negative answer, since $E$ is measurable if and only if $m^\ast(E) = m_\ast(E)$ assuming completeness, or the question doesn't make sense). If the ordinary measure is meant by $m\ast(E)$ then the question doesn't make sense. Either way: the question is incomplete and not answerable in its current form. A few questions where this tag would (in my opinion) make sense: http://math.stackexchange.com/questions/6168/definitions-for-limsup-and-liminf http://math.stackexchange.com/questions/8489/liminf-of-difference-of-two-sequences http://math.stackexchange.com/questions/60873/limit-supremum-limit-of-a-product http://math.stackexchange.com/questions/60229/limit-supremum-finite-limit-meaning http://math.stackexchange.com/questions/73508/an-exercise-on-liminf-and-limsup http://math.stackexchange.com/questions/85498/limit-of-sequence-of-sets-some-paradoxical-facts I'm looking for the book "Symmetry Methods for Differential Equations: A Beginner's Guide" by Hydon. Is there some ebook site (to which I hope my university has a subscription) that has this book? ebooks.cambridge.org doesn't seem to have it. Not sure about uniform continuity questions, but I think they should go under a different tag. I would expect most "continuity" questions to be in general-topology and "uniform continuity" in real-analysis. Here's a challenge for your Google skills... 
can you locate an online copy of: Walter Rudin, Lebesgue’s first theorem (in L. Nachbin (Ed.), Mathematical Analysis and Applications, Part B, in Advances in Mathematics Supplementary Studies, Vol. 7B, Academic Press, New York, 1981, pp. 741–747)? No, it was an honest challenge which I myself failed to meet (hence my "what I'm really curious to see..." post). I agree. If it is scanned somewhere, it definitely isn't OCR'ed, or it is so new that Google hasn't stumbled over it yet. @MartinSleziak I don't think so :) I'm not very good at coming up with new tags. I just think there is little sense in preferring one of liminf/limsup over the other, and every term encompassing both would most likely lead to us having to do the tagging ourselves, since beginners won't be familiar with it. Anyway, my opinion is this: I did what I considered the best way: I've created [tag:limsup] and mentioned liminf in the tag wiki. Feel free to create a new tag and retag the two questions if you have a better name. I do not plan on adding other questions to that tag until tomorrow. @QED You do not have to accept anything. I am not saying it is a good question; but that doesn't mean it's not acceptable either. The site's policy/vision is to be open towards "math of all levels". It seems hypocritical to me to declare this if we downvote a question simply because it is elementary. @Matt Basically, the a priori probability (the true probability) is different from the a posteriori probability after part (or the whole) of the sample point is revealed. I think that is a legitimate answer. @QED Well, the tag can be removed (if someone decides to do so). The main purpose of the edit was that you can retract your downvote. It's not a good reason for editing, but I think we've seen worse edits... @QED Ah. Once, when it was snowing at Princeton, I was heading toward the main door to the math department, about 30 feet away, and I saw the secretary coming out of the door. 
Next thing I knew, I saw the secretary looking down at me asking if I was all right. OK, so chat is now available... but; it has been suggested that for Mathematics we should have TeX support. The current TeX processing has some non-trivial client impact. Before I even attempt trying to hack this in, is this something that the community would want / use? (this would only apply ... So in between doing phone surveys for CNN yesterday I had an interesting thought. For $p$ an odd prime, define the truncation map $$t_{p^r}:\mathbb{Z}_p\to\mathbb{Z}/p^r\mathbb{Z}:\sum_{l=0}^\infty a_lp^l\mapsto\sum_{l=0}^{r-1}a_lp^l.$$ Then primitive roots lift to $$W_p=\{w\in\mathbb{Z}_p:\langle t_{p^r}(w)\rangle=(\mathbb{Z}/p^r\mathbb{Z})^\times\}.$$ Does $\langle W_p\rangle\subset\mathbb{Z}_p$ have a name or any formal study? > I agree with @Matt E, as almost always. But I think it is true that a standard (pun not originally intended) freshman calculus does not provide any mathematically useful information or insight about infinitesimals, so thinking about freshman calculus in terms of infinitesimals is likely to be unrewarding. – Pete L. Clark 4 mins ago In mathematics, in the area of order theory, an antichain is a subset of a partially ordered set such that any two elements in the subset are incomparable. (Some authors use the term "antichain" to mean strong antichain, a subset such that there is no element of the poset smaller than 2 distinct elements of the antichain.) Let S be a partially ordered set. We say two elements a and b of a partially ordered set are comparable if a ≤ b or b ≤ a. If two elements are not comparable, we say they are incomparable; that is, x and y are incomparable if neither x ≤ y nor y ≤ x. A chain in S is a... @MartinSleziak Yes, I almost expected the subnets debate. I was always happy with the order-preserving+cofinal definition and never felt the need for the other one. I haven't thought about Alexei's question really. 
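For integer representatives of $\mathbb{Z}_p$, the truncation map and the membership test for $W_p$ can be sketched directly (my illustration; `mult_order` and `lifts_primitive_root` are hypothetical helper names, and $t_{p^r}$ on an integer is just reduction mod $p^r$):

```python
from math import gcd

def mult_order(w, n):
    # multiplicative order of w modulo n (assumes gcd(w, n) == 1)
    x, k = w % n, 1
    while x != 1:
        x = x * w % n
        k += 1
    return k

def lifts_primitive_root(w, p, r):
    # t_{p^r}(w) generates (Z/p^r Z)^x iff its order equals phi(p^r)
    pr = p ** r
    phi = pr - pr // p
    return gcd(w, p) == 1 and mult_order(w, pr) == phi
```

For example, 2 is a primitive root modulo 5 and modulo 25, while 4 (a square) is not, matching the definition of $W_p$ for each fixed $r$.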
When I look at the comments on Norbert's question, it seems that the comments together already give a sufficient answer to his first question - and they came very quickly. Nobody said anything about his second question. Wouldn't it be better to divide it into two separate questions? What do you think, t.b.? @tb About Alexei's question, I spent some time on it. My guess was that it doesn't hold, but I wasn't able to find a counterexample. I hope to get back to that question. (But there are already too many questions which I would like to get back to...) @MartinSleziak I deleted part of my comment since I figured out that I never actually proved that in detail, but I'm sure it should work. I needed a bit of summability in topological vector spaces, but it's really no problem at all. It's just a special case of nets written differently (as series are a special case of sequences).
Forward-backward multiplicity correlations in pp collisions at √s = 0.9, 2.76 and 7 TeV (Springer, 2015-05-20) The strength of forward-backward (FB) multiplicity correlations is measured by the ALICE detector in proton-proton (pp) collisions at $\sqrt{s}$ = 0.9, 2.76 and 7 TeV. The measurement is performed in the central pseudorapidity ... Rapidity and transverse-momentum dependence of the inclusive J/$\mathbf{\psi}$ nuclear modification factor in p-Pb collisions at $\mathbf{\sqrt{\textit{s}_{NN}}}=5.02$ TeV (Springer, 2015-06) We have studied the transverse-momentum ($p_{\rm T}$) dependence of the inclusive J/$\psi$ production in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV, in three center-of-mass rapidity ($y_{\rm cms}$) regions, down to ... Multiplicity dependence of two-particle azimuthal correlations in pp collisions at the LHC (Springer, 2013-09) We present the measurements of particle pair yields per trigger particle obtained from di-hadron azimuthal correlations in pp collisions at $\sqrt{s}$=0.9, 2.76, and 7 TeV recorded with the ALICE detector. The yields are ... Measurement of charm and beauty production at central rapidity versus charged-particle multiplicity in proton-proton collisions at $\sqrt{s}$ = 7 TeV (Springer, 2015-09) Prompt D meson and non-prompt J/$\psi$ yields are studied as a function of the multiplicity of charged particles produced in inelastic proton-proton collisions at a centre-of-mass energy of $\sqrt{s}=7$ TeV. The results ... Coherent $\rho^0$ photoproduction in ultra-peripheral Pb-Pb collisions at $\mathbf{\sqrt{\textit{s}_{\rm NN}}} = 2.76$ TeV (Springer, 2015-09) We report the first measurement at the LHC of coherent photoproduction of $\rho^0$ mesons in ultra-peripheral Pb-Pb collisions. The invariant mass and transverse momentum distributions for $\rho^0$ production are studied ... 
Inclusive, prompt and non-prompt J/ψ production at mid-rapidity in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV (Springer, 2015-07-10) The transverse momentum ($p_{\rm T}$) dependence of the nuclear modification factor $R_{\rm AA}$ and the centrality dependence of the average transverse momentum $\langle p_{\rm T} \rangle$ for inclusive J/ψ have been measured with ALICE for Pb-Pb collisions ...
The question "why is preimage resistance needed for hash functions" is not really relevant, because collision resistance implies preimage resistance. Thus, it is just a fact that if you have collision resistance then you must have preimage resistance. So, instead, I will address what preimage resistance is good for at all. In more technical cryptographic terms, a preimage-resistant function is called a one-way function. One-way functions are the most basic cryptographic primitive, and you can construct all of symmetric crypto from them (i.e., you can construct pseudorandom generators, pseudorandom functions, symmetric encryption, message authentication codes, and so on). [Note: you cannot construct collision-resistant hash functions from them in a black-box way.] Thus, from a theoretical perspective, these functions are very interesting; I say "theoretical" since the above constructions are all theoretical and not practical. Note that preimage resistance is a necessary condition almost everywhere in cryptography, but it is usually not sufficient. This is because it does not rule out, say, obtaining half of the preimage, and it is also only meaningful for very high-entropy - if not random - inputs. Preimage resistance is relied on in practice for hashing passwords, but only heuristically; in order to analyze this properly, you actually need to model the function as a random oracle. So, preimage resistance, or one-wayness, is a fact of cryptography. It is the minimal property that you need to do almost anything interesting in crypto (apart from all of the information-theoretic crypto work, which I won't discuss here). However, it is not a security notion that is usually of interest in and by itself. A proof sketch that collision resistance implies one-wayness: Assume that there exists an adversary $A$ who can invert a function $H:\{0,1\}^*\rightarrow\{0,1\}^n$ with probability $\epsilon$ on a random input of length $2n$. 
We construct an adversary $A'$ who finds a collision in $H$. Adversary $A'$ chooses a random $r\in\{0,1\}^{2n}$, computes $y=H(r)$ and invokes $A$ on input $y$. If $A$ returns $s$ such that $H(s)=y$ and $s\neq r$, then $A'$ outputs the collision $(r,s)$. Otherwise it outputs $\bot$. We now analyze the probability that $A'$ succeeds. $A'$ succeeds if $A$ succeeds and $s\neq r$. Since the input is of length $2n$ and the output is of length $n$, each output has on average $2^n$ preimages. Thus, the probability that $s=r$ is negligible. [This part of the proof needs some work, but I'll leave that to the readers as an exercise.] We therefore conclude that $A'$ succeeds with probability $\epsilon - neg(n)$, where $neg$ is a negligible function. Thus, if $\epsilon$ is non-negligible then the function $H$ is not collision resistant. We conclude that if $H$ is collision resistant, then $A$ can succeed only with negligible probability, and so $H$ is one-way.
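The reduction can be mirrored by a toy computation (my illustration, with a deliberately tiny 16-bit output so that the brute-force "inverter" is feasible; all names here are made up for the demo):

```python
import hashlib

# H compresses 2n-bit inputs to n-bit outputs; here n = 16 bits.
N = 2  # output length in bytes

def H(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()[:N]

def invert(y: bytes) -> bytes:
    # "adversary A": exhaustively find some preimage of y
    i = 0
    while True:
        s = i.to_bytes(2 * N, "big")   # inputs of length 2n bits
        if H(s) == y:
            return s
        i += 1

# "adversary A'": pick r, hand H(r) to the inverter.  Since each output
# has about 2^n preimages, the returned s almost certainly differs from
# r, giving the collision (r, s).
r = (123456789).to_bytes(2 * N, "big")
s = invert(H(r))
```

This is exactly the structure of the proof sketch: a working inverter for a compressing hash is converted into a collision finder.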
Given the ODE system $$ \left\{ \begin{array}{l} \dot x = -2x - z \cos t, \\ \dot y = x \sin t - y, \\ \dot z = -4z + \sin^2 t. \end{array} \right. $$ I am asked to: 1) examine the stability of this system; 2) determine whether the system has $2\pi$-periodic solutions. Since we usually refer to the monodromy operator in cases like this, I noticed that, given the homogeneous system $$ \left\{ \begin{array}{l} \dot x = -2x - z \cos t, \\ \dot y = x \sin t - y, \\ \dot z = -4z. \end{array} \right. $$ $z$ is an eigenvector with eigenvalue $-4$, hence one of the monodromy eigenvalues is $e^{-4\pi}$. Then, since we can explicitly calculate $z$ from the 3rd equation, substituting it into the 1st equation and considering the homogeneous part $$ \left\{ \begin{array}{l} \dot x = -2x, \\ \dot y = x \sin t - y. \end{array} \right. $$ we observe that $x$ is an eigenvector with eigenvalue $-2$, so $e^{-2 \pi} $ is the second monodromy eigenvalue. Replacing $x$ in the 2nd equation with its explicit expression, we get the third monodromy eigenvalue: $e^{-\pi}$. Since all the monodromy eigenvalues are less than 1 in absolute value, the system is asymptotically stable. That is my solution to the first problem. Is it correct? And how can I solve the second problem?
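The claimed multipliers can be cross-checked numerically (my own sketch, not part of the question): integrate the fundamental matrix of the homogeneous system over one period $T = 2\pi$ with a basic RK4 scheme and read off the eigenvalues of the resulting monodromy matrix, which can then be compared with the values derived above.

```python
import numpy as np

# Coefficient matrix of the homogeneous system for state (x, y, z)
def A(t):
    return np.array([[-2.0,      0.0, -np.cos(t)],
                     [np.sin(t), -1.0, 0.0      ],
                     [0.0,       0.0, -4.0      ]])

def monodromy(T=2 * np.pi, steps=2000):
    Phi = np.eye(3)          # fundamental matrix, Phi(0) = I
    h = T / steps
    t = 0.0
    for _ in range(steps):   # classical RK4 on Phi' = A(t) Phi
        k1 = A(t) @ Phi
        k2 = A(t + h / 2) @ (Phi + h / 2 * k1)
        k3 = A(t + h / 2) @ (Phi + h / 2 * k2)
        k4 = A(t + h) @ (Phi + h * k3)
        Phi = Phi + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        t += h
    return Phi

mults = np.linalg.eigvals(monodromy())
stable = np.all(np.abs(mults) < 1)   # asymptotic stability criterion
```

Whatever the exact multipliers come out to be, they are all well below 1 in modulus, which is consistent with the asymptotic-stability conclusion.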
Reposted from PolymathProgrammer.com, my answer to my own initial query, generated mostly on my own after some initial help from a friend (Jason Schmurr) and my dad (Russell Gmirkin). I believe I've solved my own inquiry. The following are functions that, when graphed in polar coordinates, render lovely polygons. In fact, I’ve got 3 versions (6 if you consider rotation a factor; to either align a vertex or the midpoint of a side with $\theta=0$). One with circumradius = 1 (as vertices $\to \infty$, polygons expand outward toward the circumscribed circle), one with apothem = 1 (as vertices $\to \infty$, polygons collapse inward toward the inscribed circle) and one with the midpoint between circumradius & apothem = 1 (as vertices $\to \infty$, both the maxima and minima, thus the circumscribed and inscribed circles, collapse toward that ‘midpoint radius’). I’d be interested to know whether this approach, describing the radius of a polygon as a periodic function, has any precedent (has anyone else done this, or am I the first)? I’ve been working on this idea for some time (on and off for years), but just recently overcame some stumbling blocks with a little help from a friend and my dad. Most of the legwork was my own, though. 
The relatively final form(s) appear to be: (n-gon, circumradius=1, unrotated) 1/(((Sec[Pi/v]-1)/(Sec[Pi/4]-1))Abs[Cos[(v*x)/4]]+((Sec[Pi/v]-1)/(Sec[Pi/4]-1))Abs[Cos[((v*x)/4)-(Pi/2)]]-((Sec[Pi/v]-1)/(Sec[Pi/4]-1))+1) (n-gon, circumradius=1, rotated $-\pi/4$) 1/(((Sec[Pi/v]-1)/(Sec[Pi/4]-1))Abs[Cos[((v*x)/4)-(Pi/4)]]+((Sec[Pi/v]-1)/(Sec[Pi/4]-1))Abs[Cos[((v*x)/4)-(3Pi/4)]]-((Sec[Pi/v]-1)/(Sec[Pi/4]-1))+1) (n-gon, function centered around unit circle, unrotated) ((Sec[Pi/v]+1)/2)/(((Sec[Pi/v]-1)/(Sec[Pi/4]-1))Abs[Cos[((v*x)/4)]]+((Sec[Pi/v]-1)/(Sec[Pi/4]-1))Abs[Cos[((v*x)/4)-(Pi/2)]]-((Sec[Pi/v]-1)/(Sec[Pi/4]-1))+1) (n-gon, function centered around unit circle, rotated $-\pi/4$) ((Sec[Pi/v]+1)/2)/(((Sec[Pi/v]-1)/(Sec[Pi/4]-1))Abs[Cos[((v*x)/4)-(Pi/4)]]+((Sec[Pi/v]-1)/(Sec[Pi/4]-1))Abs[Cos[((v*x)/4)-(3Pi/4)]]-((Sec[Pi/v]-1)/(Sec[Pi/4]-1))+1) (n-gon, apothem=1, unrotated) Sec[Pi/v]/(((Sec[Pi/v]-1)/(Sec[Pi/4]-1))Abs[Cos[(v*x)/4]]+((Sec[Pi/v]-1)/(Sec[Pi/4]-1))Abs[Cos[((v*x)/4)-(Pi/2)]]-((Sec[Pi/v]-1)/(Sec[Pi/4]-1))+1) (n-gon, apothem=1, rotated $-\pi/4$) Sec[Pi/v]/(((Sec[Pi/v]-1)/(Sec[Pi/4]-1))Abs[Cos[((v*x)/4)-(Pi/4)]]+((Sec[Pi/v]-1)/(Sec[Pi/4]-1))Abs[Cos[((v*x)/4)-(3Pi/4)]]-((Sec[Pi/v]-1)/(Sec[Pi/4]-1))+1) Don’t know whether they simplify at all to something less complicated… Even if not, they’re beauties! Examples: 3-gon: here 4-gon: here 5-gon: here If it's a unique solution and I'm first to it, I submit these as the Gmirkin Polygon Radius Function(s) (or some suitably nifty sounding name that’s not too cumbersome). *Smile* Heh. I may write them up formally for publication at some point, once a few previous engagements clear up, assuming they’ve not previously been published or some directly correlated function has already been published elsewhere. (If so, I’d like to know when, where and by whom; for academic curiosity’s sake.) It is my belief that a similar function exists for describing 3D Polyhedrons of some description(s). 
Though, I have not yet even attempted such a case and will probably stick to 2D cases for now. I can also tell you that if you vary the phase shift of the denominator [Abs[Cos[]]] terms by differing amounts (though not both by some multiple of $\pi/4$, $\pi/2$, etc.), you can also reproduce rectangles, isosceles triangles, etc. In some cases you can also generate diamond shapes by varying some other parameters. It's a surprisingly robust solution, as I'd hoped. Lord knows it's taken me a few years of false starts to get at the correct combination of functions. Though, I learned plenty along the way, much of which helped me generalize to all polygons from the square case a friend solved at my behest a week or two ago. Here's hoping this is an interesting, unique new solution that's viable and notable. (One can hope!) Sorry the post is a bit lengthy... ;) Best, ~Michael Gmirkin Edit: Sorry. Jumped the gun slightly. I retract the above equations. At the behest of someone on another site, I checked a few data points in Wolfram Alpha. While it appears to work for the Square case (where the coefficients and corrective term basically cancel out), it doesn't work for other cases, but is slightly off. I think I've got the coefficients wrong. Will have to poke around a bit more in the maths to see if it's possible to get a technically correct exact solution. The graphs were so close as to fool me into thinking they were exact for all cases. Will get back to you if/when I get a technically correct solution. 'Til then... I still believe there is a valid function, since the Square case is technically correct @ 1/(Abs[Sin[x]]+Abs[Cos[x]]) or 1/(Abs[Cos[x]]+Abs[Cos[x-(Pi/2)]]). Just need the technically correct coefficient... will work on it as I've got some time. But, for now, the incorrect versions are darned close! ;o) Enough to fool most people (including me, apparently).
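As a cross-check against the square case that is known to be correct, here is a standard piecewise-secant closed form for a regular n-gon as a periodic polar function (a sketch I'm adding for comparison; it is not one of the retracted formulas):

```python
import math

# Boundary of a regular n-gon with circumradius 1, vertices at
# theta = k * 2*pi/n.  Within each edge's angular sector, the edge is
# the line at distance cos(pi/n) (the apothem) from the origin.
def polygon_radius(theta, n):
    sector = 2 * math.pi / n
    t = theta % sector - sector / 2   # angle measured from the edge midpoint
    return math.cos(math.pi / n) / math.cos(t)
```

For n = 4 this reduces to the verified square case 1/(Abs[Sin[x]]+Abs[Cos[x]]): in the first quadrant, cos(pi/4)/cos(x - pi/4) = 1/(cos x + sin x).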
Stochastic 3D Modeling of Three-Phase Microstructures for Predicting Transport Properties: A Case Study Abstract We compare two conceptually different stochastic microstructure models, i.e., a graph-based model and a pluri-Gaussian model, that have been introduced to model the transport properties of three-phase microstructures occurring, e.g., in solid oxide fuel cell electrodes. Besides comparing both models, we present new results regarding the relationship between model parameters and certain microstructure characteristics. In particular, an analytical expression is obtained for the expected length of triple phase boundary per unit volume in the pluri-Gaussian model. As a case study, we consider 3D image data which show a representative cutout of a solid oxide fuel cell anode obtained by FIB-SEM tomography. The two models are fitted to image data and compared in terms of morphological characteristics (like mean geodesic tortuosity and constrictivity) as well as in terms of effective transport properties. The Stokes flow in the pore phase and effective conductivities in the solid phases are computed numerically for realizations of the two models as well as for the 3D image data using Fourier methods. The local and effective physical responses of the model realizations are compared to those obtained from 3D image data. Finally, we assess the accuracy of the two methods to predict permeability as well as electronic and ionic conductivities of the anode. 
Keywords: Stochastic microstructure modeling, Effective conductivity, Permeability, Solid oxide fuel cells, 3D image data

Nomenclature

\(\beta_1, \beta_2, \beta_3\): Constrictivities of the three phases
\(\varepsilon_1, \varepsilon_2, \varepsilon_3\): Volume fractions of the three phases
\(\widehat{\varepsilon}\): Estimator for the volume fraction of a stationary random closed set
\(\widehat{\varepsilon}^{\star}\): Estimator for the volume fraction in the graph-based microstructure model
\(\gamma_1, \gamma_2, \gamma_3\): Parameters of the distance measure used for the graph-based microstructure model
\(\varGamma\): Pore–solid interface
\(\kappa\) \((\mathrm{m}^2)\): Permeability
\(\widehat{\kappa}\) \((\mathrm{m}^2)\): Geometrical predictor of permeability
\(\lambda_1, \lambda_2, \lambda_3\) \((\mathrm{m}^{-3})\): Intensities of the Poisson point processes
\(\mu_f\) \((\mathrm{kg}\cdot\mathrm{m}^{-1}\cdot\mathrm{s}^{-1})\): Viscosity of an incompressible Newtonian fluid
\(\nu_3\): 3-dimensional Lebesgue measure
\(\varPhi\): Probability distribution function of the standard normal distribution
\(\phi\) \((\mathrm{kg}\cdot\mathrm{m}^2\cdot\mathrm{s}^{-3}\cdot\mathrm{A}^{-1})\): Electrical potential (or ionic concentration)
\(\rho_Y, \rho_Z\): Covariance functions of the Gaussian random fields \(Y\) and \(Z\)
\(\sigma\) \((\mathrm{kg}^{-1}\cdot\mathrm{m}^{-2}\cdot\mathrm{s}^{3}\cdot\mathrm{A}^{2})\): Effective conductivity
\(\sigma_{\mathrm{sol}}\) \((\mathrm{kg}^{-1}\cdot\mathrm{m}^{-2}\cdot\mathrm{s}^{3}\cdot\mathrm{A}^{2})\): Intrinsic conductivity
\(\tau_1, \tau_2, \tau_3\): Mean geodesic tortuosities of the three phases
\(\varTheta\): Parameter space
\(\theta_{ij}\) \((\mathrm{m}^{-1})\): Parameters for modeling two-point coverage probability functions, \(i,j \in \lbrace 1,2 \rbrace\)
\(\vartheta_0\) \((\mathrm{m}^{-2})\), \(\vartheta_1\) \((\mathrm{m}^{-1})\): Intensities of point processes related to the triple phase boundary
\(\varXi_1, \varXi_2, \varXi_3\): Random closed sets denoting the three different phases
\(b_1, b_2, b_3\): Parameters of the beta-skeletons
\(C_1, C_2, C_3\): Two-point coverage probability functions of the three phases
\(d(x,A)\): Euclidean distance between a point \(x \in \mathbb{R}^3\) and a set \(A \subset \mathbb{R}^3\)
\(d_\gamma(x,A)\): Distance measure with parameter \(\gamma\) between a point \(x \in \mathbb{R}^3\) and a set \(A \subset \mathbb{R}^3\)
\(\mathbf{E}\) \((\mathrm{kg}\cdot\mathrm{m}\cdot\mathrm{s}^{-3}\cdot\mathrm{A}^{-1})\): Electrical vector field (or opposite gradient of ionic concentration)
\(\mathbf{G}\) \((\mathrm{kg}\cdot\mathrm{m}^{-2}\cdot\mathrm{s}^{-2})\): Macroscopic pressure gradient
\(\mathcal{G}_1, \mathcal{G}_2, \mathcal{G}_3\): Beta-skeletons of the three phases
\(h\): Function used to estimate the volume fraction in the graph-based model
\(\mathcal{H}_{k}\): \(k\)-dimensional Hausdorff measure for \(k \in \lbrace 1,2,3 \rbrace\)
\(\mathbf{J}\) \((\mathrm{A})\): Electrical current (or particle current)
\(L_{\mathrm{TPB}}\) \((\mathrm{m}^{-2})\): Expected length of the triple phase boundary per unit volume
\(M\): M-factor, i.e., the ratio of effective and intrinsic conductivity
\(\widehat{M}\): Geometrical predictor of the M-factor
\(o\): Origin in the 3-dimensional Euclidean space
\(p\) \((\mathrm{kg}\cdot\mathrm{m}^{-1}\cdot\mathrm{s}^{-2})\): Pressure field
\(\mathbb{R}^3\): 3-dimensional Euclidean space
\(R^2\): Coefficient of determination
\(r_{\mathrm{max}}\) \((\mathrm{m})\): Median of the volume equivalent particle radius distribution
\(r_{\mathrm{min}}\) \((\mathrm{m})\): Median radius of the characteristic bottleneck in a microstructure
\(\mathcal{S}\): Conductive phase
\(S_1, S_2, S_3\): Specific surface area of the three phases
\(s_{\mathrm{GBM}}\) \((\mathrm{m})\): Smoothing parameter of the graph-based microstructure model
\(s_{\mathrm{PGM}}\) \((\mathrm{m})\): Smoothing parameter of the pluri-Gaussian microstructure model
\(u_Y, u_Z\): Thresholds defining the excursion sets of the Gaussian random fields \(Y\) and \(Z\)
\(\mathbf{v}\) \((\mathrm{m}\cdot\mathrm{s}^{-1})\): Velocity of an incompressible Newtonian fluid
\(X_1, X_2, X_3\): Homogeneous Poisson point processes
\(Y, Z\): Gaussian random fields
\(\varDelta\): Laplacian operator
\(\nabla\): Gradient operator
\(\partial A\): Boundary of a set \(A \subset \mathbb{R}^3\)
I would agree with the answer key. The trick here is that it's limiting molar conductivity, the molar conductivity at infinite dilution. For a neutral electrolyte compound, I was taught the notation $\Lambda^0$, and $\lambda^0$ was reserved for individual ions. (The superscript $0$ represents zero concentration, equivalent to infinite dilution.) Anyway, limiting molar conductivity is an interesting property because it does not represent a physically possible scenario (a solute conducting electric charge when no solute is present). It can be extrapolated by Kohlrausch's law: $\Lambda^0$ is the $y$-intercept of a plot of molar conductivity $\Lambda$ against $\sqrt{c}$, where $c$ is the concentration (note that this linear relationship is only valid for strong electrolytes). The degree of ionization of a weak electrolyte depends on the concentration and tends to unity (100% dissociation) in the limit of concentration approaching zero. Limiting molar conductivity $\Lambda^0$ is defined (only) for this exact limiting scenario. So when we consider limiting molar conductivity, we are doing so in an (abstract) condition where both strong and weak electrolytes fully dissociate. Thus for $\Lambda^0$ it makes no difference whether the electrolytes being compared are "strong" or "weak" at finite concentrations (for we are comparing them at "zero" concentration, i.e. infinite dilution). So we should have no problem accepting that a "weak" electrolyte might have a higher limiting molar conductivity value than a "strong" one, as in this case. Why might $AcOH$ (the organic chemists' abbreviation for acetic acid), $HCl$, and $NaOH$ have higher $\Lambda^0$ values than $KCl$? Well, the $H^+$ ion is better at transporting its charge through an aqueous medium than $K^+$ because it can "water hop" via the Grotthuss mechanism. This makes acidic compounds very effective electrolyte conductors. $OH^-$ is also an extremely effective charge carrier, by an analogous "deprotonation-chain" mechanism.
Because they use water itself to "tunnel"* through the bulk solution, protons and hydroxide have extremely high ion mobility in aqueous solutions and thus provide abnormally high molar conductivity. So in general acids and bases would be expected to out-conduct a pH-neutral salt ($KCl$), and in the limit of zero concentration/infinite dilution, even "weak" acids and bases will have higher limiting molar conductivity values. *Not in the quantum sense.
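The Kohlrausch extrapolation described above is just a straight-line fit of $\Lambda$ against $\sqrt{c}$. Here is a minimal sketch in Python; the data points are hypothetical, KCl-like illustrative values, not measurements:

```python
import math

# Kohlrausch's law for strong electrolytes: Λ = Λ0 − K·sqrt(c).
# The y-intercept Λ0 of a least-squares line through the points
# (sqrt(c), Λ) is the limiting molar conductivity.
# Hypothetical illustrative data (KCl-like, not real measurements):
c   = [0.0005, 0.001, 0.005, 0.01, 0.02]     # concentration, mol/L
lam = [147.8, 146.9, 143.5, 141.3, 138.3]    # molar conductivity, S·cm²/mol

x = [math.sqrt(ci) for ci in c]
n = len(x)
xbar = sum(x) / n
ybar = sum(lam) / n
# ordinary least-squares slope and intercept
slope = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, lam)) \
        / sum((xi - xbar) ** 2 for xi in x)
lam0 = ybar - slope * xbar                   # y-intercept = Λ0

print(f"Λ0 ≈ {lam0:.1f} S·cm²/mol (Kohlrausch slope K ≈ {-slope:.0f})")
```

Note that the fitted $\Lambda^0$ exceeds every measured $\Lambda$, as it must: molar conductivity only increases as the solution is diluted.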
If $Q=\operatorname{adding}(P,R)=P+R$, then $P$ can be computed from $Q$ and $R$ as $P=\operatorname{adding}(Q,-R)=Q+(-R)$, where $-R$ is easy to compute from $R$ (just change the $y$ coordinate of $R$ from $y_R$ to $p-y_R$, where $p$ is the order of the prime field). This property follows from $\operatorname{adding}$ being a group law. If $Q=\operatorname{doubling}(P)=P+P=2\times P$, then we can efficiently compute $P$ from $Q$. One method is to use that $P=\displaystyle\frac{n+1}2\times Q$ (where $n$ is the order of the curve's group). This property follows from $n$ being odd. Neither property is considered a weakness, because each follows from having a group (of odd order), which is necessary for the intended applications, including ECDSA. By definition of secp256k1, for that curve $p=2^{256}-2^{32}-977$ and $n=2^{256}-432420386565659656852420866394968145599$. What is the major strength of secp256k1? It forms, with point addition, a group of known 256-bit prime order $n$ with a generator $G$, such that we do not know an algorithm requiring much less than $\sqrt n$ steps to compute $a$ from $a\times G$, for random $a$ in $\Bbb Z_n^*$ (the Discrete Logarithm Problem). Note per comment: $G+G=2\times G$ in a group noted additively becomes $G*G=G^2$ in that same group noted multiplicatively. Similarly, $Q=a\times G$ in a group noted additively becomes $Q=G^a$ in that same group noted multiplicatively. This is why solving either equation for $a$ is called the Discrete Logarithm Problem in the group considered. More strongly, we know no algorithm requiring much less than $\sqrt n$ steps to tell sizably better than random whether a triple was obtained as $(a\times G,b\times G,c\times G)$ or as $(a\times G,b\times G,ab\times G)$ for random $a$, $b$, $c$ in $\Bbb Z_n$ (the Decisional Diffie-Hellman assumption). secp256k1 is preferred over simpler groups with the same properties, notably Schnorr groups, because a point can be expressed much more compactly and the group law is much faster.
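Both "inversion" properties can be checked concretely. The following is a minimal Python sketch on a toy curve with the same equation shape as secp256k1 ($y^2 = x^3 + 7$) but over a tiny, made-up prime field, so everything is brute-forceable; it is an illustration, not the real curve:

```python
# Toy short-Weierstrass curve y^2 = x^3 + 7 over F_17 (hypothetical toy
# parameters; the real secp256k1 uses a 256-bit prime, same group law).
p = 17
INF = None  # point at infinity, the identity of the group

def neg(P):
    """-R: flip the y coordinate from y to p - y."""
    if P is INF:
        return INF
    x, y = P
    return (x, (-y) % p)

def add(P, R):
    """Group law (chord-and-tangent addition, including doubling)."""
    if P is INF:
        return R
    if R is INF:
        return P
    x1, y1 = P
    x2, y2 = R
    if x1 == x2 and (y1 + y2) % p == 0:
        return INF                                   # P + (-P) = identity
    if P == R:
        s = 3 * x1 * x1 * pow(2 * y1, -1, p) % p     # tangent slope
    else:
        s = (y2 - y1) * pow(x2 - x1, -1, p) % p      # chord slope
    x3 = (s * s - x1 - x2) % p
    y3 = (s * (x1 - x3) - y1) % p
    return (x3, y3)

def mul(k, P):
    """Scalar multiplication k × P by double-and-add."""
    Q = INF
    while k:
        if k & 1:
            Q = add(Q, P)
        P = add(P, P)
        k >>= 1
    return Q

def order(P):
    """Order of P: smallest n >= 1 with n × P = INF (brute force)."""
    n, Q = 1, P
    while Q is not INF:
        Q = add(Q, P)
        n += 1
    return n

P = (1, 5)                    # a point on y^2 = x^3 + 7 mod 17 (25 = 8 = 1 + 7)
while order(P) % 2 == 0:      # move into a subgroup of odd order n
    P = add(P, P)
n = order(P)

# Property 1: undo an addition Q = P + R using the cheap negation map.
R = mul(3, P)
Q = add(P, R)
assert add(Q, neg(R)) == P

# Property 2: undo a doubling by multiplying with (n + 1)/2, since n is odd.
assert mul((n + 1) // 2, add(P, P)) == P
```

The two asserts are exactly the two properties from the answer: $Q+(-R)=P$ and $\frac{n+1}2\times(2\times P)=(n+1)\times P=P$ in a group of odd order $n$.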
secp256k1 is preferred over other groups with the same properties also built by point addition on an elliptic curve over a prime field, notably secp256r1, because the special form of $p$ for secp256k1 allows slightly faster calculation, with no known decisive drawback. Does that rely on the functions we choose for adding and doubling? Yes, it has to do with the group and its law, that is, with the functions we choose for adding and doubling. Taking arbitrary functions $Q=f(P)$ and $Q=g(P,R)$ and building the multiplication function from them, will we get the same level of security, as long as that multiplication function is surjective and injective over the field $\Bbb F_p$? No, not even with added constraints making the operation defined by $f$ and $g$ a finite group law. As a counterexample, take the group $(\Bbb Z_p,+)$ for some prime $p$: that defines point doubling $f$, point addition $g$, and we can take any non-zero element as $G$ with $n=p$. Point multiplication reduces to multiplication modulo $p$. That allows construction of the group as stated, yet the DLP is trivial, giving no security: given $Q=a\times G$, we can compute $a$ as $(G^{-1}\bmod p)\times Q\bmod p$, where $G^{-1}\bmod p$ can be computed by the Extended Euclidean Algorithm. Note: when restricted to point addition and doubling as in an axiomatic construction from $f$ and $g$, we can compute $-R$ as $(n-1)\times R$.
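The counterexample is equally easy to check numerically. A short sketch, with a toy prime and arbitrary illustrative values (Python's three-argument `pow` with exponent $-1$ computes the modular inverse, i.e. the Extended Euclidean step):

```python
# Counterexample: in (Z_p, +), "point multiplication" a × G is just
# modular multiplication, so the discrete log falls to one modular inverse.
# Toy prime and values chosen arbitrarily for illustration.
p = 1_000_003          # a prime; the group order n equals p here
G = 123_456            # any non-zero element serves as a generator
a = 987_654            # the "secret" scalar

Q = (a * G) % p        # the public value a × G in (Z_p, +)

# Recover a in one step: a = (G^{-1} mod p) * Q mod p
a_recovered = pow(G, -1, p) * Q % p
assert a_recovered == a
```

Contrast this with an elliptic-curve group of the same size, where no such shortcut is known and the best generic attacks need about $\sqrt n$ group operations.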
Let $x,y,z\in[0,+\infty)$ with $x+y+z=6$. Show that: $$xyz(x-y)(x-z)(y-z)\le 27$$ I tried AM-GM but without success. From $$xyz\le\left(\dfrac{x+y+z}{3}\right)^3=8$$ it would suffice to show $$(x-y)(x-z)(y-z)\le \dfrac{27}{8},$$ but that isn't always true. We can assume that $(x-y)(x-z)(y-z)\geq0$. Let $x+y+z=3u$, $xy+xz+yz=3v^2$, $xyz=w^3$ and $u=tw$. Hence, we need to prove that $$(x+y+z)^6\geq1728xyz(x-y)(x-z)(y-z)$$ or $$27u^6\geq64w^3(x-y)(x-z)(y-z)$$ or $$729u^{12}\geq4096w^6(x-y)^2(x-z)^2(y-z)^2$$ or $$27u^{12}\geq4096w^6(3u^2v^4-4v^6-4u^3w^3+6uv^2w^3-w^6)$$ or $f(v^2)\geq0$, where $$f(v^2)=27u^{12}-4096w^6(3u^2v^4-4v^6-4u^3w^3+6uv^2w^3-w^6).$$ But $f'(v^2)=24576w^6(2v^4-u^2v^2-uw^3)$, which says that $(v^2)_{min}=\frac{u^2+\sqrt{u^4+8uw^3}}{4}$. Thus, it's enough to prove that $$f\left(\frac{u^2+\sqrt{u^4+8uw^3}}{4}\right)\geq0$$ or $$27t^{12}\geq4096\left(3t^2\left(\tfrac{t^2+\sqrt{t^4+8t}}{4}\right)^2-4\left(\tfrac{t^2+\sqrt{t^4+8t}}{4}\right)^3-4t^3+6t\left(\tfrac{t^2+\sqrt{t^4+8t}}{4}\right)-1\right)$$ or $$(t^3+8)\left(27t^9-216t^6+1216t^3+512-512\sqrt{t^3(t^3+8)}\right)\geq0.$$ Let $t^3=a$. Hence, we need to prove that $$27a^3-216a^2+1216a+512\geq512\sqrt{a(a+8)}$$ or $$(3a-8)^2(81a^4-864a^3+7296a^2-10240a+4096)\geq0,$$ which is obvious. Done! Note that $xyz(x-y)(x-z)(y-z)$ is cyclic. WLOG, assume that $z = \min(x, y, z)$. We only need to prove the case when $(x-y)(x-z)(y-z) > 0$, in other words, the case when $x > y > z$. Let $$A = 2 + 2\cos \frac{\pi}{9}, \quad B = 2 - 2\cos \frac{4\pi}{9}, \quad C = 2 - 2\cos \frac{2\pi}{9}.$$ Clearly $A > B > C > 0$.
Using AM-GM, we have \begin{align} &xyz(x-y)(x-z)(y-z)\\ =\ & \frac{x}{A}\, \frac{y}{B}\, \frac{z}{C}\, \frac{x-y}{A-B}\, \frac{x-z}{A-C}\, \frac{y-z}{B-C} \cdot ABC(A-B)(A-C)(B-C)\\ \le\ & \frac{1}{6^6}\Big(\frac{x}{A}+ \frac{y}{B}+ \frac{z}{C}+ \frac{x-y}{A-B}+ \frac{x-z}{A-C}+ \frac{y-z}{B-C}\Big)^6 \cdot ABC(A-B)(A-C)(B-C) \\ =\ & \frac{1}{6^6}(ax + by + cz)^6 \cdot ABC(A-B)(A-C)(B-C) \end{align} where $$a = \frac{1}{A} + \frac{1}{A-B} + \frac{1}{A-C}, \quad b = \frac{1}{B} - \frac{1}{A-B} + \frac{1}{B-C}, \quad c = \frac{1}{C} - \frac{1}{A-C} - \frac{1}{B-C}.$$ We can prove that $a = b = c = 1$ and $ABC(A-B)(A-C)(B-C) = 27$, so the right-hand side equals $\frac{1}{6^6}(x+y+z)^6 \cdot 27 = 27$. We are done. Remark: The proof of $a = b = c = 1$ and $ABC(A-B)(A-C)(B-C) = 27$ may not be simple. One method is to use the discriminant and resultant, noting that $A, B, C$ are the three distinct real roots of $u^3-6u^2+9u-3 = 0$. Omitted.
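The two identities in the remark are easy to sanity-check numerically (a floating-point check, not a proof), taking $A$, $B$, $C$ from the cosine expressions above:

```python
import math

# Numeric sanity check (not a proof) of the constants in the AM-GM step.
# A > B > C > 0, and they are also the roots of u^3 - 6u^2 + 9u - 3 = 0.
A = 2 + 2 * math.cos(math.pi / 9)
B = 2 - 2 * math.cos(4 * math.pi / 9)
C = 2 - 2 * math.cos(2 * math.pi / 9)

a = 1/A + 1/(A - B) + 1/(A - C)
b = 1/B - 1/(A - B) + 1/(B - C)
c = 1/C - 1/(A - C) - 1/(B - C)
prod = A * B * C * (A - B) * (A - C) * (B - C)

assert abs(a - 1) < 1e-9 and abs(b - 1) < 1e-9 and abs(c - 1) < 1e-9
assert abs(prod - 27) < 1e-9

# each of A, B, C satisfies the cubic u^3 - 6u^2 + 9u - 3 = 0
for u in (A, B, C):
    assert abs(u**3 - 6*u**2 + 9*u - 3) < 1e-9
```

The cubic arises from the substitution $u = 2 - y$ in $y^3 - 3y + 1 = 0$, whose roots are $2\cos\frac{2\pi}{9}$, $2\cos\frac{4\pi}{9}$, and $-2\cos\frac{\pi}{9}$ by the triple-angle identity.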