How should I explain the term log or logarithm, and how its values are determined, to someone in standard VI? For example: $$\log_2 8 = 3$$ or $$\log_{2x+5}(10x^2+29x+10)=5-\log_{5x+2}(4x^2+20x+25)$$ This answer contains both a small rigorous and an elementary explanation of your query. $\mathbf{Rigorous \ Explanation:}$ The logarithmic function, denoted $\log(y)$, is the geometric area under the curve of $\frac 1t$ vs. $t$, mathematically $\log(y)=\int_1^y\frac 1t\,dt$, bounded by $t\in[1,y]$ and the line $y=0$. Furthermore, by taking the inverse of $\log(y)$, we generate an analogous (exponential) function notated as $\exp(x)$ or $e^x$, geometrically equivalent to the reflection of $\log(y)$ about the line $y=x$. In fact, in real analysis, $\log(y)$ can be defined by considering an arbitrary continuous function which satisfies $f(x+y)=f(x)f(y)$ and $f(0)=1$, for $x,y>0$, and then applying it to the definition of a derivative; this is exactly how the result $\log(y)=\int_1^y\frac 1t\,dt$ is derived. Furthermore, the generation of such a function proves useful when trying to determine an expression that is its own derivative, $e^x$, where $e$ is Euler's number. As a note, mathematicians often find it convenient to define the logarithm as the inverse of the exponential function, as stated above. $\mathbf{Elementary \ Explanation:}$ Consider the following more general exponential function: $f(x)=b^x$, where $b$ is called the "base" and $x$ the "exponent" or "index".
Furthermore, when $b=e\approx2.718$, the inverse of $y=e^x$ is $\log(y)$, or equivalently $\ln(y)$ (called the natural logarithm). As a note, if not specified, a logarithm written without a base, $\log(y)$, is assumed here to have base $e$. We can extend this notion further by considering bases other than $e$, such as binary ($2$) or $10$. Our value of $e$ proves very useful in this regard, as it bridges base conversion between exponential functions, that is: $$b^x=e^{x\log b}$$ This comes by definition, but the equivalence is seen by simply taking the logarithm of both sides. Let's consider the equality $1000=10^3$. Then the corresponding logarithmic equivalent would be: $$\log_{10}1000=3$$ The above expression denotes how many times the base $10$ must be multiplied by itself to attain a value of $1000$. As you can see, this is quite the opposite of $1000=10^3$, as it reverses the initial exponential operation. Furthermore, there are several algebraic properties of logarithms, shown below. For $x,y>0$ and any real $r$: $$\log_b(xy)=\log_b(x)+\log_b(y)$$ $$\log_b\left(\frac xy\right)=\log_b(x)-\log_b(y)$$ $$\log_b(x^r)=r\log_b(x)$$ $$\log_b(x)=\frac {\log(x)}{\log(b)}$$ The proofs of these are pretty straightforward, and can be looked up online. Please note, this is only a small introduction to logarithms/exponential functions, and a lot more can be said! The logarithm and the exponential function are inverses of each other: if $$a^{x} = y$$ then it implies that $$\log_{a} y = x,$$ where $a$ is called the base of the logarithm ... I think the property used in your problem is the change of base property of log.
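The algebraic properties listed above are easy to check numerically. A small Python sketch (the base $b=2$ and the test values are mine, chosen for illustration):

```python
import math

# Numerical check of the logarithm laws above, with base b = 2.
b, x, y, r = 2.0, 8.0, 32.0, 3.0

def log_b(t):
    # change-of-base property: log_b(t) = log(t) / log(b)
    return math.log(t) / math.log(b)

assert math.isclose(log_b(x * y), log_b(x) + log_b(y))    # product rule
assert math.isclose(log_b(x / y), log_b(x) - log_b(y))    # quotient rule
assert math.isclose(log_b(x ** r), r * log_b(x))          # power rule
assert math.isclose(b ** r, math.e ** (r * math.log(b)))  # b^x = e^{x log b}

print(log_b(8.0))  # log_2 8 = 3
```

The same `log_b` helper is exactly the change-of-base formula $\log_b(x)=\log(x)/\log(b)$ in code.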
My friends working on thermalization of black holes explained that solutions to their matrix-valued differential equations (from a numerical implementation of the Berenstein-Maldacena-Nastase matrix model) are chaotic. They are literally getting random matrices. For the eigenvalue spectrum, one would expect a semicircle distribution, but for finite $N$ they get something slightly different. The proof of the Wigner semicircle law comes from studying the GUE kernel\[ K_N(\mu, \nu)=e^{-\frac{1}{2}(\mu^2+\nu^2)} \cdot \frac{1}{\sqrt{\pi}} \sum_{j=0}^{N-1}\frac{H_j(\mu)H_j(\nu)}{2^j j!} \]The eigenvalue density comes from setting $\mu = \nu$. The Wigner semicircle identity is a Hermite polynomial identity\[ \rho(\lambda)=e^{-\lambda^2} \cdot \frac{1}{\sqrt{\pi}} \sum_{j=0}^{N-1}\frac{H_j(\lambda)^2}{2^j j!} \approx \left\{\begin{array}{cc} \frac{\sqrt{2N}}{\pi} \sqrt{1 - \lambda^2/2N} & \text{if }|\lambda|< \sqrt{2N} \\0 & \text{if }|\lambda| > \sqrt{2N} \end{array} \right. \]The asymptotics come from calculus identities like the Christoffel-Darboux formula. For finite-size matrices the eigenvalue distribution is not yet an exact semicircle. Plotting the eigenvalues of a random $4 \times 4$ matrix, the deviations from the semicircle law are noticeable with 100,000 trials and 0.05 bin size. GUE is in brown, GUE|trace=0 is in orange. Axes not scaled, sorry! 
Mathematica Code:

num[] := RandomReal[NormalDistribution[0, 1]]
herm[n_] := (h = Table[(num[] + I num[])/Sqrt[2], {i, 1, n}, {j, 1, n}];
   (h + Conjugate[Transpose[h]])/2)
n = 4; trials = 100000;
eigen = {};
Do[eigen = Join[(mat = herm[n]; mat = mat - Tr[mat] IdentityMatrix[n]/n;
     Re[Eigenvalues[mat]]), eigen], {k, 1, trials}];
Histogram[eigen, {-5, 5, 0.05}]
counts = BinCounts[eigen, {-5, 5, 0.05}];
a = ListPlot[counts, Joined -> True, PlotStyle -> Orange]
eigen = {};
Do[eigen = Join[(mat = herm[n]; Re[Eigenvalues[mat]]), eigen], {k, 1, trials}];
Histogram[eigen, {-5, 5, 0.05}]
counts = BinCounts[eigen, {-5, 5, 0.05}];
b = ListPlot[counts, Joined -> True, PlotStyle -> Brown]
Show[a, b]

My friend asks if the traceless GUE ensemble $H - \frac{1}{N} \mathrm{tr}(H)\,I$ can be analyzed. The charts suggest we should still get a semicircle in the large $N$ limit. For finite $N$, the oscillations (relative to the semicircle) are very large. Maybe it has something to do with the related harmonic oscillator eigenstates. The trace is the average eigenvalue, and the eigenvalues are being "recentered". We could imagine 4 perfectly centered fermions - they will repel each other. The joint distribution is:\[ e^{-\lambda_1^2 -\lambda_2^2 - \lambda_3^2 - \lambda_4^2} \prod_{1 \leq i<j \leq 4} |\lambda_i - \lambda_j|^2 \] On average, the fermions will sit where the humps are. Their locations should be more pronounced now that their "center of mass" is fixed.This post has been migrated from (A51.SE)
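For readers without Mathematica, the same experiment can be sketched in NumPy (a sketch of my own; the seed, trial count, and function names are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)

def gue_eigenvalues(n, trials, traceless=False):
    """Eigenvalues of `trials` random n x n GUE matrices H = (A + A^dagger)/2."""
    out = []
    for _ in range(trials):
        a = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / np.sqrt(2)
        h = (a + a.conj().T) / 2
        if traceless:
            # project out the trace: H - tr(H)/n * I, as in the question
            h = h - np.trace(h) / n * np.eye(n)
        out.append(np.linalg.eigvalsh(h))   # Hermitian, so eigenvalues are real
    return np.concatenate(out)

ev = gue_eigenvalues(4, 2000)
ev0 = gue_eigenvalues(4, 2000, traceless=True)
# In the traceless ensemble every single draw has eigenvalues summing to zero:
print(np.max(np.abs(ev0.reshape(2000, 4).sum(axis=1))))
```

Histogramming `ev` against `ev0` (e.g. with `numpy.histogram`) reproduces the orange-vs-brown comparison described above.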
Let $K$ be a field and $f,g$ be (1 variable) polynomials over $K$, and suppose that $g=p_{1}^{e_1} p_{2}^{e_2} \cdots p_{k}^{e_k}$ where each $p_i$ is irreducible over $K$ and $e_i \geq 1$. Do there exist polynomials $b$ and $a_{ij}$ with the following properties? $\deg{a_{ij}}<\deg{p_i}$ for all $i=1,\ldots,k$ $\displaystyle \frac{f}{g}=b+\sum_{i=1}^{k}\sum_{j=1}^{e_i}\frac{a_{ij}}{p_{i}^{j}}$ Moreover, are such polynomials unique? What I have tried: Since $\{p_{i}^{e_i}\}$ are pairwise relatively prime, there are polynomials $A_1,\ldots,A_k$ (Bezout) such that $$A_1 p_1^{e_1}+\cdots+ A_k p_k^{e_k}=1$$ and thus we may write $\displaystyle \frac{f}{g}=\frac{fA_1 p_1^{e_1}+\cdots+ fA_k p_k^{e_k}}{g}=\frac{fA_1}{p_2^{e_2}\cdots p_k^{e_k}}+\cdots+\frac{fA_k}{p_1^{e_1}\cdots p_{k-1}^{e_{k-1}}}$. Repeating this on every summand $k$ times, we get polynomials $B_i$ such that $\displaystyle \frac{f}{g}=\sum_{i=1}^{k}\frac{B_i}{p_{i}^{e_i}}$, and after long division (if necessary) there exist polynomials $b,\tilde{B}_i$ such that $$\frac{f}{g}=b+\sum_{i=1}^{k}\frac{\tilde{B}_i}{p_{i}^{e_i}}$$ with $\deg{\tilde{B}_i}<\deg{p_i^{e_i}}$. Where I'm stuck: I don't see how I should proceed with the summands of the form $\frac{\tilde{B}_i}{p_i^{e_i}}$. Since $\{p_i,p_i^2,\ldots,p_i^{e_i}\}$ are not relatively prime, Bezout does not work and I don't see how to choose the $a_{ij}$ from $\tilde{B}_i$ (unless $p_i^{e_i}$ is linear...). I'm also having difficulties with the uniqueness part of the assertion. Can someone give me some advice on this problem? Please enlighten me!
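As a concrete illustration of the decomposition being asked about (not part of the question itself), SymPy's `apart` computes exactly this partial fraction form over $\mathbb{Q}$; the example polynomial is my own choice:

```python
import sympy as sp

x = sp.symbols('x')
f = x**2 + 1
g = x * (x - 1)**2          # p1 = x (e1 = 1), p2 = x - 1 (e2 = 2)

# Partial fraction decomposition f/g = sum of a_ij / p_i^j (here b = 0).
decomp = sp.apart(f / g, x)
print(decomp)               # 1/x + 2/(x - 1)**2

# Sanity check: recombining the decomposition recovers f/g exactly.
assert sp.simplify(decomp - f / g) == 0
```

Note that each numerator here is a constant, i.e. of degree less than the corresponding linear $p_i$, matching the degree condition in the question.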
Is there a math equivalent of the ternary conditional operator as used in programming? a = b + (c > 0 ? 1 : 2) The above means that if $c$ is greater than $0$ then $a = b + 1$, otherwise $a = b + 2$. From physics, I'm used to seeing the Kronecker delta,$$ {\delta}_{ij} \equiv \left\{ \begin{array}{lll} 1 &\text{if} & i=j \\ 0 &\text{else} \end{array} \right. _{,} $$and I think people who work with it find the slightly generalized notation$$ {\delta}_{\left[\text{condition}\right]} \equiv \left\{ \begin{array}{lll} 1 &\text{if} & \left[\text{condition}\right] \\ 0 &\text{else} \end{array} \right. $$to be pretty natural to them. So, I tend to use $\delta_{\left[\text{condition}\right]}$ for a lot of things. It just seems so simple and well-understood. Transforms: Basic Kronecker delta: To write the basic Kronecker delta in terms of the generalized Kronecker delta, it's just$$ \delta_{ij} \Rightarrow \delta_{i=j} \,.$$It's almost the same notation, and I think most folks can figure it out pretty easily without needing it explained. Conditional operator: The "conditional operator" or "ternary operator" for the simple case of ?1:0:$$\begin{array}{ccccc}\boxed{\begin{array}{l}\texttt{if}~\left(\texttt{condition}\right) \\\{ \\~~~~\texttt{return 1;} \\\} \\\texttt{else} \\\{ \\~~~~\texttt{return 0;} \\\}\end{array}~} &\Rightarrow &\boxed{~\texttt{condition ? 1 : 0}~} &\Rightarrow &\delta_{i=j}\end{array}_{.}$$Then if you want a non-zero value for the false-case, you'd just add another Kronecker delta, $\delta_{\operatorname{NOT}\left(\left[\text{condition}\right]\right)} ,$ e.g. $\delta_{i \neq j} .$ Indicator function: @SiongThyeGoh's answer recommended using indicator function notation. 
I'd rewrite their example like$$ \begin{array}{ccccc} \underbrace{a=b+1+\mathbb{1}_{(-\infty, 0]}(c)} _{\text{their example}} & \Rightarrow & \underbrace{a=b+1+ \delta_{c \in \left(-\infty, 0\right]}} _{\text{direct translation}} & \Rightarrow & \underbrace{a=b+1+ \delta_{c \, {\small{\leq}} \, 0}} _{\text{cleaner form}} \end{array} \,. $$ Iverson bracket: Iverson bracket notation, as suggested in @FredH's answer, is apparently the same thing; according to Wikipedia, it's meant as a generalization of the Kronecker delta, except that the $\delta$ is dropped entirely, just putting the condition in square brackets. In a context in which readers expect it, Iverson bracket notation might be preferable if conditionals will be used a lot. The conditional operator, condition ? trueValue : falseValue, has 3 arguments, making it an example of a ternary operator. By contrast, most other operators in programming tend to be unary operators (which have 1 argument) or binary operators (which have 2 arguments). Since the conditional operator is fairly unusual in being a ternary operator, it has often been called "the ternary operator", leading many to believe that that's its name. However, "conditional operator" is more specific and should generally be preferred. The expression b + (c > 0 ? 1 : 2) is not a ternary operator; it is a function of two variables. There is one operation that results in $a$. You can certainly define a function $$f(b,c)=\begin {cases} b+1&c \gt 0\\b+2 & c \le 0 \end {cases}$$ You can also define functions with any number of inputs you want, so you can define $f(a,b,c)=a(b+c^2)$, for example. This is a ternary function. In Concrete Mathematics by Graham, Knuth and Patashnik, the authors use the "Iverson bracket" notation: square brackets around a statement represent $1$ if the statement is true and $0$ otherwise. 
Using this notation, you could write$$a = b + 2 - [c \gt 0].$$ Using the indicator function notation:$$a=b+1+\mathbb{1}_{(-\infty, 0]}(c)$$ In math, equations are written in piecewise form by having a curly brace enclose multiple lines, each one with a condition except the last, which has "otherwise". There are a few custom operators that also occasionally make an appearance, e.g. the Heaviside function mentioned by Alex, the Dirac delta function, and the cyclical operator $\delta_{ijk}$ - all of which can be used to emulate conditional behaviour. I think mathematicians should not be afraid to use the Iverson bracket, including when teaching, as this is a generally very useful notation whose only slightly unusual feature is to introduce a logical expression in the middle of an algebraic one (but one already regularly finds conditions inside set-theoretic expressions, so it really is not a big deal). It may avoid a lot of clutter, notably many instances of clumsy expressions by cases with a big unmatched brace (which is usually only usable as the right hand side of a definition). Since brackets do have many other uses in mathematics, I personally prefer a typographically distinct representation of Iverson brackets, rendering your example as $$\def\[#1]{[\![{#1}]\!]} a= b + \[c>0]1 + \[c\not>0]2. $$ This works best in additive context (though one can use Iverson brackets in the exponent for optional multiplicative factors). It is not really ideal for general two-way branches, as the condition must be repeated twice, one of them in negated form, but it happens that most of the time one needs $0$ as the value for one branch anyway. As a more concise two-way branch, I can recall that Algol68 introduced the notation $b+(c>0\mid 1\mid 2)$ for the right-hand side of your equation; though this is a programming language and not mathematics, it was designed by mathematicians. 
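The Iverson-bracket rewrite $a = b + 2 - [c > 0]$ maps directly onto code, since a boolean is already a $0/1$ value; a minimal sketch (function names are mine):

```python
def iverson(condition):
    """Iverson bracket [P]: 1 if P holds, 0 otherwise."""
    return 1 if condition else 0

def a_of(b, c):
    # a = b + 2 - [c > 0], the Iverson form of  b + (c > 0 ? 1 : 2)
    return b + 2 - iverson(c > 0)

assert a_of(10, 5) == 11   # c > 0   ->  a = b + 1
assert a_of(10, -5) == 12  # c <= 0  ->  a = b + 2
```

In Python specifically, `int(c > 0)` already computes the bracket, so `iverson` is only there to make the correspondence explicit.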
They also had notation for multi-way branching: thus the solution to the recursion $a_{n+2}=a_{n+1}-a_n$ with initial values $a_0=0$, $a_1=1$ can be written $$ a_n=(n\bmod 6+1\mid 0,1,1,0,-1,-1) $$ (where the "${}+1$" is needed because in 1968 they still counted starting from $1$, which is a mistake), which is reasonably concise and readable, compared to other ways to express this result. Also consider, for month $m$ in year $y$, the number of days $$ ( m \mid 31,(y\bmod 4=0\land y\bmod 100\neq0\lor y\bmod400=0\mid 29\mid 28) ,31,30,31,30,31,31,30,31,30,31). $$ The accepted answer from Nat, suggesting the Kronecker delta, is correct. However, it is also important to note that one of the highly upvoted answers here, which claims the C ternary operator x?y:z is not ternary, is incorrect. Mathematically, the expression x?y:z can be expressed as a function of three variables: $$f(x,y,z)=\begin {cases} y&x\neq 0\\ z & x=0 \end {cases}$$ Note that in programming an expression such as $a<b$ could be used for $x$. If the expression is true, then $x=1$, otherwise $x=0$. About nomenclature: computer programmers have used the phrase "ternary operator" to mean exactly this since at least the 1970s. Of course, among mathematicians, it would simply be a function of three variables. One should realize that operators are just a fancy way of using functions. So a ternary operator is a function of 3 variables that is notated in a different way. Is that useful? The answer is: mostly not. Also realize that any mathematician is allowed to introduce any notation he feels is illustrative. Let's review why we use binary operators at all, as in a+b*c. Because parameters and results are of the same type, it makes sense to leave out parentheses and introduce complicated priority rules. Imagine that a, b, c are numbers and we have a normal + and a peculiar * that results in dragons. Now the expression doesn't make sense (assuming a high-priority *), because there is no way to add numbers and dragons. Thus most ternary operators result in a mess. 
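The two Algol68 multi-way branches shown earlier in the thread (the period-6 recursion and the days-in-month table) translate to plain table lookups; a sketch (0-based indexing, so no "${}+1$" is needed):

```python
def days_in_month(m, y):
    """(m | 31, feb, 31, 30, ...) as a table lookup, with the leap-year branch."""
    leap = (y % 4 == 0 and y % 100 != 0) or y % 400 == 0
    return [31, 29 if leap else 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31][m - 1]

def a(n):
    """Solution of a_{n+2} = a_{n+1} - a_n, a_0 = 0, a_1 = 1: (n mod 6 | 0,1,1,0,-1,-1)."""
    return [0, 1, 1, 0, -1, -1][n % 6]

assert days_in_month(2, 2000) == 29   # 2000 is a leap year (divisible by 400)
assert days_in_month(2, 1900) == 28   # 1900 is not (divisible by 100, not 400)
assert [a(n) for n in range(8)] == [0, 1, 1, 0, -1, -1, 0, 1]
```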
With a proper notation, there are examples of ternary operations. For example, there is a special notation for "the sum for $i$ from $a$ to $b$ of an expression". This takes two boundaries (numbers) and a function of a number that results in another number. (Mathematician: read "element of an additive group" for number.) The notation for integration is similarly ternary. So, in short, ternary operators exist, and you can define your own. They are in general accompanied by a special notation, or they are not helpful. Now back to the special case you mention. Because truth values are implied in math, an expression like "if a then b else c" makes sense if a represents a truth value like $(7<12)$. The above expression is understood in every mathematical context. However, in a context where truth values are not considered a set, (if .. then .. else ..) would not be considered an operator/function, but a textual explanation. A generally accepted notation could be useful in math, but I'm not aware that there is one. That is probably because, as in the above, informal notations are readily understood. Fundamentally, the non-answer is the answer. Whatever notation you think you might want to use to express "$a=b+1$ if $c>0$ and $a=b+2$ otherwise" or$$a=\begin{cases}b+1 &\text{if }c>0\\b+2 &\text{otherwise}\end{cases}$$ is much harder to read than either of those two things. There are many good answers that give notation for "if this condition holds, then 1, else 0." This corresponds to an even simpler expression in C; (x>1) is equivalent to (x>1 ? 1 : 0). It's worth noting that the ternary operator is more general than that. If the arguments are elements of a ring, you could express c ? a : b as (using Iverson-bracket notation) $(a-b) \cdot [c] + b$, but not otherwise. (And compilers frequently use this trick, in a Boolean ring, to compile conditionals without needing to execute a branch instruction.) 
In a C program, evaluating the expressions $a$ or $b$ might have side-effects, such as deleting a file or printing a message to the screen. In a mathematical function, this isn't something you would worry about, and a programming language where this is impossible is called (purely) functional. Ross Millikan gave the most standard notation, a cases block. The closest equivalent in mathematical computer science is the if-then-else function of lambda calculus. The following solution is not defined for $c = 0$; however, it uses only very basic operations, which might be useful as you probably look for an expression to implement in a program: $$a = b + 1\lambda + 2(1-\lambda)$$ where $$\lambda = \frac{ 1 + \frac{|c|}{c} }{2}$$ You need to make the problem discrete and make a choice from two values. So, given some value $c \in \mathbb{R}$, we need to calculate some value $\lambda \in \{0,1\}$ depending on whether $c<0$ or $c>0$. Knowing that $$\frac{|c|}{c} \in \{1,-1\}$$ we can calculate $\lambda$ as follows: $$\lambda = \frac{ 1 + \frac{|c|}{c} }{2}$$ Now that our $\lambda \in \{0,1\}$, we can make the "choice" between the two constants $d$ and $e$ as follows: $$d\lambda + e(1-\lambda)$$ which equals $d$ for $\lambda = 1$, and $e$ for $\lambda = 0$. The best idea is probably to split the world into different cases above the context in which your expression lives, so you consider the cases where $c$ is positive and the cases where $c$ is nonpositive separately. Or impose conditions on $c$ like $|c| = 1$. Another alternative is to create some new ad hoc indicator-like functions whose algebraic properties are as strong as possible. I'm partial to signum because it's odd and multiplicative. b + (c > 0 ? 1 : 2) can be written nicely using two new functions: $S$ for signum and $Z$ for the zero indicator function: $$ S(x) \stackrel{\small{\text{def}}}{=} \begin{cases} \;\;1 & x > 0 \\ \;\;0 & x = 0 \\ -1 & x < 0 \end{cases}$$ And $Z$ is $1$ when its argument is $0$ and $0$ otherwise. 
So we can write the expression as $$ b + \frac{3}{2} - \frac{1}{2} S(c) + \frac{1}{2} Z(c) $$ or $$ b + 2 - \frac{1}{2} S(c) - \frac{1}{2} S^2(c) $$ with $S^2(c)$ denoting $S(c^2)$ or $S(c)^2$, since they're equivalent. (Check: for $c>0$ both give $b+1$, and for $c\le 0$ both give $b+2$, matching the original expression.) This expression isn't that great in this case, but $S$ and $Z$ have some nice properties: $$ S(a)S(b) = S(ab) $$ $$ S(a)Z(a) = 0 $$ $$ Z(a) = 1 - S(a)^2 $$ $$ S(S(a)) = S(a) $$ $$ S(a)^n = S(a^n) \;\;\;\;\forall n \in \mathbb{N} $$ $$ S(a)^m = S(a)^m \cdot S(a)^{2n} \;\;\;\;\forall m, n \in \mathbb{N} $$ The concepts of lambda calculus and combinatory logic are worth mentioning here. We define lambda calculus by two constructions: - you can make a function using abstraction, e.g. to define the function $f(x)=x+1$ (assuming that numbers and addition are defined), we write $\lambda x.x+1$. - you can use a function using application, e.g. given a function $M$, we can evaluate it at $x$ by $M\ x$. Surprisingly, these two constructions are enough to express all of mathematics, without anything else assumed! For natural numbers, there are the Church numerals. To define boolean values, we can write $$\mathbf{True} \equiv \lambda x. \lambda y. x$$ and $$\mathbf{False} \equiv \lambda x. \lambda y. y.$$ From this we can define all of Boolean algebra. Specifically, for the ternary operator $b?x:y$, we can construct the expression $b\ x\ y.$
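The Church-boolean construction above can be demonstrated directly in Python, since lambdas are first-class (a sketch; note that unlike the C operator, both branches are evaluated eagerly here):

```python
# Church booleans: True = λx.λy.x, False = λx.λy.y
TRUE = lambda x: lambda y: x
FALSE = lambda x: lambda y: y

def ternary(b, x, y):
    """The ternary b ? x : y as plain application: b x y."""
    return b(x)(y)

assert ternary(TRUE, 1, 2) == 1
assert ternary(FALSE, 1, 2) == 2

# Boolean algebra falls out too, e.g. NOT = λb.λx.λy. b y x:
NOT = lambda b: lambda x: lambda y: b(y)(x)
assert ternary(NOT(TRUE), 1, 2) == 2
```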
Resistance $R$, measured in Ohms ($\Omega$), is expressed as the ratio of the driving force - the voltage $V$, measured in Volts (V) - to the current $I$, measured in Amps (A): $R=\frac{V}{I}$. The resistance $R$ of an object is itself proportional to the length $l$ of the object and inversely proportional to the cross-sectional area $A$. We write $R=\rho \frac{l}{A}$, where $\rho$ is a constant called the resistivity of the material, measured in Ohm metres and specific to each material. Good insulators - bad conductors - have high resistivity, and bad insulators - good conductors - have low resistivity. Semiconductors are intermediate between these two. Increasing the temperature tends to increase the resistivity of good conductors and decrease the resistivity of insulators and semiconductors.
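The relation $R=\rho\,l/A$ is easy to put into code; a small sketch (the numbers are illustrative - copper's resistivity of roughly $1.7\times10^{-8}\,\Omega\mathrm{m}$ is a standard textbook figure):

```python
def resistance(rho, length, area):
    """R = rho * l / A  (rho in ohm-metres, l in metres, A in square metres)."""
    return rho * length / area

# 10 m of copper wire with a 1 mm^2 cross-section:
r = resistance(1.7e-8, 10.0, 1e-6)
print(r)  # 0.17 ohms

# Doubling the length doubles R; doubling the area halves it:
assert abs(resistance(1.7e-8, 20.0, 1e-6) - 2 * r) < 1e-12
assert abs(resistance(1.7e-8, 10.0, 2e-6) - r / 2) < 1e-12
```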
On page 48 of Calculus on Manifolds, Spivak defines (Riemann) integration over rectangles $[a_{1},b_{1}]\times\cdots\times[a_{n},b_{n}]\subset\mathbb{R}^{n}$. Then on page 55 he extends this integral to bounded subsets $C\subset\mathbb{R}^{n}$ via characteristic functions. This integral is the usual one. Then, on page 63 he defines (smooth) partitions of unity and uses them later, on page 65, to define an extended integral over open sets $A\subset\mathbb{R}^{n}$. The usual and extended integrals are not always the same. However, Theorem 3-12 (3) gives us a precise relation between the extended integral and the usual one. Now, mostly via problems, Spivak makes the reader verify all the familiar properties of the usual integral (linearity, comparison, monotonicity, etc.). However, there is no mention in either the theory or the exercises of whether these properties hold for the extended integral. Moreover, when doing the problems I found myself making use of them, so it is natural to ask if the extended integral also satisfies these properties. That is: Let $A$ be an open subset of $\mathbb{R}^{n}$ and $f,g:A\rightarrow\mathbb{R}$ be continuous functions. Linearity: If $f,g$ are integrable over $A$, so is $af+bg$, and $\int_{A}af+bg=a\int_{A}f+b\int_{A}g$. Comparison: If $f,g$ are integrable over $A$ and $f(x)\leq g(x)$, then $\int_{A}f\leq\int_{A}g$. In particular $\left|\int_{A}f\right|\leq\int_{A}|f|$. Monotonicity: If $B \subset A$ is open and $f$ is non-negative on $A$ and integrable over $A$, then it is integrable over $B$ and $\int_{B}f\leq\int_{A}f$. Additivity: If $A$ and $B$ are open and $f$ is continuous on $A\cup B$ and integrable over $A$ and $B$, then it is integrable over the union and the intersection, and $\int_{A\cup B}f=\int_{A}f+\int_{B}f-\int_{A\cap B}f$. Let $A$ be open and of measure $0$. If $f$ is integrable over $A$ then $\int_{A}f=0$. If $f$ and $g$ agree except on a set of measure $0$ then $\int_{A}f=\int_{A}g$. 
I have been verifying these properties and seem to have a proof for each. But I would appreciate it if someone with more experience could corroborate that these properties do indeed hold for extended integrals.
The idea is to use a "covariant" formalism in classical mechanics. I do not know how to accommodate your equations in particular, but a good notion of covariant derivative that physically encompasses the "inertial forces" (as in GR) is at our disposal also in classical physics. Consider the classical spacetime $M$. There is a privileged class of reference frames called inertial such that changing (Cartesian) coordinates among them gives$$t=t'+c\:, \quad x^{i} = R^i\:_j x^{'j} + v^it' + b^i\:, \quad i,j= 1,2,3 \tag{1}$$for $R\in O(3)$ and $c, v^i, b^i\in \mathbb R$. Instead, general transformations of coordinates (always Cartesian) at rest in generic reference frames are all of the form (in view of the fact that lengths are absolute at every given time)$$t=t'+c\:, \quad x^{i} = R^i\:_j(t') x^{'j} + b^i(t')\:, \quad i,j= 1,2,3 \tag{2}$$for arbitrary (smooth) functions $\mathbb R \ni t \mapsto R(t) \in O(3)$ and $\mathbb R \ni t \mapsto b^i(t) \in \mathbb R$. In inertial reference frames inertial motions are described by geodesics $$t= v^0s+ c^0\:, \quad x^i(s) = sv^i + c^i\:,\quad v^0 \neq 0\:.$$Indeed, these are solutions $\gamma : \mathbb R \to M$ of the geodesic equation $$\frac{d^2x^a}{ds^2} + \Gamma^{a}_{bc}\frac{dx^b}{ds}\frac{dx^c}{ds}=0\:, \quad a,b,c =0,1,2,3$$where $$\Gamma^{a}_{bc}=0\:,\tag{3}$$with the requirement that $v^0\neq 0$, i.e. $\langle dT, \dot{\gamma}\rangle \neq 0 $, where $T: M \to \mathbb R$ is the absolute classical time, defined up to an additive constant and always coinciding with the coordinates $t$ and $t'$ (up to an additive constant). All this means that in the spacetime of classical physics, as soon as we switch on dynamics according to the Newtonian picture, a non-metric, symmetric affine connection arises determining inertial motions. This connection is trivial just in the Cartesian coordinates at rest with every inertial reference frame. Indeed, it is easy to see that the condition (3) is invariant under transformations of coordinates of the form (1). 
Instead, passing to a non-inertial reference frame equipped with (Cartesian) coordinates $x^{'a}$ where $x^{'0}=t'$ by means of (2), we have$$\Gamma^{'a}_{bc} \neq 0$$and, referring to (2) where the coordinates $t,x^i$ are supposed at rest with an inertial reference frame, since$$\Gamma^{'a}_{bc} =\frac{\partial x^{'a}}{\partial x^d} \frac{\partial^2 x^{d}}{\partial x^{'b}\partial x^{'c}} + 0\:, $$we have$$\Gamma^{'0}_{bc} =0\:, \quad \Gamma^{'i}_{0j}= \Gamma^{'i}_{j0} = (R^{-1})^{i}_{m}\dot{R}^m_j(t)\:,\quad \Gamma^{'i}_{00}= (R^{-1})^{i}_{m}\ddot{R}(t)^{m}_jx^{'j} + (R^{-1})^{i}_{m}\ddot{b}^m(t)\:, \quad \Gamma^{'i}_{jk}=0\:.$$In the non-inertial reference frame the geodesic equation (inertial motion) is the complete geodesic equation, where I have introduced the inertial mass $m$ of the point of matter considered,$$m\frac{d^2x^{'a}}{ds^2}= -m\Gamma^{'a}_{bc}\frac{dx^{'b}}{ds}\frac{dx^{'c}}{ds}\quad a,b,c =0,1,2,3$$and the terms $-\left(\frac{dx^{'0}}{ds} \right)^{-2} m\Gamma^{'i}_{bc}\frac{dx^{'b}}{ds}\frac{dx^{'c}}{ds}$ are nothing but the various inertial forces written in a more abstract formalism. Equipped with this affine connection (and taking advantage of the absolute spatial metric $\delta_{ij}$), every differential equation can be written in every coordinate system using the standard procedure, replacing every derivative $\partial_a$ with the corresponding covariant derivative $\nabla_a$. Inertial forces are automatically embodied in the formalism. The technically difficult (but obviously not impossible) point is re-writing the coefficients $\Gamma$ in terms of $\vec{\omega}$, its derivative, and the acceleration of the origin of the non-inertial reference frame with respect to the inertial one. 
By comparing with the classical formula of inertial forces, if $\vec{\omega} = \omega^i {\bf e}'_i$ is the angular velocity of the non-inertial reference frame with respect to the inertial one, $\vec{x'} = x^{'i}{\bf e}'_i$ is the position vector in the non-inertial reference frame and $\vec{a}_{O'} = a^{'i}{\bf e}'_i$ is the acceleration of the origin of axes of the non-inertial reference frame with respect to the inertial one, and everything is represented in the axes ${\bf e}'_1, {\bf e}'_2, {\bf e}'_3$ of the non-inertial frame, it should be (using the Euclidean metric $\delta_{ij}$ to raise and lower indices)$$\Gamma^{'i}_{0j} = \Gamma^{'i}_{j0} = \omega^k \epsilon^i\:_{jk}\quad \mbox{Coriolis' force}$$and$$\Gamma^{'i}_{00} = a_{O'}^i + \epsilon^i\:_{jk}\dot{\omega}^j x'^{k} + \omega^i x'^j\omega_j - x'^i \omega^j \omega_j \quad \mbox{drag force}\:.$$I am not completely sure about this result, as this is the first time I have tried to derive it.
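A quick numerical sanity check of my own (not part of the answer above) that $R^{-1}\dot R$ is antisymmetric for a rotation $R(t)$, which is why the coefficients $\Gamma^{'i}_{0j}$ can be repackaged into a single angular velocity $\vec\omega$ via $\epsilon^i{}_{jk}$; the rotation axis and angular speed are arbitrary choices:

```python
import numpy as np

def Rz(t, w=0.7):
    """Rotation about the z axis with constant angular velocity w."""
    c, s = np.cos(w * t), np.sin(w * t)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

t, h = 1.3, 1e-6
Rdot = (Rz(t + h) - Rz(t - h)) / (2 * h)   # central finite difference for dR/dt
omega_mat = Rz(t).T @ Rdot                  # R^{-1} = R^T for R in O(3)

# R^{-1} dR/dt is antisymmetric, so it has only 3 independent entries ...
assert np.allclose(omega_mat, -omega_mat.T, atol=1e-6)
# ... and for a rotation about z, the (2,1) entry is the angular speed w:
assert abs(omega_mat[1, 0] - 0.7) < 1e-5
```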
I am currently trying to perform MCMC sampling using a (stochastic) model, for which I cannot derive a likelihood function, but which allows me to draw samples $y_\theta \sim p_{y|\theta}$, where $p_{y|\theta}$ is the distribution defined by the model with parameters $\theta$. I already looked at several papers for likelihood-free parameter estimation, e.g. this one, but they usually use some kind of summary statistic, for which the likelihood is estimated (e.g. instead of estimating $L(\theta)=p_{y|\theta}(y_0|\theta)$ they may estimate $L_\mu(\theta)=p_{\mu|\theta}(\mu_0|\theta)$ where $\mu_0$ is the mean of the observed data). This supplementary likelihood can then be used for sampling. For my current analysis, however, I cannot define any usable summary statistics. The reason is that in addition to parameters my model also has some inputs, and I have very few samples for each distinct input. Instead, I hope to use the estimated likelihood directly. So my approach would be the following:
1. For each update in the MCMC, draw a large number of samples $\hat{Y}_\theta=\{\hat{y}_{1,\theta},\ldots,\hat{y}_{N,\theta}\}$ from $p_{y|\theta}$.
2. Estimate $\hat{p}_{y|\theta} \approx p_{y|\theta}$ using a kernel density estimate based on $\hat{Y}_\theta$.
3. Use this estimate to estimate the likelihood of the observed data as $\hat{L}(\theta) = \hat{p}_{y|\theta}(y_0|\theta)$.
4. Accept or reject the update based on the estimated likelihood using the normal rules for MCMC (e.g. Metropolis).
However, I am unsure what kind of bias I would introduce using this estimation. Looking at equation (35) and the section following it in the paper linked above, I would assume that this becomes equivalent to approximate Bayesian computation without using summary statistics, given the choice of kernels mentioned in the text. Or is there some other bias I am introducing by not resorting to summary statistics? 
In general, I would assume that the data itself can serve as its own summary statistic, but some kind of validation would be good.
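For concreteness, here is a toy sketch of the four-step scheme described above (entirely my own illustration: the simulator is a stand-in drawing $y\sim N(\theta,1)$, the KDE bandwidth, step size, and seed are arbitrary, and the estimated log-likelihood for the current state is frozen between steps, pseudo-marginal style):

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate(theta, n):
    """Stand-in for the intractable stochastic model: draws y ~ N(theta, 1)."""
    return theta + rng.standard_normal(n)

def kde_loglik(y0, samples, bw=0.3):
    """Gaussian kernel density estimate of log p(y0 | theta) from model draws."""
    z = (y0 - samples) / bw
    return np.log(np.mean(np.exp(-0.5 * z ** 2)) / (bw * np.sqrt(2 * np.pi)))

y0 = 1.5                                            # the "observed" datum
theta = 0.0
ll = kde_loglik(y0, simulate(theta, 2000))          # step 1-3 for the start state
chain = []
for _ in range(3000):                               # step 4: Metropolis
    prop = theta + 0.5 * rng.standard_normal()
    ll_prop = kde_loglik(y0, simulate(prop, 2000))  # fresh draws for each proposal
    if np.log(rng.random()) < ll_prop - ll:         # flat prior cancels in the ratio
        theta, ll = prop, ll_prop
    chain.append(theta)

print(np.mean(chain[1000:]))  # posterior mean, should land near y0
```

Note that the KDE bandwidth acts like the ABC tolerance: the chain effectively targets the posterior under a kernel-smoothed likelihood, which is one source of the bias asked about.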
I was looking through an old schematic and found two symbols that I didn't recognize: Is this a PNP transistor? Looking up the model number doesn't give much information. Is this some kind of variable resistor or variable inductor? Is this a PNP transistor? Building up on the comments of Harry Svensson and jonk, this is a mesa PNP transistor. The mesa technique, in the early days of the transistor, was developed to improve the (then poor) HF response of the devices by removing those parts of the base region which, because of their geometric structure, do not improve the \$\beta\$ current gain and needlessly raise the stored base charge \$Q_{bb}\$ and the base-collector capacitance \$C_{bc}\$, increasing the switching time and lowering the cut-off frequency of the device, resulting in its general slowing down. The technique consists of etching away the semiconductor around the emitter and base contacts: this creates a sort of plateau with respect to the collector region on the wafer around these contacts, and the Spanish word for such a plateau is "mesa". Is this some kind of variable resistor or variable inductor? This is precisely an analog delay line: it is a network which, within a given frequency range and with reasonable waveform distortion, produces at its output(s) a delayed version of its input signal, i.e.$$v_o(t)=v_i(t-t_D)$$where \$t_D\$ is the characteristic delay of the line. The model shown seems to be a multiple-tap delay line, i.e. 
a delay line offering \$N\$ outputs, each delayed with respect to the input by an increasing delay time, i.e.$$\begin{split}v_{o1}(t)&=v_i(t-t_{D1})\\v_{o2}(t)&=v_i(t-(t_{D1}+t_{D2}))\\v_{o3}(t)&=v_i(t-(t_{D1}+t_{D2}+t_{D3}))\\\vdots\quad & \qquad\qquad\qquad\vdots\\v_{oN}(t)&=v_i\left(t-\sum_{i=1}^Nt_{Di}\right)\end{split}$$In the case under examination, \$DE1\$ seems to be a 4-tap delay line where each tap adds a \$50\mathrm{ns}\$ delay with respect to the preceding one.
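As an illustration of the cumulative-delay behaviour described above, here is a small sketch in Python; the sampling rate, test signal, and helper name are made up for the example:

```python
import numpy as np

def tapped_delay_line(v_in, tap_delays, fs):
    """Return the outputs of an ideal N-tap delay line.

    v_in       : sampled input signal v_i(t)
    tap_delays : per-tap incremental delays t_Di in seconds
    fs         : sampling rate in Hz
    """
    outputs = []
    total = 0.0
    for t_d in tap_delays:
        total += t_d                      # cumulative delay up to this tap
        shift = int(round(total * fs))    # delay in whole samples
        # v_o(t) = v_i(t - sum of delays); zero-pad before the signal starts
        outputs.append(np.concatenate([np.zeros(shift), v_in[:len(v_in) - shift]]))
    return outputs

# A 4-tap line where each tap adds 50 ns, sampled at 1 GHz (1 sample = 1 ns)
fs = 1e9
v = np.zeros(400)
v[0] = 1.0                                # unit impulse at t = 0
taps = tapped_delay_line(v, [50e-9] * 4, fs)
print([int(np.argmax(o)) for o in taps])  # impulse appears 50, 100, 150, 200 samples in
```

Each output is simply the input shifted by the running sum of the tap delays, which is all an ideal multiple-tap delay line does.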
I'm having difficulty proving Leibniz' formula for weak derivatives as presented in Evans' PDE. I was hoping that someone could maybe shed some light on it for me? To be honest, I'm not entirely comfortable with the weak derivative concept; I'm actually an economist by trade hoping to improve my maths. My question is very closely related to the following one posted on this website: Evans's proof of the Leibniz's formula for the weak derivatives in Sobolev spaces; however my understanding isn't as great as the original poster's. I hope this exercise will help others struggling with weak derivatives. Disclaimer. I have copied the latex code from @Cookie's question as it's exactly the same as Evans' theorem in the book, in a bid to save time. Since I'm relatively new to this site I don't know if this is acceptable or not. If not, please tell me and I'll rewrite the code. The theorem and proof are as follows: $W^{k,p}(U)$ denotes a Sobolev space that consists of all locally summable functions $u : U \rightarrow \mathbb{R}$ such that for each multiindex $\alpha$ with $|\alpha| \le k$, $D^\alpha u$ exists in the weak sense and belongs to $L^p(U)$. $C_c^\infty(U)$ denotes the space of infinitely differentiable functions $\phi : U \rightarrow \mathbb{R}$ with compact support in $U$, and the function $\phi$ is called a test function. Theorem 1 (Properties of weak derivatives). Assume $u,v \in W^{k,p}(U), |\alpha| \le k$. Then $\quad$ (iv) If $\zeta \in C_c^\infty(U)$, then $\zeta u \in W^{k,p}(U)$ and$$D^\alpha (\zeta u)=\sum_{\beta \le \alpha} {\alpha \choose \beta} D^\beta \zeta D^{\alpha - \beta} u \qquad \textit{(Leibniz' formula)} \tag{7}$$ $\quad$ where ${\alpha \choose \beta} = \frac{\alpha!}{\beta!(\alpha-\beta)!}$. Again, I didn't list properties $\text{(i)-(iii)}$ because I understood them already. Proof (of property $\text{(iv)}$). We prove $\text{(7)}$ by induction on $|\alpha|$. Suppose first $|\alpha|=1$. Choose any $\phi \in C_c^\infty (U)$.
Then \begin{align} \int_U \zeta u D^\alpha \phi \, dx &= \int_U u D^\alpha (\zeta \phi) - u(D^\alpha \zeta) \phi \, dx \\ &= - \int_U (\zeta D^\alpha u + u D^\alpha \zeta) \phi \, dx \end{align} Thus $D^\alpha (\zeta u)=\zeta D^\alpha u + u D^\alpha \zeta$, as required. $\quad$ Next assume $l < k$ and formula $\text{(7)}$ is valid for all $|\alpha| \le l$ and all functions $\zeta$. Choose a multiindex $\alpha$ with $|\alpha| = l+1$. Then $\alpha = \beta + \gamma$ for some $|\beta|=l, |\gamma| = 1$. Then for $\phi$ as above, \begin{align} \int_U \zeta u D^\alpha \phi \, dx &= \int_U \zeta u D^\beta (D^\gamma \phi) \, dx \\ &= (-1)^{|\beta|} \int_U \sum_{\sigma \le \beta} {\beta \choose \sigma} D^\sigma \zeta D^{\beta - \sigma} u D^\gamma \phi \, dx \tag{A} \\ &= (-1)^{|\beta|+|\gamma|} \int_U \sum_{\sigma \le \beta} {\beta \choose \sigma} D^\gamma(D^\sigma \zeta D^{\beta - \sigma} u ) \phi \, dx \tag{B} \\ &= (-1)^{|\alpha|} \int_U \sum_{\sigma\le \beta} {\beta \choose \sigma} [D^\rho \zeta D^{\alpha - \rho} u + D^\sigma \zeta D^{\alpha - \sigma} u] \phi \, dx \tag{C} \\ &= (-1)^{|\alpha|} \int_U \left[\sum_{\sigma \le \alpha} {\alpha \choose \sigma} D^\sigma \zeta D^{\alpha - \sigma} u \right] \phi \, dx \tag{D} \end{align} (here, as in Evans, $\rho := \sigma + \gamma$). Just to make sure I am understanding this correctly, the first part of the proof follows because \begin{align} \int_U u D^\alpha (\zeta \phi) - u(D^\alpha \zeta) \phi \, dx &=\int_U u \zeta D^\alpha \phi\, dx + \int_U u \phi D^\alpha \zeta\,dx - \int_U u(D^\alpha \zeta) \phi \, dx \\ &=\int_U \zeta u D^\alpha \phi \, dx \end{align} and, since $\zeta \phi \in C^\infty_c(U)$, we have \begin{align*} \int_U \zeta u D^\alpha \phi \, dx=\int_U u D^\alpha(\zeta \phi)\, dx - \int_U u(D^\alpha \zeta)\phi \,dx&=-\int_U D^\alpha u (\zeta \phi)\,dx-\int_U u(D^\alpha \zeta)\phi \,dx \\ &= -\int_U(\zeta D^\alpha u + u D^\alpha \zeta)\phi\,dx \end{align*} I also understand how to get (A).
My main problem is that I don't see how to get from (A) to (B), (B) to (C) and (C) to (D). Many thanks.
I am working on a model of a permanent magnet synchronous machine. Right now I am stuck with calculating the eddy current losses caused by the harmonics of the stator magnetic field in the electrical steel of the rotor. Or, to put it differently: how do I calculate the eddy current losses in electrical steel at high frequencies and low flux density? I would like to use something very simple like $$P_{ec} = \sum\limits_\nu \sigma_{ec} (f(\nu-1))^2 B^2_\nu m_\nu$$ where $P_{ec}$ are the eddy current losses in watts. $\sigma_{ec}$ are the specific eddy current losses for the material, but I don't know if they are any good for frequencies above 2000 Hz, and I would like to calculate the losses up to 100 kHz if that is even possible. $f$ is the fundamental frequency and $\nu$ is the ordinal of the harmonics. $B_\nu$ is the respective flux density and $m_\nu$ is the mass. Now there are two big questions. I know the amplitude of the flux density in the air gap (above the surface), but how do I calculate the flux density in the lamination (electrical steel: M330-35A)? Apparently I have to consider the skin depth, but I have no values for the permeability at such high frequencies and comparably low flux densities. And also, how do I calculate the mass? $$m_\nu = A \delta \rho$$ ($A$ - surface of the rotor, $\delta$ - skin depth, $\rho$ - density of the electrical steel) If I take the same flux density as in the air gap and calculate the masses as above, I obtain losses that are so low that I am pretty sure they can't be right. Does anyone have an idea how to solve this problem, either by adjusting the described approach or with another one? I don't need a 100% accurate result; if I am 50% off that is still ok. Any references to text books or papers are also very much appreciated.
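For what it's worth, the loss sum above can be sketched in code. Every material number below is an illustrative placeholder (not M330-35A data), the permeability at high frequency is exactly the unknown the question points out, and the skin-depth-limited mass is just the $m_\nu = A\delta\rho$ guess from the question:

```python
import numpy as np

# Sketch of P_ec = sum_nu sigma_ec * (f*(nu-1))^2 * B_nu^2 * m_nu, with the active
# mass limited to one skin depth. All material values are assumed placeholders.
mu0 = 4e-7 * np.pi
mu_r = 1000.0          # assumed relative permeability (unknown at high f, low B)
sigma_cond = 2e6       # assumed conductivity of the steel, S/m
sigma_ec = 1.0         # assumed specific eddy-current loss coefficient
rho = 7650.0           # density of electrical steel, kg/m^3
area = 0.05            # rotor surface, m^2 (made up)
f = 50.0               # fundamental frequency, Hz

def skin_depth(f_nu):
    # Classical skin depth delta = sqrt(2 / (omega * mu * sigma))
    return np.sqrt(2.0 / (2.0 * np.pi * f_nu * mu0 * mu_r * sigma_cond))

def p_ec(harmonics):
    """harmonics: list of (nu, B_nu) pairs for the stator field waves."""
    total = 0.0
    for nu, B in harmonics:
        f_nu = f * (nu - 1)          # slip frequency seen by the rotor
        if f_nu <= 0:
            continue                  # the fundamental rotates with the rotor
        m = area * skin_depth(f_nu) * rho   # mass within one skin depth
        total += sigma_ec * f_nu**2 * B**2 * m
    return total

print(p_ec([(1, 0.9), (5, 0.05), (7, 0.03), (11, 0.01)]))
```

The structure makes the question's problem visible: the result is dominated by whatever is assumed for $\mu_r$ (through $\delta$) and for $\sigma_{ec}$ at high frequency, which is exactly the missing data.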
Any family of languages that is a trio is closed under interleaving with a regular set. This includes, of course, the interleaving of 2 regular sets, since regular sets form a trio. Proving the result (and more) only with closure properties. Note: I created the definitions below for the purpose of this question. I do not know whether there are established definitions for this, which might exist under another name. The purpose of this approach is to avoid any complex construction of automata. But we need at least one specific operation to account for dealing with several strings at the same time. And, as an unexpected benefit, the end result is much more general (this is actually more to be expected from proofs based on closure properties). However, the proof is centered on the question asked, and only includes remarks to show how it generalizes. Consider two alphabets $\Sigma_i$ for $i=1,2$. We can consider their product $\Sigma_1\times\Sigma_2=\{(a_1,a_2)\mid a_1\in\Sigma_1\wedge a_2\in\Sigma_2\}$ as a new alphabet, where the symbols are pairs of symbols of $\Sigma_1$ and $\Sigma_2$. Similarly, with 3 alphabets, we can build an alphabet of triples (instead of pairs). We ignore the trivial issue of associativity in using pairs to build triples, or $n$-tuples, here and in the rest of this answer. Now, given two strings $x\in\Sigma^*$ and $y\in\Pi^*$ such that $|x|=|y|$, we can define the conflation of these two strings as the string $z=\mathrm{Conflate}\,(x,y)\in(\Sigma\times\Pi)^*$ of the same length, such that $\forall i\in[1,|x|],\ z_i=(x_i,y_i)$. We can similarly conflate $n$ strings of equal length into a single string of $n$-tuples of symbols ... but we will not go beyond $n=3$.
Finally, given two languages $L_1\subseteq\Sigma_1^*$ and $L_2\subseteq\Sigma_2^*$, we can define the conflation of these two languages: $$\mathrm{Conflate}\,(L_1,L_2)=\{\mathrm{Conflate}\,(x,y)\mid |x|=|y|\wedge x\in L_1 \wedge y\in L_2\}$$ We can also similarly conflate any number of languages, to produce a language on the cross product of their alphabets. This $\mathrm{Conflate}$ operation has many simple properties that are rather trivial to prove. Given two alphabets $\Sigma_1$ and $\Sigma_2$ and two languages $L_1\subseteq\Sigma_1^*$ and $L_2\subseteq\Sigma_2^*$: $\mathrm{Conflate}\,(L_1,L_2)\subseteq(\Sigma_1\times\Sigma_2)^*$ $\mathrm{Conflate}\,(L_1,\Sigma_2^*)$ is regular iff $L_1$ is regular $\mathrm{Conflate}\,(\Sigma_1^*,L_2)$ is regular iff $L_2$ is regular The proof uses a projection homomorphism that keeps only the $L_1$ or the $L_2$ component of the conflation. Side note: the above is also true for context-free languages, and more generally for families of languages closed under non-erasing homomorphism and inverse homomorphism (such as trios). For example, if $\mathcal F$ is a trio, $L$ is a language, and $\Sigma$ an alphabet (not necessarily the alphabet of $L$), then $\mathrm{Conflate}\,(L,\Sigma^*)\in\mathcal F\;$ iff $\;L\in\mathcal F$. $\mathrm{Conflate}\,(L_1,L_2)= \mathrm{Conflate}\,(L_1,\Sigma_2^*) \cap \mathrm{Conflate}\,(\Sigma_1^*,L_2)$ Hence, if $L_1$ and $L_2$ are both regular, then $\mathrm{Conflate}\,(L_1,L_2)$ is also regular. Now we also consider the alphabet $B=\{0,1\}$, and the alphabet cross-product $\Sigma_1\times\Sigma_2\times B$, and we define on this alphabet the substitution $\sigma$ as follows: $\forall (a_1,a_2,b)\in(\Sigma_1\times\Sigma_2\times B),\;\sigma((a_1,a_2,b))=\;($ if $b=0$ then $a_1$ else $a_2)$. If $L_1$ and $L_2$ are both regular, then $\mathrm{Conflate}\,(L_1,L_2)$ is also regular, and thus $\mathrm{Conflate}\,(\mathrm{Conflate}\,(L_1,L_2),B^*)$ is regular, since $B^*$ is.
Applying the substitution $\sigma$, since regular sets are closed under substitution, we know that the language $\sigma(\mathrm{Conflate}\,(\mathrm{Conflate}\,(L_1,L_2),B^*))$ is regular. But it can fairly easily be proved that $$\mathrm{Interleave}\,(L_1,L_2)=\sigma(\mathrm{Conflate}\,(\mathrm{Conflate}\,(L_1,L_2),B^*))$$ Hence $\mathrm{Interleave}\,(L_1,L_2)$ is regular. The interesting point here is that, with nearly no further work, we can prove identically that many families of languages, context-free for example, and more generally trios (hence also AFLs), are closed under $\mathrm{Interleave}$ composition with regular languages, notably because the substitution $\sigma$ is non-erasing. This essentially follows from the fact that trios are closed under inverse homomorphism, under intersection with regular sets, and under non-erasing homomorphism.
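On finite sets of strings, the construction can be played with directly. A sketch, where the function names mirror the operations defined above and the test languages are made up; `interleave` literally applies $\sigma$ to the conflation with $B^*$:

```python
from itertools import product

def conflate(L1, L2):
    # Conflate(L1, L2): pair up equal-length strings symbol by symbol.
    return {tuple(zip(x, y)) for x in L1 for y in L2 if len(x) == len(y)}

def interleave(L1, L2):
    # sigma(Conflate(Conflate(L1, L2), B*)) with B = {0, 1}: for each position,
    # the bit b picks the symbol from x (b = 0) or from y (b = 1).
    result = set()
    for x in L1:
        for y in L2:
            if len(x) != len(y):
                continue  # only equal-length strings conflate
            for bits in product((0, 1), repeat=len(x)):
                result.add(''.join(x[i] if b == 0 else y[i]
                                   for i, b in enumerate(bits)))
    return result

print(sorted(interleave({"ab"}, {"cd"})))  # ['ab', 'ad', 'cb', 'cd']
```

The enumeration over bit strings is exactly the role $B^*$ plays in the closure argument, and the per-position choice is the substitution $\sigma$.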
Assume you've come in contact with a tribe of people cut off from the rest of the world, or you've gone back in time several thousand years, or (more likely) you've got a numbskull cousin. How would you prove that the Earth is, in fact, round?

The shadow of the Earth on the Moon during an eclipse and the way the masts of ships are still visible when the hulls are out of sight are the classical reasons. Simplest, you say? There are two that strike me as being simple to demonstrate. Luckily someone on the internet has already spent some time to help us make these easy to illustrate: Eratosthenes carried out this experiment to determine the circumference of the Earth, already assuming its spherical shape; incidentally, the proof of that shape follows from the procedure itself. However, a demonstration can be achieved by a simple, local experiment (as opposed to having a party venture to a distant enough point): take a piece of card (A3, or so), attach two obelisks to the card by their bases and, with a light source, produce shadows; now slowly bend the card so that it becomes convex (that is, the side with the obelisks attached bulging out) and watch the effect. There are numerous other ways of demonstrating that the Earth is round, or curved at least, from analysing the center of gravity to simply observing the other round objects that are visible in space; but I believe these illustrations to be the simplest to comprehend. Images sourced from SmarterThanThat. Another way is the triple-right triangle: After this you'll end up at the starting point.
This is not possible on a flat surface, since you'd just be "drawing" an incomplete square. Source: http://www.math.cornell.edu (add /~mec/tripleright.jpg to find the image) If the person in question is from a temperate latitude, take them to the tropics to feel the heat of the noon sun, preferably trapped out on a sailboat without water. Point to the very high sun and make your point when they are the most miserable. Next, take them to a very high latitude. As they freeze and become exhausted at 3:00 AM while out walking the tundra, point out the low, non-setting (or non-rising) sun, and reiterate your point in their heightened state of misery. Through suffering and a sense of pride, the object of your demonstration will now likely feel that they have "been there" and "seen it" with "their own eyes". If convinced, that person will gladly proselytize the "truth" of the aforementioned roundness of said planet, and will confront the heretics who do not believe. I think that there are no simple answers to provide "proof" of anything. "Proof" is relative, much in the way "truth" is relative. If simple means "without using science or technology" then you are without hope, as the receiver of the "proof" must accept the truth of the methodology. Photos from space are photoshopped. Ships at sea appear to sink below the horizon because Osiris/Neptune/Odin/Jesus/Bhaal does not wish man to see to infinity (which also proves that the heavenly bodies are not very far away). Sticks in the dirt and shadows prove nothing unless you accept that other bodies are permanent, in orbital motion, and far away (at which point the person will already believe that the planet is round). Don't try to prove anything. You can't. Instead, "demonstrate and educate", because all you can do is convince, not prove. Sitting for a while by the seashore ought to make it clear the Earth isn't flat, even if you don't happen to see a ship go over the horizon.
The edge of the discworld Earth would have to be just a few miles away, and there's no way that the entire, circular world would fit inside the circle that the ocean horizon seems to make. Humans have not just known the Earth was spherical but actually have been measuring its radius for thousands of years. http://en.m.wikipedia.org/wiki/History_of_geodesy Besides the going-back-in-time option, you could just show your "numbskull cousin" a picture of the Earth taken from the moon, like the one below. You can build a simple pendulum and observe how it rotates as the day progresses. You can then put a pendulum on a stick or something that you can rotate yourself, in order to demonstrate that when you rotate the stick, the pendulum will continue to swing in the same direction. This shows that the direction of movement of the pendulum will change relative to its base only if its base is rotated. Pendulums can also be used to measure your latitude (their direction will change at different rates at different latitudes), and to measure the local value of g (the amount of time it takes to go through one cycle, the period of oscillation, will vary with gravity). I think the simplest way is to take two sticks of the same size, place both of them perpendicular to the surface of the earth in the midday sunshine a few miles apart, and at the exact same time measure the angle of elevation, or the size of the shadow, at each stick: the two will differ! By repeating such measurements systematically we can find that the earth is round. The occurrence of noon (i.e. the meridian passage of the true Sun) isn't simultaneous for two observers situated along an east-west line. Hmmm... okay, perhaps even simpler: sunrise and sunset aren't simultaneous for those two observers. There's a video on youtube of an island a few miles away such that when you see the island from an elevation, you can see further down to its base than you can when you see the island from the shoreline (a demonstration of answer 1 above).
I think this is the simplest way: given that we now have zoom ability, anyone can do this kind of experiment on a clear day from any shoreline, viewing something a few miles away. Classically, the gravitational force experienced by a mass $m$ above the Earth is given by the familiar $$F=G\frac{Mm}{r^2}$$ where $M$ is the mass of the Earth. In other words, a mass will experience a force which continually decreases as it distances itself from the Earth. Now suppose the Earth was a flat, infinitely$^{\dagger}$ large, infinitesimally thin plane in $\mathbb R^3$, with mass density $\sigma$ (per unit area, not volume). The gravitational potential $\Phi$ satisfies the Poisson equation $\nabla^2 \Phi = 4\pi G \sigma \delta(z)$, assuming the plane is at $z=0$. The solution is given by $\Phi(z)= 2\pi G \sigma |z|$. The gravitational force is $-\partial_z \Phi$, which always points towards the plane. Another feature is that the gravitational force is constant, with magnitude $2\pi G \sigma$. In other words, no matter how high one is above the plane, the same force is experienced. To be more realistic, if the plane had some non-zero thickness, the force would still be constant outside, but whilst inside there would be a 'jump' as depicted: Hence, to determine if the Earth is flat, one would simply have to conduct an experiment to see how the gravitational force scales as one increases altitude. One will find $F \sim r^{-2}$ approximately, as expected, confirming the Earth is round. Of course, for sufficiently small variations in $r$, one may be fooled into thinking $F$ is constant since the change is minute, but it is measurable. $\dagger$ For convenience, the plane is taken to be infinitely large; the conclusions remain the same for a finite plane, but the force will of course be different, since it will depend on $x$ and $y$ as well. Draw a triangle: on a curved surface the angle sum of a triangle is never equal to 180°. If that is the case on Earth, it is curved, not flat.
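The altitude-scaling test from the gravitational answer above can be illustrated numerically. A sketch using standard constants; the "flat Earth" surface density is chosen arbitrarily as the Earth's mass spread over a disc of the Earth's radius, so only the scaling behaviour (decreasing vs constant) is meaningful:

```python
import math

G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
M = 5.972e24           # Earth's mass, kg
R = 6.371e6            # Earth's radius, m
sigma = M / (math.pi * R**2)   # assumed surface density of a flat-disc Earth

def f_sphere(h):
    # Force per unit test mass above a sphere: decreases as 1/(R + h)^2
    return G * M / (R + h)**2

def f_plane(h):
    # Force per unit test mass above an infinite plane: constant 2*pi*G*sigma
    return 2 * math.pi * G * sigma

for h in (0.0, 10e3, 400e3):   # ground, airliner-ish, ISS-ish altitudes
    print(f"h={h/1e3:6.0f} km  sphere={f_sphere(h):.3f}  plane={f_plane(h):.3f}")
```

The spherical value drops measurably already at airliner altitude, while the plane value is exactly flat in altitude, which is the distinguishing experiment the answer proposes.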
If you're in the northern hemisphere, walk to the south and notice how Polaris sinks lower and lower each night until it disappears below the horizon. If he can handle more info, point out how the stars ahead of you are getting higher and the stars behind you are getting lower. Of course, then you'll need to explain their east-to-west motions too. Here's another way, which only works in the Southern hemisphere - at least that is the case if you use the Flat Earth Society's map of the world, where Antarctica is a ring of ice around the world, and the North Pole is at the centre. When flying from Sydney to Santiago or Pretoria, you regularly get close enough to Antarctica to see it. If the earth were flat, that would mean a huge waste of fuel; only on a spherical world does this kind of trajectory make sense. On a flat earth, the shortest distance from Sydney to Santiago is via the North Pole, and Sydney to Pretoria takes you across the Himalayas. Of course this depends on airlines trying to save fuel - if all of them are in cahoots with NASA and all the other conspirators who claim a spherical earth, then all bets are off...
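Several of the answers above come down to Eratosthenes-style shadow measurements: two vertical sticks on the same meridian, one noon, two shadow angles. The arithmetic can be sketched as follows; the function name and all the numbers are illustrative, chosen to echo the classical ~7.2° over ~800 km:

```python
import math

def circumference_from_shadows(stick_height, shadow1, shadow2, distance):
    """Estimate the Earth's circumference from two vertical sticks.

    shadow1, shadow2 : noon shadow lengths at two sites on the same meridian
    distance         : north-south distance between the sites, in metres
    """
    a1 = math.atan(shadow1 / stick_height)   # sun's zenith angle at site 1
    a2 = math.atan(shadow2 / stick_height)   # sun's zenith angle at site 2
    dtheta = abs(a2 - a1)                    # difference in latitude, radians
    # distance spans the fraction dtheta / 2*pi of the full circumference
    return 2 * math.pi * distance / dtheta

# 1 m sticks: no shadow at site 1, a 7.2-degree shadow at site 2, 800 km apart
c = circumference_from_shadows(1.0, 0.0, math.tan(math.radians(7.2)), 800e3)
print(round(c))  # 40000000 m, i.e. the classical ~40,000 km
```

The flat-Earth prediction is different: with a nearby sun over a plane, the angle difference would not be proportional to the distance between the sticks.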
Are mobile phone masts bad for health?

There are new scare stories circulating saying that mobile phone masts on schools are bad for children's health. There is plenty of scientific evidence that low levels of exposure to radio transmissions have no harmful effects, but in fact even if we assume that there are dangers, having a mobile phone mast on a school can be safer than not having one there. Let us assume that in our hypothetical school, children: spend eight hours per day at school 1, spend all day at an average distance of twenty metres from the mast, 2 make three five minute phone calls during the day with their mobile phones to their ears 3, and spend the rest of the time with their mobile phones one metre from them. 4 Let us also assume that: the mast puts out a constant signal of 1 kilowatt 5, and the mobile phones communicate with the mast for five seconds every ten minutes, or the mobile phones communicate with the mast continuously when a phone call is in progress. We will assume that the mast has been built for a reason, so that coverage is difficult if the mast is not present, and that only a child's own phone will affect them 6.
In this study I will be using ad-hoc figures for exposure based on the power of the transmission, the distance to the transmitter and the time taken. These units are related to a one hour exposure to a one watt transmitter at a distance of one metre 7. A little maths can easily convert that into proper exposure measurements, but my aim here is only to show the relative levels of exposure in various scenarios. The extra maths would just get in the way 8 With No Mast If there is no mast, then the mobile phones will be using full power, 4 watts, to communicate with the nearest mast. The child's exposure is the sum of: 8 hours at 1 metre from the phone, with the phone active for five seconds every ten minutes 15 minutes at 3cm from the phone, with the phone active continuously. Five seconds every ten minutes is $\frac {5} {60 \times 10} = 0.0083$ of the time, so $8 \text{ hours} \times 0.0083 = 0.067 \text{ hours}$. The output is four watts, so that is $0.067 \times 4 = 0.268$ in our ad-hoc units. The exposure changes with the inverse square of the distance from the transmitter; during a phone call the distance is only 3cm, or 0.03m, and that will increase the exposure by a factor of $\frac {1} {0.03^2} \approx 1111$, but only for a quarter of an hour a day, so that is an exposure of 277.75 per watt, for a total of 1111. So the child's total exposure over the school day is $1111.3$ of our ad-hoc units. With A Mast If there is a mast, then the mobile phones will be using their minimum power, $\frac 1 4$ watt. That reduces the exposure due to the phone to $\frac 1 4 \div 4 = \frac 1 {16}$ of the previous case: 69.4. But then we have to add in the effect of the mast: a continuous signal of 1000 watts at a distance of 20 metres. $\frac {1000} {20^2} = 2.5$ for eight hours: $2.5 \times 8 = 20$. So the child's total exposure over the school day is $69.4 + 20 = 89.4$ of our ad-hoc units. What about texting?
Let's say fifty texts a day, each taking five seconds to send, with the phone half a metre away. So the distance effect is $\times 4$ and the time is $50 \times \frac 5 {60 \times 60} = 0.07$ hours. So at full power, $4 \times 4 \times 0.07 = 1.1$ and at minimum power, $\frac 1 4 \times 4 \times 0.07 = 0.07$. Either way, texting does not make much difference. Speaker-phone? Speaker-phone is much better. That makes the distance during a phone call about a quarter of a metre. So the effect of the distance is $\frac 1 {0.25^2} = 16$ per hour per watt. So a quarter of an hour per day at maximum power would be an exposure of 16, and the same time at minimum power would be 1. So what have we learned? The effect of using a mobile phone is much more important than where the nearest mast is, even if it is very close. A close mast makes the phone use a lower power, which reduces exposure. Using speaker-phone reduces exposure significantly. Conclusion I am not saying that mobile phone masts should be installed for the good of our children. This rough calculation can not hold such responsibility, nor does it take into account any medical evidence. I am not an expert in radiology or the effects of radiation on the human body. However what this calculation has shown is that mobile phone masts do not significantly increase a child's exposure to electromagnetic radiation. Protesting a mobile phone mast in close proximity to a school is at best of questionable value, and at worst may be counter-productive.
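The back-of-envelope figures in this article can be reproduced with a short script. A sketch in the article's own ad-hoc units (watts × hours ÷ metres²); small rounding differences from the article's hand arithmetic are expected:

```python
def exposure(power_w, distance_m, hours):
    # Ad-hoc units from the article: watts * hours / metres^2
    return power_w * hours / distance_m**2

# With no mast: the phone runs at full power (4 W)
idle = exposure(4, 1.0, 8 * 5 / 600)     # 5 s every 10 min over 8 h, phone 1 m away
calls = exposure(4, 0.03, 0.25)          # 15 min of calls at 3 cm (the ear)
print(round(idle + calls, 1))            # ~1111.4 (the article rounds to 1111.3)

# With a mast: the phone drops to 1/4 W, but the 1 kW mast is 20 m away
idle2 = exposure(0.25, 1.0, 8 * 5 / 600)
calls2 = exposure(0.25, 0.03, 0.25)
mast = exposure(1000, 20.0, 8)
print(round(idle2 + calls2 + mast, 1))   # ~89.5 (the article rounds to 89.4)
```

The script makes the article's conclusion easy to probe: the handset-at-the-ear term dominates everything else, so halving call time matters far more than moving the mast.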
I came across John Duffield on Quantum Computing SE via this hot question. I was curious to see an account with 1 reputation and a question with hundreds of upvotes. It turned out that the reason why he has so little reputation despite a massively popular question is that he was suspended. May I ... @Nelimee Do we need to merge? Currently, there's just one question with "phase-estimation" and another question with "quantum-phase-estimation". Might we as well use just one tag? (say just "phase-estimation") @Blue 'merging', if I'm getting the terms right, is a specific single action that does exactly that and is generally preferable to editing tags on questions. Having said that, if it's just one question, it doesn't really matter, although performing a proper merge is still probably preferable Merging is taking all the questions with a specific tag and replacing that tag with a different one, on all those questions, on a tag level, without permanently changing anything about the underlying tags @Blue yeah, you could do that. It generally requires votes, so it's probably not worth bothering when only one question has that tag @glS "Every hermitian matrix satisfies this property: more specifically, all and only Hermitian matrices have this property" huh? I thought it was only a subset of the set of valid matrices ^^ Thanks for the precision :) @Nelimee if you think about it it's quite easy to see. Unitary matrices are the ones with phases as eigenvalues, while Hermitians have real eigenvalues. Therefore, if a matrix is not Hermitian (does not have real eigenvalues), then its exponential will not have eigenvalues of the form $e^{i\phi}$ with $\phi\in\mathbb R$.
Although I'm not sure whether there could be exceptions for non-diagonalizable matrices (if $A$ is not diagonalizable, then the above argument doesn't work) This is an elementary question, but a little subtle so I hope it is suitable for MO. Let $T$ be an $n \times n$ square matrix over $\mathbb{C}$. The characteristic polynomial $T - \lambda I$ splits into linear factors like $T - \lambda_iI$, and we have the Jordan canonical form: $$ J = \begin... @Nelimee no! unitarily diagonalizable matrices are all and only the normal ones (satisfying $AA^\dagger =A^\dagger A$). For general diagonalizability, if I'm not mistaken, one characterization is that the sum of the dimensions of the eigenspaces has to match the total dimension @Blue I actually agree with Nelimee here that it's not that easy. You get $UU^\dagger = e^{iA} e^{-iA^\dagger}$, but if $A$ and $A^\dagger$ do not commute it's not straightforward that this doesn't give you an identity I'm getting confused. I remember there being some theorem about one-to-one mappings between unitaries and hermitians provided by the exponential, but it was some time ago and I may be confusing things in my head @Nelimee if there is a $0$ there then it becomes the normality condition. Otherwise it means that the matrix is not normal, therefore not unitarily diagonalizable, but still the product of exponentials is relatively easy to write @Blue you are right indeed. If $U$ is unitary then for sure you can write it as the exponential of a Hermitian (times $i$). This is easily proven because $U$ is ensured to be unitarily diagonalizable, so you can simply compute its logarithm through the eigenvalues. However, logarithms are tricky and multivalued, and there may be logarithms which are not diagonalizable at all. I've actually recently asked some questions on math.SE on related topics @Mithrandir24601 indeed, that was also what @Nelimee showed with an example above. I believe my argument holds for unitarily diagonalizable matrices.
If a matrix is only generally diagonalizable (so it's not normal) then it's not true also probably even more generally without $i$ factors so, in conclusion, it does indeed seem that $e^{iA}$ unitary implies $A$ Hermitian. It therefore also seems that $e^{iA}$ unitary implies $A$ normal, so that also my argument passing through the spectra works (though one has to show that $A$ is ensured to be normal) Now what we need to look for is 1) The exact set of conditions for which the matrix exponential $e^A$ of a complex matrix $A$, is unitary 2) The exact set of conditions for which the matrix exponential $e^{iA}$ of a real matrix $A$ is unitary @Blue fair enough - as with @Semiclassical I was thinking about it with the t parameter, as that's what we care about in physics :P I can possibly come up with a number of non-Hermitian matrices that gives unitary evolution for a specific t Or rather, the exponential of which is unitary for $t+n\tau$, although I'd need to check If you're afraid of the density of diagonalizable matrices, simply triangularize $A$. You get $$A=P^{-1}UP,$$ with $U$ upper triangular and the eigenvalues $\{\lambda_j\}$ of $A$ on the diagonal.Then$$\mbox{det}\;e^A=\mbox{det}(P^{-1}e^UP)=\mbox{det}\;e^U.$$Now observe that $e^U$ is upper ... There's 15 hours left on a bountied question, but the person who offered the bounty is suspended and his suspension doesn't expire until about 2 days, meaning he may not be able to award the bounty himself?That's not fair: It's a 300 point bounty. The largest bounty ever offered on QCSE. Let h...
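The claim debated in the chat above, that $e^{iA}$ is unitary for Hermitian $A$ but generally not for a non-normal $A$, is easy to probe numerically. A sketch with a hand-rolled truncated-series matrix exponential (adequate for small, modest-norm matrices) and made-up test matrices:

```python
import numpy as np

def expm(A, terms=60):
    # Truncated Taylor series for the matrix exponential; fine for small norms
    result = np.eye(A.shape[0], dtype=complex)
    term = np.eye(A.shape[0], dtype=complex)
    for k in range(1, terms):
        term = term @ A / k
        result = result + term
    return result

def is_unitary(U, tol=1e-10):
    return np.allclose(U @ U.conj().T, np.eye(U.shape[0]), atol=tol)

H = np.array([[1.0, 2 - 1j], [2 + 1j, -0.5]])   # Hermitian: H == H^dagger
N = np.array([[1.0, 1.0], [0.0, 1.0]])          # not Hermitian, not even normal

print(is_unitary(expm(1j * H)))   # True: e^{iH} is unitary for Hermitian H
print(is_unitary(expm(1j * N)))   # False for this particular non-normal N
```

Of course a numerical check of one matrix proves nothing about the general converse direction discussed in the chat; it only illustrates the two cases.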
Analytic function Set context $\mathcal O\subset \mathbb C$ definiendum $f\in \mathrm{it}$ inclusion $f:\mathcal O\to\mathbb C$ for all $c$ … series in $\mathbb C$ todo, roughly $\exists c.\ \forall z.\ f(z)=\sum_{n=-\infty}^\infty c_n\,z^n$ Discussion Picture a continuous function $f:\mathbb R^2\to\mathbb R$ as a surface given by $f(x,y)$, imagine drawing a circle of radius $1$ around the origin, and consider the surface over it parametrized by $\langle \cos\theta,\sin\theta,h\rangle$, where $\theta\in[0,2\pi)$ and $h\in[0,f(\cos\theta,\sin\theta))$. See the picture below. It looks like a cylinder cut off at height $f(\cos\theta,\sin\theta)$. Let's call it a “fence”. What is the surface area of that fence? Clearly, it's given by the integral $\int_0^{2\pi}f(\cos\theta,\sin\theta)\,\mathrm{d}\theta$. And so the average height (if we count negative height as negative contributions) of the fence is $\frac{1}{2\pi}\int_0^{2\pi}f(\cos\theta,\sin\theta)\,\mathrm{d}\theta$ For example, the paraboloid $x^2+y^2$ has average height $1$ and a tilted plane like $7x+3y$ always has average height $0$. As a remark, we can trivially extend the definition to compute the fence height of a fence with radius $R$ at a point $p=\langle p_x,p_y\rangle$ by shifting and scaling the integral: $\frac{1}{2\pi}\int_0^{2\pi}f(R\cos\theta+p_x,R\sin\theta+p_y)\,\mathrm{d}\theta$ A complex function $f(z)$ consists of two real functions, so its fence height is just the sum of the fence heights of $\mathrm{Re}\,f(z)$ and $i\,\mathrm{Im}\,f(z)$. Let's consider the function $f(z):=z=r\,\cos\theta+i\,r\,\sin\theta$. We have $z^n=r^n\,\mathrm{e}^{i\,n\,\theta}=r^n\,\cos(n\,\theta)+i\,r^n\,\sin(n\,\theta)$ Both real and imaginary part oscillate along $\theta$. So from the plots below alone it is obvious that the average fence height for $n\neq 0$ must be zero $n\neq 0\implies\frac{1}{2\pi}\int_0^{2\pi}z^n\mathrm{d}\theta=0$ and if $n=0$, then it's clearly $1$.
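The vanishing average of $z^n$ for $n\neq 0$ is easy to check numerically; a small sketch sampling the unit circle uniformly:

```python
import numpy as np

# Average "fence height" of z^n on the unit circle, (1/2pi) * integral of e^{i n theta}:
# it vanishes for every n != 0 and equals 1 for n = 0.
theta = np.linspace(0.0, 2.0 * np.pi, 2048, endpoint=False)
for n in (-2, -1, 0, 1, 2):
    avg = np.mean(np.exp(1j * n * theta))   # uniform samples around the circle
    print(n, round(abs(avg), 9))            # 0.0 for n != 0, 1.0 for n = 0
```

On a uniform grid the sum over the sampled roots of unity even cancels exactly, so the numerical result matches the integral identity to rounding error.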
Note that we can compute the real and the imaginary fence height at once. The circle is parametrized by $z:=\mathrm{e}^{i\,\theta}$ and so we can express the infinitesimal length in $\mathbb R^2$ as $\mathrm{d}\theta=\frac{1}{i}\frac{\mathrm{d}z}{z}$. The factor $\frac{1}{z}$ corrects the orientation of $\mathrm{d}z$; it cancels out the complex mixing of components introduced by walking along the complex plane. The fence height of $z\cdot f(z)$ is called the residue and equals $\frac{1}{2\pi\,i}\oint_\gamma f(z)\ \mathrm d z$. In this language, the picture saying that only the fence height of the constant function $z^0$ is nonzero is the message that $\frac{1}{z}$ is the special function with non-vanishing line integral. Now consider an analytic function, i.e. a function which can be written as a countable series with coefficients $c_n\equiv a_n+i\,b_n$ $f(z)=\sum_{n=-\infty}^\infty c_nz^n=\sum_{n=-\infty}^\infty \left(a_n+i\,b_n\right)r^n\,\mathrm{e}^{i\,n\,\theta}$ Explicitly separating real and imaginary parts, this reads $\mathrm{Re}\,f(z)=\sum_{n=-\infty}^\infty \left(a_n\cos(n\,\theta)-b_n\sin(n\,\theta)\right)r^n$ $\mathrm{Im}\,f(z)=\sum_{n=-\infty}^\infty \left(b_n\cos(n\,\theta)+a_n\sin(n\,\theta)\right)r^n$ Now it's clear that analyticity is a strong restriction: the real and imaginary parts are just two real functions whose terms, for every $n\neq 0$, oscillate along $\theta$, and hence all have fence height zero. This implies that $f$'s coefficient $c_0$ is already the fence height of all of $f$. In fact, this implies that $f(z)=\sum_{n=-\infty}^\infty c_nz^n$ itself is determined by the fence heights of the functions $z^k\,f(z)$!
So for analytic functions we have $c_n = \frac{1}{2\pi\, i} \oint_\gamma \frac{f(z)}{z^n}\, \frac{\mathrm dz}{z}$, or, by shifting the fences to $p$, we get Cauchy's integral formula $\frac{1}{n!}f^{(n)}(p) = \frac{1}{2\pi\, i} \oint_\gamma \frac{f(z)}{(z-p)^{n+1}}\, \mathrm dz$ Roughly, the Laplace transform uses this for a re-encoding of a function $f:\mathbb R^+\to\mathbb R$ with Taylor expansion $f(t)=\sum_{n=0}^\infty a_n t^n$, namely by mapping $t^n$ to $s^{-n}\cdot \frac{1}{s}$. Reference Wikipedia: Analyticity of holomorphic functions
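The coefficient formula above can be checked numerically, e.g. for $f(z)=\mathrm{e}^z$, whose Taylor coefficients are $1/n!$ (a sketch with numpy; discretizing the contour by uniform samples on the unit circle is an assumption of this example):

```python
import numpy as np
from math import factorial

# c_n is the average ("fence height") of f(z)/z^n around the unit circle.
theta = np.linspace(0, 2*np.pi, 4096, endpoint=False)
z = np.exp(1j*theta)
f = np.exp(z)                                        # f(z) = e^z

coeffs = [np.mean(f * z**(-n)) for n in range(5)]    # numeric c_0 .. c_4
for n, c in enumerate(coeffs):
    print(n, c.real, 1/factorial(n))
```

The numeric averages match $1/n!$ to machine precision, because the uniform-sample mean is exactly the trapezoidal rule for the periodic integrand.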
Revision as of 13:55, 4 December 2012 A measure space $(M,\mathfrak B, \mu)$ (where $M$ is a set, $\mathfrak B$ is a $\sigma$-algebra of subsets of $M$, called measurable sets, and $\mu$ is a measure defined on the measurable sets), isomorphic to the "standard model" consisting of an interval $\Delta$ and an at most countable set of points $\alpha_i$ (in "extreme" cases this "model" may consist of just the interval $\Delta$ or of just the points $\alpha_i$) endowed with the following measure $\mathfrak m$: on $\Delta$ one takes the usual Lebesgue measure, and to each of the points $\alpha_i$ one ascribes a measure $\mathfrak m(\alpha_i) = \mathfrak m_i$; the measure is assumed to be normalized, that is, $\mu(M) = \mathfrak m(\Delta) + \sum\mathfrak m_i = 1$. The "isomorphism" can be understood here in the strict sense or modulo $0$; one obtains, respectively, a narrower or wider version of the concept of a Lebesgue space (in the latter case one can talk about a Lebesgue space modulo $0$). One can give a definition of a Lebesgue space in terms of "intrinsic" properties of the measure space $(M,\mathfrak B, \mu)$ (see [1]–[3]). A Lebesgue space is the most frequently occurring type of space with a normalized measure, since any complete separable metric space with a normalized measure (defined on its Borel subsets and then completed in the usual way) is a Lebesgue space.
Apart from properties common to all measure spaces, a Lebesgue space has a number of specific "good" properties. For example, any automorphism of a Boolean $\sigma$-algebra on a measure space $(\mathfrak B, \mu)$ is generated by some automorphism of a Lebesgue space $M$. Under a number of natural operations, from a Lebesgue space one again obtains a Lebesgue space. Thus, a subset $A$ of positive measure in a Lebesgue space $M$ is itself a Lebesgue space (its measurable subsets are assumed to be those that are measurable in $M$, and the measure is $\mu_A(X)=\mu(X) / \mu(A)$); the direct product of finitely or countably many Lebesgue spaces is a Lebesgue space. Other properties of Lebesgue spaces are connected with measurable partitions (cf. Measurable decomposition). References [1] P.R. Halmos, J. von Neumann, "Operator methods in classical mechanics. II" Ann. of Math., 43 : 2 (1942) pp. 332–350 [2] V.A. Rokhlin, "On the fundamental ideas of measure theory" Mat. Sb., 25 : 1 (1949) pp. 107–150 (In Russian) [3] J. Haezendonck, "Abstract Lebesgue–Rokhlin spaces" Bull. Soc. Math. Belg., 25 : 3 (1973) pp. 243–258 Comments Cf. also [a1] for a discussion of Lebesgue spaces and measurable partitions, including an intrinsic description of Lebesgue spaces. References [a1] I.P. [I.P. Kornfel'd] Cornfel'd, S.V. Fomin, Ya.G. Sinai, "Ergodic theory", Springer (1982) pp. Appendix 1 (Translated from Russian) How to Cite This Entry: Lebesgue space. Encyclopedia of Mathematics. URL: http://www.encyclopediaofmath.org/index.php?title=Lebesgue_space&oldid=29074
This post is based on problems 2.10 and 2.11 in "Heard on the Street" by Timothy Falcon Crack. I was asked how to price a digital option in a job interview - and had no idea what to do! European Call Options A European call option is the right to buy an asset at the strike price, $K$, on the option's expiration date, $T$. A call is only worth exercising (using) if the underlying price, $S$, is greater than $K$ at $T$, as the payoff from exercising is $S-K$. The plot below shows the value of a call option, as a function of the underlying asset's price, with $K=100$: Selling a call option with strike $K$ earns you the call's price, $c$, today, but your payoff will be decreasing in the underlying price: Digital Call Options A digital call option with $K=100$ is similar - it pays off one dollar if $S>K$ at expiration, and pays off zero otherwise: Suppose you have a model for pricing regular call options. If you're using Black-Scholes the price of the call, $c$, is a function of $K$, $S$, time to expiration $T-t$, the volatility of the underlying asset $\sigma$, and the risk free rate $r$: \begin{equation} c=F(K,S,T-t,\sigma,r) \end{equation} Now - suppose the model is correct. How can you use $F$ to price the digital option? Replicating the Digital Option The trick is to replicate the digital option's payoff with regular calls. As a starting point, consider buying a call with $K=100$ and selling a call with $K=101$: This is close to the digital option, but not exactly right. We want to make the slope at 100 steeper, so we need to buy more options. This is because a call's payoff increases one-for-one with the underlying once the option is in the money, so with one option you are stuck with a slope of one. Consider buying two calls with $K=100$ and selling two calls at $K=100.5$: As opposed to a slope of 1 between 100 and 101, now we have a slope of two between 100 and 100.5. Generalizing this idea - consider a small number $\epsilon$. To get a slope of $1/\epsilon$, you buy $1/\epsilon$ calls at $K=100$ and you sell $1/\epsilon$ calls at $K=100+\epsilon$.
Here's what it looks like for a smaller $\epsilon$: Given that the slope is $1/\epsilon$, to get an infinite slope, we take the limit as $\epsilon$ goes to zero. How much will the above portfolio cost? You earn $\frac{1}{\epsilon}F(100+\epsilon,\cdot)$ from selling the calls, and pay $\frac{1}{\epsilon}F(100,\cdot)$ for the calls you buy. The net cost is: \begin{equation} \lim_{\epsilon \rightarrow 0} \frac{F(100,\cdot)-F(100+\epsilon,\cdot)}{\epsilon} \end{equation} What does this look like? A derivative, up to sign! It might look more familiar if I re-wrote it as: \begin{equation} -\lim_{\epsilon \rightarrow 0} \frac{F(K+\epsilon)-F(K)}{\epsilon} \end{equation} The price of the digital option is the negative of the derivative of $F$ with respect to the strike price $K$ (negative because a call's price decreases in its strike, so $-\partial F/\partial K$ is positive). Conclusion Many complicated payoffs can be re-created as combinations of vanilla puts and calls. For an overview, see the first few chapters of Sheldon Natenberg's "Option Volatility & Pricing".
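To see the replication argument in numbers, here is a sketch that prices the digital with a small-$\epsilon$ call spread under Black-Scholes and compares it with the standard cash-or-nothing closed form $e^{-rT}N(d_2)$ (parameter values are illustrative, not from the post):

```python
from math import log, sqrt, exp, erf

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_call(S, K, T, sigma, r):
    """Black-Scholes price of a European call."""
    d1 = (log(S/K) + (r + 0.5*sigma**2)*T) / (sigma*sqrt(T))
    d2 = d1 - sigma*sqrt(T)
    return S*norm_cdf(d1) - K*exp(-r*T)*norm_cdf(d2)

S, K, T, sigma, r = 105.0, 100.0, 1.0, 0.2, 0.05

# Replication: buy 1/eps calls struck at K, sell 1/eps calls at K + eps
eps = 1e-5
digital_repl = (bs_call(S, K, T, sigma, r) - bs_call(S, K+eps, T, sigma, r)) / eps

# Closed form for a cash-or-nothing call paying $1
d2 = (log(S/K) + (r - 0.5*sigma**2)*T) / (sigma*sqrt(T))
digital_exact = exp(-r*T) * norm_cdf(d2)

print(digital_repl, digital_exact)
```

The two numbers agree to several decimal places, confirming that the digital price is the limit slope of the call spread.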
Given an ordinal $\alpha$ let $\operatorname{Fn}(\omega, \alpha)$ be the set of finite partial functions from $\omega$ to $\alpha$. Given a cardinal $\kappa$ let $${\prod_{\alpha<\kappa}}^{\text{fin}}\operatorname{Fn}(\omega,\alpha) = \{\vec{p}\in {\textstyle \prod_{\alpha<\kappa}}\operatorname{Fn}(\omega,\alpha): |\{\alpha<\kappa: p_\alpha \not = \emptyset\}|<\omega\}.$$ We can then turn this into a partial order by letting $\vec{p}\leq \vec{q}$ when $p_\alpha \supseteq q_\alpha$ for all $\alpha<\kappa$. Let $\mathbb{P}$ denote this partial order. Exercise III.3.96 (p. 195) of Kunen's Set Theory (2011 edition) claims that if $\kappa$ is weakly inaccessible, then $\mathbb{P}$ is $\kappa$-cc. Question: Is weak inaccessibility necessary? More precisely, can we prove that if $\kappa$ is regular, then $\mathbb{P}$ is $\kappa$-cc? I have something like the following proof in mind. Suppose $X\subseteq \mathbb{P}$ has size $\kappa$, and let $Y = \{y: \exists \vec{p}\in X(y = \{\alpha<\kappa: p_\alpha \not = \emptyset\})\}$. Then $Y$ is a set of finite sets and has size $\kappa$, and thus by the $\Delta$-system lemma there is some $Y'\subseteq Y$ also of size $\kappa$ with a finite kernel $r$. Let $X'$ be the corresponding subset of $X$. Then since there are $\kappa$-many elements of $X'$, $\kappa$-many of them will agree on $r$. Thus there will be $\kappa$-many which are compatible in $\mathbb{P}$.
So I want to prove that $L$ is continuous in the strong topologies, i.e., $L$ takes bounded sets to bounded sets. Using bounded sets is a good plan. We have the fact that a continuous linear map between topological vector spaces maps bounded sets to bounded sets: Let $E,F$ be topological vector spaces (real or complex), and $T \colon E \to F$ a continuous linear map. Further, let $B \subset E$ be bounded. If $W$ is any neighbourhood of $0$ in $F$, by continuity there is a neighbourhood $V$ of $0$ in $E$ with $T(V) \subset W$. By boundedness, there is a $\delta > 0$ such that $\alpha B \subset V$ for all scalars $\alpha$ with $\lvert\alpha\rvert < \delta$. But then $\alpha T(B) = T(\alpha B) \subset T(V) \subset W$ for $\lvert\alpha\rvert < \delta$, so $T(B)$ is bounded. Thus in our situation, we know that $L$ maps norm-bounded subsets of $X$ to weakly bounded subsets of $Y$; in particular $L(B_X)$ is weakly bounded in $Y$, where $B_X$ is the unit ball in $X$. A theorem of Mackey tells us that in locally convex spaces every weakly bounded subset is bounded in the original topology, so in fact $L(B_X)$ is norm-bounded, but that means precisely that $$\lVert L\rVert = \sup \{ \lVert Lx\rVert_Y : \lVert x\rVert_X \leqslant 1\} < +\infty,$$ i.e. $L$ is continuous with respect to the norm topologies. In normed spaces, the assertion of Mackey's theorem follows easily from the Banach-Steinhaus theorem (a.k.a. the uniform boundedness principle): Let $S\subset Y$ be weakly bounded. Via the canonical isometric embedding $\Phi_Y \colon Y \to Y''$ of $Y$ into its bidual, we can view $S$ as a family of linear functionals on the Banach space $Y'$, and that $S$ is weakly bounded means precisely that this family is pointwise bounded, $$\sup \{ \lvert\Phi_Y(y)(\lambda)\rvert : y \in S\} = \sup \{ \lvert\lambda(y)\rvert : y \in S\} < +\infty$$ for all $\lambda \in Y'$.
By the Banach-Steinhaus theorem it follows that $$\sup \{ \lVert \Phi_Y(y)\rVert_{Y''} : y \in S\} = \sup \{ \lVert y\rVert_Y : y \in S\} < +\infty,$$ i.e. $S$ is norm-bounded.
The Hamiltonian of loop quantum gravity is a totally constrained system $$H = \int_\Sigma d^3x\ (N\mathcal{H}+N^a V_a+G)$$ Here, $\Sigma$ is a 3-dimensional hypersurface; a slice of spacetime. Moreover, $\mathcal{H}$ is the Hamiltonian constraint, $V_a$ the diffeomorphism constraint, $G$ the Gauss law term, and $N,N^a$ the corresponding Lagrange multipliers (lapse and shift). In the research literature this Hamiltonian was criticized for not being Hermitian and for its constraints not forming a Lie algebra. The variables of the theory are Ashtekar's variable $A_a^i$ and the triad $E_a^i$. Therefore the Master constraint $$M:=\int_\Sigma d^3x\ \mathcal{H}^2/\sqrt{\det q}$$ with 3-d metric $q_{ab}$ was introduced to resolve these issues. Loop quantum gravity can be treated canonically, and according to this paper: http://arxiv.org/abs/0911.3432 one can derive a path integral from the Master constraint. I can't understand the derivation of it (especially the measure factor). Question: Is there a plausible path integral in 4-d spacetime that computes spin foam amplitudes? What if I treat loop quantum gravity with the path integral with action $$S = \int d^4x\ (E_a^i \dot{A_i^a}-N\mathcal{H}-N^a V_a-G) \tag{$\star$}$$ Is this plausible (this action is mentioned in one of my introductory textbooks) despite the non-Hermiticity of the Hamiltonian? Or would this action lead to significant errors? P.S.: Is the path integral $$\int d[E_a^i]\, d[A_i^a]\, d[N_{\text{Master}}]\, \exp\left(i E_a^i \dot{A_i^a} - i\int dt\, N_{\text{Master}} M\right) $$ $$= \int d[E_a^i]\, d[A_i^a]\, \exp\left(i E_a^i \dot{A_i^a}\right) \delta(M)$$ a better version than the path integral induced by the action $(\star)$? This post imported from StackExchange Physics at 2016-12-24 22:42 (UTC), posted by SE-user kryomaxim
Quality factor, or \(Q\), is one of the more mysterious quantities of seismology. It's right up there with Lamé's \(\lambda\) and Thomsen's \(\gamma\). For one thing, it's wrapped up with the idea of attenuation, and sometimes the terms \(Q\) and 'attenuation' are bandied about seemingly interchangeably. For another thing, people talk about it like it's really important, but it often seems to be completely ignored. A quick aside. There's another quality factor: the rock quality factor, popular among geomechanicists (geomechanics?). That \(Q\) describes the degree and roughness of jointing in rocks, and is probably related — coincidentally if not theoretically — to seismic \(Q\) in various nonlinear and probably profound ways. I'm not going to say any more about it, but if this interests you, read Nick Barton's book, Rock Quality, Seismic Velocity, Attenuation and Anisotropy (2006; CRC Press) if you can afford it. So what is Q exactly? We know intuitively that seismic waves lose energy as they travel through the earth. There are three loss mechanisms: scattering (elastic losses resulting from reflections and diffractions), geometrical spreading, and intrinsic attenuation. This last one, anelastic energy loss due to absorption — essentially the deviation from perfect elasticity — is what I'm trying to describe here. I'm not going to get very far, by the way. For the full story, start at the seminal review paper entitled \(Q\) by Leon Knopoff (1964), which surely has the shortest title of any paper in geophysics. (Knopoff also liked short abstracts, as you see here.) The dimensionless seismic quality factor \(Q\) is defined in terms of the energy \(E\) stored in one cycle, and the change in energy — the energy dissipated in various ways, such as fluid movement (AKA 'sloshing', according to Carl Reine's essay in 52 Things...
Geophysics) and intergranular frictional heat ('jostling') — over that cycle: $$ Q \stackrel{\mathrm{def}}{=} 2 \pi \frac{E}{\Delta E} $$ Remarkably, this same definition holds for any resonator, including pendulums and electronics. Physics is awesome! Because the right-hand side of that relationship is sort of upside down — the loss is in the denominator — it's often easier to talk about \(Q^{-1}\) which is, more or less, the percentage loss of energy in a single wavelength. This inverse of \(Q\) is proportional to the attenuation coefficient. For more details on that relationship, check out Carl Reine's essay. This connection with wavelengths means that we have to think about frequency. Because high frequencies have shorter cycles (by definition), they attenuate faster than low frequencies. You know this intuitively from hearing the beat, but not the melody, of distant music for example. This effect does not imply that \(Q\) depends on frequency... that's a whole other can of worms. (Confused yet?) The frequency dependence of \(Q\) It's thought that \(Q\) is roughly constant with respect to frequency below about 1 Hz, then increases with \(f^\alpha\), where \(\alpha\) is about 0.7, up to at least 25 Hz (I'm reading this in Mirko van der Baan's 2002 paper), and probably beyond. Most people, however, seem to throw their hands up and assume a constant \(Q\) even in the seismic bandwidth... mainly to make life easier when it comes to seismic processing. Attempting to measure, let alone compensate for, \(Q\) in seismic data is, I think it's fair to say, an unsolved problem in exploration geophysics. Why is it worth solving? I think the main point is that, if we could model and measure it better, it could be a semi-independent measure of some rock properties we care about, especially velocity. 
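A back-of-envelope illustration of that frequency dependence: under constant \(Q\), a wave of frequency \(f\) completes \(ft\) cycles over traveltime \(t\), so its amplitude decays roughly as \(A(t) = A_0\exp(-\pi f t/Q)\). A sketch with illustrative values (the \(Q\) and frequencies here are made up for the demo):

```python
import numpy as np

# Constant-Q amplitude decay: A(t) = A0 * exp(-pi * f * t / Q).
# High frequencies die off much faster -- which is why you hear the
# beat, but not the melody, of distant music.
Q, t = 100.0, 1.0                          # quality factor, traveltime (s)
freqs = np.array([10.0, 50.0, 100.0])      # Hz
amps = np.exp(-np.pi * freqs * t / Q)
for f, A in zip(freqs, amps):
    print(f, round(float(A), 4))
```

After one second, the 100 Hz component has lost more than 95% of its amplitude while the 10 Hz component retains most of it.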
Actually, I think it's even a stretch to call velocity a rock property — most people know that velocity depends on frequency, at least across the gulf of frequencies between seismic and acoustic logging tools, but did you know that velocity also depends on amplitude? Paul Johnson tells about this effect in his essay in the forthcoming 52 Things... Rock Physics book — stay tuned for more on that. For a really wacky story about negative values of \(Q\) — which imply transmission coefficients greater than 1 (think about that) — check out Chris Liner's essay in the same book (or his 2014 paper in The Leading Edge). It's not going to help \(Q\) get any less mysterious, but it's a good story. Here's the punchline from a Jupyter Notebook I made a while back; it follows along with Chris's lovely paper: Hm, I had hoped to shed some light on \(Q\) in this post, but I seem to have come full circle. Maybe explaining \(Q\) is another unsolved problem. References Barton, N (2006). Rock Quality, Seismic Velocity, Attenuation and Anisotropy. Florida, USA: CRC Press. 756 pages. ISBN 9780415394413. Johnson, P (in press). The astonishing case of non-linear elasticity. In: Hall, M & E Bianco (eds), 52 Things You Should Know About Rock Physics. Nova Scotia: Agile Libre, 2016, 132 pp. Knopoff, L (1964). Q. Reviews of Geophysics 2 (4), 625–660. DOI: 10.1029/RG002i004p00625. Reine, C (2012). Don't ignore seismic attenuation. In: Hall, M & E Bianco (eds), 52 Things You Should Know About Geophysics. Nova Scotia: Agile Libre, 2012, 132 pp. Liner, C (in press). Negative Q. In: Hall, M & E Bianco (eds), 52 Things You Should Know About Rock Physics. Nova Scotia: Agile Libre, 2016, 132 pp. van der Baan, M (2002). Constant Q and a fractal, stratified Earth. Pure and Applied Geophysics 159 (7–8), 1707–1718. DOI: 10.1007/s00024-002-8704-0.
Frequency Translation by Way of Lowpass FIR Filtering Some weeks ago a question appeared on the dsprelated.com forum regarding the notion of translating a signal down in frequency and lowpass filtering in a single operation [1]. It is possible to implement such a process by embedding a discrete cosine sequence's values within the coefficients of a traditional lowpass FIR filter. I first learned about this process from Reference [2]. Here's the story. Traditional Frequency Translation Prior To Filtering Think about the process shown in Figure 1. First, an $ x(n) $ input sequence is multiplied by a $ cos (2 \pi n f_t/f_s) $ sequence which translates $ x(n)'s $ spectral components both up and down in frequency by $ f_t $ Hz. (The 't' subscript in $ f_t $ means "translation" and $ f_s $ is the input data sample rate in Hz.) Following that, the frequency-translated sequence is lowpass filtered by a six-coefficient lowpass FIR filter. Note that the high-frequency spectral components generated by the cosine mixing are attenuated by the lowpass filter. Figure 1: Traditional frequency translation followed by fifth-order lowpass filtering. The Figure 1 'frequency translation and filtering' operation can be implemented as shown in Figure 2, where no cosine multiplier is needed. The Figure 2 filter, what I call a "Translating FIR Filter", has N unique Coefficient Sets. (I show how to select the value N in a later section of this blog.) Figure 2: Fifth-order lowpass Translating FIR Filter requiring N Coefficient Sets. As each new $ x(n) $ input sample arrives to the filter the rotary switches jump to the next Coefficient Set. (After the last Coefficient Set $ h_N(k) $ is used, when the next $ x(n) $ sample arrives we switch back to the first Coefficient Set $ h_1(k) $.) Justification for why the Figure 2 filter is equivalent to the Figure 1 filter is given in Appendix A.
Computing the Coefficient Sets' Coefficient Values Assuming the Figure 1 FIR filter has K coefficients, the Figure 2 Translating FIR Filter's coefficients are the Figure 1 $ h(k) $ coefficients multiplied by time-shifted versions of the $ cos(2 \pi n f_t / f_s) $ sequence. That is, the first $ h_1(k) $ Coefficient Set's K coefficients are defined by: \begin{eqnarray} h_1(n) & = & h(0)cos(2 \pi (0) f_t / f_s), \ h(1)cos[2 \pi (-1) f_t/f_s], \ h(2)cos[2 \pi (-2) f_t/f_s], \\ & & h(3)cos[2 \pi (-3) f_t / f_s], ..., h(K-1)cos\{2 \pi [-(K-1)]f_t / f_s\}. \end{eqnarray} The second and third Coefficient Sets' K coefficients are defined by: \begin{eqnarray} h_2(n) & = & h(0)cos(2 \pi (1) f_t / f_s), \ h(1)cos[2 \pi (0) f_t/f_s], \ h(2)cos[2 \pi (-1) f_t/f_s], \\ & & h(3)cos[2 \pi (-2) f_t / f_s], ..., h(K-1)cos\{2 \pi [-(K-1)+1]f_t / f_s\}. \end{eqnarray} \begin{eqnarray} h_3(n) & = & h(0)cos(2 \pi (2) f_t / f_s), \ h(1)cos[2 \pi (1) f_t/f_s], \ h(2)cos[2 \pi (0) f_t/f_s], \\ & & h(3)cos[2 \pi (-1) f_t / f_s], ..., h(K-1)cos\{2 \pi [-(K-1)+2]f_t / f_s\}. \end{eqnarray} The Nth Coefficient Set's K coefficients are defined by: \begin{eqnarray} h_N(n) & = & h(0)cos(2 \pi (N-1) f_t / f_s), \ h(1)cos[2 \pi (N-2) f_t/f_s], \ h(2)cos[2 \pi (N-3) f_t/f_s], \\ & & h(3)cos[2 \pi (N-4) f_t / f_s], ..., h(K-1)cos\{2 \pi [-(K-1)+N-1]f_t / f_s\}. \tag{1} \end{eqnarray} Don't let the above equations trouble you--they are straightforward. For example, for a 6-tap (K=6) FIR filter requiring N=8 Coefficient Sets, the six cosine indexing factors inside the parentheses in the above equations are shown in the rows of the following table. Cosine index factors in Eq. (1) for K=6 and N=8 How Many Coefficient Sets Are Needed? The number of necessary Coefficient Sets, N, is unrelated to the number of coefficients in the original Figure 1 FIR filter.
The number of necessary Coefficient Sets for the Figure 2 implementation is given by: $$ N = \frac{f_s I}{f_t}, \tag{2} $$ where I is an integer that forces N to be an integer. For example, if your desired translation frequency is $f_t = 2f_s/9$ Hz, making the ratio $f_s/f_t = 9/2$, then the integer I is chosen to be I=2 so that: $$ N = \frac{f_sI}{f_t}=\frac{9(2)}{2} =9 \text{ Coefficient Sets.}$$ If you decide to implement a Translating FIR Filter, Appendix B lists several issues you should keep in mind. Decimating With Translating FIR Filters Implementing decimation (downsampling) with a Translating FIR Filter is the process shown in Figure 3 for a simple second-order Translating FIR Filter. When decimation by factor D is used our equation for the number of Coefficient Sets, $N_D$, becomes: $$ N_D = \frac{NI}{D}, \tag{3} $$ where, again, integer I is chosen to make $N_D$ an integer. It is with decimation that our Translating FIR Filter becomes most useful. Figure 3: Second-order Translating FIR filter using N Coefficient Sets followed by decimation-by-D. We can see the benefit of decimation in our above $f_t = 2f_s/9$ Hz example. If we decimate the output of our $f_t = 2f_s/9$ Hz (N=9) Translating FIR Filter by a factor of D=3, our value for $N_D$ becomes: $ N_D=\frac{NI}{D}=\frac{9I}{3}=3 $ with I being set to 1. In this case the sequence of Coefficient Sets used are as shown in Figure 4. Figure 4: Reduced number of necessary Coefficient Sets, $N_D$, when decimation by three is implemented: only three Coefficient Sets [$h_1(k)$, $h_4(k)$, and $h_7(k)$] are needed when N=9 and D=3. In that case we only need to store three Coefficient Sets in memory rather than nine Coefficient Sets. In the general case for this decimation situation, where $N_D$ Coefficient Sets are necessary, instead of inputting a single $ x(n) $ sample for each filter output sample computation, we input D $ x(n) $ samples and then proceed to compute a single output sample. And our processing is as shown in Figure 5.
Figure 5: Second-order Translating FIR Filter with decimation by D using $N_D$ Coefficient Sets. Another way to view the relationship in Eq. (3) is to accept the following constraint: $$ \frac {N_D D f_t}{f_s}= \text{an integer.} \tag{4} $$ Conclusion We've shown how it's possible to translate a signal down in frequency and lowpass filter in a single operation using a Translating FIR Filter, as shown in Figure 2. We discussed the restrictions on the acceptable $f_t$ translation frequency. This filtering scheme eliminates the cosine multiplier in Figure 1 at the expense of significantly increasing the necessary amount of filter coefficient storage. It's possible to incorporate filter output decimation (downsampling) in this process, as shown in Figure 5, which reduces the required coefficient storage to a possibly acceptable value. The restrictions on the $f_t$ translation frequency and the decimation factor D are given in Eq. (4). [This blog was updated on Feb. 12, 2017 based on very helpful comments from Marci Detwiller and Kevin Krieger (University of Saskatchewan).] References [2] Frerking, M. E., Digital Signal Processing in Communications Systems, Chapman & Hall, 1994, pp. 171-174 Appendix A: Why the Figure 2 Filter Works Here we explain why the Figure 2 Translating FIR Filter works as claimed. Figure A1 shows the traditional scheme for frequency translation followed by lowpass filtering. Figure A1: Traditional frequency translation followed by lowpass filtering. Once the filter's K=6 delay line is filled with data, the $ y(n) $ output sample is equal to: \begin{eqnarray} y(n) & = & h(0)cos(2 \pi n f_t / f_s)x(n) + h(1)cos[2 \pi (n-1) f_t/f_s]x(n-1) \\ & + & h(2)cos[2 \pi (n-2) f_t / f_s]x(n-2) + h(3)cos[2 \pi (n-3) f_t / f_s]x(n-3) \\ & + & h(4)cos[2 \pi (n-4) f_t / f_s]x(n-4) + h(5)cos[2 \pi (n-5) f_t / f_s]x(n-5). \tag{A1} \end{eqnarray} That $ y(n) $ output sample in Eq.
(A1) would be the same if we eliminated the initial cosine multiplier altogether and our FIR filter's six coefficients were: The next $ y(n+1) $ filter output sample is equal to: \begin{eqnarray} y(n+1) & = & h(0)cos(2 \pi (n+1) f_t / f_s)x(n+1) + h(1)cos[2 \pi (n+1-1) f_t/f_s]x(n+1-1) \\ & + & h(2)cos[2 \pi (n+1-2) f_t / f_s]x(n+1-2) + h(3)cos[2 \pi (n+1-3) f_t / f_s]x(n+1-3) \\ & + & h(4)cos[2 \pi (n+1-4) f_t / f_s]x(n+1-4) + h(5)cos[2 \pi (n+1-5) f_t / f_s]x(n+1-5). \tag{A2} \end{eqnarray} For that $ y(n+1) $ output sample a multiplier-free FIR filter's six coefficients would be: which are different from Coefficient Set# 1. To compute the third $ y(n+2) $ output sample the FIR filter's six coefficients must be: which are different from Coefficient Set# 1 and Coefficient Set# 2. As such, to eliminate the initial cosine multiplier in Figure A1 we are forced to use a different set of coefficients for each new filter input sample. At first glance this sounds practically impossible to implement! (This situation, what I call "Compute to Extinction", is something to be avoided.) Here's the good news: if $ f_t $ is an integer submultiple of $ f_s $ then we only need a fixed number of Coefficient Sets because those Coefficient Sets' values repeat. For example, if the translation frequency is $ f_t = f_s/4 $ then Coefficient Set# 1 would be: And Coefficient Set# 5 would be which is equal to Coefficient Set# 1. Coefficient Set# 6 will be equal to Coefficient Set# 2, Coefficient Set# 7 will be equal to Coefficient Set# 3, and so on. In this scenario, for a simple second-order (each Coefficient Set has three coefficients) FIR filter, our Translating FIR Filter is implemented as shown in Figure A2 with N = 4 unique Coefficient Sets. As each new $ x(n) $ sample arrives to the filter the rotary switches jump to the next Coefficient Set. (After Coefficient Set $ h_4(k) $ is used we switch back to Coefficient Set $ h_1(k) $.)
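The Appendix A equivalence is easy to verify numerically for the $ f_t = f_s/4 $ case: the time-varying Coefficient Sets give exactly the same output as cosine mixing followed by an ordinary FIR filter (a sketch with numpy; the Hamming-window prototype filter and signal here are just stand-ins):

```python
import numpy as np

# Verify: mixing with cos(2*pi*n*ft/fs) then FIR filtering equals a
# "Translating FIR Filter" with N rotating Coefficient Sets.
fs, ft = 1.0, 0.25          # ft is an integer submultiple of fs (fs/4)
K, N = 6, 4                 # prototype filter taps, Coefficient Sets
rng = np.random.default_rng(0)
h = np.hamming(K); h /= h.sum()          # stand-in lowpass prototype
x = rng.standard_normal(200)

# Reference path (Figure 1 style): cosine mixing, then ordinary FIR
mixed = x * np.cos(2*np.pi*np.arange(len(x))*ft/fs)
y_ref = np.convolve(mixed, h)[:len(x)]

# Translating filter: Coefficient Set m holds h(k)*cos(2*pi*(m-k)*ft/fs)
sets = np.array([[h[k]*np.cos(2*np.pi*(m - k)*ft/fs) for k in range(K)]
                 for m in range(N)])
y = np.zeros(len(x))
for n in range(len(x)):
    taps = sets[n % N]                   # the "rotary switch"
    for k in range(K):
        if n - k >= 0:
            y[n] += taps[k] * x[n - k]

print(np.allclose(y, y_ref))
```

The two outputs agree to machine precision, because cos(2π(n−k)f_t/f_s) repeats with period N in n whenever N·f_t/f_s is an integer.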
Again, this scheme only works when $ f_t $ is an integer submultiple of $ f_s $. Figure A2: Coefficient Sets for a second-order lowpass Translating FIR Filter when $f_t = f_s/4$. Appendix B: Important Issues Regarding Translating FIR Filters • For down-conversion the cosine mixing sequence in Figure 1 would normally be the negative-frequency cosine $ cos(2\pi n [-f_t]/f_s) $ sequence. However, because $ cos(2\pi n [-f_t]/f_s) = cos(2 \pi n f_t / f_s) $, we'll follow Reference [2]'s convention and use the $ cos(2 \pi n f_t /f_s) $ mixing sequence. • The $ f_t $ translation frequency must be an integer submultiple of $ f_s $ to guarantee proper operation in Figure 2. The origin of this restriction is explained in Appendix A. • Again, the inherent cosine mixing within our Translating FIR Filter translates $ x(n) $ 's spectral components both up and down in frequency. As such, the stopband cutoff frequency of the Figure 1 lowpass FIR filter must be defined so the filter attenuates those higher-frequency spectral components. • Although the coefficients within any single Coefficient Set are generally not symmetrical, if your original Figure 1 FIR filter is linear phase then the Figure 2 Translating FIR Filter will exhibit linear phase. • The Translating FIR Filter in Figure 2 is not a linear time-invariant (LTI) system! As such, we cannot determine the filter's frequency response by way of performing the discrete Fourier transform (DFT) of the filter's unit impulse response. I suggest using a "swept-frequency input" method to determine the filter's frequency response. • If the passband gain of the original Figure 1 lowpass FIR filter is G, the passband gain of the Figure 2 filter will be G/2. (That's because mixing with a cosine in Figure 1 reduces the frequency-translated spectral components' peak amplitudes by a factor of two.)
To achieve a passband gain of unity for the Figure 2 filter we can merely double the Figure 1 $ h(k) $ coefficients' values or double the amplitude of the mixing cosine sequence. • Because the coefficients within any single Figure 2 Coefficient Set are generally not symmetrical, we typically cannot implement our Figure 2 filter using a folded-FIR filter block diagram in order to reduce the number of multiplies per input sample. That's really cool! Still trying to understand how it works, but I'm sure I'll get it. I take it applications include things like IF-to-baseband processing? Thanks for posting! p.s. TeX hint: standard functions like sin, cos, tan, etc. should be preceded by a backslash so they appear in roman type, e.g.: \tan x = \frac{\sin x}{\cos x} $\tan x = \frac{\sin x}{\cos x}$ Hi Jason. You're correct in your "IF-to-baseband" comment. I think the reason this type of filtering is not so well-known is because it's not too useful. It eliminates the input cosine mixing operation, but at the expense of increased filter coefficient storage requirements. The only useful application that I can think of is complex IF-to-baseband down-conversion and decimation by two when the IF frequency is restricted to be one quarter of the input's sample rate. (Assuming I haven't won the Lottery in the meantime, I might write a blog about that application. Who knows?) Thanks for the TeX hint Jason. Thanks Rick, academically impressive and intuitive. With due respect, practically no way I will go this path. Direct mixing is so simple, especially with Fs/4, etc., followed by a classic filter. Kaz Hi Kaz. I agree with you! The purpose of my blog was to merely show the not-so-well-known concept of embedding sinusoidal sequences within FIR filter coefficients.
However, I see that embedding sinusoidal sequences within FIR filter coefficients is a major feature of the "channelizers" fred harris describes in Chapter 6 of his "Multirate Signal Processing For Communications Systems" book. I'll have to study fred's material in more detail. (When fred speaks, I listen.) Yeah, I have the same observation. For a single band the classical approach of a frequency rotator followed by a down-sampling filter chain (pick a method) can be done with much less hardware. But a channelizer is a different story. For instance, one easy-to-understand method is a multi-level band-splitter. Each stage does a high/mid/low band split and decimates by 2 using a half-band filter and an Fs/4 rotator. At the end is a selector matrix delivering some portion of the band. This allows the use of a very high speed direct RF sampling A/D with fairly modest hardware, while still filtering to achieve noise gain and selectivity. The simplest stages closest to the A/D can be broken into multiple paths so that the logic can meet timing. So the point is that a polyphase method may be the only practical way to process gigabit samples.
Search Now showing items 1-10 of 31 The ALICE Transition Radiation Detector: Construction, operation, and performance (Elsevier, 2018-02) The Transition Radiation Detector (TRD) was designed and built to enhance the capabilities of the ALICE detector at the Large Hadron Collider (LHC). While aimed at providing electron identification and triggering, the TRD ... Constraining the magnitude of the Chiral Magnetic Effect with Event Shape Engineering in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV (Elsevier, 2018-02) In ultrarelativistic heavy-ion collisions, the event-by-event variation of the elliptic flow $v_2$ reflects fluctuations in the shape of the initial state of the system. This allows to select events with the same centrality ... First measurement of jet mass in Pb–Pb and p–Pb collisions at the LHC (Elsevier, 2018-01) This letter presents the first measurement of jet mass in Pb-Pb and p-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV and 5.02 TeV, respectively. Both the jet energy and the jet mass are expected to be sensitive to jet ... First measurement of $\Xi_{\rm c}^0$ production in pp collisions at $\mathbf{\sqrt{s}}$ = 7 TeV (Elsevier, 2018-06) The production of the charm-strange baryon $\Xi_{\rm c}^0$ is measured for the first time at the LHC via its semileptonic decay into e$^+\Xi^-\nu_{\rm e}$ in pp collisions at $\sqrt{s}=7$ TeV with the ALICE detector. The ... D-meson azimuthal anisotropy in mid-central Pb-Pb collisions at $\mathbf{\sqrt{s_{\rm NN}}=5.02}$ TeV (American Physical Society, 2018-03) The azimuthal anisotropy coefficient $v_2$ of prompt D$^0$, D$^+$, D$^{*+}$ and D$_s^+$ mesons was measured in mid-central (30-50% centrality class) Pb-Pb collisions at a centre-of-mass energy per nucleon pair $\sqrt{s_{\rm ... 
Search for collectivity with azimuthal J/$\psi$-hadron correlations in high multiplicity p-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 and 8.16 TeV (Elsevier, 2018-05) We present a measurement of azimuthal correlations between inclusive J/$\psi$ and charged hadrons in p-Pb collisions recorded with the ALICE detector at the CERN LHC. The J/$\psi$ are reconstructed at forward (p-going, ... Systematic studies of correlations between different order flow harmonics in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV (American Physical Society, 2018-02) The correlations between event-by-event fluctuations of anisotropic flow harmonic amplitudes have been measured in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV with the ALICE detector at the LHC. The results are ... $\pi^0$ and $\eta$ meson production in proton-proton collisions at $\sqrt{s}=8$ TeV (Springer, 2018-03) An invariant differential cross section measurement of inclusive $\pi^{0}$ and $\eta$ meson production at mid-rapidity in pp collisions at $\sqrt{s}=8$ TeV was carried out by the ALICE experiment at the LHC. The spectra ... Energy dependence and fluctuations of anisotropic flow in Pb-Pb collisions at √sNN=5.02 and 2.76 TeV (Springer Berlin Heidelberg, 2018-07-16) Measurements of anisotropic flow coefficients with two- and multi-particle cumulants for inclusive charged particles in Pb-Pb collisions at 𝑠NN‾‾‾‾√=5.02 and 2.76 TeV are reported in the pseudorapidity range |η| < 0.8 ... Dielectron production in proton-proton collisions at √s=7 TeV (Springer, 2018-09-12) The first measurement of e^+e^− pair production at mid-rapidity (|ηe| < 0.8) in pp collisions at √s=7 TeV with ALICE at the LHC is presented. The dielectron production is studied as a function of the invariant mass (m_ee ...
Search Now showing items 1-10 of 26 Kaon femtoscopy in Pb-Pb collisions at $\sqrt{s_{\rm{NN}}}$ = 2.76 TeV (Elsevier, 2017-12-21) We present the results of three-dimensional femtoscopic analyses for charged and neutral kaons recorded by ALICE in Pb-Pb collisions at $\sqrt{s_{\rm{NN}}}$ = 2.76 TeV. Femtoscopy is used to measure the space-time ... Anomalous evolution of the near-side jet peak shape in Pb-Pb collisions at $\sqrt{s_{\mathrm{NN}}}$ = 2.76 TeV (American Physical Society, 2017-09-08) The measurement of two-particle angular correlations is a powerful tool to study jet quenching in a $p_{\mathrm{T}}$ region inaccessible by direct jet identification. In these measurements pseudorapidity ($\Delta\eta$) and ... Online data compression in the ALICE O$^2$ facility (IOP, 2017) The ALICE Collaboration and the ALICE O2 project have carried out detailed studies for a new online computing facility planned to be deployed for Run 3 of the Large Hadron Collider (LHC) at CERN. Some of the main aspects ... Evolution of the longitudinal and azimuthal structure of the near-side peak in Pb–Pb collisions at $\sqrt{s_{\rm NN}}=2.76$ TeV (American Physical Society, 2017-09-08) In two-particle angular correlation measurements, jets give rise to a near-side peak, formed by particles associated to a higher $p_{\mathrm{T}}$ trigger particle. Measurements of these correlations as a function of ... J/$\psi$ elliptic flow in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 TeV (American Physical Society, 2017-12-15) We report a precise measurement of the J/$\psi$ elliptic flow in Pb-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV with the ALICE detector at the LHC. The J/$\psi$ mesons are reconstructed at mid-rapidity ($|y| < 0.9$) ... 
Enhanced production of multi-strange hadrons in high-multiplicity proton-proton collisions (Nature Publishing Group, 2017) At sufficiently high temperature and energy density, nuclear matter undergoes a transition to a phase in which quarks and gluons are not confined: the quark–gluon plasma (QGP)1. Such an exotic state of strongly interacting ... K$^{*}(892)^{0}$ and $\phi(1020)$ meson production at high transverse momentum in pp and Pb-Pb collisions at $\sqrt{s_\mathrm{NN}}$ = 2.76 TeV (American Physical Society, 2017-06) The production of K$^{*}(892)^{0}$ and $\phi(1020)$ mesons in proton-proton (pp) and lead-lead (Pb-Pb) collisions at $\sqrt{s_\mathrm{NN}} =$ 2.76 TeV has been analyzed using a high luminosity data sample accumulated in ... Production of $\Sigma(1385)^{\pm}$ and $\Xi(1530)^{0}$ in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV (Springer, 2017-06) The transverse momentum distributions of the strange and double-strange hyperon resonances ($\Sigma(1385)^{\pm}$, $\Xi(1530)^{0}$) produced in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV were measured in the rapidity ... Charged–particle multiplicities in proton–proton collisions at $\sqrt{s}=$ 0.9 to 8 TeV, with ALICE at the LHC (Springer, 2017-01) The ALICE Collaboration has carried out a detailed study of pseudorapidity densities and multiplicity distributions of primary charged particles produced in proton-proton collisions, at $\sqrt{s} =$ 0.9, 2.36, 2.76, 7 and ... Energy dependence of forward-rapidity J/$\psi$ and $\psi(2S)$ production in pp collisions at the LHC (Springer, 2017-06) We present ALICE results on transverse momentum ($p_{\rm T}$) and rapidity ($y$) differential production cross sections, mean transverse momentum and mean transverse momentum square of inclusive J/$\psi$ and $\psi(2S)$ at ...
This question is largely about definitions of PCA/FA, so opinions might differ. My opinion is that PCA+varimax should not be called either PCA or FA, but rather explicitly referred to e.g. as "varimax-rotated PCA". I should add that this is quite a confusing topic. In this answer I want to explain what a rotation actually is; this will require some mathematics. A casual reader can skip directly to the illustration. Only then can we discuss whether PCA+rotation should or should not be called "PCA". One reference is Jolliffe's book "Principal Component Analysis", section 11.1 "Rotation of Principal Components", but I find it could be clearer. Let $\mathbf X$ be an $n \times p$ data matrix which we assume is centered. PCA amounts (see my answer here) to a singular-value decomposition: $\mathbf X=\mathbf{USV}^\top$. There are two equivalent but complementary views on this decomposition: a more PCA-style "projection" view and a more FA-style "latent variables" view. According to the PCA-style view, we found a bunch of orthogonal directions $\mathbf V$ (these are eigenvectors of the covariance matrix, also called "principal directions" or "axes"), and "principal components" $\mathbf{US}$ (also called principal component "scores") are projections of the data on these directions. Principal components are uncorrelated, the first one has maximally possible variance, etc. We can write: $$\mathbf X = \mathbf{US}\cdot \mathbf V^\top = \text{Scores} \cdot \text{Principal directions}.$$ According to the FA-style view, we found some uncorrelated unit-variance "latent factors" that give rise to the observed variables via "loadings".
Indeed, $\widetilde{\mathbf U}=\sqrt{n-1}\mathbf{U}$ are standardized principal components (uncorrelated and with unit variance), and if we define loadings as $\mathbf L = \mathbf{VS}/\sqrt{n-1}$, then $$\mathbf X= \sqrt{n-1}\mathbf{U}\cdot (\mathbf{VS}/\sqrt{n-1})^\top =\widetilde{\mathbf U}\cdot \mathbf L^\top = \text{Standardized scores} \cdot \text{Loadings}.$$ (Note that $\mathbf{S}^\top=\mathbf{S}$.) Both views are equivalent. Note that loadings are eigenvectors scaled by the square roots of the respective eigenvalues ($\mathbf{S}^2/(n-1)$ are the eigenvalues of the covariance matrix). (I should add in brackets that PCA$\ne$FA; FA explicitly aims at finding latent factors that are linearly mapped to the observed variables via loadings; it is more flexible than PCA and yields different loadings. That is why I prefer to call the above "FA-style view on PCA" and not FA, even though some people take it to be one of the FA methods.) Now, what does a rotation do? E.g. an orthogonal rotation, such as varimax. First, it considers only $k<p$ components, i.e.: $$\mathbf X \approx \mathbf U_k \mathbf S_k \mathbf V_k^\top = \widetilde{\mathbf U}_k \mathbf L^\top_k.$$ Then it takes a square orthogonal $k \times k$ matrix $\mathbf T$, and plugs $\mathbf T\mathbf T^\top=\mathbf I$ into this decomposition: $$\mathbf X \approx \mathbf U_k \mathbf S_k \mathbf V_k^\top = \mathbf U_k \mathbf T \mathbf T^\top \mathbf S_k \mathbf V_k^\top = \widetilde{\mathbf U}_\mathrm{rot} \mathbf L^\top_\mathrm{rot},$$ where rotated loadings are given by $\mathbf L_\mathrm{rot} = \mathbf L_k \mathbf T$, and rotated standardized scores are given by $\widetilde{\mathbf U}_\mathrm{rot} = \widetilde{\mathbf U}_k \mathbf T$. (The purpose of this is to find $\mathbf T$ such that $\mathbf L_\mathrm{rot}$ becomes as close to being sparse as possible, to facilitate its interpretation.) Note that what is rotated are: (1) standardized scores, (2) loadings. But not the raw scores and not the principal directions!
So the rotation happens in the latent space, not in the original space. This is absolutely crucial. From the FA-style point of view, nothing much happened. (A) The latent factors are still uncorrelated and standardized. (B) They are still mapped to the observed variables via (rotated) loadings. (C) The amount of variance captured by each component/factor is given by the sum of squared values of the corresponding loadings column in $\mathbf L_\mathrm{rot}$. (D) Geometrically, loadings still span the same $k$-dimensional subspace in $\mathbb R^p$ (the subspace spanned by the first $k$ PCA eigenvectors). (E) The approximation to $\mathbf X$ and the reconstruction error did not change at all. (F) The covariance matrix is still approximated equally well:$$\boldsymbol \Sigma \approx \mathbf L_k\mathbf L_k^\top = \mathbf L_\mathrm{rot}\mathbf L_\mathrm{rot}^\top.$$ But the PCA-style point of view has practically collapsed. Rotated loadings do not correspond to orthogonal directions/axes in $\mathbb R^p$ anymore, i.e. columns of $\mathbf L_\mathrm{rot}$ are not orthogonal! Worse, if you [orthogonally] project the data onto the directions given by the rotated loadings, you will get correlated (!) projections and will not be able to recover the scores. [Instead, to compute the standardized scores after rotation, one needs to multiply the data matrix with the pseudo-inverse of loadings $\widetilde{\mathbf U}_\mathrm{rot} = \mathbf X (\mathbf L_\mathrm{rot}^+)^\top$. Alternatively, one can simply rotate the original standardized scores with the rotation matrix: $\widetilde{\mathbf U}_\mathrm{rot} = \widetilde{\mathbf U} \mathbf T$.] Also, the rotated components do not successively capture the maximal amount of variance: the variance gets redistributed among the components (even though all $k$ rotated components capture exactly as much variance as all $k$ original principal components). Here is an illustration. The data is a 2D ellipse stretched along the main diagonal. 
The first principal direction is the main diagonal, and the second one is orthogonal to it. PCA loading vectors (eigenvectors scaled by the square roots of the eigenvalues) are shown in red -- pointing in both directions and also stretched by a constant factor for visibility. Then I applied an orthogonal rotation by $30^\circ$ to the loadings. The resulting loading vectors are shown in magenta. Note how they are not orthogonal (!). An FA-style intuition here is as follows: imagine a "latent space" where points fill a small circle (they come from a 2D Gaussian with unit variances). This distribution of points is then stretched along the PCA loadings (red) to become the data ellipse that we see in this figure. However, the same distribution of points can be rotated and then stretched along the rotated PCA loadings (magenta) to become the same data ellipse. [To actually see that an orthogonal rotation of loadings is a rotation, one needs to look at a PCA biplot; there the vectors/rays corresponding to the original variables will simply rotate.] Let us summarize. After an orthogonal rotation (such as varimax), the "rotated-principal" axes are not orthogonal, and orthogonal projections onto them do not make sense. So one should rather drop this whole axes/projections point of view. It would be weird to still call it PCA (which is all about projections with maximal variance etc.). From the FA-style point of view, we simply rotated our (standardized and uncorrelated) latent factors, which is a valid operation. There are no "projections" in FA; instead, latent factors generate the observed variables via loadings. This logic is still preserved. However, we started with principal components, which are not actually factors (as PCA is not the same as FA). So it would be weird to call it FA as well. Instead of debating whether one "should" rather call it PCA or FA, I would suggest being meticulous in specifying the exact procedure used: "PCA followed by a varimax rotation". Postscriptum.
It is possible to consider an alternative rotation procedure, where $\mathbf{TT}^\top$ is inserted between $\mathbf{US}$ and $\mathbf V^\top$. This would rotate raw scores and eigenvectors (instead of standardized scores and loadings). The biggest problem with this approach is that after such a "rotation", the scores will no longer be uncorrelated, which is pretty fatal for PCA. One can do it, but it is not how rotations are usually understood and applied.
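To make the algebra above concrete, here is a small NumPy sketch (my own, not from the answer); it uses an arbitrary fixed $30^\circ$ rotation in place of a varimax-optimized $\mathbf T$, and a made-up toy data matrix.

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy centered data: n = 500 samples, p = 3 variables (an arbitrary mixing).
X = rng.normal(size=(500, 3)) @ np.array([[2.0, 0.5, 0.1],
                                          [0.0, 1.0, 0.3],
                                          [0.0, 0.0, 0.5]])
X -= X.mean(axis=0)
n = X.shape[0]

U, s, Vt = np.linalg.svd(X, full_matrices=False)
k = 2
L = (Vt.T * s)[:, :k] / np.sqrt(n - 1)   # loadings L = VS/sqrt(n-1), first k columns
U_std = np.sqrt(n - 1) * U[:, :k]        # standardized scores

theta = np.deg2rad(30)                   # any orthogonal T; varimax would optimize one
T = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
L_rot = L @ T                            # rotated loadings
U_rot = U_std @ T                        # rotated standardized scores

# (F) The covariance approximation is unchanged:
print(np.allclose(L @ L.T, L_rot @ L_rot.T))        # True
# (E) The rank-k approximation to X is unchanged:
print(np.allclose(U_std @ L.T, U_rot @ L_rot.T))    # True
# But the rotated loading columns are generally NOT orthogonal:
print(L_rot[:, 0] @ L_rot[:, 1])
```

The last printed dot product is nonzero (its magnitude is proportional to the eigenvalue gap), which is exactly why the "axes/projections" view collapses after rotation.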
I'm not entirely sure of the extent of their power, whether it is limited to simple compound manipulation, but theoretically, the amount of energy in a 10m sphere is more than he will ever need. Matter can be converted directly to energy, for example, during matter-antimatter collisions, so if we know the amount of matter in the area around him, we know the amount of energy. Let's say that he is standing on the ground, with a 10m hemisphere of air above him, and a 10m hemisphere of dirt below him. Well, the density of air (at sea level) is $1.225~\mathrm{kg/m^3}$ and the density of dirt roughly $1000~\mathrm{kg/m^3}$. The volume of a hemisphere can be calculated as: $$V = \frac{\frac43\pi r^3}2$$ so, with a 10m radius, it is$$\frac{\frac43\pi \times 10^3}2$$which is about $2094 ~\mathrm{m^3}$. Now, we can calculate the mass of the air around him:$$1.225 \times 2094 \approx 2565~\mathrm{kg}$$and of the dirt$$1000 \times 2094 = 2094000~\mathrm{kg}$$ So, in total the mage has about $2096565 ~\mathrm{kg}$ of matter around him. Plug into the mass-energy equation:$$E = mc^2\\~\\~\\E = 2096565 \times (3 \times 10^8)^2$$which works out to about $1.89 \times 10^{23} ~ \mathrm J$. So, your mage has access to roughly$$189{,}000{,}000{,}000{,}000{,}000{,}000{,}000$$joules of energy. Hurl a fireball of that and you'd wipe out the planet. To put that into context, the energy released by the Little Boy atomic bomb was $6.276 \times 10^{13}$ joules. The energy your mage has access to is enormous, equivalent to about $3$ billion nukes.
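A quick Python check of the arithmetic (note the hemisphere volume uses $r^3$, and the densities are the assumed round values from the answer):

```python
import math

r = 10.0                         # radius of the hemisphere, m
c = 3e8                          # speed of light, m/s (rounded)
V = (4 / 3) * math.pi * r**3 / 2 # hemisphere volume, about 2094 m^3

m_air = 1.225 * V                # about 2.57e3 kg of air
m_dirt = 1000.0 * V              # about 2.09e6 kg of dirt
m = m_air + m_dirt

E = m * c**2                     # about 1.89e23 J
little_boy = 6.276e13            # yield of Little Boy, J
print(f"V = {V:.0f} m^3, E = {E:.3e} J, = {E / little_boy:.2e} Little Boys")
```

The result is about three billion Little Boys, so the order-of-magnitude conclusion (planet-wrecking) stands regardless of how round the density figures are.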
Search Now showing items 1-1 of 1 Production of charged pions, kaons and protons at large transverse momenta in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV (Elsevier, 2014-09) Transverse momentum spectra of $\pi^{\pm}, K^{\pm}$ and $p(\bar{p})$ up to $p_T$ = 20 GeV/c at mid-rapidity, |y| $\le$ 0.8, in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV have been measured using the ALICE detector ...
This post is about Neural Combinatorial Optimization, which tackles NP-hard combinatorial optimization problems using recurrent neural networks and reinforcement learning. Most NP-hard combinatorial problems are tackled by algorithms that rely on handcrafted heuristics. The performance of such heuristics varies wildly depending on the conditions of problem instances. Hence, it is natural to seek a machine learning approach that can potentially generalize across a variety of problem instances or even different problems. One may ask "why RL over supervised learning?" The paper gives three reasons for applying RL to NP-hard combinatorial optimization problems. First, it is difficult to perform supervised learning as it is hard to obtain globally optimal labels. If it weren't hard, it would imply we know how to solve the problems efficiently. Second, even if we use labels generated by heuristics, the performance of a supervised model will be tied to the local optima and not surpass them. Third, RL may find solutions different from the existing ones and hopefully at a competitive performance. One more reason I would add is that RL may be even more preferable to supervised learning when one must solve a sequence of problems over time where problem instances are not independent. That is the kind of problem RL can potentially excel at. Of course, RL requires a good reward function, which is hard to engineer manually in practice. If a misspecified reward function is used, all sorts of weird behaviors can come about. However, in our problem, we have a reward function that naturally arises out of the problem definition: the total distance of a tour. Problem formulation Now, we prepare the ingredients for RL. We first modify the TSP formulation a bit. For the sake of simplicity, we consider the 2D Euclidean TSP. Again, we are given a set of $n$ cities, $\mathbf{x} = \{ x_i \}_{i=1}^n$ with each city $x_i \in \mathbb{R}^2$.
Our aim is to learn a stochastic policy modeled as: $$p(\pi \mid \mathbf{x}; W) = \prod_{i=1}^{n} p(\pi_i \mid \pi_{<i}, \mathbf{x}; W),$$ where $W$ is the model parameters, and the cost function is given by the total tour length $$L_{\mathbf{x}}(\pi) = \lVert x_{\pi_n} - x_{\pi_1} \rVert + \sum_{i=1}^{n-1} \lVert x_{\pi_i} - x_{\pi_{i+1}} \rVert.$$ We defined a model that assigns probabilities over the space of tours. Note the identity follows from the probability chain rule. Sequence-to-Sequence Models At this point, you may notice the factorized formulation (the right-hand side) is almost the same formulation as sequence-to-sequence models. Sequence-to-Sequence models use Recurrent Neural Networks (RNN) to accept input sequences of varying lengths (you can train one model for an arbitrary number of cities) and predict output sequences potentially of different lengths (not so relevant to our problem). In case you are not familiar, an RNN is a neural network that takes two types of inputs sequentially: the real input given by a problem (e.g. a word in a given sentence) and another computed by itself at a previous step. The latter (also called a hidden state) is computed at each time step by $$h_t = f(h_{t-1}, x_t),$$ where $f$ is usually just a composition of an affine transformation and an activation function. RNN is nice since $h$ can carry information from the past (at least at a superficial level). Sequence-to-Sequence models usually consist of two phases, encoding and decoding, where typically an RNN(s) is used for both. Let $\mathbf{e} = (e_1, \ldots, e_n)$ and $\mathbf{d} = (d_1, \ldots, d_m)$ denote the $d$-dimensional hidden states ($h_t$) of an encoder and a decoder, respectively. For encoding, you just feed to an RNN a component (token) of an input sequence one at a time to collect $\mathbf{e}$. For decoding, you add one more computation to each $d_t$, typically a softmax readout $y_t = \operatorname{softmax}(W_o d_t)$ over the output dictionary, where $y_t$ is the output. Can we apply sequence-to-sequence models to our problem out of the box? The answer is no. Unlike the typical seq-to-seq formulation, in our problem, the output dictionary is fully determined by a given input sequence: i.e. $\pi_i \in \mathbf{x}, \forall i$. By the output dictionary I mean the set of values $p(\pi_i \vert \pi_{<i}, \mathbf{x})$ ranges over (ignoring feasibility of a tour for now).
Contrast this to a machine translation task from English to French where the output dictionary (the French vocabulary) is fixed a priori and determined independently of inputs. Pointer Networks Now, how can we adjust output dictionary sizes depending on given input sequences? Clearly, we do not want to build separate models for different $n$. One simple idea called Pointer Networks has been proposed. The mechanism is similar to the attention model. At each decoding step, Pointer Networks learn a softmax distribution over all input hidden states: $$u_j^i = v^\top \tanh(W_1 e_j + W_2 d_i), \qquad p(\pi_i \mid \pi_{<i}, \mathbf{x}) = \operatorname{softmax}(u^i).$$ Notice $u_j^i$ assigns a scalar score to input position $j$ at the $i$-th decoding step: essentially, how much attention to pay where at each decoding time. This simple formulation, by design, allows output dictionary sizes to change according to input sequences. Neural Combinatorial Optimization with RL Now, we can finally talk about the main paper. What it does is basically Pointer Networks plus RL. First, we make a minor adjustment to the pointing mechanism above, so our model predicts feasible tours only (no city can be selected twice). Below, we just iterate over each city and if it has been already selected before the $i$-th step, we give it $-\infty$ attention: $$u_j^i = -\infty \quad \text{for every } j \text{ already visited before step } i.$$ From above, we derive the policy $\mathbf{X} \mapsto \Pi$. The setup is similar to a multi-armed bandit problem. You have $n!$ bandits to play and can choose only one lever to pull at each time. For each instance of a problem (an input graph of cities), you choose one permutation. (Technically, the dimension of $\mathbf{x}$ varies but we do not want to introduce an infinite sequence…)
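A minimal NumPy sketch of a single masked pointing step (my own illustration; the dimensions and weight matrices are arbitrary, not the paper's trained parameters):

```python
import numpy as np

rng = np.random.default_rng(0)
n, dim = 5, 8                       # 5 cities, hidden size 8 (arbitrary)
e = rng.normal(size=(n, dim))       # encoder hidden states e_1..e_n
d_i = rng.normal(size=dim)          # decoder state at decoding step i
W1 = rng.normal(size=(dim, dim))
W2 = rng.normal(size=(dim, dim))
v = rng.normal(size=dim)

# u_j^i = v^T tanh(W1 e_j + W2 d_i), with -inf for already-visited cities
u = np.tanh(e @ W1.T + d_i @ W2.T) @ v
visited = np.array([True, False, True, False, False])
u[visited] = -np.inf

p = np.exp(u - u[~visited].max())
p /= p.sum()                        # softmax over the *input* positions
print(p)                            # visited cities get exactly zero probability
```

Because the softmax is taken over the input positions themselves, the same parameters work for any number of cities, and the $-\infty$ mask guarantees every sampled tour is feasible.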
All you need to obtain is, well, a gradient of your objective with respect to the parameters of your policy. Concretely, consider $$J(W \mid \mathbf{x}) = \mathbb{E}_{\pi \sim p(\cdot \mid \mathbf{x}; W)} \left[ L_{\mathbf{x}}(\pi) \right],$$ which is the expected total tour distance. Assume there exists some unknown distribution $\mathcal{D}$ that generates problem instances $\mathbf{x}$. Finally, we set our objective to be $J(W) = \mathbb{E}_{\mathbf{x} \sim \mathcal{D}} \left[ J(W \mid \mathbf{x}) \right]$. Now, how do we compute $\nabla_W J(W \vert \mathbf{x})$? We need to take a gradient of an expectation, which feels hard at a first encounter. In fact, this task is so common in machine learning that there is a special name for it: the log derivative trick. It gives $$\nabla_W J(W \mid \mathbf{x}) = \mathbb{E}_{\pi} \left[ L_{\mathbf{x}}(\pi) \nabla_W \log p(\pi \mid \mathbf{x}; W) \right] \approx \frac{1}{B} \sum_{j=1}^{B} L_{\mathbf{x}}(\pi^{(j)}) \nabla_W \log p(\pi^{(j)} \mid \mathbf{x}; W).$$ We observe the final line is not only an unbiased estimate, but also is easy to evaluate. Given a prediction $\pi$, it is easy to evaluate $L_{\mathbf{x}}(\pi)$. As long as we keep our model as compositions of differentiable operations, obtaining $\nabla_W \log{p(\pi^{(j)} \vert \mathbf{x};W)}$ is a matter of applying the derivative chain rule. Now, let us make one last refinement to our policy gradient estimate. Notice the gradient estimate is multiplied by $L_{\mathbf{x}}(\pi)$. This is not desirable since larger problem instances will tend to have higher costs, regardless of $\pi$, than smaller problem instances: e.g. tours for 10 cities would likely be shorter than for 1,000 cities. Hence, larger problems will likely have a disproportionately larger effect during training. Can we adjust the cost term according to some baseline that reflects the intrinsic difficulty of $\mathbf{x}$? The trick is to observe $$\mathbb{E}_{\pi} \left[ b(\mathbf{x}) \nabla_W \log p(\pi \mid \mathbf{x}; W) \right] = 0,$$ which is not hard to prove. Moreover, we can explicitly parameterize $b(\mathbf{x}; W_v)$. This is called an Actor-Critic algorithm where both policy and baseline are given parameters. There are various approaches regarding what $b(\mathbf{x})$ should be. To remain unbiased, we just require $b(\mathbf{x})$ to be some function that does not depend on $\pi$. The choice is especially critical for non-linearities like a neural network.
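The log-derivative trick is easy to verify numerically on a toy categorical "policy" with three arms (this standalone sketch is mine, not the paper's model; the logits and costs are made up):

```python
import numpy as np

rng = np.random.default_rng(1)
theta = np.array([0.2, -0.1, 0.4])   # logits W of a 3-arm softmax policy
cost = np.array([3.0, 1.0, 2.0])     # L(pi) for each arm

def softmax(z):
    z = z - z.max()
    p = np.exp(z)
    return p / p.sum()

p = softmax(theta)
J = p @ cost
# Exact gradient of J = E[L] w.r.t. the logits: dJ/dtheta_k = p_k * (L_k - J)
exact = p * (cost - J)

# REINFORCE estimate: average of L(pi) * grad log p(pi) over sampled arms
B = 200_000
samples = rng.choice(3, size=B, p=p)
grad_logp = np.eye(3)[samples] - p   # gradient of log-softmax at each sampled arm
estimate = (cost[samples][:, None] * grad_logp).mean(axis=0)

print(exact, estimate)               # the two vectors should be close
```

With a couple of hundred thousand samples the Monte Carlo estimate matches the exact gradient to two or three decimal places, which is the unbiasedness claim above made tangible.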
In that case, there is usually no convergence guarantee, and explicit tricks that stabilize training are required. For training the critic, we can use another objective function. We choose a Mean Squared Error (MSE) loss: $$\mathcal{L}(W_v) = \frac{1}{B} \sum_{j=1}^{B} \left( b(\mathbf{x}^{(j)}; W_v) - L_{\mathbf{x}^{(j)}}(\pi^{(j)}) \right)^2.$$ Notice the cost is conditioned on $W$, the current policy. The exact architecture of the critic can be found in the Appendix of the paper. Basically, it begins with an LSTM encoder, a special kind of RNN designed to maintain long-term information. An RNN is still needed to handle a variable-length input. It is then followed by a process block, a series of attention-like computations over an LSTM hidden state. Finally, a few fully connected layers are used to produce a scalar. Now, how do we train our actor ($p(\pi\vert \mathbf{x};W)$) and critic ($b(\mathbf{x}; W_v)$)? You can just iteratively apply a first-order stochastic optimization method such as Adam on sampled problem instances, using the policy gradient estimate for the actor and the MSE loss for the critic. Inference: Search Strategies Generally speaking, doing inference with a multi-variate output space (sequence/tour) is computationally challenging. There are usually three types of questions you could ask a probability distribution: random samples, the mode, and the mean. Samples of $\pi$ can be generated through $n$ softmax operations (pointing), where each softmax can be simulated by sampling methods like inverse transform sampling, giving $O(n^2)$ per sample tour. To control the diversity of samples, we can introduce a temperature hyperparameter: $\text{softmax}(x/T)$. If $T > 1$ is large, $T$ will discount the weight of $x$ such that the distribution becomes relatively flatter. If $T$ is small, the effect is reversed. The paper cross-validated with a grid search to find a nice $T$. As for the mean, we can estimate $\mathbb{E}_{\pi \sim p(\cdot \vert \mathbf{x}; W)}\left[ L_{\mathbf{x}}(\pi) \right]$ with Monte Carlo samples. As for the mode, we must solve $\arg\max_{\pi} p(\pi \vert \mathbf{x}; W)$. As we know, an exhaustive search is computationally infeasible, for it is $O(n!)$. Recall the factorization $p(\pi \vert \mathbf{x}) = \prod_{i=1}^{n} p(\pi_i \vert \pi_{<i}, \mathbf{x})$, which has the form of a tree traversal.
One of the easiest approaches is to make $n$ greedy decisions, $O(n^2)$. Another popular approach is called beam search, where we keep the top-$B$ candidates at each step, $O(n^2B)$. Had our method been supervised learning, the computational burden of mode inference during training would have been reduced, as it can be easily parallelized by conditioning on ground-truth labels (teacher forcing). With reinforcement learning, we must unroll our mode prediction sequentially. Now, what kind of inference did the paper choose at test time? It still used sampling and suggested two search methods, both driven by sampling: (vanilla) sampling and Active Search. Given some test input $\mathbf{x}$, the (vanilla) sampling approach simply generates a batch of samples from a fixed distribution $p( \pi \vert \mathbf{x}; W)$. Subsequently, it selects the shortest one based on the cost function, which can be evaluated cheaply at $\Theta(n)$. Active Search, however, is a more diligent student; it generates $t$ batches of samples sequentially, one batch at a time, and actively updates the parameters based on the loss computed for each batch. Basically, it does extra training fine-tuned on a specific test input. The paper reports a significant performance boost using Active Search over vanilla sampling. Well, that is all. The model produced competitive experimental results both in terms of optimality and wall-clock inference time. It outperformed OR-Tools' local search and the Christofides algorithm. This is impressive and promising in that there is not much hand-crafted knowledge/heuristics baked into the model. I can imagine follow-up research producing even more impressive results with more compute and better techniques. Issues Remaining Of course, the model is not applicable to all NP-hard combinatorial optimization problems. For example, it capitalized on a few nice properties of TSP.
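The effect of the temperature hyperparameter on the pointing distribution can be sketched as follows (a generic illustration with made-up attention scores, not the paper's exact setup):

```python
import numpy as np

def softmax_T(u, T=1.0):
    """Temperature-scaled softmax: T > 1 flattens the distribution,
    T < 1 sharpens it toward the greedy (argmax) choice."""
    z = u / T
    z = z - z.max()
    p = np.exp(z)
    return p / p.sum()

u = np.array([2.0, 1.0, 0.5, 0.0])   # made-up attention scores over 4 cities

p_sharp = softmax_T(u, T=0.5)
p_base = softmax_T(u, T=1.0)
p_flat = softmax_T(u, T=5.0)

# Greedy decoding takes the argmax of each pointing distribution, while
# sampling draws from it; a flatter distribution yields more diverse tours.
print(p_flat.max() < p_base.max() < p_sharp.max())  # True
```

This is why a grid search over $T$ matters for the sampling-based strategies: too sharp and all samples collapse onto the greedy tour, too flat and the samples ignore what the model has learned.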
First, it is easy to generate a feasible tour (given a complete graph); we just need to prevent our model from repeating the same city. This may not always be the case (e.g. TSP with constraints like time windows). In that situation, relaxing the feasibility constraint at the inference phase may be required, while feasibility is enforced post hoc or via regularization. Second, TSP is a one-step problem without any time-varying components in its inputs. It would not be straightforward to apply the model to a multi-step problem like the Vehicle Routing Problem with capacities. We will talk about another paper that deals with the second issue soon.
NTS Abstracts Spring 2019

Jan 23 Yunqing Tang Reductions of abelian surfaces over global function fields Abstract: For a non-isotrivial ordinary abelian surface $A$ over a global function field, under mild assumptions, we prove that there are infinitely many places modulo which $A$ is geometrically isogenous to the product of two elliptic curves. This result can be viewed as a generalization of a theorem of Chai and Oort. This is joint work with Davesh Maulik and Ananth Shankar.

Jan 24 Hassan-Mao-Smith--Zhu The diophantine exponent of the $\mathbb{Z}/q\mathbb{Z}$ points of $S^{d-2}\subset S^d$ Abstract: Assume a polynomial-time algorithm for factoring integers, Conjecture~\ref{conj}, $d\geq 3$, and $q$ and $p$ prime numbers, where $p\leq q^A$ for some $A>0$. We develop a polynomial-time algorithm in $\log(q)$ that lifts every $\mathbb{Z}/q\mathbb{Z}$ point of $S^{d-2}\subset S^{d}$ to a $\mathbb{Z}[1/p]$ point of $S^d$ with the minimum height. We implement our algorithm for $d=3 \text{ and } 4$. Based on our numerical results, we formulate a conjecture which can be checked in polynomial time and gives the optimal bound on the diophantine exponent of the $\mathbb{Z}/q\mathbb{Z}$ points of $S^{d-2}\subset S^d$.

Jan 31 Kyle Pratt Breaking the $\frac{1}{2}$-barrier for the twisted second moment of Dirichlet $L$-functions Abstract: I will discuss recent work, joint with Bui, Robles, and Zaharescu, on a moment problem for Dirichlet $L$-functions. By way of motivation I will spend some time discussing the Lindel\"of Hypothesis, and work of Bettin, Chandee, and Radziwi\l\l. The talk will be accessible, as I will give lots of background information and will not dwell on technicalities.

Feb 7 Shamgar Gurevich Harmonic Analysis on $GL_n$ over finite fields Abstract: There are many formulas that express interesting properties of a group G in terms of sums over its characters.
For evaluating or estimating these sums, one of the most salient quantities to understand is the {\it character ratio}: $$\mathrm{trace}(\rho(g))/\dim(\rho),$$ for an irreducible representation $\rho$ of G and an element g of G. For example, Diaconis and Shahshahani stated a formula of this type for analyzing G-biinvariant random walks on G. It turns out that, for classical groups G over finite fields (which provide most examples of finite simple groups), there is a natural invariant of representations that provides strong information on the character ratio. We call this invariant {\it rank}. This talk will discuss the notion of rank for $GL_n$ over finite fields, and apply the results to random walks. This is joint work with Roger Howe (TAMU).

Feb 14 Tonghai Yang The Lambda invariant and its CM values Abstract: The Lambda invariant, which parametrizes elliptic curves with $2$-torsion ($X_0(2)$), has some interesting properties, some similar to those of the $j$-invariant, and some not. For example, $\lambda(\frac{d+\sqrt d}2)$ is sometimes a unit. In this talk, I will briefly describe some of these properties. This is joint work with Hongbo Yin and Peng Yu.

Feb 28 Brian Lawrence Diophantine problems and a p-adic period map. Abstract: I will outline a proof of Mordell's conjecture / Faltings's theorem using p-adic Hodge theory. Joint with Akshay Venkatesh.
I was told today by someone smarter than myself that the time-dependent Schroedinger equation in one dimension was invariant under a Galilean transformation of $(x,t)$, namely under $$\begin{cases}x'=x+ut\\t'=t\end{cases}.\tag{1}$$ Going to check this, I looked at the time dependent Schroedinger equation of a free particle. $$i\hbar\frac{\partial\psi}{\partial t} = -\frac{\hbar^2}{2m}\frac{\partial^2\psi}{\partial x^2}\tag{2}$$ Computing the transformation of the differential operators via the chain rule: $$\begin{cases} \frac{\partial}{\partial x}=\frac{\partial t'}{\partial x}\frac{\partial}{\partial t'}+\frac{\partial x'}{\partial x}\frac{\partial}{\partial x'} = \frac{\partial}{\partial x'} \\ \frac{\partial}{\partial t}=\frac{\partial t'}{\partial t}\frac{\partial}{\partial t'}+\frac{\partial x'}{\partial t}\frac{\partial}{\partial x'} = \frac{\partial}{\partial t'}+u\frac{\partial}{\partial x'} \end{cases}$$ and plugging all of this back into $(2)$ gives the TDSE in the relatively inertial frame $(x',t')$. $$i\hbar\left(\frac{\partial\psi}{\partial t'}+u\frac{\partial\psi}{\partial x'}\right)=-\frac{\hbar^2}{2m}\frac{\partial^2\psi}{\partial x'^2}\tag{3}$$ This would imply that there's some additional term like $i\hbar u\frac{\partial\psi}{\partial x'}$ in the equation that represents an asymmetry under $(1)$. We have that said term is not zero (for that would imply that the wavefunction is space-independent in the relative frame, which is clearly not the case). Clearly I've misunderstood something here - is $(2)$ not Galilean invariant after all?
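The leftover drift term can be checked symbolically. The sketch below (using SymPy) plugs a free-particle plane wave, which solves $(2)$, into the transformed equation $(3)$ and confirms a nonzero residual $-\hbar u k\,\psi$; it verifies the algebra above rather than resolving the puzzle (the standard resolution multiplies $\psi$ by a suitable phase factor):

```python
import sympy as sp

x, t = sp.symbols("x t", real=True)
u, m, hbar, k = sp.symbols("u m hbar k", positive=True)

# Plane wave solving the free TDSE (2): omega = hbar k^2 / (2 m)
w = hbar * k**2 / (2 * m)
psi = sp.exp(sp.I * (k * x - w * t))

rhs = -hbar**2 / (2 * m) * sp.diff(psi, x, 2)

# Untransformed equation (2): residual vanishes
res2 = sp.simplify(sp.I * hbar * sp.diff(psi, t) - rhs)

# Transformed equation (3): the extra drift term survives
res3 = sp.simplify(sp.I * hbar * (sp.diff(psi, t) + u * sp.diff(psi, x)) - rhs)
```

Here `res2` simplifies to zero while `res3` equals $-\hbar u k\,\psi$, which is exactly the asymmetry noted above.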
Disclaimer: This answer derives the prices of two different binary options within the Black/Scholes framework. Note that this is not an appropriate valuation model to use for non-European contracts in most real-world markets.

Up-and-In Binary Call

After reading your question for a second time, I agree with Quantuple's comment that you seem to be looking for the solution to an up-and-in binary call option. Formally, let \begin{equation}\nu = \inf \left\{ t \in \mathbb{R}_+ : S_t \geq K \right\}\end{equation} be the first hitting time of $S$ to the strike $K$. The option has a unit payoff conditional on $\nu \leq T$ and $S_T \geq K$, i.e. \begin{equation}V_T = \mathrm{1} \left\{ S_T \geq K \right\} \mathrm{1} \left\{ \nu \leq T \right\}.\end{equation} Note however that $S_T \geq K \; \Rightarrow \; \nu \leq T$ and thus $\left\{ S_T \geq K \right\} \subseteq \left\{ \nu \leq T \right\}$. Consequently, we can drop the second indicator and your payoff is just \begin{equation}V_T = \mathrm{1} \left\{ S_T \geq K \right\}.\end{equation} I.e. the price of an up-and-in binary call option is the same as that of a plain binary call option. You thus have the standard result that \begin{equation}V_0 = e^{-r T} \mathcal{N} \left( d_- \right),\end{equation} where \begin{equation}d_- = \frac{1}{\sigma \sqrt{T}} \left( \ln \left( \frac{S_0}{K} \right) + \left( r - \frac{1}{2} \sigma^2 \right) T \right).\end{equation}

Down-and-Out Binary Call

A more interesting case is the down-and-out binary call. This is how I initially understood your question. Now let \begin{equation}\nu = \inf \left\{ t \in \mathbb{R}_+ : S_t \leq K \right\}\end{equation} and \begin{equation}V_T = \mathrm{1} \left\{ S_T \geq K \right\} \mathrm{1} \left\{ \nu > T \right\}.\end{equation} This option knocks out should the spot price breach the barrier before maturity; otherwise it has a digital payoff of one. Let $\tau = T - t$ be the time-to-maturity.
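For reference, the standard result is a few lines to evaluate. This is a sketch of the cash-or-nothing call price $V_0 = e^{-rT}\mathcal{N}(d_-)$ using only the standard library (function names are my own):

```python
from math import erf, exp, log, sqrt

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def binary_call_price(S0, K, r, sigma, T):
    """Cash-or-nothing (binary) call under Black/Scholes:
    V0 = exp(-r*T) * N(d_minus).  Per the argument above, this also
    prices the up-and-in binary call, since {S_T >= K} implies the
    barrier K was hit before T."""
    d_minus = (log(S0 / K) + (r - 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    return exp(-r * T) * norm_cdf(d_minus)
```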
The valuation function $\tilde{V}(S, \tau)$ of this option satisfies the initial boundary value problem \begin{eqnarray}\mathcal{L} \left\{ \tilde{V} \right\} (S, \tau) & = & 0 \qquad (S, \tau) \in \mathcal{D},\\\tilde{V}(K, \tau) & = & 0, \qquad \forall \tau \in \mathbb{R}_+\\\tilde{V}(S, 0) & = & \mathrm{1} \{ S \geq K \},\end{eqnarray} where $\mathcal{L}$ is the Black/Scholes forward operator and $\mathcal{D} = \left\{ (S, \tau): S > K, \tau \in \mathbb{R}_+ \right\}$. Using the method of images, see e.g. Buchen (2001), the solution can be shown to be \begin{equation}\tilde{V}(S, \tau) = \mathcal{B}_K^+(S, \tau) - \stackrel{K}{\mathcal{I}} \left\{ \mathcal{B}_K^+(S, \tau) \right\},\end{equation} where \begin{eqnarray}\mathcal{B}_K^+ (S, \tau) & = & e^{-r \tau} \mathcal{N} \left( d_- \right),\\d_- & = & \frac{1}{\sigma \sqrt{\tau}} \left( \ln \left( \frac{S}{K} \right) + \left( r - \frac{1}{2} \sigma^2 \right) \tau \right),\\\stackrel{K}{\mathcal{I}} \left\{ \mathcal{B}_K^+ (S, \tau) \right\} & = & \left( \frac{S}{K} \right)^{2 \alpha} \mathcal{B}_K^+ \left( \frac{K^2}{S}, \tau \right),\\\alpha & = & \frac{1}{2} - \frac{r}{\sigma^2}.\end{eqnarray} References Buchen, Peter W. (2001) "Image Options and the Road to Barriers," Risk Magazine, Vol. 14, No. 9, pp. 127-130
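The image solution is just as mechanical to evaluate. The sketch below implements $\tilde V(S,\tau) = \mathcal{B}_K^+(S,\tau) - (S/K)^{2\alpha}\,\mathcal{B}_K^+(K^2/S,\tau)$ for $S > K$; note that it vanishes on the barrier $S = K$, matching the boundary condition:

```python
from math import erf, exp, log, sqrt

def norm_cdf(x):
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def binary_call(S, K, r, sigma, tau):
    """B_K^+(S, tau) = exp(-r*tau) * N(d_minus)."""
    d_minus = (log(S / K) + (r - 0.5 * sigma**2) * tau) / (sigma * sqrt(tau))
    return exp(-r * tau) * norm_cdf(d_minus)

def down_and_out_binary_call(S, K, r, sigma, tau):
    """Method-of-images price, valid on the domain S > K:
    V = B(S) - (S/K)^(2*alpha) * B(K^2/S),  alpha = 1/2 - r/sigma^2."""
    alpha = 0.5 - r / sigma**2
    image = (S / K)**(2 * alpha) * binary_call(K**2 / S, K, r, sigma, tau)
    return binary_call(S, K, r, sigma, tau) - image
```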
ä is in the extended latin block and n is in the basic latin block so there is a transition there, but you would have hoped \setTransitionsForLatin would have not inserted any code at that point as both those blocks are listed as part of the latin block, but apparently not.... — David Carlisle 12 secs ago @egreg you are credited in the file, so you inherit the blame:-) @UlrikeFischer I was leaving it for @egreg to trace but I suspect the package makes some assumptions about what is safe, it offers the user "enter" and "exit" code for each block but xetex only has a single insert, the interchartoken at a boundary the package isn't clear what happens at a boundary if the exit of the left class and the entry of the right are both specified, nor if anything is inserted at boundaries between blocks that are contained within one of the meta blocks like latin. Why do we downvote to a total vote of -3 or even lower? Weren't we a welcoming and forgiving community with the convention to only downvote to -1 (except for some extreme cases, like e.g., worsening the site design in every possible aspect)? @Skillmon people will downvote if they wish and given that the rest of the network regularly downvotes lots of new users will not know or not agree with a "-1" policy, I don't think it was ever really that regularly enforced just that a few regulars regularly voted for bad questions to top them up if they got a very negative score. I still do that occasionally if I notice one. @DavidCarlisle well, when I was new there was like never a question downvoted to more than (or less?) -1. And I liked it that way. My first question on SO got downvoted to -infty before I deleted it and fixed my issues on my own. @DavidCarlisle I meant the total. Still the general principle applies, when you're new and your question gets downvoted too much it might cause the wrong impression.
@DavidCarlisle oh, subjectively I'd downvote that answer 10 times, but objectively it is not a good answer and might get a downvote from me, as you don't provide any reasoning for that, and I think that there should be a bit of reasoning with the opinion based answers, some objective arguments why this is good. See for example the other Emacs answer (still subjectively a bad answer), that one is objectively good. @DavidCarlisle and that other one got no downvotes. @Skillmon yes but many people just join for a while and come form other sites where downvoting is more common so I think it is impossible to expect there is no multiple downvoting, the only way to have a -1 policy is to get people to upvote bad answers more. @UlrikeFischer even harder to get than a gold tikz-pgf badge. @cis I'm not in the US but.... "Describe" while it does have a technical meaning close to what you want is almost always used more casually to mean "talk about", I think I would say" Let k be a circle with centre M and radius r" @AlanMunn definitions.net/definition/describe gives a websters definition of to represent by drawing; to draw a plan of; to delineate; to trace or mark out; as, to describe a circle by the compasses; a torch waved about the head in such a way as to describe a circle If you are really looking for alternatives to "draw" in "draw a circle" I strongly suggest you hop over to english.stackexchange.com and confirm to create an account there and ask ... at least the number of native speakers of English will be bigger there and the gamification aspect of the site will ensure someone will rush to help you out. Of course there is also a chance that they will repeat the advice you got here; to use "draw". @0xC0000022L @cis You've got identical responses here from a mathematician and a linguist. And you seem to have an idea that because a word is informal in German, its translation in English is also informal. This is simply wrong. 
And formality shouldn't be an aim in and of itself in any kind of writing. @0xC0000022L @DavidCarlisle Do you know the book "The Bronstein" in English? I think that's a good example of archaic mathematician language. But it is still possible harder. Probably depends heavily on the translation. @AlanMunn I am very well aware of the differences of word use between languages (and my limitations in regard to my knowledge and use of English as non-native speaker). In fact words in different (related) languages sharing the same origin is kind of a hobby. Needless to say that more than once the contemporary meaning didn't match a 100%. However, your point about formality is well made. A book - in my opinion - is first and foremost a vehicle to transfer knowledge. No need to complicate matters by trying to sound ... well, overly sophisticated (?) ... The following MWE with showidx and imakeidx:\documentclass{book}\usepackage{showidx}\usepackage{imakeidx}\makeindex\begin{document}Test\index{xxxx}\printindex\end{document}generates the error:! Undefined control sequence.<argument> \ifdefequal{\imki@jobname }{\@idxfile }{}{... @EmilioPisanty ok I see. That could be worth writing to the arXiv webmasters as this is indeed strange. However, it's also possible that the publishing of the paper got delayed; AFAIK the timestamp is only added later to the final PDF. @EmilioPisanty I would imagine they have frozen the epoch settings to get reproducible pdfs, not necessarily that helpful here but..., anyway it is better not to use \today in a submission as you want the authoring date not the date it was last run through tex and yeah, it's better not to use \today in a submission, but that's beside the point - a whole lot of arXiv eprints use the syntax and they're starting to get wrong dates @yo' it's not that the publishing got delayed. 
arXiv caches the pdfs for several years but at some point they get deleted, and when that happens they only get recompiled when somebody asks for them again and, when that happens, they get imprinted with the date at which the pdf was requested, which then gets cached. Does any of you on linux have issues running for foo in *.pdf ; do pdfinfo $foo ; done in a folder with suitable pdf files? My box says pdfinfo does not exist, but it clearly does when I run it on a single pdf file. @EmilioPisanty that's a relatively new feature, but I think they have a new enough tex, but not everyone will be happy if they submit a paper with \today and it comes out with some arbitrary date like 1st Jan 1970 @DavidCarlisle add \def\today{24th May 2019} in INITEX phase and recompile the format daily? I agree, too much overhead. They should simply add "do not use \today" in these guidelines: arxiv.org/help/submit_tex @yo' I think you're vastly over-estimating the effectiveness of that solution (and it would not solve the problem with 20+ years of accumulated files that do use it) @DavidCarlisle sure. I don't know what the environment looks like on their side so I won't speculate. I just want to know whether the solution needs to be on the side of the environment variables, or whether there is a tex-specific solution @yo' that's unlikely to help with prints where the class itself calls from the system time. @EmilioPisanty well the environment vars do more than tex (they affect the internal id in the generated pdf or dvi and so produce reproducible output), but you could as @yo' showed redefine \today or the \year, \month, \day primitives on the command line @EmilioPisanty you can redefine \year, \month and \day which catches a few more things, but same basic idea @DavidCarlisle could be difficult with inputted TeX files. It really depends on at which phase they recognize which TeX file is the main one to proceed.
And as their workflow is pretty unique, it's hard to tell which way is even compatible with it. "beschreiben", engl. "describe", comes from the mathematical technical language of the 16th century, that is, from Middle High German, and means as much as "construct". And that from the original meaning: to describe is "to make a curved movement". In the literary style of the 19th to the 20th century and in the GDR, this language is used. You can have that in English too: scribe (verb), to score a line on with a pointed instrument, as in metalworking https://www.definitions.net/definition/scribe @cis Yes, as @DavidCarlisle pointed out, there is a very technical mathematical use of 'describe' which is what the German version means too, but we both agreed that people would not know this use, so using 'draw' would be the most appropriate term. This is not about trendiness, just about making language understandable to your audience. Plan figure. The barrel circle over the median $s_b = |M_b B|$, which holds the angle $\alpha$, also contains an isosceles triangle $M_b P B$ with the base $|M_b B|$ and the angle $\alpha$ at the point $P$. The altitude of the base of the isosceles triangle bisects both $|M_b B|$ at $M_{s_b}$ and the angle $\alpha$ at the top. \par The centroid $S$ divides the medians in the ratio $2:1$, with the longer part lying on the side of the corner. The point $A$ lies on the barrel circle and on a circle $\bigodot(S,\frac23 s_a)$ described by $S$ of radius…
The simplest malaria model is given by $$\frac{dI}{dt} = \frac{\alpha \beta I}{\alpha I + r} (1-I) - \mu I$$ where $\mu$ is the death rate of humans, $\alpha$ is the transmission rate from humans to mosquitoes, $\beta$ is the transmission rate from infected mosquitoes to susceptible humans, and $r$ is the natural death rate of mosquitoes. Assuming the units are given in $d^{-1}$, or per day, we can assume $r$ to be rather large, given that it isn't part of a larger product (and thus isn't a proportion). We can thus assume $r$ to be anywhere from a few dozen to a few hundred, depending on the size of the population. Recall that this is the number of mosquitoes killed per day. $\alpha$ is the transmission rate from humans to mosquitoes, and does appear to be a proportion, so $0 < \alpha < 1$. Thus, $\alpha$ is the probability that a mosquito not carrying malaria bites a human $\textit{with}$ malaria. Assuming no more than 15% of a population of humans has malaria, and that 95% of a mosquito population does not carry malaria, this is $0.15 \times 0.95 = \alpha \approx 0.143,$ or about a 14% probability. $\beta$ is also a proportion, and is the transmission rate from infected mosquitoes to susceptible humans. Rather, $\beta$ is the probability that an infected mosquito will bite a human. Assuming a generous mosquito population (a few thousand at the least), we assume that about 5% carry malaria and (based on our previous assumptions) they have an 85% chance of biting a susceptible human (given that 15% are already infected). So $\beta = 0.05 \times 0.85 = 0.0425$. $\mu$ is given as the death rate of humans. Neglecting deaths by causes other than malaria itself, we assume this value to be about $0.3\%$, given 627,000 deaths in 2012 out of an estimated 207 million cases that year. Is this analysis correct? If not, where can I find values for the above parameters? Are there any case studies where I can calculate the above values? Thank you in advance.
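One way to sanity-check such estimates is to integrate the ODE numerically. The sketch below uses forward Euler with the question's tentative parameter values (my own choices for $I_0$, step size, and horizon); with these numbers $\alpha\beta/r \ll \mu$, so the infection simply dies out, which itself suggests the parameter scales deserve a second look:

```python
import numpy as np

def simulate_malaria(I0=0.01, alpha=0.143, beta=0.0425, r=50.0, mu=0.003,
                     dt=0.1, days=365):
    """Forward-Euler integration of
    dI/dt = alpha*beta*I/(alpha*I + r) * (1 - I) - mu*I.
    Parameter values follow the question's (tentative) estimates."""
    I = I0
    traj = [I]
    for _ in range(int(days / dt)):
        dI = alpha * beta * I / (alpha * I + r) * (1 - I) - mu * I
        I = max(I + dt * dI, 0.0)   # clamp to keep I a valid proportion
        traj.append(I)
    return np.array(traj)
```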
Define the vector operator: $$ H(\textbf{x}) \equiv \alpha \textbf{n} \times\textbf{x}$$ For unit vector $\textbf{n}$ and some constant $\alpha$. We define further the operator: $$G \equiv I + H + \frac{H^2}{2!} + \frac{H^3}{3!} + ... $$ Where powers of $H$ represent iterations and $I$ is defined as the identity such that $I(\textbf{x}) = \textbf{x}$ We are to prove $$ G\textbf{x} = \textbf{x} +(\textbf{n}\times\textbf{x})\sin{\alpha} + \textbf{n}\times(\textbf{n}\times\textbf{x})(1-\cos{\alpha}) $$ My attempt at the proof involved noting: $$ G = e^{H} $$ And thus $$G\textbf{x} = e^H\textbf{x}$$ But I can't seem to see where to go from here, or even if the last step is a legitimate one. Many thanks in advance.
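Although it does not constitute a proof, the identity to be shown can be checked numerically: represent $H$ by the matrix $\alpha[\mathbf{n}]_\times$, truncate the series for $G = e^H$, and compare against the closed form (which is Rodrigues' rotation formula). A sketch:

```python
import numpy as np

def H(n, alpha):
    """Matrix of the operator H(x) = alpha * (n cross x)."""
    nx, ny, nz = n
    return alpha * np.array([[0.0, -nz,  ny],
                             [ nz, 0.0, -nx],
                             [-ny,  nx, 0.0]])

def G(n, alpha, terms=30):
    """Truncated series I + H + H^2/2! + ... (the matrix exponential)."""
    Hm = H(n, alpha)
    M = np.eye(3)
    P = np.eye(3)
    for k in range(1, terms):
        P = P @ Hm / k      # P = H^k / k!
        M = M + P
    return M

def rodrigues(n, alpha, x):
    """Closed form: x + (n x x) sin(a) + n x (n x x) (1 - cos(a))."""
    n, x = np.asarray(n, float), np.asarray(x, float)
    return (x + np.cross(n, x) * np.sin(alpha)
              + np.cross(n, np.cross(n, x)) * (1.0 - np.cos(alpha)))
```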
Why is $\arctan\frac{x+y}{1-xy} = \arctan x +\arctan y$? It is said that this is derived from trigonometry, but I couldn't find why this is the case. Mathematics Stack Exchange is a question and answer site for people studying math at any level and professionals in related fields. It only takes a minute to sign up.Sign up to join this community Why is $\arctan\frac{x+y}{1-xy} = \arctan x +\arctan y$? It is said that this is derived from trigonometry, but I couldn't find why this is the case. Here's a proof that has essentially nothing to do with trigonometry: Hold $y$ constant and differentiate the function $$f(x) = \arctan{\frac{x + y}{1 - xy}} - \arctan{x} - \arctan{y}$$ to find that \begin{align}f'(x) &= \frac{1}{1 + \left(\frac{x + y}{1 - xy}\right)^2} \cdot \left(\frac{(1 - xy) - (x + y)(-y)}{(1 - xy)^2}\right) - \frac{1}{1 + x^2} \\ &= \frac{(1 - xy)^2}{(1 - xy)^2 + (x + y)^2} \cdot \left(\frac{1 - xy + xy + y^2}{(1 - xy)^2}\right) - \frac{1}{1 + x^2} \\ &= \frac{1 + y^2}{1 - 2xy + x^2y^2 + x^2 + 2xy + y^2} - \frac{1}{1 + x^2} \\ &= \frac{1 + y^2}{1 + x^2 + y^2 + x^2y^2} - \frac{1}{1 + x^2} \\ &= \frac{1 + y^2}{(1 + y^2) + x^2 (1 + y^2)} - \frac{1}{1 + x^2} \\ &= \frac{1}{1 + x^2} - \frac{1}{1 + x^2} = 0 \end{align} So $f$ is a constant. Letting $x = 0$, we find that $f(0) = 0$ and the identity follows. The following identity is useful for your purpose $$ \tan(\alpha \pm \beta) = \frac{\tan \alpha \pm \tan \beta}{1 \mp \tan \alpha \tan \beta}. $$ To prove it just use the identity $ \tan(t) = \frac{\sin(t)}{\cos(t)}$ and the identities for $\sin(a\pm b)$ and $\cos(a\pm b)$. Computing with complex numbers: $$ (1+xi)(1+yi)=(1-xy)+(x+y)i $$ Take arg on both sides $$ \arctan x + \arctan y = \arg(1+xi)+\arg(1+yi) = \arg((1+xi)(1+yi)) \\ = \arg((1-xy)+(x+y)i) = \arctan\frac{x+y}{1-xy} $$ Do NOT blindly use this for calculations. 
Example: $\arctan(5000)+\arctan(5000) \sim \pi/2+\pi/2=\pi$, but with the formula quoted you get $\arctan(10000/(1-25\times10^6))\sim\arctan(-1/2500)\sim 0$. Like Andre Nicolas said, you need to consider the arctan ranges. Consider $$A=\arctan(x)+\arctan(y),$$ then $$\tan(A)=\tan(\arctan(x)+\arctan(y)),$$ which is also equal to $$\frac{\tan(\arctan(x))+\tan(\arctan(y))}{1-\tan(\arctan(x))\tan(\arctan(y))},$$ which simplifies to give $$\tan(A)=\frac{x+y}{1-xy}.$$ Thus, $$\arctan(x)+\arctan(y)=A=\arctan(\tan(A))=\arctan\frac{x+y}{1-xy}.$$ Ok fine, mlc. I'll answer the question. Here is a trigonometric proof. No calculus required. Statement: $\arctan(\frac {x+y} {1-xy})=\arctan(x)+\arctan(y)$ whenever $|xy|<1$. Proof. Choose $(x,y)\in\big\{|xy|<1\big\}$ arbitrarily. This formula is obvious if $x=0,$ so first consider when $x>0$. We prove that $\arctan(x)+\arctan(y)\in\Big(\frac{-\pi}{2},\frac{\pi}{2}\Big)$. Note $|xy|<1$ if and only if $-\frac{1}{x}<y<\frac1x$. Since $\arctan$ is an increasing and odd function, it follows $$-\arctan\Big(\frac{1}{x}\Big)<\arctan(y)<\arctan\Big(\frac{1}{x}\Big).$$ Indeed, $\arctan(x)+\arctan(\frac{1}{x})=\frac{\pi}{2},$ and so $$-\frac\pi2<2\arctan(x)-\frac\pi2<\arctan(x)+\arctan(y)<\frac\pi2.$$ This shows $\arctan(x)+\arctan(y)$ belongs to the range of $\arctan$, $\Big(\frac{-\pi}{2},\frac{\pi}{2}\Big)$. The angle sum identity tells us $$\begin{equation}\begin{split}\tan(\arctan(x)+\arctan(y)) &=\frac{\tan(\arctan(x))+\tan(\arctan(y))}{1-\tan(\arctan(x))\tan(\arctan(y))} \\ & =\frac{x+y}{1-xy}\end{split}\end{equation}$$ Hence $$\begin{equation}\begin{split}\arctan(x)+\arctan(y)&=\arctan(\tan(\arctan(x)+\arctan(y))) \\ &=\arctan\Big(\frac{x+y}{1-xy}\Big)\end{split}\end{equation}$$ This proves the identity for $x>0,$ so now assume $x<0$.
Then $-x>0$ and $|xy|<1$ $\iff$ $-\frac{1}{x}<-y<\frac{1}{x}.$ Finally, $$\begin{equation}\begin{split}\arctan(x)+\arctan(y) & =-\big(\arctan(-x)+\arctan(-y)\big) \\ & =-\arctan\Big(\frac{-x-y}{1-(-x)(-y)}\Big) \\ & =\arctan\Big(\frac{x+y}{1-xy}\Big)\end{split}\end{equation}$$ The result follows.
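A quick numeric check of both the identity and its failure outside $|xy| < 1$ (where, for $x, y > 0$, the naive formula is off by exactly $\pi$):

```python
from math import atan, pi

def arctan_sum(x, y):
    """Right-hand side of the identity; equals atan(x) + atan(y)
    only when x*y < 1 (off by +/- pi otherwise)."""
    return atan((x + y) / (1 - x * y))
```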
I kind of have an elementary solution; it seems to be fine but I am not sure if everything is correct. Please point out the mistake(s) I'm making, if any. Define $$H_n:=\sum_{i=1}^n \frac{1}{i}$$Since $0<H_n<n$, if $\exists$ some $n$ for which $H_n$ is integral then $H_n=k$ where $0<k<n$. Then $$H_n=k=1+\frac{1}{2}+\frac{1}{3}+\cdots\ +\frac{1}{k}+\cdots\ +\frac{1}{n}\\\Rightarrow k=\frac{1}{k}+\frac{p}{q}\Rightarrow qk^2-pk-q=0$$ where $\gcd(p,q)=1$. Then we get $$k=\frac{p\pm \sqrt{p^2+4q^2}}{2q}$$ Since $k$ is an integer, $$p^2+4q^2=r^2$$ for some $r\in \mathbb{Z}^+$. Let $\gcd(p,2q,r)=d$ and let $\displaystyle x=\frac{p}{d},\ y=\frac{2q}{d},\ z=\frac{r}{d}$. Then $$x^2+y^2=z^2$$ Now, I make the following claim: Claim: $p$ is odd and $q$ is even. Proof: Let $s=2^m\le n$ be the largest power of $2$ in $\{1,2,\cdots,\ n\}$. Then, if $k\ne s$, the numerator of $\displaystyle \frac{p}{q}$ is the sum of $n-1$ terms, out of which one will be odd, and hence $p$ is odd. On the other hand, $q$ will have the term $s$ as a factor, so $q$ is even. Now, if $k=s$, then since $n>2$ (otherwise there is nothing to prove), there will be a factor $2^{m-1}\ge 2$ in $q$, and the one among the sum terms in $p$ that corresponds to $2^{m-1}$ will be odd. Hence in this case also, $p$ is odd and $q$ is even. So the claim is proved. $\Box$ So now we see that $d\ne 2$ and hence $2|y$. So we have a Pythagorean equation with $2|y, \ x,y,z>0$. Hence the solutions will be $$x=u^2-v^2,\ y=2uv,\ z=u^2+v^2$$ with $(u,v)=1.$ So, since $k$ is positive, $$k=\frac{d(x+z)}{dy}=\frac{u}{v}$$ But since $(u,v)=1$, $k$ is not an integer (for $n\ge 2$), which is a contradiction. So $H_n$ cannot be an integer. $\Box$
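One can also verify the statement for small $n$ with exact rational arithmetic; `fractions.Fraction` keeps $H_n$ exact, and $H_n$ is an integer iff its reduced denominator is $1$:

```python
from fractions import Fraction

def harmonic(n):
    """H_n = 1 + 1/2 + ... + 1/n as an exact rational."""
    return sum(Fraction(1, i) for i in range(1, n + 1))
```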
Measurement of electrons from beauty hadron decays in pp collisions at root √s=7 TeV (Elsevier, 2013-04-10) The production cross section of electrons from semileptonic decays of beauty hadrons was measured at mid-rapidity (|y| < 0.8) in the transverse momentum range 1 < pT <8 GeV/c with the ALICE experiment at the CERN LHC in ... Production of charged pions, kaons and protons at large transverse momenta in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV (Elsevier, 2014-09) Transverse momentum spectra of $\pi^{\pm}, K^{\pm}$ and $p(\bar{p})$ up to $p_T$ = 20 GeV/c at mid-rapidity, |y| $\le$ 0.8, in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV have been measured using the ALICE detector ... Elliptic flow of muons from heavy-flavour hadron decays at forward rapidity in Pb-Pb collisions at $\sqrt{s_{\rm NN}}=2.76$ TeV (Elsevier, 2016-02) The elliptic flow, $v_{2}$, of muons from heavy-flavour hadron decays at forward rapidity ($2.5 < y < 4$) is measured in Pb--Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV with the ALICE detector at the LHC. The scalar ... Centrality dependence of the pseudorapidity density distribution for charged particles in Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV (Elsevier, 2013-11) We present the first wide-range measurement of the charged-particle pseudorapidity density distribution, for different centralities (the 0-5%, 5-10%, 10-20%, and 20-30% most central events) in Pb-Pb collisions at $\sqrt{s_{NN}}$ ... Beauty production in pp collisions at √s=2.76 TeV measured via semi-electronic decays (Elsevier, 2014-11) The ALICE Collaboration at the LHC reports measurement of the inclusive production cross section of electrons from semi-leptonic decays of beauty hadrons with rapidity |y|<0.8 and transverse momentum 1<pT<10 GeV/c, in pp ...
@egreg It does this "I just need to make use of the standard hyphenation function of LaTeX, except "behind the scenes", without actually typesetting anything." (if not typesetting includes typesetting in a hidden box) it doesn't address the use case that he said he wanted that for @JosephWright ah yes, unlike the hyphenation near box question, I guess that makes sense, basically can't just rely on lccode anymore. I suppose you don't want the hyphenation code in my last answer by default? @JosephWright anyway if we rip out all the auto-testing (since mac/windows/linux come out the same anyway) but leave in the .cfg possibility, there is no actual loss of functionality if someone is still using a vms tex or whatever I want to change the tracking (space between the characters) for a sans serif font. I found that I can use the microtype package to change the tracking of the smallcaps font (\textsc{foo}), but I can't figure out how to make \textsc{} a sans serif font. @DavidCarlisle -- if you write it as "4 May 2016" you don't need a comma (or, in the u.s., want a comma). @egreg (even if you're not here at the moment) -- tomorrow is international archaeology day: twitter.com/ArchaeologyDay , so there must be someplace near you that you could visit to demonstrate your firsthand knowledge. @barbarabeeton I prefer May 4, 2016, for some reason (don't know why actually) @barbarabeeton but I have another question maybe better suited for you please: If a member of a conference scientific committee writes a preface for the special issue, can the signature say John Doe \\ for the scientific committee or is there a better wording? @barbarabeeton overrightarrow answer will have to wait, need time to debug \ialign :-) (it's not the \smash what did it) on the other hand if we mention \ialign enough it may interest @egreg enough to debug it for us. @DavidCarlisle -- okay. are you sure the \smash isn't involved?
i thought it might also be the reason that the arrow is too close to the "M". (\smash[t] might have been more appropriate.) i haven't yet had a chance to try it out at "normal" size; after all, \Huge is magnified from a larger base for the alphabet, but always from 10pt for symbols, and that's bound to have an effect, not necessarily positive. (and yes, that is the sort of thing that seems to fascinate @egreg.) @barbarabeeton yes I edited the arrow macros not to have relbar (ie just omit the extender entirely and just have a single arrowhead) but it still overprinted when in the \ialign construct but I'd already spent too long on it at work so stopped, may try to look this weekend (but it's uktug tomorrow) if the expression is put into an \fbox, it is clear all around. even with the \smash. so something else is going on. put it into a text block, with \newline after the preceding text, and directly following before another text line. i think the intention is to treat the "M" as a large operator (like \sum or \prod, but the submitter wasn't very specific about the intent.) @egreg -- okay. i'll double check that with plain tex. but that doesn't explain why there's also an overlap of the arrow with the "M", at least in the output i got. personally, i think that that arrow is horrendously too large in that context, which is why i'd like to know what is intended. @barbarabeeton the overlap below is much smaller, see the righthand box with the arrow in egreg's image, it just extends below and catches the serifs on the M, but the overlap above is pretty bad really @DavidCarlisle -- i think other possible/probable contexts for the \over*arrows have to be looked at also. this example is way outside the contexts i would expect. and any change should work without adverse effect in the "normal" contexts. @DavidCarlisle -- maybe better take a look at the latin modern math arrowheads ... @DavidCarlisle I see no real way out.
The CM arrows extend above the x-height, but the advertised height is 1ex (actually a bit less). If you add the strut, you end up with too big a space when using other fonts. MagSafe is a series of proprietary magnetically attached power connectors, originally introduced by Apple Inc. on January 10, 2006, in conjunction with the MacBook Pro at the Macworld Expo in San Francisco, California. The connector is held in place magnetically so that if it is tugged — for example, by someone tripping over the cord — it will pull out of the socket without damaging the connector or the computer power socket, and without pulling the computer off the surface on which it is located. The concept of MagSafe is copied from the magnetic power connectors that are part of many deep fryers... has anyone converted from LaTeX -> Word before? I have seen questions on the site but I'm wondering what the result is like... and whether the document is still completely editable etc after the conversion? I mean, if the doc is written in LaTeX, then converted to Word, is the word editable? I'm not familiar with word, so I'm not sure if there are things there that would just get goofed up or something. @baxx never use word (have a copy just because but I don't use it;-) but have helped enough people with things over the years, these days I'd probably convert to html latexml or tex4ht then import the html into word and see what comes out You should be able to cut and paste mathematics from your web browser to Word (or any of the Microsoft Office suite). Unfortunately at present you have to make a small edit but any text editor will do for that. Given x=\frac{-b\pm\sqrt{b^2-4ac}}{2a} Make a small html file that looks like <!... @baxx all the convertors that I mention can deal with document \newcommand to a certain extent. if it is just \newcommand\z{\mathbb{Z}} that is no problem in any of them, if it's half a million lines of tex commands implementing tikz then it gets trickier.
@baxx yes but they are extremes but the thing is you just never know, you may see a simple article class document that uses no hard looking packages then get half way through and find \makeatletter several hundred lines of trick tex macros copied from this site that are over-writing latex format internals.
Added: a Stanford course on neural networks, cs231n, gives yet another form of the steps:

    v = mu * v_prev - learning_rate * gradient(x)  # GD + momentum
    v_nesterov = v + mu * (v - v_prev)             # keep going, extrapolate
    x += v_nesterov

Here v is velocity aka step aka state, and mu is a momentum factor, typically 0.9 or so. (v, x and learning_rate can be very long vectors; with numpy, the code is the same.) v in the first line is gradient descent with momentum; v_nesterov extrapolates, keeps going. For example, with mu = 0.9,

    v_prev   v   -->  v_nesterov
    ----------------------------
       0    10   -->   19
      10     0   -->   -9
      10    10   -->   10
      10    20   -->   29

The following description has 3 terms: term 1 alone is plain gradient descent (GD), 1 + 2 give GD + momentum, 1 + 2 + 3 give Nesterov GD. Nesterov GD is usually described as alternating momentum steps $x_t \to y_t$ and gradient steps $y_t \to x_{t+1}$:

$\qquad y_t = x_t + m (x_t - x_{t-1}) \quad $ -- momentum, predictor

$\qquad x_{t+1} = y_t + h\ g(y_t) \qquad $ -- gradient

where $g_t \equiv - \nabla f(y_t)$ is the negative gradient, and $h$ is stepsize aka learning rate. Combine these two equations into one in $y_t$ only, the points at which the gradients are evaluated, by plugging the second equation into the first, and rearrange terms:

$\qquad y_{t+1} = y_t$

$\qquad \qquad + \ h \ g_t \qquad \qquad \quad $ -- gradient

$\qquad \qquad + \ m \ (y_t - y_{t-1}) \qquad $ -- step momentum

$\qquad \qquad + \ m \ h \ (g_t - g_{t-1}) \quad $ -- gradient momentum

The last term is the difference between GD with plain momentum, and GD with Nesterov momentum. One could use separate momentum terms, say $m$ and $m_{grad}$:

$\qquad \qquad + \ m \ (y_t - y_{t-1}) \qquad $ -- step momentum

$\qquad \qquad + \ m_{grad} \ h \ (g_t - g_{t-1}) \quad $ -- gradient momentum

Then $m_{grad} = 0$ gives plain momentum, $m_{grad} = m$ Nesterov. $m_{grad} > 0 $ amplifies noise (gradients can be very noisy), $m_{grad} \sim -.1$ is an IIR smoothing filter.
By the way, momentum and stepsize can vary with time, $m_t$ and $h_t$, or per component (ada* coordinate descent), or both -- more methods than test cases. A plot comparing plain momentum with Nesterov momentum on a simple 2d test case, $(x / [cond, 1] - 100) + ripple \times \sin( \pi x )$.
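The combined steps can be sketched end-to-end in numpy; the quadratic objective, its conditioning, and the step counts below are illustrative assumptions, not from the answer.

```python
import numpy as np

def nesterov_gd(grad, x0, h=0.01, m=0.9, steps=200):
    """GD with Nesterov momentum, in the cs231n form:
    v = m*v_prev - h*grad(x);  x += v + m*(v - v_prev)."""
    x = np.asarray(x0, dtype=float)
    v = np.zeros_like(x)
    for _ in range(steps):
        v_prev = v
        v = m * v_prev - h * grad(x)   # GD + momentum
        x = x + v + m * (v - v_prev)   # keep going, extrapolate
    return x

# Toy objective f(x) = 0.5 * x^T A x with condition number 10.
A = np.diag([10.0, 1.0])
grad = lambda x: A @ x

x_star = nesterov_gd(grad, [5.0, 5.0])
print(np.linalg.norm(x_star))  # close to 0: the iterate has converged
```

The same loop with the extrapolation line removed gives plain momentum, which converges noticeably more slowly on ill-conditioned quadratics.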
Bravais Lattice¶ In crystallography, a Bravais lattice 1 represents an infinite array of discrete points, constituting the underlying framework for a crystal structure. These points are generated by a set of discrete translation operations, which can be expressed in three-dimensional space as R = n_1 a_1 + n_2 a_2 + n_3 a_3, where n_i are any integers, and a_i are known as the lattice vectors spanning the lattice in three-dimensional space. The defining characteristic of a Bravais lattice is that, for any choice of position vector R, the lattice has to look exactly the same when viewed from any equivalent lattice point. Lattice types¶ Two Bravais lattices are considered equivalent if they have the same symmetry elements. In this sense, there are 14 possible distinct Bravais lattices in three-dimensional space, grouped together into 7 more general symmetry categorizations known as lattice systems. These two categories are tabulated in the image below for reference purposes. Lattice parameters¶ Bravais lattices can additionally be described by the lattice parameters defining their unit cells. These parameters comprise information about both the magnitudes of the lattice vectors measuring the lengths of the delimiting axes of the unit cell (the a, b and c lattice constants), and the angles between them (\alpha, \beta and \gamma), as explained in the figure below. Volume and Density¶ The corresponding volume V of the above unit cell is consequently given by the scalar triple product between the three lattice vectors defining its axes, V = a_1 · (a_2 × a_3). The material density can in addition be obtained by dividing the sum of the atomic masses within the unit cell by this volume.
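A small numerical sketch of the volume and density formulas above; the lattice vectors, atom count and atomic mass are illustrative assumptions (an orthorhombic cell with aluminium-like atoms), not data from this page.

```python
import numpy as np

# Lattice vectors of an illustrative orthorhombic unit cell (lengths in Å).
a1 = np.array([4.0, 0.0, 0.0])
a2 = np.array([0.0, 5.0, 0.0])
a3 = np.array([0.0, 0.0, 6.0])

# Unit cell volume: scalar triple product V = a1 . (a2 x a3).
V = abs(np.dot(a1, np.cross(a2, a3)))
print(V)  # 120.0 Å^3 for this cell

# Density = (sum of atomic masses in the cell) / V.
# Assume 4 atoms of mass 26.98 u in the cell.
mass_u = 4 * 26.98            # total mass, atomic mass units
u_to_g = 1.66053906660e-24    # grams per atomic mass unit
V_cm3 = V * 1e-24             # Å^3 -> cm^3
rho = mass_u * u_to_g / V_cm3
print(round(rho, 3))          # ≈ 1.493 g/cm^3
```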
Here is a report of the Ray Tracer written by myself Christopher Chedeau. I've taken the file format and most of the examples from the Ray Tracer of our friends Maxime Mouial and Clément Bœsch. The source is available on Github. Check out the demo, or click on any of the images. Objects Our Ray Tracer supports 4 object types: Plane, Sphere, Cylinder and Cone. The core idea of the Ray Tracer is to send rays that will be reflected on items. Given a ray (origin and direction), we need to know if it intersects an object in the scene, and if it does, how to get a ray' that will be reflected off the object. Knowing that, we open up our high school math book and come up with all the following formulas. Legend: Ray Origin \(O\), Ray Direction \(D\), Intersection Position \(O'\), Intersection Normal \(N\) and Item Radius \(r\). Intersection Normal Plane \[t = \frac{O_z}{D_z}\] \[ N = \left\{ \begin{array}{l} x = 0 \\ y = 0 \\ z = -sign(D_z) \end{array} \right. \] Sphere \[ \begin{array}{l l l} & t^2 & (D \cdot D) \\ + & 2t & (O \cdot D) \\ + & & (O \cdot O) - r^2 \end{array} = 0\] \[ N = \left\{ \begin{array}{l} x = O'_x \\ y = O'_y \\ z = O'_z \end{array} \right. \] Cylinder \[ \begin{array}{l l l} & t^2 & (D_x D_x + D_y D_y) \\ + & 2t & (O_x D_x + O_y D_y) \\ + & & (O_x O_x + O_y O_y - r^2) \end{array} = 0\] \[ N = \left\{ \begin{array}{l} x = O'_x \\ y = O'_y \\ z = 0 \end{array} \right. \] Cone \[ \begin{array}{l l l} & t^2 & (D_x D_x + D_y D_y - r^2 D_z D_z) \\ + & 2t & (O_x D_x + O_y D_y - r^2 O_z D_z) \\ + & & (O_x O_x + O_y O_y - r^2 O_z O_z) \end{array} = 0\] \[ N = \left\{ \begin{array}{l} x = O'_x \\ y = O'_y \\ z = - O'_z * tan(r^2) \end{array} \right.
\] In order to solve the equation \(at^2 + bt + c = 0\), we use \[\Delta = b^2 - 4ac \]\[ \begin{array}{c c c} \Delta \geq 0 & t_1 = \frac{-b - \sqrt{\Delta}}{2a} & t_2 = \frac{-b + \sqrt{\Delta}}{2a} \end{array} \] And here is the formula for the reflected ray: \[ \left\{ \begin{array}{l} O' = O + tD + \varepsilon D' \\ D' = D - 2 (D \cdot N) * N \end{array} \right. \] In order to fight numerical precision errors, we are going to move the origin of the reflected point a little bit in the direction of the reflected ray (\(\varepsilon D'\)). This avoids falsely detecting a collision with the current object. Coordinates, Groups and Rotations We want to move and rotate objects. In order to do that, we compute a transformation matrix (and its inverse) for each object in the scene using the following code: \[ T = \begin{array}{l} (Identity * Translate_g * RotateX_g * RotateY_g * RotateZ_g) * \\ (Identity * Translate_i * RotateX_i * RotateY_i * RotateZ_i) \end{array} \]\[ I = T^{-1} \] \[Translate(x, y, z) = \left(\begin{array}{c c c c} 1 & 0 & 0 & x \\ 0 & 1 & 0 & y \\ 0 & 0 & 1 & z \\ 0 & 0 & 0 & 1 \end{array}\right)\] \[RotateX(\alpha) = \left(\begin{array}{c c c c} 1 & 0 & 0 & 0 \\ 0 & cos(\alpha) & -sin(\alpha) & 0 \\ 0 & sin(\alpha) & cos(\alpha) & 0 \\ 0 & 0 & 0 & 1 \end{array}\right)\] \[RotateY(\alpha) = \left(\begin{array}{c c c c} cos(\alpha) & 0 & sin(\alpha) & 0 \\ 0 & 1 & 0 & 0 \\ -sin(\alpha) & 0 & cos(\alpha) & 0 \\ 0 & 0 & 0 & 1 \end{array}\right)\] \[RotateZ(\alpha) = \left(\begin{array}{c c c c} cos(\alpha) & -sin(\alpha) & 0 & 0 \\ sin(\alpha) & cos(\alpha) & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{array}\right)\] We have written the intersection and normal calculations in the object's coordinate system instead of the world's coordinate system. It makes them easier to write. We use the transformation matrix to do object -> world and the inverse matrix to do world -> object.
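The quadratic solve and the ε-nudged reflection above can be sketched for the sphere case. This is Python rather than the project's actual source, and the function names are illustrative:

```python
import math

def intersect_sphere(O, D, r):
    """Nearest positive t with |O + t*D| = r, or None if the ray misses."""
    a = sum(d * d for d in D)                  # D . D
    b = 2 * sum(o * d for o, d in zip(O, D))   # 2 (O . D)
    c = sum(o * o for o in O) - r * r          # (O . O) - r^2
    disc = b * b - 4 * a * c
    if disc < 0:
        return None
    ts = [(-b - math.sqrt(disc)) / (2 * a), (-b + math.sqrt(disc)) / (2 * a)]
    ts = [t for t in ts if t > 0]
    return min(ts) if ts else None

def reflect(O, D, t, N, eps=1e-4):
    """Reflected ray: D' = D - 2 (D.N) N, origin nudged by eps*D'."""
    dn = sum(d * n for d, n in zip(D, N))
    Dp = [d - 2 * dn * n for d, n in zip(D, N)]
    Op = [o + t * d + eps * dp for o, d, dp in zip(O, D, Dp)]
    return Op, Dp

t = intersect_sphere((0.0, 0.0, -5.0), (0.0, 0.0, 1.0), 1.0)
print(t)  # 4.0: the ray from z = -5 hits the unit sphere at z = -1
```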
\[ \left\{\begin{array}{l} O_{world} = T * O_{object} \\ D_{world} = (T * D_{object}) - (T * 0_4) \end{array}\right. \] \[ \left\{\begin{array}{l} O_{object} = I * O_{world} \\ D_{object} = (I * D_{world}) - (I * 0_4) \end{array}\right. \] \[0_4 = \left(\begin{array}{c} 0 \\ 0 \\ 0 \\ 1 \end{array}\right) \] Bounding Box The previous equations give us objects with infinite dimensions (except for the sphere) whereas objects in real life have finite dimensions. To simulate this, it is possible to provide two points that will form a bounding box around the object. On the intersection test, we are going to use the nearest point that is inside the bounding box. This gives us the ability to build various objects such as mirrors, table surface and legs, light bubbles and even a Pokeball! Light An object is composed of an Intensity \(I_o\), a Color \(C_o\) and a Brightness \(B_o\). Each light has a Color \(C_l\) and there is an ambient color \(C_a\). Using all those properties, we can calculate the color of a point using the following formula: \[ I_o * (C_o + B_o) * \left(C_a + \sum_{l}{(N \cdot D) * C_l}\right) \] Only the lights visible from the intersection point are used in the sum. In order to check this, we send a shadow ray from the intersection point to the light and see if it intersects any object. The following images are examples to demonstrate the lights. Textures In order to put a texture on an object, we need to map a point \((x, y, z)\) in the object's coordinate system into a point \((x, y)\) in the texture's coordinate system. For planes, it is straightforward, we just drop the \(z\) coordinate (which is equal to zero anyway). For spheres, cylinders and cones it is a bit more involved. Here is the formula where \(w\) and \(h\) are the width and height of the texture.
\[ \begin{array}{c c} \phi = acos(\frac{O'_y}{r}) & \theta = \frac{acos\left(\frac{O'_x}{r * sin(\phi)}\right)}{2\pi} \end{array} \]\[ \begin{array}{c c} x = w * \left\{\begin{array}{l l} \theta & \text{if } O'_x < 0 \\ 1 - \theta & \text{else}\end{array}\right. & y = h * \frac{\phi}{\pi} \end{array} \] Once we have the texture coordinates, we can easily create a checkerboard or put a texture. We added options such as scaling and repeat in order to control how the texture is placed. We also support the alpha mask in order to make a color from a texture transparent. Progressive Rendering Ray tracing is a slow technique. At first, I generated pixels line by line, but I found out that the first few lines do not hold much information. Instead, what we want to do is to have a fast overview of the scene and then improve on the details. In order to do that, during the first iteration we are only generating 1 pixel for a 32x32 square. Then we generate 1 pixel for a 16x16 square and so on ... We generate the top-left pixel and fill all the unknown pixels with it. In order not to regenerate pixels we have already seen, I came up with a condition to know if a pixel has already been generated. \(size\) is the current square size (32, 16, ...). \[\left\{\begin{array}{l} x \equiv 0 \pmod{size * 2}\\ y \equiv 0 \pmod{size * 2} \end{array}\right. \] Supersampling Aliasing is a problem with Ray Tracing and we solve this issue using supersampling. Basically, we send more than one ray for each pixel. We have to choose representative points from a square. There are multiple strategies: in the middle, in a grid or random. Check the result of various combinations in the following image: Perlin Noise We can generate random textures using Perlin Noise. We can control several parameters such as \(octaves\), the number of basic noises, the initial scale \(f\) and the factor of contribution \(p\) of the high frequency noises.
\[ noise(x, y, z) = \sum_{i = 0}^{octaves}{p^i * PerlinNoise(\frac{2^i}{f}x, \frac{2^i}{f}y, \frac{2^i}{f}z)} \] \[noise\] \[noise * 20 - \lfloor noise * 20 \rfloor\] \[\frac{cos(noise) + 1}{2}\] As seen in the example, we can apply additional functions after the noise has been generated to make interesting effects. Portal Last but not least, Portals, from the game of the same name. They are easy to reproduce in a Ray Tracer and yet, I haven't seen any done. If a ray enters portal A, it will go out from portal B. It is trivial to implement: it is just a coordinate system transformation. Like we did for world and object transformation, we do it between A and B using their transformation matrix. \[ \left\{\begin{array}{l} O_{a}' = T * O_{b} \\ D_{a}' = (T * D_{b}) - (T * 0_4) \end{array}\right. \] \[ \left\{\begin{array}{l} O_{b}' = T * O_{a} \\ D_{b}' = (T * D_{a}) - (T * 0_4) \end{array}\right. \] Scene Editor In order to create scenes more easily, we have defined a scene description language. We developed a basic CodeMirror syntax highlighting script. Just write your scene down and press Ray Trace 🙂
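The progressive-rendering skip condition described earlier can be sketched as a generator; this is Python rather than the project's JavaScript, and the function name is illustrative:

```python
def progressive_order(width, height, start=32):
    """Yield (x, y, size) coarse-to-fine. A pixel is skipped when it was
    already generated at a coarser pass, i.e. x ≡ 0 and y ≡ 0 (mod 2*size)."""
    size = start
    while size >= 1:
        for y in range(0, height, size):
            for x in range(0, width, size):
                first_pass = size == start
                seen = x % (2 * size) == 0 and y % (2 * size) == 0
                if first_pass or not seen:
                    yield x, y, size
        size //= 2

# Every pixel of a 64x64 image is produced exactly once across all passes.
pixels = [(x, y) for x, y, s in progressive_order(64, 64)]
print(len(pixels) == 64 * 64 and len(set(pixels)) == 64 * 64)  # True
```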
Affine transformation is a linear mapping method that preserves points, straight lines, and planes. Sets of parallel lines remain parallel after an affine transformation. The affine transformation technique is typically used to correct for geometric distortions or deformations that occur with non-ideal camera angles. For example, satellite imagery uses affine transformations to correct for wide angle lens distortion, panorama stitching, and image registration. Transforming and fusing the images to a large, flat coordinate system is desirable to eliminate distortion. This enables easier interactions and calculations that don’t require accounting for image distortion. The following table illustrates the different affine transformations: translation, scale, shear, and rotation. Affine Transform Example Transformation Matrix Translation \[ \left[\begin{array}{ccc}1 & 0 & 0\\0 & 1 & 0\\ t_x & t_y & 1\end{array}\right]\] \(t_x\) specifies the displacement along the \(x\) axis. \(t_y\) specifies the displacement along the \(y\) axis. Scale \[ \left[\begin{array}{ccc}s_x & 0 & 0\\0 & s_y & 0\\ 0 & 0 & 1\end{array}\right]\] \(s_x\) specifies the scale factor along the \(x\) axis. \(s_y\) specifies the scale factor along the \(y\) axis. Shear \[ \left[\begin{array}{ccc}1 & sh_y & 0\\sh_x & 1 & 0\\ 0 & 0 & 1\end{array}\right]\] \(sh_x\) specifies the shear factor along the \(x\) axis. \(sh_y\) specifies the shear factor along the \(y\) axis. Rotation \[ \left[\begin{array}{ccc}\cos(q) & \sin(q) & 0\\-\sin(q) & \cos(q) & 0\\ 0 & 0 & 1\end{array}\right]\] \(q\) specifies the angle of rotation.
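A minimal sketch of applying these matrices with numpy. The matrices follow the row-vector convention used in the table (point on the left, translation in the bottom row); the point and parameter values are illustrative:

```python
import numpy as np

def translate(tx, ty):
    # Row-vector convention, as in the table: [x y 1] @ M.
    return np.array([[1, 0, 0], [0, 1, 0], [tx, ty, 1]], dtype=float)

def rotate(q):
    c, s = np.cos(q), np.sin(q)
    return np.array([[c, s, 0], [-s, c, 0], [0, 0, 1]], dtype=float)

p = np.array([1.0, 0.0, 1.0])  # homogeneous point (1, 0)

# Rotate 90 degrees, then translate by (2, 3); transforms compose by
# matrix multiplication in application order.
print(p @ rotate(np.pi / 2) @ translate(2, 3))  # [2. 4. 1.]
```

Note that with this convention the composite transform is built by right-multiplying successive matrices, the reverse of the column-vector convention.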
Your understanding of what makes chess NP-Hard is slightly flawed. Yes, a nondeterministic machine is able to "play perfectly". But the language of chess is, $$Chess = \{Pos \quad | \quad \text{White wins with perfect play on an }n\times n \\ \text{ chess board, starting from position } Pos \quad \}$$ Does a certificate for this exist? Consider even just two moves, with white moving first. Then you ask whether a move for white exists such that white wins, for all moves of black. Let $W$ be a program that takes as input a board position and returns yes iff white has won. Then to check whether white wins with perfect play within four moves, you need to evaluate $$\exists w_1\colon \forall b_1\colon \exists w_2\colon \forall b_2\colon W(Move(Pos, w_1,b_1,w_2,b_2)) $$ But a nondeterministic Turing Machine can only answer questions if you ask them in the form $$\exists y\colon M(x,y) $$ Hence what makes chess, and other games, hard, is that the quantifiers alternate. From Even and Tarjan [1], who proved, to my knowledge, PSPACE-Completeness of a game for the first time: Our construction also suggests that what makes "games" harder than "puzzles" (e.g. NP-Complete problems) is the fact that the initiative ("the move") can shift back and forth between the players. Such a shift corresponds to an alternation of quantifiers in the Boolean formula (the NP-Complete problems correspond to Boolean formulas with no quantifier alternation). [1] Even, Shimon, and Robert Endre Tarjan. "A combinatorial problem which is complete in polynomial space." Journal of the ACM (JACM) 23.4 (1976): 710-719.
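The quantifier alternation can be seen in miniature by evaluating an existential/universal game tree with `any`/`all`. The toy "game" below (integer moves, White wins iff the running sum ends positive) is an assumption for illustration, not a claim about chess:

```python
# Evaluate ∃ w1 ∀ b1 ∃ w2 ∀ b2 : White wins, over a toy game tree.
MOVES = [-1, 1, 2]  # the legal "moves", an illustrative assumption

def white_wins(total, plies_left, whites_turn):
    """White wins iff the sum of moves after all plies is positive."""
    if plies_left == 0:
        return total > 0
    branches = (white_wins(total + m, plies_left - 1, not whites_turn)
                for m in MOVES)
    # ∃ for the player whose win we test (White), ∀ for the opponent.
    return any(branches) if whites_turn else all(branches)

print(white_wins(0, 4, True))  # True: White can force a positive sum
```

A nondeterministic machine handles the `any` levels for free, but the interleaved `all` levels are exactly what a single ∃-certificate cannot express.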
I need to prove that $e^{\bar{z}} = \overline{e^{z}}$, with $e^z := \sum_{k = 0}^{\infty} \frac{z^k}{k!}$. We know that $e^z$ defined this way always converges. What I don't understand is how to see that summing the conjugates $\overline{z^k}/k!$ converges to the same number as first doing the summation and then conjugating the limit. I don't know how to do that.
First, I am not exactly sure what you are asking. I am assuming that you know that the minimal norm solution to the least squares problem $\min_x {1 \over 2} \|Ax-y\|^2$ is given by $x=A^{\dagger} y$ and that the issue is to show that the solution to the problem $\min_X {1 \over 2} \| C-AXB\|_F^2$ is given by $A^{\dagger} C B^{\dagger}$. Second, note that $A^{\dagger} C B^{\dagger}$ is not necessarily a unique solution, however, of all solutions (and it is at least one solution), it is the one of minimal norm. For example, if $A=0$ then any $X$ is a solution, but clearly not unique. One approach is to determine the SVD with respect to the basis $E_{ij} = e_i e_j^T$ and translate this into the above form. Note that if we can write $A = \sum_k \sigma_k u_k v_k^T$ (or equivalently, $Ax = \sum_k \sigma_k u_k \langle v_k, x \rangle$ for all $x$), where $\sigma_k \ge 0$, $u_k$ & $v_k$ are orthonormal, then we can extract the SVD from this formulation in the usual sort of way. Then $A^{\dagger} = \sum_{\sigma_k \neq 0} {1 \over \sigma_k } v_k u_k^T$ (or equivalently, $A^{\dagger} x = \sum_{\sigma_k \neq 0} {1 \over \sigma_k } v_k \langle u_k, x \rangle$ for all $x$). So, we look for a similar expansion for the operator $L(X) = AXB$. \begin{eqnarray}AXB &=& (\sum_k \sigma_k(A) u_k(A)v_k(A)^T ) X (\sum_k \sigma_k(B) u_k(B)v_k(B)^T ) \\&=& \sum_{k,l} \sigma_k(A) \sigma_l(B) u_k(A)v_k(A)^T X u_l(B)v_l(B)^T \\&=& \sum_{k,l} \sigma_k(A) \sigma_l(B) u_k(A) v_l(B)^T v_k(A)^T X u_l(B) \\&=& \sum_{k,l} \sigma_k(A) \sigma_l(B) u_k(A) v_l(B)^T \operatorname{tr} (v_k(A)^T X u_l(B)) \\&=& \sum_{k,l} \sigma_k(A) \sigma_l(B) u_k(A) v_l(B)^T \operatorname{tr} ( u_l(B) v_k(A)^T X ) \\&=& \sum_{k,l} \sigma_k(A) \sigma_l(B) u_k(A) v_l(B)^T \langle v_k(A) u_l(B)^T , X \rangle \\\end{eqnarray} Hence $L$ has singular values $\sigma_k(A) \sigma_l(B) $ with left singular vectors $u_k(A) v_l(B)^T$ and right singular vectors $v_k(A) u_l(B)^T$. A quick check shows that the singular vectors are orthonormal.
Hence $L^{\dagger}$ has the representation (swapping the roles of the left and right singular vectors, as in the expansion of $A^{\dagger}$ above) \begin{eqnarray}L^{\dagger}(X) &=& \sum_{\sigma_k(A) \sigma_l(B) \neq 0} {1 \over \sigma_k(A) \sigma_l(B)} v_k(A) u_l(B)^T \langle u_k(A) v_l(B)^T , X \rangle \\&=& \sum_{\sigma_k(A) \sigma_l(B) \neq 0} {1 \over \sigma_k(A) \sigma_l(B)} v_k(A) u_l(B)^T \operatorname{tr} ( v_l(B) u_k(A)^T X ) \\&=& \sum_{\sigma_k(A) \sigma_l(B) \neq 0} {1 \over \sigma_k(A) \sigma_l(B)} v_k(A) u_l(B)^T u_k(A)^T X v_l(B) \\&=& \sum_{\sigma_k(A) \sigma_l(B) \neq 0} {1 \over \sigma_k(A) \sigma_l(B)} v_k(A) u_k(A)^T X v_l(B) u_l(B)^T \\&=& ( \sum_{\sigma_k(A) \neq 0} {1 \over \sigma_k(A)} v_k(A) u_k(A)^T ) X ( \sum_{\sigma_l(B) \neq 0} {1 \over \sigma_l(B)} v_l(B) u_l(B)^T ) \\&=& A^{\dagger} X B^{\dagger}\end{eqnarray} Hence the minimal norm solution to the least squares problem is given by $L^{\dagger}(C) = A^{\dagger} C B^{\dagger}$.
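The conclusion can be checked numerically with numpy, using the column-major identity vec(AXB) = (Bᵀ ⊗ A) vec(X); the shapes and the rank-deficiency trick below are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 3))
B = rng.standard_normal((5, 6))
C = rng.standard_normal((4, 6))
A[:, 2] = A[:, 0] + A[:, 1]   # make A rank-deficient: solution is non-unique

# vec(AXB) = (B^T kron A) vec(X), with column-major (order='F') vec.
M = np.kron(B.T, A)
x_min = np.linalg.pinv(M) @ C.flatten(order="F")   # minimal-norm least squares
X_kron = x_min.reshape((3, 5), order="F")

X_pinv = np.linalg.pinv(A) @ C @ np.linalg.pinv(B)

print(np.allclose(X_kron, X_pinv))  # True: A^+ C B^+ is the min-norm solution
```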
Let $M,N,P$ be smooth manifolds, $F\in\mathcal{C}^\infty(M,N)$ a smooth map and $i\in\mathcal{C}^\infty(P,N)$ an injective immersion, such that $F(M)\subset i(P)$. We define a map $G:M\rightarrow P$ by $F(p)=i(G(p))$. I am to show the following: If $i$ is an embedding, then $G$ is continuous. My ideas so far: Let $i$ be an embedding, then $i:P\rightarrow i(P)$ is a homeomorphism, i.e. the inverse $i^{-1}$ exists and is also continuous. Therefore, from $F=i\circ G$ it follows that $G=i^{-1}\circ F$. This is well-defined since $F(M)\subset i(P)$. Now one can pick an open subset $U\subset P$, such that $i(U)\subset F(M)$. Of course, $i(U)$ is open. But the question is how to show that $F^{-1}(i(U))=G^{-1}(U)$ is open in $M$. Any hints or ideas? I don't see how injectivity or differentials would help me here, except that I know: If a smooth map between manifolds is a diffeomorphism, then its differential is a bijection. However, the converse is not always true.
We need some notation before we state the problem. Let $K$ be a quadratic number field, $d$ its discriminant. Let $R$ be an order of $K$, i.e. a subring of $K$ which is a free $\mathbb{Z}$-module of rank $2$. Let $D$ be its discriminant. Let $x_1,\cdots, x_n$ be a sequence of elements of $R$. We denote by $[x_1,\cdots,x_n]$ the $\mathbb{Z}$-submodule of $R$ generated by $x_1,\cdots, x_n$. By this question, $R = [1, \omega]$, where $\omega = \frac{(D + \sqrt D)}{2}$. Let $\sigma$ be the unique non-identity automorphism of $K/\mathbb{Q}$. We denote $\sigma(\alpha)$ by $\alpha'$ for $\alpha \in R$. We denote $\sigma(I)$ by $I'$ for an ideal $I$ of $R$. Let $f$ be the order of $\mathcal{O}_K/R$ as a $\mathbb{Z}$-module. Then $D = f^2d$ (see this question). Let $p$ be a prime number such that gcd$(p, f) = 1$. Then $pR$ is regular by this question. Hence $pR$ is uniquely decomposed as a product of regular prime ideals by this question. So it is natural to ask how it is decomposed. I came up with the following proposition. Proposition. Let $K, R$, etc. be as above. Let $p$ be an odd prime number such that gcd$(p, f) = 1$. Case 1 $D$ is divisible by $p$. $P = [p, \omega]$ is a prime ideal and $pR = P^2$. Moreover $P = P'$. Case 2 gcd$(D, p) = 1$ and $D$ is quadratic residue modulo $p$. Let $b, b'$ be solutions of $(2x + d)^2 \equiv d$ (mod $p$) such that $b - b'$ is not divisible by $p$. Then $P = [p, b + \omega]$ and $P' = [p, b' + \omega]$ are distinct prime ideals and $pR = PP'$. Case 3 gcd$(D, p) = 1$ and $D$ is quadratic non-residue modulo $p$. $pR$ is a prime ideal. Method of my proof: I used the result of this question. My question: How do you prove the proposition? I would like to know other proofs based on different ideas from mine. I welcome you to provide as many different proofs as possible. I wish the proofs would be detailed enough for people who have basic knowledge of introductory algebraic number theory to be able to understand.
I'm trying not to be offended, I know how spam filters work and sometimes someone gets wrongly accused and it just happens to be me. I'm just a little frustrated because I carefully crafted this question for over an hour, then I get "this looks like spam" with absolutely no suggestion to make it look less like it. My Notepad++ counts 913 "\w+" instances (so around 913 words), I use 2 links, 2 strongly marked phrases ("Theorem 10." and "Here is my problem:"), 3 cursive parts (also short) and 5 lines beginning with > (quotes). Removing links and formatting hasn't helped. I could just shorten the question by leaving parts out and just edit them in again, but that seems cheap and is possibly unwanted. Please tell me what I can do? Edit: Original Post at https://pastebin.com/jvt9jPuD and below (first line was heading) Is the inverse of surreal numbers actually well-defined? J. Conway wrote in his book "On numbers and games" (1st edition, 1976) on p. 66 It seems to us, however, that mathematics has now reached the stage where formalization within some particular axiomatic set theory is irrelevant, even for foundational studies. I happen to disagree. In fact, I'm having some fun formalizing mathematics in the Mizar system, which as yet has just one article on Conway games. I want to find out if further formalization would be fruitful, i.e. if the operations defined in the book can be formalized at all. The cited article only defines $-x$ for a game $x$, but gives a pretty good idea how to deal with the highly inductive nature of games and I'm certain I could formalize addition and multiplication. What bothers me is the definition of $y=\frac{1}{x}$, given by $$y=\left\{\left.0,\frac{1+(x^R-x)y^L}{x^R},\frac{1+(x^L-x)y^R}{x^L}\right|\frac{1+(x^L-x)y^L}{x^L},\frac{1+(x^R-x)y^R}{x^R}\right\}$$ for $x$ positive and only positive $x^L$ considered, which is needed for defining division.
Conway, after giving this definition on p.21, writes himself Note that expressions involving $y^L$ and $y^R$ appear in the definition of $y$. It is this that requires us to "explain" the definition. The explanation is that we regard these parts of the definition as defining new options for $y$ in terms of old ones. In a footnote, the rather trivial example of $\frac{1}{3}=\{0,\frac{1}{4},\frac{5}{16},\ldots|\frac{1}{2},\frac{3}{8},\ldots\}$ is given to show "how the definition works". I can't see why $y$ is well defined in general. For example, given two uncountable cardinals $\alpha<\beta$ I'm having a hard time seeing how $\frac{1}{\beta-\alpha}$ should be computed. The emphasis here lies on "uncountable". Claus Tøndering gave a seemingly equivalent definition of the inverse here on p.44 (if the definition should not be equivalent, please point out why). He defines $y$ through $y^L$ and $y^R$ as such: $$0\in y^L$$ $$z\in y^L \Rightarrow \tfrac{1+(x^R-x)z}{x^R}\in y^L, \tfrac{1+(x^L-x)z}{x^L}\in y^R$$ $$z\in y^R \Rightarrow \tfrac{1+(x^L-x)z}{x^L}\in y^L, \tfrac{1+(x^R-x)z}{x^R}\in y^R$$ This is still "too" recursive to be formalized. One of my problems is that I can't comprehend the cardinality of $y^L$ and $y^R$. I mean, I could define $y^L_0 = \{0\}, y^R_0 = \{\}$ and for $n\in\mathbb{N}, n>0$ change Tøndering's definitions to $z\in y^L_{n-1} \Rightarrow \ldots\in y^L_n$ and so on (or better: $$y^L_n = \left\{\left.\tfrac{1+(x^R-x)z}{x^R}\right|z\in y^L_{n-1}\right\}\cup\left\{\left.\tfrac{1+(x^L-x)z}{x^L}\right|z\in y^R_{n-1}\right\}$$ and analogously for $y^R$) and conjecture $$y^L = \{0\}\cup\bigcup_{n\in\mathbb{N}} y^L_n,\quad y^R = \bigcup_{n\in\mathbb{N}} y^R_n$$ Here is my problem: I really doubt I could prove that. First off, I'm having trouble believing the equality holds, that I could miss something by merely having a countable union.
Secondly, $y^L$ and $y^R$ are required to be sets and I can imagine how they accidentally could become classes this way with some $x$ nefarious enough (maybe $x=\beta-\alpha$ is enough already?), because maybe the set generation process never stops at a certain day. I couldn't get information about this topic at all. In papers about surreal numbers either they are just given like here without further doubt or not explicitly given at all. Some papers, like those from Philip Ehrlich, go deeper into cardinality or other theories above my understanding, so if the issue were resolved there, I wouldn't have noticed. On the matter of the $y^L_n$ and $y^R_n$ being sets, Conway writes Theorem 10. We have (i) $xy^L<1<xy^R$ for all $y^L,y^R$. (ii) $y$ is a number. [(iii) and (iv) left out] Proof. We observe that the options of $y$ are defined by formulae of the form $$y''=\frac{1+(x'-x)y'}{x'}$$ where $y'$ is an earlier option of $y$, and $x'$ some non-zero-option of $x$. This formula can be written $$1-xy'' = (1-xy')\frac{x'-x}{x'}$$ which shows that $y''$ satisfies (i) if $y'$ does. Plainly $0$ does. Part (ii) now follows, since we cannot have any inequality $y^L\geq y^R$. [...] As far as my understanding goes, with "$y$ is a number" he means "if $y$ is a game, then it's a number", as this proof (directly following the remark after the definition) does not indicate the sethood of $y^L$ or $y^R$ in my eyes.
Definition:Two (Boolean Algebra) Definition Denote with $\top$ the canonical tautology. Denote with $\bot$ the canonical contradiction. Define $\mathbf 2 := \left\{{\bot, \top}\right\}$, read two. When endowed with the logical operations $\lor$, $\land$ and $\neg$, $\mathbf 2$ becomes a Boolean algebra. These operations have the following Cayley tables: $\begin{array}{c|cc} \lor & \bot & \top \\ \hline \bot & \bot & \top \\ \top & \top & \top \end{array} \qquad \begin{array}{c|cc} \land & \bot & \top \\ \hline \bot & \bot & \bot \\ \top & \bot & \top \end{array} \qquad \begin{array}{c|cc} & \bot & \top \\ \hline \neg & \top & \bot \end{array}$ Also known as Some sources use $0$ and $1$ for $\bot$ and $\top$, respectively. The three operations $\vee$, $\wedge$ and $\neg$ still have the same Cayley tables, so that this is a matter of convention only. Define $\preceq$ to be the ordering determined by putting $\bot \preceq \top$. When endowed with the logical operations $\lor$, $\land$ and $\neg$, and this ordering, $\mathbf 2$ becomes a Boolean lattice.
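Because $\mathbf 2$ is finite, the Boolean-algebra axioms can be verified exhaustively. A small sketch identifying $\bot, \top$ with Python's False, True (the choice of which axioms to spot-check is illustrative):

```python
from itertools import product

TWO = (False, True)  # (bot, top)

# Check a few Boolean-algebra laws over all assignments in 2.
for x, y, z in product(TWO, repeat=3):
    assert (x or y) == (y or x) and (x and y) == (y and x)      # commutativity
    assert (x or (y and z)) == ((x or y) and (x or z))          # distributivity
    assert (x and (y or z)) == ((x and y) or (x and z))
    assert (x or (not x)) is True and (x and (not x)) is False  # complements
print("all axioms hold")
```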
H S Bhatti Articles written in Pramana – Journal of Physics Volume 65 Issue 3 September 2005 pp 541-546 Singly and doubly doped ZnS phosphors have been synthesized using the flux method. Laser-induced photoluminescence has been observed in ZnS-doped phosphors when these were excited by the pulsed UV N$_2$ laser radiation. Due to down-conversion phenomenon, fast phosphorescence emission in the visible region is recorded in milliseconds time domain for ZnS:Mn while in the case of ZnS:Mn:killer (Fe, Co and Ni) the lifetime reduces to microseconds time domain. Experimentally observed luminescent emission parameters of excited states such as, lifetimes, trap-depth values and decay constants have been reported here at room temperature. The high efficiency and fast recombination times observed in doped ZnS phosphors make these materials very attractive for optoelectronic applications. Volume 81 Issue 6 December 2013 pp 1021-1035 In the present research paper, phonons in a graphene sheet have been calculated by constructing a dynamical matrix using the force constants derived from the second-generation reactive empirical bond order potential by Brenner and co-workers. Our results are comparable to inelastic X-ray scattering as well as first principle calculations. At the $\Gamma$ point, for graphene, the optical modes (degenerate) lie near 1685 cm$^{−1}$. The frequency regimes are easily distinguishable. The low-frequency ($\omega \to 0$) modes are derived from acoustic branches of the sheet. The radial modes can be identified with $\omega \to 584$ cm$^{−1}$. The high-frequency regime is above 1200 cm$^{−1}$ (i.e. ZO mode) and consists of TO and LO modes. The phonons in a nanotube can be derived from zone folding method using phonons of a single layer of the hexagonal sheet. The present work aims to explore the agreement between theory and experiment. A better knowledge of the phonon dispersion of graphene is highly desirable to model and understand the properties of carbon nanotubes.
The development and production of carbon nanotubes (CNTs) for possible applications need reliable and quick analytical characterization. Our results may serve as an accurate tool for the spectroscopic determination of the tube radii and chiralities.
This could be a math question. But if you think about the physical aspect of the question, it is interesting to look at the Schrödinger equation: For a free particle (without potential), you have (in units $\hbar = m = \omega = 1$): $$i\frac{\partial \Psi( k, t)}{\partial t} = \frac{ k^2}{2}\Psi( k, t)$$ or $$ E \tilde \Psi( k, E) = \frac{ k^2}{2} \tilde \Psi( k, E)$$ whose solution is: $$\Psi( k, t) \sim e^{- i \frac{ k^2}{2} t}$$ or $$ \tilde \Psi( k, E) \sim \delta \left(E - \frac{ k^2}{2}\right)$$ Here $ \tilde \Psi( k, E)$ is a Fourier transform of $\Psi( k, t)$. It is clear, from the form of the equations, that there is no constraint on $E$: the spectrum is continuous, and the solution is clearly non-normalizable. However, with potentials, things appear differently, and you will have some differential equations; for instance, for the harmonic oscillator potential, you will have: $$ E \Psi( k, E) = \frac{ k^2}{2} \Psi( k, E) - \frac{1}{2}\frac{\partial^2 \Psi( k, E)}{\partial k^2}$$ The solution for $\Psi$ involves a Hermite differential equation (multiply by some exponential $e^{-k^2}$). If $E$ is taken to be continuous, then the Hermite solution (with a real index) is not bounded at infinity, and so the solution is not normalizable. If we want a normalizable solution, then we need a (non-negative) integer-indexed solution $H_n$; these are the Hermite polynomials. In this case, the spectrum of $E$ is discrete. The choice of $E$ discrete (and so a normalizable solution) is then a physical choice. In the case of the harmonic oscillator, it is unphysical to suppose that the solution is not bounded at infinity. The case of Hermite polynomials is a special case of orthogonal polynomials, which is very well suited to represent orthonormal states, corresponding to discrete eigenvalues of the Hermitian energy operator.
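The discreteness can also be seen numerically: diagonalizing a finite-difference harmonic-oscillator Hamiltonian (with $\hbar = m = \omega = 1$) gives eigenvalues near $n + \tfrac12$. The grid extent and resolution below are illustrative choices, not part of the answer:

```python
import numpy as np

# Finite-difference Hamiltonian H = -1/2 d^2/dx^2 + x^2/2 on [-L, L].
L, n = 8.0, 1200
x, dx = np.linspace(-L, L, n, retstep=True)
main = 1.0 / dx**2 + 0.5 * x**2          # diagonal: kinetic + potential
off = -0.5 / dx**2 * np.ones(n - 1)      # off-diagonal of -1/2 * Laplacian
H = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)

E = np.linalg.eigvalsh(H)[:4]
print(np.round(E, 2))  # ≈ [0.5 1.5 2.5 3.5] -- the discrete spectrum n + 1/2
```

Imposing normalizability (here, decay well inside the box) is exactly what selects these isolated eigenvalues.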
Let $f\ge 0$ be a measurable real-valued function defined on $\mathbb R$. For each $c>0$, let $E_c:=\{x \in\mathbb R : f(x)\ge c \}$. Prove that $$\int_{E_c}f\,dm\ge c \cdot m(E_c).$$ My idea is to construct a sequence of simple functions approximating $f$, say $\Phi_n=\sum_{i=1}^N a_i\chi_{E_i}$. Since we are integrating over $E_c:=\{x \in\mathbb R : f(x)\ge c \}$, I will let $a_i \ge c$ for all $a_i$. By definition, $\int_{E_c}f\,dm= \sup\left\{\int_{E_c}\Phi \, dm : \Phi\le f \right\}$ and $\int_{E_c}\Phi \, dm =\sum_{i=1}^N a_i \cdot m(E_i)$. So in this case $E_c=\bigcup E_i$. But $$\sum_{i=1}^N a_i \cdot m(E_i) \color{red}{\ge} \sum_{i=1}^N c\cdot m(E_i) = c \sum_{i=1}^N m(E_i) \text{ since } a_i \ge c \quad \forall a_i$$ and by additivity of measure $$c\sum_{i=1}^N m(E_i)=c\cdot m\left(\bigcup E_i\right)=c\cdot m(E_c).$$ But I feel something is off. Can you please help me out? Thank you all in advance.
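As a numerical sanity check of the inequality (an illustration, not a proof), one can approximate both sides with Riemann sums for a sample nonnegative $f$; the choice $f(x)=xe^{-x}$ on $[0,5]$ is arbitrary.

```python
import numpy as np

# sample nonnegative function on [0, 5] (arbitrary illustrative choice)
x = np.linspace(0.0, 5.0, 100001)
dx = x[1] - x[0]
f = x * np.exp(-x)

results = []
for c in (0.05, 0.1, 0.2, 0.3):
    Ec = f >= c                    # indicator of E_c = {x : f(x) >= c}
    integral = f[Ec].sum() * dx    # approximates  \int_{E_c} f dm
    measure = Ec.sum() * dx        # approximates  m(E_c)
    results.append((c, integral, c * measure))
    print(c, integral, c * measure)
```

The inequality holds term by term here for the same reason it holds in general: on $E_c$ every sampled value of $f$ is at least $c$.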
String theory may be considered as a framework to calculate scattering amplitudes (or other physically meaningful, gauge-invariant quantities) around a flat background; or any curved background (possibly equipped with nonzero values of other fields) that solves the equations of motion. The curvature of spacetime is physically equivalent to a coherent state (condensate) of closed strings whose internal degrees of freedom are found in the graviton eigenstates and whose zero modes and polarizations describe the detailed profile $g_{\mu\nu}(X^\alpha)$. Einstein's equations arise as equations for the vanishing of the beta-functions – derivatives of the (continuously infinitely many) world sheet coupling constants $g_{\mu\nu}(X^\alpha)$ with respect to the world sheet renormalization scale – which is needed for the scaling conformal symmetry of the world sheet (including the quantum corrections), a part of the gauge symmetry constraints of the world sheet theory. Equivalently, one may realize that the closed strings are quanta of a field and calculate their interactions in an effective action from their scattering amplitudes at any fixed background. The answer is, once again, that the low-energy action is the action of general relativity; and the diffeomorphism symmetry is actually exact. It is not a surprise that the two methods produce the same answer; it is guaranteed by the state-operator correspondence, a mathematical fact about conformal field theories (such as the theory on the string world sheet). The relationship between the spacetime curvature and the graviton mode of the closed string is that the former is the condensate of the latter. They're the same thing. They're provably the same thing. Adding closed string excitations to a background is the only way to change the geometry (and curvature) of this background. (This is true for all other physical properties; everything is made out of strings in string theory.)
On the contrary, when we add closed strings in the graviton mode to a state of the spacetime, their effect on other gravitons and all other particles is physically indistinguishable from a modification of the background geometry. Adjustment of the number and state of closed strings in the graviton mode is the right and only way to change the background geometry. See also http://motls.blogspot.cz/2007/05/why-are-there-gravitons-in-string.html?m=1 Let me be a bit more mathematical here. The world sheet theory in a general background is given by the action $$ S = \int d^2\sigma\,g_{\mu\nu}(X^\alpha(\sigma)) \partial_\alpha X^\mu(\sigma)\partial^\alpha X^\nu(\sigma) $$ It is a modified Klein-Gordon action for 10 (superstring) or 26 (bosonic string theory) scalar fields in 1+1 dimensions. The functions $g_{\mu\nu}(X^\alpha)$ define the detailed theory; they play the role of the coupling constants. The world sheet metric may always be (locally) put to the flat form, by a combination of the 2D diffeomorphisms and Weyl scalings. Now, the scattering amplitudes in (perturbative) string theory are calculated as $$ A = \int {\mathcal D} h_{\alpha\beta}\cdots \exp(-S)\prod_{i=1}^n \int d^2\sigma V_i $$ We integrate over all metrics on the world sheet, add the usual $\exp(-S)$ dependence on the world sheet action (Euclideanized, to make it mathematically convenient by a continuation), and insert $n$ "vertex operators" $V_i$, integrated over the world sheet, corresponding to the external states.
The key thing for your question is that the vertex operator for a graviton has the form $$V_{\rm graviton} = \epsilon_{\mu\nu}\partial_\alpha X^\mu (\sigma)\partial^\alpha X^\nu(\sigma)\cdot \exp(ik\cdot X(\sigma)).$$ The exponential, the plane wave, represents (the basis for) the most general dependence of the wave function on the spacetime, $\epsilon$ is the polarization tensor, and each of the two $\partial_\alpha X^\mu(\sigma)$ factors arises from one excitation $\alpha_{-1}^\mu$ of the closed string (or with a tilde) above the tachyonic ground state. (It's similar for the superstring but the tachyon is removed from the physical spectrum.) Because of these two derivatives of $X^\mu$, the vertex operator has the same form as the world sheet Lagrangian (kinetic term) itself, with a more general background metric. So if we insert this graviton into a scattering process (in a coherent state, so that it is exponentiated), it has exactly the same effect as if we had modified the factor $\exp(-S)$ by changing the "background metric" coupling constants that $S$ depends upon. So the addition of the closed string external states to the scattering process is equivalent to not adding them but starting with a modified classical background. Whether we include the factor in $\exp(-S)$ or in $\prod V_i$ is a matter of bookkeeping – it is the question of which part of the fields is considered background and which part is a perturbation of the background. However, the dynamics of string theory is background-independent in this sense. The total space of possible states, and their evolution, is independent of our choice of the background. By adding perturbations, in this case physical gravitons, we may always change any allowed background to any other allowed background. We always need some vertex operators $V_i$, in order to build the "Fock space" of possible states with particles – not all states are "coherent", after all.
However, you could try to realize the opposite extreme attitude, namely to move "all the factors", including those from $\exp(-S)$, from the action part to the vertex operators. Such a formulation of string theory would have no classical background, just the string interactions. It's somewhat singular but it's possible to formulate string theory in this way, at least in the cubic string field theory (for open strings). It's called the "background-independent formulation of the string field theory": instead of the general $\int\Psi*Q\Psi+\Psi*\Psi*\Psi$ quadratic-and-cubic action, we may take the action of string field theory to be just $\int\Psi*\Psi*\Psi$ and the quadratic term (with all the kinetic terms that know about the background spacetime geometry) may be generated if the string field $\Psi$ has a vacuum condensate. Well, it's a sort of a singular one, an excitation of the "identity string field", but at least formally, it's possible: the whole spacetime may be generated purely out of stringy interactions (the cubic term), with no background geometry to start with.
The suspension isomorphism for homology index braids DOI: http://dx.doi.org/10.12775/TMNA.2006.028 Abstract Let $X$ be a metric space, $\pi$ be a local semiflow on $X$, $k\in\mathbb N$, $E$ be a $k$-dimensional normed space and $\widetilde\pi$ be the semiflow generated by the equation $\dot y=Ly$, where $L\colon E\to E$ is a linear map all of whose eigenvalues have positive real parts. We show in this paper that for every admissible isolated $\pi$-invariant set $S$ there is a well-defined isomorphism of degree $-k$ from the homology categorial Conley-Morse index of $(\pi\times\widetilde\pi,S\times\{0\})$ to the homology categorial Conley-Morse index of $(\pi,S)$ such that the family of these isomorphisms commutes with homology index sequences. In particular, given a partially ordered Morse decomposition $(M_i)_{i\in P}$ of $S$ there is an isomorphism of degree $-k$ from the homology index braid of $(M_i\times\{0\})_{i\in P}$ to the homology index braid of $(M_i)_{i\in P}$, so $C$-connection matrices of $(M_i\times\{0\})_{i\in P}$ are just $C$-connection matrices of $(M_i)_{i\in P}$ shifted by $k$ to the right.
Keywords Conley index; homology index braid; suspension isomorphism; connection matrix
I thought of this question somewhat randomly on a walk, and have discussed it with another friend of mine (we both have pure mathematics degrees). We have made some headway, and we think we have generated a proof, but we would appreciate any additional insight and proof verification. Let $f:\mathbb{R}\to\mathbb{R}$ be a continuous function such that $f(ax)=f(x)+f(a)$. How much can we say about the behavior of $f(x)$? What additional restrictions, if any, allow us to show $f(x)=\log_b(x)$? We conjecture that, with the restriction that $f$ is not $0$ everywhere, $f$ must be the logarithm. We have, by the log rules, $$\log_b(|ax|)=\log_b(|x|)+\log_b(|a|)$$ So the logarithm is indeed one such $f$. I will give what we have been able to show about $f$ below, followed by our general proof. If you can offer any insight into this problem, we would greatly appreciate it. Edit: Turns out this can be shown fairly easily by considering the Cauchy Functional Equation and showing $g(x)=f(e^x)=cx$ for some $c$. The proof given below does not use this fact. From here on, we assume $f$ is not trivial. Result 1: $f(1)=0$ We note that $$f(a)=f(a\cdot1)=f(1)+f(a)$$ which shows $f(1)=0$. Result 2: $f(0)$ is not defined We see that $$f(0)=f(a\cdot0)=f(0)+f(a)$$ which implies $f(a)=0$ for all $a$. However, we have assumed $f(x)\neq0$ for some $x$, giving a contradiction. Thus $f(0)$ must not be defined. So, we redefine $f$ as $f:\mathbb{R}\setminus\{0\}\to\mathbb{R}$. Result 3: $f(-x)=f(x)$ If we allow $x<0$, we then have $$0=f(1)=f(-1\cdot-1)=2f(-1)$$ Showing that $f(-1)=0$ as well. Using this result, we have $$f(-a)=f(-1)+f(a)=f(a)$$ Our Proof: From here on, we use results from elementary abstract algebra and analysis. We realized that the condition on $f$ is that of a group homomorphism $\varphi:(\mathbb{R}\setminus\{0\},*)\to(\mathbb{R},+)$ where $$\varphi(xy)=\varphi(x)+\varphi(y)$$ Similarly, you can show the above results for $\varphi$. 
We noted that if we also define the continuous group homomorphism $\psi:(\mathbb{R},+)\to(\mathbb{R}\setminus\{0\},*)$ where $$\psi(x+y)=\psi(x)\psi(y)$$ $\psi$ and $\varphi$ seem like they could possibly be inverses with some additional restrictions. We see that $$\psi(x)=\psi(0+x)=\psi(0)\psi(x)$$ and since $\psi(x)\neq0$, this gives $\psi(0)=1$. We can then say for any integer $n\geq0$, $$\psi(n)=\psi(1+1+1+...+1)=\psi(1)\cdot\psi(1)\cdots\psi(1)=\psi(1)^n$$ Note, $\psi(1)<0$ may give us complex results, so we restrict $\psi(1)>0$. For any integer $n<0$, $$\psi(n)=\psi(-1-1-...-1)=\psi(-1)^{|n|}=(\psi(1)^{-1})^{|n|}=\psi(1)^{-|n|}=\psi(1)^n$$ For the reciprocal of a positive integer $q$, note that $\psi(\frac{1}{q})^q=\psi(\frac{1}{q}+...+\frac{1}{q})=\psi(1)$, so $\psi(\frac{1}{q})=\psi(1)^{\frac{1}{q}}$ (taking the positive root, since $\psi(1)>0$). Then, for any rational number $\frac{p}{q}$, $$\psi\left(\frac{p}{q}\right)=\psi\left(\frac{1}{q}\right)^p=\psi(1)^{\frac{p}{q}}$$ Let $x\in\mathbb{R}\setminus\mathbb{Q}$. Since the rationals are dense in the reals, we can find rationals $\frac{p}{q}<x<\frac{r}{s}$ arbitrarily close to $x$. Note that $\psi$ is strictly monotone over the rationals (increasing if $\psi(1)>1$, decreasing if $\psi(1)<1$; if $\psi(1)=1$ then $\psi$ is constant), and thus, say in the increasing case, $$\psi(1)^{\frac{p}{q}}<\psi(x)<\psi(1)^{\frac{r}{s}}$$ Since $\psi$ is continuous, this implies $\psi(x)=\psi(1)^x$ for all $x\in\mathbb{R}$. Therefore, $$\psi(x)=\psi(1)^x$$ This shows that $\psi$ must be the exponential function. Note that it is completely characterized by its value at $1$. Noting that $\varphi$ seems like it should be related to the inverse of $\psi$, we would expect $\varphi(x)=\log_b(x)$. We note that for integer $n\geq0$ and real $a>0$, $$\varphi(a^n)=\varphi(a)+\varphi(a)+...+\varphi(a)=n\varphi(a)$$ For $n<0$, we have $$\varphi(a^n)=\varphi\left(\frac{1}{a}\right)+...+\varphi\left(\frac{1}{a}\right)=|n|\varphi(a^{-1})=-|n|\varphi(a)=n\varphi(a)$$ The rational case is slightly trickier. We note that $a^{\frac{n}{n}}=a$, and thus $$\varphi(a)=\varphi(a^{\frac{n}{n}})=n\varphi(a^{\frac{1}{n}})$$ And thus $\varphi(a^{\frac{1}{n}})=\frac{1}{n}\varphi(a)$.
Therefore, for any rational $\frac{p}{q}$, $$\varphi(a^{\frac{p}{q}})=p\varphi(a^{\frac{1}{q}})=\frac{p}{q}\varphi(a)$$ Using the same argument as before, we can extend this to all $x\in\mathbb{R}$. Therefore, $$\varphi(a^x)=x\varphi(a)$$ Combining these, we have $$\varphi(\psi(x))=\varphi(\psi(1)^x)=x\varphi(\psi(1))$$ Thus, $\varphi$ and $\psi$ are inverses up to multiplication by a constant, which confirms that $\varphi$ must be a logarithm. If we restrict $\varphi(\psi(1))=1$, they are exactly inverses. Writing $b=\psi(1)>0$, we must have $\varphi_b(x)=\log_b(x)$. This proof requires that $x>0$. However, we showed earlier that $\varphi(-x)=\varphi(x)$, and for positive $x$, $\varphi_b(x)=\log_b(x)$. To make this function even, we modify it as $$\varphi_b(x)=\log_b|x|$$ and this is the only possible continuous solution for $\varphi$ defined on all of $\mathbb{R}\setminus\{0\}$. In other words, $f$ must be the logarithm. We have shown the logarithm rules are unique to logarithms! One thing that concerns us is the restriction that $f$ is continuous. Are there discontinuous $f$ that satisfy this property? Please let us know if we made any invalid assumptions somewhere, or if our statements about the uniqueness of these functions are incorrect. We would also appreciate any alternate proofs you may have.
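A quick numerical check of the conclusion (an illustration, not part of the proof): for an arbitrary base $b>0$, $b\neq1$, the function $\varphi_b(x)=\log_b|x|$ does satisfy the functional equation on $\mathbb{R}\setminus\{0\}$, is even, and vanishes at $\pm1$.

```python
import math
import random

random.seed(0)
b = 3.7  # an arbitrary base, b > 0 and b != 1

def phi(x):
    # the claimed unique continuous solution on R \ {0}
    return math.log(abs(x), b)

for _ in range(1000):
    x = random.uniform(-10, 10) or 1.0  # guard against an exact 0
    y = random.uniform(-10, 10) or 1.0
    # f(xy) = f(x) + f(y)
    assert math.isclose(phi(x * y), phi(x) + phi(y), abs_tol=1e-9)

# values forced by the derivation: f(1) = f(-1) = 0, f even
assert phi(1) == 0.0 and phi(-1) == 0.0
assert math.isclose(phi(-2.5), phi(2.5), abs_tol=1e-12)
```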
I have data tuples $(x_i,\varepsilon_i,t_i)$ generated from some observations and I suspect that $\varepsilon \sim \mathcal{N}\left(0,\sigma(t)^2\right)$, where $\sigma(t)$ is an increasing function of time, i.e. the data is zero-mean but clearly manifests increasing variance as time progresses. The first approach is to bin the data and estimate the distribution of each bin, but the spacing of the data is not uniform in $t$, so some bins have large populations and others are empty; moreover, the choice of bin size is arbitrary, and I'd prefer something that uses the time series more naturally. I want to know how to 1) infer the functional form of $\sigma(t)$. To start I assume $\sigma(t)^2 =\sigma_1^2 + \sigma_2^2 t$ and want to estimate $\sigma_1$, $\sigma_2$ from observations; 2) assess the relative agreement between the proposed distribution and the observations. My approach is now to try something like $p(\varepsilon_i \vert \boldsymbol{\theta},t_i) = \frac{1}{\sqrt{2\pi \left(\theta_1^2 + \theta_2^2 t_i \right)}}\exp\left[\frac{-\varepsilon_i^2}{2\left(\theta_1^2 + \theta_2^2 t_i \right)}\right]$ and put some prior on $\boldsymbol{\theta}$ and do regular inference, basically treating samples as independent (conditioned on $t$). I can then use general tests of agreement with distributions to assess goodness of fit. My worry is that I am not treating the time parameter correctly. I'd be very interested in any comments, references, or other ways of casting this problem.
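One way to make step 1) concrete is plain maximum likelihood on the variance model $\sigma(t)^2=\theta_1^2+\theta_2^2 t$. The sketch below (with made-up parameter values, using `scipy.optimize`) treats the samples as independent given $t$, exactly as described, and recovers the parameters from synthetic data.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(42)

# synthetic data with known (made-up) parameters to test the recovery
sigma1, sigma2, n = 1.0, 0.5, 20000
t = rng.uniform(0.0, 10.0, n)
eps = rng.normal(0.0, np.sqrt(sigma1**2 + sigma2**2 * t))

def neg_log_lik(theta):
    v = theta[0]**2 + theta[1]**2 * t      # variance model sigma(t)^2
    return 0.5 * np.sum(np.log(2 * np.pi * v) + eps**2 / v)

res = minimize(neg_log_lik, x0=[0.5, 0.5], method="Nelder-Mead")
th1, th2 = np.abs(res.x)                   # the signs of theta are not identified
print(th1, th2)                            # should land near (1.0, 0.5)
```

The prior mentioned in the question would turn the point estimate into a posterior; for step 2), one option is to check the standardized residuals $\varepsilon_i/\sigma(t_i)$ against a standard normal.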
Since buying computation power is much more affordable than in the past, is the knowledge of algorithms and efficiency getting less important? It's clear that you would want to avoid an infinite loop, so not everything goes. But if you have better hardware, could you afford somewhat worse software? I really like the example from the Introduction to Algorithms book, which illustrates the significance of algorithm efficiency: Let's compare two sorting algorithms: insertion sort and merge sort. Their running times are roughly $c_1n^2$ and $c_2n \lg n$ respectively. Typically merge sort has a bigger constant factor, so let's assume $c_1 < c_2$. To answer your question, we evaluate the execution time of a faster computer (A) running the insertion sort algorithm against a slower computer (B) running the merge sort algorithm. We assume: the size of the input problem is 10 million numbers: $n=10^7$; computer A executes $10^{10}$ instructions per second (~ 10 GHz); computer B executes only $10^7$ instructions per second (~ 10 MHz); the constant factors are $c_1=2$ (which is slightly overestimated) and $c_2=50$ (in reality it is smaller). So with these assumptions it takes $$ \frac{2 \cdot (10^7)^2 \text{ instructions}} {10^{10} \text{ instructions}/\text{second}} = 2 \cdot 10^4 \text{ seconds} $$ for computer A to sort $10^7$ numbers and $$ \frac{50 \cdot 10^7 \lg 10^7 \text{ instructions}} {10^{7} \text{ instructions}/\text{second}} \approx 1163 \text{ seconds}$$ for computer B. So the computer which is 1000 times slower can solve the problem 17 times faster. In reality the advantage of merge sort will be even more significant, and it increases with the size of the problem. I hope this example helps to answer your question. However, this is not all about algorithm complexity. Today it is almost impossible to get a significant speedup just by using a machine with a higher CPU frequency. People need to design algorithms for multi-core systems that scale well.
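The arithmetic in the sorting example above can be reproduced directly (the constants are the ones assumed in the example):

```python
from math import log2

n = 10**7
c1, c2 = 2, 50              # assumed constant factors
speed_A = 10**10            # instructions/second (~10 GHz)
speed_B = 10**7             # instructions/second (~10 MHz)

t_A = c1 * n**2 / speed_A           # insertion sort on the fast machine
t_B = c2 * n * log2(n) / speed_B    # merge sort on the slow machine
print(t_A, t_B, t_A / t_B)          # ~20000 s vs ~1163 s: about 17x
```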
This is also a tricky task, because with the increase of cores, an overhead (for managing memory accesses, for instance) increases as well. So it's nearly impossible to get a linear speedup. So to sum up, the design of efficient algorithms today is equally important as before, because neither frequency increase nor extra cores will give you the speedup compared to the one brought by the efficient algorithm. On the contrary. At the same time that hardware is getting cheaper, several other developments take place. First, the amount of data to be processed is growing exponentially. This has led to the study of quasilinear time algorithms, and the area of big data. Think for example about search engines - they have to handle large volumes of queries, process large amounts of data, and do it quickly. Algorithms are more important than ever. Second, the area of machine learning is growing strong, and full of algorithms (albeit of a different kind than what you learn in your BA). The area is thriving, and every so often a truly new algorithm is invented, and improves performance significantly. Third, distributed algorithms have become more important, since we are hitting a roadblock in increasing CPU processing speed. Nowadays computing power is being increased by parallelizing, and that involves dedicated algorithms. Fourth, to counterbalance the increasing power of CPUs, modern programming paradigms employ virtual machine methods to combat security loopholes. That slows these programs down by an appreciable factor. Adding to the conundrum, your operating system is investing more CPU time on bells and whistles, leaving less CPU time for your actual programs, which could include CPU-intensive algorithms such as video compression and decompression. So while hardware is faster, it's not used as efficiently. 
Summarizing, efficient algorithms are necessary to handle large amounts of data; new kinds of algorithms are popping up in the area of artificial intelligence; distributed algorithms are coming into focus; and CPU power is harnessed less efficiently for various reasons (but mainly, because computers are getting more powerful). Algorithms are not dead yet. The knowledge of algorithms is much more than how to write fast algorithms. It also gives you problem-solving methods (e.g. divide and conquer, dynamic programming, greedy, reduction, linear programming, etc.) that you can then apply when approaching a new and challenging problem. Having a suitable approach usually leads to code which is simpler and much faster to write. So I have to disagree with Kevin's answer, since code that is not carefully put together is often not only slow but also complicated. I like this quote by David Parnas: I often hear developers described as "someone who knows how to build a large system quickly." There is no trick in building large systems quickly; the quicker you build them, the larger they get! (Of course, we also need to combine algorithms with software design methods to write good code.) The knowledge of algorithms also tells us how to organize data so that we can process it more easily and efficiently through the use of data structures. Furthermore, it gives us a way to estimate the efficiency of an approach, and to understand the trade-offs between several different approaches in terms of time complexity, space complexity, and the complexity of the code. Knowing these trade-offs is the key to making the right decision within your resource constraints. On the importance of software efficiency, I will quote Wirth's Law: Software is getting slower more rapidly than hardware becomes faster. Larry Page recently restated that software gets twice as slow every 18 months, and thus outpaces Moore's law. Yes, they are 'relatively' less important in the wider industry.
A text editor may be 'fast enough' and might not need many improvements. A large part of IT effort goes to making sure that component A written in Java works with component B written in C, communicating correctly via a message queue written in Cobol (or something), or to getting the product to market, etc. Furthermore, the architecture has become complicated. When you had plain old simple processors, with 1 instruction per cycle, and you wrote in assembly, optimizations were 'easy' (you just needed to count the number of instructions). Currently you don't have a simple processor but a fully-pipelined, superscalar, out-of-order processor with register renaming and multi-level caches. And you don't write in assembly but in C/Java/etc., where code is compiled/JITed (usually to better code than you or I would write in assembly), or in Python/Ruby/..., where code is interpreted and you are separated by several levels of abstraction from the machine. Micro-optimizations are hard, and most programmers would achieve the opposite effect. No, they are as important as ever in research and in 'absolute' terms. There are areas where speed is important, as they operate on large amounts of data. On this scale the complexities matter, as shown by Pavel's example. However there are further cases - going 'down' from algorithms is still an option chosen when speed matters (HPC, embedded devices etc.). At many universities you will find groups specializing in compilers and/or software optimization. For example, a simple swap of loop ordering can yield a thousandfold speedup just because it utilizes the cache efficiently - and while that might be a borderline example, the CPU-memory gap grew 1000-fold over the past 30 years. Also, Computer Architecture is part of CS, so many of the improvements in speed of computation are in fact part of the general CS field. On the industrial side - when you have an HPC cluster, speed matters, because a single program can run for days, months or years.
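The loop-ordering point can be sketched in Python/NumPy (the effect here is much milder than the thousandfold figure, which refers to compiled code, but the direction is the same): a C-ordered array stores each row contiguously, so traversing by rows walks memory sequentially while traversing by columns strides through it.

```python
import time
import numpy as np

a = np.zeros((4000, 4000))   # C order: each row is contiguous in memory
a += np.arange(4000)         # fill every row with 0..3999

def sum_by_rows(m):
    # cache friendly: each slice is contiguous
    return sum(float(m[i, :].sum()) for i in range(m.shape[0]))

def sum_by_cols(m):
    # cache hostile: each slice strides across rows
    return sum(float(m[:, j].sum()) for j in range(m.shape[1]))

t0 = time.perf_counter(); by_rows = sum_by_rows(a); t1 = time.perf_counter()
by_cols = sum_by_cols(a);                            t2 = time.perf_counter()
print(by_rows == by_cols, t1 - t0, t2 - t1)  # same total, different time
```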
Not only do you need to pay the electricity bill, but waiting can also cost money. You can throw twice as much hardware at the problem, but $700M can hardly be considered pocket change for all but the biggest companies - in such cases the programmers are the cheaper option, and if rewriting the program in a new language means even a 'small' speedup, they might consider it. Also, speed might mean better UX. Many reviews of mobile phone OSes state which one is 'snappier', and while it can be achieved by 'tricks', it is certainly an area of study. Also you want to access your data faster and quickly do what you need. Sometimes it means you can do more - in games you have 0.017 s to do everything, and the faster you are the more candy you can put in. It is an interesting discussion. And we have a few things to look at here. The theoretical computer science - This is an evolving science, which means that as time goes on we will find new and better ways to solve problems, i.e. improved algorithms for searching and sorting, for instance. Larger communities / larger libraries - Because a lot of work has been done by other people, we can build on their work and use the algorithms they have already created and even coded. And these libraries will be updated with time, allowing us automatic access to more efficient programs/algorithms. Development - Now here we have a problem, I think. A lot of programmers are not computer scientists, so they write code to solve business problems, not technical / theoretical problems, and would be as happy using a bubble sort as a quicksort, for instance. And here the speed of hardware is allowing bad programmers to get away with using bad algorithms and bad coding practices. Memory, CPU speed, storage space - these things are no longer major concerns, and every few months things are getting larger, faster and cheaper. I mean, look at the new cellphones. They are now more advanced than the mainframe computers / servers from the 1970s / 80s.
More storage, more processing power, faster memory. UI & DATA - User interface / user experience and data are now considered more important than super-efficient code in most areas of development. So speed only becomes an issue when a user has to wait long. If we give the user a good look and feel and he gets good response from the application, he is happy. And if the business knows all data is stored safely and securely and they can retrieve it and manipulate it at any time, they do not care how much space it needs. So I would have to say it is not that efficient programmers are no longer important or needed; it is just that very few companies/users reward people for being super-efficient programmers, and because of the hardware being better we are getting away with being less efficient. But there are at least still people out there focusing on efficiency, and because of the community spirit everyone in time gains benefit from this. Some other angles on this interesting and deep question that emphasize the interdisciplinary and crosscutting aspects of the phenomenon. Dai quotes Wirth's law in his answer: Software is getting slower more rapidly than hardware becomes faster. There are interesting parallels of this idea to phenomena observed in economics. Note that economics has many deep connections with computer science, e.g. in scheduling, where scarce resources (say threads etc.) are allocated on request by "load-balancing" algorithms. Another example is what is called a producer-consumer queue. Also, auctions. Also e.g., List of eponymous laws, Wikipedia: Parkinson's law – "Work expands so as to fill the time available for its completion." Coined by C. Northcote Parkinson (1909–1993), who also coined its corollary, "Expenditure rises to meet income." In computers: Programs expand to fill all available memory.
There is a strong similarity also to Jevons' paradox, which was observed in the increase in energy use after the more efficient Watt steam engines began to replace the Newcomen design: use and proliferation of the engines increased. In economics, the Jevons paradox (/ˈdʒɛvənz/; sometimes Jevons effect) is the proposition that technological progress that increases the efficiency with which a resource is used tends to increase (rather than decrease) the rate of consumption of that resource. The analogy is that hardware is the resource and software is like the consumption of the resource (aka, supply vs demand). So software and hardware (and advances in each) exist somewhat in a tightly-coupled symbiotic feedback loop with each other, in a sense, coevolving. There are many complex and interrelated factors influencing this interplay. No, mostly while considering space complexity! Storage capacity of a normal computer is growing exponentially.
I see entropy as a number that gives you an idea of how random an outcome will be, based on the probability values of each of the possible outcomes in a situation. Let's start with a simple case. Suppose only a single outcome is possible; then there is only one value of $i$ ($=1$) and $p_{1}=1$. From the formula, the entropy is then zero: $$-p_{1} \log(p_{1}) = p_{1} \log \left(\frac{1}{p_{1}} \right) = 1 \cdot 0 = 0$$ This is cool! When the outcome will be the same every single time, the "randomness" is zero, and so the entropy does indeed correspond to a measure of randomness. Now, before moving to more complicated cases, let's look at the factors involved in the entropy formula. Let me rewrite the formula first as follows: $$ - \sum_{i} p_{i} \log(p_{i}) = \sum_{i} p_{i} \log \left(\frac{1}{p_{i}} \right)$$ If you plot $\log \left( \frac{1}{p} \right)$, you see that there is nothing really special about it; really any function of $p$ such that $f(1) = 0$ would have done the trick. Now, you might wonder, what if I have two possible outcomes, one that is nearly certain and one that is very unlikely; for example $p_{1} = 0.999$ and $p_{2} = 0.001$. This case is tricky! For the first outcome, we see that $p_{1} \log\left(\frac{1}{p_{1}} \right)$ is a number very close to zero. That first outcome is not too different from the single-outcome situation we looked at before. For the second outcome, $p_{2} = 0.001$, let's think about the limit of the product $p\log(\frac{1}{p})$ as $p \rightarrow 0$. Intuitively, we know that if we add an extremely unlikely event, such as the one with $p_{2} = 0.001$, the "randomness" situation should not really be that different from our original single-outcome process. And indeed, plotting $p\log(\frac{1}{p})$ shows that the product tends to $0$ as $p \rightarrow 0$ (the logarithm diverges far more slowly than $p$ vanishes). Beautiful! This means that an extremely unlikely event contributes nearly zero to the entropy of the system.
Extremely likely and extremely unlikely are similar in terms of their "randomness": they have pretty much none of it! Why the logarithm? At this point you might be wondering, what is so special about the logarithm? It does seem kind of an arbitrary choice. There certainly must be other functions of $p$ that have the same convergence properties as $p$ goes to $0$ and $p$ goes to $1$. So, I'll give you a situation to think about. Suppose you have a system where there are two equally likely choices $1$ and $2$, with probabilities $p_{1} = p_{2} = \frac{1}{2}$. That situation will have some entropy, let's call it $S_{2}$. Consider also a second system with an entropy $S_{3}$ where there are three equally likely choices $A$, $B$ and $C$, with probabilities $p_{A} = p_{B} = p_{C} = \frac{1}{3}$. It would be nice if the entropy were a function such that if I considered the union of the two independent systems, the resulting entropy of the global system would be additive, that is $$ S_{g} = S_{2} + S_{3} $$ In simpler words, it would be nice for our measure of "randomness" to be additive. Let's be explicit here and write down the full expression for $S_{g}$, assuming that the events from one system are completely independent from events in the other system. 
\begin{align}S_{g} = p_{1} p_{A} \log \left (\frac{1}{p_{1}p_{A}} \right) +p_{1}p_{B} \log \left(\frac{1}{p_{1}p_{B}} \right) + p_{1}p_{C} \log \left( \frac{1}{p_{1}p_{C}} \right) + \\p_{2}p_{A} \log \left( \frac{1}{p_{2}p_{A}} \right) + p_{2}p_{B} \log \left( \frac{1}{p_{2}p_{B}} \right) +p_{2}p_{C} \log \left( \frac{1}{p_{2}p_{C}} \right)\end{align}

The property of the logarithm that makes it a good choice for defining entropy is then more clear: $$ \log \left( \frac{1}{p_{1}p_{A}} \right) = \log \left( \frac{1}{p_1} \right) + \log \left( \frac{1}{p_A} \right)$$ Given this property, we can simplify $S_{g}$ as $$ S_{g} = p_{1}\log\left( \frac{1}{p_{1}} \right) (p_{A} + p_{B} + p_{C}) + p_{1} S_{3} + p_{2} \log \left( \frac{1}{p_{2}} \right) (p_{A} + p_{B} + p_{C}) + p_{2} S_{3} $$ $$ S_{g} = S_{2} (p_{A} + p_{B} + p_{C}) + S_{3} (p_{1} + p_{2}) $$ Since probabilities add up to $1$, this gives us the desired property: $$ S_{g} = S_{2} + S_{3} $$ This completes our motivation for why the formula for entropy is what it is!

Key takeaway

I will summarize by saying that the key point is that "randomness" is a hard thing to quantify. We can choose a measure for "randomness" (such as Shannon's entropy formula), and that choice is only informed by the properties that we want the measure to have. When you look at $$ S = - \sum_{i} p_{i} \log(p_{i}) $$ for the first time in your life you might think: where on earth did they pull this out from? But it turns out that it was a definition only informed by the properties that it holds. An informal enumeration of these properties is given below:

An extremely likely event should not contribute much to the randomness measure.
An extremely unlikely event should not contribute much to the randomness measure.
Randomness should be additive.
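The additivity claim can also be checked numerically for the two systems above (a quick sketch; for independent systems the joint outcome probabilities are products):

```python
import math

def entropy(ps):
    # S = sum_i p_i * log(1/p_i)
    return sum(p * math.log(1.0 / p) for p in ps)

S2 = entropy([1/2, 1/2])          # two equally likely outcomes
S3 = entropy([1/3, 1/3, 1/3])     # three equally likely outcomes

# Independent systems: the joint probabilities are products p_i * p_X
joint = [p * q for p in [1/2, 1/2] for q in [1/3, 1/3, 1/3]]
Sg = entropy(joint)

print(abs(Sg - (S2 + S3)) < 1e-9)  # True: the entropies add up
```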
Let $R$ be a commutative unital ring and $M$ an $R$-module. Then $M$ is projective iff $\operatorname{Hom}(M,-)$ is exact, injective iff $\operatorname{Hom}(-,M)$ is exact, and flat iff $M\otimes-$ is exact. Furthermore, $M$ is faithfully flat when every chain complex is exact iff the induced $M\otimes-$ chain complex is exact. Question 1: What are the modules with the property: every chain complex is exact iff the induced $\operatorname{Hom}(-,M)$ chain complex is exact; every chain complex is exact iff the induced $\operatorname{Hom}(M,-)$ chain complex is exact. Is there a notion faithfully projective/injective, and does it coincide with projective/injective? Question 2: Why is $M$ faithfully flat precisely when $(\ast)$ every map $A\to B$ is injective iff $A\!\otimes\!M\to B\!\otimes\!M$ is injective? I know that $-\!\otimes\!M$ is right exact, so it preserves epimorphisms, but if we assume $(\ast)$, how does $A\!\otimes\!M\to B\!\otimes\!M$ surjective imply $A\to B$ surjective?
Find the radius of convergence of this series and study what happens on the boundary: $\sum_{n=1}^{\infty}\frac{z^n}{n}$ ($z\in \Bbb{C}$). I easily found that the radius of convergence is $\rho =1$, therefore the series doesn't converge absolutely for $|z|=\rho=1$, since $\sum|\frac{z^n}{n}|$ diverges in this case. Therefore I want to use a convergence criterion, Dedekind's or Dirichlet's, but my problem is that the partial sums of $z_n =z^n$ with $|z|=1$ are not bounded. Any hint?
Find all $\theta \in \Bigl (-\dfrac{\pi}{2},\dfrac{\pi}{2}\Bigr)$ satisfying: $$\sec ^{2} \theta(1-\tan \theta)(1+\tan \theta)+2^{\tan^{2}\theta}=0$$ I have tried a lot but couldn't crack this one. I could only bring it down to the following problem (Solving the following problem is equivalent to solving the above equation): Find all $t \in \mathbb R^{+}$ satisfying $$\begin{align} t^{2}=2^{t}+1 \tag{1}\end{align}$$ Any suggestions on how to solve either of the two problems? By plotting a rough graph, I could figure out that there are two such $t$'s satisfying $(1)$, but which ones? Thanks for the help.
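Not an answer, but a numeric look at the reduced equation $(1)$ may help: $t=3$ is an exact root, since $3^2 = 2^3 + 1$, and a simple bisection (a sketch, no special libraries) locates a second root near $t \approx 3.41$:

```python
# f(t) = 0 is equivalent to t^2 = 2^t + 1
def f(t):
    return t * t - 2.0**t - 1.0

def bisect(lo, hi, tol=1e-12):
    # assumes f(lo) and f(hi) have opposite signs
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

r1 = bisect(2.9, 3.1)   # f(2.9) < 0 < f(3.1)
r2 = bisect(3.2, 3.5)   # f(3.2) > 0 > f(3.5)
print(r1, r2)           # ≈ 3.0 and ≈ 3.41
```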
Product of Coprime Factors

Theorem

Let $a, b, c \in \Z$ such that $a$ and $b$ are coprime. Let both $a$ and $b$ be divisors of $c$. Then $a b$ is also a divisor of $c$. That is:

$a \perp b \land a \divides c \land b \divides c \implies a b \divides c$

Proof

We have:

$a \divides c \implies \exists r \in \Z: c = a r$
$b \divides c \implies \exists s \in \Z: c = b s$

So:

\begin{align} a &\perp b \\ \leadsto \ \ \exists m, n \in \Z: m a + n b &= 1 && \text{Integer Combination of Coprime Integers} \\ \leadsto \ \ c m a + c n b &= c \\ \leadsto \ \ b s m a + a r n b &= c \\ \leadsto \ \ a b \paren {s m + r n} &= c \\ \leadsto \ \ a b &\divides c \end{align}

$\blacksquare$

Sources

1978: Thomas A. Whitelaw: An Introduction to Abstract Algebra: Chapter $2$: Some Properties of $\Z$: Exercise $2.6$
1980: David M. Burton: Elementary Number Theory (revised ed.): Chapter $2$: Divisibility Theory in the Integers: $2.2$ The Greatest Common Divisor: Theorem $2 \text{-} 4$: Corollary $2$
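As a sanity check rather than a proof, the statement is easy to verify by brute force over small integers (a quick sketch):

```python
from math import gcd

# If gcd(a, b) = 1 and both a | c and b | c, then a*b | c.
for a in range(1, 25):
    for b in range(1, 25):
        if gcd(a, b) != 1:
            continue
        for c in range(1, 300):
            if c % a == 0 and c % b == 0:
                assert c % (a * b) == 0
print("verified for a, b < 25 and c < 300")
```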
Session 36 - Active Galactic Nuclei. Display session, Tuesday, June 09, Atlas Ballroom

We present initial results from an Owens Valley millimeter array survey of the molecular gas in ten nearby active galaxies. Our sample is drawn from those systems in whose nuclei Ho et al. (1997, ApJS, 112, 391) detect broad H$\alpha$ emission. We have so far mapped five Seyfert galaxies with $d < 20$ Mpc at arcsecond resolution in the CO($2\rightarrow 1$) rotational transition, and at lower resolution in the CO($1\rightarrow 0$) transition. We find these galaxies' nuclei harbor massive molecular gas structures, which are flattened and show strong velocity gradients along axes which are perpendicular to emergent ionization cones and/or radio jets. From the intensity ratios of the two CO lines, we deduce that the transitions have high excitation temperatures and are optically thin in at least some systems. We interpret the observed $\sim 400$ pc structures as molecular disks or rings which serve as reservoirs for the replenishment of molecular tori and accretion disks at smaller radii.
I'm trying to solve: $ y'' -4xy' + (4x^2 -1)y = -3e^{x^2} \sin(2x)$, which has the general form $y'' + P(x) y' + Q(x)y = R(x)$. I reduced it to normal form by using the substitution $y = u e^{-\frac{1}{2} \int P(x)\,dx}$: $u'' + I(x) u = S(x)$, where $I(x) = Q - \frac {P'}{2} - \frac {P^2}{4}$ and $S(x) = R(x) e^{\frac{1}{2} \int P(x)\, dx}$. I end up with: $ u'' + u = -3 \sin(2x)$.

I solve first for the complementary function of the homogeneous equation: $ u_{C.F.} = A \cos(x) + B \sin(x)$.

Then, to find a particular solution, I first decided to use variation of parameters: $u_{P.I.} = v_1y_1 + v_2y_2$, where $y_1$ and $y_2$ are the solutions to the homogeneous equation; I let $y_1 = \cos(x)$ and $y_2 = \sin(x)$. I compute the Wronskian of both functions since they are linearly independent solutions:

$ W(y_1(x),y_2(x)) = \left|\begin{matrix}y_1 & y_2 \\ y_1' & y_2' \end{matrix}\right| = \left|\begin{matrix}\cos(x) & \sin(x) \\ - \sin(x) & \cos(x) \end{matrix}\right| = 1$

$ (v_2)' = \frac {-y_2 S(x)}{W} = \frac {- \sin(x) (-3 \sin(2x))}{1} = 6 \sin^2(x) \cos(x)$

$ (v_1)' = \frac {y_1 S(x)}{W} = \frac { \cos(x) (-3 \sin(2x))}{1} = -6 \cos^2(x) \sin(x)$

Integrating $v_1'$ and $v_2'$ I get $v_1 = 2 \cos^3(x)$, $v_2 = 2\sin^3(x)$. Substituting in $u_{P.I.} = v_1y_1 + v_2y_2$: $u_{P.I.} = 2 [ \cos^4(x) + \sin^4(x)]$. So to get $y_{P.I.}$: $y_{P.I.} = u_{P.I.}\, e^{-\frac{1}{2} \int P(x)\,dx}$, therefore $y_{P.I.} = 2e^{x^2} [ \cos^4(x) + \sin^4(x) ]$ <------ This is a solution using variation of parameters.

Now when I try to do it using undetermined coefficients: $u'' + u = -3 \sin(2x)$. Let $u_{P.I.} = A \sin(2x) + B \cos(2x)$. Skipping some steps, we arrive at $A = 1$ and $ B = 0$. Then the solution is $u_{P.I.} = \sin(2x)$, therefore $y_{P.I.} = e^{x^2} \sin(2x)$ <------ These are two different particular solutions; where's the problem?
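Not an answer, just a numeric sanity check of the two candidates, using their second derivatives computed by hand: $u = \sin(2x)$ has $u'' = -4\sin(2x)$, and $u = 2[\cos^4 x + \sin^4 x] = 2 - \sin^2(2x)$ has $u'' = -8\cos(4x)$:

```python
import math

def residual(u, upp, x):
    # residual of u'' + u = -3 sin(2x); zero iff the candidate solves the ODE
    return upp(x) + u(x) + 3.0 * math.sin(2.0 * x)

# Candidate from undetermined coefficients: u = sin(2x)
r_uc = max(abs(residual(lambda x: math.sin(2*x),
                        lambda x: -4*math.sin(2*x), 0.1*k)) for k in range(1, 60))

# Candidate from the variation-of-parameters work: u = 2 - sin^2(2x)
r_vp = max(abs(residual(lambda x: 2 - math.sin(2*x)**2,
                        lambda x: -8*math.cos(4*x), 0.1*k)) for k in range(1, 60))

print(r_uc < 1e-9, r_vp < 1e-9)  # True False: only sin(2x) satisfies the ODE
```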
I am attempting to fit a model to a dataset where frequency (Hz) is the dependent variable. Using a generalized linear model based on a gamma distribution seems appropriate, since the values of the dependent variable are $0 \rightarrow \infty$, and I have confirmed that the observed values align with a gamma distribution using a qqplot. I am attempting to fit the model in R using glm; however, it is unclear to me what the estimate values returned by summary.glm refer to. The example from the documentation is provided for context:

    clotting <- data.frame(u = c(5, 10, 15, 20, 30, 40, 60, 80, 100),
                           lot1 = c(118, 58, 42, 35, 27, 25, 21, 19, 18))
    glm.clotting <- glm(lot1 ~ log(u), data = clotting, family = Gamma)
    summary(glm.clotting)

    Coefficients:
                  Estimate Std. Error t value Pr(>|t|)    
    (Intercept) -0.0165544  0.0009275  -17.85 4.28e-07 ***
    log(u)       0.0153431  0.0004150   36.98 2.75e-09 ***
    ---
    Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

I assume these refer to one of the shape parameters of the fitted gamma distribution, but I have not been able to find a clear explanation. Do these values refer to the gamma distribution average $\mu = k\theta = \frac{\alpha}{\beta}$? The rate parameter $\beta$? Or the scale parameter $\theta$? Also, how should these estimates be interpreted differently in the context of a continuous independent variable (as in the example) versus a discrete independent variable?
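For what it's worth, one way to sanity-check what scale those estimates live on: R's Gamma family uses the inverse link by default, so the linear predictor models $1/\mu$ rather than $\mu$ or a shape parameter. Plugging the reported coefficients back through the inverse link roughly reproduces the observed responses (a sketch in Python, standing in for the R computation):

```python
import math

# Estimates from summary(glm.clotting); with family = Gamma the default
# link in R is "inverse", so the fitted model is  1/mu = b0 + b1*log(u)
b0, b1 = -0.0165544, 0.0153431

u = [5, 10, 15, 20, 30, 40, 60, 80, 100]
lot1 = [118, 58, 42, 35, 27, 25, 21, 19, 18]

mu = [1.0 / (b0 + b1 * math.log(ui)) for ui in u]
print([round(m, 1) for m in mu])  # first ≈ 122.9, last ≈ 18.5: close to lot1
```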
To every rational complex curve $C \subset (\mathbf{C}^\times)^n$ we associate a rational tropical curve $\Gamma \subset \mathbf{R}^n$ so that the amoeba $\mathcal{A}(C) \subset \mathbf{R}^n$ of $C$ is...

Given $n \geq 2$ and $1<p<n$, we consider the critical $p$-Laplacian equation $\Delta_p u + u^{p^*-1}=0$, which corresponds to critical points of the Sobolev inequality. Exploiting the moving...

The supermeasure whose integral is the genus $g$ vacuum amplitude of superstring theory is potentially singular on the locus in the moduli space of supercurves where the corresponding even theta-...

We investigate the weak lensing corrections to the CMB polarization anisotropies. We concentrate on the effect of rotation and show that the rotation of polarisation is a true physical effect...

We consider irrational nilflows on any nilmanifold of step at least 2. We show that there exists a dense set of smooth time-changes such that any time-change in this class which is not...

An important non-perturbative effect in quantum physics is the energy gap of superconductors, which is exponentially small in the coupling constant. A natural question is whether this effect can...

We present a systematic procedure to extract the perturbative series for the ground state energy density in the Lieb-Liniger and Gaudin-Yang models, starting from the Bethe ansatz solution. This...

We explore consequences of the Averaged Null Energy Condition (ANEC) for scaling dimensions $\Delta$ of operators in four-dimensional $\mathcal{N}=1$ superconformal field theories. We show that in...

The background photon temperature $\bar T$ is one of the fundamental cosmological parameters. Despite its significance, $\bar T$ has never been allowed to vary in the data analysis, owing to the...

We develop a theoretical framework to describe the cosmological observables on the past light cone such as the luminosity distance, weak lensing, galaxy clustering, and the cosmic microwave...
Given $n \geq 3$, consider the critical elliptic equation $\Delta u + u^{2^*-1}=0$ in $\mathbb{R}^n$ with $u>0$. This equation corresponds to the Euler-Lagrange equation induced by the Sobolev embedding H^...

These notes give an introduction to the mathematical framework of the Batalin-Vilkovisky and Batalin-Fradkin-Vilkovisky formalisms. Some of the presented content was given as a mini course by...
Now showing items 1-10 of 26

Kaon femtoscopy in Pb-Pb collisions at $\sqrt{s_{\rm{NN}}}$ = 2.76 TeV (Elsevier, 2017-12-21) We present the results of three-dimensional femtoscopic analyses for charged and neutral kaons recorded by ALICE in Pb-Pb collisions at $\sqrt{s_{\rm{NN}}}$ = 2.76 TeV. Femtoscopy is used to measure the space-time ...

Anomalous evolution of the near-side jet peak shape in Pb-Pb collisions at $\sqrt{s_{\mathrm{NN}}}$ = 2.76 TeV (American Physical Society, 2017-09-08) The measurement of two-particle angular correlations is a powerful tool to study jet quenching in a $p_{\mathrm{T}}$ region inaccessible by direct jet identification. In these measurements pseudorapidity ($\Delta\eta$) and ...

Online data compression in the ALICE O$^2$ facility (IOP, 2017) The ALICE Collaboration and the ALICE O2 project have carried out detailed studies for a new online computing facility planned to be deployed for Run 3 of the Large Hadron Collider (LHC) at CERN. Some of the main aspects ...

Evolution of the longitudinal and azimuthal structure of the near-side peak in Pb–Pb collisions at $\sqrt{s_{\rm NN}}=2.76$ TeV (American Physical Society, 2017-09-08) In two-particle angular correlation measurements, jets give rise to a near-side peak, formed by particles associated to a higher $p_{\mathrm{T}}$ trigger particle. Measurements of these correlations as a function of ...

J/$\psi$ elliptic flow in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 TeV (American Physical Society, 2017-12-15) We report a precise measurement of the J/$\psi$ elliptic flow in Pb-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV with the ALICE detector at the LHC. The J/$\psi$ mesons are reconstructed at mid-rapidity ($|y| < 0.9$) ...
Enhanced production of multi-strange hadrons in high-multiplicity proton-proton collisions (Nature Publishing Group, 2017) At sufficiently high temperature and energy density, nuclear matter undergoes a transition to a phase in which quarks and gluons are not confined: the quark–gluon plasma (QGP). Such an exotic state of strongly interacting ...

K$^{*}(892)^{0}$ and $\phi(1020)$ meson production at high transverse momentum in pp and Pb-Pb collisions at $\sqrt{s_\mathrm{NN}}$ = 2.76 TeV (American Physical Society, 2017-06) The production of K$^{*}(892)^{0}$ and $\phi(1020)$ mesons in proton-proton (pp) and lead-lead (Pb-Pb) collisions at $\sqrt{s_\mathrm{NN}} =$ 2.76 TeV has been analyzed using a high luminosity data sample accumulated in ...

Production of $\Sigma(1385)^{\pm}$ and $\Xi(1530)^{0}$ in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV (Springer, 2017-06) The transverse momentum distributions of the strange and double-strange hyperon resonances ($\Sigma(1385)^{\pm}$, $\Xi(1530)^{0}$) produced in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV were measured in the rapidity ...

Charged–particle multiplicities in proton–proton collisions at $\sqrt{s}=$ 0.9 to 8 TeV, with ALICE at the LHC (Springer, 2017-01) The ALICE Collaboration has carried out a detailed study of pseudorapidity densities and multiplicity distributions of primary charged particles produced in proton-proton collisions, at $\sqrt{s} =$ 0.9, 2.36, 2.76, 7 and ...

Energy dependence of forward-rapidity J/$\psi$ and $\psi(2S)$ production in pp collisions at the LHC (Springer, 2017-06) We present ALICE results on transverse momentum ($p_{\rm T}$) and rapidity ($y$) differential production cross sections, mean transverse momentum and mean transverse momentum square of inclusive J/$\psi$ and $\psi(2S)$ at ...
CDS 140b, Spring 2014
Homework 2
R. Murray, D. MacMartin
Issued: 9 Feb 2014 (Wed)
Due: 16 Feb 2014 (Wed)

Note: In the upper left hand corner of the second page of your homework set, please put the number of hours that you spent on this homework set (including reading).

WARNING: UNDER CONSTRUCTION, DO NOT START

A planar pendulum (in the $x$-$z$ plane) of mass $m$ and length $\ell$ hangs from a support point that moves according to $x=a\cos (\omega t)$. Find the Lagrangian, the Hamiltonian, and write the first-order equations of motion for the pendulum.

Perko, Section 3.3, problem 5. Show that

<amsmath>\aligned \dot x &=y+y(x^2+y^2)\\ \dot y &=x-x(x^2+y^2)\endaligned</amsmath>

is a Hamiltonian system with $4H(x,y)=(x^2+y^2)^2-2(x^2-y^2)$. Show that $dH/dt=0$ along solution curves of this system and therefore that solution curves of this system are given by

<amsmath> (x^2+y^2)^2-2(x^2-y^2)=C</amsmath>

Show that the origin is a saddle for this system and that $(\pm 1,0)$ are centers for this system. (Note the symmetry with respect to the $x$-axis.) Sketch the two homoclinic orbits corresponding to $C=0$ and sketch the phase portrait for this system, noting the occurrence of a compound separatrix cycle.

Perko, Section 3.6, problem 4. Consider the stability of the Lagrange points (with some simplifying steps). With the mass of the sun as $1-\mu$ and planet as $\mu$, then in the rotating coordinate system the Hamiltonian is given by

<amsmath> H=\frac{(p_x+y)^2+(p_y-x)^2}{2}+\Omega(x,y)</amsmath>

where the potential function

<amsmath> \Omega(x,y)=\frac{x^2+y^2}{2}+\frac{1-\mu}{r_1}+\frac{\mu}{r_2}+\frac{\mu(1-\mu)}{2}</amsmath>

with $r_1=\sqrt{(x+\mu)^2+y^2}$ and $r_2=\sqrt{(x-1+\mu)^2+y^2}$. Show that the equilibrium points are given by the critical points of the (messy) function $\Omega$. (This leads to 5 solutions, for L1 through L5, with L1, L2, and L3 collinear, i.e., y=0.
If you are interested, the collinear solutions are not too difficult to solve for numerically for some $\mu$.) To explore the linearized dynamics it is sufficient to retain only quadratic terms in $H$ (why?). For the collinear Lagrange points this leads to

<amsmath> H=\frac{(p_x+y)^2+(p_y-x)^2}{2}-ax^2+by^2</amsmath>

for $a>0$ and $b>0$. E.g., for the L1 point in the Earth-Jupiter system, $a=9.892$ and $b=3.446$. Describe the linearized dynamics about this Lagrange point; are periodic orbits stable?
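For the Perko 3.3 problem, the claim that $dH/dt=0$ along solutions can be spot-checked numerically before proving it (a sketch using central finite differences for $H_x$ and $H_y$):

```python
def H(x, y):
    # 4H(x, y) = (x^2 + y^2)^2 - 2(x^2 - y^2)
    return ((x*x + y*y)**2 - 2*(x*x - y*y)) / 4.0

def flow(x, y):
    # xdot = y + y(x^2 + y^2),  ydot = x - x(x^2 + y^2)
    return y + y*(x*x + y*y), x - x*(x*x + y*y)

h = 1e-6
worst = 0.0
for (x, y) in [(0.3, 0.7), (1.2, -0.4), (-0.8, 0.5), (1.0, 0.0)]:
    Hx = (H(x + h, y) - H(x - h, y)) / (2*h)
    Hy = (H(x, y + h) - H(x, y - h)) / (2*h)
    xdot, ydot = flow(x, y)
    worst = max(worst, abs(Hx*xdot + Hy*ydot))   # dH/dt = H_x*xdot + H_y*ydot
print(worst < 1e-6)  # True: H is (numerically) constant along the flow
```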
Find the sum of the roots, real and non-real, of the equation $x^{2001}+\left(\frac 12-x\right)^{2001}=0$, given that there are no multiple roots. The first solution (on AoPS) involves the use of Vieta's formulas and is quite clear. The third solution states the following: Note that if $r$ is a root, then $\frac{1}{2}-r$ is a root and they sum up to $\frac{1}{2}.$ We make the substitution $y=x-\frac{1}{4}$ so $(\frac{1}{4}+y)^{2001}+(\frac{1}{4}-y)^{2001}=0.$ Expanding gives $2\cdot\frac{1}{4}\cdot\binom{2001}{1}y^{2000}-0y^{1999}+\cdots$ so by Vieta, the sum of the roots of $y$ is 0. Since $x$ has a degree of 2000, then $x$ has 2000 roots so the sum of the roots is $2000(\sum_{n=1}^{2000} y+\frac{1}{4})=2000(0+\frac{1}{4})=\boxed{500}.$

I do not understand two things in the above solution:

"Note that if $r$ is a root, then $\frac{1}{2}-r$ is a root and they sum up to $\frac{1}{2}.$"
a) Here what is being referred to as "they"? (Shouldn't $\frac{1}{2}-r$ be a factor and not a root?) Answered by the third comment
b) Why is the sum 1/2? Also answered by the third comment
c) Why is $\frac{1}{2}-r$ a root? Answered by the sixth comment

How is the final expression arrived upon to find the sum of all 2000 roots? (Answered by @YvesDaoust)

The second solution is more mystifying (possibly because it is similar to the one above): We find that the given equation has a $2000^{\text{th}}$ degree polynomial. Note that there are no multiple roots. Thus, if $\frac{1}{2} - x$ is a root, $x$ is also a root. Thus, we pair up $1000$ pairs of roots that sum to $\frac{1}{2}$ to get a sum of $\boxed{500}$.

Again, why is $\frac{1}{2} - x$ a root? By "$x$ is also a root" does it mean $x$ representing the set of all roots? Answered by the sixth comment
Why does the pairing up occur? Why is the sum of each pair 1/2? (Answered by @YvesDaoust)

I wonder if it could be solved as follows: Let the roots be $P_1,P_2,...P_{2000}$.
The polynomial can be expressed as a product of factors as follows: $\frac{2001}{2}(x-P_1)(x-P_2)\cdots(x-P_{2000}) = 0$. The above expression is the same as $x^{2001}+\left(\frac 12-x\right)^{2001}=0$. Thus, $x^{2001}+\left(\frac 12-x\right)^{2001} = \frac{2001}{2}(x-P_1)(x-P_2)\cdots(x-P_{2000})$. Here the coefficient of $x^{1999}$ on the RHS should represent $\sum\limits_{i=1}^{2000}P_i \times (-1/2)$. On the LHS the corresponding term would be the term with $x^{1999}$, and thus the coefficient of this term on the LHS should also be the required sum. On the LHS the coefficient of the $x^{1999}$ term is $-\binom{2001}{2}\cdot(1/2)^2$, which represents the sum of the roots. [Picked up this approach here, but I don't see how this would work] (https://youtu.be/S6FRtmDUl-s?t=2806)

In this solution I find some errors(?):

Are there any inconsistencies in the reasoning? Wouldn't the sum of roots differ from the binomial coefficient since the expression involves unique values of $P_i$ (no multiple roots)? The answers do not match, which seems to suggest so. Is there a way of arriving at the answer without using Vieta's formula and by expressing the polynomial as a product of factors and then using binomial coefficients as attempted above?
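For what it's worth, the Vieta computation can be checked with exact arithmetic: the $x^{2001}$ terms cancel, and the two leading surviving coefficients come from the binomial expansion of $(\frac 12 - x)^{2001}$ (a sketch):

```python
from fractions import Fraction
from math import comb

# x^2001 + (1/2 - x)^2001: the x^2001 terms cancel (coefficients 1 and -1).
# Surviving leading coefficients, from C(2001, k) * (1/2)^k * (-x)^(2001-k):
a2000 = Fraction(comb(2001, 1), 2)    # coeff of x^2000:  C(2001,1)*(1/2)^1
a1999 = -Fraction(comb(2001, 2), 4)   # coeff of x^1999: -C(2001,2)*(1/2)^2

print(-a1999 / a2000)  # 500: by Vieta, sum of roots = -a1999/a2000
```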
A fairly random example of a great pedagogical technique for mathematics - examples first.

For example, suppose I was trying to explain to you what a ring is in algebra. I could start by telling you that a ring is a set $R$ together with two binary operations $+$ and $\cdot$ such that:

$r+s=s+r$ and $(r+s)+t=r+(s+t)$ for all $r,s,t\in R$
There exists $0\in R$ such that $r+0=r$ for all $r\in R$
etc. ...

However, you are much more likely to understand what a ring is if I give you some concrete examples. For example, I could say, 'Rings are like $\mathbb{Z}$. If we consider the integers, then we have two important operations $+$ and $\times$, which have the following interesting properties...' Then I could point out other interesting sets, like the integers $\mod n$, which also have two operations that behave in a similar way. Only then would I give you the formal axiomatic definition of a ring. The next step would be showing that many interesting things that are true for the integers are true for all rings, and that many other interesting things about the integers (like unique factorization) are true for rings with certain properties. Then you would gain some understanding of what rings are and why they are worth introducing.

Similarly, I don't think anyone would find the following definition at all meaningful if they hadn't studied topology before: A topological space is a set $X$ together with a collection $\tau$ of subsets of $X$ (called the open sets) such that:

$\varnothing,X\in\tau$.
$\tau$ is closed under taking unions: for all (possibly infinite) collections of sets $(U_\alpha)_{\alpha\in A}$ with $U_\alpha\in\tau$, the union $\bigcup_{\alpha\in A}U_\alpha$ is in $\tau$.
$\tau$ is closed under taking finite intersections: for all $A, B\in\tau$, $A\cap B\in\tau$ (and therefore all intersections of finitely many members of $\tau$ are contained in $\tau$).
Much better is to start off by introducing the more concrete idea of metric spaces (by first showing that a lot of concepts in real analysis, like convergence and continuity, can be expressed entirely in terms of the distance between points, and showing that more abstract ideas of distance can be useful) and then showing that the definition of a continuous function between metric spaces can be expressed entirely in terms of the open sets in that space, and then introducing topology from there. Having given you some examples, I'll now state the general principle: if you want to explain some mathematics to someone, always start by telling them some motivating examples. This serves two purposes. Firstly, it shows them why they should care about the idea you're introducing, since all mathematical concepts were originally created because they were interesting in some way. Secondly, it helps them to understand the ideas that you're telling them about, since they have some concrete examples to link them to in their mind. Of course, that's just one useful pedagogical technique, and it's not necessarily useful in all contexts. But it is often very useful. For a much better exposition of this idea, see this blog post by Timothy Gowers.
Open set in a topological space

An element of the topology (cf. Topological structure (topology)) of this space. More specifically, let the topology $\tau$ of a topological space $(X, \tau)$ be defined as a system $\tau$ of subsets of the set $X$ such that:

$X\in\tau$, $\emptyset\in\tau$;
if $O_i\in\tau$, where $i=1,2$, then $O_1\cap O_2\in\tau$;
if $O_{\alpha}\in\tau$, where $\alpha\in\mathfrak{A}$, then $\bigcup\{O_{\alpha} : \alpha\in\mathfrak{A} \}\in\tau$.

The open sets in the space $(X, \tau)$ are then the elements of the topology $\tau$ and only them.

References

[a1] R. Engelking, "General topology", Heldermann (1989)

How to Cite This Entry: Open set. Encyclopedia of Mathematics. URL: http://www.encyclopediaofmath.org/index.php?title=Open_set&oldid=29690
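For a finite set the three conditions can be checked mechanically; a small sketch with a hypothetical candidate topology on $X=\{1,2,3\}$:

```python
from itertools import combinations

X = frozenset({1, 2, 3})
# Candidate topology (a chain of sets, so closed under unions and intersections)
tau = {frozenset(), frozenset({1}), frozenset({1, 2}), X}

assert X in tau and frozenset() in tau          # condition 1
for A in tau:
    for B in tau:
        assert A & B in tau                     # condition 2 (pairwise intersections)
for r in range(1, len(tau) + 1):                # condition 3: for finite tau it
    for family in combinations(tau, r):         # suffices to check the union of
        assert frozenset().union(*family) in tau  # every subfamily
print("tau is a topology on X")
```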
BERTRAND-B CURVES IN THE THREE DIMENSIONAL SPHERE

Abstract

We define a Bertrand-B curve in a Riemannian manifold $M$ such that there exists an isometry $\phi$ of $M$, that is, $\left( \phi \circ \beta \right)(s)=X\left( s,t(s)\right)$, and the binormal vector of another curve $\beta$ is the parallel vector of the binormal vector of $\alpha$ at corresponding points. We obtain the conditions for the existence of a Bertrand-B curve in the cases $E^3$, $S^3$ and $H^3$ of $M$. The first of our main results is that the curve $\alpha$ in $E^3$ is a Bertrand-B curve if and only if it is planar. Second, we prove that the curve $\alpha$ with the curvatures $\epsilon_{1},\epsilon_{2}$ in $S^3$ is a Bertrand-B curve if and only if it satisfies $\epsilon_{1}^{2}+\epsilon_{2}^{2}=1$. Finally, we state that there does not exist a Bertrand-B curve in $H^3$.
DOI: https://doi.org/10.22190/FUMI1902261Y

ISSN 0352-9665 (Print)
Revisiting the Work-Energy Theorem The work-energy theorem discussed in the previous section was derived from Newton's second law, which carries with it a reference to the center of mass of the object on which the force is acting. So technically, the velocity and displacement that appear in the work-energy theorem are the velocity and displacement of the center of mass, which would suggest altering Equation 4.1.4 to: \[ \Delta \left( \frac{1}{2} mv_{cm}^2 \right) = \int \limits_A^B \overrightarrow F_{net} \cdot \overrightarrow {dl}_{cm} \] While accurate, this introduces a lot of cumbersome subscripts, which are entirely unnecessary for the kinds of physical situations we had in mind – a force acting on a solid object to accelerate every part of the object the same as its center of mass. But this does suggest an interesting extension of the idea: What if the force acts on one part of a system that includes multiple objects? Indeed, even a solid object is technically comprised of lots of particles, and many forces that act on such an object are only exerted on a fraction of the particles. We now explore this idea. An Instructive Model – A System of Two Particles Let's explore the work-energy theorem in the context of a system of two particles of differing masses. For simplicity, we'll keep everything in one dimension – the particles can only move along the \(x\)-axis, and the force that does the work can only act parallel to the \(x\)-axis. The thing to remember about our two-particle system is that we will be watching how the system's motion evolves, regardless of what happens to the individual particles. One can imagine cloaking the details of the particle motions and just watching the motion of the center of mass of the conglomerate. For the two-particle system shown in Figure 4.4.1, the center of mass is closer to \(m_1\) than \(m_2\), which means that \(m_1>m_2\). 
In terms of the positions of the two particles, the center of mass location is found using Equation 4.2.1:

\[x_{cm} = \dfrac{m_1 x_1 +m_2 x_2}{m_1 +m_2} \]

Now let's apply a force on this system (which we will assume starts from rest) from the left, which means it is exerted only on \(m_1\), but as far as we are concerned, it is exerted on the system (we can't see \(m_1\)).

Figure 4.4.2 – Work Performed on a System of Two Particles

After \(m_1\) starts moving, the velocity of the center of mass is:

\[v_{cm} = \dfrac{m_1 v_1 +m_2 v_2}{m_1 +m_2} = \left(\dfrac{m_1}{m_1 +m_2}\right) v_1 \]

We can combine this force with the displacement of the center of mass to find the work done on the system (\(m_2\) doesn't displace at all, so \(\Delta x_2 = 0\)):

\[W_{on\;system} = F \, \Delta x_{cm} = F \, \dfrac{m_1 \Delta x_1 +m_2 \Delta x_2}{m_1 +m_2} = \left(\dfrac{m_1}{m_1 +m_2}\right) F \, \Delta x_1 \]

We want to check whether this work equals what we measure for the change of kinetic energy of the system (which starts at zero), so we calculate the final kinetic energy using the total mass of the system and the speed of the center of mass from Equation 4.4.3:

\[{\Delta KE}_{system} = \frac{1}{2}\left(m_1+m_2\right)v_{cm}^2 = \frac{1}{2}\left(m_1+m_2\right)\left[\left(\dfrac{m_1}{m_1 +m_2}\right) v_1\right]^2 = \left(\dfrac{m_1}{m_1 +m_2}\right)\left[\frac{1}{2}m_1v_1^2\right] \]

\[W_{on\;system}={\Delta KE}_{system} \;\;\; \iff \;\;\; F \Delta x_1=\frac{1}{2}m_1v_1^2 \]

But we recognize the equation as the work-energy theorem applied to \(m_1\), so we have demonstrated that the work-energy theorem is equally applicable to systems of particles as to individual ones.

Mechanical and Internal Energy

The reader may be puzzled about why the same force acting on the same system appears to transfer two different amounts of energy, depending upon one's perspective. From the particle point of view, the energy transferred to the two-particle system is \(\frac{1}{2}m_1v_1^2\) to one particle and zero to the other, for a total energy transfer of \(\frac{1}{2}m_1v_1^2\).
From the view of someone looking at the system as a whole from outside, the system gains that same amount of energy multiplied by the fraction \(\frac{m_1}{m_1+m_2}\). It clearly can't be both, so which amount of energy does the system really get? If we imagine a system as a closed box with a bunch of particles in it, the box has an energy equal to the sum of the kinetic energies of the particles. If all of the particles happen to be moving in the same direction at the same speed, then the box must also be moving, and the kinetic energy of the system equals the sum of the kinetic energies of the particles. But suppose while all the particles move at the same speed, half are going in the opposite direction as the other half. In this case, the center of mass remains stationary, and the kinetic energy of the box is zero (one-half the total mass times the square of the center of mass velocity). Clearly the energy in the system is not zero, but from our outside-the-box perspective, we are unable to witness it directly. The kinetic energy of the system as a whole is what we have been referring to as "mechanical" in nature. The remaining energy that is hidden to us due to individual motions of the particles being concealed within the box we refer to as internal energy. To see how this internal energy is defined, let us return again to the two-particle model above. We already have the (mechanical) kinetic energy of the system, given by Equation 4.4.5. Now let's place ourselves within the system by changing reference frames to the rest frame of the system. That is, suppose we are moving along with the center of mass of the system, and measure the total kinetic energy of the two particles. To make this change of frames, we use the method described back in Section 1.8. The velocity of each particle in the new frame is the velocity vector in the "laboratory" frame, minus the velocity vector of the center of mass.
The kinetic energy of the two particles in this frame is: \[KE_{internal} =\left(\dfrac{m_2}{m_1+m_2}\right) \left[\frac{1}{2}m_1v_1^2\right] \] Comparing this result with that for the system's kinetic energy (Equation 4.4.5), we see that the sum of the mechanical energy given to the system and the internal energy given to the system is indeed the total energy given to the system. Put mathematically: \[W_{external} = \Delta KE_{mechanical} + \Delta KE_{internal} \] It is important to note that this is not a modification of the work-energy theorem. The right hand side of this equation still equals the total change in kinetic energy of the system. It just repackages it to accommodate what we can readily see (the mechanical energy) and what we cannot (the internal energy). We can once again take the steps we outlined previously to construct our energy conservation models.

Alert

Use of the word "system" in Section 3.4 is subtly different from how the word is used here. Here it refers to a collection of particles within a single object, allowing us to distinguish mechanical energy from internal energy for that object. Our previous use of the word "system" referred to a collection of objects. What we refer to as "external" work here could be work done between objects that are within the same collection-of-objects system, and should not be confused with work that could be done from outside the system of objects. If this external work between objects doesn't result in any internal energy created within the objects, then the force between the objects is conservative, and can be expressed as a potential energy. It can be confusing to keep these different system definitions straight, and it might help to remember our current discussion as "systems of particles," and the previous discussion as "systems of objects," with objects being collections of particles.

Notice how important the model we use is to the definition of what energy is mechanical and what energy is internal.
If our model is constructed to take into account the movements of all the particles, then all of the energy is mechanical. But as soon as we have to lump together particles into a system (whether by choice or out of necessity), we define this division of energy types. No matter how we define our system and do our accounting, however, the total energy is still conserved. Notice also that this doesn't just apply to internal kinetic energy. If the particles within the system interact with each other through some internal force, then the potential energy that results goes into the accounting of the internal energy. In our two-particle model, we might imagine a spring connecting the two particles that is compressed when the force is applied. Even in this case, the particles get closer together, which means that the work done on \(m_1\) is greater than the work done on the system as a whole (the center of mass doesn't move as far as \(m_1\)), resulting in some of the energy going internal. In this case, the internal energy is manifested by the two particles vibrating back-and-forth as the center of mass of the system moves along at a steady speed. Demystifying Non-Conservative Forces and Thermal Energy When we discussed non-conservative forces and how they lead to thermal energy, the mechanism by which this occurred was rather mysterious. Perhaps now with the insight we have into work done by a force resulting in both mechanical energy change and internal energy change, we can get a better sense of how all this plays out. First of all, it should be clear that thermal energy is a form of internal energy. The only additional property thermal energy requires is that it involves a random distribution among the particles in the system. With this criterion, one can hardly consider the internal energy of the two-particle example above to be "thermal," while it's clear that we have no choice but to treat the shared internal energy of trillions of particles in that manner. 
Still, there is nothing fundamentally different between these two cases. What makes thermal energy so interesting is that while we can't "see inside the box" to follow the intricate details of the internal energy, we do have a way to measure it – through the temperature. We see from the example above that in order for work done on a system to contribute to its internal energy, the force acting on the system must accelerate various particles in the system differently. In our two-particle example, internal energy arose because the force acted on only one of the two particles. It is nevertheless possible for work done on a system to go purely into mechanical energy (i.e. no thermal or other internal energy created) in one of two ways:

1. The system of particles is a solid, rigid object, so that any force on one part of the system accelerates every particle in the system in precisely the same way. (We will see an important exception to this in Chapter 5.)
2. The force applied to the system acts on every particle in proportion to its mass, so that even though the particles are not rigidly bound to each other, they all accelerate the same.

For the vast majority of situations in classical mechanics, we resort to the first of these criteria in cases of conservative forces: A solid mass is pushed by a spring; a solid magnet is repulsed by another magnet, and so on. It is possible, however, to avoid a change in internal energy for a system of particles that are not rigidly held together, such as gases and liquids, if the second criterion is met. The latter criterion is clearly satisfied by gravity. Sure enough, we know that work done by gravity changes only the mechanical energy of a system, and not its thermal energy (it is "conservative").

Digression: Gravitation Contributes to Thermal Energy

It is well-known that the moon of Jupiter named "Io" exhibits extensive volcanic activity.
It is believed that the source of the thermal energy is an imbalance in the gravitational forces on different parts of the moon. This imbalance comes from Io's gravitational interaction with its sister moons Europa, Ganymede, and Callisto, along with its primary gravitational interaction with Jupiter.

Figure: Io, with two plumes erupting from its surface. (Image: NASA)

Forces other than gravity that act on a gas, however, will have an effect on the internal energy of the system. For example, if we compress a gas in a confined space, the piston doing the compressing only does work on the particles with which it comes in contact. After the piston is done moving, the center of mass of the gas comes back to rest, which means the piston added nothing to the gas system's mechanical energy. But clearly work was done, and this goes into the internal energy of the gas, the change of which we can measure with an increase in temperature. Consider next the case of kinetic friction. We know that the mechanism for this force involves microscopic interactions of irregularities in the two surfaces involved. This means that only the particles at the surfaces are exerting forces on each other – all the particles in the objects that are sliding are not involved equally. We cannot resort to saying that the objects involved are systems of particles held rigidly together, or they would stop moving immediately when surface irregularities interacted. These irregularities must be capable of some deformation for any sliding to occur. For static friction, the particles can be considered to be held rigidly in place, and therefore no thermal energy is generated from static friction as there is with kinetic friction.

Kinetic Energy Distribution Within a System

Let's return once again to an example we looked at in the previous section (Figure 4.3.1), and ask a new question about it (the example has been simplified slightly by giving one block exactly twice the mass of the second block).
The integral \[ \int_{-\infty}^{\infty} \mathrm{d}x \exp\left(-\frac{(x-\mu)^4}{2}\right) \] was given. Using RcppNumerical is straightforward. One defines a class that extends Numer::Func for the function and an interface function that calls Numer::integrate on it:

// [[Rcpp::depends(RcppEigen)]]
// [[Rcpp::depends(RcppNumerical)]]
#include <RcppNumerical.h>

class exp4: public Numer::Func {
private:
    double mean;
public:
    exp4(double mean_) : mean(mean_) {}

    double operator()(const double& x) const {
        return exp(-pow(x - mean, 4) / 2);
    }
};

// [[Rcpp::export]]
Rcpp::NumericVector integrate_exp4(const double &mean, const double &lower, const double &upper) {
    exp4 function(mean);
    double err_est;
    int err_code;
    const double result = Numer::integrate(function, lower, upper, err_est, err_code);
    return Rcpp::NumericVector::create(Rcpp::Named("result") = result,
                                       Rcpp::Named("error") = err_est);
}

This works fine for finite ranges:

integrate_exp4(4, 0, 4)
##       result        error
## 1.077900e+00 9.252237e-08

However, it produces NA for infinite ones:

integrate_exp4(4, -Inf, Inf)
## result  error
##    NaN    NaN

This is disappointing, since base R's integrate() handles this without problems:

exp4 <- function(x, mean) exp(-(x - mean)^4 / 2)
integrate(exp4, 0, 4, mean = 4)
## 1.0779 with absolute error < 1.3e-07
integrate(exp4, -Inf, Inf, mean = 4)
## 2.155801 with absolute error < 7.9e-06

In this particular case the problem can be easily solved in two different ways.
First, the integral can be expressed in terms of the Gamma function: \[ \int_{-\infty}^{\infty} \mathrm{d}x \exp\left(-\frac{(x-\mu)^4}{2}\right) = 2^{-\frac{3}{4}} \Gamma\left(\frac{1}{4}\right) \approx 2.155801 \] Second, the integrand is almost zero almost everywhere. It is therefore sufficient to integrate over a small region around mean to get a reasonable approximation for the integral over the infinite range:

integrate_exp4(4, 1, 7)
##       result        error
## 2.155801e+00 9.926448e-13

However, the trick to approximate the integral over an infinite range with an integral over a (possibly large) finite range does not work for functions that approach zero more slowly. The help page for integrate() has a nice example for this effect:

## a slowly-convergent integral
integrand <- function(x) { 1 / ((x + 1) * sqrt(x)) }
integrate(integrand, lower = 0, upper = Inf)
## 3.141593 with absolute error < 2.7e-05

## don't do this if you really want the integral from 0 to Inf
integrate(integrand, lower = 0, upper = 10)
## 2.529038 with absolute error < 3e-04
integrate(integrand, lower = 0, upper = 100000)
## 3.135268 with absolute error < 4.2e-07
integrate(integrand, lower = 0, upper = 1000000, stop.on.error = FALSE)
## failed with message 'the integral is probably divergent'

How does integrate() handle the infinite range and can we replicate this in Rcpp? The help page states:

If one or both limits are infinite, the infinite range is mapped onto a finite interval.

This is in fact done by a different function from R's C-API: Rdqagi() instead of Rdqags(). In principle one could call Rdqagi() via Rcpp, but this is not straightforward. Fortunately, there are at least two other solutions.
The GNU Scientific Library provides a function to integrate over the infinite interval \((-\infty, \infty)\), which can be used via the RcppGSL package:

// [[Rcpp::depends(RcppGSL)]]
#include <RcppGSL.h>
#include <gsl/gsl_integration.h>

double exp4(double x, void * params) {
    double mean = *(double *) params;
    return exp(-pow(x - mean, 4) / 2);
}

// [[Rcpp::export]]
Rcpp::NumericVector normalize_exp4_gsl(double &mean) {
    gsl_integration_workspace *w = gsl_integration_workspace_alloc(1000);
    double result, error;
    gsl_function F;
    F.function = &exp4;
    F.params = &mean;
    gsl_integration_qagi(&F, 0, 1e-7, 1000, w, &result, &error);
    gsl_integration_workspace_free(w);
    return Rcpp::NumericVector::create(Rcpp::Named("result") = result,
                                       Rcpp::Named("error") = error);
}

normalize_exp4_gsl(4)
##       result        error
## 2.155801e+00 3.718126e-08

Alternatively, one can apply the transformation used by GSL (and probably R) also in conjunction with RcppNumerical. To do so, one has to substitute \(x = (1-t)/t\) resulting in \[ \int_{-\infty}^{\infty} \mathrm{d}x f(x) = \int_0^1 \mathrm{d}t \frac{f((1-t)/t) + f(-(1-t)/t)}{t^2} \] Now one could write the code for the transformed function directly, but it is of course nicer to have a general solution, i.e.
use a class template that can transform any function in the desired fashion:

// [[Rcpp::depends(RcppEigen)]]
// [[Rcpp::depends(RcppNumerical)]]
#include <RcppNumerical.h>

class exp4: public Numer::Func {
private:
    double mean;
public:
    exp4(double mean_) : mean(mean_) {}

    double operator()(const double& x) const {
        return exp(-pow(x - mean, 4) / 2);
    }
};

// [[Rcpp::plugins(cpp11)]]
template <class T>
class trans_func: public T {
public:
    using T::T;

    double operator()(const double& t) const {
        double x = (1 - t) / t;
        return (T::operator()(x) + T::operator()(-x)) / pow(t, 2);
    }
};

// [[Rcpp::export]]
Rcpp::NumericVector normalize_exp4(const double &mean) {
    trans_func<exp4> f(mean);
    double err_est;
    int err_code;
    const double result = Numer::integrate(f, 0, 1, err_est, err_code);
    return Rcpp::NumericVector::create(Rcpp::Named("result") = result,
                                       Rcpp::Named("error") = err_est);
}

normalize_exp4(4)
##       result        error
## 2.155801e+00 1.439771e-06

Note that the exp4 class is identical to the one from the initial example. This means one can use the same class to calculate integrals over a finite range and, after transformation, over an infinite range.
In the derivation of the Miller-Rabin Primality (or actually, "probably composite") test, Wikipedia http://en.wikipedia.org/wiki/Miller%E2%80%93Rabin_primality_test as well as other sites (including several Stack Exchange answers, as a lot of people have asked about this test) have a couple of small details that I am a bit stumped on. You start with an odd prime p, so the exponent p-1 in Fermat's Little Theorem (FLT) is even; after picking a < p, you express the a^(p-1) in FLT as a^[(2^s)d] for d odd, so sloppily using "=" as "equivalent",

a^[(2^s)d] = 1 mod p

Taking square roots, we get

a^[(2^(s-1))d] = +/- 1 mod p

If s-1 > 0, we can take the +1 mod p and do it again, etc. until we run out of factors of 2. So far, so good. But here is my question. In the last step, Wiki (and everyone else) gets a^[(2^r)d] = -1 mod p for SOME r, -1 < r < s, OR a^d = 1 mod p. But as far as I can see, a^[(2^r)d] = +/- 1 for ALL -1 < r < s. Why the exclusions?

For any prime $p$, it is true that $x^2 \equiv +1\pmod{p} \implies x \equiv \pm 1 \pmod{p}$ [*]. However it is not true that $x^2 \equiv -1\pmod{p} \implies x \equiv \pm 1 \pmod{p}$. If you know that $a^{2^{r+1}d} \equiv +1\pmod{p}$, then by [*] you get $a^{2^rd} \equiv \pm 1 \pmod{p}$. But, you can't say whether $a^{2^rd} \equiv -1 \pmod{p}$ or $a^{2^rd} \equiv +1 \pmod{p}$. If it turns out that $a^{2^rd} \equiv -1 \pmod{p}$, then you can't use [*] to say anything about $a^{2^{r-1}d} \pmod{p}$, and so, you have to stop the process at $a^{2^rd} \equiv -1 \pmod{p}$. However, if it is the case that $a^{2^rd} \equiv +1 \pmod{p}$, then you can use [*] to get that $a^{2^{r-1}d} \equiv \pm1 \pmod{p}$, and the process continues on. If we stopped the process at some $r$, $0 \le r \le s-1$, then we got that $a^{2^rd} \equiv -1 \pmod{p}$. If this process continued on as long as it could, then at $r = 0$ we got $a^{d} \equiv +1 \pmod{p}$, and we stopped there since $d$ is odd.
This is why you can conclude that either $a^{2^rd} \equiv -1 \pmod{p}$ for some $r$, $0 \le r \le s-1$ or $a^d \equiv +1\pmod{p}$.
This is a cross-post of someone else's question. I am cross-posting this question from MSE since it hasn't received any answers. In the paper Natural Operations on Differential Forms, the author R. Palais shows that the exterior derivative $d$ is characterized as the unique "natural" linear map from $\Phi^p$ to $\Phi^{p+1}$ (Palais' $\Phi^p$ is what is perhaps more commonly written as $\Omega^p$, and "commutes with all diffeomorphisms", I believe, means $f^*(d\omega) = d(f^*\omega)$):

the exterior derivative on $p$-forms is determined to within a scalar factor by the condition that it be a linear mapping into $p+1$ forms which commutes with all diffeomorphisms.

I've tried to read the proof in the paper, but I'm struggling to follow the details and missing a sense of the big picture of the proof. I'm interested in Palais' claim because this characterization is the most compelling one I have seen $-$ it seems far more appropriate as an axiomatic definition of $d$ than the definitions found in many textbooks, which often define $d$ based on properties such as

- $d(\alpha \wedge \beta) = d\alpha \wedge \beta + (-1)^k \alpha \wedge d\beta$ (where $\alpha$ is a $k$-form),
- $d^2=0$, or
- $d(\sum \omega_{i_1...i_k}dx_{i_1} \wedge \cdots \wedge dx_{i_k}) = \sum d\omega_{i_1...i_k} \wedge dx_{i_1} \wedge \cdots \wedge dx_{i_k}$.

While these are indeed quite basic properties of $d$, they are more appropriate as theorems than as a priori assumptions. (Of course, what is more "natural" is a matter of opinion, so please don't belabor the issue.) Since it's such an innocent-looking and natural characterization, I would like to see a clear, motivated, reasonably elementary proof of it. Why ought it to be true that there is only one natural linear map from $\Omega^p$ to $\Omega^{p+1}$, up to a constant multiple? What are the key steps of a proof?
I am trying to understand how one obtains the Galilean algebra from the Poincare algebra, specifically through the method of central extension. I'm doing this by imposing that the generators of the Poincare group scale with the velocity in certain ways, and then taking the small velocity limit. Also I was told I should make the definition $$H = M+W,\tag{1}$$ where $H$ is the Hamiltonian, and $M$ will turn out to be a central element of the algebra. Now I have been able to find many of the commutation relations and they make sense (i.e. match the Galilean algebra) but I don't really understand how to show $M$ commutes with everything. Also, is the physical interpretation that $M$ is the potential energy and $W$ is the kinetic energy?

In physical, non-covariant language, (WP conventions), the Poincaré algebra presents as $$ [P_0,P_i]=0,\\ [P_i,P_j]=0, \\ [J_i,P_0] = 0 ~, \\ [K_i,P_k] =- i \delta_{ik} P_0 ~,\\ [K_i,P_0] = -i P_i ~,\\ [K_m,K_n] = -i \epsilon_{mnk} J_k ~, \\ [J_m,P_n] = i \epsilon_{mnk} P_k ~,\\ [J_m,K_n] = i \epsilon_{mnk} K_k ~, \\ [J_m,J_n] = i \epsilon_{mnk} J_k ~, $$ where one of the Casimir invariants is $P_\mu P^\mu= P_0^2-\vec{P}^2$. Now redefine the boosts and $P^0$ up and down by the speed of light $c$, so $K^i\equiv c C^i$ and $P_0\equiv \frac {1}{c} E$. The Wigner-İnönü contraction $c\to\infty$ (slowness!) results in the naive Galilean Lie algebra G(3), $$[E,P_i]=0, \\ [P_i,P_j]=0, \\ [L_i,E]=0 , \\ [C_i,P_j]= 0 ~,\\ [C_i,E]=i P_i , \\ [C_i,C_j]=0, \\ [L_{m},L_{n}]=i \epsilon _{mnk} L_{k} ,\\ [L_{m},P_k]=i \epsilon _{mkj}P_j , \\ [L_{m},C_k]=i \epsilon _{mkj}C_j . $$ In effect, the boost has lost its time-translation piece and is but space translations proportional to the time, Galilean boosts, $C^i$; and the timelike momentum is a plain time-translation oblivious of c, namely a "hamiltonian", E.
The spacelike $P_i$ are generators of translations as before (momentum operators), and $L^i$ stand for generators of space rotations, having merely changed name from J, to banish any inapposite thoughts of spin. Observe how this limit has trivialized several right-hand sides to 0. In fact the 10D regular representation (matrix of structure constants, Gilmore p 220) is a very sparse matrix, indeed. It amounts to extreme structural violence. Note the quadratic invariants $P^iP^i$ and $L^iL^i$. The Bargmann central extension algebra is the above, but now with $[C_i,P_j]=iM\delta_{ij}$ instead of the above trivial relation ($E/c^2\to M$ as $c\to \infty$), where the central charge M is an invariant, as the name implies, easy to check consistency of. The quadratic momentum invariant now morphs into a new invariant, $ME-P^2/2$, the mass-shell invariant, and since M is invariant, $E-\frac{P^2}{2M}$ is invariant as well, the internal energy. E is like the Hamiltonian, but it is not an algebra invariant, as it fails to commute with the Galilean boosts. It is merely a time invariant, i.e. it commutes with itself--pfffft....

Cosmas Zachos has already given a correct answer. The main point is that the natural non-relativistic Lie algebra in Newtonian mechanics is the Bargmann algebra, not the Galilean algebra! The Bargmann algebra is an Inonu-Wigner contraction of $$iso(n\!-\!1,1)\oplus u(1).\tag{A} $$ Here $iso(n\!-\!1,1)$ is the Poincare algebra in $n$ spacetime dimensions, generated by $J^{\mu\nu}$ and $p^{\mu}$; while $u(1)$ is an algebra generated by the mass generator $m$, which belongs to the center. Concerning OP's eq. (1), the Hamiltonian $$H~=~p^0c-mc^2~=~\sqrt{{\bf p}^2c^2+m^2c^4}-mc^2\quad\longrightarrow\quad\frac{{\bf p}^2}{2m}\quad\text{for}\quad c\to \infty \tag{B}$$ is the kinetic energy, i.e. the energy minus the rest energy. For further details, see e.g. my Phys.SE answer here.
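The scaling behind the contraction can be made explicit for one bracket. With $K^i = cC^i$ as above, the boost-boost commutator picks up a factor $1/c^2$:

$$[C_m, C_n] \;=\; \frac{1}{c^2}\,[K_m, K_n] \;=\; -\frac{i}{c^2}\,\epsilon_{mnk} J_k \;\longrightarrow\; 0 \quad\text{as}\quad c\to\infty ,$$

which is how the right-hand side trivializes in the naive limit. The boost-translation bracket similarly acquires a factor $P_0/c = E/c^2$: letting $c\to\infty$ with $E$ fixed kills it, while holding $E/c^2$ fixed as the mass $M$ instead retains it as the central term $[C_i,P_j]=iM\delta_{ij}$ of the Bargmann extension.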
We use MOEADr to optimize a simple 2-objective problem composed of the Sphere function and the Rastrigin function in \(X = \{x_1,x_2,\dots,x_D\} \in\mathbb{R}^D\). For this example, we define the sphere and rastrigin functions in \(\mathbb{R}^D\) as: \[ \text{sphere}(X) = \sum_{i=1}^D x_i^2\\ \text{rastrigin}(X) = \sum_{i=1}^D (x_i^2 - 10 \cos(2\pi x_i) + 10) \] Their R implementation is as follows. Because both functions have their optimum at zero, we apply a simple displacement to the input parameters to make the output of the example problem more interesting.

sphere <- function(X) {
  X.shift <- X + seq_along(X)*0.1 # displace input parameters
  sum(X.shift**2)
}
rastringin <- function(X) {
  X.shift <- X - seq_along(X)*0.1 # displace input parameters in the opposite direction
  sum((X.shift)**2 - 10 * cos(2 * pi * X.shift) + 10)
}

The MOEADr package requires the multi-objective problem to be defined as a function that receives the entire solution set as a matrix, and returns the objective values also as a matrix. For details see the Problem Definition vignette.
We will achieve this requirement by wrapping the sphere function and the rastrigin function as follows:

problem.sr <- function(X) {
  t(apply(X, MARGIN = 1,
          FUN = function(X) { c(sphere(X), rastringin(X)) }))
}

Finally, we need to create a problem definition list that specifies the number of objectives, and the minimum and maximum parameter values for each dimension:

problem.1 <- list(name = "problem.sr", # function that executes the MOP
                  xmin = rep(-1,30),   # minimum parameter value for each dimension
                  xmax = rep(1,30),    # maximum parameter value for each dimension
                  m    = 2)            # number of objectives

To load the package and run the problem using the original MOEA/D variant, we use the following commands:

library(MOEADr)
results <- moead(problem = problem.1,
                 preset = preset_moead("original"),
                 showpars = list(show.iters = "none"),
                 seed = 42)

The moead() function requires a problem definition (discussed in the previous section) and an algorithm configuration (in this case, an algorithm preset, and optional changes to the preset), logging parameters, and a seed. The preset_moead() function can output a number of different presets based on combinations found in the literature. In this example, preset_moead("original") returns the original MOEA/D configuration, as proposed by Zhang and Li (2007) 1. You can get a list of available presets by running:

preset_moead()
#>        name x description
#> 1  original | Original MOEA/D: Zhang and Li (2007), Sec. V-E, p.721-722
#> 2 original2 | Original MOEA/D, v2: Zhang and Li (2007), Sec. V-F, p.724
#> 3  moead.de | MOEA/D-DE: Li and Zhang (2009)
#>
#> Use preset_moead("name") to generate a standard MOEAD composition

The moead() function returns a list object of class moead, containing the final solution set, objective values for each solution, and other information about the optimization process. The MOEADr package uses S3 to implement versions of plot() and summary() for this object.
plot() will show the pareto front for the objectives, as in the figure below. When the number of objectives is greater than 2, a parallel coordinates plot is also produced (see the next example). summary() displays information about the number of non-dominated and feasible solution points, the estimated ideal and nadir values (when available), and (optionally) the IDR and hypervolume yielded by the feasible, nondominated set.

summary(results)
#> Warning: reference point not provided:
#> using the maximum in each dimension instead.
#> Summary of MOEA/D run
#> #====================================
#> Total function evaluations: 20100
#> Total iterations: 200
#> Population size: 100
#> Feasible points found: 100 (100% of total)
#> Nondominated points found: 100 (100% of total)
#> Estimated ideal point: 31.879 92.925
#> Estimated nadir point: 117.372 450.424
#> Estimated HV: 23972.73
#> Ref point used for HV: 117.3718 450.4242
#> #====================================
plot(results, suppress.pause = TRUE)

The smoof package 2 provides generators for a large number of single and multi objective test functions. The MOEADr package provides a wrapper (function make_vectorized_smoof()) to easily convert smoof functions to the format required by the moead() function. The code snippet below generates a MOP with five objective functions from smoof, and the necessary problem definition for moead():

library(smoof)
#> Loading required package: ParamHelpers
#> Loading required package: BBmisc
#> Loading required package: checkmate
DTLZ2 <- make_vectorized_smoof(prob.name = "DTLZ2",
                               dimensions = 20,
                               n.objectives = 5)
problem.dtlz2 <- list(name = "DTLZ2",
                      xmin = rep(0, 20),
                      xmax = rep(1, 20),
                      m = 5)

In this example we will also show how to modify an algorithm preset.
Because of the higher number of objectives, we want to reduce the value of the parameter \(H\) in the decomposition component SLD used by the preset from 100 to 8:

results.dtlz <- moead(problem = problem.dtlz2,
                      preset = preset_moead("original"),
                      decomp = list(name = "SLD", H = 8),
                      showpars = list(show.iters = "dots"),
                      seed = 42)
#>
#> MOEA/D running: ....................

Notice on the figure below that MOEADr plots extra information when the number of objectives in a problem is greater than 2:

summary(results.dtlz)
#> Warning: reference point not provided:
#> using the maximum in each dimension instead.
#> Summary of MOEA/D run
#> #====================================
#> Total function evaluations: 99495
#> Total iterations: 200
#> Population size: 495
#> Feasible points found: 495 (100% of total)
#> Nondominated points found: 242 (48.9% of total)
#> Estimated ideal point: 0 0 0 0 0
#> Estimated nadir point: 1.252 1.176 1.174 1.571 2.16
#> Estimated HV: 4.974365
#> Ref point used for HV: 1.252253 1.176129 1.174102 1.57124 2.160007
#> #====================================
plot(results.dtlz, suppress.pause = TRUE)

Note that for more complex MOPs, the preset values suggested by preset_moead() might not be effective. For example, the standard value of 100 for \(H\) in the SLD component is appropriate for 2 objectives, but exceeds the available memory for \(m > 3\). Therefore, we strongly recommend that the user explore the meta-parameter space. The next case study shows one way to perform this task semi-automatically.
We study randomly growing trees governed by the affine preferential attachment rule. Starting with a seed tree S, vertices are attached one by one, each linked by an edge to a random vertex of...

We consider the S-matrix bootstrap of four dimensional scattering amplitudes with O(3) symmetry and no bound-states. We explore the allowed space of scattering lengths which parametrize the...

We perform a unified systematic analysis of d+1 dimensional, spin \ell representations of the isometry algebra of the maximally symmetric spacetimes AdS_d+1, \mathbb{R}_{1,d} and dS_d+1. This...

The Ratner property, a quantitative form of divergence of nearby trajectories, is a central feature in the study of parabolic homogeneous flows. Discovered by Marina Ratner and used in her...

We show that the loop O(n) model exhibits exponential decay of loop sizes whenever n\geq 1 and x<\tfrac{1}{\sqrt{3}}+\varepsilon(n), for some suitable choice of \varepsilon(n)>0. It...

Fixed points of scalar field theories with quartic interactions in d=4-\varepsilon dimensions are considered in full generality. For such theories it is known that there exists a scalar function...

The goal of this paper is to investigate the Theta invariant --- an invariant of framed 3-manifolds associated with the lowest order contribution to the Chern-Simons partition function --- in...

Uniform integer-valued Lipschitz functions on a domain of size N of the triangular lattice are shown to have variations of order \sqrt{\log N}. The level lines of such functions form a...

We prove a version of the Weak Gravity Conjecture for 6d F-theory or heterotic string compactifications with 8 supercharges. This sharpens our previous analysis by including massless scalar...

This text describes the content of the Takagi lectures given by the author in Kyoto in 2017. The lectures present some aspects of the theory of sharp thresholds for boolean functions and its...

In this paper we give an overview of some recent progress in using holography to study various far-from-equilibrium condensed matter systems. Non-equilibrium problems are notoriously difficult...

We give a proof of the replica symmetric formula for the free energy of the Sherrington-Kirkpatrick model in high temperature which is based on the TAP formula. This is achieved by showing that...
Recall: the Fourier series representation of a periodic signal $\tilde{x}(t)$ with time period $T$ is given by: $$\tilde{x}(t) = \sum_{k=-\infty}^{\infty} a_k\, e^{jk\omega_0 t}, \qquad \omega_0 = \frac{2\pi}{T}.$$ Suppose $x(t)$ is not periodic. Is there a representation for $x(t)$ as a linear combination of complex exponentials? The main idea is to think of $x(t)$ as the limit of $\tilde{x}(t)$ when $T \rightarrow \infty$, i.e. $$x(t) = \lim_{T \to \infty}\tilde{x}(t).$$

Summary:
1. The F.S. representation applies to periodic signals, i.e. a signal contains only frequencies which are integer multiples of a fundamental frequency.
2. The F.T. representation applies to non-periodic (and periodic) signals, i.e. the signal may contain a continuum of frequencies. $X(j\omega)$ refers to the F.T., where $\omega$ is a continuously changing variable.

So the Analysis and Synthesis Equations respectively are given by: $$X(j\omega) = \int_{-\infty}^{\infty} x(t)\, e^{-j\omega t}\, dt, \qquad x(t) = \frac{1}{2\pi}\int_{-\infty}^{\infty} X(j\omega)\, e^{j\omega t}\, d\omega.$$
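The passage from series to transform can be sketched with the standard envelope argument. The Fourier series coefficients are $$a_k = \frac{1}{T}\int_{T} \tilde{x}(t)\, e^{-jk\omega_0 t}\, dt, \qquad \omega_0 = \frac{2\pi}{T}.$$ Defining the envelope $X(j\omega)$ by $T a_k = X(jk\omega_0)$, the synthesis sum becomes $$\tilde{x}(t) = \frac{1}{2\pi}\sum_{k=-\infty}^{\infty} X(jk\omega_0)\, e^{jk\omega_0 t}\, \omega_0,$$ and as $T \to \infty$ the line spacing $\omega_0 \to 0$, the sum passes to an integral over $\omega$, and $\tilde{x}(t) \to x(t)$, which yields the Fourier transform pair.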
The 4D SUSY algebra can be written as $$\{ Q_{\alpha}^{A} , Q_{\beta}^{B \dagger} \} = 2 m \delta^{AB} \delta_{\alpha \beta} + 2 i Z^{AB} \Gamma^0_{\alpha \beta}, \tag{B.2.37} $$ in a particular reference frame. One can find this formula in Appendix B, page 448 of Polchinski's String Theory vol. II. I am confused by the $i$ before the central charge. If we take the Hermitian conjugate of both sides: $$\{ Q_{\alpha}^{A \dagger} , Q_{\beta}^{B} \} = 2 m \delta^{AB} \delta_{\alpha \beta} - 2 i Z^{AB} (\Gamma^0_{\alpha \beta})^* $$ and then exchange $(A,\alpha)$ with $(B,\beta)$, the LHS is invariant. But the RHS is $$2 m \delta^{AB} \delta_{\alpha \beta} - 2 i Z^{BA} (\Gamma^0_{ \beta \alpha})^* = 2 m \delta^{AB} \delta_{\alpha \beta} - 2 i Z^{BA} (\Gamma^0)^{\dagger}_{ \alpha \beta}.$$ Since $Z^{AB}$ is anti-symmetric and $(\Gamma^0)^{\dagger} = -\Gamma^0$, it seems that we have the wrong sign before the central charge term: $$2 m \delta^{AB} \delta_{\alpha \beta} - 2 i Z^{AB} (\Gamma^0)_{ \alpha \beta}.$$ I think I made a mistake but I cannot figure out where it is. This post imported from StackExchange Physics at 2018-04-18 20:42 (UTC), posted by SE-user JQ Skywalker
ok, suppose we have the set $U_1=[a,\frac{a+b}{2}) \cup (\frac{a+b}{2},b]$ where $a,b$ are rational. It is easy to see that there exists a countable cover which consists of intervals that converge towards $a$, $b$ and $\frac{a+b}{2}$. Therefore $U_1$ is not compact. Now we can construct $U_2$ by taking the midpoint of each half open interval of $U_1$ and we can similarly construct a countable cover that has no finite subcover. By induction on the naturals, we eventually end up with the set $\Bbb{I} \cap [a,b]$. Thus this set is not compact

I am currently working under the Lebesgue outer measure, though I did not know we cannot define any measure where subsets of rationals have nonzero measure

The above workings is basically trying to compute $\lambda^*(\Bbb{I}\cap[a,b])$ more directly without using the fact $(\Bbb{I}\cap[a,b]) \cup (\Bbb{Q}\cap[a,b]) = [a,b]$ where $\lambda^*$ is the Lebesgue outer measure

that is, trying to compute the Lebesgue outer measure of the irrationals using only the notions of covers, topology and the definition of the measure

What I hope from such more direct computation is to get deeper rigorous and intuitive insight on what exactly controls the value of the measure of some given uncountable set, because MSE and Asaf taught me it has nothing to do with connectedness or the topology of the set

Problem: Let $X$ be some measurable space and $f,g : X \to [-\infty, \infty]$ measurable functions. Prove that the set $\{x \mid f(x) < g(x) \}$ is a measurable set.

Question: In a solution I am reading, the author just asserts that $g-f$ is measurable and the rest of the proof essentially follows from that. My problem is, how can $g-f$ make sense if either function could possibly take on an infinite value?
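For that last measurability question, one standard workaround (not taken from the quoted solution) sidesteps $g-f$, and hence the $\infty-\infty$ problem, entirely by separating $f$ and $g$ with a rational: $$\{x \mid f(x) < g(x)\} \;=\; \bigcup_{q \in \Bbb{Q}} \Big( \{x \mid f(x) < q\} \cap \{x \mid q < g(x)\} \Big),$$ a countable union of intersections of measurable sets, hence measurable, with no arithmetic on $\pm\infty$ needed.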
@AkivaWeinberger For $\lambda^*$ I can think of simple examples like: if $\frac{a}{2} < \frac{b}{2} < a, b$, then I can always add some $\frac{c}{2}$ to $\frac{a}{2},\frac{b}{2}$ to generate the interval $[\frac{a+c}{2},\frac{b+c}{2}]$, which will fulfill the criteria. But if you are interested in some $X$ that are not intervals, I am not very sure. We then manipulate the $c_n$ for the Fourier series of $h$ to obtain a new $c_n$, but expressed w.r.t. $g$. Now, I am still not understanding why, by doing what we have done, we're logically showing that this new $c_n$ is the $d_n$ which we need. Why would this $c_n$ be the $d_n$ associated with the Fourier series of $g$? $\lambda^*(\Bbb{I}\cap [a,b]) = \lambda^*(C) = \lim_{i\to \aleph_0}\lambda^*(C_i) = \lim_{i\to \aleph_0} (b-q_i) + \sum_{k=1}^i (q_{n(k)}-q_{m(k)}) + (q_{i+1}-a)$. Therefore, computing the Lebesgue outer measure of the irrationals directly amounts to computing the value of this series. Therefore, we first need to check that it is convergent, and then compute its value. (cont.) We first observed that the above countable sum is an alternating series.
Therefore, we can use some machinery for checking the convergence of an alternating series. Next, we observed that the terms in the alternating series are monotonically increasing and bounded from above and below by $b$ and $a$ respectively. Each term in brackets is also nonnegative by the Lebesgue outer measure of open intervals; together, let the differences be $c_i = q_{n(i)}-q_{m(i)}$. These form a series that is bounded from above and below. Hence: $$\lambda^*(\Bbb{I}\cap [a,b])=\sum_{i=1}^{\aleph_0}c_i$$ Consider the partial sums of the above series. Note every partial sum is telescoping, since in a finite series addition associates and thus we are free to cancel out. By the construction of the cover $C$, every rational $q_i$ that is enumerated is ordered such that they form expressions $-q_i+q_i$. Hence for any partial sum, moving through the stages of the construction of $C$, i.e. $C_0,C_1,C_2,...$, the only surviving term is $b-a$. Therefore, the countable sequence is also telescoping and: @AkivaWeinberger Never mind. I think I figured it out alone. Basically, the value of the definite integral for $c_n$ is actually the value of the definite integral of $d_n$. So they are the same thing but re-expressed differently. If you have a function $f : X \to Y$ between two topological spaces $X$ and $Y$, you can't conclude anything about the topologies; if however the function is continuous, then you can say stuff about the topologies. @Overflow2341313 Could you send a picture or a screenshot of the problem? nvm, I overlooked something important. Each interval contains a rational, and there are only countably many rationals.
This means at the $\omega_1$ limit stage, there are uncountably many intervals that contain neither rationals nor irrationals; thus they are empty and do not contribute to the sum. So there are only countably many disjoint intervals in the cover $C$. @Perturbative Okay, similar problem, if you don't mind guiding me in the right direction. If a function $f$ exists, with the same setup $(X, t) \to (Y, S)$, that is 1-1, open, and continuous but not onto, construct a topological space which is homeomorphic to the space $(X, t)$. Simply restrict the codomain so that it is onto? Making it bijective and hence invertible. hmm, I don't understand. While I do start with an uncountable cover and use the axiom of choice to well order the irrationals, the fact that the rationals are countable means I eventually end up with a countable cover of the rationals. However the telescoping countable sum clearly does not vanish, so this is weird... In a schematic, we have the following; I will try to figure this out tomorrow before moving on to computing the Lebesgue outer measure of the Cantor set: @Perturbative Okay, last question. Think I'm starting to get this stuff now.... I want to find a topology $t$ on $\Bbb{R}$ such that $f: (\Bbb{R}, U) \to (\Bbb{R}, t)$ defined by $f(x) = x^2$ is an open map, where $U$ is the "usual" topology defined by $U = \{x \in U \mid x \in U$ implies that $x \in (a,b) \subseteq U\}$. To do this... the smallest $t$ can be is the trivial topology on $\Bbb{R}$ -- $\{\emptyset, \Bbb{R}\}$. But, we required that everything in $U$ be in $t$ under $f$?
@Overflow2341313 Also for the previous example, I think it may not be as simple (contrary to what I initially thought), because there do exist functions which are continuous and bijective but do not have a continuous inverse. I'm not sure if adding the additional condition that $f$ is an open map will make any difference. For those who are not very familiar with this interest of mine: besides the maths, I am also interested in the notion of a "proof space", that is, the set or class of all possible proofs of a given proposition and their relationships. An element of a proof space is a proof, which consists of steps, forming a path in this space. For that I have a postulate: given two paths $A$ and $B$ in proof space with the same starting point and a proposition $\phi$, if $A \vdash \phi$ but $B \not\vdash \phi$, then there must exist some condition that makes the path $B$ unable to reach $\phi$, or $B$ is unable to prove it under the current formal system. Hi. I believe I have numerically discovered that $\sum_{n=0}^{K-c}\binom{K}{n}\binom{K}{n+c}z_K^{n+c/2} \sim \sum_{n=0}^K \binom{K}{n}^2 z_K^n$ as $K\to\infty$, where $c=0,\dots,K$ is fixed and $z_K=K^{-\alpha}$ for some $\alpha\in(0,2)$. Any ideas how to prove that?
Why? How do we know how these definitions are related? Fix a set $S$ and consider an equivalence relation $\sim$ on $S$. Consider the equivalence classes $[a]=\{ x \in S : x \sim a \}$. By definition these are subsets of $S$. Their union is $S$ because by reflexivity $a\in[a]$ for every $a\in S$. Finally, they are disjoint because if $x \in [a]$ and $x \in [b]$ then $x \sim a$ and so $a \sim x$, by symmetry. But then $a \sim b$ by transitivity and so $a\in [b]$. Every $y \in [a]$ is also in $[b]$, again by transitivity. This means that $[a]\subseteq [b]$. Swapping $a$ and $b$, we conclude $[a]=[b]$. All this means that the equivalence classes form a partition of $S$. Conversely, given a partition of $S$ into subsets $C_\lambda$, define an equivalence relation on $S$ by $a\sim b$ iff there is a $\lambda$ such that $a$ and $b$ are both in $C_\lambda$. Since the $C_\lambda$ cover $S$, every $a\in S$ is in one of those and so $\sim$ is reflexive. By definition, $\sim$ is symmetric. Since the $C_\lambda$ are disjoint, $\sim$ is transitive. Note: Every partition of a set determines an equivalence relation on that set, and for every equivalence relation, the equivalence classes corresponding to that relation form a partition of the set. To try to put into words the relationship between a partition on a set, and the equivalence relation determined by that partition (or vice versa): Think of simple examples of an equivalence relation on a set X, and its corresponding equivalence classes (say, $\equiv \pmod 2$ on the set of integers). What are the corresponding equivalence classes? There are two: the set of even integers, and the set of odd integers. Evens: $E = \{x \in \mathbb{Z}\mid x\equiv 0 \pmod{2}\}$, Odds: $O = \{y \in \mathbb{Z} \mid y\equiv 1\pmod{2}\}$ Is the union of the two equivalence classes equal to the set of integers? (yes). That is, $E \cup O = \mathbb{Z}$. Is any integer in more than one of those classes? (no). That is, $E\cap O = \varnothing$.
So we have two equivalence classes whose union is the set of integers, and whose intersection is empty. Hence, we have a partition on $\mathbb{Z}$ into two sets: the set of all even integers, and the set of all odd integers. By definition every element in a given equivalence class is related to every other element in that class, and not to any element belonging to a different equivalence class. In the example I give above, all even numbers are related (they are even i.e., $\equiv 0 \pmod{2}$), all odd numbers are related (they are all odd, i.e., $\equiv 1 \pmod{2}$), but no integer is both even and odd. The collection of all the equivalence classes is a partition of $X$. Every $x \in X$ belongs to one and only one equivalence class. For example, the second definition is telling you that the union of all the equivalence classes of $X$ IS $X$ (put differently, every element of X is contained in an equivalence class) and that if any two equivalence classes are not equal, then they are disjoint: their intersection is the empty set. So every element of $X$ is in one and only one equivalence class.
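The mod-2 example above can be checked mechanically. The sketch below (Python; the set `S` and the helper names are mine, chosen for illustration) builds the equivalence classes and verifies that they partition a finite slice of $\mathbb{Z}$:

```python
# Build the equivalence classes of "congruent mod 2" on a finite slice of
# the integers and check that they form a partition, as described above.
S = list(range(-5, 6))

def equiv(a, b):
    # a ~ b iff a and b differ by an even number
    return (a - b) % 2 == 0

# [a] = {x in S : x ~ a}; collecting the classes over all a in S
classes = {frozenset(x for x in S if equiv(x, a)) for a in S}

evens = frozenset(x for x in S if x % 2 == 0)
odds = frozenset(x for x in S if x % 2 == 1)

# Exactly two classes arise: the evens E and the odds O
assert classes == {evens, odds}
# They are disjoint and their union is all of S, i.e. a partition
assert evens.isdisjoint(odds)
assert set().union(*classes) == set(S)
```

Running the same construction with any other equivalence relation on `S` (say, congruence mod 3) produces the corresponding partition in the same way.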
Unit 02: Differentiation Notes (Solutions) of Unit 02: Differentiation, Calculus and Analytic Geometry, MATHEMATICS 12 (Mathematics FSc Part 2 or HSSC-II), Punjab Text Book Board Lahore. You can view online or download the PDF. To view the PDF, you must have a PDF reader installed on your system; it can be downloaded from the Software section. Here are a few online resources which are very helpful for finding derivatives. Contents & summary: Introduction; Average Rate of Change; Derivative of a Function; Finding $f'(x)$ from the Definition of Derivative; Derivative of $x^n$ where $n \in \mathbb{Z}$; Exercise 2.1; Differentiation of Expressions of the types; Exercise 2.2; Theorems on Differentiation; Exercise 2.3; The Chain Rule; Derivatives of Inverse Functions; Derivative of a Function given in the form of Parametric Equations; Differentiation of Implicit Relations; Exercise 2.4; Derivatives of Trigonometric Functions; Derivatives of Inverse Trigonometric Functions; Exercise 2.5; Derivative of Exponential Functions; Derivative of the Logarithmic Function; Logarithmic Differentiation; Derivative of Hyperbolic Functions; Derivative of the Inverse Hyperbolic Functions; Exercise 2.6; Successive Differentiation (or Derivatives); Exercise 2.7; Series Expansions of Functions; Taylor Series Expansions of Functions; Exercise 2.8; Geometrical Interpretation of a Derivative; Increasing and Decreasing Functions; Relative Extrema; Critical Values and Critical Points; Exercise 2.9; Exercise 2.10. Which method is better? In this chapter many questions can be solved in a much easier way. Actually in every exercise some formula/method is introduced to solve the questions. In the examination it is not necessary to use the same method as given in the exercise. Here is one example: we have to find the derivative of $\frac{x+1}{x-1}$ with respect to $x$.
Method 1 $$ \begin{aligned} \frac{d}{dx}\left(\frac{x+1}{x-1}\right) &= \frac{(x-1)\frac{d}{dx}(x+1)-(x+1)\frac{d}{dx}(x-1)}{(x-1)^2}\\ &= \frac{(x-1)(1)-(x+1)(1)}{(x-1)^2}\\ &= \frac{x-1-x-1}{(x-1)^2}\\ &= \frac{-2}{(x-1)^2} \end{aligned} $$ Method 2 By converting the improper fraction to a proper one $$ \frac{x+1}{x-1}= 1+\frac{2}{x-1}=1+2(x-1)^{-1} $$ Now $$ \begin{aligned} \frac{d}{dx}\left(\frac{x+1}{x-1}\right) &=\frac{d}{dx}\left(1+2(x-1)^{-1}\right)\\ &= 0-2(x-1)^{-2}(1)\\ &= \frac{-2}{(x-1)^2} \end{aligned} $$ This was a simple example, but try the same idea to find the derivative of $\frac{x^2+1}{x^2-1}$. Solutions: here are the previous and next chapters. Last modified: 4 weeks ago by Dr. Atiq ur Rehman
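Both methods above can be sanity-checked numerically. The sketch below (Python; the function names are mine) compares a central finite difference against the closed form $-\frac{2}{(x-1)^2}$ at a few sample points away from the pole at $x=1$:

```python
# Compare a central finite difference of f(x) = (x+1)/(x-1) with the
# closed-form derivative -2/(x-1)^2 obtained by both methods above.
def f(x):
    return (x + 1) / (x - 1)

def deriv_exact(x):
    return -2 / (x - 1) ** 2

h = 1e-6
for x in [2.0, 3.0, -1.0, 0.5]:
    numeric = (f(x + h) - f(x - h)) / (2 * h)
    assert abs(numeric - deriv_exact(x)) < 1e-5
```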
So I am studying spring mass systems, and I'm confused by why the solution, which is complex, becomes real. Such that $$x(t) = C_1\cos(\omega t)+iC_2\sin(\omega t) = C_1\cos(\omega t)+C_2\sin(\omega t)$$ Why does the imaginary term just become an arbitrary constant? Also the general solution is also simplified to: $$x(t)=A\cos(\omega t+\phi)$$ How do you get that result? I tried drawing sine and cosine waves to correlate them, but I don't see it. I also looked at some trig identities, but no avail.
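One way to see the second simplification: expand $A\cos(\omega t+\phi)=A\cos\phi\cos(\omega t)-A\sin\phi\sin(\omega t)$ and match coefficients with $C_1\cos(\omega t)+C_2\sin(\omega t)$, giving $C_1=A\cos\phi$ and $C_2=-A\sin\phi$, hence $A=\sqrt{C_1^2+C_2^2}$ and $\phi=\operatorname{atan2}(-C_2,C_1)$. A quick numeric check of this identity (the constants below are arbitrary illustrative values):

```python
import math

# Check C1*cos(w t) + C2*sin(w t) == A*cos(w t + phi) with
# A = sqrt(C1^2 + C2^2) and phi = atan2(-C2, C1).
C1, C2, w = 1.3, -0.7, 2.0
A = math.hypot(C1, C2)
phi = math.atan2(-C2, C1)

for k in range(50):
    t = 0.1 * k
    lhs = C1 * math.cos(w * t) + C2 * math.sin(w * t)
    rhs = A * math.cos(w * t + phi)
    assert abs(lhs - rhs) < 1e-12
```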
So, given some data, Mathematica 10.2 can now attempt to figure out what probability distribution might have produced it. Cool! But suppose that, instead of having data, we have something that is in some ways better -- a formula. Let's call it $f$. We suspect -- perhaps because $f$ is non-negative over some domain and because the integral of $f$ over that domain is 1 -- that $f$ is actually the PDF of some distribution (Normal, Lognormal, Gamma, Weibull, etc.) or some relatively simple transform of that distribution. Is there any way that Mathematica can help figure out the distribution (or simple transform) whose PDF is the same as $f$? Example: Consider the following formula: 1/(2*E^((-m + Log[5])^2/8)*Sqrt[2*Pi]) $$\frac{e^{-\frac{1}{8} (\log (5)-m)^2}}{2 \sqrt{2 \pi }}$$ As it happens -- and as I discovered with some research and guesswork -- this formula is the PDF of NormalDistribution[Log[5], 2] evaluated at $m$. But is there a better way than staring or guessing to discover this fact? That is, help me write FindExactDistribution[f_, params_]. Notes The motivation for the problem comes from thinking about Conjugate Prior distributions but I suspect it might have a more general application. One could start with mapping PDF evaluated at $m$ over a variety of continuous distributions. And if I did this I would at some point get to what I will call $g$, which is the PDF of the NormalDistribution with parameters $a$ and $b$, evaluated at $m$. 1/(b*E^((-a + m)^2/(2*b^2))*Sqrt[2*Pi]) $$\frac{e^{-\frac{(m-a)^2}{2 b^2}}}{\sqrt{2 \pi } b}$$ But unless I knew that if I replaced $a$ by Log[5] and $b$ by $2$ that I would get $f$, this fact would not mean a lot to me. I suppose I could look at the TreeForm of $f$ and $g$ and I would notice certain similarities, and that might be a hint, but I am not sure how to make much progress beyond that observation.
Ultimately, the problem looks to be about finding substitutions in parts of a tree ($g$) which, after evaluation, yield a tree that matches a target $f$. I have the suspicion that this is a difficult problem with an NKS flavor but one for which Mathematica and its ability to transform expressions might be well suited. I appreciate the responses here. But let me provide an example that is perhaps not so easy. Suppose the target function $f$ is as follows: $\frac{7}{10 (a-2)^2}$ for the domain $(-\infty,\frac{13}{10}]$. If we create a probability distribution out of this and then generate 10,000 random samples from the distribution and then run FindDistribution dis = ProbabilityDistribution[7/(10 (-2 + a)^2), {a, -\[Infinity], 13/10}]; rv = RandomVariate[dis,10^4]; fd=FindDistribution[rv,5] the result is a mixture distribution of normal distributions, a beta distribution, a Weibull distribution, a normal distribution, and a mixture distribution of a normal distribution and a gamma distribution. The mixture distributions are clearly of the wrong form, and the normal distribution is clearly not right. Although I am not positive, I don't believe the Weibull distribution or the beta distribution is correct either. In fact, I don't know what the correct answer is, though I think it might be a fairly simple transform of a single-parameter distribution. The point, however, is that the FindDistribution process does not seem to work in this case. And that's why I am hoping for something better.
Numerical method for image registration model based on optimal mass transport 1. Department of Applied Mathematics, University of Waterloo, 200 University Avenue West, Waterloo, ON, N2L 3G1, Canada 2. David R. Cheriton School of Computer Science, University of Waterloo, 200 University Avenue West, Waterloo, ON, N2L 3G1, Canada This paper proposes a numerical method for solving a non-rigid image registration model based on optimal mass transport. The main contribution of this paper is to address two issues. One is that we impose a proper periodic boundary condition, such that when the reference and template images are related by translation, or a combination of translation and non-rigid deformation, the numerical solution gives the underlying transformation. The other is that we design a numerical scheme that converges to the optimal transformation between the two images. As an additional benefit, our approach can decompose the transformation into translation and non-rigid deformation. Our numerical results show that the numerical solutions yield good-quality transformations for non-rigid image registration problems. Keywords: Image registration, mass transport, Monge-Ampère equation, periodic boundary condition, viscosity solution, Hamilton-Jacobi-Bellman (HJB) equation, monotonicity, finite difference. Mathematics Subject Classification: Primary: 65N06, 65N22; Secondary: 35J96. Citation: Yangang Chen, Justin W. L. Wan. Numerical method for image registration model based on optimal mass transport. Inverse Problems & Imaging, 2018, 12 (2): 401-432. doi: 10.3934/ipi.2018018
[Figure legend] Color of a square element: white -- zero net flow of mass/pixels, area and intensity invariant; red -- inflow, element compressed, intensity increases; blue -- outflow, element expanded, intensity decreases.
[Table: convergence and timing for Example 3]
Image size: 100x100, 200x200, 400x400, 800x800
Number of steps for convergence: 5, 3, 3, 3
CPU time for corrections of translation kernels (sec): 1.0, 4.6, 30, 259
CPU time for the primary nonlinear solver (sec): 3.1, 7.3, 58, 1083
Total CPU time (sec): 4.1, 11.9, 88, 1342
[Table: timing for Examples 3-7, image size 600x600]
Number of steps for convergence: 3, 3, 10, 10, 19
CPU time for the primary nonlinear solver (sec): 147, 152, 668, 627, 1613
[Table: residual by iteration]
Iteration: 1, 2, 3, 4, 5, 6, 7, 8, 9, 10
Residual: 1382, 195, 2.32, 1.71, 0.131, 0.0236, 0.00492, 9.50x10, 3.42x10, 9.41x10
Here's a way of thinking about it. To be clear however, this does not prove the Pythagorean theorem. Given two 2-dimensional vectors $\mathbb{x},\mathbb{y} \in \Bbb R^2,$ with entries $\mathbb{x} = (x_1,x_2)$ and $\mathbb{y} = (y_1,y_2),$ we define a norm as,$$ |\mathbb{x}| = \sqrt{x_1^2 + x_2^2} $$We say two vectors are orthogonal if,$$ x_1y_1 + x_2y_2 = 0. $$ Theorem If $\mathbb{x}, \mathbb{y} \in \Bbb R^2$ are orthogonal, then $$ |\mathbb{x} + \mathbb{y}|^2 = |\mathbb{x}|^2 + |\mathbb{y}|^2 $$ Proof: We have,$$\begin{align*} |\mathbb{x} + \mathbb{y}|^2 &= | (x_1+y_1,x_2+y_2)|^2 \\&= \left( \sqrt{ (x_1+y_1)^2 + (x_2+y_2)^2 } \right)^2 \\&= (x_1^2 + 2x_1y_1 + y_1^2) + (x_2^2 + 2x_2y_2 + y_2^2) \\&= (x_1^2 + x_2^2) + 2 ( x_1y_1 + x_2y_2 ) + (y_1^2 + y_2^2) \\&= |\mathbb{x}|^2 + |\mathbb{y}|^2,\end{align*}$$where the middle term vanishes because $x_1y_1 + x_2y_2 = 0$ by orthogonality. As required. Note that the simplification $(\sqrt{x})^2 = x$ only holds if $x\geq 0,$ which holds here since a sum of squares of real numbers is nonnegative. The geometric interpretation of the result is that if two vectors are orthogonal, then they are perpendicular (so their angles form a right angle). The norm corresponds to the length of the vector and $|\mathbb{x}+\mathbb{y}|$ corresponds to the length of the vector obtained by adding the two together - so the hypotenuse of the triangle formed by the two vectors. It must be emphasized however that this does not prove the Pythagorean theorem. Rather, I have just proved a result about 2-dimensional real vectors and gave a geometrical interpretation. To complete this proof, you would need to prove this correspondence, which requires geometry. On a side-note, this result holds for a large class of spaces, known as inner product spaces, using a suitable norm and definition for orthogonality.
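A quick numerical illustration of the theorem (the sample vectors are mine; $(a,b)$ and $(-b,a)$ are orthogonal by construction):

```python
import math

# Numeric check: for orthogonal x, y in R^2, |x + y|^2 = |x|^2 + |y|^2.
def norm(v):
    return math.hypot(v[0], v[1])

x = (3.0, 4.0)
y = (-4.0, 3.0)  # (-b, a) is orthogonal to (a, b)
assert x[0] * y[0] + x[1] * y[1] == 0  # orthogonality

s = (x[0] + y[0], x[1] + y[1])
assert abs(norm(s) ** 2 - (norm(x) ** 2 + norm(y) ** 2)) < 1e-9
```

Here $|\mathbb{x}|=|\mathbb{y}|=5$ and $|\mathbb{x}+\mathbb{y}|^2=50=25+25$, matching the theorem.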
From WolframAlpha it seems that $$ \frac{1}{2}=\sum_{k=1}^{\infty} \zeta(2k)-\zeta(2k+1) $$ Could someone provide a proof for this? Thanks.
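One route to a proof (a sketch, not part of the question): since the $n=1$ terms cancel, $\zeta(2k)-\zeta(2k+1)=\sum_{n\ge 2}\left(n^{-2k}-n^{-2k-1}\right)$; interchanging the order of summation and evaluating the geometric series in $k$ gives $\sum_{n\ge 2}\frac{1}{n(n+1)}$, which telescopes to $\frac12$. The telescoping partial sums can be checked numerically:

```python
# Each term zeta(2k) - zeta(2k+1) equals sum_{n>=2} (n^{-2k} - n^{-2k-1});
# summing the geometric series over k first gives sum_{n>=2} 1/(n(n+1)),
# which telescopes: the partial sum up to N is exactly 1/2 - 1/(N+1).
N = 100_000
partial = sum(1.0 / (n * (n + 1)) for n in range(2, N + 1))
assert abs(partial - (0.5 - 1.0 / (N + 1))) < 1e-9
assert abs(partial - 0.5) < 1e-4
```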
I have the following sum $$\sum_{a=0}^{[p/2]}\frac{p!}{(a!)^2(p-2a)!}2^{p-2a},$$ where $[p/2]$ denotes the integer part (floor) and $p \in \mathbb{Z}^+$. Is there any way to make the sum look simpler? Especially in regard to the fact that the upper bound must be an integer. Any suggestion will be appreciated. (This is what I found to be the sum of the $2p$th powers of some lengths.)
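One observation (mine, not from the post): the sum appears to equal the central binomial coefficient $\binom{2p}{p}$ — it is the coefficient of $x^p$ in $(1+2x+x^2)^p=(1+x)^{2p}$, read off from the multinomial expansion. A quick exact-arithmetic check:

```python
from math import comb, factorial

# The sum from the question, computed in exact integer arithmetic.
def s(p):
    return sum(
        factorial(p) // (factorial(a) ** 2 * factorial(p - 2 * a)) * 2 ** (p - 2 * a)
        for a in range(p // 2 + 1)
    )

# It matches the central binomial coefficient C(2p, p) for small p.
for p in range(1, 12):
    assert s(p) == comb(2 * p, p)
```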
That formula doesn't break down. It can be proven that the right-hand side is equal to $\pi$. One reason it comes up in connection with approximation is that the arctangent terms can be approximated well by adding terms from the Taylor expansion of the arctangent function, $$\arctan(x)=x-\frac{x^3}{3}+\frac{x^5}{5}-\frac{x^7}{7}+\frac{x^9}{9}-\cdots.$$ This is valid for $|x|\leq 1$, but the series converges more rapidly the closer $x$ is to $0$, so with the relatively small $\frac{1}{7}$ and $\frac{3}{79}$ as inputs it gives a good approximation without having to take too many terms. This can be contrasted with the formula $$\pi=4\arctan(1)=4\left(1-\frac{1}{3}+\frac{1}{5}-\frac{1}{7}+\frac{1}{9}-\cdots\right),$$ where the series converges much more slowly. In 1950 H.C. Schepler published a 3 part "Chronology of pi" in Mathematics Magazine, available to those with access to JSTOR here, here, and here. In the second part there is the following excerpt indicating that Hutton may have suggested using the formula before Euler: Schepler's chronology can also be found in L. Berggren, J. Borwein, and P. Borwein's Pi: A Source Book.
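The arctangent inputs $\frac17$ and $\frac{3}{79}$ suggest the formula under discussion is Euler's $\frac{\pi}{4} = 5\arctan\frac17 + 2\arctan\frac{3}{79}$ — an assumption on my part, since the question itself is not quoted here. A quick numeric check of both the formula and the truncated-series approach:

```python
import math

# Assuming the formula in question is Euler's
#   pi/4 = 5*arctan(1/7) + 2*arctan(3/79)
exact = 4 * (5 * math.atan(1 / 7) + 2 * math.atan(3 / 79))
assert abs(exact - math.pi) < 1e-12

# Truncated Taylor series for arctan, as discussed above.
def atan_series(x, terms):
    return sum((-1) ** k * x ** (2 * k + 1) / (2 * k + 1) for k in range(terms))

# A handful of terms already gives pi to better than 1e-9,
# thanks to the small inputs 1/7 and 3/79.
approx = 4 * (5 * atan_series(1 / 7, 7) + 2 * atan_series(3 / 79, 7))
assert abs(approx - math.pi) < 1e-9
```

Contrast this with the Leibniz series $4\arctan(1)$, where thousands of terms are needed for even a few digits.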
For a prediction interval in linear regression you still use $\hat{E}[Y|x] = \hat{\beta_0}+\hat{\beta}_{1}x$ to generate the interval. You also use this to generate a confidence interval of $E[Y|x_0]$. What's the difference between the two? Your question isn't quite correct. A confidence interval gives a range for $\text{E}[y \mid x]$, as you say. A prediction interval gives a range for $y$ itself. Naturally, our best guess for $y$ is $\text{E}[y \mid x]$, so the intervals will both be centered around the same value, $x\hat{\beta}$. As @Greg says, the standard errors are going to be different---we guess the expected value of $\text{E}[y \mid x]$ more precisely than we estimate $y$ itself. Estimating $y$ requires including the variance that comes from the true error term. To illustrate the difference, imagine that we could get perfect estimates of our $\beta$ coefficients. Then, our estimate of $\text{E}[y \mid x]$ would be perfect. But we still wouldn't be sure what $y$ itself was because there is a true error term that we need to consider. Our confidence "interval" would just be a point because we estimate $\text{E}[y \mid x]$ exactly right, but our prediction interval would be wider because we take the true error term into account. Hence, a prediction interval will be wider than a confidence interval. The difference between a prediction interval and a confidence interval is the standard error. The standard error for a confidence interval on the mean takes into account the uncertainty due to sampling. The line you computed from your sample will be different from the line that would have been computed if you had the entire population, the standard error takes this uncertainty into account. The standard error for a prediction interval on an individual observation takes into account the uncertainty due to sampling like above, but also takes into account the variability of the individuals around the predicted mean. 
The standard error for the prediction interval will be wider than for the confidence interval and hence the prediction interval will be wider than the confidence interval. I found the following explanation helpful: Confidence intervals tell you about how well you have determined the mean. Assume that the data really are randomly sampled from a Gaussian distribution. If you do this many times, and calculate a confidence interval of the mean from each sample, you'd expect about 95% of those intervals to include the true value of the population mean. The key point is that the confidence interval tells you about the likely location of the true population parameter. Prediction intervals tell you where you can expect to see the next data point sampled. Assume that the data really are randomly sampled from a Gaussian distribution. Collect a sample of data and calculate a prediction interval. Then sample one more value from the population. If you do this many times, you'd expect that next value to lie within that prediction interval in 95% of the samples. The key point is that the prediction interval tells you about the distribution of values, not the uncertainty in determining the population mean. Prediction intervals must account for both the uncertainty in knowing the value of the population mean, plus data scatter. So a prediction interval is always wider than a confidence interval. One is a prediction of a future observation, and the other is a predicted mean response. I will give a more detailed answer to hopefully explain the difference and where it comes from, as well as how this difference manifests itself in wider intervals for prediction than for confidence. This example might illustrate the difference between confidence and prediction intervals: suppose we have a regression model that predicts the price of houses based on number of bedrooms, size, etc.
There are two kinds of predictions we can make for a given $x_0$: We can predict the price for a specific new house that comes on the market with characteristics $x_0$ ("what is the predicted price for this house $x_0$?"). Its true price will be $$y = x_0^T\beta+\epsilon$$ Since $E(\epsilon)=0$, the predicted price will be $$\hat{y} = x_0^T\hat{\beta}$$ In assessing the variance of this prediction, we need to include our uncertainty about $\hat{\beta}$ as well as the variance of $\epsilon$ (the error of our prediction). This is typically called a prediction of a future value. We can also predict the average price of a house with characteristics $x_0$ ("what would be the average price for a house with characteristics $x_0$?"). The point estimate is still $$\hat{y} = x_0^T\hat{\beta}$$ but now only the variance in $\hat{\beta}$ needs to be accounted for. This is typically called prediction of the mean response. Most times, what we really want is the first case. We know that $$var(x_0^T\hat{\beta}) = x_0^T(X^TX)^{-1}x_0\sigma^2$$ This is the variance for our mean response (case 2). But, for a prediction of a future observation (case 1), recall that we need the variance of $x_0^T\hat{\beta} + \epsilon$; $\epsilon$ has variance $\sigma^2$ and is assumed to be independent of $\hat{\beta}$. Using some simple algebra, this results in the following intervals: Prediction interval for a single future response at $x_0$: $$\hat{y}_0\pm t_{n-p}^{(\alpha/2)}\hat{\sigma}\sqrt{x_0^T(X^TX)^{-1}x_0 + 1}$$ Confidence interval for the mean response given $x_0$: $$\hat{y}_0\pm t_{n-p}^{(\alpha/2)}\hat{\sigma}\sqrt{x_0^T(X^TX)^{-1}x_0}$$ where $t_{n-p}^{(\alpha/2)}$ is the critical value of a $t$-distribution with $n-p$ degrees of freedom at the $\alpha/2$ quantile. Hopefully this makes it a bit clearer why the prediction interval is always wider, and what the underlying difference between the two intervals is.
This example was adapted from Faraway, Linear Models with R, Sec. 4.1. Short answer: A prediction interval is an interval associated with a random variable yet to be observed (forecasting). A confidence interval is an interval associated with a parameter and is a frequentist concept. Check the full answer from Rob Hyndman, the creator of the forecast package in R. This answer is for those readers who could not fully understand the previous answers. Let's discuss a specific example. Suppose you try to predict people's weight from their height, sex (male, female) and diet (standard, low carb, vegetarian). Currently, there are more than 8 billion people on Earth. Of course, you can find many thousands of people having the same height and other two parameters but different weight. Their weights differ wildly because some of them have obesity and others may suffer from starvation. Most of those people will be somewhere in the middle. One task is to predict the average weight of all the people having the same values of all three explanatory variables. Here we use the confidence interval. Another problem is to forecast the weight of some specific person. And we don't know the living circumstances of that individual. Here the prediction interval must be used. It is centered around the same point, but it must be much wider than the confidence interval.
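The interval formulas above can be checked numerically. The following is a sketch with simulated data (all numbers hypothetical; it assumes NumPy and SciPy are available), computing both intervals from $x_0^T(X^TX)^{-1}x_0$:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Simulated simple linear regression data (hypothetical example)
n = 50
x = rng.uniform(0, 10, n)
y = 2.0 + 1.5 * x + rng.normal(0, 1.0, n)

X = np.column_stack([np.ones(n), x])           # design matrix with intercept
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta_hat
p = X.shape[1]
sigma_hat = np.sqrt(resid @ resid / (n - p))   # residual standard error

x0 = np.array([1.0, 5.0])                      # predict at x = 5
XtX_inv = np.linalg.inv(X.T @ X)
leverage = x0 @ XtX_inv @ x0                   # x0^T (X^T X)^{-1} x0
t_crit = stats.t.ppf(0.975, df=n - p)

y0 = x0 @ beta_hat
ci_half = t_crit * sigma_hat * np.sqrt(leverage)        # confidence interval (mean response)
pi_half = t_crit * sigma_hat * np.sqrt(leverage + 1.0)  # prediction interval (new observation)

print("CI:", (y0 - ci_half, y0 + ci_half))
print("PI:", (y0 - pi_half, y0 + pi_half))
```

The extra "+ 1" under the square root is the variance of the true error term, which is why the prediction interval always comes out wider.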
Production of $K*(892)^0$ and $\phi$(1020) in pp collisions at $\sqrt{s}$ =7 TeV (Springer, 2012-10) The production of K*(892)$^0$ and $\phi$(1020) in pp collisions at $\sqrt{s}$=7 TeV was measured by the ALICE experiment at the LHC. The yields and the transverse momentum spectra $d^2 N/dydp_T$ at midrapidity |y|<0.5 in ... Transverse sphericity of primary charged particles in minimum bias proton-proton collisions at $\sqrt{s}$=0.9, 2.76 and 7 TeV (Springer, 2012-09) Measurements of the sphericity of primary charged particles in minimum bias proton--proton collisions at $\sqrt{s}$=0.9, 2.76 and 7 TeV with the ALICE detector at the LHC are presented. The observable is linearized to be ... Pion, Kaon, and Proton Production in Central Pb--Pb Collisions at $\sqrt{s_{NN}}$ = 2.76 TeV (American Physical Society, 2012-12) In this Letter we report the first results on $\pi^\pm$, K$^\pm$, p and pbar production at mid-rapidity (|y|<0.5) in central Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV, measured by the ALICE experiment at the LHC. The ... Measurement of prompt J/psi and beauty hadron production cross sections at mid-rapidity in pp collisions at root s=7 TeV (Springer-verlag, 2012-11) The ALICE experiment at the LHC has studied J/ψ production at mid-rapidity in pp collisions at s√=7 TeV through its electron pair decay on a data sample corresponding to an integrated luminosity Lint = 5.6 nb−1. The fraction ... Suppression of high transverse momentum D mesons in central Pb--Pb collisions at $\sqrt{s_{NN}}=2.76$ TeV (Springer, 2012-09) The production of the prompt charm mesons $D^0$, $D^+$, $D^{*+}$, and their antiparticles, was measured with the ALICE detector in Pb-Pb collisions at the LHC, at a centre-of-mass energy $\sqrt{s_{NN}}=2.76$ TeV per ...
J/$\psi$ suppression at forward rapidity in Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV (American Physical Society, 2012) The ALICE experiment has measured the inclusive J/ψ production in Pb-Pb collisions at √sNN = 2.76 TeV down to pt = 0 in the rapidity range 2.5 < y < 4. A suppression of the inclusive J/ψ yield in Pb-Pb is observed with ... Production of muons from heavy flavour decays at forward rapidity in pp and Pb-Pb collisions at $\sqrt {s_{NN}}$ = 2.76 TeV (American Physical Society, 2012) The ALICE Collaboration has measured the inclusive production of muons from heavy flavour decays at forward rapidity, 2.5 < y < 4, in pp and Pb-Pb collisions at $\sqrt {s_{NN}}$ = 2.76 TeV. The pt-differential inclusive ... Particle-yield modification in jet-like azimuthal dihadron correlations in Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV (American Physical Society, 2012-03) The yield of charged particles associated with high-pT trigger particles (8 < pT < 15 GeV/c) is measured with the ALICE detector in Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV relative to proton-proton collisions at the ... Measurement of the Cross Section for Electromagnetic Dissociation with Neutron Emission in Pb-Pb Collisions at √sNN = 2.76 TeV (American Physical Society, 2012-12) The first measurement of neutron emission in electromagnetic dissociation of 208Pb nuclei at the LHC is presented. The measurement is performed using the neutron Zero Degree Calorimeters of the ALICE experiment, which ...
Many interesting infinite graphs are given by taking a ground set $X$ and placing edges between pairs of vertices in $X$ at certain prescribed distances. The Hadwiger-Nelson problem is to determine the chromatic number of the unit-distance graph over the plane. As a modified version of this problem, Eggleton, Erdős, and Skilton defined a distance graph to be a graph $G(S)$ with the integers as a vertex set and edges between pairs of vertices at distance within some set $S$. A significant research effort has gone into determining the chromatic number of these graphs. One tool to study the chromatic number is the fractional chromatic number. Instead of directly studying the fractional chromatic number, one can study the independence ratio of a distance graph. That is, let $A$ be an independent set in a distance graph $G(S)$ generated by a distance set $S$. Then, the density of $A$ is $\delta(A) = \limsup_{N\to\infty} \frac{|A\cap [-N,N]|}{2N+1}$. The independence ratio of $G(S)$, denoted $\overline{\alpha}(S)$, is the supremum of $\delta(A)$ over all independent sets $A$ in $G(S)$. The independence ratio is bounded from below by periodic independent sets and is bounded above by discharging arguments. It is determined that for every finite set $S$ there is a periodic independent set of density $\overline{\alpha}(S)$, and the minimum period of such a set is bounded by a function of the maximum element in $S$. Several exact values of $\overline{\alpha}(S)$ are determined, especially for sets of size three, and several conjectures are stated. distance_cliquer computes the independence ratio of a distance graph, given the set $S$. Simply download, unzip, type make, then run the program with the following command structure: cliquer (by Niskanen and Östergård) computes the clique number of a given graph. The algorithm for distance_cliquer is based on the algorithm in cliquer. We computed the values of $\overline{\alpha}(S)$ for many sets $S$. These values can be found in the Table of Independence Ratios.
For the extremal sets and exact performance data, see the following data files. The values DALPHA_CYC_S_* provide the best-computed lower bound on $\overline{\alpha}(S)$ while the values DALPHA_INT_S_* provide the best-computed upper bound on $\overline{\alpha}(S)$. R.B. Eggleton, P. Erdős, D.K. Skilton, Colouring the real line, J. Combin. Theory B 39 (1985) 86-100. G. Gao and X. Zhu, Star extremal graphs and the lexicographic product, Discrete Math. 165/166 (1996) 147-156. X. Zhu, Circular Chromatic Number of Distance Graphs with Distance Sets of Cardinality 3, Journal of Graph Theory 41 (2002) 195-207. Finding cliques and independent sets in circulant graphs, Computational Combinatorics (Blog).
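As an illustrative sketch (my own, not the distance_cliquer implementation): because the independence ratio is attained by a periodic independent set, one can brute-force small periods by searching for independent sets in the circulant graph on $\mathbb{Z}_p$ with connection set $S \bmod p$:

```python
from fractions import Fraction
from itertools import combinations

def independence_ratio(S, max_period=10):
    """Brute-force lower bound on the independence ratio of the distance graph
    G(S): the best density of a periodic independent set with period <= max_period."""
    best = Fraction(0)
    for p in range(1, max_period + 1):
        forbidden = {s % p for s in S} | {(-s) % p for s in S}
        if 0 in forbidden:   # some s in S is a multiple of p: no such period works
            continue
        # find the largest subset of Z_p with no circular difference in forbidden
        for size in range(p, 0, -1):
            found = False
            for B in combinations(range(p), size):
                if all((b - a) % p not in forbidden for a, b in combinations(B, 2)):
                    best = max(best, Fraction(size, p))
                    found = True
                    break
            if found:
                break
    return best

print(independence_ratio({1, 2}))  # 1/3: take all multiples of 3
print(independence_ratio({1, 4, 5}))
```

This is exponential in the period and only practical for tiny instances; the point is the reduction from periodic independent sets in $G(S)$ to independent sets in a finite circulant graph, which is the same reduction cliquer-style tools exploit efficiently.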
I will implement some calculators to estimate the proper settings for the boundary prism layer meshing. Simple mathematical calculations, such as the four arithmetic operations and power, can be done using HTML and JavaScript. Boundary Layer Mesh \begin{align} l_{n} = l_{1} r^{n-1} \end{align} \begin{align} l_{tot} = \sum_{k=1}^{n} l_{1} r^{k-1} = \frac{l_{1} \left( r^n - 1 \right)}{r - 1} \quad \left( r \neq 1 \right) \end{align}
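The post plans an HTML/JavaScript calculator; as a sketch of the same arithmetic (written in Python here), the two formulas above can be implemented and cross-checked directly:

```python
def layer_thicknesses(l1, r, n):
    """Thicknesses of n prism layers growing geometrically: l_k = l1 * r**(k-1)."""
    return [l1 * r ** (k - 1) for k in range(1, n + 1)]

def total_thickness(l1, r, n):
    """Closed form l_tot = l1 * (r**n - 1) / (r - 1), with the r == 1 limit handled."""
    if abs(r - 1.0) < 1e-12:
        return l1 * n
    return l1 * (r ** n - 1.0) / (r - 1.0)

# Example (hypothetical values): first layer 0.01, growth ratio 1.2, 10 layers
layers = layer_thicknesses(0.01, 1.2, 10)
print(sum(layers), total_thickness(0.01, 1.2, 10))  # the two agree
```

The explicit sum and the closed-form geometric-series expression match to floating-point precision, and the $r = 1$ special case (uniform layers) reduces to $n\,l_1$.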
I have encountered a problem with Mathematica that I really don't know how to solve. I'm sorry if I am going to be very verbose but I need it to explain the problem properly (and also make the code I'm going to show more comprehensible). I'm using Mathematica to work for my master's thesis and in these days I'm dealing with the ecosystem model described in this article. Amongst the things I have to do there's the study of the eigenvalue distribution of the community matrix of this ecosystem, i.e. the Jacobian matrix of the system of coupled ODEs that describe the population abundances of the various species, computed at the steady state of the system. In a single sentence my problem is the following: as the size of this matrix increases, Mathematica finds that it also has very small positive eigenvalues, while this matrix is always negative definite. More in detail (now I have to be very verbose): the model describes a system of $m$ animal species competing for $p$ resources. These resources are supplied with constant rates $\vec{s}=(s_1,\dots,s_p)$, and every species can be identified with its metabolic strategy $\vec{\alpha}_{\sigma}=(\alpha_{\sigma 1}, \dots, \alpha_{\sigma p})$ (where $\sigma=\{1,\dots,m\}$ labels the species, and $\alpha_{\sigma i}$ can be interpreted as the rate at which species $\sigma$ eats resource $i$). Now, assuming $m \geq p$ and that the $\vec{\alpha}_\sigma$-s (properly renormalized) satisfy $$\sum_{i=1}^p \alpha_{\sigma i} = 1 \quad \forall \sigma$$ (called the "metabolic trade-off condition") it can be shown that the system is stable (i.e. all species survive) if $\vec{s}$ belongs to the convex hull of metabolic strategies, i.e.
if: $$\exists \{n_1^*>0, \dots, n_m^*>0\} : \sum_{\sigma=1}^m n_\sigma^* = 1 \quad \text{and} \quad \vec{s} = \sum_{\sigma=1}^m n_\sigma^* \vec{\alpha}_\sigma$$ Now, studying the model analytically it turns out that the Jacobian matrix of the system computed at this stable steady state can be written as: $$\mathcal{M}=-DASA^T$$ where: $$D=\operatorname{diag}(n_1^*, \dots, n_m^*) \qquad S=\operatorname{diag}(1/s_1,\dots,1/s_p) \qquad A_{\sigma i} = \alpha_{\sigma i}$$ and it can be shown that for $m \geq p$ it is always negative definite. Now, with Mathematica I'm trying to study numerically the eigenvalue distribution of $\mathcal{M}$. The code I am using in order to do so is the following:

m = 1000;
p = m;
A = #/Total[#, {2}] &@RandomReal[{0, 1}, {m, p}];
\[ScriptCapitalN] = (#/Total[#, {2}] &@RandomReal[{0, 1}, {1, m}])[[1]];
S = Sum[\[ScriptCapitalN][[i]]*A[[i]], {i, 1, m}];
\[ScriptCapitalD] = DiagonalMatrix[\[ScriptCapitalN]];
\[ScriptCapitalS] = DiagonalMatrix[1/S];
\[ScriptCapitalM] = -\[ScriptCapitalD].A.\[ScriptCapitalS].Transpose[A];
eigenvalues = Eigenvalues[\[ScriptCapitalM]];
ListPlot[{Re[#], Im[#]} & /@ eigenvalues, PlotRange -> Automatic]
Length[Select[eigenvalues, # > 0 &]]
Max[eigenvalues]

(I added the last two lines to keep track of how many eigenvalues are positive and what the largest eigenvalue is). Now, if I set $m=p$ no problems arise and all eigenvalues are indeed negative. For example, for $m=p=1000$ I get the plot in figure $a)$ (below). However, problems start popping up if I set $m > p$ (which is the most interesting case for the model). In fact, the results of the analytical study of $\mathcal{M}$ (i.e. the fact that it's negative definite) are valid for $m \geq p$, but using the code I have written it happens that some eigenvalues start "collapsing" around 0, and they turn out to be both positive and negative (even if very very small in magnitude).
The number of "collapsed" eigenvalues increases as the ratio $m/p$ increases, until they become the majority of the eigenvalues. For example, if I use the same code with $m=1000$ and $p=100$ I get figure $b)$ below. My question is therefore the following: is there something I am doing wrong in the code? Or is this just a problem of numerical precision? I first thought it could be the latter, so I have tried to increase the numerical precision a bit by changing the third and fourth line of the previous code to:

A = #/Total[#, {2}] &@ RandomReal[{0, 1}, {m, p}, WorkingPrecision -> 7];
\[ScriptCapitalN] = (#/Total[#, {2}] &@ RandomReal[{0, 1}, {1, m}, WorkingPrecision -> 7])[[1]];

However, new problems arise: apart from the fact that the computation now requires a lot more time (I had to use much smaller values of $m$ and $p$ in order to get results in minutes), the eigenvalues are now complex, instead of purely real! For example, for $m=500$, $p=80$ I get figure $c)$ below. Therefore, if this is indeed a problem of numerical precision, how can I fix it? Is there a more efficient way to increase it? Sorry again for the verboseness.
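A plausible explanation, sketched here in Python/NumPy rather than Mathematica (this reproduction is mine, not part of the question): since $A$ is $m \times p$, the product $\mathcal{M} = -DASA^T$ has rank at most $p$, so for $m > p$ it has $m - p$ eigenvalues that are exactly zero in exact arithmetic; floating point then scatters them around $0$ as tiny values of either sign, which is what the "collapsed" cluster looks like.

```python
import numpy as np

rng = np.random.default_rng(0)
m, p = 30, 10  # m > p, kept small for speed

A = rng.random((m, p))
A /= A.sum(axis=1, keepdims=True)   # metabolic trade-off: each row sums to 1
n_star = rng.random(m)
n_star /= n_star.sum()              # steady-state abundances summing to 1
s = n_star @ A                      # s lies in the convex hull of the strategies

# Jacobian at the steady state: M = -D A S A^T, rank(M) <= p
M = -np.diag(n_star) @ A @ np.diag(1.0 / s) @ A.T

ev = np.linalg.eigvals(M)
near_zero = np.abs(ev) < 1e-10
print(near_zero.sum())              # m - p eigenvalues collapse to ~0 (rank deficiency)
print(ev[~near_zero].real.max())    # the remaining p eigenvalues are strictly negative
```

In other words, for $m > p$ the matrix can be at most negative *semi*definite numerically; the near-zero eigenvalues are a structural rank deficiency, not a precision problem that higher WorkingPrecision would remove.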
I'm trying to perform a linear regression in a Bayesian way. The response is normal, and the prior I would like to put over $\mathbf{\beta}$ (the vector of regression coefficients) and $\Sigma^2$ (the variance of the error term) is a normal-inverse-gamma one: $$\pi(\mathbf{\beta},\Sigma^2)=P(\mathbf{\beta}\mid\Sigma^2)\,P(\Sigma^2)$$ $\mathbf{\beta}\mid\Sigma^2 \sim N_p(b,B)$ $\Sigma^2 \sim \text{InvGamma}(u,U)$ My problem regards the choice of the parameters of the prior ($b,B,u,U$). Thank you for your help!
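One common concrete setup, sketched below under the standard conjugate convention $\beta \mid \Sigma^2 \sim N_p(b, \Sigma^2 B)$ (the covariance scaled by the error variance, slightly different from the unscaled form written above), gives closed-form posterior updates. All hyperparameter values here are hypothetical, chosen only to illustrate a weakly informative prior:

```python
import numpy as np

def nig_posterior(X, y, b, B, u, U):
    """Conjugate update for the normal-inverse-gamma prior
    beta | sigma^2 ~ N(b, sigma^2 * B),  sigma^2 ~ InvGamma(u, U).
    Returns the posterior hyperparameters (b_n, B_n, u_n, U_n)."""
    n = len(y)
    B_inv = np.linalg.inv(B)
    B_n = np.linalg.inv(B_inv + X.T @ X)
    b_n = B_n @ (B_inv @ b + X.T @ y)
    u_n = u + n / 2.0
    U_n = U + 0.5 * (y @ y + b @ B_inv @ b - b_n @ np.linalg.inv(B_n) @ b_n)
    return b_n, B_n, u_n, U_n

# Weakly informative prior and simulated data (all values hypothetical)
rng = np.random.default_rng(1)
n, p = 1000, 3
X = rng.normal(size=(n, p))
beta_true = np.array([1.0, -2.0, 0.5])
y = X @ beta_true + rng.normal(0, 0.5, n)

b_n, B_n, u_n, U_n = nig_posterior(X, y, np.zeros(p), 100.0 * np.eye(p), 2.0, 1.0)
print(b_n)             # posterior mean of beta, close to beta_true
print(U_n / (u_n - 1)) # posterior mean of sigma^2, close to the true 0.25
```

With a large prior covariance ($B = 100I$) and vague shape/scale ($u=2$, $U=1$), the posterior is dominated by the data, which is one practical answer to "how do I choose $b,B,u,U$" when little prior information is available.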
The maximal tensor product satisfies the following universal property: Let $A, B,$ and $C$ be C$^*$-algebras. If $\phi: A \rightarrow C$ and $\psi: B \rightarrow C$ are $*$-homomorphisms whose images commute, then there's a unique $*$-homomorphism $\phi \otimes \psi : A\otimes_{max} B \rightarrow C$ such that $(\phi\otimes \psi) (a \otimes b) = \phi(a)\psi(b)$. See page 193 of Murphy's book for a proof. If we take $A \otimes_{max} B$ with the inclusions $i_A (a) = a \otimes 1$ and $i_B(b) = 1 \otimes b$, then the universal property above guarantees that we have found our coproduct in C$^*$-alg$_{com}^1$. Note that the "max" in the previous sentence wasn't necessary, since $A$ and $B$ are nuclear anyway. So you could just as easily take the spatial tensor product, if you like that construction better.
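To spell out why the universal property yields the coproduct: composing $\phi \otimes \psi$ with the inclusions recovers $\phi$ and $\psi$ (using that the morphisms are unital, so $\psi(1)=1$), which is exactly the coproduct condition:

```latex
% Coproduct property of (A \otimes_{max} B, i_A, i_B) in commutative unital
% C*-algebras: for any phi: A -> C and psi: B -> C there is a unique
% *-homomorphism phi \otimes psi with
\[
(\phi \otimes \psi) \circ i_A = \phi
\qquad\text{and}\qquad
(\phi \otimes \psi) \circ i_B = \psi ,
\]
% which follows from the universal property since
\[
(\phi\otimes\psi)(i_A(a)) = (\phi\otimes\psi)(a \otimes 1) = \phi(a)\,\psi(1) = \phi(a),
\]
% and symmetrically for i_B.
```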
This will be a short tutorial as the ideas are very simple. I have previously discussed standardized survival functions. In survival analysis we know that there is a simple mathematical transformation from hazard to survival function and vice versa. The idea here is to transform the standardized survival function to a hazard function. Recall that a standardized survival function, $S_s(t|X=x,Z)$, is estimated by $$ S_s(t|X=x,Z) = \frac{1}{N}\sum_{i=1}^{N}S(t|X=x,Z=z_i) $$ If we apply the usual transformation from survival to hazard ($h(t) = \frac{-d}{dt}\log[S(t)]$) we get $$ h_s(t|X=x,Z) = \frac{\sum_{i=1}^{N}S(t|X=x,Z=z_i)h(t|X=x,Z=z_i)}{\sum_{i=1}^{N}S(t|X=x,Z=z_i)} $$ This is a weighted average of the $N$ individual hazard functions with weights equal to $S(t|X=x,Z=z_i)$, i.e. the predicted survival function for individual $i$ when forced to take a specific value of the exposure variable, $X$, but their observed values of confounding variables, $Z$. This is implemented in stpm2_standsurv using the hazard option. Example I will use the Rotterdam Breast cancer data. The code below loads and stset's the data and then fits a model using stpm2.

. use https://www.pclambert.net/data/rott2b, clear
(Rotterdam breast cancer data (augmented with cause of death))

. stset os, f(osi==1) scale(12) exit(time 120)

     failure event:  osi == 1
obs. time interval:  (0, os]
 exit on or before:  time 120
    t for analysis:  time/12

------------------------------------------------------------------------------
      2,982  total observations
          0  exclusions
------------------------------------------------------------------------------
      2,982  observations remaining, representing
      1,171  failures in single-record/single-failure data
 20,002.424  total analysis time at risk and under observation
                                                at risk from t = 0
                                     earliest observed entry t = 0
                                          last observed exit t = 10
. stpm2 hormon age enodes pr_1, scale(hazard) df(4) eform nolog tvc(hormon) dftvc(3)

Log likelihood = -2666.5999                     Number of obs = 2,982
--------------------------------------------------------------------------------
               |     exp(b)   Std. Err.      z    P>|z|    [95% Conf. Interval]
---------------+----------------------------------------------------------------
xb             |
        hormon |   .8019893   .0741703    -2.39   0.017    .6690322     .961369
           age |   1.013249   .0024115     5.53   0.000    1.008534    1.017987
        enodes |   .1132406    .011008   -22.41   0.000    .0935961    .1370082
          pr_1 |   .9061179   .0119267    -7.49   0.000     .883041    .9297979
         _rcs1 |   2.644573   .0814503    31.58   0.000    2.489656    2.809129
         _rcs2 |   1.209479   .0379393     6.06   0.000     1.13736    1.286172
         _rcs3 |      1.014   .0162037     0.87   0.384    .9827339    1.046262
         _rcs4 |   .9961807   .0072731    -0.52   0.600    .9820273    1.010538
  _rcs_hormon1 |   1.003465   .0756175     0.05   0.963    .8656822    1.163176
  _rcs_hormon2 |    .891054    .056664    -1.81   0.070     .786637    1.009331
  _rcs_hormon3 |   1.025052   .0390804     0.65   0.516    .9512477    1.104583
         _cons |   1.103353   .1771893     0.61   0.540    .8054133    1.511508
--------------------------------------------------------------------------------
Note: Estimates are transformed only in the first equation.

I have made the effect of our exposure, hormon, time-dependent using the tvc option. I first calculate the standardized survival curves where everyone is forced to be exposed and then unexposed.

. range timevar 0 10 100
(2,882 missing values generated)

. stpm2_standsurv, at1(hormon 0) at2(hormon 1) timevar(timevar) ci contrast(difference)
. twoway (rarea _at1_lci _at1_uci timevar, color(red%25)) ///
> (rarea _at2_lci _at2_uci timevar, color(blue%25)) ///
> (line _at1 timevar, sort lcolor(red)) ///
> (line _at2 timevar, sort lcolor(blue)) ///
> , legend(order(1 "No hormonal treatment" 2 "Hormonal treatment") ring(0) cols(1) pos(1)) ///
> ylabel(0.5(0.1)1,angle(h) format(%3.1f)) ///
> ytitle("S(t)") ///
> xtitle("Years from surgery")

If I run stpm2_standsurv again with the hazard option I get the corresponding hazard functions of the standardized curves. This is the marginal hazard ratio (as a function of time).

. capture drop _at* _contrast*
. stpm2_standsurv, at1(hormon 0) at2(hormon 1) timevar(timevar) hazard ci contrast(ratio) per(1000)

Plot the standardized hazard functions.

. twoway (rarea _at1_lci _at1_uci timevar, color(red%30)) ///
> (rarea _at2_lci _at2_uci timevar, color(blue%30)) ///
> (line _at1 timevar, color(red)) ///
> (line _at2 timevar, color(blue)) ///
> , legend(off) ///
> ylabel(,angle(h) format(%3.1f)) ///
> xtitle("Years from surgery")

I can’t explain the lower and then higher hazard for those on hormon therapy. Perhaps better adjustment for confounders would change this. I can also plot the ratio of these two hazard functions with a 95% confidence interval.

. twoway (rarea _contrast2_1_lci _contrast2_1_uci timevar, color(red%30)) ///
> (line _contrast2_1 timevar, color(red)) ///
> , yscale(log) ///
> ylabel(0.5 1 2 4 8 20 40, angle(h) format(%3.1f)) ///
> xtitle("Years from surgery") ///
> legend(off) ///
> yscale(log)

If I had used the difference argument of the contrast() option I would have obtained the absolute difference in the standardized hazard functions. I am still thinking about the usefulness of this - in general I prefer the idea of standardized survival functions rather than the corresponding hazard function. However, it is harder to see how the risk of events changes over follow-up time with a cumulative measure (i.e. standardized survival).
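The weighted-average formula can be illustrated with a toy calculation outside Stata (a Python sketch with hypothetical constant individual hazards, not part of the original post). Note how the standardized hazard starts at the plain mean of the individual hazards and then drifts downwards, because high-hazard individuals get progressively smaller survival weights:

```python
import numpy as np

# Hypothetical individual (constant) hazards; S_i(t) = exp(-h_i * t)
h = np.array([0.05, 0.1, 0.2, 0.4, 0.8])

def S_standardized(t):
    """Standardized survival: plain average of the individual survival curves."""
    return np.mean(np.exp(-h * t))

def h_standardized(t):
    """Standardized hazard: weighted average of hazards with weights S_i(t)."""
    w = np.exp(-h * t)
    return np.sum(w * h) / np.sum(w)

# At t = 0 every weight is 1, so the standardized hazard equals the plain mean
print(h_standardized(0.0), h.mean())

# Cross-check against -d/dt log S_s(t) with a central difference at t = 2
eps = 1e-6
t = 2.0
numeric = -(np.log(S_standardized(t + eps)) - np.log(S_standardized(t - eps))) / (2 * eps)
print(numeric, h_standardized(t))  # the two agree
```

This also makes the interpretation concrete: the standardized hazard is not the hazard of any single individual but of the (changing) mixture that is still at risk.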